Unnamed: 0 | text_prompt | code_prompt
---|---|---|
3,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is intended to show how to use pandas and SQLAlchemy to upload data into DB2-switch.
Install pandas, sqlalchemy and pg8000 using pip or any other package manager. The latter is the driver used to connect to the db.
Step1: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function; you just need to fill in the username, password and the dbname.
It allows you to create different engines to connect to several dbs.
Step2: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data is in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
Before doing this I already checked that the data is properly organized.
Step3: Once the data is uploaded, it is possible to run the SQL commands to properly create geom columns in the tables; this can be done as follows. The objective is to run an SQL query like this
Step4: The function created the geom column; the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db; sometimes an index is not really needed and might need to be dropped. | Python Code:
import pandas as pd
from sqlalchemy import create_engine
Explanation: This notebook is intended to show how to use pandas and SQLAlchemy to upload data into DB2-switch.
Install pandas, sqlalchemy and pg8000 using pip or any other package manager. The latter is the driver used to connect to the db.
End of explanation
def connection(user,passwd,dbname, echo_i=False):
    str1 = ('postgresql+pg8000://' + user +':' + passwd + '@switch-db2.erg.berkeley.edu:5432/'
+ dbname + '?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory')
engine = create_engine(str1,echo=echo_i)
return engine
user = 'jdlara'
passw = 'Amadeus-2010'
dbname = 'apl_cec'
engine= connection(user,passw,dbname)
Explanation: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function; you just need to fill in the username, password and the dbname.
It allows you to create different engines to connect to several dbs.
End of explanation
excel_file = 'PGEFeedersFinal.xlsx'
tab_name = ['substation_banks','substations','feeders_limits_data','feeder_minimpacts']
schema_for_upload = 'PGE'
for name in tab_name:
pd_data = pd.read_excel(excel_file, sheetname=name, encoding='UTF-8')
pd_data.to_sql(name, engine, schema=schema_for_upload, if_exists='replace',chunksize=100)
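# Optional sanity check (not in the original notebook): read a row count back for each tab
# with the same engine, to confirm the uploads landed in the target schema.
for name in tab_name:
    counts = pd.read_sql('SELECT count(*) FROM "' + schema_for_upload + '".' + name + ';', engine)
    print(name, int(counts.iloc[0, 0]))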
Explanation: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data is in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
Before doing this I already checked that the data is properly organized.
End of explanation
def create_geom(table,schema,engine):
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
    print(query)
k.execute(query)
query = ('alter table ' + table + ' drop column if exists geom;')
    print(query)
k.execute(query)
query = 'SELECT AddGeometryColumn (\''+ schema + '\',\''+ table + '\',\'geom\''+',4326,\'POINT\',2);'
    print(query)
k.execute(query)
query = ('UPDATE ' + table + ' set geom = ST_SetSRID(st_makepoint(' + table + '.lon, ' +
table + '.lat), 4326)::geometry;')
k.execute(query)
    print(query)
return 'geom column added with SRID 4326'
table = 'feeders'
schema = 'PGE'
create_geom(table,schema,engine)
Explanation: Once the data is uploaded, it is possible to run the SQL commands to properly create geom columns in the tables; this can be done as follows. The objective is to run an SQL query like this:
set search_path = SCHEMA, public;
alter table TABLE drop column if exists geom;
SELECT AddGeometryColumn ('SCHEMA','TABLE','geom',4326,'POINT',2);
UPDATE TABLE set geom = ST_SetSRID(st_makepoint(TABLE.lon, TABLE.lat), 4326)::geometry;
End of explanation
col = 'feeder_no'
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
k.execute(query)
query = ('alter table ' + table + ' ADD CONSTRAINT '+ table +'_pk PRIMARY KEY (' + col + ')')
print(query)
k.execute(query)
# For reference, the generic SQL syntax is:
# ALTER TABLE table_name
# ADD CONSTRAINT [ constraint_name ]
# PRIMARY KEY (index_col1, index_col2, ... index_col_n)
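# A sketch (not in the original notebook) of wrapping the primary-key statements above into a
# reusable function, as the next explanation suggests; the name create_pk is my own choice.
def create_pk(table, schema, col, engine):
    k = engine.connect()
    k.execute('set search_path = "' + schema + '", public;')
    query = ('alter table ' + table + ' ADD CONSTRAINT ' + table + '_pk PRIMARY KEY (' + col + ')')
    print(query)
    k.execute(query)
    return 'primary key ' + table + '_pk added'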
Explanation: The function created the geom column; the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db; sometimes an index is not really needed and might need to be dropped.
End of explanation |
3,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix into a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes R into two unitary matrices and a diagonal matrix
Step1: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R.
Step2: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
Step3: Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
Step4: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
Step5: Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction. | Python Code:
import pandas as pd
import numpy as np
r_cols = ['user_id', 'movie_id', 'rating']
m_cols = ['movie_id', 'title', 'genres']
ratings_df = pd.read_csv('ratings.dat',sep='::', names=r_cols, engine='python', usecols=range(3), dtype = int)
movies_df = pd.read_csv('movies.dat', sep='::', names=m_cols, engine='python')
movies_df['movie_id'] = movies_df['movie_id'].apply(pd.to_numeric)
movies_df.head(3)
ratings_df.head(3)
Explanation: Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix into a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes R into two unitary matrices and a diagonal matrix:
$$\begin{equation}
R = U\Sigma V^{T}
\end{equation}$$
where R is the users' ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors.
End of explanation
R_df = ratings_df.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
R_df.head()
Explanation: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R.
End of explanation
R = R_df.as_matrix()
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
Explanation: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
End of explanation
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
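# Quick shape check (not in the original notebook): with k=50 latent factors, U is
# (n_users, 50), sigma holds the 50 singular values, and Vt is (50, n_movies).
print(U.shape, sigma.shape, Vt.shape)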
Explanation: Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
End of explanation
sigma = np.diag(sigma)
Explanation: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
End of explanation
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations):
# Get and sort the user's predictions
user_row_number = userID - 1 # UserID starts at 1, not 0
    sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False) # UserID starts at 1
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.user_id == (userID)]
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
already_rated, predictions = recommend_movies(preds_df, 837, movies_df, ratings_df, 10)
predictions
already_rated.head(10)
Explanation: Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction.
End of explanation |
3,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook I'll create functions for easing the development of geostatistical models using GPflow (James Hensman et al.), the library for modelling Gaussian processes in TensorFlow (Google) (Great library, btw).
Requirements
Inputs
Design Matrix X composed of covariates and spatio-temporal coordinates.
A desired hyperspace $A \subseteq \mathbb{R}^{n}$ (e.g. Borel, Closed, Discrete, Partition)
An additional set of hyperparameters and initializations.
Processing
A wrapper with GPflow regressor (This will be experimental)
Outputs
The fitted GPR model.
A tensor composed of the coordinates of two dimensions and the predicted field given an initial condition (a tensor of rank two).
Get some sample data
Step1: GPFlow first approximation
Step2: Building a grid for the interpolation (prediction)
The first step is to inspect the range of the geographical space.
Step3: Let's build a mesh grid and then a pcolor using that meshgrid.
Step4: We can get the direct Elevation data with
Step5: Using all* covariates for predicting elevation | Python Code:
run ../../../../traversals/tests.py
Explanation: In this notebook I'll create functions for easing the development of geostatistical models using GPflow (James Hensman et al.), the library for modelling Gaussian processes in TensorFlow (Google) (Great library, btw).
Requirements
Inputs
Design Matrix X composed of covariates and spatio-temporal coordinates.
A desired hyperspace $A \subseteq \mathbb{R}^{n}$ (e.g. Borel, Closed, Discrete, Partition)
An additional set of hyperparameters and initializations.
Processing
A wrapper with GPflow regressor (This will be experimental)
Outputs
The fitted GPR model.
A tensor composed of the coordinates of two dimensions and the predicted field given an initial condition (a tensor of rank two).
Get some sample data
End of explanation
import tensorflow as tf
import GPflow as gf
import pandas as pd
#k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [3,4])
X = pd.concat((rd[['MeanTemperature_mean','Precipitation_mean','WindSpeed_mean']],s[['Longitude','Latitude']]),axis=1)
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])
X = s[['Longitude','Latitude']]
Y = rd['Elevation_mean']
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
mx.shape
meanf = gf.mean_functions.Linear(np.ones((2,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 70
m.optimize()
print(m)
Explanation: GPFlow first approximation
End of explanation
plt.style.use('ggplot')
X.plot.scatter('Longitude','Latitude')
Explanation: Building a grid for the interpolation (prediction)
The first step is to inspect the range of the geographical space.
End of explanation
Nn = 300
predicted_x = np.linspace(min(X.Longitude),max(X.Longitude),Nn)
predicted_y = np.linspace(min(X.Latitude),max(X.Latitude),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
predicted_coordinates = np.vstack([Xx.ravel(), Yy.ravel()]).transpose()
predicted_coordinates.shape
means,variances = m.predict_y(predicted_coordinates)
upperl = (np.sqrt(variances))/2.0
lowerl = -1 * upperl
### Let´s plot
#X.plot.scatter('Longitude','Latitude')
plt.pcolor(Xx,Yy,means.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
##
## Upper limit
plt.pcolor(Xx,Yy,variances.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
## Upper limit
plt.pcolor(Xx,Yy,upperl.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
## Lower limit
plt.pcolor(Xx,Yy,lowerl.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
min(upperl)
Explanation: Let's build a mesh grid and then a pcolor using that meshgrid.
End of explanation
elev = big_t.associatedData.getAssociatedRasterAreaData('Elevation')
elev.display_field()
print(elev.rasterdata.bands[0].data().shape)
## But we can extract directly the info from this raster.
from django.contrib.gis.geos import Point
true_elevs = map(lambda p : elev.getValue(Point(*p)),predicted_coordinates)
# so the errors are:
errors= means - true_elevs
plt.hist(errors,bins=50)
plt.scatter(range(len(errors)),errors)
Explanation: We can get the direct Elevation data with:
End of explanation
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [6,7])
X = pd.concat((rd[['MaxTemperature_mean', u'MeanTemperature_mean',
u'MinTemperature_mean', u'Precipitation_mean', u'SolarRadiation_mean',
u'Vapor_mean']],s[['Longitude','Latitude']]),axis=1)
mx = X.as_matrix()
#Y is still elevation (4,4) matrix
my = Y.as_matrix().reshape(16,1)
meanf = gf.mean_functions.Linear(np.ones((8,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 10
m.optimize()
print(m)
X.columns
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
mx.shape
meanf = gf.mean_functions.Linear(np.ones((8,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 10
m.optimize()
print(m)
# Now Let's do a Logistic Regression
s
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])
X = s[['Longitude','Latitude']]
Y = s[['Falconidae']]
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
meanf = gf.mean_functions.Linear(np.ones((2,1)), np.ones(1))
## I need a likelihood function !
m = gf.gpmc.GPMC(mx,my,k,mean_function=meanf)
#m.likelihood.variance = 10
m.optimize()
#print(m)
Explanation: Using all* covariates for predicting elevation
End of explanation |
3,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation of a force sensor
Andrés Marrugo, PhD
Universidad Tecnológica de Bolívar
A force sensor (FSR) is evaluated experimentally. To do so, the resistance of the sensor is measured for a range of forces as follows
Step1: Sensitivity is the slope of the resistance versus force curve and is clearly a nonlinear quantity. However, we recall that force resistive sensors have a linear relation between force ($F$) and conductance ($1/R$). Therefore it is simpler to first calculate the conductance $C$. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
F = np.array([50,100,150,200,250,300,350,400,450,500,550,600,650])
R = np.array([500,256.4,169.5,144.9,125,100,95.2,78.1,71.4,65.8,59.9,60,55.9])
plt.plot(F,R,'*')
plt.ylabel('R [Omega]')
plt.xlabel('Force [N]')
plt.show()
Explanation: Evaluation of a force sensor
Andrés Marrugo, PhD
Universidad Tecnológica de Bolívar
A force sensor (FSR) is evaluated experimentally. To do so, the resistance of the sensor is measured for a range of forces as follows:
Calculate the sensitivity of the sensor throughout its range.
| F [N] | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 | 550 | 600 | 650 |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| R [$\Omega$] | 500 | 256.4 | 169.5 | 144.9 | 125 | 100 | 95.2 | 78.1 | 71.4 | 65.8 | 59.9 | 60 | 55.9 |
End of explanation
C = 1/R
plt.plot(F,C,'*')
plt.ylabel('C [Siemens]')
plt.xlabel('Force [N]')
plt.show()
# polyfit computes the coefficients a and b of degree=1
a,b = np.polyfit(F,C,1)
print('The coefficients are a =',a,'b =',b)
C1 = a*F+b
plt.plot(F,C1,':b',label='Fitted line')
plt.plot(F,C,'*',label='Measured data')
plt.ylabel('C [Siemens]')
plt.xlabel('Force [N]')
plt.legend()
plt.show()
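# Sensitivity sketch (not in the original notebook): with the linear fit C = a*F + b, the
# resistance model is R = 1/(a*F + b), so the sensitivity is S = dR/dF = -a/(a*F + b)**2,
# which shows how the slope varies throughout the force range.
S = -a/(a*F + b)**2
plt.plot(F, S, '*-')
plt.ylabel(r'Sensitivity $dR/dF$ [$\Omega$/N]')
plt.xlabel('Force [N]')
plt.show()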
Explanation: Sensitivity is the slope of the resistance versus force curve and is clearly a nonlinear quantity. However, we recall that force resistive sensors have a linear relation between force ($F$) and conductance ($1/R$). Therefore it is simpler to first calculate the conductance $C$.
End of explanation |
3,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anyway, under a gigabyte. So, nothing to worry about even if we have 24 cores.
Step1: Interesting... the S&P 500 ETF
Step2: Doing some compute
We'll use a "big" table to get some sense of timings
Step3: Using numexpr?
numexpr is currently not set up to do reductions via HDF5. I've opened an issue here
Step4: h5py
Step5: h5py may be a touch faster than pytables for this kind of usage. But why does pandas use pytables?
Step6: Dask
It seems that there should be no need to, e.g., use h5py - but dask's read_hdf doesn't seem to be working nicely...
Step7: spy_h5py = h5py.File(fname)[max_sym]
Step8: Dask for an actual distributed task (but only on one file for now)
Step9: This ends up being a little faster than just using blaze (see below), but about half the time is spent setting things up in Dask.
Step10: Blaze?
Holy crap!
Step11: Read directly with Blaze
Somehow this is not as impressive
Step12: Do some actual compute with Blaze
Step13: Pandas?
To load with Pandas, you need to close the pytables session | Python Code:
# But what symbol is that?
max_sym = None
max_rows = 0
for sym, rows in rec_counts.items():
if rows > max_rows:
max_rows = rows
max_sym = sym
max_sym, max_rows
Explanation: Anyway, under a gigabyte. So, nothing to worry about even if we have 24 cores.
End of explanation
# Most symbols also have way less rows - note this is log xvals
plt.hist(list(rec_counts.values()), bins=50, log=True)
plt.show()
Explanation: Interesting... the S&P 500 ETF
End of explanation
spy = taq_tb.get_node(max_sym)
# PyTables is record oriented...
%timeit np.mean(list(x['Bid_Price'] for x in spy.iterrows()))
# But this is faster...
%timeit np.mean(spy[:]['Bid_Price'])
np.mean(spy[:]['Bid_Price'])
Explanation: Doing some compute
We'll use a "big" table to get some sense of timings
End of explanation
spy_bp = spy.cols.Bid_Price
# this works...
np.mean(spy_bp)
# But it can't use numexpr
expr = tb.Expr('sum(spy_bp)')
# You can use numexpr to get the values of the column... but that's silly
# (sum doesn't work right, and the axis argument is non-functional)
%timeit result = expr.eval().mean()
tb.Expr('spy_bp').eval().mean()
Explanation: Using numexpr?
numexpr is currently not set up to do reductions via HDF5. I've opened an issue here:
https://github.com/PyTables/PyTables/issues/548
End of explanation
taq_tb.close()
%%time
spy_h5py = h5py.File(fname)[max_sym]
np.mean(spy_h5py['Bid_Price'])
Explanation: h5py
End of explanation
%%timeit
np.mean(spy_h5py['Bid_Price'])
Explanation: h5py may be a touch faster than pytables for this kind of usage. But why does pandas use pytables?
End of explanation
taq_tb.close()
Explanation: Dask
It seems that there should be no need to, e.g., use h5py - but dask's read_hdf doesn't seem to be working nicely...
End of explanation
store = pd.HDFStore(fname)
store = pd.HDFStore('../test-data/')
# this is a fine way to iterate over our datasets (in addition to what's available in PyTables and h5py)
it = store.items()
key, tab = next(it)
tab
# The columns argument doesn't seem to work...
store.select(max_sym, columns=['Bid_Price']).head()
# columns also doesn't work here...
pd.read_hdf(fname, max_sym, columns=['Bid_Price']).head()
# So we use h5py (actually, pytables appears faster...)
spy_dask = dd.from_array(spy_h5py)
mean_job = spy_dask['Bid_Price'].mean()
mean_job.compute()
# This is appreciably slower than directly computing the mean w/ numpy
%timeit mean_job.compute()
Explanation: spy_h5py = h5py.File(fname)[max_sym]
End of explanation
class DDFs:
# A (key, table) list
datasets = []
dbag = None
def __init__(self, h5fname):
h5in = h5py.File(h5fname)
h5in.visititems(self.collect_dataset)
def collect_dataset(self, key, table):
if isinstance(table, h5py.Dataset):
self.datasets.append(dd.from_array(table)['Bid_Price'].mean())
def compute_mean(self):
# This is still very slow!
self.results = {key: result for key, result in dd.compute(*self.datasets)}
%%time
ddfs = DDFs(fname)
ddfs.datasets[:5]
len(ddfs.datasets)
dd.compute?
%%time
results = dd.compute(*ddfs.datasets[:20])
import dask.multiprocessing
%%time
# This crashes out throwing lots of KeyErrors
results = dd.compute(*ddfs.datasets[:20], get=dask.multiprocessing.get)
results[0]
Explanation: Dask for an actual distributed task (but only on one file for now)
End of explanation
from dask import delayed
@delayed
def mean_column(key, data, column='Bid_Price'):
return key, blaze.data(data)[column].mean()
class DDFs:
# A (key, table) list
datasets = []
def __init__(self, h5fname):
h5in = h5py.File(h5fname)
h5in.visititems(self.collect_dataset)
def collect_dataset(self, key, table):
if isinstance(table, h5py.Dataset):
self.datasets.append(mean_column(key, table))
def compute_mean(self, limit=None):
# Note that a limit of None includes all values
self.results = {key: result for key, result in dd.compute(*self.datasets[:limit])}
%%time
ddfs = DDFs(fname)
%%time
ddfs.compute_mean()
next(iter(ddfs.results.items()))
# You can also compute individual results as needed
ddfs.datasets[0].compute()
Explanation: This ends up being a little faster than just using blaze (see below), but about half the time is spent setting things up in Dask.
End of explanation
spy_blaze = blaze.data(spy_h5py)
%time
spy_blaze['Ask_Price'].mean()
taq_tb = tb.open_file(fname)
spy_tb = taq_tb.get_node(max_sym)
spy_blaze = blaze.data(spy_tb)
%time spy_blaze['Bid_Price'].mean()
taq_tb.close()
Explanation: Blaze?
Holy crap!
End of explanation
%%time
blaze_h5_file = blaze.data(fname)
# This is rather nice
blaze_h5_file.SPY.no_suffix.Bid_Price.mean()
blaze_h5_file.ZFKOJB.no_suffix.Bid_Price.mean()
Explanation: Read directly with Blaze
Somehow this is not as impressive
End of explanation
taq_h5py = h5py.File(fname)
class SymStats:
means = {}
def compute_stats(self, key, table):
if isinstance(table, h5py.Dataset):
self.means[key] = blaze.data(table)['Bid_Price'].mean()
ss = SymStats()
%time taq_h5py.visititems(ss.compute_stats)
means = iter(ss.means.items())
next(means)
ss.means['SPY/no_suffix']
Explanation: Do some actual compute with Blaze
End of explanation
taq_tb = tb.open_file(fname)
taq_tb.close()
pd.read_hdf?
pd.read_hdf(fname, max_sym, start=0, stop=1, chunksize=1)
max_sym
fname
%%timeit
node = taq_tb.get_node(max_sym)
pd.DataFrame.from_records(node[0:1])
%%timeit
# I've also tried this with `.get_node()`, same speed
pd.DataFrame.from_records(taq_tb.root.IXQAJE.no_suffix)
%%timeit
pd.read_hdf(fname, max_sym)
# Pandas has optimizations it likes to do with
%timeit spy_df = pd.read_hdf(fname, max_sym)
# Actually do it
spy_df = pd.read_hdf(fname, max_sym)
# This is fast, but loading is slow...
%timeit spy_df.Bid_Price.mean()
Explanation: Pandas?
To load with Pandas, you need to close the pytables session
End of explanation |
3,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbulate Documentation
Markov Processes
<a id='mc'></a>
Random processes are typically collections of dependent random variables, but allowing arbitrary associations between values at different points in time makes analysis intractable. Markov processes are stochastic processes which obey a specific kind of "next step" dependence structure. For a Markov process, roughly speaking, given the present, the future is conditionally independent of the past. The dependence assumption in Markov chains allows for a rich
theory and tractable probabilistic models that have applications in a wide range of situations.
<a id='contents'></a>
Discrete time Markov chains
Continuous time Markov chains
Poisson processes
Arrival and jump times
< Random processes | Contents | Symbulate graphics >
Be sure to import Symbulate using the following commands.
Step1: <a id='dtmc'></a>
Discrete time Markov chains
A discrete time Markov chain is a discrete time, discrete state random process which satisfies for all $n$
Step2: Find the probability that it is rainy on Friday ($n=5$).
Step3: Find the conditional probability that it is rainy on Friday given that it is rainy on Thursday. (The following should return, approximately, the second row of the transition matrix.)
Step4: Find the conditional probability that it is rainy on Friday given that it is rainy on Thursday and cloudy on Wednesday. (This demonstrates the Markov property
Step5: Find the probability that it is rainy on Friday and Saturday.
Step6: State labels
The state space can be any list of values (like ['cloud', 'rain', 'sun']). If state_labels are not specified, the default is to label the states 0, 1, 2, ... When the states are numerical values, plots can be created, and methods like .mean() and .sd() can be applied.
Step7: <a id='ctmc'></a>
Continuous time Markov chains
A continuous time Markov chain is a continuous time, discrete state random process which satisfies for all $t$
Step8: If it is currently sunny, find the probability that it is raining 36 hours from now.
Step9: Given that it is raining 36 hours from now, find the probability that it is sunny 48 hours from now.
Step10: State labels
As for discrete time Markov chains, the state space for a continuous time Markov chain can be any list of values (like ['cloud', 'rain', 'sun']). If state_labels are not specified, the default is to label the states 0, 1, 2, ... When the states are numerical values, plots can be created, and methods like .mean() and .sd() can be applied.
Step11: <a id='poisson'></a>
Poisson processes
Events occur over time according to a Poisson process $N(t), t\ge0$, with rate $\lambda$ if at most one event occurs at a time, the times between events are independent and exponentially distributed with rate $\lambda$, and $N(t)$ counts the number of events that occur in the time interval $[0,t]$ (with $N(0) = 0$). In other words, a Poisson process is a continuous time Markov chain whose rate matrix $Q$ satisfies
$$
q(i, j) =
\begin{cases}
\lambda, & j = i+1,\\
-\lambda, & j = i,\\
0, & \text{otherwise.}
\end{cases}
$$
In Symbulate, PoissonProcess defines a Poisson process; the single parameter is rate.
Example. Customers arrive according to a Poisson process with rate 2 per minute.
Step12: Simulate and plot a single sample path, and find the process value for this sample path at time 4
Step13: Simulate many sample paths and plot the mean function
Step14: Approximate the distribution of $N(3)$, the number of customers who arrive in the first 3 minutes, and its mean and variance. (Should be Poisson(6).)
Step15: <a id='times'></a>
Arrival and jump times
For continuous time Markov chains (including Poisson processes) the times between jumps (or arrivals) and the times of the jumps themselves are random variables. These random times can be accessed with .JumpTimes() or InterjumpTimes(). The jump times and interjump times are sequences; individual components can be accessed with brackets, e.g. .JumpTimes()[1] for the time of the first jump.
The sequences of states the chain visits can be accessed with .States().
Example. Continuing the weather example.
Step16: Simulate the jump times for one sample path.
Step17: Simulate the state sequence for one sample path.
Step18: Let $T$ denote the time at which the weather moves from being currently sunny to a different state (either rainy or cloudy.)
Step19: Note that InterjumpTimes are indexed starting at 0, so .InterjumpTimes()[0] is time between time 0 and the first jump, .InterjumpTimes()[1] is the time between the first and second jump, etc.
Step20: For a continuous time Markov chain, the times between jumps are independent.
Step21: For a PoissonProcess, arrival times and interarrival times can be accessed with .ArrivalTimes() and .InterarrivalTimes().
Example. Let $N$ be a Poisson process with rate 2.
Step22: Simulate the arrival times for one sample path.
Step23: Let $T$ be the time of the first arrival. Approximate the distribution of $T$, and its mean and variance. (Should be Exponential with rate 2.)
Step24: Approximate the conditional distribution of $T$ given that there is exactly 1 arrival in the first 3 units of time. (Should be Uniform on (0,3).)
Step25: The times between the first two arrivals should be independent | Python Code:
from symbulate import *
%matplotlib inline
Explanation: Symbulate Documentation
Markov Processes
<a id='mc'></a>
Random processes are typically collections of dependent random variables, but allowing arbitrary associations between values at different points in time makes analysis intractable. Markov processes are stochastic processes which obey a specific kind of "next step" dependence structure. For a Markov process, roughly speaking, given the present, the future is conditionally independent of the past. The dependence assumption in Markov chains allows for a rich
theory and tractable probabilistic models that have applications in a wide range of situations.
<a id='contents'></a>
Discrete time Markov chains
Continuous time Markov chains
Poisson processes
Arrival and jump times
< Random processes | Contents | Symbulate graphics >
Be sure to import Symbulate using the following commands.
End of explanation
states = ["cloud", "rain", "sun"]
TransitionMatrix = [[0.3, 0.2, 0.5],
[0.5, 0.3, 0.2],
[0.3, 0.0, 0.7]]
InitialDistribution = [0, 0, 1] # sunny on Sunday
X = MarkovChain(TransitionMatrix, InitialDistribution, states)
Explanation: <a id='dtmc'></a>
Discrete time Markov chains
A discrete time Markov chain is a discrete time, discrete state random process which satisfies for all $n$:
Given $X_n$ ("the present"), $(X_{n+1}, X_{n+2}, \ldots)$ ("the future") is conditionally independent of $(X_{n-1}, X_{n-2}, \ldots, X_0)$ ("the past").
In Symbulate a discrete time Markov chain is defined with MarkovChain. The probabilistic behavior of a discrete time Markov chain is fully specified by the following, which are the parameters of MarkovChain.
state_labels: The state space of possible values of the process. (Default is to label the states 0, 1, 2, ...)
initial_dist: The initial distribution, which specifies the probability distribution at time 0
transition_matrix: The (one-step) transition probability matrix, whose $(i, j)$ entry specifies the probability that the chain is in state $j$ at the next time step given that it is currently in state $i$: $P(X_{n+1} = j\, | X_n = i)$. All row sums must be 1.
Example. The weather in a certain city can be classified as either cloudy, rainy, or sunny and follows a discrete time Markov chain.
* Given that it is cloudy today, tomorrow it will be cloudy with probability 0.3, rainy with probability 0.2, or sunny with probability 0.5.
* Given that it is rainy today, tomorrow it will be cloudy with probability 0.5, rainy with probability 0.3, or sunny with probability 0.2.
* Given that it is sunny today, tomorrow it will be cloudy with probability 0.3, rainy with probability 0, or sunny with probability 0.7.
Suppose that it is sunny on Sunday. (So we'll call Sunday $n=0$.)
End of explanation
X[5].sim(10000).tabulate(normalize = True)
Explanation: Find the probability that it is rainy on Friday ($n=5$).
End of explanation
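# Exact cross-check (not in the original notebook): the day-5 distribution starting from "sun"
# is the initial distribution times the 5th power of the transition matrix (order: cloud, rain, sun).
import numpy as np
P5 = np.linalg.matrix_power(np.array(TransitionMatrix), 5)
print(np.array(InitialDistribution) @ P5)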
(X[5] | (X[4] == "rain")).sim(10000).tabulate(normalize = True)
Explanation: Find the conditional probability that it is rainy on Friday given that it is rainy on Thursday. (The following should return, approximately, the second row of the transition matrix.)
End of explanation
(X[5] | ((X[4] == "rain") & (X[3] == "cloud"))).sim(10000).tabulate(normalize = True)
Explanation: Find the conditional probability that it is rainy on Friday given that it is rainy on Thursday and cloudy on Wednesday. (This demonstrates the Markov property: conditioning additionally on the value of $X_3$ does not change the conditional distribution from the previous part.)
End of explanation
(X[5] & X[6]).sim(10000).tabulate(normalize = True)
Explanation: Find the probability that it is rainy on Friday and Saturday.
End of explanation
TransitionMatrix = [[0.3, 0.2, 0.5],
[0.5, 0.3, 0.2],
[0.3, 0.0, 0.7]]
InitialDistribution = [0, 0, 1] # sunny on Sunday
X = MarkovChain(TransitionMatrix, InitialDistribution)
X.sim(1).plot(alpha = 1)
X.sim(10).plot()
X[5].sim(10000).plot()
(X[5] | (X[4] == 1) ).sim(10000).plot()
(X[4] & X[5]).sim(10000).plot(jitter = True)
Explanation: State labels
The state space can be any list of values (like ['cloud', 'rain', 'sun']). If state_labels are not specified, the default is to label the states 0, 1, 2, ... When the states are numerical values, plots can be created, and methods like .mean() and .sd() can be applied.
End of explanation
states = ["cloud", "rain", "sun"]
Q = [[-0.50, 0.15, 0.35],
[ 0.60, -1, 0.40],
[ 1/3, 0.0, -1/3]]
InitialDistribution = [0, 0, 1] # sunny currently
X = ContinuousTimeMarkovChain(Q, InitialDistribution, states)
Explanation: <a id='ctmc'></a>
Continuous time Markov chains
A continuous time Markov chain is a continuous time, discrete state random process which satisfies for all $t$:
Given $X_t$ ("the present"), $(X_{u},u \ge t)$ ("the future") is conditionally independent of $(X_{s}, s \le t )$ ("the past").
In a discrete time Markov chain, state transitions occur at every point in time, $n = 0, 1, 2, \ldots$. A continuous time Markov chain behaves like a discrete time Markov chain in which the times between state transitions are independent and exponentially distributed.
The amount of time a chain stays in a state has an exponential distribution, with a rate parameter that can depend on the current state.
When the chain "jumps" to a new state, the jumps behave like a discrete time Markov chain.
The times between jumps are independent.
In Symbulate a continuous time Markov chain is defined with ContinuousTimeMarkovChain. The probabilistic behavior of a continuous time Markov chain is fully specified by the following, which are the parameters of ContinuousTimeMarkovChain.
state_labels: The state space of possible values of the process. (Default is to label the states 0, 1, 2, ...)
initial_dist: The initial distribution, which specifies the probability distribution at time 0
generator_matrix: The generator matrix or transition rate matrix, $Q$, whose $(i, j)$ entry specifies the rate at which the chain "attempts to transition" to state $j$ given that it is currently in state $i$.
For small $h$, $P(X_{t+h} = j\, | X_t = i) \approx h q(i,j)$
The total departure rate from state $i$ is $\lambda(i) = \sum_{j\neq i} q(i,j)$
The diagonal entries are $-1$ times the total departure rates from each state, $q(i,i) = -\lambda(i)$, so that all row sums are 0.
The probability that when the chain departs state $i$ it jumps to state $j$ is $q(i,j)/\lambda(i)$.
Example. The weather in a certain city can be classified as either cloudy, rainy, or sunny and follows a continuous time Markov chain.
* Given that it is cloudy currently, it will next be rainy with probability 0.3, or sunny with probability 0.7.
* Given that it is rainy currently, it will next be cloudy with probability 0.6 or sunny with probability 0.4.
* Given that it is sunny currently, it will next be cloudy with probability 1.
* On average it stays cloudy for 2 days, rainy for 1 day, and sunny for 3 days.
Suppose that it is currently sunny.
End of explanation
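# Cross-check (not in the original notebook): the generator entries follow from the description
# above. Mean holding times of 2, 1 and 3 days give departure rates 1/2, 1 and 1/3, and
# multiplying each rate by its jump probabilities reproduces the off-diagonal entries of Q.
import numpy as np
rates = np.array([1/2, 1, 1/3])            # cloud, rain, sun
jump_probs = np.array([[0.0, 0.3, 0.7],
                       [0.6, 0.0, 0.4],
                       [1.0, 0.0, 0.0]])
print(np.allclose(rates[:, None]*jump_probs - np.diag(rates), Q))   # True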
X[1.5].sim(10000).tabulate(normalize = True)
Explanation: If it is currently sunny, find the probability that it is raining 36 hours from now.
End of explanation
(X[2] | (X[1.5] == "rain")).sim(10000).tabulate(normalize = True)
Explanation: Given that it is raining 36 hours from now, find the probability that it is sunny 48 hours from now.
End of explanation
Q = [[-0.50, 0.15, 0.35],
[ 0.60, -1, 0.40],
[ 1/3, 0.0, -1/3]]
InitialDistribution = [0, 0, 1] # sunny currently
X = ContinuousTimeMarkovChain(Q, InitialDistribution)
X.sim(1).plot(alpha = 1)
Explanation: State labels
As for discrete time Markov chains, the state space for a continuous time Markov chain can be any list of values (like ['cloud', 'rain', 'sun']). If state_labels are not specified, the default is to label the states 0, 1, 2, ... When the states are numerical values, plots can be created, and methods like .mean() and .sd() can be applied.
End of explanation
N = PoissonProcess(rate = 2)
Explanation: <a id='poisson'></a>
Poisson processes
Events occur over time according to a Poisson process $N(t), t\ge0$, with rate $\lambda$ if at most one event occurs at a time, the times between events are independent and exponentially distributed with rate $\lambda$, and $N(t)$ counts the number of events that occur in the time interval $[0,t]$ (with $N(0) = 0$). In other words, a Poisson process is a continuous time Markov chain whose rate matrix $Q$ satisfies
$$
q(i, j) =
\begin{cases}
\lambda, & j = i+1,\\
-\lambda, & j = i,\\
0, & \text{otherwise.}
\end{cases}
$$
In Symbulate, PoissonProcess defines a Poisson process; the single parameter is rate.
Example. Customers arrive according to a Poisson process with rate 2 per minute.
End of explanation
n = N.sim(1)
n.plot(alpha = 1)
n[4]
Explanation: Simulate and plot a single sample path, and find the process value for this sample path at time 4
End of explanation
sims = N.sim(100)
sims.plot()
sims.mean().plot('r--')
Explanation: Simulate many sample paths and plot the mean function
End of explanation
sims = N[3].sim(10000)
sims.plot()
sims.mean(), sims.var()
Explanation: Approximate the distribution of $N(3)$, the number of customers who arrive in the first 3 minutes, and its mean and variance. (Should be Poisson(6).)
End of explanation
states = ["cloud", "rain", "sun"]
Q = [[-0.50, 0.15, 0.35],
[ 0.60, -1, 0.40],
[ 1/3, 0.0, -1/3]]
InitialDistribution = [0, 0, 1] # sunny currently
X = ContinuousTimeMarkovChain(Q, InitialDistribution, states)
Explanation: <a id='times'></a>
Arrival and jump times
For continuous time Markov chains (including Poisson processes) the times between jumps (or arrivals) and the times of the jumps themselves are random variables. These random times can be accessed with .JumpTimes() or InterjumpTimes(). The jump times and interjump times are sequences; individual components can be accessed with brackets, e.g. .JumpTimes()[1] for the time of the first jump.
The sequences of states the chain visits can be accessed with .States().
Example. Continuing the weather example.
End of explanation
X.JumpTimes().sim(1)
Explanation: Simulate the jump times for one sample path.
End of explanation
X.States().sim(1)
Explanation: Simulate the state sequence for one sample path.
End of explanation
T = X.JumpTimes()[1]
sims = T.sim(10000)
sims.plot()
sims.mean(), sims.var()
Explanation: Let $T$ denote the time at which the weather moves from being currently sunny to a different state (either rainy or cloudy.)
End of explanation
T = X.InterjumpTimes()[0]
sims = T.sim(10000)
sims.plot()
sims.mean(), sims.var()
Explanation: Note that InterjumpTimes are indexed starting at 0, so .InterjumpTimes()[0] is time between time 0 and the first jump, .InterjumpTimes()[1] is the time between the first and second jump, etc.
End of explanation
sims = (X.InterjumpTimes()[0] & X.InterjumpTimes()[1]).sim(10000)
sims.plot(alpha = 0.1)
sims.corr()
Explanation: For a continuous time Markov chain, the times between jumps are independent.
End of explanation
N = PoissonProcess(rate = 2)
Explanation: For a PoissonProcess, arrival times and interarrival times can be accessed with .ArrivalTimes() and .InterarrivalTimes().
Example. Let $N$ be a Poisson process with rate 2.
End of explanation
N.ArrivalTimes().sim(1)
Explanation: Simulate the arrival times for one sample path.
End of explanation
T = N.ArrivalTimes()[1]
t = T.sim(10000)
t.plot()
t.mean(), t.var()
Explanation: Let $T$ be the time of the first arrival. Approximate the distribution of $T$, and its mean and variance. (Should be Exponential with rate 2.)
End of explanation
(T | (N[3] == 1)).sim(10000).plot()
Explanation: Approximate the conditional distribution of $T$ given that there is exactly 1 arrival in the first 3 units of time. (Should be Uniform on (0,3).)
End of explanation
W0 = N.InterarrivalTimes()[0]
W1 = N.InterarrivalTimes()[1]
sims = (W0 & W1).sim(10000)
sims.plot()
sims.corr()
Explanation: The times between the first two arrivals should be independent
End of explanation |
3,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an RNN in PyTorch
In this notebook, I'll construct a character-level RNN with PyTorch. If you are unfamiliar with character-level RNNs, check out this great article by Andrej Karpathy. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina, one of my favorite novels. I call this project Anna KaRNNa.
Step1: Now we have the text, encode it as integers.
Step2: Processing the data
We're one-hot encoding the data, so I'll make a function to do that.
I'll also create mini-batches for training. We'll take the encoded characters and split them into multiple sequences, given by n_seqs (also referred to as "batch size" in other places). Each of those sequences will be n_steps long.
Step3: Defining the network with PyTorch
Here I'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. I'm also going to write a method for predicting characters.
Step4: Time to train
Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch sizes (number of sequences and number of steps), and start the training. With the train function, we can set the number of epochs, the learning rate, and other parameters. Also, we can run the training on a GPU by setting cuda=True.
Step5: Getting the best model
To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network.
After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.
Step6: Sampling
Now that the model is trained, we'll want to sample from it. To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!
Top K sampling
Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable but less variable by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text.
Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
Step7: Loading a checkpoint | Python Code:
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
with open('anna.txt', 'r') as f:
text = f.read()
Explanation: Building an RNN in PyTorch
In this notebook, I'll construct a character-level RNN with PyTorch. If you are unfamiliar with character-level RNNs, check out this great article by Andrej Karpathy. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina, one of my favorite novels. I call this project Anna KaRNNa.
End of explanation
chars = tuple(set(text))
int2char = dict(enumerate(chars))
char2int = {ch: ii for ii, ch in int2char.items()}
encoded = np.array([char2int[ch] for ch in text])
Explanation: Now we have the text, encode it as integers.
End of explanation
def one_hot_encode(arr, n_labels):
    # Initialize the encoded array
one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32)
# Fill the appropriate elements with ones
one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
# Finally reshape it to get back to the original array
one_hot = one_hot.reshape((*arr.shape, n_labels))
return one_hot
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns mini-batches of size
n_seqs x n_steps from arr.
'''
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
try:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+n_steps]
except IndexError:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]
yield x, y
Explanation: Processing the data
We're one-hot encoding the data, so I'll make a function to do that.
I'll also create mini-batches for training. We'll take the encoded characters and split them into multiple sequences, given by n_seqs (also referred to as "batch size" in other places). Each of those sequences will be n_steps long.
End of explanation
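# Quick sanity check (not in the original notebook): grab one mini-batch and confirm that x and
# y both have shape (n_seqs, n_steps) and that y is x shifted one character to the left.
x, y = next(get_batches(encoded, n_seqs=8, n_steps=50))
print(x.shape, y.shape)   # (8, 50) (8, 50)
print(x[0, :10])
print(y[0, :10])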
class CharRNN(nn.Module):
def __init__(self, tokens, n_steps=100, n_hidden=256, n_layers=2,
drop_prob=0.5, lr=0.001):
super().__init__()
self.drop_prob = drop_prob
self.n_layers = n_layers
self.n_hidden = n_hidden
self.lr = lr
self.chars = tokens
self.int2char = dict(enumerate(self.chars))
self.char2int = {ch: ii for ii, ch in self.int2char.items()}
self.dropout = nn.Dropout(drop_prob)
self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers,
dropout=drop_prob, batch_first=True)
self.fc = nn.Linear(n_hidden, len(self.chars))
self.init_weights()
def forward(self, x, hc):
''' Forward pass through the network '''
x, (h, c) = self.lstm(x, hc)
x = self.dropout(x)
# Stack up LSTM outputs
x = x.view(x.size()[0]*x.size()[1], self.n_hidden)
x = self.fc(x)
return x, (h, c)
def predict(self, char, h=None, cuda=False, top_k=None):
''' Given a character, predict the next character.
Returns the predicted character and the hidden state.
'''
if cuda:
self.cuda()
else:
self.cpu()
if h is None:
h = self.init_hidden(1)
x = np.array([[self.char2int[char]]])
x = one_hot_encode(x, len(self.chars))
inputs = Variable(torch.from_numpy(x), volatile=True)
if cuda:
inputs = inputs.cuda()
h = tuple([Variable(each.data, volatile=True) for each in h])
out, h = self.forward(inputs, h)
p = F.softmax(out).data
if cuda:
p = p.cpu()
if top_k is None:
top_ch = np.arange(len(self.chars))
else:
p, top_ch = p.topk(top_k)
top_ch = top_ch.numpy().squeeze()
p = p.numpy().squeeze()
char = np.random.choice(top_ch, p=p/p.sum())
return self.int2char[char], h
def init_weights(self):
''' Initialize weights for fully connected layer '''
initrange = 0.1
# Set bias tensor to all zeros
self.fc.bias.data.fill_(0)
# FC weights as random uniform
self.fc.weight.data.uniform_(-1, 1)
def init_hidden(self, n_seqs):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x n_seqs x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
return (Variable(weight.new(self.n_layers, n_seqs, self.n_hidden).zero_()),
Variable(weight.new(self.n_layers, n_seqs, self.n_hidden).zero_()))
def train(net, data, epochs=10, n_seqs=10, n_steps=50, lr=0.001, clip=5, val_frac=0.1, cuda=False, print_every=10):
    ''' Train a network
Arguments
---------
net: CharRNN network
data: text data to train the network
epochs: Number of epochs to train
n_seqs: Number of mini-sequences per mini-batch, aka batch size
n_steps: Number of character steps per mini-batch
lr: learning rate
clip: gradient clipping
val_frac: Fraction of data to hold out for validation
cuda: Train with CUDA on a GPU
print_every: Number of steps for printing training and validation loss
'''
net.train()
opt = torch.optim.Adam(net.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()
# create training and validation data
val_idx = int(len(data)*(1-val_frac))
data, val_data = data[:val_idx], data[val_idx:]
if cuda:
net.cuda()
counter = 0
n_chars = len(net.chars)
for e in range(epochs):
h = net.init_hidden(n_seqs)
for x, y in get_batches(data, n_seqs, n_steps):
counter += 1
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
x, y = torch.from_numpy(x), torch.from_numpy(y)
inputs, targets = Variable(x), Variable(y)
if cuda:
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([Variable(each.data) for each in h])
net.zero_grad()
output, h = net.forward(inputs, h)
loss = criterion(output, targets.view(n_seqs*n_steps))
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm(net.parameters(), clip)
opt.step()
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(n_seqs)
val_losses = []
for x, y in get_batches(val_data, n_seqs, n_steps):
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
x, y = torch.from_numpy(x), torch.from_numpy(y)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([Variable(each.data, volatile=True) for each in val_h])
inputs, targets = Variable(x, volatile=True), Variable(y, volatile=True)
if cuda:
inputs, targets = inputs.cuda(), targets.cuda()
output, val_h = net.forward(inputs, val_h)
val_loss = criterion(output, targets.view(n_seqs*n_steps))
val_losses.append(val_loss.data[0])
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.4f}...".format(loss.data[0]),
"Val Loss: {:.4f}".format(np.mean(val_losses)))
Explanation: Defining the network with PyTorch
Here I'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. I'm also going to write a method for predicting characters.
End of explanation
if 'net' in locals():
del net
net = CharRNN(chars, n_hidden=512, n_layers=2)
n_seqs, n_steps = 128, 100
train(net, encoded, epochs=25, n_seqs=n_seqs, n_steps=n_steps, lr=0.001, cuda=True, print_every=10)
Explanation: Time to train
Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch sizes (number of sequences and number of steps), and start the training. With the train function, we can set the number of epochs, the learning rate, and other parameters. Also, we can run the training on a GPU by setting cuda=True.
End of explanation
checkpoint = {'n_hidden': net.n_hidden,
'n_layers': net.n_layers,
'state_dict': net.state_dict(),
'tokens': net.chars}
with open('rnn.net', 'wb') as f:
torch.save(checkpoint, f)
Explanation: Getting the best model
To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network.
After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.
End of explanation
def sample(net, size, prime='The', top_k=None, cuda=False):
if cuda:
net.cuda()
else:
net.cpu()
net.eval()
# First off, run through the prime characters
chars = [ch for ch in prime]
h = net.init_hidden(1)
for ch in prime:
char, h = net.predict(ch, h, cuda=cuda, top_k=top_k)
chars.append(char)
# Now pass in the previous character and get a new one
for ii in range(size):
char, h = net.predict(chars[-1], h, cuda=cuda, top_k=top_k)
chars.append(char)
return ''.join(chars)
print(sample(net, 2000, prime='Anna', top_k=5, cuda=False))
Explanation: Sampling
Now that the model is trained, we'll want to sample from it. To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!
Top K sampling
Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable but less variable by only considering the $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text.
Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
End of explanation
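The top-K filtering itself happens inside net.predict, which isn't reproduced in this excerpt. As a rough illustration of that step (an assumption about the implementation, not the notebook's exact predict method), the idea is to keep only the K largest probabilities, renormalize them, and sample:
import numpy as np

def sample_top_k(probs, top_k=5):
    # probs: 1-D array of character probabilities from the softmax output
    top_idx = np.argsort(probs)[-top_k:]       # indices of the K most probable characters
    top_p = probs[top_idx]
    top_p = top_p / top_p.sum()                # renormalize over the kept characters
    return np.random.choice(top_idx, p=top_p)  # sample one character index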
with open('rnn.net', 'rb') as f:
checkpoint = torch.load(f)
loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])
loaded.load_state_dict(checkpoint['state_dict'])
print(sample(loaded, 2000, cuda=True, top_k=5, prime="And Levin said"))
Explanation: Loading a checkpoint
End of explanation |
3,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Save and load a model using a distribution strategy
Step2: Load and prepare the data with TensorFlow Datasets and tf.data, and create the model using tf.distribute.MirroredStrategy
Step3: Train the model with tf.keras.Model.fit
Step4: Save and load the model
Now that you have a simple model to work with, let's explore the saving/loading APIs.
There are two kinds of APIs available
Step5: Restore the model without tf.distribute.Strategy
Step6: After restoring the model, you can continue training on it, even without needing to call Model.compile again, since it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more information, please refer to the guide to SavedModel format.
Now, restore the model and train it using a tf.distribute.Strategy
Step7: As the Model.fit output shows, loading works as expected with tf.distribute.Strategy. The strategy used here does not have to be the same strategy used before saving.
The tf.saved_model API
Saving the model with lower-level API is similar to the Keras API
Step8: Loading can be done with tf.saved_model.load. However, since it is a lower-level API (and hence has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contain functions that can be used to do inference. For example
Step9: The loaded object may contain multiple functions, each associated with a key. The "serving_default" key is the default key for the inference function with a saved Keras model. To do inference with this function
Step10: You can also load and do inference in a distributed manner
Step11: Calling the restored function is just a forward pass on the saved model (tf.keras.Model.predict). What if you want to continue training the loaded function? Or what if you need to embed the loaded function into a bigger model? A common practice is to wrap this loaded object into a Keras layer to achieve this. Luckily, TF Hub has hub.KerasLayer for this purpose, shown here
Step12: In the above example, Tensorflow Hub's hub.KerasLayer wraps the result loaded back from tf.saved_model.load into a Keras layer that is used to build another model. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, use the Keras Model.save API unless you need the additional control allowed by the low-level API. If what you are saving is not a Keras model, then the lower-level API, tf.saved_model.save, is your only choice.
For loading, your API choice depends on what you want to get from the model loading API. If you cannot (or do not want to) get a Keras model, then use tf.saved_model.load. Otherwise, use tf.keras.models.load_model. Note that you can get a Keras model back only if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with Model.save, and load a non-Keras model with the low-level API, tf.saved_model.load.
Step13: Saving/Loading from a local device
When saving and loading from a local I/O device while training on remote devices—for example, when using a Cloud TPU—you must use the option experimental_io_device in tf.saved_model.SaveOptions and tf.saved_model.LoadOptions to set the I/O device to localhost. For example
Step15: Caveats
One special case is when you create Keras models in certain ways, and then save them before training. For example
Step16: A SavedModel saves the tf.types.experimental.ConcreteFunction objects generated when you trace a tf.function (check When is a Function tracing? in the Introduction to graphs and tf.function guide to learn more). If you get a ValueError like this it's because Model.save was not able to find or create a traced ConcreteFunction.
Caution
Step17: Usually the model's forward pass—the call method—will be traced automatically when the model is called for the first time, often via the Keras Model.fit method. A ConcreteFunction can also be generated by the Keras Sequential and Functional APIs, if you set the input shape, for example, by making the first layer either a tf.keras.layers.InputLayer or another layer type, and passing it the input_shape keyword argument.
To verify if your model has any traced ConcreteFunctions, check if Model.save_spec is None
Step18: Let's use tf.keras.Model.fit to train the model, and notice that the save_spec gets defined and model saving will work | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow_datasets as tfds
import tensorflow as tf
Explanation: Save and load a model using a distribution strategy
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial demonstrates how you can save and load models in a SavedModel format with tf.distribute.Strategy during or after training. There are two kinds of APIs for saving and loading a Keras model: high-level (tf.keras.Model.save and tf.keras.models.load_model) and low-level (tf.saved_model.save and tf.saved_model.load).
To learn about SavedModel and serialization in general, please read the saved model guide, and the Keras model serialization guide. Let's start with a simple example.
Caution: TensorFlow models are code and it is important to be careful with untrusted code. Learn more in Using TensorFlow securely.
Import dependencies:
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets = tfds.load(name='mnist', as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
Explanation: Load and prepare the data with TensorFlow Datasets and tf.data, and create the model using tf.distribute.MirroredStrategy:
End of explanation
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
Explanation: Train the model with tf.keras.Model.fit:
End of explanation
keras_model_path = '/tmp/keras_save'
model.save(keras_model_path)
Explanation: Save and load the model
Now that you have a simple model to work with, let's explore the saving/loading APIs.
There are two kinds of APIs available:
High-level (Keras): Model.save and tf.keras.models.load_model
Low-level: tf.saved_model.save and tf.saved_model.load
The Keras API
Here is an example of saving and loading a model with the Keras API:
End of explanation
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
Explanation: Restore the model without tf.distribute.Strategy:
End of explanation
another_strategy = tf.distribute.OneDeviceStrategy('/cpu:0')
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
Explanation: After restoring the model, you can continue training on it, even without needing to call Model.compile again, since it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more information, please refer to the guide to SavedModel format.
Now, restore the model and train it using a tf.distribute.Strategy:
End of explanation
model = get_model() # get a fresh model
saved_model_path = '/tmp/tf_save'
tf.saved_model.save(model, saved_model_path)
Explanation: As the Model.fit output shows, loading works as expected with tf.distribute.Strategy. The strategy used here does not have to be the same strategy used before saving.
The tf.saved_model API
Saving the model with lower-level API is similar to the Keras API:
End of explanation
DEFAULT_FUNCTION_KEY = 'serving_default'
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
Explanation: Loading can be done with tf.saved_model.load. However, since it is a lower-level API (and hence has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used to do inference. For example:
End of explanation
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
Explanation: The loaded object may contain multiple functions, each associated with a key. The "serving_default" key is the default key for the inference function with a saved Keras model. To do inference with this function:
End of explanation
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
result = another_strategy.run(inference_func, args=(batch,))
print(result)
break
Explanation: You can also load and do inference in a distributed manner:
End of explanation
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
model.fit(train_dataset, epochs=2)
Explanation: Calling the restored function is just a forward pass on the saved model (tf.keras.Model.predict). What if you want to continue training the loaded function? Or what if you need to embed the loaded function into a bigger model? A common practice is to wrap this loaded object into a Keras layer to achieve this. Luckily, TF Hub has hub.KerasLayer for this purpose, shown here:
End of explanation
model = get_model()
# Saving the model using Keras `Model.save`
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using the lower-level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
Explanation: In the above example, TensorFlow Hub's hub.KerasLayer wraps the result loaded back from tf.saved_model.load into a Keras layer that is used to build another model. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, use the Keras Model.save API unless you need the additional control allowed by the low-level API. If what you are saving is not a Keras model, then the lower-level API, tf.saved_model.save, is your only choice.
For loading, your API choice depends on what you want to get from the model loading API. If you cannot (or do not want to) get a Keras model, then use tf.saved_model.load. Otherwise, use tf.keras.models.load_model. Note that you can get a Keras model back only if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with Model.save, and load a non-Keras model with the low-level API, tf.saved_model.load.
End of explanation
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = '/tmp/tf_save'
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
Explanation: Saving/Loading from a local device
When saving and loading from a local I/O device while training on remote devices—for example, when using a Cloud TPU—you must use the option experimental_io_device in tf.saved_model.SaveOptions and tf.saved_model.LoadOptions to set the I/O device to localhost. For example:
End of explanation
class SubclassedModel(tf.keras.Model):
"""Example model defined by subclassing `tf.keras.Model`."""
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
try:
my_model.save(keras_model_path)
except ValueError as e:
print(f'{type(e).__name__}: ', *e.args)
Explanation: Caveats
One special case is when you create Keras models in certain ways, and then save them before training. For example:
End of explanation
tf.saved_model.save(my_model, saved_model_path)
x = tf.saved_model.load(saved_model_path)
x.signatures
Explanation: A SavedModel saves the tf.types.experimental.ConcreteFunction objects generated when you trace a tf.function (check When is a Function tracing? in the Introduction to graphs and tf.function guide to learn more). If you get a ValueError like this it's because Model.save was not able to find or create a traced ConcreteFunction.
Caution: You shouldn't save a model without at least one ConcreteFunction, since the low-level API will otherwise generate a SavedModel with no ConcreteFunction signatures (learn more about the SavedModel format). For example:
End of explanation
print(my_model.save_spec() is None)
Explanation: Usually the model's forward pass—the call method—will be traced automatically when the model is called for the first time, often via the Keras Model.fit method. A ConcreteFunction can also be generated by the Keras Sequential and Functional APIs, if you set the input shape, for example, by making the first layer either a tf.keras.layers.InputLayer or another layer type, and passing it the input_shape keyword argument.
To verify if your model has any traced ConcreteFunctions, check if Model.save_spec is None:
End of explanation
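As a small illustration of that alternative (not part of the original tutorial; the layer sizes and the save path below are arbitrary placeholders), a Sequential model built with an explicit input shape can be saved before it has ever been trained or called:
shaped_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4,)),   # input shape declared up front
    tf.keras.layers.Dense(5, name='output_layer'),
])
shaped_model.save('/tmp/shaped_keras_save')  # works: the forward pass can be traced from the known shape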
BATCH_SIZE_PER_REPLICA = 4
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
dataset_size = 100
dataset = tf.data.Dataset.from_tensors(
(tf.range(5, dtype=tf.float32), tf.range(5, dtype=tf.float32))
).repeat(dataset_size).batch(BATCH_SIZE)
my_model.compile(optimizer='adam', loss='mean_squared_error')
my_model.fit(dataset, epochs=2)
print(my_model.save_spec() is None)
my_model.save(keras_model_path)
Explanation: Let's use tf.keras.Model.fit to train the model, and notice that the save_spec gets defined and model saving will work:
End of explanation |
3,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Histogram
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
A histogram displays a frequency distribution using bars. It lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval.
Step1: The graph above shows, for example, that the daily returns on the S&P 500 were between 0.010 and 0.013 on 10 of the days in 2014. Note that we are completely discarding the dates corresponding to these returns.
An alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.
Step2: Scatter plot
A scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.
Step3: Line graph
A line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves "connecting the dots" between the data points. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Get returns data for S&P 500
start = '2014-01-01'
end = '2015-01-01'
spy = get_pricing('SPY', fields='price', start_date=start, end_date=end).pct_change()[1:]
# Plot a histogram using 20 bins
fig = plt.figure(figsize = (16, 7))
_, bins, _ = plt.hist(spy, 20)
labels = ['%.3f' % a for a in bins] # Reduce precision so labels are legible
plt.xticks(bins, labels)
plt.xlabel('Returns')
plt.ylabel('Number of Days')
plt.title('Frequency distribution of S&P 500 returns, 2014');
Explanation: Histogram
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
A histogram displays a frequency distribution using bars. It lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval.
End of explanation
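Note that get_pricing is only available inside the Quantopian research environment. If you are running this elsewhere, a sketch like the following, with synthetic daily returns standing in for the SPY series, produces the same kind of plot:
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
# Placeholder data: roughly one year (252 trading days) of normally distributed daily returns
fake_returns = np.random.normal(loc=0.0005, scale=0.01, size=252)

fig = plt.figure(figsize=(16, 7))
plt.hist(fake_returns, 20)
plt.xlabel('Returns')
plt.ylabel('Number of Days')
plt.title('Frequency distribution of synthetic daily returns');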
# Example of a cumulative histogram
fig = plt.figure(figsize = (16, 7))
_, bins, _ = plt.hist(spy, 20, cumulative='True')
labels = ['%.3f' % a for a in bins]
plt.xticks(bins, labels)
plt.xlabel('Returns')
plt.ylabel('Number of Days')
plt.title('Cumulative distribution of S&P 500 returns, 2014');
Explanation: The graph above shows, for example, that the daily returns on the S&P 500 were between 0.010 and 0.013 on 10 of the days in 2014. Note that we are completely discarding the dates corresponding to these returns.
An alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.
End of explanation
# Get returns data for some security
asset = get_pricing('MSFT', fields='price', start_date=start, end_date=end).pct_change()[1:]
# Plot the asset returns vs S&P 500 returns
plt.scatter(asset, spy)
plt.xlabel('MSFT')
plt.ylabel('SPY')
plt.title('Returns in 2014');
Explanation: Scatter plot
A scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.
End of explanation
spy.plot()
plt.ylabel('Returns');
Explanation: Line graph
A line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves "connecting the dots" between the data points.
End of explanation |
3,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text retrieval
This guide will introduce techniques for organizing text data. It will show how to analyze a large corpus of text, extracting feature vectors for individual documents, in order to be able to retrieve documents with similar content.
scipy and scikit-learn are required to run through this document, as well as a corpus of text documents. This code can be adapted to work with a set of documents you collect. For the purpose of this example, we will use the well-known Reuters-21578 dataset with 90 categories. To download this dataset, download it manually from here, or run download.sh in the data folder (to get all the other data for ml4a-guides as well), or just run
Step1: Once you've downloaded and unzipped the dataset, take a look inside the folder. It is split into two folders, "training" and "test". Each of those contains 91 subfolders, corresponding to pre-labeled categories, which will be useful for us later when we want to try classifying the category of an unknown message. In this notebook, we are not worried about training a classifier, so we'll end up using both sets together.
Let's note the location of the folder into a variable data_dir.
Step2: Let's open up a single message and look at the contents. This is the very first message in the training folder, inside of the "acq" folder, which is a category apparently containing news of corporate acquisitions.
Step3: Our collection contains over 15,000 articles with a lot of information. It would take way too long to get through all the information.
Step4: Let's load all of our documents (news articles) into a single list called docs. We'll iterate through each group, grab all of the posts in each group (from both training and test directories), and add the text of the post into the docs list. We will make sure to exclude duplicate posts by cheking if we've seen the post index before.
Step5: We will now use sklearn's TfidfVectorizer to compute the tf-idf matrix of our collection of documents. The tf-idf matrix is an nxm matrix with the n rows corresponding to our n documents and the m columns corresponding to our terms. The values corresponds to the "importance" of each term to each document, where importance is *. In this case, terms are just all the unique words in the corpus, minus english stopwords, which are the most common words in the english language, e.g. "it", "they", "and", "a", etc. In some cases, terms can be n-grams (n-length sequences of words) or more complex, but usually just words.
To compute our tf-idf matrix, run
Step6: We see that the variable tfidf is a sparse matrix with a row for each document, and a column for each unique term in the corpus.
Thus, we can interpret each row of this matrix as a feature vector which describes a document. Two documents which have identical rows have the same collection of words in them, although not necessarily in the same order; word order is not preserved in the tf-idf matrix. Regardless, it seems reasonable to expect that if two documents have similar or close tf-idf vectors, they probably have similar content.
Step7: In practice however, the term-document matrix alone has several disadvantages. For one, it is very high-dimensional and sparse (mostly zeroes), thus it is computationally costly.
Additionally, it ignores similarity among groups of terms. For example, the words "seat" and "chair" are related, but in a raw term-document matrix they are separate columns. So two sentences with one of each word will not be computed as similarly.
One solution is to use latent semantic analysis (LSA, or sometimes called latent semantic indexing). LSA is a dimensionality reduction technique closely related to principal component analysis, which is commonly used to reduce a high-dimensional set of terms into a lower-dimensional set of "concepts" or components which are linear combinations of the terms.
To do so, we use sklearn's TruncatedSVD function which gives us the LSA by computing a singular value decomposition (SVD) of the tf-idf matrix.
Step8: How to interpret this? lsa holds our latent semantic analysis, expressing our 100 concepts. It has a vector for each concept, which holds the weight of each term to that concept. tfidf_lsa is our transformed document matrix where each document is a weighted sum of the concepts.
In a simpler analysis with, for example, two topics (sports and tacos), one concept might assign high weights for sports-related terms (ball, score, tournament) and the other one might have high weights for taco-related concepts (cheese, tomato, lettuce). In a more complex one like this one, the concepts may not be as interpretable. Nevertheless, we can investigate the weights for each concept, and look at the top-weighted ones. For example, here are the top terms in concept 1.
Step9: The top terms in concept 1 appear related to accounting balance sheets; terms like "net", "loss", "profit".
Now, back to our documents. Recall that tfidf_lsa is a transformation of our original tf-idf matrix from the term-space into a concept-space. The concept space is much more valuable, and we can use it to query most similar documents. We expect that two documents which about similar things should have similar vectors in tfidf_lsa. We can use a simple distance metric to measure the similarity, euclidean distance or cosine similarity being the two most common.
Here, we'll select a single query document (index 300), calculate the distance of every other document to our query document, and take the one with the smallest distance to the query. | Python Code:
import os
Explanation: Text retrieval
This guide will introduce techniques for organizing text data. It will show how to analyze a large corpus of text, extracting feature vectors for individual documents, in order to be able to retrieve documents with similar content.
scipy and scikit-learn are required to run through this document, as well as a corpus of text documents. This code can be adapted to work with a set of documents you collect. For the purpose of this example, we will use the well-known Reuters-21578 dataset with 90 categories. To download this dataset, download it manually from here, or run download.sh in the data folder (to get all the other data for ml4a-guides as well), or just run:
wget http://disi.unitn.it/moschitti/corpora/Reuters21578-Apte-90Cat.tar.gz
tar -xzf Reuters21578-Apte-90Cat.tar.gz
End of explanation
data_dir = '../data/Reuters21578-Apte-90Cat'
Explanation: Once you've downloaded and unzipped the dataset, take a look inside the folder. It is split into two folders, "training" and "test". Each of those contains 91 subfolders, corresponding to pre-labeled categories, which will be useful for us later when we want to try classifying the category of an unknown message. In this notebook, we are not worried about training a classifier, so we'll end up using both sets together.
Let's note the location of the folder into a variable data_dir.
End of explanation
post_path = os.path.join(data_dir, "training", "acq", "0000005")
with open (post_path, "r") as p:
raw_text = p.read()
print(raw_text)
Explanation: Let's open up a single message and look at the contents. This is the very first message in the training folder, inside of the "acq" folder, which is a category apparently containing news of corporate acquisitions.
End of explanation
# this gives us all the groups (from training subfolder, but same for test)
groups = [g for g in os.listdir(os.path.join(data_dir, "training")) if os.path.isdir(os.path.join(data_dir, "training", g))]
print groups
Explanation: Our collection contains over 15,000 articles with a lot of information. It would take way too long to get through all the information.
End of explanation
import re
docs = []
post_idx = []
for g, group in enumerate(groups):
if g%10==0:
print ("reading group %d / %d"%(g+1, len(groups)))
posts_training = [os.path.join(data_dir, "training", group, p) for p in os.listdir(os.path.join(data_dir, "training", group)) if os.path.isfile(os.path.join(data_dir, "training", group, p))]
posts_test = [os.path.join(data_dir, "test", group, p) for p in os.listdir(os.path.join(data_dir, "test", group)) if os.path.isfile(os.path.join(data_dir, "test", group, p))]
posts = posts_training + posts_test
for post in posts:
idx = post.split("/")[-1]
if idx not in post_idx:
post_idx.append(idx)
with open(post, "r") as p:
raw_text = p.read()
raw_text = re.sub(r'[^\x00-\x7f]',r'', raw_text)
docs.append(raw_text)
print("\nwe have %d documents in %d groups"%(len(docs), len(groups)))
print("\nhere is document 100:\n%s"%docs[100])
Explanation: Let's load all of our documents (news articles) into a single list called docs. We'll iterate through each group, grab all of the posts in each group (from both training and test directories), and add the text of the post into the docs list. We will make sure to exclude duplicate posts by checking if we've seen the post index before.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english')
tfidf = vectorizer.fit_transform(docs)
tfidf
Explanation: We will now use sklearn's TfidfVectorizer to compute the tf-idf matrix of our collection of documents. The tf-idf matrix is an nxm matrix with the n rows corresponding to our n documents and the m columns corresponding to our terms. Each value corresponds to the "importance" of a term to a document, where importance is the tf-idf weight: the term's frequency in that document scaled down by how common the term is across the whole corpus (its inverse document frequency). In this case, terms are just all the unique words in the corpus, minus English stopwords, which are the most common words in the English language, e.g. "it", "they", "and", "a", etc. In some cases, terms can be n-grams (n-length sequences of words) or more complex, but usually just words.
To compute our tf-idf matrix, run:
End of explanation
doc_idx = 5
doc_tfidf = tfidf.getrow(doc_idx)
all_terms = vectorizer.get_feature_names()
terms = [all_terms[i] for i in doc_tfidf.indices]
values = doc_tfidf.data
print(docs[doc_idx])
print("document's term-frequency pairs:")
print(", ".join("\"%s\"=%0.2f"%(t,v) for t,v in zip(terms,values)))
Explanation: We see that the variable tfidf is a sparse matrix with a row for each document, and a column for each unique term in the corpus.
Thus, we can interpret each row of this matrix as a feature vector which describes a document. Two documents which have identical rows have the same collection of words in them, although not necessarily in the same order; word order is not preserved in the tf-idf matrix. Regardless, it seems reasonable to expect that if two documents have similar or close tf-idf vectors, they probably have similar content.
End of explanation
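One quick way to sanity-check that intuition is to compare two rows of the tf-idf matrix directly, for example with scikit-learn's cosine similarity (the document indices 5 and 100 below are arbitrary choices for illustration):
from sklearn.metrics.pairwise import cosine_similarity

# Compare two documents by their raw tf-idf rows; values near 1 mean very similar word usage
sim = cosine_similarity(tfidf[5], tfidf[100])
print("cosine similarity between doc 5 and doc 100: %0.3f" % sim[0, 0])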
from sklearn.decomposition import TruncatedSVD
lsa = TruncatedSVD(n_components=100)
tfidf_lsa = lsa.fit_transform(tfidf)
Explanation: In practice however, the term-document matrix alone has several disadvantages. For one, it is very high-dimensional and sparse (mostly zeroes), thus it is computationally costly.
Additionally, it ignores similarity among groups of terms. For example, the words "seat" and "chair" are related, but in a raw term-document matrix they are separate columns. So two sentences with one of each word will not be computed as similarly.
One solution is to use latent semantic analysis (LSA, or sometimes called latent semantic indexing). LSA is a dimensionality reduction technique closely related to principal component analysis, which is commonly used to reduce a high-dimensional set of terms into a lower-dimensional set of "concepts" or components which are linear combinations of the terms.
To do so, we use sklearn's TruncatedSVD function which gives us the LSA by computing a singular value decomposition (SVD) of the tf-idf matrix.
End of explanation
components = lsa.components_[1]
all_terms = vectorizer.get_feature_names()
idx_top_terms = sorted(range(len(components)), key=lambda k: components[k], reverse=True)
print("10 highest-weighted terms in concept 1:")
for t in idx_top_terms[:10]:
    print(" - %s : %0.02f"%(all_terms[t], components[t]))
Explanation: How to interpret this? lsa holds our latent semantic analysis, expressing our 100 concepts. It has a vector for each concept, which holds the weight of each term to that concept. tfidf_lsa is our transformed document matrix where each document is a weighted sum of the concepts.
In a simpler analysis with, for example, two topics (sports and tacos), one concept might assign high weights for sports-related terms (ball, score, tournament) and the other one might have high weights for taco-related concepts (cheese, tomato, lettuce). In a more complex one like this one, the concepts may not be as interpretable. Nevertheless, we can investigate the weights for each concept, and look at the top-weighted ones. For example, here are the top terms in concept 1.
End of explanation
from scipy.spatial import distance
query_idx = 400
# take the concept representation of our query document
query_features = tfidf_lsa[query_idx]
# calculate the distance between query and every other document
distances = [ distance.euclidean(query_features, feat) for feat in tfidf_lsa ]
# sort indices by distances, excluding the first one which is distance from query to itself (0)
idx_closest = sorted(range(len(distances)), key=lambda k: distances[k])[1:]
# print our results
query_doc = docs[query_idx]
return_doc = docs[idx_closest[0]]
print("QUERY DOCUMENT:\n %s \nMOST SIMILAR DOCUMENT TO QUERY:\n %s" %(query_doc, return_doc))
Explanation: The top terms in concept 1 appear related to accounting balance sheets; terms like "net", "loss", "profit".
Now, back to our documents. Recall that tfidf_lsa is a transformation of our original tf-idf matrix from the term-space into a concept-space. The concept space is much more valuable, and we can use it to query most similar documents. We expect that two documents which are about similar things should have similar vectors in tfidf_lsa. We can use a simple distance metric to measure the similarity, Euclidean distance or cosine similarity being the two most common.
Here, we'll select a single query document (index 400), calculate the distance of every other document to our query document, and take the one with the smallest distance to the query.
End of explanation |
3,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Likelihood Analysis with Python
The python likelihood tools are a very powerful set of analysis tools that expand upon the command line tools provided with the Fermi Science Tools package. Not only can you perform all of the same likelihood analysis with the python tools that you can with the standard command line tools but you can directly access all of the model parameters. You can more easily script a standard analysis like light curve generation. There are also a few things built into the python tools that are not available from the command line like the calculation of upper limits.
There are many user contributed packages built upon the python backbone of the Science Tools and we are going to highlight and use a few of them in this tutorial like likeSED, make3FGLxml, and the LATAnalysisScripts.
This sample analysis is based on the PG 1553+113 analysis performed by the LAT team and described in Abdo, A. A. et al. 2010, ApJ, 708, 1310. At certain points we will refer to this article as well as the Cicerone. After you complete this tutorial you should be able to reproduce all of the data analysis performed in this publication including generating a spectrum (individual bins and a butterfly plot) and produce a light curve with the python tools. This tutorial assumes you have the most recent ScienceTools installed. We will also make significant use of python, so you might want to familiarize yourself with python (there's a beginner's guide at http
Step1: Now, you'll first need to make a file list with the names of your input event files
Step2: In the following analysis we've assumed that you've named your list of data files PG1553_events.list and the spacecraft file PG1553_SC.fits.
Step3: Perform Event Selections
You could follow the unbinned likelihood tutorial to perform your event selections using gtlike, gtmktime etc. directly from the command line, and then use pylikelihood later. But we're going to go ahead and use python. The gt_apps module provides methods to call these tools from within python. This'll get us used to using python.
So, let's jump into python
Step4: Now, you can see what objects are part of the gt_apps module by executing
Step5: Which brings up the help documentation for that module (type 'x' to exit). The python object for gtselect is called filter and we first need to set all of it's options. This is very similar to calling gtselect form the command line and inputting all of the variables interactively. It might not seem that convenient to do it this way but it's really nice once you start building up scripts (see Building a Light Curve) and reading back options and such. For example, towards the end of this thread, we'll want to generate a light curve and we'll have to run the likelihood analysis for each datapoint. It'll be much easier to do all of this within python and change the tmin and tmax in an iterative fashion. Note that these python objects are just wrappers for the standalone tools so if you want any information on their options, see the corresponding documentation for the standalone tool.
Step6: Once this is done, run gtselect
Step7: Note that you can see exactly what gtselect will do if you run it by typing
Step8: You have access to any of the inputs by directly accessing the filter['OPTIONS'] options.
Next, you need to run gtmktime. This is accessed within python via the maketime object
Step9: We're using the most conservative and most commonly used cuts described in detail in the Cicerone.
Livetime Cubes and Exposure Maps
At this point, you could make a counts map of the events we just selected using gtbin (it's called evtbin within python) and I won't discourage you but we're going to go ahead and create a livetime cube and exposure map. This might take a few minutes to complete so if you want to create a counts map and have a look at it, get these processes going and open another terminal to work on your counts map (see the likelihood tutorial for an example of running gtbin to produce a counts map).
Livetime Cube
This step will take approximately 15 - 30 minutes to complete so if you want to just download the PG1553_ltCube from us you can skip this step.
Step10: While you're waiting, you might have noticed that not all of the command line science tools have an equivalent object in gt_apps. This is easy to fix. Say you want to use gtltcubesun from within python. Just make it a GtApp
Step11: Exposure Map
Step12: Generate XML Model File
We need to create an XML file with all of the sources of interest within the Region of Interest (ROI) of PG 1553+113 so we can correctly model the background. For more information on the format of the model file and how to create one, see the likelihood analysis tutorial. We'll use the user contributed tool make3FGLxml.py to create a model file based on the LAT 4-year LAT catalog. You'll need to download the FITS version of this file at http
Step13: Now that we have all of the files we need, you can generate your model file
Step14: For more information on the make3FGLxml.py module, see the [usage notes(/ssc/data/analysis/user/readme_make3FGLxml.txt)
You should now have a file called 'PG1553_model.xml'. Open it up with your favorite text editor and take a look at it. It should look like PG1553_model.xml. There should be seven sources within 4 degrees of the ROI center, nine sources between 4 and 8 degrees, and eight sources between 8 and 12 degrees (4 of which are beyond 10 degrees and frozen). In all, there are 38 sources beyond 10 degrees. The script designates these as outside of the ROI (which is 10 degrees) and instructs us to leave all of the variables for these sources fixed. We'll agree with this (these sources are outside of our ROI but could still affect our fit since photons from these sources could fall within our ROI due to the LAT PSF). At the bottom of the file, the two diffuse sources are listed (the galactic diffuse and extragalactic isotropic).
Notice that we've deviated a bit from the published paper here. In the paper, the LAT team only included two sources; one from the 0FGL catalog and another, non-catalog source. This is because the later LAT catalogs had not been released at the time. However, these 3FGL sources are still in the data we've downloaded and should be modeled.
Back to looking at our XML model file, notice that all of the sources have been set up with various spectral models (see the Cicerone for details on the different spectral models) and the module we ran filled in the values for the spectrum from the 3FGL catalog. Also notice that PG 1553+113 is listed in the model file as 3FGL J1555.7+1111 with all of the parameters filled in for us. It's actually offset from the center of our ROI by 0.008 degrees. How nice! The only problem is that the 3FGL model, which was fit to multiple years of data (a Log Parabola), will cause us some issues for such a short time duration as we are analyzing here. Therefore we want to change the model for 3FGL J1555.7+1111 to a simple power-law for the purposes of this analysis thread. You'll have to modify the relevant python scripts on your own to match whatever source model may be relevant for your data.
In the XML file, change the spectrum entry for 3FGL J1555.7+1111 from the catalog LogParabola model to a simple PowerLaw model.
Compute the diffuse source responses.
The diffuse source responses tell the likelihood fitter what the expected contribution would be for each diffuse source, given the livetime associated with each event. The source model XML file must contain all of the diffuse sources to be fit. The gtdiffrsp tool will add one column to the event data file for each diffuse source. The diffuse response depends on the instrument response function (IRF), which must be in agreement with the selection of events, i.e. the event class and event type we are using in our analysis. Since we are using SOURCE class, CALDB should use the P8R2_SOURCE_V6 IRF for this tool.
If the diffuse responses are not precomputed using gtdiffrsp, then the gtlike tool will compute them at runtime (during the next step). However, as this step is very computationally intensive (often taking ~hours to complete), and it is very likely you will need to run gtlike more than once, it is probably wise to precompute these quantities.
Step15: Run the Likelihood Analysis
It's time to actually run the likelihood analysis now. First, you need to import the pyLikelihood module and then the UnbinnedAnalysis functions (there's also a binned analysis module that you can import to do binned likelihood analysis which behaves almost exactly the same as the unbinned analysis module). For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes.
Step16: By now, you'll have two objects, 'obs', an UnbinnedObs object and like, an UnbinnedAnalysis object. You can view these objects attributes and set them from the command line in various ways. For example
Step17: or you can get directly at the objects attributes and methods by
Step18: or get even more details by executing
Step19: There are a lot of attributes and here you start to see the power of using pyLikelihood since you'll be able (once the fit is done) to access any of these attributes directly within python and use them in your own scripts. For example, you can see that the like object has a 'tol' attribute which we can read back to see what it is and then set it to what we want it to be.
Step20: The tolType can be '0' for relative or '1' for absolute.
Step21: Now, we're ready to do the actual fit. This next step will take approximately 10 minutes to complete. We're doing something a bit fancy here. We're getting the minimizating object (and calling it likeobj) from the logLike object so that we can access it later. We pass this object to the fit routine so that it knows which fitting object to use. We're also telling the code to calculate the
covariance matrix so we can get at the errors.
Step22: The number that is printed out here is the -log(Likelihood) of the total fit to the data. You can print the results of the fit by accessing like.model. You can also access the fit for a particular source by doing the following (the source name must match that in the XML model file).
Note there is a bug in the XML file that puts a trailing space in the source name.
Step23: You can plot the results of the fit by executing the plot command. The results are shown below | Python Code:
!mkdir working
import urllib
url_base = "https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/pyLikelihood/"
datafiles = ["L1504241622054B65347F25_PH00.fits",
"L1504241622054B65347F25_PH01.fits",
"L1504241622054B65347F25_SC00.fits",]
for datafile in datafiles:
urllib.urlretrieve(url_base+datafile,"working/"+datafile)
ls working/
Explanation: Likelihood Analysis with Python
The python likelihood tools are a very powerful set of analysis tools that expand upon the command line tools provided with the Fermi Science Tools package. Not only can you perform all of the same likelihood analysis with the python tools that you can with the standard command line tools but you can directly access all of the model parameters. You can more easily script a standard analysis like light curve generation. There are also a few things built into the python tools that are not available from the command line like the calculation of upper limits.
There are many user contributed packages built upon the python backbone of the Science Tools and we are going to highlight and use a few of them in this tutorial like likeSED, make3FGLxml, and the LATAnalysisScripts.
This sample analysis is based on the PG 1553+113 analysis performed by the LAT team and described in Abdo, A. A. et al. 2010, ApJ, 708, 1310. At certain points we will refer to this article as well as the Cicerone. After you complete this tutorial you should be able to reproduce all of the data analysis performed in this publication including generating a spectrum (individual bins and a butterfly plot) and produce a light curve with the python tools. This tutorial assumes you have the most recent ScienceTools installed. We will also make significant use of python, so you might want to familiarize yourself with python (there's a beginner's guide at http://wiki.python.org/moin/BeginnersGuide. This tutorial also assumes that you've gone through the non-python based unbinned likelihood thread. This tutorial should take approximately 8 hours to complete (depending on your computer's speed) if you do everything but there are some steps you can skip along the way which shave off about 4 hours of that.
Note: This tutorial is generated from a jupyter notebook which you can download and run yourself (the preferred method). You can also run individual commands listed on this page. If you do that, be aware that some commands must be executed in an ipython/jupyter environment.
Get the Data
For this thread the original data were extracted from the LAT data server with the following selections (these selections are similar to those in the paper):
Search Center (RA,Dec) = (238.929,11.1901)
Radius = 20 degrees
Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
Stop Time (MET) = 256970880 seconds (2009-02-22T04:48:00)
Minimum Energy = 100 MeV
Maximum Energy = 300000 MeV
We've provided direct links to the event files as well as the spacecraft data file if you don't want to take the time to use the download server. For more information on how to download LAT data please see the Extract LAT Data tutorial.
L1504241622054B65347F25_PH00.fits
L1504241622054B65347F25_PH01.fits
L1504241622054B65347F25_SC00.fits
Make a working directory and then download all of the files into that directory.
End of explanation
ls -1 working/*PH*.fits > working/PG1553_events.list
mv working/L1504241622054B65347F25_SC00.fits working/PG1553_SC.fits
Explanation: Now, you'll first need to make a file list with the names of your input event files:
End of explanation
ls working/
Explanation: In the following analysis we've assumed that you've named your list of data files PG1553_events.list and the spacecraft file PG1553_SC.fits.
End of explanation
import gt_apps as my_apps
Explanation: Perform Event Selections
You could follow the unbinned likelihood tutorial to perform your event selections using gtselect, gtmktime, etc. directly from the command line, and then use pylikelihood later. But we're going to go ahead and use python. The gt_apps module provides methods to call these tools from within python. This'll get us used to using python.
So, let's jump into python:
Ok, we want to run gtselect inside but we first need to import the gt_apps module to gain access to it.
End of explanation
help(my_apps)
Explanation: Now, you can see what objects are part of the gt_apps module by executing:
End of explanation
my_apps.filter['evclass'] = 128
my_apps.filter['evtype'] = 3
my_apps.filter['ra'] = 238.929
my_apps.filter['dec'] = 11.1901
my_apps.filter['rad'] = 10
my_apps.filter['emin'] = 100
my_apps.filter['emax'] = 300000
my_apps.filter['zmax'] = 90
my_apps.filter['tmin'] = 239557417
my_apps.filter['tmax'] = 256970880
my_apps.filter['infile'] = '@working/PG1553_events.list'
my_apps.filter['outfile'] = 'working/PG1553_filtered.fits'
Explanation: Which brings up the help documentation for that module (type 'x' to exit). The python object for gtselect is called filter and we first need to set all of its options. This is very similar to calling gtselect from the command line and inputting all of the variables interactively. It might not seem that convenient to do it this way but it's really nice once you start building up scripts (see Building a Light Curve) and reading back options and such. For example, towards the end of this thread, we'll want to generate a light curve and we'll have to run the likelihood analysis for each datapoint. It'll be much easier to do all of this within python and change the tmin and tmax in an iterative fashion. Note that these python objects are just wrappers for the standalone tools so if you want any information on their options, see the corresponding documentation for the standalone tool.
End of explanation
my_apps.filter.run()
Explanation: Once this is done, run gtselect:
End of explanation
my_apps.filter.command()
Explanation: Note that you can see exactly what gtselect will do if you run it by typing:
End of explanation
my_apps.maketime['scfile'] = 'working/PG1553_SC.fits'
my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
my_apps.maketime['roicut'] = 'no'
my_apps.maketime['evfile'] = 'working/PG1553_filtered.fits'
my_apps.maketime['outfile'] = 'working/PG1553_filtered_gti.fits'
my_apps.maketime.run()
Explanation: You have access to any of the inputs by directly accessing the filter['OPTIONS'] options.
Next, you need to run gtmktime. This is accessed within python via the maketime object:
End of explanation
my_apps.expCube['evfile'] = 'working/PG1553_filtered_gti.fits'
my_apps.expCube['scfile'] = 'working/PG1553_SC.fits'
my_apps.expCube['outfile'] = 'working/PG1553_ltCube.fits'
my_apps.expCube['zmax'] = 90
my_apps.expCube['dcostheta'] = 0.025
my_apps.expCube['binsz'] = 1
my_apps.expCube.run()
Explanation: We're using the most conservative and most commonly used cuts described in detail in the Cicerone.
Livetime Cubes and Exposure Maps
At this point, you could make a counts map of the events we just selected using gtbin (it's called evtbin within python) and I won't discourage you but we're going to go ahead and create a livetime cube and exposure map. This might take a few minutes to complete so if you want to create a counts map and have a look at it, get these processes going and open another terminal to work on your counts map (see the likelihood tutorial for an example of running gtbin to produce a counts map).
Livetime Cube
This step will take approximately 15 - 30 minutes to complete so if you want to just download the PG1553_ltCube from us you can skip this step.
End of explanation
from GtApp import GtApp
expCubeSun = GtApp('gtltcubesun','Likelihood')
expCubeSun.command()
Explanation: While you're waiting, you might have noticed that not all of the command line science tools have an equivalent object in gt_apps. This is easy to fix. Say you want to use gtltcubesun from within python. Just make it a GtApp:
End of explanation
my_apps.expMap['evfile'] = 'working/PG1553_filtered_gti.fits'
my_apps.expMap['scfile'] = 'working/PG1553_SC.fits'
my_apps.expMap['expcube'] = 'working/PG1553_ltCube.fits'
my_apps.expMap['outfile'] = 'working/PG1553_expMap.fits'
my_apps.expMap['irfs'] = 'CALDB'
my_apps.expMap['srcrad'] = 20
my_apps.expMap['nlong'] = 120
my_apps.expMap['nlat'] = 120
my_apps.expMap['nenergies'] = 37
my_apps.expMap.run()
Explanation: Exposure Map
End of explanation
urllib.urlretrieve('http://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make3FGLxml.py','make3FGLxml.py')
urllib.urlretrieve('https://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr_catalog/gll_psc_v16.fit',
'working/gll_psc_v16.fit')
!ln -s $FERMI_DIR/refdata/fermi/galdiffuse/iso_P8R2_SOURCE_V6_v06.txt working/iso_P8R2_SOURCE_V6_v06.txt
!ln -s $FERMI_DIR/refdata/fermi/galdiffuse/gll_iem_v06.fits working/gll_iem_v06.fits
ls working
Explanation: Generate XML Model File
We need to create an XML file with all of the sources of interest within the Region of Interest (ROI) of PG 1553+113 so we can correctly model the background. For more information on the format of the model file and how to create one, see the likelihood analysis tutorial. We'll use the user contributed tool make3FGLxml.py to create a model file based on the LAT 4-year LAT catalog. You'll need to download the FITS version of this file at http://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr_catalog/ and get the make3FGLxml.py tool from the user contributed software page and put them both in your working directory. Also make sure you have the most recent galactic diffuse and isotropic model files which can be found at http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html. In the following we assume that you have the galactic diffuse and isotropic files in your Science Tools install and we just make local symbolic links.
End of explanation
from make3FGLxml import *
mymodel = srcList('working/gll_psc_v16.fit','working/PG1553_filtered_gti.fits','working/PG1553_model.xml')
mymodel.makeModel('working/gll_iem_v06.fits',
'working/gll_iem_v06',
'working/iso_P8R2_SOURCE_V6_v06.txt',
'working/iso_P8R2_SOURCE_V6_v06')
Explanation: Now that we have all of the files we need, you can generate your model file:
End of explanation
my_apps.diffResps['evfile'] = 'working/PG1553_filtered_gti.fits'
my_apps.diffResps['scfile'] = 'working/PG1553_SC.fits'
my_apps.diffResps['srcmdl'] = 'working/PG1553_model.xml'
my_apps.diffResps['irfs'] = 'CALDB'
my_apps.diffResps.run()
Explanation: For more information on the make3FGLxml.py module, see the [usage notes](/ssc/data/analysis/user/readme_make3FGLxml.txt).
You should now have a file called 'PG1553_model.xml'. Open it up with your favorite text editor and take a look at it. It should look like PG1553_model.xml. There should be seven sources within 4 degrees of the ROI center, nine sources between 4 and 8 degrees, and eight sources between 8 and 12 degrees (4 of which are beyond 10 degrees and frozen). In all, there are 38 sources beyond 10 degrees. The script designates these as outside of the ROI (which is 10 degrees) and instructs us to leave all of the variables for these sources fixed. We'll agree with this (these sources are outside of our ROI but could still affect our fit since photons from these sources could fall within our ROI due to the LAT PSF). At the bottom of the file, the two diffuse sources are listed (the galactic diffuse and extragalactic isotropic).
Notice that we've deviated a bit from the published paper here. In the paper, the LAT team only included two sources; one from the 0FGL catalog and another, non-catalog source. This is because the later LAT catalogs had not been released at the time. However, these 3FGL sources are still in the data we've downloaded and should be modeled.
Back to looking at our XML model file, notice that all of the sources have been set up with various spectral models (see the Cicerone for details on the different spectral models) and the module we ran filled in the values for the spectrum from the 3FGL catalog. Also notice that PG 1553+113 is listed in the model file as 3FGL J1555.7+1111 with all of the parameters filled in for us. It's actually offset from the center of our ROI by 0.008 degrees. How nice! The only problem with this is that trying to use the 3FGL model for multiple years of data (Log Parabola) will cause us some issues for such a short time duration as we are analyzing here. Therefore we want to change the model for 3FGL J1555.7+1111 to a simple power-law for the purposes of this analysis thread. You'll have to modify the relevant python scripts on your own to match whatever source model may be relevant for your data.
In the XML file, change the spectrum entry for 3FGL J1555.7+1111 from the LogParabola model filled in from the catalog to a simple PowerLaw model.
Compute the diffuse source responses.
The diffuse source responses tell the likelihood fitter what the expected contribution would be for each diffuse source, given the livetime associated with each event. The source model XML file must contain all of the diffuse sources to be fit. The gtdiffrsp tool will add one column to the event data file for each diffuse source. The diffuse response depends on the instrument response function (IRF), which must be in agreement with the selection of events, i.e. the event class and event type we are using in our analysis. Since we are using SOURCE class, CALDB should use the P8R2_SOURCE_V6 IRF for this tool.
If the diffuse responses are not precomputed using gtdiffrsp, then the gtlike tool will compute them at runtime (during the next step). However, as this step is very computationally intensive (often taking ~hours to complete), and it is very likely you will need to run gtlike more than once, it is probably wise to precompute these quantities.
End of explanation
import pyLikelihood
from UnbinnedAnalysis import *
obs = UnbinnedObs('working/PG1553_filtered_gti.fits',
'working/PG1553_SC.fits',
expMap='working/PG1553_expMap.fits',
expCube='working/PG1553_ltCube.fits',
irfs='CALDB')
like = UnbinnedAnalysis(obs,'working/PG1553_model.xml',optimizer='NewMinuit')
Explanation: Run the Likelihood Analysis
It's time to actually run the likelihood analysis now. First, you need to import the pyLikelihood module and then the UnbinnedAnalysis functions (there's also a binned analysis module that you can import to do binned likelihood analysis which behaves almost exactly the same as the unbinned analysis module). For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes.
End of explanation
print obs
print like
Explanation: By now, you'll have two objects, 'obs', an UnbinnedObs object and like, an UnbinnedAnalysis object. You can view these objects attributes and set them from the command line in various ways. For example:
End of explanation
dir(like)
Explanation: or you can get directly at the objects attributes and methods by:
End of explanation
help(like)
Explanation: or get even more details by executing:
End of explanation
like.tol
like.tolType
Explanation: There are a lot of attributes and here you start to see the power of using pyLikelihood since you'll be able (once the fit is done) to access any of these attributes directly within python and use them in your own scripts. For example, you can see that the like object has a 'tol' attribute which we can read back to see what it is and then set it to what we want it to be.
End of explanation
like.tol = 0.0001
Explanation: The tolType can be '0' for relative or '1' for absolute.
End of explanation
likeobj = pyLike.NewMinuit(like.logLike)
like.fit(verbosity=0,covar=True,optObject=likeobj)
Explanation: Now, we're ready to do the actual fit. This next step will take approximately 10 minutes to complete. We're doing something a bit fancy here. We're getting the minimization object (and calling it likeobj) from the logLike object so that we can access it later. We pass this object to the fit routine so that it knows which fitting object to use. We're also telling the code to calculate the
covariance matrix so we can get at the errors.
End of explanation
like.model['3FGL J1555.7+1111 ']
Explanation: The number that is printed out here is the -log(Likelihood) of the total fit to the data. You can print the results of the fit by accessing like.model. You can also access the fit for a particular source by doing the following (the source name must match that in the XML model file).
Note there is a bug in the XML file that puts a trailing space in the source name.
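Once the fit has finished it is also worth checking that NewMinuit actually converged and pulling out a couple of derived quantities. A hedged sketch (the 100 MeV - 300 GeV bounds are assumed to match the event selection used earlier):
```python
# Hedged sketch: post-fit checks. getRetCode() == 0 means NewMinuit converged.
print(likeobj.getRetCode())
# Test statistic and integrated flux for PG 1553+113 over an assumed energy range.
print(like.Ts('3FGL J1555.7+1111 '))
print(like.flux('3FGL J1555.7+1111 ', emin=100, emax=300000))
print(like.fluxError('3FGL J1555.7+1111 ', emin=100, emax=300000))
```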
End of explanation
like.setPlotter(plotter='python')
%matplotlib inline
Explanation: You can plot the results of the fit by executing the plot command. The results are shown below:
End of explanation |
3,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Sentiment Analysis
1. Objective
The goal of sentiment analysis and text classification is to determine the subjective value of a text document. Here we only deal with polarity classification into two extremes, that is, classifying the content of a text document as positive or negative.
1. Problem
The problem proposed for approaching sentiment-analysis learning is described below
Step1: 1. Tokenization
To build a dictionary, the text first needs to be turned into tokens. A natural language processing library can help with this task; the example below uses nltk, an open-source Python library that performs this function.
Step2: 1. Dictionary
To create a dictionary, we only need the column of the file that contains the sentences, ignoring the polarity column. Again, nltk makes this process easy. Once we have the sentences, we use nltk to break them into tokens.
Step3: 1. Word normalization
A common approach in sentiment analysis is the use of bigrams or trigrams, which helps when classifying sentences. Given a vector of tokens from which repetitions and stop words have already been removed, we can build bigrams as follows
Step4: 1. Vectorization
Vectorization is what makes the classifiers' job possible. When the text is vectorized, each word of the dictionary is treated as a feature | Python Code:
import pandas
imdb = pandas.read_csv('data/imdb_labelled.txt', sep="\t", names=["sentences", "polarity"])
yelp = pandas.read_csv('data/yelp_labelled.txt', sep="\t", names=["sentences", "polarity"])
amazon = pandas.read_csv('data/amazon_cells_labelled.txt', sep="\t", names=["sentences", "polarity"])
big = pandas.DataFrame()
big = big.append([imdb, yelp, amazon])
big.to_csv('big.csv', index=False, encoding='utf-8')
Explanation: 1. Sentiment Analysis
1. Objective
The goal of sentiment analysis and text classification is to determine the subjective value of a text document. Here we only deal with polarity classification into two extremes, that is, classifying the content of a text document as positive or negative.
1. Problem
The problem proposed for approaching sentiment-analysis learning is described below:
Amazon wants an intelligent system to process the comments its customers
leave about its products, classifying each comment into one of two
categories: positive or negative. To that end it provides three databases
of labeled sentences.
1. The Data
The data are organized as sentence and label, with 0 meaning negative and 1 positive
The datasets come from the following sites:
* imdb.com
* amazon.com
* yelp.com
1. Setup
A few modules and libraries must be installed to run this project. To get this setup phase done quickly and without further worries, the whole environment is provided through Docker. The Dockerfile is shown below:
```Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install scikit-learn pandas matplotlib scipy jupyter nltk
RUN chmod +x boot.sh
EXPOSE 8888
CMD ["/bin/sh", "./boot.sh"]
```
Along with the Dockerfile, a boot.sh script is also provided; it installs a few nltk modules and starts the Jupyter notebook.
```
python << END
import sys
import nltk
nltk.download('punkt')
nltk.download('stopwords')
END
jupyter notebook --ip=0.0.0.0 --allow-root
```
To use this, first install Docker and then run the following command in the directory containing your Dockerfile:
docker build -t machine-learn .
To run the container, use the command below, replacing ~ with the path where you cloned the repository:
docker run --name machine-learn-container -p 8888:8888 -v ~/amazon/jeferson:/code machine-learn:latest /bin/sh ./boot.sh
1. Preprocessing
The problem's three databases must be used together. The pandas library makes this easy. The code below imports the datasets and concatenates them into one larger dataset. At the end, a csv file is generated, which will be used for the training and analysis that follow.
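A quick sanity check on the merged dataset is useful here; each source file in this collection typically contains 1000 labeled sentences, so big should hold roughly 3000 balanced rows (a sketch, not part of the original pipeline):
```python
# Sanity check on the merged dataset: row count and class balance.
print(big.shape)
print(big['polarity'].value_counts())
```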
End of explanation
import nltk
sentence = 'My test for nltk library!!'
tokens = nltk.word_tokenize(sentence)
print(tokens)
Explanation: 1. Tokenization
To build a dictionary, the text first needs to be turned into tokens. A natural language processing library can help with this task; the example below uses nltk, an open-source Python library that performs this function.
End of explanation
import nltk
sentences = big['sentences']
all_sentences_strings = sentences.str.lower()
all_sentences_tokenized = []  # each sentence as a list of tokens
for sentence_string in all_sentences_strings:
    sentence_tokenized = nltk.word_tokenize(sentence_string)
    all_sentences_tokenized.append(sentence_tokenized)
all_tokens = []  # all tokens from every sentence, with repetitions
for sentence_tokenized in all_sentences_tokenized:
all_tokens.extend(sentence_tokenized)
dictionary = set()
dictionary.update(all_tokens)
Explanation: 1. Dictionary
To create a dictionary, we only need the column of the file that contains the sentences, ignoring the polarity column. Again, nltk makes this process easy. Once we have the sentences, we use nltk to break them into tokens.
End of explanation
dictionary_of_digrams = set()
tokens_for_test = ['teste1', 'teste2', 'teste3', 'teste4']
for x in range(len(tokens_for_test)):
if x + 1 < len(tokens_for_test):
        digram = [tokens_for_test[x]+' '+tokens_for_test[x+1]]
dictionary_of_digrams.update(digram)
print(dictionary_of_digrams)
Explanation: 1. Word normalization
A common approach in sentiment analysis is the use of bigrams (digrams) or trigrams, which helps when classifying sentences. Given a vector of tokens from which repetitions and stop words have already been removed, we can build bigrams as follows:
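The paragraph above assumes that repetitions and stop words have already been removed from the token vector. A minimal sketch of that filtering step using nltk's English stopword list (downloaded by boot.sh) is shown below; the variable names are illustrative:
```python
from nltk.corpus import stopwords

english_stopwords = set(stopwords.words('english'))
# Keep only alphabetic tokens that are not stop words; the set already removed repetitions.
filtered_tokens = [token for token in dictionary
                   if token.isalpha() and token not in english_stopwords]
```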
End of explanation
import numpy

# Map each dictionary token to a fixed position in the feature vector.
all_tokens_indexed = {token: position for position, token in enumerate(dictionary)}

all_sentences_vetorized = []
for sentence_tokenized in all_sentences_tokenized:
    # Bag-of-words count vector for this sentence, one slot per dictionary token.
    sentence_vector = [0] * len(all_tokens_indexed)
    for token in sentence_tokenized:
        if token in all_tokens_indexed:
            token_position = all_tokens_indexed[token]
            sentence_vector[token_position] += 1
    all_sentences_vetorized.append(sentence_vector)
X = numpy.array(all_sentences_vetorized)
Explanation: 1. Vectorization
Vectorization is what makes the classifiers' job possible. When the text is vectorized, each word of the dictionary is treated as a feature and its count in the sentence is the feature value.
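As an alternative to the manual counting loop above, scikit-learn (already installed by the Dockerfile) can build the same bag-of-words matrix, including bigrams, in one step. A sketch, not part of the original pipeline:
```python
from sklearn.feature_extraction.text import CountVectorizer

# Unigram + bigram bag-of-words features built directly from the sentences.
vectorizer = CountVectorizer(lowercase=True, ngram_range=(1, 2))
X_counts = vectorizer.fit_transform(big['sentences'])
y = big['polarity']
```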
End of explanation |
3,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Data
Step1: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
Step2: Load the class-names.
Step3: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
Step4: Load the test-set.
Step5: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
Step6: The data dimensions are used in several places in the source-code below. They have already been defined in the cifar10 module, so we just need to import them.
Step7: The images are 32 x 32 pixels, but we will crop the images to 24 x 24 pixels.
Step8: Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step9: Plot a few images to see if data is correct
Step10: The pixelated images above are what the neural network will get as input. The images might be a bit easier for the human eye to recognize if we smoothen the pixels.
Step11: Data augmentation for images
Step12: The function above is called for each image in the input batch using the following function.
Step13: In order to plot the distorted images, we create the pre-processing graph for TensorFlow, so we may execute it later.
Step14: Creating Main Processing
https://github.com/google/prettytensor/blob/master/prettytensor/pretty_tensor_image_methods.py
Step15: Creating Neural Network
Note that the neural network is enclosed in the variable-scope named 'network'. This is because we are actually creating two neural networks in the TensorFlow graph. By assigning a variable-scope like this, we can re-use the variables for the two neural networks, so the variables that are optimized for the training-network are re-used for the other network that is used for testing.
Step16: Create Neural Network for Training Phase
Note that trainable=False which means that TensorFlow will not try to optimize this variable.
Step17: Create the neural network to be used for training. The create_network() function returns both y_pred and loss, but we only need the loss-function during training.
Step18: Create an optimizer which will minimize the loss-function. Also pass the global_step variable to the optimizer so it will be increased by one after each iteration.
Step19: Create Neural Network for Test Phase / Inference
Now create the neural network for the test-phase. Once again the create_network() function returns the predicted class-labels y_pred for the input images, as well as the loss-function to be used during optimization. During testing we only need y_pred.
Step20: We then calculate the predicted class number as an integer. The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
Step21: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step22: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step23: Saver
In order to save the variables of the neural network, so they can be reloaded quickly without having to train the network again, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
Step24: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used scope names like conv_1_left and conv_1_right for the convolutional layers. These are also called variable scopes. Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
Step25: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like
Step26: Getting the Layer Outputs
Similarly we also need to retrieve the outputs of the convolutional layers. The function for doing this is slightly different than the function above for getting the weights. Here we instead retrieve the last tensor that is output by the convolutional layer.
Step27: Get the output of the convolutional layers so we can plot them later.
Step28: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step29: Restore or initialize variables
Step30: Create the directory if it does not exist.
Step31: This is the base-filename for the checkpoints, TensorFlow will append the iteration number, etc.
Step32: First try to restore the latest checkpoint. This may fail and raise an exception e.g. if such a checkpoint does not exist, or if you have changed the TensorFlow graph.
Step33: Helper-function to get a random training-batch
There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step34: Function for selecting a random batch of images from the training-set.
Step35: Optimization
The progress is printed every 200 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
Step36: Plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step37: Plot confusion matrix
Step38: Calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
Step39: Calculate the predicted class for the test-set.
Step40: Helper-functions for the classification accuracy
Step41: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Step42: Helper-function for plotting convolutional weights
Step43: Helper-function for plotting the output of convolutional layers
Step44: Examples of distorted input images
In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images.
This is a helper-function for plotting distorted input images.
Step45: Helper-function for getting an image and its class-number from the test-set.
Step46: Get an image and its true class from the test-set.
Step47: Plot 9 random distortions of the image. If you re-run this code you will get slightly different results.
Step48: Perform optimization
Step49: Results
Examples of mis-classifications are plotted below.
Step50: Convolutional Weights
The following shows some of the weights (or filters) for the first convolutional layer. There are 3 input channels so there are 3 of these sets, which you may plot by changing the input_channel.
Note that positive weights are red and negative weights are blue.
Step51: Output of convolutional layers
Helper-function for plotting an image.
Step52: Plot an image from the test-set. The raw pixelated image is used as input to the neural network.
Step53: Use the raw image as input to the neural network and plot the output of the first convolutional layer.
Step54: Using the same image as input to the neural network, now plot the output of the second convolutional layer.
Step55: Predicted class-labels
Get the predicted class-label and class-number for this image.
Step56: Print the predicted class-label.
Step57: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
import cifar10
Explanation: Load Data
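This excerpt does not repeat the notebook's initial import cell; the helper code below assumes roughly the following imports (all of these names are used later on):
```python
# Assumed imports (not shown in this excerpt) used throughout the code below.
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import prettytensor as pt
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
```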
End of explanation
cifar10.maybe_download_and_extract()
Explanation: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
class_names = cifar10.load_class_names()
class_names
Explanation: Load the class-names.
End of explanation
images_train, cls_train, labels_train = cifar10.load_training_data()
Explanation: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
End of explanation
images_test, cls_test, labels_test = cifar10.load_test_data()
Explanation: Load the test-set.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
Explanation: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
End of explanation
from cifar10 import img_size, num_channels, num_classes
Explanation: The data dimensions are used in several places in the source-code below. They have already been defined in the cifar10 module, so we just need to import them.
End of explanation
img_size_cropped = 24
Explanation: The images are 32 x 32 pixels, but we will crop the images to 24 x 24 pixels.
End of explanation
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true) == 9
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
# Plot image.
ax.imshow(images[i, :, :, :],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
ax.set_xlabel(xlabel)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Explanation: Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
images = images_test[0:9]
cls_true = cls_test[0:9]
plot_images(images=images, cls_true=cls_true, smooth=False)
Explanation: Plot a few images to see if data is correct
End of explanation
plot_images(images=images, cls_true=cls_true, smooth=True)
x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, axis=1)
Explanation: The pixelated images above are what the neural network will get as input. The images might be a bit easier for the human eye to recognize if we smoothen the pixels.
End of explanation
def pre_process_image(image, training):
# This function takes a single image as input,
# and a boolean whether to build the training or testing graph.
if training:
# Randomly crop the input image.
image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels])
# Randomly flip the image horizontally.
image = tf.image.random_flip_left_right(image)
# Randomly adjust hue, contrast and saturation.
image = tf.image.random_hue(image, max_delta=0.05)
image = tf.image.random_contrast(image, lower=0.3, upper=1.0)
image = tf.image.random_brightness(image, max_delta=0.2)
image = tf.image.random_saturation(image, lower=0.0, upper=2.0)
# Some of these functions may overflow and result in pixel
# values beyond the [0, 1] range. A simple solution is to limit the range.
image = tf.minimum(image, 1.0)
image = tf.maximum(image, 0.0)
else:
# Crop the input image around the centre so it is the same
# size as images that are randomly cropped during training.
image = tf.image.resize_image_with_crop_or_pad(image,
target_height=img_size_cropped,
target_width=img_size_cropped)
return image
Explanation: Data augmentation for images
End of explanation
def pre_process(images, training):
# Use TensorFlow to loop over all the input images and call
# the function above which takes a single image as input.
images = tf.map_fn(lambda image: pre_process_image(image, training), images)
return images
Explanation: The function above is called for each image in the input batch using the following function.
End of explanation
distorted_images = pre_process(images=x, training=True)
Explanation: In order to plot the distorted images, we create the pre-processing graph for TensorFlow, so we may execute it later.
End of explanation
def main_network(images, training):
images = tf.cast(images, tf.float32)
x_pretty = pt.wrap(images)
if training:
phase = pt.Phase.train
else:
phase = pt.Phase.infer
# Can't wrap it to pretty tensor because
# 'Layer' object has no attribute 'local_response_normalization'
normalize = lambda x: pt.wrap(
tf.nn.local_response_normalization(x, depth_radius=5.0, bias=2.0, alpha=1e-4, beta=0.75))
with pt.defaults_scope(activation_fn=tf.nn.relu, phase=phase):
layers = []
for i in ["left", "right"]:
first_conv = x_pretty.\
conv2d(kernel=5, depth=48, name='conv_1_' + i)
first_conv_norm = normalize(first_conv)
first_conv_norm_pool = first_conv_norm.\
max_pool(kernel=3, stride=2, edges='VALID', name='pool_1_' + i)
second_conv = first_conv_norm_pool.\
conv2d(kernel=3, depth=128, bias=tf.ones_initializer(), name='conv_2_' + i)
second_conv_norm = normalize(second_conv)
second_conv_norm_pooled = pt.wrap(second_conv_norm).\
max_pool(kernel=2, stride=2, edges='VALID', name='pool_2_' + i)
layers.append(second_conv_norm_pooled)
first_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=3))
for i in ["left", "right"]:
cur_layer = first_interlayer.\
conv2d(kernel=3, depth=192, name='conv_3_' + i).\
conv2d(kernel=3, depth=192, name='conv_4_' + i).\
conv2d(kernel=3, depth=128, name='conv_5_' + i).\
max_pool(kernel=3, stride=2, edges='VALID', name='pool_3_' + i)
layers.append(cur_layer)
second_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=3))
print(second_interlayer.shape)
y_pred, loss = second_interlayer.\
flatten().\
fully_connected(1024, name='fully_conn_1').\
dropout(0.2, name='dropout_1').\
fully_connected(512, name='fully_conn_2').\
dropout(0.2, name='dropout_2').\
fully_connected(10, name='fully_conn_3').\
softmax_classifier(num_classes=num_classes, labels=y_true)
return y_pred, loss
Explanation: Creating Main Processing
https://github.com/google/prettytensor/blob/master/prettytensor/pretty_tensor_image_methods.py
End of explanation
def create_network(training):
# Wrap the neural network in the scope named 'network'.
# Create new variables during training, and re-use during testing.
with tf.variable_scope('network', reuse=not training):
images = x
images = pre_process(images=images, training=training)
y_pred, loss = main_network(images=images, training=training)
return y_pred, loss
Explanation: Creating Neural Network
Note that the neural network is enclosed in the variable-scope named 'network'. This is because we are actually creating two neural networks in the TensorFlow graph. By assigning a variable-scope like this, we can re-use the variables for the two neural networks, so the variables that are optimized for the training-network are re-used for the other network that is used for testing.
End of explanation
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
Explanation: Create Neural Network for Training Phase
Note that trainable=False which means that TensorFlow will not try to optimize this variable.
End of explanation
_, loss = create_network(training=True)
Explanation: Create the neural network to be used for training. The create_network() function returns both y_pred and loss, but we only need the loss-function during training.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=global_step)
Explanation: Create an optimizer which will minimize the loss-function. Also pass the global_step variable to the optimizer so it will be increased by one after each iteration.
End of explanation
y_pred, _ = create_network(training=False)
Explanation: Create Neural Network for Test Phase / Inference
Now create the neural network for the test-phase. Once again the create_network() function returns the predicted class-labels y_pred for the input images, as well as the loss-function to be used during optimization. During testing we only need y_pred.
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: We then calculate the predicted class number as an integer. The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
saver = tf.train.Saver()
Explanation: Saver
In order to save the variables of the neural network, so they can be reloaded quickly without having to train the network again, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
End of explanation
def get_weights_variable(layer_name):
with tf.variable_scope("network/" + layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
Explanation: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used scope names like conv_1_left and conv_1_right for the convolutional layers. These are also called variable scopes. Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
End of explanation
weights_conv1 = get_weights_variable(layer_name='conv_1_left')
weights_conv2 = get_weights_variable(layer_name='conv_1_right')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(weights_conv1).shape)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(weights_conv2).shape)
Explanation: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
End of explanation
def get_layer_output(layer_name):
# The name of the last operation of the convolutional layer.
# This assumes you are using Relu as the activation-function.
tensor_name = "network/" + layer_name + "/Relu:0"
# Get the tensor with this name.
tensor = tf.get_default_graph().get_tensor_by_name(tensor_name)
return tensor
Explanation: Getting the Layer Outputs
Similarly we also need to retrieve the outputs of the convolutional layers. The function for doing this is slightly different than the function above for getting the weights. Here we instead retrieve the last tensor that is output by the convolutional layer.
End of explanation
output_conv1 = get_layer_output(layer_name='conv_1_left')
output_conv2 = get_layer_output(layer_name='conv_1_right')
Explanation: Get the output of the convolutional layers so we can plot them later.
End of explanation
# Prevent TensorFlow from allocating all of the GPU memory up front.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
save_dir = 'checkpoints_alex_net/'
Explanation: Restore or initialize variables
End of explanation
if not os.path.exists(save_dir):
os.makedirs(save_dir)
Explanation: Create the directory if it does not exist.
End of explanation
save_path = os.path.join(save_dir, 'cifar10_cnn')
Explanation: This is the base-filename for the checkpoints, TensorFlow will append the iteration number, etc.
End of explanation
try:
print("Trying to restore last checkpoint ...")
last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=save_dir)
saver.restore(session, save_path=last_chk_path)
print("Restored checkpoint from:", last_chk_path)
except:
print("Failed to restore checkpoint. Initializing variables instead.")
session.run(tf.global_variables_initializer())
Explanation: First try to restore the latest checkpoint. This may fail and raise an exception e.g. if such a checkpoint does not exist, or if you have changed the TensorFlow graph.
End of explanation
train_batch_size = 64
Explanation: Helper-function to get a random training-batch
There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
def random_batch():
num_images = len(images_train)
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
x_batch = images_train[idx, :, :, :]
y_batch = labels_train[idx, :]
return x_batch, y_batch
Explanation: Function for selecting a random batch of images from the training-set.
End of explanation
def optimize(num_iterations):
start_time = time.time()
for i in range(num_iterations):
x_batch, y_true_batch = random_batch()
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
if (i_global % 200 == 0) or (i == num_iterations - 1):
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Save a checkpoint to disk every 1000 iterations (and last).
if (i_global % 1000 == 0) or (i == num_iterations - 1):
saver.save(session,
save_path=save_path,
global_step=global_step)
print("Saved checkpoint.")
end_time = time.time()
time_dif = end_time - start_time
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Optimization
The progress is printed every 200 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
incorrect = (correct == False)
images = images_test[incorrect]
cls_pred = cls_pred[incorrect]
cls_true = cls_test[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
Explanation: Plot confusion matrix
End of explanation
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
num_images = len(images)
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
Explanation: Calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
def predict_cls_test():
return predict_cls(images = images_test,
labels = labels_test,
cls_true = cls_test)
Explanation: Calculate the predicted class for the test-set.
End of explanation
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
Explanation: Helper-functions for the classification accuracy
End of explanation
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
End of explanation
def plot_conv_weights(weights, input_channel=0):
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
print("Min: {0:.5f}, Max: {1:.5f}".format(w.min(), w.max()))
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
abs_max = max(abs(w_min), abs(w_max))
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
            # The weights are a 4-dim tensor: [filter_height, filter_width, input_channel, output_channel].
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=-abs_max, vmax=abs_max,
interpolation='nearest', cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Explanation: Helper-function for plotting convolutional weights
End of explanation
def plot_layer_output(layer_output, image):
feed_dict = {x: [image]}
# Retrieve the output of the layer after inputting this image.
values = session.run(layer_output, feed_dict=feed_dict)
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
values_min = np.min(values)
values_max = np.max(values)
# Number of image channels output by the conv. layer.
num_images = values.shape[3]
# Number of grid-cells to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_images))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid image-channels.
if i<num_images:
# Get the images for the i'th output channel.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, vmin=values_min, vmax=values_max,
interpolation='nearest', cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Explanation: Helper-function for plotting the output of convolutional layers
End of explanation
def plot_distorted_image(image, cls_true):
# Repeat the input image 9 times.
image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0)
feed_dict = {x: image_duplicates}
# Calculate only the pre-processing of the TensorFlow graph
# which distorts the images in the feed-dict.
result = session.run(distorted_images, feed_dict=feed_dict)
plot_images(images=result, cls_true=np.repeat(cls_true, 9))
Explanation: Examples of distorted input images
In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images.
This is a helper-function for plotting distorted input images.
End of explanation
def get_test_image(i):
return images_test[i, :, :, :], cls_test[i]
Explanation: Helper-function for getting an image and its class-number from the test-set.
End of explanation
img, cls = get_test_image(16)
Explanation: Get an image and its true class from the test-set.
End of explanation
plot_distorted_image(img, cls)
Explanation: Plot 9 random distortions of the image. If you re-run this code you will get slightly different results.
End of explanation
tf.summary.FileWriter('graphs', session.graph)
optimize(num_iterations=100000)
Explanation: Perform optimization
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Results
Examples of mis-classifications are plotted below.
End of explanation
plot_conv_weights(weights=weights_conv1, input_channel=0)
Explanation: Convolutional Weights
The following shows some of the weights (or filters) for the first convolutional layer. There are 3 input channels so there are 3 of these sets, which you may plot by changing the input_channel.
Note that positive weights are red and negative weights are blue.
End of explanation
def plot_image(image):
fig, axes = plt.subplots(1, 2)
ax0 = axes.flat[0]
ax1 = axes.flat[1]
ax0.imshow(image, interpolation='nearest')
ax1.imshow(image, interpolation='spline16')
ax0.set_xlabel('Raw')
ax1.set_xlabel('Smooth')
plt.show()
Explanation: Output of convolutional layers
Helper-function for plotting an image.
End of explanation
img, cls = get_test_image(16)
plot_image(img)
Explanation: Plot an image from the test-set. The raw pixelated image is used as input to the neural network.
End of explanation
plot_layer_output(output_conv1, image=img)
Explanation: Use the raw image as input to the neural network and plot the output of the first convolutional layer.
End of explanation
plot_layer_output(output_conv2, image=img)
Explanation: Using the same image as input to the neural network, now plot the output of the second convolutional layer.
End of explanation
label_pred, cls_pred = session.run([y_pred, y_pred_cls],
feed_dict={x: [img]})
Explanation: Predicted class-labels
Get the predicted class-label and class-number for this image.
End of explanation
# Set the rounding options for numpy.
np.set_printoptions(precision=3, suppress=True)
# Print the predicted label.
print(label_pred[0])
class_names[3]
class_names[5]
Explanation: Print the predicted class-label.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
3,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a synthetic 1PL/2PL IRT model and sample an interaction history from it
Step1: Verify that models.OneParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = False.
Step2: Verify that models.TwoParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = True.
Step3: Verify that models.MIRTModel can recover parameters
Step4: Verify that all models achieve similar training AUCs
Step5: Construct a synthetic embedding
Step6: Sample interactions from the synthetic embedding
Step7: Estimate an embedding from the sampled interactions
Step12: Visualize the estimated embedding vs. the true embedding | Python Code:
num_students = 2000
num_assessments = 3000
num_ixns_per_student = 1000
USING_2PL = False # False => using 1PL
proficiencies = np.random.normal(0, 1, num_students)
difficulties = np.random.normal(0, 1, num_assessments)
if USING_2PL:
discriminabilities = np.random.normal(0, 1, num_assessments)
else:
discriminabilities = np.ones(num_assessments)
student_ids = ['S'+str(x) for x in xrange(num_students)]
assessment_ids = ['A'+str(x) for x in xrange(num_assessments)]
ixns = [None] * (num_students * num_ixns_per_student)
assessment_idxes = range(num_assessments)
for student_idx, student_id in enumerate(student_ids):
for t in xrange(num_ixns_per_student):
module_idx = random.choice(assessment_idxes)
pass_likelihood = 1 / (1 + math.exp(-(discriminabilities[module_idx]*proficiencies[student_idx] + difficulties[module_idx])))
ixns[student_idx * num_ixns_per_student + t] = {
'student_id' : student_id,
'module_id' : assessment_ids[module_idx],
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'outcome' : np.random.random() < pass_likelihood,
'timestep' : t+1
}
history = datatools.InteractionHistory(pd.DataFrame(ixns))
history.idx_of_student_id = lambda x: int(x[1:])
history.idx_of_assessment_id = lambda x: int(x[1:])
mirt_model = models.MIRTModel(history, dims=1, using_assessment_factors=USING_2PL)
estimator = est.MIRTMAPEstimator(
regularization_constant=1e-3,
ftol=1e-5,
debug_mode_on=True)
mirt_model.fit(estimator)
onepl_model = models.OneParameterLogisticModel(
history.data, select_regularization_constant=True)
onepl_model.fit()
twopl_model = models.TwoParameterLogisticModel(
history.data, select_regularization_constant=True)
twopl_model.fit()
student_idxes = [int(k[1:]) for k in history.data['student_id'].unique()]
assessment_idxes = [int(k[1:]) for k in history.data['module_id'].unique()]
Explanation: Generate a synthetic 1PL/2PL IRT model and sample an interaction history from it
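For reference, the pass probability drawn inside the loop above is the standard two-parameter logistic (2PL) item response function; the 1PL case simply fixes every discriminability to 1, exactly as the code does when USING_2PL is False:

$$P(\text{pass}_{ij} = 1) = \frac{1}{1 + e^{-(a_j \theta_i + b_j)}}$$

where $\theta_i$ is student $i$'s proficiency and $a_j$, $b_j$ are assessment $j$'s discriminability and difficulty parameters, as sampled in the code. This excerpt also assumes the notebook's earlier import cell, roughly:
```python
# Assumed imports for this excerpt; datatools, models, est and evaluate are the
# authors' own modules, imported earlier in the original notebook.
import copy
import math
import os
import pickle
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```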
End of explanation
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties[assessment_idxes], onepl_model.model.coef_[0, num_students:])
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(onepl_model.model.coef_[0, num_students:] - difficulties[assessment_idxes], bins=20)
plt.show()
plt.xlabel('True proficiencies')
plt.ylabel('Estimated proficiencies')
plt.scatter(proficiencies[student_idxes], onepl_model.model.coef_[0, :num_students])
plt.show()
plt.xlabel('Estimated proficiency - true proficiency')
plt.ylabel('Frequency (number of students)')
plt.hist(onepl_model.model.coef_[0, :num_students] - proficiencies[student_idxes], bins=20)
plt.show()
Explanation: Verify that models.OneParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = False.
End of explanation
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties[assessment_idxes], twopl_model.model.coef_[0, (num_students*num_assessments):])
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(twopl_model.model.coef_[0, (num_students*num_assessments):] - difficulties[assessment_idxes], bins=20)
plt.show()
est_params = twopl_model.model.coef_[0, :(num_students*num_assessments)]
true_params = discriminabilities[:, None].dot(proficiencies[:, None].T).ravel()
plt.xlabel('True proficiency*discriminability')
plt.ylabel('Estimated proficiency*discriminability')
plt.scatter(true_params, est_params)
plt.show()
plt.xlabel('Estimated proficiency*discriminability - true proficiency*discriminability')
plt.ylabel('Frequency (number of student-assessment pairs)')
plt.hist(est_params - true_params, bins=20)
plt.show()
Explanation: Verify that models.TwoParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = True.
End of explanation
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties, mirt_model.assessment_offsets)
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(mirt_model.assessment_offsets - difficulties, bins=20)
plt.show()
plt.xlabel('True proficiencies')
plt.ylabel('Estimated proficiencies')
plt.scatter(proficiencies, mirt_model.student_factors[:, 0])
plt.show()
plt.xlabel('Estimated proficiency - true proficiency')
plt.ylabel('Frequency (number of students)')
plt.hist(mirt_model.student_factors[:, 0] - proficiencies, bins=20)
plt.show()
plt.xlabel('True discriminabilities')
plt.ylabel('Estimated discriminabilities')
plt.scatter(discriminabilities, mirt_model.assessment_factors[:, 0])
plt.show()
plt.xlabel('Estimated discriminability - true discriminability')
plt.ylabel('Frequency (number of assessments)')
plt.hist(mirt_model.assessment_factors[:, 0] - discriminabilities, bins=20)
plt.show()
Explanation: Verify that models.MIRTModel can recover parameters
End of explanation
# models.OneParameterLogisticModel
evaluate.training_auc(onepl_model, history, plot_roc_curve=True)
# models.TwoParameterLogisticModel
evaluate.training_auc(twopl_model, history, plot_roc_curve=True)
# models.MIRTModel
evaluate.training_auc(mirt_model, history, plot_roc_curve=True)
# true model
true_model = copy.deepcopy(mirt_model)
true_model.student_factors[:, 0] = proficiencies
true_model.assessment_factors[:, 0] = discriminabilities
true_model.assessment_offsets = difficulties
evaluate.training_auc(true_model, history, plot_roc_curve=True)
Explanation: Verify that all models achieve similar training AUCs
End of explanation
num_students = 10000
num_assessment_interactions_per_step = 100
grid_size = 5
embedding_dimension = 2
num_assessments = grid_size ** 2
num_lessons = 2 * grid_size * (grid_size - 1)
num_lesson_interactions_per_student = 2 * (grid_size - 1) + 2
S = np.zeros((num_students, embedding_dimension, num_lesson_interactions_per_student))
A = np.zeros((num_assessments, embedding_dimension))
L = np.zeros((num_lessons, embedding_dimension))
Q = np.zeros((num_lessons, embedding_dimension))
lesson_idx_of_loc = {}
assessment_idx_of_loc = {}
cell_size = 10 / (grid_size - 1)
lesson_count = 0
for i in xrange(grid_size):
for j in xrange(grid_size):
A[grid_size * i + j, :] = [i, j]
assessment_idx_of_loc[(i, j)] = grid_size * i + j
if j < grid_size - 1:
Q[lesson_count, :] = [i, j]
L[lesson_count, :] = [0, 1]
lesson_idx_of_loc[(i, j, 0, 1)] = lesson_count
lesson_count += 1
if i < grid_size - 1:
Q[lesson_count, :] = [i, j]
L[lesson_count, :] = [1, 0]
lesson_idx_of_loc[(i, j, 1, 0)] = lesson_count
lesson_count += 1
A *= cell_size
Q *= cell_size
L *= cell_size
A = np.maximum(1e-3, A)
Q = np.maximum(1e-3, Q)
lesson_loc_of_idx = {v: k for k, v in lesson_idx_of_loc.iteritems()}
assessment_loc_of_idx = {v: k for k, v in assessment_idx_of_loc.iteritems()}
Explanation: Construct a synthetic embedding
End of explanation
id_of_loc = lambda x: '-'.join(str(z) for z in x)
data = []
for student_idx in xrange(num_students):
student_id = 'S' + str(student_idx)
steps = ([(0, 1)] * (grid_size - 1)) + ([(1, 0)] * (grid_size - 1))
random.shuffle(steps)
x, y = 0, 0
t = 1
assessment_idx = assessment_idx_of_loc[(0, 0)]
assessment_id = id_of_loc(assessment_loc_of_idx[assessment_idx])
pass_likelihood = 1 / (1 + math.exp(-(np.dot(S[student_idx, :, t], A[assessment_idx, :]) / np.linalg.norm(A[assessment_idx, :]) - np.linalg.norm(A[assessment_idx, :]))))
outcome = random.random() < pass_likelihood
data.append({
'student_id' : student_id,
'module_id' : assessment_id,
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'timestep' : t,
'outcome' : outcome})
for i, j in steps:
lesson_idx = lesson_idx_of_loc[(x, y, i, j)]
lesson_id = id_of_loc(lesson_loc_of_idx[lesson_idx])
data.append({
'student_id' : student_id,
'module_id' : lesson_id,
'module_type' : datatools.LessonInteraction.MODULETYPE,
'timestep' : t,
'outcome' : None})
x += i
y += j
# DEBUG
S[student_idx, :, t+1] = S[student_idx, :, t] + L[lesson_idx, :]# / (1 + math.exp(-(np.dot(S[student_idx, :, t], Q[lesson_idx, :]) / np.linalg.norm(Q[lesson_idx, :]) - np.linalg.norm(Q[lesson_idx, :]))))
t += 1
for _ in xrange(num_assessment_interactions_per_step):
assessment_idx = random.randint(0, num_assessments - 1)
assessment_id = id_of_loc(assessment_loc_of_idx[assessment_idx])
pass_likelihood = 1 / (1 + math.exp(-(np.dot(S[student_idx, :, t], A[assessment_idx, :]) / np.linalg.norm(A[assessment_idx, :]) - np.linalg.norm(A[assessment_idx, :]))))
outcome = random.random() < pass_likelihood
# BEGIN DEBUG
if assessment_idx_of_loc[(0, 0)] == assessment_idx:
outcome = random.random() < 0.1
# END DEBUG
data.append({
'student_id' : student_id,
'module_id' : assessment_id,
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'timestep' : t,
'outcome' : outcome})
history = datatools.InteractionHistory(pd.DataFrame(data))
assessment_idx_map = {id_of_loc(loc): idx for idx, loc in assessment_loc_of_idx.iteritems()}
lesson_idx_map = {id_of_loc(loc): idx for idx, loc in lesson_loc_of_idx.iteritems()}
history.compute_idx_maps(assessment_idx=assessment_idx_map, lesson_idx=lesson_idx_map)
len(history.data)
history_path = os.path.join('data', 'lse_synthetic_history.pkl')
with open(history_path, 'wb') as f:
pickle.dump(history, f, pickle.HIGHEST_PROTOCOL)
Explanation: Sample interactions from the synthetic embedding
End of explanation
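# A quick sanity check on the sampled interactions (a minimal sketch; assumes history.data is the
# underlying pandas DataFrame, as used above): interaction counts by type and the overall pass rate.
print(history.data['module_type'].value_counts())
print('pass rate on assessment interactions: %.3f' % history.data['outcome'].dropna().astype(float).mean())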
model = models.EmbeddingModel(
history, embedding_dimension=2,
using_lessons=True, using_prereqs=False, using_bias=True,
learning_update_variance_constant=0.5)
estimator = est.EmbeddingMAPEstimator(
regularization_constant=1e-3, using_scipy=True,
debug_mode_on=True, ftol=1e-4)
model.fit(estimator)
model = models.OneParameterLogisticModel(history.data, select_regularization_constant=True)
model.fit()
evaluate.training_auc(model, history, plot_roc_curve=True)
Explanation: Estimate an embedding from the sampled interactions
End of explanation
plt.scatter(A[:, 0], A[:, 1])
for assessment_idx in xrange(num_assessments):
plt.annotate(id_of_assessment_idx(assessment_idx), (A[assessment_idx, 0], A[assessment_idx, 1]))
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i, j + 1)]]
plt.plot(A[assessment_idxes, 0], A[assessment_idxes, 1], c='black')
if i < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i + 1, j)]]
plt.plot(A[assessment_idxes, 0], A[assessment_idxes, 1], c='black')
plt.show()
plt.scatter(model.assessment_embeddings[:, 0], model.assessment_embeddings[:, 1])
for assessment_idx in xrange(num_assessments):
plt.annotate(id_of_assessment_idx(assessment_idx), (model.assessment_embeddings[assessment_idx, 0], model.assessment_embeddings[assessment_idx, 1]))
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i, j + 1)]]
plt.plot(model.assessment_embeddings[assessment_idxes, 0], model.assessment_embeddings[assessment_idxes, 1], c='black')
if i < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i + 1, j)]]
plt.plot(model.assessment_embeddings[assessment_idxes, 0], model.assessment_embeddings[assessment_idxes, 1], c='black')
plt.show()
plt.quiver(Q[:, 0], Q[:, 1], L[:, 0], L[:, 1], pivot='tail', color='black')
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i, j + 1)]]
plt.plot(Q[lesson_idxes, 0], Q[lesson_idxes, 1], c='black')
if i < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i + 1, j)]]
plt.plot(Q[lesson_idxes, 0], Q[lesson_idxes, 1], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.quiver(model.prereq_embeddings[:, 0], model.prereq_embeddings[:, 1], model.lesson_embeddings[:, 0], model.lesson_embeddings[:, 1], pivot='tail', color='black')
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i, j + 1)]]
plt.plot(model.prereq_embeddings[lesson_idxes, 0], model.prereq_embeddings[lesson_idxes, 1], c='black')
if i < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i + 1, j)]]
plt.plot(model.prereq_embeddings[lesson_idxes, 0], model.prereq_embeddings[lesson_idxes, 1], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
right_lesson_idxes = [lesson_idx_of_loc[(i, j, 1, 0)] for i in xrange(grid_size) for j in xrange(grid_size) if (i, j, 1, 0) in lesson_idx_of_loc]
up_lesson_idxes = [lesson_idx_of_loc[(i, j, 0, 1)] for i in xrange(grid_size) for j in xrange(grid_size) if (i, j, 0, 1) in lesson_idx_of_loc]
plt.quiver(0, 0, L[right_lesson_idxes, 0], L[right_lesson_idxes, 1], pivot='tail', color='red', alpha=0.25)
plt.quiver(0, 0, L[up_lesson_idxes, 0], L[up_lesson_idxes, 1], pivot='tail', color='blue', alpha=0.25)
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.quiver(0, 0, model.lesson_embeddings[right_lesson_idxes, 0], model.lesson_embeddings[right_lesson_idxes, 1], pivot='tail', color='red', alpha=0.25)
plt.quiver(0, 0, model.lesson_embeddings[up_lesson_idxes, 0], model.lesson_embeddings[up_lesson_idxes, 1], pivot='tail', color='blue', alpha=0.25)
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.scatter(L[right_lesson_idxes, 0], L[right_lesson_idxes, 1], color='red', label='1-0')
plt.scatter(L[up_lesson_idxes, 0], L[up_lesson_idxes, 1], color='blue', label='0-1')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.legend(loc='best')
plt.show()
plt.scatter(model.lesson_embeddings[right_lesson_idxes, 0], model.lesson_embeddings[right_lesson_idxes, 1], color='red', label='1-0')
plt.scatter(model.lesson_embeddings[up_lesson_idxes, 0], model.lesson_embeddings[up_lesson_idxes, 1], color='blue', label='0-1')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.legend(loc='best')
plt.show()
student_idxes = random.sample(range(num_students), 10)
for student_idx in student_idxes:
plt.scatter(S[student_idx, 0, :], S[student_idx, 1, :], c='black')
for i in xrange(num_lesson_interactions_per_student):
plt.plot(S[student_idx, 0, i:(i+2)], S[student_idx, 1, i:(i+2)], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.show()
for student_idx in student_idxes:
plt.scatter(model.student_embeddings[student_idx, 0, :], model.student_embeddings[student_idx, 1, :], c='black')
for i in xrange(num_lesson_interactions_per_student):
plt.plot(model.student_embeddings[student_idx, 0, i:(i+2)], model.student_embeddings[student_idx, 1, i:(i+2)], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.show()
for student_idx in student_idxes:
for i in xrange(embedding_dimension):
        plt.plot(S[student_idx, i, :], '-s', label='Skill %d' % (i + 1))
plt.xlabel('Timestep')
plt.ylabel('Skill')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.legend(loc='best')
plt.show()
for student_idx in student_idxes:
for i in xrange(embedding_dimension):
        plt.plot(model.student_embeddings[student_idx, i, :], '-s', label='Skill %d' % (i + 1))
plt.xlabel('Timestep')
plt.ylabel('Skill')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.legend(loc='best')
plt.show()
Explanation: Visualize the estimated embedding vs. the true embedding
End of explanation |
3,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accuracy vs Mag DEIMOS Spec Test Set
In this notebook we examine the accuracy as a function of magnitude for sources with spectroscopic classifications from the DEIMOS COSMOS survey. The DEIMOS set contains $\sim$ 10K sources, and $\sim$ 2.7K sources are crossmatched with the PS1 catalog.
The overall accuracy of the classification by the ML model we developed is $\sim$ 95%, but the FoM @FPR=0.05 is lower than 0.4, which is worse than the FoM obtained with the HSTxPS1 catalog.
We found the accuracy of the HST classification is also $\sim$ 95%.
The performance of the ML model, therefore, is reasonable because the ML model is trained with the HST classification.
Step1: if "Remarks" contains "star", the source is classified as a star.
Step2: The distribution of galaxies looks similar to that of the HST COSMOS catalog, but that of stars has a peak at i-mag$\sim$22, which is not shown in that of the HSTxPS1 catalog.
Step3: ROC curve and Accuracy
Step4: Accuracy v.s. MAG with DEIMOSxHST
Step5: Cross-matching the sources in the DEIMOS catalog within radius = 0.5 arcsec around those in the HST catalog
Step6: The distribution of the distance has a peak at 0.1 arcsec. Change the cross-matching radius to 0.3 arcsec.
Step7: Remove duplicated sources.
Step8: Remove the sources which cannot be classified as star or galaxy by the DEIMOS catalog.
Step9: Accuracy v.s. MAG | Python Code:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
_df = pd.read_table('DEIMOS/deimos_10K_March2018/deimos.tbl', header=None)
arr = np.empty((len(_df), len(_df.iloc[0][0].split())), dtype='<U50')
for i in range(len(_df)):
i_row = [k for k in _df.iloc[i][0].split(' ') if (k != '')and(k != ' ')]
for j in range(len(_df.iloc[0][0].split())):
arr[i][j] = i_row[j]
df = pd.DataFrame(arr)
ra = np.array(df[1], dtype=float)
dec = np.array(df[2], dtype=float)
sel = np.array(df[3], dtype=int)
imag = np.array(df[4].replace('null', '-999').replace(' null', '-999'), dtype=float)
kmag = np.array(df[5].replace('null', '-999').replace(' null', '-999'), dtype=float)
zspec = np.array(df[6].replace('null', '-999').replace(' null', '-999'), dtype=float)
Qflag = np.array(df[7].replace('null', '-999').replace(' null', '-999'), dtype=int)
Q = np.array(df[8].replace('null', '-999').replace(' null', '-999'), dtype=float)
np.array(df[9][0:20])
sgFlag = np.empty(len(df), dtype=int)
for i in range(len(df[9])):
if 'star' in df[9][i]:
sgFlag[i] = 1 # star
elif 'null' in df[9][i]:
sgFlag[i] = -999 # null
else:
sgFlag[i] = 0 # galaxy
Explanation: Accuracy vs Mag DEIMOS Spec Test Set
In this notebook we examine the accuracy as a function of magnitude for sources with spectroscopic classifications from the DEIMOS COSMOS survey. The DEIMOS set contains $\sim$ 10K sources, and $\sim$ 2.7K sources are crossmatched with the PS1 catalog.
The overall accuracy of the classification by the ML model we developed is $\sim$ 95%, but the FoM @FPR=0.05 is lower than 0.4, which is worse than the FoM obtained with the HSTxPS1 catalog.
We found the accuracy of the HST classification is also $\sim$ 95%.
The performance of the ML model, therefore, is reasonable because the ML model is trained with the HST classification.
End of explanation
plt.hist(imag[sgFlag!=-999], bins=np.arange(15, 28, 0.2), color='0.8', label='All')
plt.hist(imag[sgFlag==0], bins=np.arange(15, 28, 0.2), alpha=0.5, label='GALAXY')
plt.hist(imag[sgFlag==1], bins=np.arange(15, 28, 0.2), alpha=0.5, label='STAR')
plt.yscale('log')
plt.xlabel('i mag'); plt.ylabel('#')
plt.legend(loc='best')
plt.show()
Explanation: if "Remarks" contains "star", the source is classified as a star.
End of explanation
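# A quick look at the resulting class balance (a small sketch): 0 = galaxy, 1 = star, -999 = unusable.
values, counts = np.unique(sgFlag, return_counts=True)
print(dict(zip(values, counts)))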
df = pd.DataFrame()
df['ra'] = ra; df['dec'] = dec
df['sel'] = sel
df['imag'] = imag; df['kmag'] = kmag
df['zspec'] = zspec
df['Qflag'] = Qflag; df['Q'] = Q
df['class'] = sgFlag
df[0:10]
df.to_csv('./DEIMOS/DEIMOS.csv', index=None)
import star_galaxy_models
rf_obj = star_galaxy_models.RandomForestModel()
rf_obj.read_rf_from_pickle()
features = ['wwpsfChiSq', 'wwExtNSigma', 'wwpsfLikelihood',
'wwPSFKronRatio', 'wwPSFKronDist', 'wwPSFApRatio',
'wwmomentRH', 'wwmomentXX', 'wwmomentXY', 'wwmomentYY',
'wwKronRad']
from sklearn.metrics import roc_curve, accuracy_score, auc, make_scorer
Explanation: The distribution of galaxies looks similar to that of the HST COSMOS catalog, but that of stars has a peak at i-mag$\sim$22, which is not shown in that of the HSTxPS1 catalog.
End of explanation
ps1_dei = pd.read_csv('./DEIMOS/PS1_DEIMOS_features.csv').drop_duplicates(subset='objid')
print("PS1xDEIMOS catalog contains %i sources."%len(ps1_dei))
ps1_dei_det_mask = np.logical_and(ps1_dei['class'] != -999, (ps1_dei.nDetections>0)&(ps1_dei.wwKronFlux>0))
ps1_dei = ps1_dei[ps1_dei_det_mask]
print("%i sources are classified by both of the DEIMOS and the ML model."%len(ps1_dei))
ps1_df = pd.read_csv('./DEIMOS/HST_COSMOS_features.csv')
dupl_mask = np.empty(len(ps1_dei), dtype=bool)
for i in range(len(dupl_mask)):
dupl_mask[i] = ps1_dei.objid.iloc[i] in np.array(ps1_df.objid)
print("Only %i sources are included both of the PS1xDEIMOS and the PS1xHST catalog..."%np.sum(dupl_mask))
ps1_dei = ps1_dei[~dupl_mask]
#print("%i sources are not contained in PS1xHST catalog."%len(ps1_dei))
kron_mag = -2.5*np.log10(ps1_dei.wwKronFlux/3631)
ps1_dei_features = ps1_dei[features]
ps1_dei_class = ps1_dei['class']
ps1_dei_score = rf_obj.rf_clf_.predict_proba(ps1_dei_features)
ps1_dei_pred = rf_obj.rf_clf_.predict(ps1_dei_features)
print("Overall accuracy of the classification by the ML model is %f"%accuracy_score(ps1_dei_class, ps1_dei_pred))
fpr, tpr, thre = roc_curve(ps1_dei_class, ps1_dei_score[:,1])
plt.grid(linestyle='dotted')
plt.plot(fpr, tpr, 'k-')
#plt.xscale('log'); plt.yscale('log')
plt.xlim(1e-3, 1e-1); plt.ylim(0.1, 1.01)
plt.xlabel('FPR'); plt.ylabel('TPR')
plt.show()
ps1_dei_class = np.array(ps1_dei_class)
ps1_dei_score = np.array(ps1_dei_score)
kron_mag = np.array(kron_mag)
binwidth = 1.5
Nboot = 100
mag_array = np.arange(14 , 23+binwidth, binwidth)
kron_mag = np.array(-2.5*np.log10(ps1_dei['wwKronFlux']/3631))
ml_acc_arr = np.zeros_like(mag_array, dtype=float)
ml_boot_scatt = np.vstack((np.zeros_like(mag_array, dtype=float), np.zeros_like(mag_array, dtype=float)))
for bin_num, binedge in enumerate(mag_array):
bin_sources = np.where((kron_mag >= binedge) & (kron_mag < binedge + binwidth))
ml_acc_arr[bin_num] = accuracy_score(ps1_dei_class[bin_sources],
ps1_dei_pred[bin_sources])
ml_boot_acc = np.empty(Nboot)
for i in range(Nboot):
boot_sources = np.random.choice(bin_sources[0], len(bin_sources[0]),
replace=True)
ml_boot_acc[i] = accuracy_score(ps1_dei_class[boot_sources],
ps1_dei_pred[boot_sources])
ml_boot_scatt[:,bin_num] = np.percentile(ml_boot_acc, [16, 84])
from sklearn.neighbors import KernelDensity
kde_grid = np.linspace(10,26,200)
deimos_stars = np.where(ps1_dei_class == 1)
deimos_gal = np.where(ps1_dei_class == 0)
deimos_kde_gal_norm = len(deimos_gal[0])/len(ps1_dei_class)
deimos_kde_star_norm = 1 - deimos_kde_gal_norm
kde_deimos = KernelDensity(bandwidth=1.059*np.std(kron_mag, ddof=1)*len(kron_mag)**(-0.2),
rtol=1E-4)
kde_deimos.fit(kron_mag[:, np.newaxis])
kde_deimos_stars = KernelDensity(bandwidth=1.059*np.std(kron_mag[deimos_stars], ddof=1)*len(kron_mag[deimos_stars])**(-0.2),
rtol=1E-4)
kde_deimos_stars.fit(kron_mag[deimos_stars[0], np.newaxis])
kde_deimos_gal = KernelDensity(bandwidth=1.059*np.std(kron_mag[deimos_gal], ddof=1)*len(kron_mag[deimos_gal])**(-0.2),
rtol=1E-4)
kde_deimos_gal.fit(kron_mag[deimos_gal[0], np.newaxis])
pdf_deimos = np.exp(kde_deimos.score_samples(kde_grid[:, np.newaxis]))
pdf_deimos_stars = np.exp(kde_deimos_stars.score_samples(kde_grid[:, np.newaxis]))
pdf_deimos_gal = np.exp(kde_deimos_gal.score_samples(kde_grid[:, np.newaxis]))
from matplotlib.ticker import MultipleLocator
#import seaborn as sns
color_dict = {'ml': "black"}
mag_bin_centers = mag_array + binwidth/2
#cmap_star = sns.cubehelix_palette(rot=0.5, light=0.7,dark=0.3,as_cmap=True)
#cmap_gal = sns.cubehelix_palette(start=0.3,rot=-0.5,light=0.7,dark=0.3,as_cmap=True)
fig, ax = plt.subplots(figsize=(8, 5))
ax.grid(linestyle='dotted', zorder=1)
ax.errorbar(mag_bin_centers, ml_acc_arr,
yerr=np.abs(ml_boot_scatt - ml_acc_arr),
ls='-', lw=.75, fmt='o',
color=color_dict['ml'], label="ML model",
linewidth=1.5, markersize=7.5, zorder=5)
# add KDE plots
ax.fill(kde_grid, pdf_deimos + 0.5, alpha=0.4, color="0.7", zorder=2)
ax.fill(kde_grid, pdf_deimos_gal*deimos_kde_gal_norm + 0.5, alpha=0.7, zorder=3)#, color=cmap_gal(0.25))
ax.fill(kde_grid, pdf_deimos_stars*deimos_kde_star_norm + 0.5, alpha=0.7, zorder=4)#, color=cmap_star(0.25))
ax.set_ylim(0.5,1.01)
ax.set_xlim(14, 24)
ax.tick_params(which="both", top=True, right=True, labelsize=15)
ax.set_xlabel('whiteKronMag', fontsize=15)
ax.set_ylabel('Accuracy', fontsize=15)
ax.yaxis.set_minor_locator(MultipleLocator(0.025))
ax.xaxis.set_major_locator(MultipleLocator(2))
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
#ax.legend(bbox_to_anchor=(0.01, 0.3, 1., 0.102), loc=3, fontsize=13)
fig.subplots_adjust(top=0.98,right=0.98,left=0.1,bottom=0.12)
Explanation: ROC curve and Accuracy
End of explanation
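# The figure of merit quoted in the introduction is the TPR at FPR = 0.05; a minimal sketch of
# that number from the ROC curve above (linear interpolation between ROC points is assumed).
fom = np.interp(0.05, fpr, tpr)
print("FoM (TPR at FPR = 0.05) = %.3f" % fom)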
from astropy.table import Table
deimos = pd.read_csv('./DEIMOS/DEIMOS.csv')
hst = Table.read('./DEIMOS/HST_COSMOS.fit').to_pandas()
hstX = np.empty((len(hst), 2), dtype=np.float64)
hstX[:, 0] = hst['ALPHA_J2000']
hstX[:, 1] = hst['DELTA_J2000']
deiX = np.empty((len(deimos), 2), dtype=np.float64)
deiX[:, 0] = deimos['ra']
deiX[:, 1] = deimos['dec']
Explanation: Accuracy v.s. MAG with DEIMOSxHST
End of explanation
from astroML.crossmatch import crossmatch_angular
max_radius = 0.5 / 3600 # 0.5 arcsec
dist, ind = crossmatch_angular(hstX, deiX, max_radius)
match = ~np.isinf(dist)
print("The number of sources cross-matched is %i"%np.sum(match))
plt.hist(dist[match]*3600, bins=np.arange(0, 0.5,0.01))
plt.xlabel('Distance')
plt.show()
Explanation: Cross-matching the sources in the DEIMOS catalog within radius = 0.5 arcsec around those in the HST catalog
End of explanation
from astroML.crossmatch import crossmatch_angular
max_radius = 0.3 / 3600 # 0.3 arcsec
dist, ind = crossmatch_angular(hstX, deiX, max_radius)
match = ~np.isinf(dist)
print("The number of sources cross-matched is %i"%np.sum(match))
plt.hist(dist[match]*3600, bins=np.arange(0, 0.5,0.01))
plt.xlabel('Distance')
plt.show()
hst_match = hst[match]
deimos_match = deimos.loc[ind[match]]
Explanation: The distribution of the distance has a peak at 0.1 arcsec. Change the cross-matching radius to 0.3 arcsec.
End of explanation
dupl_mask = deimos_match.duplicated('ra')
deimos_match_uniq = deimos_match[~dupl_mask.values]
hst_match_uniq = hst_match[~dupl_mask.values]
Explanation: Remove duplicated sources.
End of explanation
good_mask = deimos_match_uniq["class"] != -999
deimos_match_uniq_good = deimos_match_uniq[good_mask.values]
hst_match_uniq_good = hst_match_uniq[good_mask.values]
print("The number of sources used to verify the classification accuracy is %i"%len(deimos_match_uniq_good))
xlims = [12, 29]
ylims = [12, 29]
plt.hexbin(hst_match_uniq["MAG_BEST"], deimos_match_uniq["imag"],
extent=[xlims[0], xlims[1], ylims[0], ylims[1]],
bins='log', cmap='viridis')
plt.xlim(xlims); plt.ylim(ylims)
plt.xlabel('MAG_BEST(HST)')
plt.ylabel('imag(DEIMOS)')
from sklearn.metrics import accuracy_score
print("The overall accuracy of the classification of the HST catalog is %0.4f"\
%accuracy_score(deimos_match_uniq_good["class"], hst_match_uniq_good["MU_CLASS"]-1))
Explanation: Remove the sources which cannot be classified as star or galaxy by the DEIMOS catalog.
End of explanation
dei_class = np.array(deimos_match_uniq_good["class"], dtype=int)
hst_class = np.array(hst_match_uniq_good["MU_CLASS"]-1, dtype=int)
kron_mag = np.array(hst_match_uniq_good["MAG_BEST"])
binwidth = 1
Nboot = 100
mag_array = np.arange(14 , 26+binwidth, binwidth)
ml_acc_arr = np.zeros_like(mag_array, dtype=float)
ml_boot_scatt = np.vstack((np.zeros_like(mag_array, dtype=float), np.zeros_like(mag_array, dtype=float)))
for bin_num, binedge in enumerate(mag_array):
bin_sources = np.where((kron_mag >= binedge) & (kron_mag < binedge + binwidth))
ml_acc_arr[bin_num] = accuracy_score(dei_class[bin_sources],
hst_class[bin_sources])
ml_boot_acc = np.empty(Nboot)
for i in range(Nboot):
boot_sources = np.random.choice(bin_sources[0], len(bin_sources[0]),
replace=True)
ml_boot_acc[i] = accuracy_score(dei_class[boot_sources],
hst_class[boot_sources])
ml_boot_scatt[:,bin_num] = np.percentile(ml_boot_acc, [16, 84])
from sklearn.neighbors import KernelDensity
kde_grid = np.linspace(10,29,200)
deimos_stars = np.where(dei_class == 1)
deimos_gal = np.where(dei_class == 0)
deimos_kde_gal_norm = len(deimos_gal[0])/len(dei_class)
deimos_kde_star_norm = 1 - deimos_kde_gal_norm
kde_deimos = KernelDensity(bandwidth=1.059*np.std(kron_mag, ddof=1)*len(kron_mag)**(-0.2),
rtol=1E-4)
kde_deimos.fit(kron_mag[:, np.newaxis])
kde_deimos_stars = KernelDensity(bandwidth=1.059*np.std(kron_mag[deimos_stars], ddof=1)*len(kron_mag[deimos_stars])**(-0.2),
rtol=1E-4)
kde_deimos_stars.fit(kron_mag[deimos_stars[0], np.newaxis])
kde_deimos_gal = KernelDensity(bandwidth=1.059*np.std(kron_mag[deimos_gal], ddof=1)*len(kron_mag[deimos_gal])**(-0.2),
rtol=1E-4)
kde_deimos_gal.fit(kron_mag[deimos_gal[0], np.newaxis])
pdf_deimos = np.exp(kde_deimos.score_samples(kde_grid[:, np.newaxis]))
pdf_deimos_stars = np.exp(kde_deimos_stars.score_samples(kde_grid[:, np.newaxis]))
pdf_deimos_gal = np.exp(kde_deimos_gal.score_samples(kde_grid[:, np.newaxis]))
from matplotlib.ticker import MultipleLocator
#import seaborn as sns
color_dict = {'ml': "black"}
mag_bin_centers = mag_array + binwidth/2
#cmap_star = sns.cubehelix_palette(rot=0.5, light=0.7,dark=0.3,as_cmap=True)
#cmap_gal = sns.cubehelix_palette(start=0.3,rot=-0.5,light=0.7,dark=0.3,as_cmap=True)
fig, ax = plt.subplots(figsize=(8, 5))
ax.grid(linestyle='dotted', zorder=1)
ax.errorbar(mag_bin_centers, ml_acc_arr,
yerr=np.abs(ml_boot_scatt - ml_acc_arr),
ls='-', lw=.75, fmt='o',
color=color_dict['ml'], label="ML model",
linewidth=1.5, markersize=7.5, zorder=5)
# add KDE plots
ax.fill(kde_grid, pdf_deimos + 0.5, alpha=0.4, color="0.7", zorder=2)
ax.fill(kde_grid, pdf_deimos_gal*deimos_kde_gal_norm + 0.5, alpha=0.7, zorder=3)#, color=cmap_gal(0.25))
ax.fill(kde_grid, pdf_deimos_stars*deimos_kde_star_norm + 0.5, alpha=0.7, zorder=4)#, color=cmap_star(0.25))
ax.set_ylim(0.5,1.01)
ax.set_xlim(14, 27)
ax.tick_params(which="both", top=True, right=True, labelsize=15)
ax.set_xlabel('MAG_BEST', fontsize=15)
ax.set_ylabel('Accuracy', fontsize=15)
ax.yaxis.set_minor_locator(MultipleLocator(0.025))
ax.xaxis.set_major_locator(MultipleLocator(2))
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
#ax.legend(bbox_to_anchor=(0.01, 0.3, 1., 0.102), loc=3, fontsize=13)
fig.subplots_adjust(top=0.98,right=0.98,left=0.1,bottom=0.12)
Explanation: Accuracy v.s. MAG
End of explanation |
3,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Inverse Regression with Yelp reviews
In this note we'll use gensim to turn the Word2Vec machinery into a document classifier, as in Document Classification by Inversion of Distributed Language Representations from ACL 2015.
Data and prep
First, download to the same directory as this note the data from the Yelp recruiting contest on kaggle
Step1: First, we define a super simple parser
Step2: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
Step3: For example
Step4: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
Step5: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
Step6: Word2Vec modeling
We fit out-of-the-box Word2Vec
Step7: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
Step8: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
Step10: Inversion of the distributed representations
At this point, we have 5 different word2vec language representations. Each 'model' has been trained conditional (i.e., limited to) text from a specific star rating. We will apply Bayes rule to go from p(text|stars) to p(stars|text).
For any new sentence we can obtain its likelihood (lhd; actually, the composite likelihood approximation; see the paper) using the score function in the word2vec class. We get the likelihood for each sentence in the first test review, then convert to a probability over star ratings. Every sentence in the review is evaluated separately and the final star rating of the review is an average vote of all the sentences. This is all in the following handy wrapper.
Step11: Test set example
As an example, we apply the inversion on the full test set. | Python Code:
# ### uncomment below if you want...
# ## ... copious amounts of logging info
# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# rootLogger = logging.getLogger()
# rootLogger.setLevel(logging.INFO)
# ## ... or auto-reload of gensim during development
# %load_ext autoreload
# %autoreload 2
Explanation: Deep Inverse Regression with Yelp reviews
In this note we'll use gensim to turn the Word2Vec machinery into a document classifier, as in Document Classification by Inversion of Distributed Language Representations from ACL 2015.
Data and prep
First, download to the same directory as this note the data from the Yelp recruiting contest on kaggle:
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_training_set.zip
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_test_set.zip
You'll need to sign up for Kaggle.
You can then unpack the data and grab the information we need.
Tutorial Requirements:
1. gensim (and all of its own requirements)
1. pandas
1. matplotlib
End of explanation
import re
contractions = re.compile(r"'|-|\"")
# all non alphanumeric
symbols = re.compile(r'(\W+)', re.U)
# single character removal
singles = re.compile(r'(\s\S\s)', re.I|re.U)
# separators (any whitespace)
seps = re.compile(r'\s+')
# cleaner (order matters)
def clean(text):
text = text.lower()
text = contractions.sub('', text)
text = symbols.sub(r' \1 ', text)
text = singles.sub(' ', text)
text = seps.sub(' ', text)
return text
# sentence splitter
alteos = re.compile(r'([!\?])')
def sentences(l):
l = alteos.sub(r' \1 .', l).rstrip("(\.)*\n")
return l.split(".")
Explanation: First, we define a super simple parser
End of explanation
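# A tiny usage example of the parser (the review text here is made up for illustration):
example = "The BBQ was great! Par-boiled ribs? Not so much."
print([clean(s).split() for s in sentences(example)])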
from zipfile import ZipFile
import json
def YelpReviews(label):
with ZipFile("yelp_%s_set.zip"%label, 'r') as zf:
with zf.open("yelp_%s_set/yelp_%s_set_review.json"%(label,label)) as f:
for line in f:
if type(line) is bytes:
line = line.decode('utf-8')
rev = json.loads(line)
yield {'y':rev['stars'],\
'x':[clean(s).split() for s in sentences(rev['text'])]}
Explanation: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
End of explanation
next(YelpReviews("test"))
Explanation: For example:
End of explanation
revtrain = list(YelpReviews("training"))
print(len(revtrain), "training reviews")
## and shuffle just in case they are ordered
import numpy as np
np.random.shuffle(revtrain)
Explanation: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
End of explanation
def StarSentences(reviews, stars=[1,2,3,4,5]):
for r in reviews:
if r['y'] in stars:
for s in r['x']:
yield s
Explanation: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
End of explanation
from gensim.models import Word2Vec
import multiprocessing
## create a w2v learner
basemodel = Word2Vec(
workers=multiprocessing.cpu_count(), # use your cores
iter=3, # iter = sweeps of SGD through the data; more is better
hs=1, negative=0 # we only have scoring for the hierarchical softmax setup
)
print(basemodel)
Explanation: Word2Vec modeling
We fit out-of-the-box Word2Vec
End of explanation
basemodel.build_vocab(StarSentences(revtrain))
Explanation: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
End of explanation
from copy import deepcopy
starmodels = [deepcopy(basemodel) for i in range(5)]
for i in range(5):
slist = list(StarSentences(revtrain, [i+1]))
print(i+1, "stars (", len(slist), ")")
starmodels[i].train( slist, total_examples=len(slist) )
Explanation: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
End of explanation
# docprob takes two lists:
#   * docs: a list of documents, each of which is a list of sentences
#   * mods: the candidate word2vec models (one per class, i.e. star rating)
# It returns the array of class probabilities. Everything is done in-memory.
import pandas as pd # for quick summing within doc
def docprob(docs, mods):
# score() takes a list [s] of sentences here; could also be a sentence generator
sentlist = [s for d in docs for s in d]
# the log likelihood of each sentence in this review under each w2v representation
llhd = np.array( [ m.score(sentlist, len(sentlist)) for m in mods ] )
# now exponentiate to get likelihoods,
lhd = np.exp(llhd - llhd.max(axis=0)) # subtract row max to avoid numeric overload
# normalize across models (stars) to get sentence-star probabilities
prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() )
# and finally average the sentence probabilities to get the review probability
prob["doc"] = [i for i,d in enumerate(docs) for s in d]
prob = prob.groupby("doc").mean()
return prob
Explanation: Inversion of the distributed representations
At this point, we have 5 different word2vec language representations. Each 'model' has been trained conditional (i.e., limited to) text from a specific star rating. We will apply Bayes rule to go from p(text|stars) to p(stars|text).
For any new sentence we can obtain its likelihood (lhd; actually, the composite likelihood approximation; see the paper) using the score function in the word2vec class. We get the likelihood for each sentence in the first test review, then convert to a probability over star ratings. Every sentence in the review is evaluated separately and the final star rating of the review is an average vote of all the sentences. This is all in the following handy wrapper.
End of explanation
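# Before scoring the whole test set, a quick sanity check of the wrapper on a single training
# review (a minimal sketch using the in-memory training data from above):
one_review = revtrain[0]
print("true stars: %d" % one_review['y'])
print(docprob([one_review['x']], starmodels))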
# read in the test set
revtest = list(YelpReviews("test"))
# get the probs (note we give docprob a list of lists of words, plus the models)
probs = docprob( [r['x'] for r in revtest], starmodels )
import matplotlib
%matplotlib inline
probpos = pd.DataFrame({"out-of-sample prob positive":probs[[3,4]].sum(axis=1),
"true stars":[r['y'] for r in revtest]})
probpos.boxplot("out-of-sample prob positive",by="true stars", figsize=(12,5))
Explanation: Test set example
As an example, we apply the inversion on the full test set.
End of explanation |
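# A rough out-of-sample accuracy for a coarse positive/negative split (a small sketch:
# treat 4-5 stars as positive and threshold the review probability at 0.5):
predicted_pos = probpos["out-of-sample prob positive"] > 0.5
actual_pos = probpos["true stars"] >= 4
print("coarse accuracy: %.3f" % (predicted_pos == actual_pos).mean())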
3,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CMSIS-DSP Python package example
Installing and importing the needed packages
The following command may take some time to execute
Step1: Creating the signal
Conversion functions to use CMSIS-DSP FFTs with complex numbers
CMSIS-DSP FFTs process arrays of complex numbers which are represented in memory as arrays of floats. There is no specific data type for complex numbers.
The Python array contains complex numbers. They need to be converted to a sequence of real numbers.
The two functions below are doing those conversions.
Step2: You can play with the slider to change the frequency of the signal.
Don't forget to reconvert the signal to a Q15 format if you want to test the Q15 FFT.
Step3: Using the F32 CMSIS-DSP FFT
The arm_cfft_instance_f32 is created and initialized.
Step4: The log magnitude of the FFT is computed and displayed.
Step5: Using the Q15 CMSIS-DSP FFT
The signal must be converted to Q15 each time it is changed with the slider above.
Step6: The arm_cfft_instance_q15 is created and initialized | Python Code:
!pip install cmsisdsp
import numpy as np
import cmsisdsp as dsp
import cmsisdsp.fixedpoint as f
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed, interact_manual,FloatSlider
import ipywidgets as widgets
Explanation: CMSIS-DSP Python package example
Installing and importing the needed packages
The following command may take some time to execute: the full cmsisdsp library is built.
End of explanation
# Array of complex numbers as an array of real numbers
def imToReal1D(a):
ar=np.zeros(np.array(a.shape) * 2)
ar[0::2]=a.real
ar[1::2]=a.imag
return(ar)
# Array of real numbers as an array of complex numbers
def realToIm1D(ar):
return(ar[0::2] + 1j * ar[1::2])
nb = 512
signal = None
Explanation: Creating the signal
Conversion functions to use CMSIS-DSP FFTs with complex numbers
CMSIS-DSP FFTs process arrays of complex numbers which are represented in memory as arrays of floats. There is no specific data type for complex numbers.
The Python array contains complex numbers. They need to be converted to a sequence of real numbers.
The two functions below are doing those conversions.
End of explanation
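# A quick round-trip check of the two helpers (a small sketch): converting to the interleaved
# real layout and back should reproduce the original complex samples.
test_sig = np.random.randn(8) + 1j * np.random.randn(8)
print(np.allclose(realToIm1D(imToReal1D(test_sig)), test_sig))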
@interact(f=FloatSlider(100,min=10,max=150,step=20,continuous_update=False))
def gen_signal(f=100):
global signal
global nb
signal = np.sin(2 * np.pi * np.arange(nb)*f / nb) + 0.1*np.random.randn(nb)
plt.plot(signal)
plt.show()
Explanation: You can play with the slider to change the frequency of the signal.
Don't forget to reconvert the signal to a Q15 format if you want to test the Q15 FFT.
End of explanation
# CMSIS-DSP FFT F32 initialization
cfftf32=dsp.arm_cfft_instance_f32()
status=dsp.arm_cfft_init_f32(cfftf32,nb)
print(status)
Explanation: Using the F32 CMSIS-DSP FFT
The arm_cfft_instance_f32 is created and initialized.
End of explanation
# Re-evaluate this each time you change the signal
signalR = imToReal1D(signal)
resultR = dsp.arm_cfft_f32(cfftf32,signalR,0,1)
resultI = realToIm1D(resultR)
mag=20 * np.log10(np.abs(resultI))
plt.plot(mag[1:nb//2])
plt.show()
Explanation: The log magnitude of the FFT is computed and displayed.
End of explanation
# Convert the signal to Q15 and viewed as a real array
signalR = imToReal1D(signal)
signalRQ15 = f.toQ15(signalR)
Explanation: Using the Q15 CMSIS-DSP FFT
The signal must be converted to Q15 each time it is changed with the slider above.
End of explanation
# Initialize the Q15 CFFT
cfftq15 = dsp.arm_cfft_instance_q15()
status = dsp.arm_cfft_init_q15(cfftq15,nb)
print(status)
# Compute the Q15 CFFT and convert back to float and complex array
resultR = dsp.arm_cfft_q15(cfftq15,signalRQ15,0,1)
resultR = f.Q15toF32(resultR)
resultI = realToIm1D(resultR)*nb
mag = 20 * np.log10(np.abs(resultI))
plt.plot(mag[1:nb//2])
plt.show()
Explanation: The arm_cfft_instance_q15 is created and initialized
End of explanation |
3,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flow Distribution for the Two Treatment Trains
Problem Definition
The two 60 L/s trains need a flow control system that splits the plant flow evenly between them and enables fine-grained control. This distribution system should keep flow control for each train independent - such that decreasing one train's flow doesn't increase the other's.
Existing Conduction Line
The existing conduction line is composed of two independent pipes of 4" and 6" size. Presumably, one was added after the other in an attempt to augment the flow rate. Two pressure breaks, one for each line, are located 30 meters higher in elevation and 455 meters away from the proposed plant site. By definition, these two pressure breaks have a free surface, and therefore the difference in elevation between the pressure break and the plant's entrance tank represents the maximum available head for delivering, splitting and controlling the flow. The diagram below summarizes the existing system components
Step1: Changing the Pipes
The headloss in both the 4" and 6" lines is too great to handle the {{flow_branch}} flow rate. Therefore larger diameter pipe needs to be installed to reduce the headloss in the conduction line(s). There are multiple options for how to both increase the conduction line capacitiy and split the flow efficiently
Step2: Using a 10 inch or 12 inch pipe would potentially leave enough remaining available headloss to use for flow control.
Flow Distribution
Now the question is about flow distribution. The effect of shutting off one train potentially effects the flow rate of the other. Determining the extent of this effect is a flow distribution problem, much like those done throughout plant design. By studying the various flow paths, one can determine the effect of shutting off a flow path during the worst case scenario. There are several steps to designing the optimal system. First, the goal is to reduce the headloss in the shared line, because that headloss changes when one branch is turned off and the flow rate is halved. As the shared headloss reduces, the leftover headloss is taken up by the remaining line, increasing train flow. The steps to define the optimal pipe configuration are as follows
Step3: 3 m of head is lost to the entrance in order to supply the full plant flow. This is a significant portion of the full head available, and is in the shared headloss section, meaning it will negatively impact the flow distribution cross-talk error. Increasing the two lines to 8" would decrease the headloss substantially, and not require too much work
Step4: Now the required headloss is less than 1 m, wich will help reduce shared headloss dramatically.
Total Shared (Trunk) Headloss
Now a conservative estimate of the headloss from the main conduction line is added to form the total shared headloss. The remaining head is used in the branch lengths. Using so much head to drive the flow through each branch leads to using smaller pipes and thus smaller gate valves. The following calculations prove that a 6" branch pipe diameter can be used and still achieve the full train flow rate.
Step5: There is an extreme difference in headloss between the 4" and 6" option. The 6" branch diameter would not have enough headloss to enable fine-grain control of flow rate because the valve has to be at least 3/4 closed to even begin reducing the flow rate below the full branch flow rate. Therefore, the size of a short section with the gate valve could be reduced to 4". The following calculation shows the max headloss of the proposed system
Step6: The headloss table reveals that a 4" gate valve will yield a reasonable resolution for the gate valve position. This is further expounded upon in the flow row, that shows a single branch will have favorable flow distribution across the gate valve's range.
3. System Error (Cross-Talk Effect)
Step7: Confirming Exit Line Flow Rates | Python Code:
from aide_design.play import *
from IPython.display import display
pipe.ID_sch40 = np.vectorize(pipe.ID_sch40)
################## Constants #################
flow_branch = 60 *u.L/u.s
flow_full = flow_branch * 2
nd_pipe_train_4 = 4 *u.inch
sdr_pipe = 17
nd_pipe_train_6 = 6 * u.inch
# these measurements are from Minty's notebook TODO: change to reflect topography study
l_total = 455.06 *u.m
height_pressure_break_4 = 1090.12 * u.m
height_pressure_break_6 = 1091.29 * u.m
# this measurement is from AutoCAD TODO: change to reflect topography study
height_plant = 1058 * u.m
PVC_ROUGHNESS = mat.PIPE_ROUGH_PVC
NU_WATER = exp.NU_WATER
# a conservative estimate for k TODO: change to reflect actual accessories
k_conduction_line = exp.K_MINOR_EL90 * 7
# Getting function inputs into simple form
head_4 = height_pressure_break_4 - height_plant
head_6 = height_pressure_break_6 - height_plant
id_4 = pipe.ID_SDR(nd_pipe_train_4, sdr_pipe)
id_6 = pipe.ID_SDR(nd_pipe_train_6, sdr_pipe)
#################### headloss calculations ############################
headloss_train_4 = pc.headloss(flow_branch, id_4, l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
headloss_train_6 = pc.headloss(flow_branch, id_6, l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
print("Headloss in 4 inch line: " + str(headloss_train_4) + " and available head is: " + str(head_4))
print("Headloss in 6 inch line: " + str(headloss_train_6) + " and available head is: " + str(head_6))
##################### total flow calculation ###########################
flow_4 = pc.flow_pipe(id_4,head_4,l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
flow_6 = pc.flow_pipe(id_6,head_6,l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
flow_actual_with_two_lines = (flow_4 + flow_6).to(u.L/u.s)
print("Flow to the plant with both lines and available head is: " + str(flow_actual_with_two_lines))
Explanation: Flow Distribution for the Two Treatment Trains
Problem Definition
The two 60 L/s trains need a flow control system that splits the plant flow evenly between them and enables fine-grained control. This distribution system should keep flow control for each train independent - such that decreasing one train's flow doesn't increase the other's.
Existing Conduction Line
The existing conduction line is composed of two independent pipes of 4" and 6" size. Presumably, one was added after the other in an attempt to augment the flow rate. Two pressure breaks, one for each line, are located 30 meters higher in elevation and 455 meters away from the proposed plant site. By definition, these two pressure breaks have a free surface, and therefore the difference in elevation between the pressure break and the plant's entrance tank represents the maximum available head for delivering, splitting and controlling the flow. The diagram below summarizes the existing system components:
<img src="https://docs.google.com/drawings/d/e/2PACX-1vTYoz334ZI_fy6hpKUyfmm7Ap24bQDkuBVZXC4JJvACmSd-VeLFAUI5RsWscA-FHlxnKEQmn-Kz-H0U/pub?w=1056&h=816">
Use the Existing 4" and 6" Lines
The simplest solution is to use the current pressure break as the flow distribution system with the two existing lines (4" and 6") as the incoming lines for each train. To make sure this will work, we need to ensure the 4" line can handle the full 60 L/s
End of explanation
# Make a table with available pipe sizes
pipe_sdr = 26
pipe_diameters_nd = [6,8,10,12]#*u.inch
pipe_diameters_id = pipe.ID_sch40(pipe_diameters_nd)
headloss_various_diameters = pc.headloss(flow_full, pipe_diameters_id*u.inch,
l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
df = pd.DataFrame(np.array(headloss_various_diameters.magnitude), index=pipe_diameters_id, columns=['Headloss (m)'])
#Graph headloss for different pipe diameters
df.index.name = 'Pipe Diameter (ID, inch)'
df.name = 'Total Headloss Through Various Pipe Diameters'
df.plot().set(ylabel="Headloss (m)", title = df.name)
plt.show()
display(df)
Explanation: Changing the Pipes
The headloss in both the 4" and 6" lines is too great to handle the {{flow_branch}} flow rate. Therefore larger diameter pipe needs to be installed to reduce the headloss in the conduction line(s). There are multiple options for how to both increase the conduction line capacity and split the flow efficiently:
Distribution box at the plant with one large conduction line running from the existing plants.
Distribution box at the location of the current pressure breaks, with two lines running to the plant, one for each train.
Combine the flow with Ys from the two current pressure breaks into a large line, and split at the plant into each train
The first two options involve the construction of a distribution box, an unnecessary, more complex and expensive solution. All options will use two gate valves (one for each train) at each train entrance tank for fine-grain control of each flow rate. The third option will be investigated first, as it is the simplest to construct, the least expensive, and has no functional drawbacks.
To size the main trunk line, an appropriate target headloss must be chosen. Below is a graph that lists the headloss at different pipe sizes given the parameters of this plant:
End of explanation
id_12 = pipe.ID_SDR(12, sdr_pipe)
# conservative minor loss coefficient in both lines pressure break to tee:
k_value_pressure_break_to_tee_6_inch = exp.K_MINOR_PIPE_ENTRANCE + \
exp.K_MINOR_90 + k.k_value_expansion(id_6, id_12, flow_branch)
k_value_pressure_break_to_tee_4_inch = exp.K_MINOR_PIPE_ENTRANCE + \
exp.K_MINOR_90 + exp.K_MINOR_EL45 + k.k_value_expansion(id_4, id_12, flow_branch)
print("k value in 6 inch line: " + str(k_value_pressure_break_to_tee_6_inch))
print('k value in 4 inch line: ' + str(k_value_pressure_break_to_tee_4_inch))
# conservative pipe lengths from pressure break to tee:
l_pressure_break_to_tee_6_inch = 4 * u.m
l_pressure_break_to_tee_4_inch = 4 * u.m
# determine headloss through both 4" and 6" pipes by defining headloss range:
headloss_range_pressure_break_to_tee = np.linspace(0.1,10,100) * u.m
# calculate the added flow rates for all the headlosses in the range:
flow_range_pressure_break_to_tee = pc.flow_pipe(id_4, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_4_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_4_inch) + \
pc.flow_pipe(id_6, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_6_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_6_inch)
# graph of flow rates for various flow rates:
df = pd.DataFrame(np.array(flow_range_pressure_break_to_tee.to(u.L/u.s)),
index=np.array(headloss_range_pressure_break_to_tee),
columns = ['4" and 6" lines'])
df.index.name = 'Headloss (m)'
df.columns.name = 'flow (L/s)'
df.name = 'Headloss v. Flow rate for Pressure Break to Tee'
df.plot().set(ylabel=df.columns.name, title=df.name)
plt.show()
Explanation: Using a 10 inch or 12 inch pipe would potentially leave enough remaining available headloss to use for flow control.
Flow Distribution
Now the question is about flow distribution. The effect of shutting off one train potentially affects the flow rate of the other. Determining the extent of this effect is a flow distribution problem, much like those done throughout plant design. By studying the various flow paths, one can determine the effect of shutting off a flow path during the worst case scenario. There are several steps to designing the optimal system. First, the goal is to reduce the headloss in the shared line, because that headloss changes when one branch is turned off and the flow rate is halved. As the shared headloss reduces, the leftover headloss is taken up by the remaining line, increasing train flow. The steps to define the optimal pipe configuration are as follows:
Pipe Length Geometry: make a guess for the ideal pipe geometry, attempting to minimize shared headloss and maximize train branch headloss.
Headloss Calculations: determine minor and major losses throughout the system.
System Error (Cross-Talk Effect): calculate the effect of cross-talk over a range of flow rates.
1. Pipe Length Geometry
The initial pipe design is based on limited knowledge of the site, and is supposed to convey a conservative guess for the condction and distribution line geometry. When a full topography of the site and the two upstream pressure breaks, a more precise design will be made and analyzed. The video below is a rendering of the preliminary design of the conduction and train-distribution system:
In summary, the proposed plan is to augment both lines running from the pressure break to 8" lines. The two lines will immediately plumb into a main 12" conduction line. The main line will run 455 m to the plant site, where it splits at a tee into two 4" lines. The following calculations ensure the cross-talk between the two trains are minimized.
2. Headloss Calculations
The headloss in the various components of the system is critical in calculating the effect of cross-talk.
Headloss From the Pressure Break to the Tee
The first section of the conduction line is where the two smaller lines join the 10" conduction line. To calculate the headloss through the two pipes, an iterative approach is used. First, the flowrates for various headlosses through the 6" and 4" lines combined are calculated. Because the head of the 4" and 6" line is known to be the same at the Tee, it is assumed that the headloss is the same (pressure breaks have the same free surface.) When these two added flow rates together equal the full plant flow rate, the resulting headloss through both pipes represent the first losses in the distribution system:
End of explanation
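# Rather than reading the required headloss off the graph by eye, it can be interpolated
# directly (a small sketch using the arrays computed above):
headloss_pressure_break_to_tee_est = np.interp(flow_full.to(u.L/u.s).magnitude,
                                               flow_range_pressure_break_to_tee.to(u.L/u.s).magnitude,
                                               headloss_range_pressure_break_to_tee.magnitude) * u.m
print('Estimated headloss from pressure break to tee: ' + str(headloss_pressure_break_to_tee_est))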
# id of 8" pipe
diam_8 = pipe.ID_SDR(8, sdr_pipe)
# calculate the added flow rates for all the headlosses in the range:
flow_range_pressure_break_to_tee = pc.flow_pipe(diam_8, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_4_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_4_inch) + \
pc.flow_pipe(diam_8, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_6_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_6_inch)
# dataframe of flow rates for various flow rates:
df = df.assign(flow_8_inch=np.array(flow_range_pressure_break_to_tee.to(u.L/u.s)))
df.plot()
plt.show()
Explanation: 3 m of head is lost to the entrance in order to supply the full plant flow. This is a significant portion of the full head available, and is in the shared headloss section, meaning it will negatively impact the flow distribution cross-talk error. Increasing the two lines to 8" would decrease the headloss substantially, and not require too much work:
End of explanation
# set a conservative guess for a pressure break to tee headloss determined above:
headloss_pressure_break_to_tee = 1 * u.m
# headloss in the combined trunk:
headloss_conduction_line = pc.headloss(flow_full, 12*u.inch,
l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
# total shared headloss:
headloss_shared = headloss_conduction_line + headloss_pressure_break_to_tee
# set the headloss available as the difference in height from pressure break to plant entrance:
head_available_total = min(height_pressure_break_4, height_pressure_break_6) - height_plant
head_available_for_trains = head_available_total - headloss_shared
print('The total shared headloss is: ' + str(headloss_shared))
print('The remaining headloss available for each train: ' + str(head_available_for_trains))
# calculate the headloss for various pipe sizes for a singe train branch:
pipe_diameters_nd_branch = [3,4,6,8]*u.inch
pipe_diameters_id_branch = pipe.ID_sch40(pipe_diameters_nd_branch)*u.inch
# calculate minor losses:
k_value_tee_to_plant_entrance = k.k_value_reduction(id_12, pipe_diameters_id_branch, flow_branch)\
+ exp.K_MINOR_90*4 + exp.K_MINOR_GATE_VALVE
# calculate length:
l_branch = 5 * u.m + 5 * u.m + 2 * u.m
headloss_branch = pc.headloss(flow_branch, pipe_diameters_id_branch, l_branch,
exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_value_tee_to_plant_entrance)
pd.DataFrame(np.array([np.array(pipe_diameters_id_branch),np.array(headloss_branch)]),
columns=pipe_diameters_nd_branch, index=['Pipe Size (inch)', 'Headloss (m)'])
Explanation: Now the required headloss is less than 1 m, which will help reduce shared headloss dramatically.
Total Shared (Trunk) Headloss
Now a conservative estimate of the headloss from the main conduction line is added to form the total shared headloss. The remaining head is used in the branch lengths. Using so much head to drive the flow through each branch leads to using smaller pipes and thus smaller gate valves. The following calculations prove that a 6" branch pipe diameter can be used and still achieve the full train flow rate.
End of explanation
# k values for the gate valve at various positions
gate_valve_positions = [1, 0.75, 0.5, 0.25]
k_values_gate_valve = [0.17, 0.9, 4.5, 24]
gate_valve_pipe_section_guess = 10*u.inch
# NOTE: the original cell referenced an undefined loop variable i, diam_4 and l_gate_orifice;
# here the 4" ID from above is reused and a 0.3 m valve-section length is assumed.
diam_4 = id_4
l_gate_orifice = 0.3 * u.m
# model the short 4" valve section as a thick orifice plus the gate valve k at each position
k_value_gate_valve_section = [k.k_value_orifice(gate_valve_pipe_section_guess, diam_4, l_gate_orifice, flow_branch)
                              + k_valve for k_valve in k_values_gate_valve]
headloss_various_positions_gate_valve = [pc.headloss(flow_branch, diam_4, l_gate_orifice,
                                                     NU_WATER, PVC_ROUGHNESS, k_section).magnitude
                                         for k_section in k_value_gate_valve_section]
pd.options.display.float_format = '{:,.1f}'.format
# headloss_whole_system_various_flow_rates = pc.flow_pipe(diam_12,l_tee_to_plant_entrance, flow_branch,)
pd.DataFrame(np.array([k_value_gate_valve_section, headloss_various_positions_gate_valve]),
             columns=gate_valve_positions,
             index=['Gate valve k values for different positions (1 is fully open, 0 fully closed)','headloss (m)'])
# NOTE: pipe_length_trains, k_pipe, diam_12 and diam_6 are not defined in this excerpt; the
# equivalent values defined above (l_total, k_conduction_line, id_12, id_6) are substituted,
# with the branch k taken for the 6" option and the gate valve fully open.
l_pipes_final_design = [l_pressure_break_to_tee_4_inch, l_total, l_branch, l_gate_orifice]
id_pipes_final_design = [diam_8, id_12, id_6, diam_4]
k_pipes_final_design = [k_value_pressure_break_to_tee_6_inch, k_conduction_line,
                        k_value_tee_to_plant_entrance[2], k_value_gate_valve_section[0]]
pipeline.flow_pipeline(id_pipes_final_design, l_pipes_final_design, k_pipes_final_design)
Explanation: There is an extreme difference in headloss between the 4" and 6" option. The 6" branch diameter would not have enough headloss to enable fine-grain control of flow rate because the valve has to be at least 3/4 closed to even begin reducing the flow rate below the full branch flow rate. Therefore, the size of a short section with the gate valve could be reduced to 4". The following calculation shows the max headloss of the proposed system:
Gate Valve Reduction Headloss
A 4" gate valve is proposed to simultaneously increase headloss and decrease price. To calculate the headloss used by the new configuration, flow through the reduced gate valve is modeled as a thick orifice with an additional coefficient for the valve itself. Our goal is to determine what length the 4" valve section should be to enable fine grain control with the gate valve. This is done by trying to use the remaining headloss in this section.
End of explanation
# Calculating the flow throughout the whole system with only one train on:
# pc.flow_pipe()
Explanation: The headloss table reveals that a 4" gate valve will yield a reasonable resolution for the gate valve position. This is further expounded upon in the flow row, which shows that a single branch will have favorable flow distribution across the gate valve's range.
3. System Error (Cross-Talk Effect)
End of explanation
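# A rough sketch of the cross-talk check outlined above: with one train valved off and the other
# gate valve fully open (worst case), iterate between the trunk headloss at the single-train flow
# and the head left for the open 4" branch until the branch flow converges.
flow_one_train = flow_branch
for _ in range(20):
    headloss_trunk_one_train = headloss_pressure_break_to_tee + \
        pc.headloss(flow_one_train, id_12, l_total, NU_WATER, PVC_ROUGHNESS, k_conduction_line)
    head_left_for_branch = head_available_total - headloss_trunk_one_train
    flow_one_train = pc.flow_pipe(pipe_diameters_id_branch[1], head_left_for_branch, l_branch,
                                  NU_WATER, PVC_ROUGHNESS, k_value_tee_to_plant_entrance[1])
print('Flow through a single open train: ' + str(flow_one_train.to(u.L/u.s)))
print('Relative increase over the 60 L/s target: ' + str((flow_one_train / flow_branch - 1).to(u.dimensionless)))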
height_pressure_break_after_plant_4 = 1008 * u.m
height_pressure_break_after_plant_6 = 1009 * u.m
#################### headloss calculations ############################
# a conservative estimate for k TODO: change to reflect actual accessories
k_exit_line = exp.K_MINOR_EL90 * 7
# dimensions derived from the topography study
d_z = 45.83 * u.m
d_x = 444.77 *u.m
d_y = 372.49 * u.m
length_exit_line = (d_z**2 + d_x**2 + d_y**2)**0.5
head_exit_line = d_z
print(length_exit_line)
headloss_exit_4 = pc.headloss(flow_branch, id_4, length_exit_line,NU_WATER,PVC_ROUGHNESS,k_exit_line)
headloss_exit_6 = pc.headloss(flow_branch, id_6, length_exit_line,NU_WATER,PVC_ROUGHNESS,k_exit_line)
print("Headloss in 4 inch line: {} and available head is: {}".format(headloss_exit_4,head_exit_line))
print("Headloss in 6 inch line: " + str(headloss_exit_6) + " and available head is: " + str(head_exit_line))
##################### total flow calculation ###########################
flow_exit_4 = pc.flow_pipe(id_4,head_exit_line,length_exit_line,NU_WATER,PVC_ROUGHNESS,k_exit_line)
flow_exit_6 = pc.flow_pipe(id_6,head_exit_line,length_exit_line, NU_WATER,PVC_ROUGHNESS,k_exit_line)
flow_actual_exit_with_two_lines = (flow_exit_4 + flow_exit_6).to(u.L/u.s)
print("Flow in the 4 inch line is: "+str(flow_exit_4.to(u.L/u.s)))
print("Flow in the 6 inch line is: "+str(flow_exit_6.to(u.L/u.s)))
print("Flow to the plant with both lines and available head is: " + str(flow_actual_exit_with_two_lines))
Explanation: Confirming Exit Line Flow Rates
End of explanation |
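# Quick check (a small sketch): do the two existing exit lines carry the full 120 L/s plant flow?
print("Exit lines can carry the full plant flow: " + str(flow_actual_exit_with_two_lines >= flow_full))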
3,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
mpl_toolkits
In addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to let matplotlib users produce 3D plots with the same simplicity as 2D plots.
Step1: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deal with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, each with its own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
Step2: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
Step6: And finally, as a nice teaser of what else axes_grid1 can do... | Python Code:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D, axes3d
fig, ax = plt.subplots(1, 1, subplot_kw={'projection': '3d'})
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
Explanation: mpl_toolkits
In addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to let matplotlib users produce 3D plots with the same simplicity as 2D plots.
End of explanation
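As a small optional follow-up (a sketch, not part of the original tutorial), the same test data can be rendered as a shaded surface instead of a wireframe; this assumes plt and the X, Y, Z arrays from the cell above are still in scope.
from matplotlib import cm
fig2, ax2 = plt.subplots(subplot_kw={'projection': '3d'})
ax2.plot_surface(X, Y, Z, rstride=10, cstride=10, cmap=cm.coolwarm, linewidth=0)
plt.show()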
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure()
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols = (2, 2),
axes_pad = 0.2,
share_all=True,
label_mode = "L", # similar to "label_outer"
cbar_location = "right",
cbar_mode="single",
)
extent = (-3,4,-4,3)
for i in range(4):
im = grid[i].imshow(Z, extent=extent, interpolation="nearest")
grid.cbar_axes[0].colorbar(im)
plt.show()
Explanation: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deal with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, each with its own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
End of explanation
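Another common way axes_grid1 handles colorbar space is make_axes_locatable, which carves a colorbar axes out of an existing Axes instead of stealing room from the figure; a minimal sketch with random data (assumed here purely for illustration):
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(10, 10), interpolation="nearest")
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)  # space is taken from ax itself
fig.colorbar(im, cax=cax)
plt.show()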
%load http://matplotlib.org/mpl_examples/axes_grid/demo_parasite_axes2.py
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
if 1:
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
offset = 60
new_fixed_axis = par2.get_grid_helper().new_fixed_axis
par2.axis["right"] = new_fixed_axis(loc="right",
axes=par2,
offset=(offset, 0))
par2.axis["right"].toggle(all=True)
host.set_xlim(0, 2)
host.set_ylim(0, 2)
host.set_xlabel("Distance")
host.set_ylabel("Density")
par1.set_ylabel("Temperature")
par2.set_ylabel("Velocity")
p1, = host.plot([0, 1, 2], [0, 1, 2], label="Density")
p2, = par1.plot([0, 1, 2], [0, 3, 2], label="Temperature")
p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")
par1.set_ylim(0, 4)
par2.set_ylim(1, 65)
host.legend()
host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())
plt.draw()
plt.show()
#plt.savefig("Test")
Explanation: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
End of explanation
%load http://matplotlib.org/mpl_toolkits/axes_grid/examples/demo_floating_axes.py
from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes
import numpy as np
import mpl_toolkits.axisartist.angle_helper as angle_helper
from matplotlib.projections import PolarAxes
from mpl_toolkits.axisartist.grid_finder import FixedLocator, MaxNLocator, \
DictFormatter
def setup_axes1(fig, rect):
    """A simple one."""
tr = Affine2D().scale(2, 1).rotate_deg(30)
grid_helper = floating_axes.GridHelperCurveLinear(tr, extremes=(0, 4, 0, 4))
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
aux_ax = ax1.get_aux_axes(tr)
grid_helper.grid_finder.grid_locator1._nbins = 4
grid_helper.grid_finder.grid_locator2._nbins = 4
return ax1, aux_ax
def setup_axes2(fig, rect):
    """
    With custom locator and formatter.
    Note that the extreme values are swapped.
    """
#tr_scale = Affine2D().scale(np.pi/180., 1.)
tr = PolarAxes.PolarTransform()
pi = np.pi
angle_ticks = [(0, r"$0$"),
(.25*pi, r"$\frac{1}{4}\pi$"),
(.5*pi, r"$\frac{1}{2}\pi$")]
grid_locator1 = FixedLocator([v for v, s in angle_ticks])
tick_formatter1 = DictFormatter(dict(angle_ticks))
grid_locator2 = MaxNLocator(2)
grid_helper = floating_axes.GridHelperCurveLinear(tr,
extremes=(.5*pi, 0, 2, 1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None,
)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder=0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
def setup_axes3(fig, rect):
    """Sometimes, things like axis_direction need to be adjusted."""
# rotate a bit for better orientation
tr_rotate = Affine2D().translate(-95, 0)
# scale degree to radians
tr_scale = Affine2D().scale(np.pi/180., 1.)
tr = tr_rotate + tr_scale + PolarAxes.PolarTransform()
grid_locator1 = angle_helper.LocatorHMS(4)
tick_formatter1 = angle_helper.FormatterHMS()
grid_locator2 = MaxNLocator(3)
ra0, ra1 = 8.*15, 14.*15
cz0, cz1 = 0, 14000
grid_helper = floating_axes.GridHelperCurveLinear(tr,
extremes=(ra0, ra1, cz0, cz1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None,
)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# adjust axis
ax1.axis["left"].set_axis_direction("bottom")
ax1.axis["right"].set_axis_direction("top")
ax1.axis["bottom"].set_visible(False)
ax1.axis["top"].set_axis_direction("bottom")
ax1.axis["top"].toggle(ticklabels=True, label=True)
ax1.axis["top"].major_ticklabels.set_axis_direction("top")
ax1.axis["top"].label.set_axis_direction("top")
ax1.axis["left"].label.set_text(r"cz [km$^{-1}$]")
ax1.axis["top"].label.set_text(r"$\alpha_{1950}$")
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder=0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
if 1:
import matplotlib.pyplot as plt
fig = plt.figure(1, figsize=(8, 4))
fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)
ax1, aux_ax2 = setup_axes1(fig, 131)
aux_ax2.bar([0, 1, 2, 3], [3, 2, 1, 3])
#theta = np.random.rand(10) #*.5*np.pi
#radius = np.random.rand(10) #+1.
#aux_ax1.scatter(theta, radius)
ax2, aux_ax2 = setup_axes2(fig, 132)
theta = np.random.rand(10)*.5*np.pi
radius = np.random.rand(10)+1.
aux_ax2.scatter(theta, radius)
ax3, aux_ax3 = setup_axes3(fig, 133)
theta = (8 + np.random.rand(10)*(14-8))*15. # in degrees
radius = np.random.rand(10)*14000.
aux_ax3.scatter(theta, radius)
plt.show()
Explanation: And finally, as a nice teaser of what else axes_grid1 can do...
End of explanation |
3,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fish detection
In this notebook we address the problem of detecting and cropping the fishes from the data images. This is a computer vision problem with no easy solution. We considered different approaches, the most relevant ones being the following three
Step2: After importing the libraries we declare the functions that we need for the fish detection. As introduced above, we need a sliding window and a classifier to determine the probability of a frame containing a fish.
For the sliding window we will sweep the image with a square, capturing (not storing) the frames. Once the image is completely swept, it is resized smaller and swept again. The effect of this is that the image is swept with squares of different sizes. To do this we use the functions pyramid and sliding_window.
For the classifier we extract the HOG features, which give a characterization of the image, and we feed them to an SVM. The SVM has the advantage that it is very fast, so given the very large number of images to classify and the many frames extracted per image, it is the best option.
Given the poor results obtained when using a single SVM distinguishing between "fish" and "no fish", we adopt the following strategy for classification. We build seven two-class SVMs: one per class of fish (ALB, BET, DOL, LAG, SHARK, YFT) against "no fish", plus one SVM for all of these fish classes combined against "no fish". Then we select the frame that gives the highest probability for each class of fish, producing in this way six cropped images from the original image. Then, using the "fish" vs "no fish" SVM, we select out of the six candidate frames the one that is most similar to a fish. The code doing this will be seen later.
Step3: The last definitions to be made are the constant parameters used throughout this notebook. Due to the unbalanced amount of data, we oversample the classes (BET, DOL, LAG and SHARK).
Step4: Next, we generate the HOG arrays for all seven classifiers
Step5: As already explained, the output of the following code is six frames per image, stored in a folder called "buffer". The fact that we have the test data organized in classes does not influence either the detection or the classification; it just helps us to check the results.
Step6: Finally, we apply the "Fish" vs "No Fish" SVM in order to select the image which is most similar to a fish and store it in a folder called "fish_detected" | Python Code:
import os
import glob
import time
from SimpleCV import *
import scipy
import numpy as np
import tensorflow as tf
import collections
import matplotlib.pyplot as plt
import cv2
import imutils
from skimage.transform import pyramid_gaussian
import argparse
import cv2
from scipy import ndimage
from scipy.ndimage import sum as ndi_sum
from subprocess import check_output
from skimage.transform import pyramid_gaussian
from sklearn.svm import SVC
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Fish detection
In this notebook we address the problem of detecting and cropping the fishes from the data images. This is a computer vision problem with no easy solution. We considered different approaches, the most relevant ones being the following three:
- Passing the whole image to a CNN: As shown in the data exploration section, the images contain many different elements, with the fishes being just a small part of the image. Given the small amount of available training data this strategy gave very low performance, very close to random classification. Therefore, we discarded this possibility.
- Template matching: Most of the classes contain images in which the fishes are in similar positions. Thus, using several templates, a number of fishes can be successfully detected and cropped. However, this approach presents some important problems. Firstly, the test set does not necessarily contain images in which the fishes match any template, rendering the template matching useless. Secondly, in order to detect an acceptable number of fishes a large number of templates would be needed for each class, and every image would have to be compared against all the templates of all classes, which takes an extremely long time.
- Sliding window: The main idea is to sweep every image with a sliding window of different sizes and find the probability of each frame containing a fish. The frame with the highest probability is then selected and stored.
After trying all these possibilities we concluded that the sliding window offered the best trade-off between performance and computation time, so we took this option. In order to implement this system a classifier is necessary, which means that we need training data. However, the training data provided by the Kaggle competition is not adequate because it consists of whole images, not of frames or cropped fishes. For this reason we needed to modify these images in order to obtain cropped fishes.
To do this we first cut some images manually (20 of each class) and trained the sliding window with two classes, "fish" and "no fish". Then, we ran it on several of the original images, manually selected the ones that were well detected, and fed them to the "fish" class. The wrongly detected frames were given to the "no fish" class. We repeated this process iteratively many times, the performance increasing little by little every time. Nevertheless, this process was very time consuming, so to speed it up we used template matching. In the classes "LAG", "SHARK" and "DOL" the template matching was very effective. After a long process we obtained about 2500 images of cropped fishes and 8500 frames of "no fish". Some of the pieces of code used in this process, like the template matching, are not included in this or any other notebook since they were only used as a means to obtain the fish detector presented here.
End of explanation
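Since the template-matching scripts themselves are not included, the sketch below gives a minimal idea of how a single template could be matched and cropped with OpenCV; the file names and the 0.6 threshold are hypothetical, not the values actually used.
# Minimal template-matching sketch (assumed paths and threshold, for illustration only)
example_img = cv2.imread('../train/LAG/example_image.jpg', 0)       # hypothetical image
example_template = cv2.imread('../templates/lag_template.jpg', 0)   # hypothetical template
if example_img is not None and example_template is not None:
    t_h, t_w = example_template.shape
    res = cv2.matchTemplate(example_img, example_template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    if max_val > 0.6:  # assumed similarity threshold
        x0, y0 = max_loc
        crop = example_img[y0:y0 + t_h, x0:x0 + t_w]
        cv2.imwrite('../train_cut/LAG/example_crop.jpg', crop)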
################################## Functions definition ###################################
#These functions are inspired from http://www.pyimagesearch.com/
def pyramid(image, scale=1.5, minSize=(30, 30)):
# yield the original image
yield image
# keep looping over the pyramid
while True:
# compute the new dimensions of the image and resize it
w = int(image.shape[1] / scale)
image = imutils.resize(image, width=w)
# if the resized image does not meet the supplied minimum
# size, then stop constructing the pyramid
if image.shape[0] < minSize[1] or image.shape[1] < minSize[0]:
break
# yield the next image in the pyramid
yield image
def sliding_window(image, stepSize, windowSize):
# slide a window across the image
for y in xrange(0, image.shape[0], stepSize):
for x in xrange(0, image.shape[1], stepSize):
# yield the current window
yield (x, y, image[y:y + windowSize[1], x:x + windowSize[0]])
# HOG feature extraction (adapted from SimpleCV's findHOGFeatures)
def findHOGFeatures(self, n_divs=3, n_bins=6):
    """
    **SUMMARY**
    Get HOG (Histogram of Oriented Gradients) features from the image.
    **PARAMETERS**
    * *n_divs* - the number of divisions (cells).
    * *n_bins* - the number of orientation bins.
    **RETURNS**
    Returns the HOG vector in a numpy array
    """
n_HOG = n_divs * n_divs * n_bins # Size of HOG vector
HOG = np.zeros((n_HOG, 1)) # Initialize output HOG vector
# Apply sobel on image to find x and y orientations of the image
Icv = self.getNumpyCv2()
Ix = cv2.Sobel(Icv, ddepth=cv.CV_32F, dx=1, dy=0, ksize=3)
Iy = cv2.Sobel(Icv, ddepth=cv.CV_32F, dx=0, dy=1, ksize=3)
Ix = Ix.transpose(1, 0, 2)
Iy = Iy.transpose(1, 0, 2)
cellx = self.width / n_divs # width of each cell(division)
celly = self.height / n_divs # height of each cell(division)
# Area of image
img_area = self.height * self.width
#Range of each bin
BIN_RANGE = (2 * pi) / n_bins
angles = np.arctan2(Iy, Ix)
magnit = ((Ix ** 2) + (Iy ** 2)) ** 0.5
height, width = self.height, self.width
bins = (angles[...,0] % (2 * pi) / BIN_RANGE).astype(int)
x, y = np.mgrid[:width, :height]
x = x * n_divs // width
y = y * n_divs // height
labels = (x * n_divs + y) * n_bins + bins
index = np.arange(n_HOG)
HOG = ndi_sum(magnit[..., 0], labels, index)
return HOG / (height*width)
Explanation: After importing the libraries we declare the functions that we need for the fish detection. As introduced above, we need a sliding window and a classifier to determine the probability of a frame containing a fish.
For the sliding window we will sweep the image with a square, capturing (not storing) the frames. Once the image is completely swept, it is resized smaller and swept again. The effect of this is that the image is swept with squares of different sizes. To do this we use the functions pyramid and sliding_window.
For the classifier we extract the HOG features, which give a characterization of the image, and we feed them to an SVM. The SVM has the advantage that it is very fast, so given the very large number of images to classify and the many frames extracted per image, it is the best option.
Given the poor results obtained when using a single SVM distinguishing between "fish" and "no fish", we adopt the following strategy for classification. We build seven two-class SVMs: one per class of fish (ALB, BET, DOL, LAG, SHARK, YFT) against "no fish", plus one SVM for all of these fish classes combined against "no fish". Then we select the frame that gives the highest probability for each class of fish, producing in this way six cropped images from the original image. Then, using the "fish" vs "no fish" SVM, we select out of the six candidate frames the one that is most similar to a fish. The code doing this will be seen later.
End of explanation
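A quick usage sketch of the two generators defined above: sweeping a synthetic all-black frame and simply counting how many 100x100 windows are produced (nothing is classified here).
dummy_frame = np.zeros((400, 600, 3), dtype=np.uint8)
n_frames = 0
for resized in pyramid(dummy_frame, scale=1.5, minSize=(100, 100)):
    for (x, y, window) in sliding_window(resized, stepSize=64, windowSize=(100, 100)):
        if window.shape[0] != 100 or window.shape[1] != 100:
            continue
        n_frames += 1
print("Number of 100x100 frames generated from the dummy frame: " + str(n_frames))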
#Define some values and constants
fish_classes = ['ALB','BET','LAG','DOL','SHARK','YFT','NoF']
fish_classes_test = ['Fish','NoFish']
number_classes = len(fish_classes)
main_path_train = '../train_cut_oversample'
main_path_test = '../test'
extension = "*.jpg"
Explanation: The last definitions to be made are the constant parameters used throughout this notebook. Due to the unbalanced amount of data, we oversample the classes (BET, DOL, LAG and SHARK).
End of explanation
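The oversampling itself was done outside this notebook; a rough sketch of what that step could look like (the target count and directory layout below are assumptions) is simply copying minority-class crops until each class folder reaches a common size.
import shutil
minority_classes = ['BET', 'DOL', 'LAG', 'SHARK']   # classes mentioned above
target_count = 500                                  # assumed target number of examples per class
for cls in minority_classes:
    class_files = glob.glob(os.path.join(main_path_train, cls, "*.jpg"))
    n_copies = 0
    while len(class_files) > 0 and len(class_files) + n_copies < target_count:
        src = class_files[n_copies % len(class_files)]
        shutil.copy(src, src[:-4] + "_dup" + str(n_copies) + ".jpg")
        n_copies += 1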
############################## Get HOG of fish and No-fish cases ###################################
#One array per classifier
HOG = []
HOG_n = []
HOG_ALB = []
HOG_BET = []
HOG_DOL = []
HOG_LAG = []
HOG_SHARK = []
HOG_YFT = []
#Construct arrays
for classes in fish_classes:
    # Access the files
path_class = os.path.join(main_path_train,classes)
directory = os.path.join(path_class, extension)
files = glob.glob(directory)
for file in files:
new_img = cv2.imread(file)
H = findHOGFeatures(Image(new_img))
if classes != 'NoF':
HOG.append(H)
if classes == 'ALB':
HOG_ALB.append(H)
if classes == 'BET':
HOG_BET.append(H)
if classes == 'DOL':
HOG_DOL.append(H)
if classes == 'LAG':
HOG_LAG.append(H)
if classes == 'SHARK':
HOG_SHARK.append(H)
if classes == 'YFT':
HOG_YFT.append(H)
else:
HOG_n.append(H)
HOG = np.array(HOG)
HOG_ALB = np.array(HOG_ALB)
HOG_BET = np.array(HOG_BET)
HOG_DOL = np.array(HOG_DOL)
HOG_LAG = np.array(HOG_LAG)
HOG_SHARK = np.array(HOG_SHARK)
HOG_YFT = np.array(HOG_YFT)
HOG_n = np.array(HOG_n)
#Print shapes of the arrays
print HOG.shape
print HOG_ALB.shape
print HOG_BET.shape
print HOG_DOL.shape
print HOG_LAG.shape
print HOG_SHARK.shape
print HOG_YFT.shape
print HOG_n.shape
############################## Build and train the classifiers ###################################
#SVM with all classes against No Fish
X = np.concatenate((HOG, HOG_n),axis = 0)
class_one = np.ones(HOG.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_all = SVC(probability=True)
clf_all.fit(X, y)
#SVM: ALB vs No Fish
X = np.concatenate((HOG_ALB, HOG_n),axis = 0)
class_one = np.ones(HOG_ALB.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_ALB = SVC(probability=True)
clf_ALB.fit(X,y)
#SVM: BET vs No Fish
X = np.concatenate((HOG_BET, HOG_n),axis = 0)
class_one = np.ones(HOG_BET.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_BET = SVC(probability=True)
clf_BET.fit(X,y)
#SVM: DOL vs No Fish
X = np.concatenate((HOG_DOL, HOG_n),axis = 0)
class_one = np.ones(HOG_DOL.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_DOL = SVC(probability=True)
clf_DOL.fit(X,y)
#SVM: LAG vs No Fish
X = np.concatenate((HOG_LAG, HOG_n),axis = 0)
class_one = np.ones(HOG_LAG.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_LAG = SVC(probability=True)
clf_LAG.fit(X,y)
#SVM: SHARK vs No Fish
X = np.concatenate((HOG_SHARK, HOG_n),axis = 0)
class_one = np.ones(HOG_SHARK.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_SHARK = SVC(probability=True)
clf_SHARK.fit(X,y)
#SVM: YFT vs No Fish
X = np.concatenate((HOG_YFT, HOG_n),axis = 0)
class_one = np.ones(HOG_YFT.shape[0])
class_zero = np.zeros(HOG_n.shape[0])
y = np.concatenate((class_one, class_zero), axis=0)
clf_YFT = SVC(probability=True)
clf_YFT.fit(X,y)
Explanation: Next, we generate the HOG arrays for all seven classifiers
End of explanation
###################################### Apply 6 classifiers (buffer) ##################################
(winW, winH) = (100, 100)
#Apply classifier on test
directory = os.path.join(main_path_test, extension)
files = glob.glob(directory)
extension = "*.jpg"
for classes in fish_classes:
path_class = os.path.join(main_path_test,classes)
directory = os.path.join(path_class, extension)
files = glob.glob(directory)
for file in files:
image = cv2.imread(file)
prob_ALB = 0
prob_BET = 0
prob_DOL = 0
prob_LAG = 0
prob_SHARK = 0
prob_YFT = 0
# loop over the image pyramid
for resized in pyramid(image, scale=1.5):
# loop over the sliding window for each layer of the pyramid
for (x, y, window) in sliding_window(resized, stepSize=64, windowSize=(winW, winH)):
# if the window does not meet our desired window size, ignore it
if window.shape[0] != winH or window.shape[1] != winW:
continue
H = findHOGFeatures(Image(window))
#Predict probability for each class
p_ALB = clf_ALB.predict_proba([H])
p_BET = clf_BET.predict_proba([H])
p_DOL = clf_DOL.predict_proba([H])
p_LAG = clf_LAG.predict_proba([H])
p_SHARK = clf_SHARK.predict_proba([H])
p_YFT = clf_YFT.predict_proba([H])
#Store frame with the highest probability per class
if prob_ALB < p_ALB[0,1]:
prob_ALB = p_ALB[0,1]
wind_ALB = window
if prob_BET< p_BET[0,1]:
prob_BET = p_BET[0,1]
wind_BET = window
if prob_DOL<p_DOL[0,1]:
prob_DOL = p_DOL[0,1]
wind_DOL = window
if prob_LAG<p_LAG[0,1]:
prob_LAG = p_LAG[0,1]
wind_LAG = window
if prob_SHARK<p_SHARK[0,1]:
prob_SHARK = p_SHARK[0,1]
wind_SHARK = window
if prob_YFT<p_YFT[0,1]:
prob_YFT = p_YFT[0,1]
wind_YFT = window
j = 0
for wind in [wind_ALB,wind_BET,wind_DOL,wind_LAG,wind_SHARK,wind_YFT] :
f = str(os.path.basename(file))
cv2.imwrite("buffer/"+str(classes)+"/"+f[:-4]+"_"+str(j)+"0.jpg", wind)
j = j+1
Explanation: As already explained, the output of the following code is six frames per image, stored in a folder called "buffer". The fact that we have the test data organized in classes does not influence either the detection or the classification; it just helps us to check the results.
End of explanation
###################################### Apply 1 classifier (fish_detected) ##################################
#from PIL import Image
path = "buffer/"
extension2 = "*_00.jpg"
nam = ""
directory = os.path.join(path, extension)
files = glob.glob(directory)
for classes in fish_classes:
#Access folders
path_class = os.path.join(path,classes)
directory = os.path.join(path_class, extension2)
files = glob.glob(directory)
for file in files:
prob_fish = 0
f = str(os.path.basename(file))
#Access files
ext = f[:-6]+"*.jpg"
direct = os.path.join(path_class, ext)
for name in glob.glob(direct):
#Open image
img = cv2.imread(name)
if img.shape == (100,100,3): #Check that the image generated by the slidding window has the right size
#Predict probabilities
H = findHOGFeatures(Image(img))
aux = clf_all.predict_proba([H])
#Store highest probability frame
if prob_fish < aux[0,1]:
prob_fish = aux[0,1]
                    img = np.reshape(img, (100, 100, 3))  # window size used throughout this notebook
img_save = img
nam = name
#Save frame
cv2.imwrite("fish_detected/"+str(classes)+"/"+str(os.path.basename(nam)), img_save)
Explanation: Finally, we apply the "Fish" vs "No Fish" SVM in order to select the image which is most similar to a fish and store it in a folder called "fish_detected"
End of explanation |
3,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining inputs
Need to define some heterogeneous factors of production...
Step1: Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity).
Step2: Defining a production process
Next need to define some production process...
Step3: Define a boundary value problem
Step4: Pick some collocation solver
Step5: Compute some decent initial guess
Currently I guess that $\mu(x)$ has the form...
$$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$
(i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model...
$$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$
Step6: Solve the model!
Step7: Plot some results
Step8: Plot factor payments
Note the factor_payment_1 is wages and factor_payment_2 is profits...
Step9: Plot firm size against wages and profits
Step10: Plot the density for firm size
As you can see, the theta function is hump-shaped. Nothing special, but when calculating the pdf some arrangements have to be done for this
Step11: Distributions of factor payments
Can plot the distributions of average factor payments...
Step12: Widget | Python Code:
# define some workers skill
x, loc1, mu1, sigma1 = sym.var('x, loc1, mu1, sigma1')
skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x - loc1) - mu1) / sym.sqrt(2 * sigma1**2))
skill_params = {'loc1': 1e0, 'mu1': 0.0, 'sigma1': 1.0}
workers = pyam.Input(var=x,
cdf=skill_cdf,
params=skill_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=1.0
)
# define some firms
y, loc2, mu2, sigma2 = sym.var('y, loc2, mu2, sigma2')
productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y - loc2) - mu2) / sym.sqrt(2 * sigma2**2))
productivity_params = {'loc2': 1e0, 'mu2': 0.0, 'sigma2': 1.0}
firms = pyam.Input(var=y,
cdf=productivity_cdf,
params=productivity_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=1.0
)
Explanation: Defining inputs
Need to define some heterogeneous factors of production...
End of explanation
xs = np.linspace(workers.lower, workers.upper, 10000)
plt.plot(xs, workers.evaluate_pdf(xs))
plt.xlabel('Worker skill, $x$', fontsize=20)
plt.show()
Explanation: Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity).
End of explanation
# define symbolic expression for CES between x and y
omega_A, sigma_A = sym.var('omega_A, sigma_A')
A = ((omega_A * x**((sigma_A - 1) / sigma_A) +
(1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1)))
# define symbolic expression for CES between x and y
r, l, omega_B, sigma_B = sym.var('r, l, omega_B, sigma_B')
B = ((omega_B * r**((sigma_B - 1) / sigma_B) +
(1 - omega_B) * l**((sigma_B - 1) / sigma_B))**(sigma_B / (sigma_B - 1)))
F = A * B
# negative assortativity requires that sigma_A * sigma_B > 1
F_params = {'omega_A':0.25, 'omega_B':0.5, 'sigma_A':2.0, 'sigma_B':1.0 }
Explanation: Defining a production process
Next need to define some production process...
End of explanation
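As an optional sanity check (not part of the original notebook), the Cobb-Douglas limit of F used below can be evaluated numerically at a single point with sympy's subs/evalf; the input values here are arbitrary points inside the supports.
F_cobb_douglas = sym.limit(F, sigma_B, 1)
print(F_cobb_douglas.subs({x: 2.0, y: 3.0, r: 1.0, l: 1.0,
                           omega_A: 0.25, sigma_A: 2.0, omega_B: 0.5}).evalf())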
problem = pyam.AssortativeMatchingProblem(assortativity='negative',
input1=workers,
input2=firms,
F=sym.limit(F, sigma_B, 1),
F_params=F_params)
Explanation: Define a boundary value problem
End of explanation
solver = pycollocation.OrthogonalPolynomialSolver(problem)
Explanation: Pick some collocation solver
End of explanation
initial_guess = pyam.OrthogonalPolynomialInitialGuess(solver)
initial_polys = initial_guess.compute_initial_guess("Chebyshev",
degrees={'mu': 40, 'theta': 70},
f=lambda x, alpha: x**alpha,
alpha=1.0)
# quickly plot the initial conditions
xs = np.linspace(workers.lower, workers.upper, 1000)
plt.plot(xs, initial_polys['mu'](xs))
plt.plot(xs, initial_polys['theta'](xs))
plt.grid('on')
Explanation: Compute some decent initial guess
Currently I guess that $\mu(x)$ has the form...
$$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$
(i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model...
$$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$
End of explanation
domain = [workers.lower, workers.upper]
initial_coefs = {'mu': initial_polys['mu'].coef,
'theta': initial_polys['theta'].coef}
solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
solver.result.success
Explanation: Solve the model!
End of explanation
viz = pyam.Visualizer(solver)
viz.interpolation_knots = np.linspace(workers.lower, workers.upper, 1000)
viz.residuals.plot()
plt.show()
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
plt.show()
viz.solution.tail()
viz.solution[['mu', 'theta']].plot(subplots=True)
plt.show()
viz.solution[['Fxy', 'Fyl']].plot()
plt.show()
Explanation: Plot some results
End of explanation
viz.solution[['factor_payment_1', 'factor_payment_2']].plot(subplots=True)
plt.show()
Explanation: Plot factor payments
Note the factor_payment_1 is wages and factor_payment_2 is profits...
End of explanation
fig, axes = plt.subplots(1, 2, sharey=True)
axes[0].scatter(viz.solution.factor_payment_1, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[0].set_ylim(0, 1.05 * viz.solution.theta.max())
axes[0].set_xlabel('Wages, $w$')
axes[0].set_ylabel(r'Firm size, $\theta$')
axes[1].scatter(viz.solution.factor_payment_2, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[1].set_xlabel(r'Profits, $\pi$')
plt.show()
# to get correlation just use pandas!
viz.solution.corr()
# or a subset
viz.solution[['theta', 'factor_payment_1']].corr()
# or actual values!
viz.solution.corr().loc['theta']['factor_payment_1']
Explanation: Plot firm size against wages and profits
End of explanation
fig, axes = plt.subplots(1, 3)
theta_pdf = viz.compute_pdf('theta', normalize=True)
theta_pdf.plot(ax=axes[0])
axes[0].set_xlabel(r'Firm size, $\theta$')
axes[0].set_title(r'pdf')
theta_cdf = viz.compute_cdf(theta_pdf)
theta_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
axes[1].set_xlabel(r'Firm size, $\theta$')
theta_sf = viz.compute_sf(theta_cdf)
theta_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
axes[2].set_xlabel(r'Firm size, $\theta$')
plt.tight_layout()
plt.show()
Explanation: Plot the density for firm size
As you can see, the theta function is hump-shaped. Nothing special, but when calculating the pdf some care has to be taken: sort the thetas while preserving the order (so we can relate them to their xs) and then carefully use the right x when calculating the pdf.
The principle of Philipp's trick is:
$pdf_x(x_i)$ can be interpreted as the number of workers with ability $x_i$. $\theta_i$ is the size of the firms that employ workers of kind $x_i$. As all firms that match with workers of type $x_i$ choose the same firm size, $pdf_x(x_i)/\theta_i$ is the number of firms of size $\theta_i$.
Say there are 100 workers with ability $x_i$, and their associated firm size $\theta_i$ is 2. Then there are $100/2 = 50$ firms of size $\theta_i$.
End of explanation
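A tiny numeric version of the worked example above (made-up densities): dividing the number of workers at each ability level by the firm size chosen there gives the implied number of firms of that size.
workers_at_x = np.array([100.0, 80.0, 60.0])   # assumed worker counts at three ability levels
firm_size_at_x = np.array([2.0, 4.0, 5.0])     # assumed firm sizes chosen at those levels
firms_at_x = workers_at_x / firm_size_at_x     # e.g. 100 workers / size 2 -> 50 firms
print(firms_at_x)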
fig, axes = plt.subplots(1, 3)
factor_payment_1_pdf = viz.compute_pdf('factor_payment_1', normalize=True)
factor_payment_1_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_1_cdf = viz.compute_cdf(factor_payment_1_pdf)
factor_payment_1_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_1_sf = viz.compute_sf(factor_payment_1_cdf)
factor_payment_1_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
fig, axes = plt.subplots(1, 3)
factor_payment_2_pdf = viz.compute_pdf('factor_payment_2', normalize=True)
factor_payment_2_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_2_cdf = viz.compute_cdf(factor_payment_2_pdf)
factor_payment_2_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_2_sf = viz.compute_sf(factor_payment_2_cdf)
factor_payment_2_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
Explanation: Distributions of factor payments
Can plot the distributions of average factor payments...
End of explanation
from IPython.html import widgets
def interactive_plot(viz, omega_A=0.25, omega_B=0.5, sigma_A=0.5, sigma_B=1.0,
loc1=1.0, mu1=0.0, sigma1=1.0, loc2=1.0, mu2=0.0, sigma2=1.0):
# update new parameters as needed
new_F_params = {'omega_A': omega_A, 'omega_B': omega_B,
'sigma_A': sigma_A, 'sigma_B': sigma_B}
viz.solver.problem.F_params = new_F_params
new_input1_params = {'loc1': loc1, 'mu1': mu1, 'sigma1': sigma1}
viz.solver.problem.input1.params = new_input1_params
new_input2_params = {'loc2': loc2, 'mu2': mu2, 'sigma2': sigma2}
viz.solver.problem.input2.params = new_input2_params
# solve the model using a hotstart initial guess
domain = [viz.solver.problem.input1.lower, viz.solver.problem.input1.upper]
initial_coefs = viz.solver._coefs_array_to_dict(viz.solver.result.x, viz.solver.degrees)
viz.solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
if viz.solver.result.success:
viz._Visualizer__solution = None # should not need to access this!
viz.interpolation_knots = np.linspace(domain[0], domain[1], 1000)
viz.solution[['mu', 'theta']].plot(subplots=True)
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
else:
print "Foobar!"
viz_widget = widgets.fixed(viz)
# widgets for the model parameters
eps = 1e-2
omega_A_widget = widgets.FloatSlider(value=0.25, min=eps, max=1-eps, step=eps,
description=r"$\omega_A$")
sigma_A_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\sigma_A$")
omega_B_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\omega_B$")
sigma_B_widget = widgets.fixed(1.0)
# widgets for input distributions
loc_widget = widgets.fixed(1.0)
mu_1_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_1$")
mu_2_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_2$")
sigma_1_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_1$")
sigma_2_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_2$")
widgets.interact(interactive_plot, viz=viz_widget, omega_A=omega_A_widget,
sigma_A=sigma_A_widget, omega_B=omega_B_widget,
sigma_B=sigma_B_widget, sigma1=sigma_1_widget,
loc1=loc_widget, mu1 = mu_1_widget,
loc2=loc_widget, sigma2=sigma_2_widget, mu2 = mu_2_widget)
# widget is changing the parameters of the underlying solver
solver.result.x
Explanation: Widget
End of explanation |
3,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: OT for image color adaptation
This example presents a way of transferring colors between two images
with Optimal Transport as introduced in [6]
[6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014).
Regularized discrete optimal transport.
SIAM Journal on Imaging Sciences, 7(3), 1853-1882.
Step3: Generate data
Step4: Plot original image
Step5: Scatter plot of colors
Step6: Instantiate the different transport algorithms and fit them
Step7: Plot new images | Python Code:
# Authors: Remi Flamary <[email protected]>
# Stanislas Chambon <[email protected]>
#
# License: MIT License
import numpy as np
from scipy import ndimage
import matplotlib.pylab as pl
import ot
r = np.random.RandomState(42)
def im2mat(I):
    """Converts an image to a matrix (one pixel per line)"""
return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))
def mat2im(X, shape):
    """Converts a matrix back to an image"""
return X.reshape(shape)
def minmax(I):
return np.clip(I, 0, 1)
Explanation: OT for image color adaptation
This example presents a way of transferring colors between two images
with Optimal Transport as introduced in [6]
[6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014).
Regularized discrete optimal transport.
SIAM Journal on Imaging Sciences, 7(3), 1853-1882.
End of explanation
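Before working with full images, here is a minimal optimal transport sketch (not part of the original example) that couples two tiny 1-D distributions with the exact EMD solver, just to show what the transport plan looks like.
a_toy = np.array([0.5, 0.5])            # assumed source weights
b_toy = np.array([0.25, 0.75])          # assumed target weights
M_toy = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # pairwise ground cost
G_toy = ot.emd(a_toy, b_toy, M_toy)     # optimal coupling between the two distributions
print(G_toy)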
# Loading images
I1 = ndimage.imread('../data/ocean_day.jpg').astype(np.float64) / 256
I2 = ndimage.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256
X1 = im2mat(I1)
X2 = im2mat(I2)
# training samples
nb = 1000
idx1 = r.randint(X1.shape[0], size=(nb,))
idx2 = r.randint(X2.shape[0], size=(nb,))
Xs = X1[idx1, :]
Xt = X2[idx2, :]
Explanation: Generate data
End of explanation
pl.figure(1, figsize=(6.4, 3))
pl.subplot(1, 2, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.imshow(I2)
pl.axis('off')
pl.title('Image 2')
Explanation: Plot original image
End of explanation
pl.figure(2, figsize=(6.4, 3))
pl.subplot(1, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 2], c=Xs)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 2], c=Xt)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 2')
pl.tight_layout()
Explanation: Scatter plot of colors
End of explanation
# EMDTransport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# SinkhornTransport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# prediction between images (using out of sample prediction as in [6])
transp_Xs_emd = ot_emd.transform(Xs=X1)
transp_Xt_emd = ot_emd.inverse_transform(Xt=X2)
transp_Xs_sinkhorn = ot_emd.transform(Xs=X1)
transp_Xt_sinkhorn = ot_emd.inverse_transform(Xt=X2)
I1t = minmax(mat2im(transp_Xs_emd, I1.shape))
I2t = minmax(mat2im(transp_Xt_emd, I2.shape))
I1te = minmax(mat2im(transp_Xs_sinkhorn, I1.shape))
I2te = minmax(mat2im(transp_Xt_sinkhorn, I2.shape))
Explanation: Instantiate the different transport algorithms and fit them
End of explanation
pl.figure(3, figsize=(8, 4))
pl.subplot(2, 3, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Image 1')
pl.subplot(2, 3, 2)
pl.imshow(I1t)
pl.axis('off')
pl.title('Image 1 Adapt')
pl.subplot(2, 3, 3)
pl.imshow(I1te)
pl.axis('off')
pl.title('Image 1 Adapt (reg)')
pl.subplot(2, 3, 4)
pl.imshow(I2)
pl.axis('off')
pl.title('Image 2')
pl.subplot(2, 3, 5)
pl.imshow(I2t)
pl.axis('off')
pl.title('Image 2 Adapt')
pl.subplot(2, 3, 6)
pl.imshow(I2te)
pl.axis('off')
pl.title('Image 2 Adapt (reg)')
pl.tight_layout()
pl.show()
Explanation: Plot new images
End of explanation |
3,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch. 6 - Minibatch Gradient Descent
Before you head off into the challenge, there is one more topic we need to cover
Step1: Now we will generate a dataset with 10,000 examples. This should be enough to showcase the concept while still working with reasonable resources.
Step2: As you can see, the problem is the same as in the last chapter. We need to separate three groups of customers. But this time around we have a lot more customers.
Step3: Generating minibatches
A minibatch is a randomly drawn subset of the training set. We will define a method to create an array of these subsets we can loop over.
Step4: We have now defined a helper function that can generate a set of minibatches from our training set. Note that the batches get randomly drawn without replacement. It has been shown that this leads to faster learning. Now we need to incorporate the minibatch method into our training function. The other functions of our neural net stay unaffected so we are going to define them here now before moving to the training function.
Step5: Minibatches in training
In training, we loop over forward and backward propagation multiple times to optimize the model parameters. So far, we have used the entire training set. Since we now have multiple batches, it means we have to add another loop, to loop over all the batches in one epoch. By using minibatches, we also add a new hyperparameter, the minibatch size, that is, how many examples should be in one minibatch.
Step6: As you can see, our model now can learn on the much bigger dataset without draining too much memory. In fact, it has achieved even better results than before because it now has more data to work with. We can confirm this by plotting the decision boundary again.
Step7: Minibatches and noisy training
If we pay close attention to how our model learns now, we see that the loss decay has become a lot more noisy.
Step8: The noise comes from the fact that the smaller batches might be statistically different from the larger training set. It can therefore be a good idea to adjust the learning rate a bit to make sure that an outlier minibatch does not move the model all too much. | Python Code:
# Package imports
# Matplotlib is a matlab like plotting library
import matplotlib
import matplotlib.pyplot as plt
# Numpy handles matrix operations
import numpy as np
# SciKitLearn is a useful machine learning utilities library
import sklearn
# The sklearn dataset module helps generating datasets
import sklearn.datasets
import sklearn.linear_model
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates decision boundary plot
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], s=1,c=y, cmap=plt.cm.Accent)
Explanation: Ch. 6 - Minibatch Gradient Descent
Before you head off into the challenge, there is one more topic we need to cover: Minibatches.
So far we have used our entire training set at once to train our models. That worked fine as long as we had only 200 or so examples. But as we saw in chapter 1, machine learning works best with massive datasets with hundreds of thousands or even millions of examples. To train the neural network for Face ID on the iPhone, apple used over a billion images of faces. No computer has the memory capacity to process a billion images at once, so in order to make training on larger sets feasible we need to divide the training set into smaller 'minibatches'.
In this chapter we will have a look at how to implement minibatch gradient descent.
Warning: Since we will start working with large amounts of data now, this might no longer work on your laptop. Run this notebook on a cloud computing instance with more RAM and CPUs
As is customary, we will load all our libraries and define the decision boundary helper method first
End of explanation
# Generate a BIG dataset and plot it
# This might take a little while
num_samples = 10000
np.random.seed(0)
X, y = sklearn.datasets.make_blobs(n_samples=num_samples,centers=3,cluster_std=0.8)
# We will make the points a little bit smaller so that the graph is more clear
plt.scatter(X[:,0], X[:,1], s=1, c=y, cmap=plt.cm.Accent)
Explanation: Now we will generate a dataset with 10,000 examples. This should be enough to showcase the concept while still working with reasonable resources.
End of explanation
# Generate one hot encoding
# Reshape from array to vector
y = y.reshape(num_samples,1)
# Generate one hot encoding
enc = OneHotEncoder()
onehot = enc.fit_transform(y)
# Convert to numpy vector
y = onehot.toarray()
Explanation: As you can see, the problem is the same as in the last chapter. We need to separate three groups of customers. But this time around we have a lot more customers.
End of explanation
def get_mini_batches(X, y, batch_size):
'''
Generates an array of randomly drawn minibatches.
'''
# First we shuffle the training data so that it is later easier to draw random samples from it
    # Generate random indexes, sampled without replacement
random_idxs = np.random.choice(len(y), len(y), replace=False)
# Generate a shuffled version of the examples in X
X_shuffled = X[random_idxs,:]
    # Generate a shuffled version of the targets in y
# Note that since we use the same random indexes for X and y the example and the targets will still match
y_shuffled = y[random_idxs]
# List statements to move through set and sample batches
mini_batches = [(X_shuffled[i:i+batch_size,:], y_shuffled[i:i+batch_size]) for
i in range(0, len(y), batch_size)]
return mini_batches
Explanation: Generating minibatches
A minibatch is a randomly drawn subset of the training set. We will define a method to create an array of these subsets we can loop over.
End of explanation
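A quick check of the generator before moving on: split the encoded data into batches of 32 and look at the shapes of the first batch (the exact number of batches depends on num_samples).
example_batches = get_mini_batches(X, y, batch_size=32)
print('Number of minibatches:', len(example_batches))
print('First minibatch X shape:', example_batches[0][0].shape)
print('First minibatch y shape:', example_batches[0][1].shape)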
def softmax(z):
'''
Calculates the softmax activation of a given input x
See: https://en.wikipedia.org/wiki/Softmax_function
'''
#Calculate exponent term first
exp_scores = np.exp(z)
return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
def softmax_loss(y,y_hat):
'''
Calculates the generalized logistic loss between a prediction y_hat and the labels y
See: http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
# Clipping calue
minval = 0.000000000001
# Number of samples
m = y.shape[0]
# Loss formula, note that np.sum sums up the entire matrix and therefore does the job of two sums from the formula
loss = -1/m * np.sum(y * np.log(y_hat.clip(min=minval)))
return loss
# Log loss derivative, equal to softmax loss derivative
def loss_derivative(y,y_hat):
'''
Calculates the gradient (derivative) of the loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
def tanh_derivative(x):
'''
Calculates the derivative of the tanh function that is used as the first activation function
See: https://socratic.org/questions/what-is-the-derivative-of-tanh-x
'''
return (1 - np.power(x, 2))
def forward_prop(model,a0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Linear step
z1 = a0.dot(W1) + b1
# First activation function
a1 = np.tanh(z1)
# Second linear step
z2 = a1.dot(W2) + b2
# Second activation function
a2 = softmax(z2)
cache = {'a0':a0,'z1':z1,'a1':a1,'z2':z2,'a2':a2}
return cache
def backward_prop(model,cache,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Load forward propagation results
a0,a1, a2 = cache['a0'],cache['a1'],cache['a2']
# Get number of samples
m = y.shape[0]
# Backpropagation
# Calculate loss derivative with respect to output
dz2 = loss_derivative(y=y,y_hat=a2)
# Calculate loss derivative with respect to second layer weights
dW2 = 1/m*(a1.T).dot(dz2)
# Calculate loss derivative with respect to second layer bias
db2 = 1/m*np.sum(dz2, axis=0)
# Calculate loss derivative with respect to first layer
dz1 = np.multiply(dz2.dot(W2.T) ,tanh_derivative(a1))
# Calculate loss derivative with respect to first layer weights
dW1 = 1/m*np.dot(a0.T, dz1)
# Calculate loss derivative with respect to first layer bias
db1 = 1/m*np.sum(dz1, axis=0)
# Store gradients
grads = {'dW2':dW2,'db2':db2,'dW1':dW1,'db1':db1}
return grads
def initialize_parameters(nn_input_dim,nn_hdim,nn_output_dim):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
# First layer weights
W1 = 2 *np.random.randn(nn_input_dim, nn_hdim) - 1
# First layer bias
b1 = np.zeros((1, nn_hdim))
# Second layer weights
W2 = 2 * np.random.randn(nn_hdim, nn_output_dim) - 1
# Second layer bias
b2 = np.zeros((1, nn_output_dim))
# Package and return model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
def update_parameters(model,grads,learning_rate):
'''
Updates parameters accoarding to gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load parameters
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Update parameters
W1 -= learning_rate * grads['dW1']
b1 -= learning_rate * grads['db1']
W2 -= learning_rate * grads['dW2']
b2 -= learning_rate * grads['db2']
# Store and return parameters
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
def predict(model, x):
'''
Predicts y_hat as 1 or 0 for a given input X
'''
# Do forward pass
c = forward_prop(model,x)
#get y_hat
y_hat = np.argmax(c['a2'], axis=1)
return y_hat
Explanation: We have now defined a helper function that can generate a set of minibatches from our training set. Note that the batches get randomly drawn without replacement. It has been shown that this leads to faster learning. Now we need to incorporate the minibatch method into our training function. The other functions of our neural net stay unaffected so we are going to define them here now before moving to the training function.
End of explanation
def train(model,X_,y_,learning_rate, batch_size = 32, epochs=20000, print_loss=False):
# Generate minibatches:
minibatches = get_mini_batches(X=X_,y=y_,batch_size=batch_size)
# Set up loss tracking array, will hold losses for each minibatch
losses = []
# Gradient descent. Loop over epochs
for i in range(0, epochs):
# Loop through the minibatches
for mb in minibatches:
# Get examples
X_mb = mb[0]
# Get targets
y_mb = mb[1]
# Forward propagation
cache = forward_prop(model,X_mb)
#a1, probs = cache['a1'],cache['a2']
# Backpropagation
grads = backward_prop(model,cache,y_mb)
# Gradient descent parameter update
# Assign new parameters to the model
model = update_parameters(model=model,grads=grads,learning_rate=learning_rate)
# Track losses
a2 = cache['a2']
loss = softmax_loss(y_mb,a2)
losses.append(loss)
# Pring loss & accuracy every 100 iterations
if print_loss and i % 100 == 0:
a2 = cache['a2']
print('Loss after iteration',i,':',softmax_loss(y_mb,a2))
y_hat = predict(model,X_)
y_true = y.argmax(axis=1)
print('Accuracy after iteration',i,':',accuracy_score(y_pred=y_hat,y_true=y_true)*100,'%')
return model, losses
# Hyper parameters
hiden_layer_size = 3
# I picked this value because it showed good results in my experiments
learning_rate = 0.01
# Small mini batch size
batch_size = 32
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim= hiden_layer_size, nn_output_dim= 3)
model, losses = train(model,X,y,learning_rate=learning_rate,batch_size=batch_size,epochs=1000,print_loss=True)
Explanation: Minibatches in training
In training, we loop over forward and backward propagation multiple times to optimize the model parameters. So far, we have used the entire training set. Since we now have multiple batches, it means we have to add another loop, to loop over all the batches in one epoch. By using minibatches, we also add a new hyperparameter, the minibatch size, that is, how many examples should be in one minibatch.
End of explanation
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model,x))
plt.title("Decision Boundary for hidden layer size 3")
Explanation: As you can see, our model now can learn on the much bigger dataset without draining too much memory. In fact, it has achieved even better results than before because it now has more data to work with. We can confirm this by plotting the decision boundary again.
End of explanation
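As an additional quick check (not in the original notebook), we can compare the trained model's predictions against the true labels on the full training set using the helper functions defined above.
y_hat_full = predict(model, X)
full_accuracy = accuracy_score(y_true=y.argmax(axis=1), y_pred=y_hat_full)
print('Training accuracy of the minibatch-trained model:', full_accuracy * 100, '%')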
plt.plot(losses[:2000])
Explanation: Minibatches and noisy training
If we pay close attention to how our model learns now, we see that the loss decay has become a lot more noisy.
End of explanation
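One simple way to see the trend underneath the minibatch noise is to smooth the recorded losses with a moving average before plotting; the window length of 50 below is an arbitrary choice.
window = 50
smoothed_losses = np.convolve(losses, np.ones(window) / window, mode='valid')
plt.plot(smoothed_losses[:2000])
plt.xlabel('Update step')
plt.ylabel('Smoothed loss')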
# Hyper parameters
hiden_layer_size = 3
# Smaller learning rate for minibatches
learning_rate = 0.001
# Small mini batch size
batch_size = 32
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim= hiden_layer_size, nn_output_dim= 3)
model, losses = train(model,X,y,learning_rate=learning_rate,batch_size=batch_size,epochs=1000,print_loss=True)
plt.plot(losses[:2000])
Explanation: The noise comes from the fact that the smaller batches might be statistically different from the larger training set. It can therefore be a good idea to adjust the learning rate a bit to make sure that an outlier minibatch does not move the model all too much.
End of explanation |
3,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 10 Key
CHE 116
Step1: The $p$-value is 0.46, so the data is likely normal
2.2 Answer
The null hypothesis is that $\hat{\alpha} = 0$. Since we're testing against the null hypothesis, our degrees of freedom will be $N - 1$.
Step2: The $p$-value is 0.29, so we cannot reject the null hypothesis. No intercept necessary
2.3 Answer
Let's make the null hypothesis that the slope is positive. We will create a T statistic, which should correspond to some interval/$p$-value that gets smaller (closer to our significance threshold) as we get more negative in our slope. This will work
Step3: Due to the high standard error, there is not enough evidence to reject the null hypothesis of a positive slope
2.4 Answer
Step4: 2.5 Answer
$N - D = 43$
2.6 Answer
$$F_{21} = \frac{\partial f(\hat{\beta}, x_2)}{\partial \beta_1} = \hat{\beta}_0 x_2 y_2^{\hat{\beta}_1} \ln y_2$$
Step5: 3. Regression in Excel (30 Points)
Regress the data in the next cell to a slope/intercept equation. Use the np.savetxt to create a CSV file. Provide the following labeled/bolded quantities at the top of your Excel file
Step6: Answer
As reported by linest
Step7: 4 Answer
We will linearize the equation to dimensions $\left[1, x, x^2\right]$ and dimensions $\left[1, x\right]$ using the equations from lecture notes. Note
Step8: Our $T$-value is 0.46. Our MATLAB install doesn't have the stats add-on, so we'll use a table or Python to look up the CDF function. Our degrees of freedom comes from the null hypothesis, which is $N - 2 = 18$
Step9: So we do not have enough evidence for the extra $x^2$ term in the regression. The fit parameters are -2.5 (intercept) and 1.7 (slope)
5. Python Regression (40 Points)
Regress the following data to this equation
Step10: Answer
Justification for Regression - 5 Points
Step11: There is a strong correlation in the data, with a $p$-value of $10^{-9}$. This indicates a regression is justified to perform
Regression - 15 Points
Step12: This is a non-convex problem based on this test. I will use basin-hopping then
Step13: It tried to put negative values in a log, so I will add a bound
Step14: Checking Residuals and Fit - 5 Points
Step15: The $R^2$ value is $0.895$ and the $p$-value for normality is $0.82$. These two data together show that the residuals are likely normally distributed and the regression fits well
Plotting the fit - 5 Points
Step16: The fit looks excellent
Error Analysis - 10 Points
The partials needed for error analysis are | Python Code:
import scipy.stats as ss
ss.shapiro([-26.6,-24.0, -20.9, -25.8, -24.3, -22.6, -23.0, -26.8, -26.5, -23.6, -20.0, -23.1, -22.4, -22.5])
Explanation: Homework 10 Key
CHE 116: Numerical Methods and Statistics
Prof. Andrew White
Version 1 (3/30/2016)
0. Revise a Problem (15 Bonus Points on HW 7)
Revisit a problem you got wrong on homework 7. If you got a perfect score on homework 7, state that fact. Go through each part you missed and state what your answer was and what your mistake was. If you completed this already on homework 8, state that you completed this on homework 8
For example:
Problem 1.1
My answer used the scipy comb function instead of factorial.
1. Short Answer Problems (16 Points)
A $t$-test and $zM$ test rely on the assumption of normality. How could you test that assumption?
What is $\hat{\alpha}$ in OLS? Use words.
What is $S_{\epsilon}$ in OLS? Use words.
What is the difference between SSR and TSS? Use words
We learned three ways to do regression. One way was with algebraic equations (OLS-1D). What were the other two ways?
What are the steps to complete for a good regression analysis?
Is a goodness of fit applicable to a non-linear regression?
If your residuals are not normal, is a regression still valid?
1. Answers
Shapiro-Wilk test
The best-fit intercept
The standard error in residuals
SSR is the sum of squared distances between the fit y and the data y. TSS is the sum of squared distances between the average y and all y data.
With optimization (minimization) and matrix algebra
(1) Justify with a Spearman test (2) Regress (3) Check normality of residuals (4) Hypothesis tests/confidence intervals as needed
Yes
no
2. Exercises (24 Points)
Are these numbers normally distributed? [-26.6,-24.0, -20.9, -25.8, -24.3, -22.6, -23.0, -26.8, -26.5, -23.6, -20.0, -23.1, -22.4, -22.5]
Given $\hat{\alpha} = 1.2$, $\hat{\beta} = -5.3$, $N = 14$, $S^2_\alpha = 0.8$, $S^2_\epsilon = 0.2$, $S^2_\beta = 12$, conduct a hypothesis test on the existence of the intercept.
Conduct a hypothesis test for the slope being negative using the above data. This is a one-sided hypothesis test. Hint: a good null hypothesis would be that the slope is positive
Write a function which computes the SSR for $\hat{y} = \beta_0 + \beta_1 \cos \beta_2 x $. Your function should take in one argument. You may assume $x$ and $y$ are defined.
In OLS-ND, if my ${\mathbf X}$ has dimensions of $53 \times 5$, how many degrees of freedom do I have?
If my model equation is $\hat{z} = \beta_0 x y^{\,\beta_1}$, what would ${\mathbf F_{21}}$ be if $\hat{\beta_0} = 1.5$, $\hat{\beta_1} = 2.0$, $x_1 = 1.0$, $x_2 = 1.5$, $y_1 = 0.5$, $y_2 = 1.2$.
2.1 Answer
End of explanation
import numpy as np
T = (1.2 - 0) / np.sqrt(1.2)
1 - (ss.t.cdf(T, 14- 1) - ss.t.cdf(-T, 14- 1))
Explanation: The $p$-value is 0.46, so the data is likely normal
2.2 Answer
The null hypothesis is that $\hat{\alpha} = 0$. Since we're testing against the null hypothesis, our degrees of freedom will be $N - 1$.
End of explanation
T = -5.3 / np.sqrt(12)
ss.t.cdf(T, 14 - 1)
Explanation: The $p$-value is 0.29, so we cannot reject the null hypothesis. No intercept necessary
2.3 Answer
Let's make the null hypothesis that the slope is positive. We will create a T statistic, which should correspond to some interval/$p$-value that gets smaller (closer to our significance threshold) as we get more negative in our slope. This will work:
$$ p = \int_{-\infty}^{T} p(T)$$
where $T$ is our negative value reflecting how negative the slope is.
You can use 1 or 2 deducted degrees of freedom. 1 is correct, since there is no degree of freedom for the intercept here, but it's a little bit tricky to see that.
End of explanation
def ssr(beta):
yhat= beta[0] + beta[1] * np.cos(x * beta[2])
return np.sum( (y - yhat)**2)
Explanation: Due to the high standard error, there is not enough evidence to reject the null hypothesis of a positive slope
2.4 Answer
End of explanation
from math import log
1.5 * 1.5 * 1.2**(2.0) * log(1.2)
Explanation: 2.5 Answer
$N - D = 43$
2.6 Answer
$$F_{21} = \frac{\partial f(\hat{\beta}, x_2)}{\partial \beta_1} = \hat{\beta}_0 x_2 y_2^{\hat{\beta}_1} \ln y_2$$
End of explanation
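As an optional aside, this partial derivative can be verified symbolically; the sketch below assumes sympy is available in the environment (it is not used elsewhere in this key):
import sympy as sp
b0, b1, xs, ys = sp.symbols('beta0 beta1 x y', positive=True)
dfdb1 = sp.diff(b0 * xs * ys**b1, b1)   # gives beta0*x*y**beta1*log(y)
print(dfdb1)
print(sp.N(dfdb1.subs({b0: 1.5, b1: 2.0, xs: 1.5, ys: 1.2})))  # should match the value computed above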
x = [0.5,1.3, 2.1, 1.0, 2.1, 1.7, 1.2, 3.9, 3.9, 1.5, 3.5, 3.9, 5.7, 4.7, 5.8, 4.6, 5.1, 5.9, 5.5, 6.4, 6.7, 7.8, 7.4, 6.7, 8.4, 6.9, 10.2, 9.7, 10.0, 9.9]
y = [-1.6,0.5, 3.0, 3.1, 1.5, -1.8, -3.6, 7.0, 8.6, 2.2, 9.3, 3.6, 14.1, 9.5, 14.0, 7.4, 6.4, 17.2, 11.8, 12.2, 18.9, 21.9, 20.6, 15.7, 23.7, 13.6, 26.8, 22.0, 27.5, 23.3]
np.savetxt(fname='data.csv', delimiter=',', X=np.column_stack( (x, y)))
Explanation: 3. Regression in Excel (30 Points)
Regress the data in the next cell to a slope/intercept equation. Use the np.savetxt to create a CSV file. Provide the following labeled/bolded quantities at the top of your Excel file:
The slope with confidence interval
The intercept with confidence interval
A $p$-value for existence of slope. Use Excel to generate your T value.
You do not need to do all the steps for a good regression, but do make a plot of your fit and the data. Use the linest command in Excel to compute the slope/intercept and standard errors
End of explanation
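As a cross-check only (not part of the Excel deliverable), the same slope, intercept, and slope p-value can be reproduced in Python with scipy.stats.linregress on the x and y arrays defined above; the exact values reported by Excel's linest may differ slightly in rounding:
# Cross-check of the Excel/linest fit using scipy (x, y as defined in the previous cell)
slope, intercept, r_value, p_value, std_err = ss.linregress(x, y)
print('slope: %.2f +/- %.2f (std err)' % (slope, std_err))
print('intercept: %.2f' % intercept)
print('p-value for the slope: %.3g' % p_value)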
x = [-5.8,-4.6, -3.9, -3.4, -1.8, -2.1, -3.0, -0.8, 0.4, -0.2, -0.4, -0.0, 2.0, 1.1, 1.4, 1.2, 3.3, 4.3, 4.3, 3.0]
y = [-6.4,-7.7, -9.3, -9.2, -8.9, -7.3, -9.5, -5.0, -3.7, -6.9, -4.0, -3.8, 2.6, -0.6, -0.7, -0.1, 5.0, 4.8, 8.5, 2.5]
Explanation: Answer
As reported by linest:
Slope is $3.0 \pm 0.3$
Intercept is $-4 \pm 2$
The $p$-value is $0.00016$
4. Regression in Matlab (30 Points)
Regress the following non-linear equation in Matlab:
$$y =\beta_0 + \beta_1 x + \beta_2 x^2 $$
Perform the regression with and without $\beta_2$. Should there be a $\beta_2$ term? Justify your answer. You do not need to do all the steps for a good regression. Do plot your two regressions and original data.
Hints:
Try doing this in a MATLAB notebook so that you have syntax highlighting and autocomplete
We do not have the stats module installed for Matlab, so if you have a T-statistic you need to evaluate use a quick python cell or look it up in a table.
If you find yourself doing very complex optimization, stop and think.
End of explanation
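For reference, the MATLAB cell below implements ordinary least squares through the normal equations on the linearized design matrix, with parameter standard errors taken from the residual variance; in compact form it computes:
$$\hat{\beta} = (X^TX)^{-1}X^Ty, \qquad S^2_\epsilon = \frac{\sum_i r_i^2}{N - p}, \qquad S_{\beta_j} = \sqrt{S^2_\epsilon\left[(X^TX)^{-1}\right]_{jj}}$$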
%load_ext pymatbridge
%%matlab
x = [-5.8,-4.6, -3.9, -3.4, -1.8, -2.1, -3.0, -0.8, 0.4, -0.2, -0.4, -0.0, 2.0, 1.1, 1.4, 1.2, 3.3, 4.3, 4.3, 3.0];
y = [-6.4,-7.7, -9.3, -9.2, -8.9, -7.3, -9.5, -5.0, -3.7, -6.9, -4.0, -3.8, 2.6, -0.6, -0.7, -0.1, 5.0, 4.8, 8.5, 2.5];
[nothing N] = size(x);
%get regressed fit
x_mat_1 = cat(1, ones(1, N), x)';
beta_mat_1 = inv(x_mat_1' * x_mat_1) * x_mat_1' * y'
%get regressed fit for 2nd degree polynomial
x_mat_2 = cat(1, ones(1, N), x, x .^ 2)';
beta_mat_2 = inv(x_mat_2' * x_mat_2) * x_mat_2' * y'
%get DOF
dof_2 = N - 3
%get error
s2_e_2 = sum((x_mat_2 * beta_mat_2 - y') .^ 2) / dof_2;
s_b_2 = sqrt(diag((s2_e_2 * inv(x_mat_2' * x_mat_2))))
%compute T-value
T = beta_mat_2(3) / s_b_2(3)
%for plotting, some of the x values are out of order so I'll make new ones
x_plot = linspace(-6, 5, 100);
x_mat_plot_2 = cat(1, ones(1, 100), x_plot, x_plot .^ 2)';
x_mat_plot_1 = cat(1, ones(1, 100), x_plot)';
plot(x, y, 'o')
hold on
plot(x_plot, x_mat_plot_1 * beta_mat_1, '-')
plot(x_plot, x_mat_plot_2 * beta_mat_2, '-')
legend('data', 'fit - line', 'fit - 2nd degree')
Explanation: 4 Answer
We will linearize the equation to dimensions $\left[1, x, x^2\right]$ and dimensions $\left[1, x\right]$ using the equations from lecture notes. Note: writing the code below is easier done in a MATLAB notebook and then copied back. That will give you autocomplete, syntax highlighting, etc.
End of explanation
import scipy.stats as ss
1 - (ss.t.cdf(0.46, 18) - ss.t.cdf(-0.46, 18))
Explanation: Our $T$-value is 0.46. Our MATLAB install doesn't have the stats add-on, so we'll use a table or Python to look up the CDF function. Our degrees of freedom comes from the null hypothesis, which is $N - 2 = 18$
End of explanation
x = [1.4,2.3, 3.7, 5.3, 6.6, 8.2, 10.2, 11.8, 12.7, 13.3, 14.6, 17.3, 18.6, 19.5, 21.6, 22.7, 23.6, 24.1]
y = [1.0,0.3, -0.1, -0.1, -0.3, -0.4, -0.4, -0.5, -0.4, -0.5, -0.4, -0.6, -0.8, -0.8, -0.6, -0.9, -0.7, -1.1]
Explanation: So we do not have enough evidence for the extra $x^2$ term in the regression. The fit parameters are -2.5 (intercept) and 1.7 (slope)
5. Python Regression (40 Points)
Regress the following data to this equation:
$$ \hat{y} = \beta_0 \ln \frac{x}{\beta_1} $$
Follow regression best practices, including writing out all necessary equations in Markdown
End of explanation
ss.spearmanr(x, y)
Explanation: Answer
Justification for Regression - 5 Points
End of explanation
import numpy as np
import scipy.optimize as opt
def ssr(betas, data):
'''Compute the SSR given the betas and the data. The data should be a list containing two arrays, the x and y data.'''
x = data[0]
y = data[1]
yhat = betas[0] * np.log(x / betas[1])
return np.sum((yhat - y)**2)
#test my function
betas = [1,1]
x = np.array(x)
y = np.array (y)
ssr(betas, data=[x, y])
result = opt.minimize(ssr, x0=[1,1], args=([x,y],))
print(result.x)
#check another point to see if it's non-convex
result = opt.minimize(ssr, x0=[-2,100], args=([x,y]))
print(result.x)
Explanation: There is a strong correlation in the data, with a $p$-value of $10^{-9}$. This indicates a regression is justified to perform
Regression - 15 Points
End of explanation
minimizer_kwargs = {'args': ([x,y])}
result = opt.basinhopping(ssr, x0=[1000,1000], niter=1000, minimizer_kwargs=minimizer_kwargs)
print(result)
Explanation: This is a non-convex problem based on this test. I will use basin-hopping then
End of explanation
minimizer_kwargs = {'args': ([x,y]), 'bounds':[(-np.infty, np.infty), (10**-10, np.infty)]}
result = opt.basinhopping(ssr, x0=[100,100], niter=10000, minimizer_kwargs=minimizer_kwargs)
print(result)
#store them in a nice spot for later
beta_hat = result.x
Explanation: It tried to put negative values in a log, so I will add a bound
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
resids = beta_hat[0] * np.log(x / beta_hat[1]) - y
plt.hist(resids)
plt.show()
print(ss.shapiro(resids))
SSR = np.sum(resids ** 2)
TSS = np.sum((np.mean(y) - y)**2)
print(1 - SSR / TSS)
Explanation: Checking Residuals and Fit - 5 Points
End of explanation
plt.plot(x,y,'o', label='data')
plt.plot(x, beta_hat[0] * np.log(x / beta_hat[1]), label='fit')
plt.legend()
plt.show()
Explanation: The $R^2$ value is $0.895$ and the $p$-value for normality is $0.82$. These two data together show that the residuals are likely normally distributed and the regression fits well
Plotting the fit - 5 Points
End of explanation
import scipy.linalg as linalg
#compute the F-matrix
F_mat = np.column_stack( (np.log(x / beta_hat[1]), np.ones(len(x)) * -beta_hat[0] / beta_hat[1]) )
#standard error in residuals
s2_e = np.sum(resids**2) / (len(x) - len(beta_hat))
#standard error in parameters
s2_b = s2_e * linalg.inv(F_mat.transpose().dot(F_mat))
#go from matrix of squares to actual standard errors
s_b = np.sqrt(np.diag(s2_b))
#use a for loop to print them all pretty
from IPython import display
for beta, se,i in zip(beta_hat, s_b, range(len(s_b))):
display.display(display.Latex('$\\beta_{} = {:.2} \pm {:.2}$'.format(i, beta, se * ss.t.ppf(0.975, len(x) - len(beta_hat)))))
Explanation: The fit looks excellent
Error Analysis - 10 Points
The partials needed for error analysis are:
$$\frac{\partial f(x_i, \hat{\beta})}{\partial \beta_0} = \ln \frac{x}{\beta_1}$$
$$\frac{\partial f(x_i, \hat{\beta})}{\partial \beta_1} = -\beta_0 \frac{\beta_1}{x} \frac{x}{\beta_1^2} = -\frac{\beta_0}{\beta_1}$$
End of explanation |
3,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logit Transform and Normalize Methylation Data
Step1: Prepare Data for Association Tests
The association tests take a while to run in serial so we do them in a map-reduce type format
The idea is we break the data into 100 chunks, run the tests in parallel, and then combine the results
This is not entirely necessary but drops run-time from ~15 min to about 15 seconds | Python Code:
df = df_hiv.ix[:, pred_c.index]
dd = logit_adj(df)
m = dd.ix[:, ti(duration == 'Control')].mean(1)
s = dd.ix[:, ti(duration == 'Control')].std(1)
df_norm = dd.subtract(m, axis=0).divide(s, axis=0)
df_norm = df_norm.clip(-7,7)
df_norm.shape
Explanation: Logit Transform and Normalize Methylation Data
End of explanation
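logit_adj is a helper imported elsewhere in this analysis and its definition is not shown here; purely as an illustration (an assumption, not the actual implementation), a logit transform of methylation beta values might look like the sketch below, with the clipping epsilon chosen arbitrarily to avoid infinities at 0 and 1:
import numpy as np

def logit_transform(beta, eps=1e-6):
    # map beta values in (0, 1) onto the real line: log(p / (1 - p))
    p = np.clip(beta, eps, 1 - eps)
    return np.log(p / (1.0 - p))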
def chunkify_df(df, store, table_name, N=100):
df = df.dropna(1)
for i in range(N):
g = df.index[i::N]
dd = df.ix[g]
dd.to_hdf(store, '{}/chunk_{}'.format(table_name, i))
duration.ix[df_norm.columns].value_counts()
hiv.value_counts()
store = '/cellar/users/agross/Data/tmp/for_parallel.h5'
store = pd.HDFStore(store)
(hiv == 'HIV+').ix[pred_c.index].to_hdf(store, 'HIV')
#store['bio_age'] = mc_adj_c
#store['cell_counts'] = cell_counts
#store['age'] = age
#store['gender'] = gender == 'M'
#store['bio_age'] = age_adv.append(age_adv0)
chunkify_df(df_norm, store.filename, 'hiv_consented')
store.close()
store.open()
Explanation: Prepare Data for Association Tests
The association tests take a while to run in serial so we do them in a map-reduce type format
The idea is we break the data into 100 chunks, run the tests in parallel, and then combine the results
This is not entirely necessary but drops run-time from ~15 min to about 15 seconds
End of explanation |
3,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: This notebook gives intuition about the basics of Bayesian inference. This first example is borrowed from Cam Davidson-Pilon's online book, Probabilistic Programming & Bayesian Methods for Hackers (https
Step3: Now let's look at the relation of classical and Bayesian inference, in the context of linear regression. This presentation borrows heavily from http
Step4: Now we compute both standard GLM (using ordinary least squares) and Bayesian estimates for the regression, varying the prior variance on the parameters. | Python Code:
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
import numpy
from matplotlib import pyplot as plt
figsize(11, 9)
import scipy.stats as stats
dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)
def hpd(x,y,pct):
return indices for highest posterior density from array
idx=numpy.argsort(y)[::-1]
sorted_data=y[idx]
hits=idx[numpy.where(numpy.cumsum(sorted_data)<=pct)[0]]
return [x[numpy.min(hits)],x[numpy.max(hits)]],hits
print ('95% highest posterior density interval')
# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
sx = plt.subplot(len(n_trials) / 2, 2, k + 1)
plt.xlabel("$p$, probability of heads") \
if k in [0, len(n_trials) - 1] else None
plt.setp(sx.get_yticklabels(), visible=False)
heads = data[:N].sum()
y = dist.pdf(x, 1 + heads, 1 + N - heads)
# compute the 95% highest posterior density
hpdint,hpdhits=hpd(x,y,95)
hpdhits=numpy.sort(hpdhits)
print (k,'tosses',hpdint)
plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
plt.fill_between(x[hpdhits], 0, y[hpdhits], color="#348ABD", alpha=0.4)
plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
plt.suptitle("Bayesian updating of posterior probabilities",
y=1.02,
fontsize=14)
plt.tight_layout()
Explanation: This notebook gives intuition about the basics of Bayesian inference. This first example is borrowed from Cam Davidson-Pilon's online book, Probabilistic Programming & Bayesian Methods for Hackers (https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers)
End of explanation
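The update in the cell above relies on Beta-Binomial conjugacy: starting from a uniform Beta(1, 1) prior on the heads probability $p$ and observing $h$ heads in $N$ tosses, the posterior is exactly the Beta density evaluated by dist.pdf(x, 1 + heads, 1 + N - heads):
$$p(p \mid h, N) = \mathrm{Beta}(1 + h,\; 1 + N - h)$$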
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats
def make_continuous_data(mean=[45,100],var=[10,10],cor=0.6,N=100):
generate a synthetic data set with two variables
cor=numpy.array([[1.,cor],[cor,1.]])
var=numpy.array([[var[0],0],[0,var[1]]])
cov=var.dot(cor).dot(var)
return numpy.random.multivariate_normal(mean,cov,N)
n=25
d=make_continuous_data(N=n)
plt.scatter(d[:,0],d[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
print ('r=',numpy.corrcoef(d.T)[0,1])
Explanation: Now let's look at the relation of classical and Bayesian inference, in the context of linear regression. This presentation borrows heavily from http://www.stats.ox.ac.uk/~cholmes/Courses/BDA/bda_mcmc.pdf
First let's generate some data, using the same code we used in the machine learning example.
End of explanation
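The Bayesian estimate computed in the next cell is the closed-form MAP (ridge-like) estimate under a zero-mean Gaussian prior with variance $\tau^2$ on the coefficients (prior_variance in the code) and estimated noise variance $\hat{\sigma}^2$ (sigma2hat):
$$\hat{\beta}_{Bayes} = \left(X^TX + \frac{\hat{\sigma}^2}{\tau^2} I\right)^{-1} X^T y$$
As $\tau^2 \rightarrow \infty$ this recovers the OLS estimate, while a small $\tau^2$ shrinks the coefficients toward zero.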
y=d[:,1]
X=numpy.vstack((d[:,0]-numpy.mean(d[:,0]),numpy.ones(d.shape[0]))).T
sigma2=1 # this is the variance - just set to 1 for this example
priorvals=10.**numpy.arange(-8,8)
bhat_bayes=numpy.zeros((len(priorvals),2))
bhat_glm=numpy.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
resid=y - X.dot(bhat_glm)
df=(X.shape[0] - X.shape[1])
mse=resid.dot(resid)
sigma2hat=(mse)/float(df)
print ('beta_hat (GLM):',bhat_glm)
for i in range(len(priorvals)):
prior_variance=priorvals[i]
v=numpy.identity(2)*prior_variance
bhat_bayes[i,:]=numpy.linalg.inv(X.T.dot(X) + (sigma2hat/prior_variance)*numpy.identity(2)).dot(X.T.dot(y))
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(range(len(priorvals)),bhat_bayes[:,0])
plt.xticks(range(len(priorvals)),priorvals, rotation='vertical')
plt.xlabel('prior variance')
plt.ylabel('parameter estimate')
plt.plot([0,len(priorvals)],[bhat_glm[0],bhat_glm[0]],color='green')
plt.legend(['Bayesian estimate','OLS estimate'],loc=4)
plt.subplot(122)
plt.plot(range(len(priorvals)),bhat_bayes[:,1])
plt.xticks(range(len(priorvals)),priorvals, rotation='vertical')
plt.xlabel('prior variance')
plt.ylabel('parameter estimate')
plt.plot([0,len(priorvals)],[bhat_glm[1],bhat_glm[1]],color='green')
plt.legend(['Bayesian estimate','OLS estimate'],loc=4)
Explanation: Now we compute both standard GLM (using ordinary least squares) and Bayesian estimates for the regression, varying the prior variance on the parameters.
End of explanation |
3,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple example of generating playlists by multilabel learning
Step1: Data loading
Load playlists.
Step2: Load song_id --> track_id mapping
Step3: Load song tags, build track_id --> tag mapping.
Step4: Data cleaning
Use the subset of playlists such that the first song (i.e. the seed song) in each playlist has tag(s).
Step5: The set of unique songs; in multilabel learning, we have a label for each song in this set.
Step6: Data analysis
For the most part, playlists contain less than 10 songs. The most common playlist length is 2 songs.
Step7: Song_id --> Song_name mapping.
Step8: One-hot tag encoding
Indicator of tags
Step12: Feature extraction
Build features (1-hot encoding of tag) for a song given its song_id.
Step13: Training & Testing
Train a logistic regression model for each label.
Step14: Evaluation
Compute AUC.
Step15: Compute average precision.
Result analysis
Coefficient matrix (#Genres, #Songs).
Step16: Top 10 songs of each genre (w.r.t.) the coefficients. | Python Code:
%matplotlib inline
import os, sys, time
import pickle as pkl
import numpy as np
import pandas as pd
import sklearn as sk
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
import seaborn as sns
data_dir = 'data'
faotm = os.path.join(data_dir, 'aotm-2011/aotm-2011-subset.pkl')
fmap = os.path.join(data_dir, 'aotm-2011/map_song_track.pkl')
ftag = os.path.join(data_dir, 'msd/msd_tagtraum_cd2c.cls')
Explanation: A simple example of generating playlists by multilabel learning
End of explanation
playlists = pkl.load(open(faotm, 'rb'))
print('#Playlists: %d' % len(playlists))
playlists[0]
print('#Songs: %d' % len({songID for p in playlists for songID in p['filtered_lists'][0]}))
lengths = [len(p['filtered_lists'][0]) for p in playlists]
#plt.hist(lengths, bins=20)
print('Average playlist length: %.1f' % np.mean(lengths))
Explanation: Data loading
Load playlists.
End of explanation
song2TrackID = pkl.load(open(fmap, 'rb'))
{ k : song2TrackID[k] for k in list(song2TrackID.keys())[:10] }
Explanation: Load song_id --> track_id mapping: a song may correspond to multiple tracks.
End of explanation
track2Tags = dict()
with open(ftag) as f:
for line in f:
if line[0] == '#': continue
tid, tag = line.strip().split('\t')
#print(tid, tag)
track2Tags[tid] = tag
print('#(Track, Tag): %d' % len(track2Tags))
{ k : track2Tags[k] for k in list(track2Tags.keys())[:10] }
Explanation: Load song tags, build track_id --> tag mapping.
End of explanation
subset_ix = []
seedSong2Tag = { }
for ix in range(len(playlists)):
# the list of song IDs in the playlist
songIDs = playlists[ix]['filtered_lists'][0]
# seed song
seedSongID = songIDs[0]
seedTrackIDs = song2TrackID[seedSongID]
# make sure that at least one track for the song has a corresponding tag
flag = [ (trackID in track2Tags) for trackID in seedTrackIDs]
if not np.any(flag):
continue
seedSong2Tag[playlists[ix]['mix_id']] = [ track2Tags[seedTrackIDs[i]] for i in range(0, len(flag)) if flag[i] == True ]
subset_ix.append(ix)
#seedSong2Tag
playlists_subset = [playlists[ix] for ix in subset_ix]
print('#Playlists used: %d' % len(subset_ix))
Explanation: Data cleaning
Use the subset of playlists such that the first song (i.e. the seed song) in each playlist has tag(s).
End of explanation
song_set = sorted({songID for p in playlists_subset for songID in p['filtered_lists'][0]})
print('#Songs used: %d' % len(song_set))
print(song_set[:10])
Explanation: The set of unique songs; in multilabel learning, we have a label for each song in this set.
End of explanation
playlist_lengths = [len(playlist['filtered_lists'][0]) for playlist in playlists_subset]
plt.hist(playlist_lengths, bins=20)
print('Average playlist length: %.1f' % np.mean(playlist_lengths))
Explanation: Data analysis
For the most part, playlists contain less than 10 songs. The most common playlist length is 2 songs.
End of explanation
songID2Name = {s[1]: s[0] for p in playlists_subset for s in p['playlist']}
#songID2Name
Explanation: Song_id --> Song_name mapping.
End of explanation
# the set of unique tags
tag_set = sorted(set(track2Tags.values()))
print('#Tags: %d' % len(tag_set))
tag_indicator = { tag: ix for ix, tag in enumerate(tag_set) }
tag_indicator
Explanation: One-hot tag encoding
Indicator of tags: tag --> index mapping.
End of explanation
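As a small sketch of what this indicator gives us (using the first tag in tag_set so the example is self-contained), a single tag maps to a one-hot vector as follows; the per-song version in the next cell simply sets one such entry per tagged track:
def tag_to_onehot(tag, tag_indicator=tag_indicator, n_tags=len(tag_set)):
    vec = np.zeros(n_tags)
    vec[tag_indicator[tag]] = 1.0
    return vec
tag_to_onehot(tag_set[0])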
def gen_features(song_id, song2TrackID = song2TrackID, tag_indicator = tag_indicator):
Generate one-hot feature vector for a given song ID
features = np.zeros(len(tag_set), dtype = np.float)
trackIDs = song2TrackID[song_id]
cnt = 0
for trackID in trackIDs:
if trackID in track2Tags:
cnt += 1
tag = track2Tags[trackID]
tag_ix = tag_indicator[tag]
features[tag_ix] = 1
# must have at least one tag for the song, else useless
assert(cnt >= 1)
return features
def gen_feature_map(song_id, seed):
Generate feature mapping for a given (label, query) pair
#return gen_features(song_id) - gen_features(seed) # feature map
return gen_features(seed) # a trivial feature map
def gen_training_set(label_ix, playlists = playlists_subset, song_set = song_set):
Create the labelled dataset for a given song index
Input:
- label_ix: song index, number in { 0, ..., # songs }
- playlists: which playlists to create features for
Output:
- (Feature, Label) pair (X, y), with # num playlists rows
X comprises the features for each seed song and the given song
y comprises the indicator of whether the given song is present in the respective playlist
assert(label_ix >= 0)
assert(label_ix < len(song_set))
N = len(playlists)
d = len(tag_set)
X = np.zeros((N, d), dtype = np.float)
y = np.zeros(N, dtype = np.float)
whichSong = song_set[label_ix]
for i in range(len(playlists)):
playlist = playlists[i]['filtered_lists'][0]
seed = playlist[0]
X[i,:] = gen_feature_map(whichSong, seed)
y[i] = int(whichSong in playlist)
return X, y
gen_feature_map(song_set[100], playlists_subset[0]['filtered_lists'][0][0])
Explanation: Feature extraction
Build features (1-hot encoding of tag) for a song given its song_id.
End of explanation
classifiers = [LogisticRegression(class_weight='balanced') for i in range(len(song_set))]
allPreds = [ ]
allTruths = [ ]
coefMat = [ ]
labelIndices = [ ]
Y = np.NAN * np.ones((len(playlists_subset), len(song_set)))
for label_ix in range(len(song_set)):
X, y = gen_training_set(label_ix)
Y[:,label_ix] = y
# by fixing random seed, the same playlists will be in the test set each time
X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(X, y, \
test_size = 0.33, \
random_state = 31)
if np.max(y_train) == 0.0: # or np.max(y_test) == 0.0:
continue
classifiers[label_ix].fit(X_train, y_train)
allPreds.append(classifiers[label_ix].decision_function(X_test))
allTruths.append(y_test)
coefMat.append(classifiers[label_ix].coef_.reshape(-1))
labelIndices.append(label_ix)
#print(classifiers[label_ix].coef_)
#print(classifiers[label_ix].intercept_)
allPreds = np.array(allPreds).T
allTruths = np.array(allTruths).T
print(allPreds.shape)
print(allTruths.shape)
Explanation: Training & Testing
Train a logistic regression model for each label.
End of explanation
aucs = [ ]
for i in range(0,allPreds.shape[0]):
pred = allPreds[i,:]
truth = allTruths[i,:]
if np.max(truth) == 0.0:
continue
aucs.append(sk.metrics.roc_auc_score(truth, pred))
print('Average AUC: %1.4f' % np.mean(aucs))
plt.hist(aucs, bins = 10);
Explanation: Evaluation
Compute AUC.
End of explanation
coefMat = np.array(coefMat).T
coefMat.shape
#sns.heatmap(coefMat[:, :30])
Explanation: Compute average precision.
Result analysis
Coefficient matrix (#Genres, #Songs).
End of explanation
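The heading above mentions average precision, but only AUC is computed in this notebook; a minimal sketch of the analogous per-playlist computation, reusing the allTruths and allPreds arrays built earlier, might look like this:
aps = []
for i in range(allPreds.shape[0]):
    truth = allTruths[i, :]
    pred = allPreds[i, :]
    if np.max(truth) == 0.0:
        continue
    aps.append(sk.metrics.average_precision_score(truth, pred))
print('Average precision (mean over test playlists): %1.4f' % np.mean(aps))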
labelIndices = np.array(labelIndices)
Top10Songs_ix = [ ]
for i in range(coefMat.shape[0]):
ix = np.argsort(coefMat[i, :])[::-1][:10]
Top10Songs_ix.append(labelIndices[ix])
Bot10Songs_ix = [ ]
for i in range(coefMat.shape[0]):
ix = np.argsort(coefMat[i, :])[:10]
Bot10Songs_ix.append(labelIndices[ix])
#Top10Songs_ix
#np.array(song_set)[Top10Songs_ix[0]]
cols = ['Genre.Count'] + ['Top %d' % k for k in range(1, 11)] + ['Bot %d' % k for k in range(1, 11)]
Top10Songs = pd.DataFrame(np.zeros((len(tag_set), 21), dtype = object),
index = tag_set, columns = cols)
# number of appearances of playlists with each genre
S = X.sum(axis = 0)
idx = np.argsort(S)[::-1]
#[(tag_set[i], S[i]) for i in idx]
# number of appearances of each song in a playlist
plt.hist(Y.sum(axis = 0));
plt.xlabel('# of playlist appearances');
for i in range(len(tag_set)):
row = tag_set[i]
Top10Songs.loc[row, 'Genre.Count'] = S[i]
for j in range(10):
song_ix = Top10Songs_ix[i][j]
songID = song_set[song_ix]
songName = (songID, songID2Name[songID][0], songID2Name[songID][1])
col = 'Top %d' % (j+1)
Top10Songs.loc[row, col] = songName
song_ix = Bot10Songs_ix[i][j]
songID = song_set[song_ix]
songName = (songID, songID2Name[songID][0], songID2Name[songID][1])
col = 'Bot %d' % (j+1)
Top10Songs.loc[row, col] = songName
Top10Songs = Top10Songs.sort_values(['Genre.Count'], ascending=False)
Top10Songs.head(5)
rapPlaylists = [ k for k in seedSong2Tag if 'Rap' in seedSong2Tag[k] ]
[ p['playlist'] for p in playlists_subset if p['mix_id'] in rapPlaylists ]
Explanation: Top 10 songs of each genre (w.r.t.) the coefficients.
End of explanation |
3,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Captioning with RNNs
In this exercise you will implement vanilla recurrent neural networks and use them to train a model that can generate novel captions for images.
Step2: Install h5py
The COCO dataset we will be using is stored in HDF5 format. To load HDF5 files, we will need to install the h5py Python package. From the command line, run
Step3: Microsoft COCO
For this exercise we will use the 2014 release of the Microsoft COCO dataset which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk.
You should have already downloaded the data by changing to the cs231n/datasets directory and running the script get_assignment3_data.sh. If you haven't yet done so, run that script now. Warning
Step4: Look at the data
It is always a good idea to look at examples from the dataset before working with it.
You can use the sample_coco_minibatch function from the file cs231n/coco_utils.py to sample minibatches of data from the data structure returned from load_coco_data. Run the following to sample a small minibatch of training data and show the images and their captions. Running it multiple times and looking at the results helps you to get a sense of the dataset.
Note that we decode the captions using the decode_captions function and that we download the images on-the-fly using their Flickr URL, so you must be connected to the internet to view images.
Step5: Recurrent Neural Networks
As discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file cs231n/rnn_layers.py contains implementations of different layer types that are needed for recurrent neural networks, and the file cs231n/classifiers/rnn.py uses these layers to implement an image captioning model.
We will first implement different types of RNN layers in cs231n/rnn_layers.py.
Vanilla RNN
Step6: Vanilla RNN
Step7: Vanilla RNN
Step8: Vanilla RNN
Step9: Word embedding
Step10: Word embedding
Step11: Temporal Affine layer
At every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the temporal_affine_forward and temporal_affine_backward functions in the file cs231n/rnn_layers.py. Run the following to perform numeric gradient checking on the implementation. You should see errors less than 1e-9.
Step12: Temporal Softmax loss
In an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch.
However there is one wrinkle
Step13: RNN for image captioning
Now that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file cs231n/classifiers/rnn.py and look at the CaptioningRNN class.
Implement the forward and backward pass of the model in the loss function. For now you only need to implement the case where cell_type='rnn' for vanilla RNNs; you will implement the LSTM case later. After doing so, run the following to check your forward pass using a small test case; you should see error less than 1e-10.
Step14: Run the following cell to perform numeric gradient checking on the CaptioningRNN class; you should see errors around 5e-6 or less.
Step15: Overfit small data
Similar to the Solver class that we used to train image classification models on the previous assignment, on this assignment we use a CaptioningSolver class to train image captioning models. Open the file cs231n/captioning_solver.py and read through the CaptioningSolver class; it should look very familiar.
Once you have familiarized yourself with the API, run the following to make sure your model overfit a small sample of 100 training examples. You should see losses of less than 0.1.
Step16: Test-time sampling
Unlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption, so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the vocabulary at each timestep, and feed the sample as input to the RNN at the next timestep.
In the file cs231n/classifiers/rnn.py, implement the sample method for test-time sampling. After doing so, run the following to sample from your overfitted model on both training and validation data. The samples on training data should be very good; the samples on validation data probably won't make sense. | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Image Captioning with RNNs
In this exercise you will implement vanilla recurrent neural networks and use them to train a model that can generate novel captions for images.
End of explanation
!pip install h5py
Explanation: Install h5py
The COCO dataset we will be using is stored in HDF5 format. To load HDF5 files, we will need to install the h5py Python package. From the command line, run: <br/>
pip install h5py <br/>
If you receive a permissions error, you may need to run the command as root: <br/>
sudo pip install h5py
You can also run commands directly from the Jupyter notebook by prefixing the command with the "!" character:
End of explanation
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)
# Print out all the keys and values from the data dictionary
for k, v in data.items():
if type(v) == np.ndarray:
print(k, type(v), v.shape, v.dtype)
else:
print(k, type(v), len(v))
Explanation: Microsoft COCO
For this exercise we will use the 2014 release of the Microsoft COCO dataset which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk.
You should have already downloaded the data by changing to the cs231n/datasets directory and running the script get_assignment3_data.sh. If you haven't yet done so, run that script now. Warning: the COCO data download is ~1GB.
We have preprocessed the data and extracted features for you already. For all images we have extracted features from the fc7 layer of the VGG-16 network pretrained on ImageNet; these features are stored in the files train2014_vgg16_fc7.h5 and val2014_vgg16_fc7.h5 respectively. To cut down on processing time and memory requirements, we have reduced the dimensionality of the features from 4096 to 512; these features can be found in the files train2014_vgg16_fc7_pca.h5 and val2014_vgg16_fc7_pca.h5.
The raw images take up a lot of space (nearly 20GB) so we have not included them in the download. However all images are taken from Flickr, and URLs of the training and validation images are stored in the files train2014_urls.txt and val2014_urls.txt respectively. This allows you to download images on the fly for visualization. Since images are downloaded on-the-fly, you must be connected to the internet to view images.
Dealing with strings is inefficient, so we will work with an encoded version of the captions. Each word is assigned an integer ID, allowing us to represent a caption by a sequence of integers. The mapping between integer IDs and words is in the file coco2014_vocab.json, and you can use the function decode_captions from the file cs231n/coco_utils.py to convert numpy arrays of integer IDs back into strings.
There are a couple special tokens that we add to the vocabulary. We prepend a special <START> token and append an <END> token to the beginning and end of each caption respectively. Rare words are replaced with a special <UNK> token (for "unknown"). In addition, since we want to train with minibatches containing captions of different lengths, we pad short captions with a special <NULL> token after the <END> token and don't compute loss or gradient for <NULL> tokens. Since they are a bit of a pain, we have taken care of all implementation details around special tokens for you.
You can load all of the MS-COCO data (captions, features, URLs, and vocabulary) using the load_coco_data function from the file cs231n/coco_utils.py. Run the following cell to do so:
End of explanation
# Sample a minibatch and show the images and captions
batch_size = 3
captions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)
for i, (caption, url) in enumerate(zip(captions, urls)):
# plt.imshow(image_from_url(url))
# plt.axis('off')
caption_str = decode_captions(caption, data['idx_to_word'])
print("url {}".format(url))
print(caption_str)
# plt.title(caption_str)
# plt.show()
Explanation: Look at the data
It is always a good idea to look at examples from the dataset before working with it.
You can use the sample_coco_minibatch function from the file cs231n/coco_utils.py to sample minibatches of data from the data structure returned from load_coco_data. Run the following to sample a small minibatch of training data and show the images and their captions. Running it multiple times and looking at the results helps you to get a sense of the dataset.
Note that we decode the captions using the decode_captions function and that we download the images on-the-fly using their Flickr URL, so you must be connected to the internet to view images.
End of explanation
N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)
next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
expected_next_h = np.asarray([
[-0.58172089, -0.50182032, -0.41232771, -0.31410098],
[ 0.66854692, 0.79562378, 0.87755553, 0.92795967],
[ 0.97934501, 0.99144213, 0.99646691, 0.99854353]])
print('next_h error: ', rel_error(expected_next_h, next_h))
Explanation: Recurrent Neural Networks
As discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file cs231n/rnn_layers.py contains implementations of different layer types that are needed for recurrent neural networks, and the file cs231n/classifiers/rnn.py uses these layers to implement an image captioning model.
We will first implement different types of RNN layers in cs231n/rnn_layers.py.
Vanilla RNN: step forward
Open the file cs231n/rnn_layers.py. This file implements the forward and backward passes for different types of layers that are commonly used in recurrent neural networks.
First implement the function rnn_step_forward which implements the forward pass for a single timestep of a vanilla recurrent neural network. After doing so run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward
np.random.seed(231)
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
h = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_step_forward(x, h, Wx, Wh, b)
dnext_h = np.random.randn(*out.shape)
fx = lambda x: rnn_step_forward(x, h, Wx, Wh, b)[0]
fh = lambda prev_h: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_step_forward(x, h, Wx, Wh, b)[0]
fb = lambda b: rnn_step_forward(x, h, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dnext_h)
dprev_h_num = eval_numerical_gradient_array(fh, h, dnext_h)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dnext_h)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dnext_h)
db_num = eval_numerical_gradient_array(fb, b, dnext_h)
dx, dprev_h, dWx, dWh, db = rnn_step_backward(dnext_h, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dprev_h error: ', rel_error(dprev_h_num, dprev_h))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
Explanation: Vanilla RNN: step backward
In the file cs231n/rnn_layers.py implement the rnn_step_backward function. After doing so run the following to numerically gradient check your implementation. You should see errors less than 1e-8.
End of explanation
N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)
h, _ = rnn_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
[
[-0.42070749, -0.27279261, -0.11074945, 0.05740409, 0.22236251],
[-0.39525808, -0.22554661, -0.0409454, 0.14649412, 0.32397316],
[-0.42305111, -0.24223728, -0.04287027, 0.15997045, 0.35014525],
],
[
[-0.55857474, -0.39065825, -0.19198182, 0.02378408, 0.23735671],
[-0.27150199, -0.07088804, 0.13562939, 0.33099728, 0.50158768],
[-0.51014825, -0.30524429, -0.06755202, 0.17806392, 0.40333043]]])
print('h error: ', rel_error(expected_h, h))
Explanation: Vanilla RNN: forward
Now that you have implemented the forward and backward passes for a single timestep of a vanilla RNN, you will combine these pieces to implement a RNN that process an entire sequence of data.
In the file cs231n/rnn_layers.py, implement the function rnn_forward. This should be implemented using the rnn_step_forward function that you defined above. After doing so run the following to check your implementation. You should see errors less than 1e-7.
End of explanation
np.random.seed(231)
N, D, T, H = 2, 3, 10, 5
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = rnn_backward(dout, cache)
fx = lambda x: rnn_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: rnn_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: rnn_forward(x, h0, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
print('dx error: ', rel_error(dx_num, dx))
print('dh0 error: ', rel_error(dh0_num, dh0))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
Explanation: Vanilla RNN: backward
In the file cs231n/rnn_layers.py, implement the backward pass for a vanilla RNN in the function rnn_backward. This should run back-propagation over the entire sequence, calling into the rnn_step_backward function that you defined above. You should see errors less than 5e-7.
End of explanation
N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)
out, _ = word_embedding_forward(x, W)
expected_out = np.asarray([
[[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0.42857143, 0.5, 0.57142857]],
[[ 0.42857143, 0.5, 0.57142857],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429]]])
print('out error: ', rel_error(expected_out, out))
Explanation: Word embedding: forward
In deep learning systems, we commonly represent words using vectors. Each word of the vocabulary will be associated with a vector, and these vectors will be learned jointly with the rest of the system.
In the file cs231n/rnn_layers.py, implement the function word_embedding_forward to convert words (represented by integers) into vectors. Run the following to check your implementation. You should see error around 1e-8.
End of explanation
np.random.seed(231)
N, T, V, D = 50, 3, 5, 6
x = np.random.randint(V, size=(N, T))
W = np.random.randn(V, D)
out, cache = word_embedding_forward(x, W)
dout = np.random.randn(*out.shape)
dW = word_embedding_backward(dout, cache)
f = lambda W: word_embedding_forward(x, W)[0]
dW_num = eval_numerical_gradient_array(f, W, dout)
print('dW error: ', rel_error(dW, dW_num))
Explanation: Word embedding: backward
Implement the backward pass for the word embedding function in the function word_embedding_backward. After doing so run the following to numerically gradient check your implementation. You should see errors less than 1e-11.
End of explanation
np.random.seed(231)
# Gradient check for temporal affine layer
N, T, D, M = 2, 3, 4, 5
x = np.random.randn(N, T, D)
w = np.random.randn(D, M)
b = np.random.randn(M)
out, cache = temporal_affine_forward(x, w, b)
dout = np.random.randn(*out.shape)
fx = lambda x: temporal_affine_forward(x, w, b)[0]
fw = lambda w: temporal_affine_forward(x, w, b)[0]
fb = lambda b: temporal_affine_forward(x, w, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dw_num = eval_numerical_gradient_array(fw, w, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
dx, dw, db = temporal_affine_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Temporal Affine layer
At every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the temporal_affine_forward and temporal_affine_backward functions in the file cs231n/rnn_layers.py. Run the following to perform numeric gradient checking on the implementation. You should see errors less than 1e-9.
End of explanation
# Sanity check for temporal softmax loss
from cs231n.rnn_layers import temporal_softmax_loss
N, T, V = 100, 1, 10
def check_loss(N, T, V, p):
x = 0.001 * np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = np.random.rand(N, T) <= p
print(temporal_softmax_loss(x, y, mask)[0])
check_loss(100, 1, 10, 1.0) # Should be about 2.3
check_loss(100, 10, 10, 1.0) # Should be about 23
check_loss(5000, 10, 10, 0.1) # Should be about 2.3
# Gradient check for temporal softmax loss
N, T, V = 7, 8, 9
x = np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = (np.random.rand(N, T) > 0.5)
loss, dx = temporal_softmax_loss(x, y, mask, verbose=False)
dx_num = eval_numerical_gradient(lambda x: temporal_softmax_loss(x, y, mask)[0], x, verbose=False)
print('dx error: ', rel_error(dx, dx_num))
Explanation: Temporal Softmax loss
In an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch.
However there is one wrinkle: since we operate over minibatches and different captions may have different lengths, we append <NULL> tokens to the end of each caption so they all have the same length. We don't want these <NULL> tokens to count toward the loss or gradient, so in addition to scores and ground-truth labels our loss function also accepts a mask array that tells it which elements of the scores count towards the loss.
Since this is very similar to the softmax loss function you implemented in assignment 1, we have implemented this loss function for you; look at the temporal_softmax_loss function in the file cs231n/rnn_layers.py.
Run the following cell to sanity check the loss and perform numeric gradient checking on the function. You should see an error for dx less than 1e-7.
End of explanation
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13
model = CaptioningRNN(word_to_idx,
input_dim=D,
wordvec_dim=W,
hidden_dim=H,
cell_type='rnn',
dtype=np.float64)
# Set all model parameters to fixed values
for k, v in model.params.items():
model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)
features = np.linspace(-1.5, 0.3, num=(N * D)).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)
loss, grads = model.loss(features, captions)
expected_loss = 9.83235591003
print('loss: ', loss)
print('expected loss: ', expected_loss)
print('difference: ', abs(loss - expected_loss))
Explanation: RNN for image captioning
Now that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file cs231n/classifiers/rnn.py and look at the CaptioningRNN class.
Implement the forward and backward pass of the model in the loss function. For now you only need to implement the case where cell_type='rnn' for vanialla RNNs; you will implement the LSTM case later. After doing so, run the following to check your forward pass using a small test case; you should see error less than 1e-10.
End of explanation
np.random.seed(231)
batch_size = 2
timesteps = 3
input_dim = 4
wordvec_dim = 5
hidden_dim = 6
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
vocab_size = len(word_to_idx)
captions = np.random.randint(vocab_size, size=(batch_size, timesteps))
features = np.random.randn(batch_size, input_dim)
model = CaptioningRNN(word_to_idx,
input_dim=input_dim,
wordvec_dim=wordvec_dim,
hidden_dim=hidden_dim,
cell_type='rnn',
dtype=np.float64,
)
loss, grads = model.loss(features, captions)
for param_name in sorted(grads):
f = lambda _: model.loss(features, captions)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s relative error: %e' % (param_name, e))
Explanation: Run the following cell to perform numeric gradient checking on the CaptioningRNN class; you should errors around 5e-6 or less.
End of explanation
np.random.seed(231)
small_data = load_coco_data(max_train=50)
small_rnn_model = CaptioningRNN(
cell_type='rnn',
word_to_idx=data['word_to_idx'],
input_dim=data['train_features'].shape[1],
hidden_dim=512,
wordvec_dim=256,
)
small_rnn_solver = CaptioningSolver(small_rnn_model, small_data,
update_rule='adam',
num_epochs=50,
batch_size=25,
optim_config={
'learning_rate': 5e-3,
},
lr_decay=0.95,
verbose=True, print_every=10,
)
small_rnn_solver.train()
# Plot the training losses
plt.plot(small_rnn_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
Explanation: Overfit small data
Similar to the Solver class that we used to train image classification models on the previous assignment, on this assignment we use a CaptioningSolver class to train image captioning models. Open the file cs231n/captioning_solver.py and read through the CaptioningSolver class; it should look very familiar.
Once you have familiarized yourself with the API, run the following to make sure your model overfit a small sample of 100 training examples. You should see losses of less than 0.1.
End of explanation
for split in ['train', 'val']:
minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
gt_captions, features, urls = minibatch
gt_captions = decode_captions(gt_captions, data['idx_to_word'])
sample_captions = small_rnn_model.sample(features)
sample_captions = decode_captions(sample_captions, data['idx_to_word'])
for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
plt.imshow(image_from_url(url))
plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
plt.axis('off')
plt.show()
Explanation: Test-time sampling
Unlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption, so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the vocabulary at each timestep, and feed the sample as input to the RNN at the next timestep.
In the file cs231n/classifiers/rnn.py, implement the sample method for test-time sampling. After doing so, run the following to sample from your overfitted model on both training and validation data. The samples on training data should be very good; the samples on validation data probably won't make sense.
End of explanation |
3,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EXP 2-HighOrder
In this experiment we generate 1000 high-order sequences each comprising 10 SDRs. The process of generating these sequences is as follows
Step1: Feed sequences to the TM
Step2: ISI analysis (with Poisson model too)
Step3: Raster Plots
Step4: Quick Accuracy Test
Step5: Elad Plot
Step6: Save TM
Step7: Analysis of input | Python Code:
import numpy as np
import random
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from nupic.bindings.algorithms import TemporalMemory as TM
from htmresearch.support.neural_correlations_utils import *
uintType = "uint32"
random.seed(1)
symbolsPerSequence = 10
numSequences = 1000
epochs = 10
totalTS = epochs * numSequences * symbolsPerSequence
tm = TM(columnDimensions = (2048,),
cellsPerColumn=8,
initialPermanence=0.21,
connectedPermanence=0.3,
minThreshold=15,
maxNewSynapseCount=40,
permanenceIncrement=0.1,
permanenceDecrement=0.1,
activationThreshold=15,
predictedSegmentDecrement=0.01,
)
sparsity = 0.02
sparseCols = int(tm.numberOfColumns() * sparsity)
Explanation: EXP 2-HighOrder
In this experiment we generate 1000 high-order sequences each comprising 10 SDRs. The process of generating these sequences is as follows: We generate a sequence S with 10 random SDRs. Then, we generate sequence S' by substituting n number of SDRs at the beginning and end of S by choosing 2n random SDRs. We repeat this process for 500 times which results in 1000 sequences. We present these sequences to the TM with learning "on". Each training epoch starts by shuffling the 1000 sequences and presenting each of them to the TM. During the simulation we keep track of spike trains from all cells. We use this data to estimate pairwise correlations among cells.
End of explanation
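generateRandomSequence and generateHOSequence come from htmresearch.support.neural_correlations_utils and are not shown here; purely as an illustration of the idea (an assumption about the representation, not the helpers' actual code), a random SDR at the chosen sparsity can be pictured as a small random set of active column indices:
# Illustration only: one random SDR as a set of active column indices at ~2% sparsity
exampleSDR = set(np.random.permutation(tm.numberOfColumns())[:sparseCols])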
# create sequences
allSequences = []
for s in range(numSequences):
if s % 2 == 0:
sequence = generateRandomSequence(symbolsPerSequence, tm.numberOfColumns(), sparsity)
allSequences.append(sequence)
else:
sequenceHO = generateHOSequence(sequence, symbolsPerSequence, tm.numberOfColumns(), sparsity)
allSequences.append(sequenceHO)
spikeTrains = np.zeros((tm.numberOfCells(), totalTS), dtype = "uint32")
columnUsage = np.zeros(tm.numberOfColumns(), dtype="uint32")
spikeCount = np.zeros(totalTS, dtype="uint32")
ts = 0
entropyX = []
entropyY = []
negPCCX_cells = []
negPCCY_cells = []
numSpikesX = []
numSpikesY = []
numSpikes = 0
negPCCX_cols = []
negPCCY_cols = []
traceX = []
traceY = []
# Randomly generate the indices of the columns to keep track during simulation time
colIndicesLarge = np.random.permutation(tm.numberOfColumns())[0:125] # keep track of 125 columns = 1000 cells
for epoch in range(epochs):
# shuffle sequences
print ""
print "Epoch: " + str(epoch)
seqIndices = np.random.permutation(np.arange(numSequences))
for s in range(numSequences):
tm.reset()
if s > 0 and s % 100 == 0:
print str(s) + " sequences processed"
for symbol in range(symbolsPerSequence):
tm.compute(allSequences[seqIndices[s]][symbol], learn=True)
for cell in tm.getActiveCells():
spikeTrains[cell, ts] = 1
numSpikes += 1
spikeCount[ts] += 1
# Obtain active columns:
activeColumnsIndices = [tm.columnForCell(i) for i in tm.getActiveCells()]
currentColumns = [1 if i in activeColumnsIndices else 0 for i in range(tm.numberOfColumns())]
for col in np.nonzero(currentColumns)[0]:
columnUsage[col] += 1
if ts > 0 and ts % int(totalTS * 0.1) == 0:
numSpikesX.append(ts)
numSpikesY.append(numSpikes)
numSpikes = 0
#print "++ Analyzing correlations (cells at random) ++"
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelations(subSpikeTrains, removeAutoCorr=True)
negPCCX_cells.append(ts)
negPCCY_cells.append(numNegPCC)
bins = 300
plt.hist(corrMatrix.ravel(), bins, alpha=0.5)
plt.xlim(-0.05,0.1)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("cellsHist_" + str(ts))
plt.close()
traceX.append(ts)
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.5))
#traceY.append(np.std(corrMatrix))
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > -0.05 and i < 0.1))
traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.0))
entropyX.append(ts)
entropyY.append(computeEntropy(subSpikeTrains))
#print "++ Analyzing correlations (whole columns) ++"
### First the LARGE subsample of columns:
subSpikeTrains = subSampleWholeColumn(spikeTrains, colIndicesLarge, tm.getCellsPerColumn(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelationsWithinCol(subSpikeTrains, True, tm.getCellsPerColumn())
negPCCX_cols.append(ts)
negPCCY_cols.append(numNegPCC)
#print "++ Generating histogram ++"
plt.hist(corrMatrix.ravel(), alpha=0.5)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("colsHist_" + str(ts))
plt.close()
ts += 1
print "*** DONE ***"
plt.plot(traceX, traceY)
plt.xlabel("Time")
plt.ylabel("Positive PCC Count")
plt.savefig("positivePCCTrace")
plt.close()
sparsityTraceX = []
sparsityTraceY = []
for i in range(totalTS - 1000):
sparsityTraceX.append(i)
sparsityTraceY.append(np.mean(spikeCount[i:1000 + i]) / tm.numberOfCells())
plt.plot(sparsityTraceX, sparsityTraceY)
plt.xlabel("Time")
plt.ylabel("Sparsity")
plt.savefig("sparsityTrace")
plt.close()
# plot trace of negative PCCs
plt.plot(negPCCX_cells, negPCCY_cells)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cells")
plt.close()
plt.plot(negPCCX_cols, negPCCY_cols)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cols")
plt.close()
plt.plot(numSpikesX, numSpikesY)
plt.xlabel("Time")
plt.ylabel("Num Spikes")
plt.savefig("numSpikesTrace")
plt.close()
# plot entropy
plt.plot(entropyX, entropyY)
plt.xlabel("Time")
plt.ylabel("Entropy")
plt.savefig("entropyTM")
plt.close()
plt.hist(columnUsage)
plt.xlabel("Number of times active")
plt.ylabel("Number of columns")
plt.savefig("columnUsage")
plt.close()
Explanation: Feed sequences to the TM
End of explanation
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), 0, 0)
isi = computeISI(subSpikeTrains)
# Print ISI distribution of TM
#bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
plt.xlim(0,1000)
# plt.xlim(89500,92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiTM")
plt.close()
print np.mean(isi)
print np.std(isi)
print np.std(isi)/np.mean(isi)
# Generate spike distribution
spikeCount = []
for cell in range(np.shape(subSpikeTrains)[0]):
spikeCount.append(np.count_nonzero(subSpikeTrains[cell,:]))
bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHistTM")
plt.close()
#firingRate = 18
firingRate = np.mean(subSpikeTrains) * 1000
print "firing rate: " + str(firingRate)
pSpikeTrain = poissonSpikeGenerator(firingRate,np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
isi = computeISI(pSpikeTrain)
# Print ISI distribution of Poisson model
#bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
plt.xlim(0,600)
# plt.xlim(89500,92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiPOI")
plt.close()
print np.mean(isi)
print np.std(isi)
print np.std(isi)/np.mean(isi)
# Generate spike distribution
spikeCount = []
for cell in range(np.shape(pSpikeTrain)[0]):
spikeCount.append(np.count_nonzero(pSpikeTrain[cell,:]))
bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHistPOI")
plt.close()
Explanation: ISI analysis (with Poisson model too)
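For reference, the Poisson comparison used below can be thought of as the following minimal stand-in (only a sketch of a homogeneous Poisson model — not necessarily how the imported poissonSpikeGenerator helper is implemented):
import numpy as np
def poisson_spike_trains(rate_hz, num_steps, num_cells, dt=0.001):
    # in each 1 ms bin every cell fires independently with probability rate_hz * dt,
    # so the inter-spike intervals are approximately exponentially distributed
    return (np.random.rand(num_cells, num_steps) < rate_hz * dt).astype("uint32")
demo = poisson_spike_trains(rate_hz=18.0, num_steps=10000, num_cells=100)
print(demo.mean() * 1000.0)   # empirical firing rate in Hz, close to 18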
End of explanation
subSpikeTrains = subSample(spikeTrains, 100, tm.numberOfCells(), -1, 1000)
rasterPlot(subSpikeTrains, "TM")
pSpikeTrain = poissonSpikeGenerator(firingRate,np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
rasterPlot(pSpikeTrain, "Poisson")
Explanation: Raster Plots
End of explanation
simpleAccuracyTest("random", tm, allSequences)
Explanation: Quick Accuracy Test
End of explanation
# Sample from both TM_SpikeTrains and Poisson_SpikeTrains. 10 cells for 1000 (?) timesteps
wordLength = 10
firingRate = np.mean(subSpikeTrains) * 1000
# generate all 2^N strings:
import itertools  # make the dependency explicit; the wildcard import above may not provide it
binaryStrings = list(itertools.product([0, 1], repeat=wordLength))
trials = 10
x = [] #observed
y = [] #predicted by random model
for t in range(trials):
print "Trial: " + str(t)
# sample from spike trains
subSpikeTrains = subSample(spikeTrains, wordLength, tm.numberOfCells(), 0, 0)
pSpikeTrain = poissonSpikeGenerator(firingRate,np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
for i in range(2**wordLength):
if i == 0:
continue
# if i % 100 == 0:
# print str(i) + " words processed"
binaryWord = np.array(binaryStrings[i], dtype="uint32")
x.append(countInSample(binaryWord, subSpikeTrains))
y.append(countInSample(binaryWord, pSpikeTrain))
# print "**All words processed**"
# print ""
print "*** DONE ***"
plt.loglog(x, y, 'bo',basex=10)
plt.xlabel("Observed")
plt.ylabel("Predicted")
plt.plot(x,x,'k-')
plt.xlim(0,np.max(x))
plt.savefig("EladPlot")
plt.close()
Explanation: Elad Plot
End of explanation
saveTM(tm)
# to load the TM back from the file do:
with open('tm.nta', 'rb') as f:
proto2 = TemporalMemoryProto_capnp.TemporalMemoryProto.read(f, traversal_limit_in_words=2**61)
tm = TM.read(proto2)
Explanation: Save TM
End of explanation
overlapMatrix = inputAnalysis(allSequences, "random", tm.numberOfColumns())
# show heatmap of overlap matrix
plt.imshow(overlapMatrix, cmap='spectral', interpolation='nearest')
cb = plt.colorbar()
cb.set_label('Overlap Score')
plt.savefig("overlapScore_heatmap")
plt.close()
# plt.show()
# generate histogram
bins = 60
(n, bins, patches) = plt.hist(overlapMatrix.ravel(), bins, alpha=0.5)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist")
plt.xlim(0.1,1.0)
plt.ylim(0,200000)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist_ZOOM")
plt.close()
flag = False
for i in range(numSequences*symbolsPerSequence):
for j in range(numSequences*symbolsPerSequence):
if overlapMatrix[i,j] == 1:
print i,j
flag = True
break
if flag == True:
break
print overlapMatrix[1,11]
print allSequences[0][1]
print allSequences[1][1]
print percentOverlap(allSequences[0][1], allSequences[1][1], tm.numberOfColumns())
Explanation: Analysis of input
End of explanation |
3,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scale heights for typical atmospheric soundings
Plot McClatchey's US Standard Atmospheres
There are five different average profiles for the tropics, subarctic summer, subarctic winter, midlatitude summer, midlatitude winter. These are called the US Standard Atmospheres. This notebook shows how to read and plot the soundings, and calculate the pressure and density scale heights.
Step1: Reading the h5 file
Use HDFViewer to check the layout of std_soundings.h5. There are five soundings.
The soundings have six columns and 33 rows (i.e. 33 height levels). The variables are
z, press, temp, rmix, den, o3den -- where rmix is the mixing ratio of water vapor, den is the dry air density and o3den is the ozone density. The units are
m, pa, K, kg/kg, kg/m^3, kg/m^3
I will read the 6 column soundings into a pandas (panel data) DataFrame, which is like a matrix except the columns can be accessed by column name in addition to column number. The main advantage for us is that it's easier to keep track of which variables we're plotting
Step2: We use these keys to get a dataframe with 6 columns, and 33 levels. Here's an example for the midsummer sounding
Step3: Plot temp and vapor mixing ratio rmix ($\rho_{H2O}/\rho_{air}$)
Step5: Calculating scale heights for temperature and air density
Here is equation 1 of the hydrostatic balance notes
$$\frac{ 1}{\overline{H_p}} = \overline{ \left ( \frac{1 }{H} \right )} = \frac{\int_{0 }^{z}\!\frac{1}{H}\, dz^\prime }{z-0} $$
where
$$H=R_d T/g$$
and here is the Python code to do that integral
Step7: Similarly, equation (5) of the hydrostatic balance notes
is
Step8: How do $\overline{H_p}$ and $\overline{H_\rho}$ compare for the tropical sounding?
Step9: How well do these average values represent the pressure and density profiles?
Step10: Now check the hydrostatic approximation by plotting the pressure column against
$$p(z) = p_0 \exp \left (-z/\overline{H_p} \right )$$
vs. the actual sounding p(T)
Step11: Again plot the hydrostatic approximation
$$\rho(z) = \rho_0 \exp \left (-z/\overline{H_\rho} \right )$$
vs. the actual sounding $\rho(z)$
Step12: <a name="oct7assign"></a>
Assignment for Friday
Add cells to this notebook to
Step13: 2. Define a function that takes a sounding dataframe and returns the "total precipitable water", which is defined as | Python Code:
from matplotlib import pyplot as plt
import matplotlib.ticker as ticks
import urllib
import numpy as np
from a301utils.a301_readfile import download
import h5py
filename='std_soundings.h5'
download(filename)
Explanation: Scale heights for typical atmospheric soundings
Plot McClatchey's US Standard Atmospheres
There are five different average profiles for the tropics, subarctic summer, subarctic winter, midlatitude summer, midlatitude winter. These are called the US Standard Atmospheres. This notebook shows how to read and plot the soundings, and calculate the pressure and density scale heights.
End of explanation
from pandas import DataFrame
with h5py.File(filename) as infile:
sound_dict={}
print('soundings: ',list(infile.keys()))
#
# names are separated by commas, so split them up
# and strip leading blanks
#
column_names=infile.attrs['variable_names'].split(',')
column_names = [item.strip() for item in column_names]
column_units = infile.attrs['units'].split(',')
column_units = [item.strip() for item in column_units]
for name in infile.keys():
data = infile[name][...]
sound_dict[name]=DataFrame(data,columns=column_names)
Explanation: Reading the h5 file
Use HDFViewer to check the layout of std_soundings.h5. There are five soundings.
The soundings have six columns and 33 rows (i.e. 33 height levels). The variables are
z, press, temp, rmix, den, o3den -- where rmix is the mixing ratio of water vapor, den is the dry air density and o3den is the ozone density. The units are
m, pa, K, kg/kg, kg/m^3, kg/m^3
I will read the 6 column soundings into a pandas (panel data) DataFrame, which is like a matrix except the columns can be accessed by column name in addition to column number. The main advantage for us is that it's easier to keep track of which variables we're plotting
End of explanation
midsummer=sound_dict['midsummer']
print(midsummer.head())
list(midsummer.columns)
Explanation: We use these keys to get a dataframe with 6 columns, and 33 levels. Here's an example for the midsummer sounding
End of explanation
%matplotlib inline
plt.style.use('ggplot')
meters2km=1.e-3
plt.close('all')
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(11,8))
for a_name,df in sound_dict.items():
ax1.plot(df['temp'],df['z']*meters2km,label=a_name)
ax1.set(ylim=(0,40),title='Temp soundings',ylabel='Height (km)',
xlabel='Temperature (K)')
ax2.plot(df['rmix']*1.e3,df['z']*meters2km,label=a_name)
ax2.set(ylim=(0,8),title='Vapor soundings',ylabel='Height (km)',
xlabel='vapor mixing ratio (g/kg)')
ax1.legend()
_=ax2.legend()
Explanation: Plot temp and vapor mixing ratio rmix ($\rho_{H2O}/\rho_{air}$)
End of explanation
g=9.8 #don't worry about g(z) for this exercise
Rd=287. #kg/m^3
def calcScaleHeight(df):
Calculate the pressure scale height H_p
Parameters
----------
df: pandas DataFrame
sounding with columns 'z' (height, m) and 'temp' (temperature, K)
Returns
-------
Hbar: float
pressure scale height (m)
z=df['z'].values
Temp=df['temp'].values
dz=np.diff(z)
TLayer=(Temp[1:] + Temp[0:-1])/2.
oneOverH=g/(Rd*TLayer)
Zthick=z[-1] - z[0]
oneOverHbar=np.sum(oneOverH*dz)/Zthick
Hbar = 1/oneOverHbar
return Hbar
Explanation: Calculating scale heights for temperature and air density
Here is equation 1 of the hydrostatic balance notes
$$\frac{ 1}{\overline{H_p}} = \overline{ \left ( \frac{1 }{H} \right )} = \frac{\int_{0 }^{z}\!\frac{1}{H}\, dz^\prime }{z-0} $$
where
$$H=R_d T/g$$
and here is the Python code to do that integral:
End of explanation
def calcDensHeight(df):
Calculate the density scale height H_rho
Parameters
----------
df: pandas DataFrame
sounding with columns 'z' (height, m) and 'temp' (temperature, K)
Returns
-------
Hbar: float
density scale height (m)
z=df['z'].values
Temp=df['temp'].values
dz=np.diff(z)
TLayer=(Temp[1:] + Temp[0:-1])/2.
dTdz=np.diff(Temp)/np.diff(z)
oneOverH=g/(Rd*TLayer) + (1/TLayer*dTdz)
Zthick=z[-1] - z[0]
oneOverHbar=np.sum(oneOverH*dz)/Zthick
Hbar = 1/oneOverHbar
return Hbar
Explanation: Similarly, equation (5) of the hydrostatic balance notes
is:
$$\frac{d\rho }{\rho} = - \left ( \frac{1 }{H} +
\frac{1 }{T} \frac{dT }{dz} \right ) dz \equiv - \frac{dz }{H_\rho} $$
Which leads to
$$\frac{ 1}{\overline{H_\rho}} = \frac{\int_{0 }^{z}\!\left [ \frac{1}{H} + \frac{1 }{T} \frac{dT }{dz} \right ] dz^\prime }{z-0} $$
and the following python function:
End of explanation
sounding='tropics'
#
# grab the dataframe and get the sounding columns
#
df=sound_dict[sounding]
#
# limit calculation to bottom 20 km
#
top = 20.e3
df = df.loc[df['z']<top]
Hbar= calcScaleHeight(df)
Hrho= calcDensHeight(df)
print("pressure scale height for the {} sounding is {:5.2f} km".format(sounding,Hbar*1.e-3))
print("density scale height for the {} is {:5.2f} km".format(sounding,Hrho*1.e-3))
Explanation: How do $\overline{H_p}$ and $\overline{H_\rho}$ compare for the tropical sounding?
End of explanation
theFig,theAx=plt.subplots(1,1)
theAx.semilogy(df['temp'].values,df['press'].values/100.)
#
# need to flip the y axis since pressure decreases with height
#
theAx.invert_yaxis()
tickvals=[1000,800, 600, 400, 200, 100, 50,1]
theAx.set_yticks(tickvals)
majorFormatter = ticks.FormatStrFormatter('%d')
theAx.yaxis.set_major_formatter(majorFormatter)
theAx.set_yticklabels(tickvals)
theAx.set_ylim([1000.,50.])
theAx.set_title('{} temperature profile'.format(sounding))
theAx.set_xlabel('Temperature (K)')
_=theAx.set_ylabel('pressure (hPa)')
Explanation: How well do these average values represent the pressure and density profiles?
End of explanation
fig,theAx=plt.subplots(1,1)
hydroPress=df['press'].values[0]*np.exp(-df['z'].values/Hbar)
theAx.plot(df['press'].values/100.,df['z'].values/1000.,label='sounding')
theAx.plot(hydroPress/100.,df['z'].values/1000.,label='hydrostat approx')
theAx.set_title('height vs. pressure for tropics')
theAx.set_xlabel('pressure (hPa)')
theAx.set_ylabel('height (km)')
theAx.set_xlim([500,1000])
theAx.set_ylim([0,5])
tickVals=[500, 600, 700, 800, 900, 1000]
theAx.set_xticks(tickVals)
theAx.set_xticklabels(tickVals)
_=theAx.legend(loc='best')
Explanation: Now check the hydrostatic approximation by plotting the pressure column against
$$p(z) = p_0 \exp \left (-z/\overline{H_p} \right )$$
vs. the actual sounding p(T):
End of explanation
fig,theAx=plt.subplots(1,1)
hydroDens=df['den'].values[0]*np.exp(-df['z']/Hrho)
theAx.plot(df['den'],df['z'].values/1000.,label='sounding')
theAx.plot(hydroDens,df['z'].values/1000.,label='hydrostat approx')
theAx.set_title('height vs. density for the tropics')
theAx.set_xlabel('density ($kg\,m^{-3}$)')
theAx.set_ylabel('height (km)')
theAx.set_ylim([0,5])
_=theAx.legend(loc='best')
Explanation: Again plot the hydrostatic approximation
$$\rho(z) = \rho_0 \exp \left (-z/\overline{H_\rho} \right )$$
vs. the actual sounding $\rho(z)$:
End of explanation
for name,df in sound_dict.items():
top = 10.e3
df = df.loc[df['z']<top]
press_height = calcScaleHeight(df)
dens_height = calcDensHeight(df)
print('{}: \npress height={:5.2f} km, \ndens height = {:5.2f} km\n'\
.format(name,press_height/1.e3,dens_height/1.e3))
Explanation: <a name="oct7assign"></a>
Assignment for Friday
Add cells to this notebook to:
1. Print out the density and pressure scale heights for each of the five soundings
End of explanation
def calc_wv(df):
rhov = df['rmix'].values*df['den'].values
mid_rhov = (rhov[1:] + rhov[:-1])/2.
col_wv = np.sum(mid_rhov*np.diff(df['z'].values))
#
# convert kg/m^3 to meters, and meters to cm
#
col_wv = col_wv/1000.*100.
return col_wv
for name,df in sound_dict.items():
top = 10.e3
df = df.loc[df['z']<top]
col_wv = calc_wv(df)
print('{}: wv = {:5.2f} cm'.format(name,col_wv))
Explanation: 2. Define a function that takes a sounding dataframe and returns the "total precipitable water", which is defined as:
$$W = \int_0^{z_{top}} \rho_v dz $$
Do a change of units to convert $kg\,m^{-2}$ into cm of liquid water using the density of liquid water (1000 $kg\,m^{-3}$) -- that is, turn the kg of water in the 1 square meter column into cubic meters of liquid and then into a layer depth in cm (a short worked example follows this list)
3. Use your function to print out W for all five soundings
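As a quick check of the unit conversion in step 2: if the column above 1 $m^2$ holds 10 $kg$ of water vapor, condensing it gives $10/1000 = 0.01\,m^3$ of liquid water spread over that same square meter, i.e. a layer $0.01\,m = 1\,cm$ deep, so $W = 1\,cm$.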
End of explanation |
3,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> ILI286 - Computación Científica II </h1>
<h2> Ecuaciones Diferenciales Parciales
Step1: <div id='intro' />
Introducción
En el siguiente notebook se estudia la resolución numérica de ecuaciones diferenciales parciales elípticas. La resolución de estas tiene gran importancia, ya que aparecen repetidas veces en diferentes modelos físicos relacionados con los potenciales de energía. Por ejemplo, el potencial de carga eléctrica de acuerdo a las ecuaciones de Maxwell puede ser escrito como
Step2: en donde cada punto interior (azul), representa un punto donde queremos conocer el valor de la función $u(x,y)$. Consideraremos además que $h_x$ y $h_y$ el space step de la malla.
Dado que las derivadas del Laplaciano son parciales, se puede utilizar diferencias finitas sin ninguna modificación. Para aproximar cada segunda derivada se utiliza diferencias centradas (centered difference formula), que posee un error de orden cuadrático en $h$.
$$
\underbrace{ \frac{u(x-h_x,y) - 2 u(x,y) + u(x+h_x, y)}{h_x^2} + O(h_x^2) }{= u{xx}(x,y)} + \underbrace{ \frac{u(x,y-h_y) - 2 u(x,y) + u(x, y+h_y)}{h_y^2} + O(h_y^2) }{= u{yy}(x,y)} = f(x,y)
$$
Ocupando la notación $u(x_i,y_j) = w_{ij}$ para el punto $(x_i, y_j)$ de la malla, la ecuación de discretización es finalmente
Step3: La función solve_laplace() es la encargada de construir el sistema lineal correspondiente para al problema P a resolver.
Q
Step4: Ecuación de Helmholtz
Buscaremos resolver la ecuación de Helmholtz con condición de frontera de Dirichlet
Step5: La función solve_herlmotz() es la encargada de construir el sistema lineal correspondiente para al problema P a resolver.
Q | Python Code:
import numpy as np
from mpl_toolkits.mplot3d import axes3d
from matplotlib import pyplot as plt
from ipywidgets import interact
from ipywidgets import IntSlider
import sympy as sym
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
sym.init_printing()
%matplotlib inline
def plot(x,y,w,elev=40,azim=230):
# Plot the solution
X,Y = np.meshgrid(y,x)
W = w.reshape(X.shape)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(X, Y, W)
#ax.plot_surface(X, Y, W, alpha=0.25)
plt.xlabel("y")
plt.ylabel("x")
#ax.set_zlim(0.,1.)
#plt.savefig("sol%dx%d.png"%(Nx+1,Ny+1))
ax.view_init(elev,azim)
plt.show()
Explanation: <center>
<h1> ILI286 - Computación Científica II </h1>
<h2> Partial Differential Equations: Elliptic </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.14</h2>
</center>
Table of Contents
Introduction
Theoretical Background
Poisson Equation
Helmholtz Equation
Acknowledgements
End of explanation
x = np.linspace(0., 1., 10)
y = np.linspace(0., 1., 10)
xgrid, ygrid = np.meshgrid(x, y, sparse=False)
plt.figure(figsize=(8,8))
plt.scatter(xgrid.ravel(), ygrid.ravel())
plt.plot((0,1),(0,0), 'g--' ,lw=2)
plt.plot((1,1),(0,1), 'g--' ,lw=2)
plt.plot((1,0),(1,1), 'g--' ,lw=2)
plt.plot((0,0),(1,0), 'g--' ,lw=2)
plt.title('Esquema de discretizacion uniforme')
plt.xlim(-0.1,1.1)
plt.ylim(-0.1,1.1)
plt.axis('equal')
plt.show()
Explanation: <div id='intro' />
Introduction
In this notebook we study the numerical solution of elliptic partial differential equations. Solving them is of great importance, since they appear repeatedly in different physical models related to energy potentials. For example, the electric potential of a charge distribution according to Maxwell's equations can be written as:
$$
\Delta u = -\frac{\rho}{\epsilon},
$$
with $u$ the electric potential, $\rho$ the charge density, and $\epsilon$ the electric permittivity.
The numerical method we will study is based on finite differences, in the same way ODEs were solved earlier; that is, the derivatives are approximated by gradients computed numerically from the neighboring values of each point. The derivatives are now partial derivatives, but as we will see this is not a big problem.
<div id='teo' />
Theoretical Background
If we consider a twice-differentiable function $u(x,y)$, the Laplacian operator is defined as:
$$
\Delta u(x,y) = u_{xx}(x,y) + u_{yy}(x,y),
$$
and if we additionally consider a function $f(x,y)$, it is possible to define:
$$
\Delta u(x,y) = u_{xx}(x,y) + u_{yy}(x,y) = f(x,y), \ \ \ \text{with } \ x,y \in \Omega \ \text{ and boundary conditions on } \ \partial \Omega
$$
as the Poisson equation, which is one of the best-known equations within the elliptic class. The particular case where $f(x,y) = 0$ is known as the Laplace equation.
Numerical formulation
Consider the Laplace equation $\Delta u(x,y) = 0$ on a rectangular domain $[x_a, x_b] \times [y_a, y_b]$, with Dirichlet boundary conditions:
\begin{align}
u(x,y_a) &= g_1(x) \\
u(x,y_b) &= g_2(x) \\
u(x_a,y) &= g_3(y) \\
u(x_b,y) &= g_4(y) .
\end{align}
To solve this problem with finite differences, the domain $\Omega$ on which the function is defined must be discretized. Such a discretization can be seen graphically as shown below:
End of explanation
# Problema 1
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : x*y
bottom = lambda x : 0# np.sin(np.pi*x)
top = lambda x : 0 #np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P1 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 2
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : x
bottom = lambda x : np.sin(np.pi*x)
top = lambda x : np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P2 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 3
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : 0
bottom = lambda x : np.sin(np.pi*x)
top = lambda x : -np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P3 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 4
xmin, xmax = 0, 1.
ymin, ymax = 0, 1.
f = lambda x,y : x*np.exp(y)
bottom = lambda x : x
top = lambda x : x*np.exp(1)
left = lambda y : 0*y
right = lambda y: np.exp(y)
P4 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
P_Poisson=[('P1', P1), ('P2', P2),('P3', P3),('P4', P4)]
Explanation: where each interior point (blue) represents a point at which we want to know the value of the function $u(x,y)$. We will also take $h_x$ and $h_y$ to be the space steps of the mesh.
Since the derivatives in the Laplacian are partial derivatives, finite differences can be used without any modification. To approximate each second derivative we use the centered difference formula, which has an error of quadratic order in $h$.
$$
\underbrace{ \frac{u(x-h_x,y) - 2 u(x,y) + u(x+h_x, y)}{h_x^2} + O(h_x^2) }_{= u_{xx}(x,y)} + \underbrace{ \frac{u(x,y-h_y) - 2 u(x,y) + u(x, y+h_y)}{h_y^2} + O(h_y^2) }_{= u_{yy}(x,y)} = f(x,y)
$$
Using the notation $u(x_i,y_j) = w_{ij}$ for the mesh point $(x_i, y_j)$, the discretization equation is finally:
$$
\frac{w_{i-1,j} - 2 w_{i,j} + w_{i+1,j}}{h_x^2} + \frac{w_{i,j-1} - 2 w_{i,j} + w_{i,j+1}}{h_y^2} \approx f(x_i, y_j),
$$
valid for all points $(x_i, y_j) \in \Omega - \partial \Omega$ (interior points). There is a very important aspect to note: this equation is linear in the $\mathbf{w_{i,j}}$! The procedure is then as follows (a minimal 1D illustration of these three steps is sketched right after this list):
1. Evaluate the discretized equation at every interior point $(x_i, y_j)$, using the information from the boundary values.
2. Build a linear system $A \mathbf{w} = b$, where $\mathbf{w}$ is the vector with all the $w_{i,j}$. (For its construction see the reference book: Numerical Analysis, Timothy Sauer.)
3. Solve the above system for $\mathbf{w}$, using any linear-system solver (LU, PALU, Jacobi, Gauss-Seidel, etc.).
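As a minimal illustration of these three steps, here is a 1D analogue (solving $u''(x) = f(x)$ with $u(0)=u(1)=0$, where the resulting matrix is tridiagonal):
import numpy as np
N = 50
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
f = lambda s: np.sin(np.pi * s)
A = np.zeros((N - 1, N - 1))
rhs = f(x[1:-1]) * h ** 2                    # step 1: evaluate the stencil at the interior points
for k in range(N - 1):                       # step 2: assemble the linear system A w = rhs
    A[k, k] = -2.0
    if k > 0:
        A[k, k - 1] = 1.0
    if k < N - 2:
        A[k, k + 1] = 1.0
w = np.linalg.solve(A, rhs)                  # step 3: solve for the interior values of u
exact = -np.sin(np.pi * x[1:-1]) / np.pi**2
print(np.max(np.abs(w - exact)))             # O(h^2) error, consistent with the centered stencil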
Poisson Equation
We will seek to solve the Poisson equation with a Dirichlet boundary condition:
\begin{align}
\Delta u(x,y) &= f(x,y) \ , \ (x, y) \in \Omega=[0,1]\times[0,1] \\
u(x,0) &= b(x)\\
u(x,1) &= t(x)\\
u(0, y) &= l(y)\\
u(1,y) &= r(y)
\end{align}
Four problems to be solved are defined below.
End of explanation
def solve_laplace(P, Nx=30, Ny=30,flag_plot=False,elev=40,azim=230):
# Discretize x and y
x = np.linspace(P["xmin"], P["xmax"], Nx+1)
y = np.linspace(P["ymin"], P["ymax"], Ny+1)
# Define the discretization parameters
dx = x[1]-x[0]
dy = y[1]-y[0]
# Create the matrix and the right hand size vector
A = np.zeros([(Nx+1)*(Ny+1), (Nx+1)*(Ny+1)])
b = np.zeros([(Nx+1)*(Ny+1), 1])
# Define global indexing
def index(i, j, nCols=(Ny+1)):
return j + i*nCols
# Fill up the matrix and right hand side vector
for i in range(Nx+1):
for j in range(Ny+1):
k = index(i,j)
if j==0: # y=ymin, bottom
A[k,k] = 1.
b[k] = P["b"](x[i])
elif i==Nx: # x=xmax, right
A[k,k] = 1.
b[k] = P["r"](y[j])
elif j==Ny: # y=ymax, top
A[k,k] = 1.
b[k] = P["t"](x[i])
elif i==0: # x=xmin, left
A[k,k] = 1.
b[k] = P["l"](y[j])
else:
A[k, k] = -2./dx**2 - 2./dy**2
A[k,index(i+1,j)] = 1./dx**2
A[k,index(i-1,j)] = 1./dx**2
A[k,index(i,j-1)] = 1./dy**2
A[k,index(i,j+1)] = 1./dy**2
b[k] = P["f"](x[i], y[j])
# Solve the linear system
w = np.linalg.solve(A, b)
if flag_plot:
plot(x,y,w,elev,azim)
return
return x, y, w
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(solve_laplace,P=P_Poisson,Nx=(5,50,5),Ny=(5,50,5),flag_plot=[True],elev=elev_widget,azim=azim_widget)
Explanation: The function solve_laplace() is in charge of building the linear system corresponding to the problem P to be solved.
Q: Could you explain this construction based on what was presented in the theoretical formulation?
End of explanation
# Problem 1
Lambda = 0.1
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : 1
right = lambda y: 1
P1 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problem 2
Lambda = 2.0
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : 1
right = lambda y: 1
P2 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problem 3
Lambda = 0.0
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : np.sin(np.pi*y)
right = lambda y: -np.sin(np.pi*y)
P3 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
P_Helmholtz=[('P1', P1), ('P2', P2),('P3', P3)]
Explanation: Helmholtz Equation
We will seek to solve the Helmholtz equation with a Dirichlet boundary condition:
\begin{align}
\Delta u(x,y) -\lambda \, u(x,y)&= f(x,y) \ , \ (x, y) \in \Omega=[0,1]\times[0,1] \\
u(x,0) &= b(x)\\
u(x,1) &= t(x)\\
u(0, y) &= l(y)\\
u(1,y) &= r(y)
\end{align}
Three problems to be solved are defined below.
End of explanation
def solve_helmholtz(P, Nx=30, Ny=30,flag_plot=False,elev=40,azim=230):
# Discretize x and y
x = np.linspace(P["xmin"], P["xmax"], Nx+1)
y = np.linspace(P["ymin"], P["ymax"], Ny+1)
L = P["Lambda"]
# Define the discretization parameters
dx = x[1]-x[0]
dy = y[1]-y[0]
# Create the matrix and the right hand size vector
A = np.zeros([(Nx+1)*(Ny+1), (Nx+1)*(Ny+1)])
b = np.zeros([(Nx+1)*(Ny+1), 1])
# Define global indexing
def index(i, j, nCols=(Ny+1)):
return j + i*nCols
# Fill up the matrix and right hand side vector
for i in range(Nx+1):
for j in range(Ny+1):
k = index(i,j)
if j==0: # y=ymin, bottom
A[k,k] = -1.5/dy
A[k,index(i,j+1)] = 2.0/dy
A[k,index(i,j+2)] =-0.5/dy
b[k] = P["b"](x[i])
elif i==Nx: # x=xmax, right
A[k,k] = 1.
b[k] = P["r"](y[j])
elif j==Ny: # y=ymax, top
A[k,k] = 1.5/dy
A[k,index(i,j-1)] = -2.0/dy
A[k,index(i,j-2)] = +0.5/dy
b[k] = P["t"](x[i])
elif i==0: # x=xmin, left
A[k,k] = 1.
b[k] = P["l"](y[j])
else:
A[k, k] = -2./dx**2 - 2./dy**2 - L
A[k,index(i+1,j)] = 1./dx**2
A[k,index(i-1,j)] = 1./dx**2
A[k,index(i,j-1)] = 1./dy**2
A[k,index(i,j+1)] = 1./dy**2
b[k] = P["f"](x[i], y[j])
# Solve the linear system
w = np.linalg.solve(A, b)
if flag_plot:
plot(x,y,w,elev,azim)
return
return x, y, w
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(solve_helmholtz,P=P_Helmholtz,Nx=(5,50,5),Ny=(5,50,5),flag_plot=[True],elev=elev_widget,azim=azim_widget)
Explanation: The function solve_helmholtz() is in charge of building the linear system corresponding to the problem P to be solved.
Q: Could you explain this construction based on what was presented in the theoretical formulation?
End of explanation |
3,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: FlatMap
<table align="left" style="margin-right
Step2: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply FlatMap in multiple ways to yield zero or more elements per each input element into the resulting PCollection.
FlatMap accepts a function that returns an iterable,
where each of the output iterable's elements is an element of the resulting PCollection.
Example 1
Step3: <table align="left" style="margin-right
Step4: <table align="left" style="margin-right
Step5: <table align="left" style="margin-right
Step6: <table align="left" style="margin-right
Step7: <table align="left" style="margin-right
Step8: <table align="left" style="margin-right
Step9: <table align="left" style="margin-right
Step10: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/element-wise/flatmap-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/flatmap"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: FlatMap
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.FlatMap"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Applies a simple 1-to-many mapping function over each element in the collection.
The many elements are flattened into the resulting collection.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'🍓Strawberry 🥕Carrot 🍆Eggplant',
'🍅Tomato 🥔Potato',
])
| 'Split words' >> beam.FlatMap(str.split)
| beam.Map(print)
)
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply FlatMap in multiple ways to yield zero or more elements per each input element into the resulting PCollection.
FlatMap accepts a function that returns an iterable,
where each of the output iterable's elements is an element of the resulting PCollection.
Example 1: FlatMap with a predefined function
We use the function str.split which takes a single str element and outputs a list of strs.
This pipeline splits the input element using whitespaces, creating a list of zero or more elements.
End of explanation
import apache_beam as beam
def split_words(text):
return text.split(',')
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'🍓Strawberry,🥕Carrot,🍆Eggplant',
'🍅Tomato,🥔Potato',
])
| 'Split words' >> beam.FlatMap(split_words)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: FlatMap with a function
We define a function split_words which splits an input str element using the delimiter ',' and outputs a list of strs.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
['🍓Strawberry', '🥕Carrot', '🍆Eggplant'],
['🍅Tomato', '🥔Potato'],
])
| 'Flatten lists' >> beam.FlatMap(lambda elements: elements)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: FlatMap with a lambda function
For this example, we want to flatten a PCollection of lists of strs into a PCollection of strs.
Each input element is already an iterable, where each element is what we want in the resulting PCollection.
We use a lambda function that returns the same input element it received.
End of explanation
import apache_beam as beam
def generate_elements(elements):
for element in elements:
yield element
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
['🍓Strawberry', '🥕Carrot', '🍆Eggplant'],
['🍅Tomato', '🥔Potato'],
])
| 'Flatten lists' >> beam.FlatMap(generate_elements)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: FlatMap with a generator
For this example, we want to flatten a PCollection of lists of strs into a PCollection of strs.
We use a generator to iterate over the input list and yield each of the elements.
Each yielded result in the generator is an element in the resulting PCollection.
End of explanation
import apache_beam as beam
def format_plant(icon, plant):
if icon:
yield '{}{}'.format(icon, plant)
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
(None, 'Invalid'),
])
| 'Format' >> beam.FlatMapTuple(format_plant)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: FlatMapTuple for key-value pairs
If your PCollection consists of (key, value) pairs,
you can use FlatMapTuple to unpack them into different function arguments.
End of explanation
import apache_beam as beam
def split_words(text, delimiter=None):
return text.split(delimiter)
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'🍓Strawberry,🥕Carrot,🍆Eggplant',
'🍅Tomato,🥔Potato',
])
| 'Split words' >> beam.FlatMap(split_words, delimiter=',')
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 6: FlatMap with multiple arguments
You can pass functions with multiple arguments to FlatMap.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, split_words takes text and delimiter as arguments.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
delimiter = pipeline | 'Create delimiter' >> beam.Create([','])
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'🍓Strawberry,🥕Carrot,🍆Eggplant',
'🍅Tomato,🥔Potato',
])
| 'Split words' >> beam.FlatMap(
lambda text, delimiter: text.split(delimiter),
delimiter=beam.pvalue.AsSingleton(delimiter),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 7: FlatMap with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection the value ',' as a singleton.
We then use that value as the delimiter for the str.split method.
End of explanation
import apache_beam as beam
def normalize_and_validate_durations(plant, valid_durations):
plant['duration'] = plant['duration'].lower()
if plant['duration'] in valid_durations:
yield plant
with beam.Pipeline() as pipeline:
valid_durations = pipeline | 'Valid durations' >> beam.Create([
'annual',
'biennial',
'perennial',
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'Perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'BIENNIAL'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'unknown'},
])
| 'Normalize and validate durations' >> beam.FlatMap(
normalize_and_validate_durations,
valid_durations=beam.pvalue.AsIter(valid_durations),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 8: FlatMap with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
import apache_beam as beam
def replace_duration_if_valid(plant, durations):
if plant['duration'] in durations:
plant['duration'] = durations[plant['duration']]
yield plant
with beam.Pipeline() as pipeline:
durations = pipeline | 'Durations dict' >> beam.Create([
(0, 'annual'),
(1, 'biennial'),
(2, 'perennial'),
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 2},
{'icon': '🥕', 'name': 'Carrot', 'duration': 1},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 2},
{'icon': '🍅', 'name': 'Tomato', 'duration': 0},
{'icon': '🥔', 'name': 'Potato', 'duration': -1},
])
| 'Replace duration if valid' >> beam.FlatMap(
replace_duration_if_valid,
durations=beam.pvalue.AsDict(durations),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/flat_map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 9: FlatMap with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation |
3,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Spots in PHOEBE 2 vs PHOEBE Legacy
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Spots and Compute Options
Step3: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step4: Plotting | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Comparing Spots in PHOEBE 2 vs PHOEBE Legacy
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_spot(component='primary', relteff=0.8, radius=20, colat=45, colon=90, feature='spot01')
b.add_dataset('lc', times=np.linspace(0,1,101))
b.add_compute('phoebe', irrad_method='none', compute='phoebe2')
b.add_compute('legacy', irrad_method='none', compute='phoebe1')
Explanation: Adding Spots and Compute Options
End of explanation
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.run_compute('phoebe2', model='phoebe2model')
b.run_compute('phoebe1', model='phoebe1model')
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
afig, mplfig = b.plot(legend=True, ylim=(1.95, 2.05), show=True)
Explanation: Plotting
End of explanation |
3,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with mpl-probscale
Installation
mpl-probscale is developed on Python 3.6. It is also tested on Python 3.4, 3.5, and even 2.7 (for the time being).
From conda
Official releases of mpl-probscale can be found on conda-forge
Step1: Background
Built-in matplotlib scales
To the casual user, you can set matplotlib scales to either "linear" or "log" (logarithmic). There are others (e.g., logit, symlog), but I haven't seen them too much in the wild.
Linear scales are the default
Step2: Logarithmic scales can work well when your data cover several orders of magnitude and don't have to be in base 10.
Step3: Probability Scales
mpl-probscale lets you use probability scales. All you need to do is import it.
Before importing, there is no probability scale available in matplotlib
Step4: To access probability scales, simply import the probscale module.
Step5: Probability scales default to the standard normal distribution (note that the formatting is a percentage-based probability)
You can even use different probability distributions, though it can be tricky. You have to pass a frozen distribution from either scipy.stats or paramnormal to the dist kwarg in ax.set_[x|y]scale.
Here's a standard normal scale with two different beta scales and a linear scale for comparison.
Step6: Ready-made probability plots
mpl-probscale ships with a small viz module that can help you make a probability plot of a sample.
With only the sample data, probscale.probplot will create a figure, compute the plotting position and non-exceedance probabilities, and plot everything
Step7: You should specify the matplotlib axes on which the plot should occur if you want to customize the plot using matplotlib commands directly
Step8: Lots of other options are directly accessible from the probplot function signature.
Step9: Percentile and Quantile plots
For convenience, you can do percentile and quantile plots with the same function.
Step10: Working with seaborn FacetGrids
Good news, everyone. The probplot function generally works as expected with FacetGrids. | Python Code:
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import numpy
from matplotlib import pyplot
from scipy import stats
import seaborn
clear_bkgd = {'axes.facecolor':'none', 'figure.facecolor':'none'}
seaborn.set(style='ticks', context='talk', color_codes=True, rc=clear_bkgd)
Explanation: Getting started with mpl-probscale
Installation
mpl-probscale is developed on Python 3.6. It is also tested on Python 3.4, 3.5, and even 2.7 (for the time being).
From conda
Official releases of mpl-probscale can be found on conda-forge:
conda install --channel=conda-forge mpl-probscale
Fairly recent builds of the development version are available on my channel:
conda install --channel=conda-forge mpl-probscale
From PyPI
Official source releases are also available on PyPI
pip install probscale
From source
mpl-probscale is a pure python package. It should be fairly trivial to install from source on any platform. To do that, download or clone from github, unzip the archive if necessary then do:
cd mpl-probscale # or wherever the setup.py got placed
pip install .
I recommend pip install . over python setup.py install for reasons I don't fully understand.
End of explanation
fig, ax = pyplot.subplots()
seaborn.despine(fig=fig)
Explanation: Background
Built-in matplotlib scales
To the casual user, you can set matplotlib scales to either "linear" or "log" (logarithmic). There are others (e.g., logit, symlog), but I haven't seen them too much in the wild.
Linear scales are the default:
End of explanation
fig, (ax1, ax2) = pyplot.subplots(nrows=2, figsize=(8,3))
ax1.set_xscale('log')
ax1.set_xlim(left=1e-3, right=1e3)
ax1.set_xlabel("Base 10")
ax1.set_yticks([])
ax2.set_xscale('log', basex=2)
ax2.set_xlim(left=2**-3, right=2**3)
ax2.set_xlabel("Base 2")
ax2.set_yticks([])
seaborn.despine(fig=fig, left=True)
Explanation: Logarithmic scales can work well when your data cover several orders of magnitude and don't have to be in base 10.
End of explanation
try:
fig, ax = pyplot.subplots()
ax.set_xscale('prob')
except ValueError as e:
pyplot.close(fig)
print(e)
Explanation: Probability Scales
mpl-probscale lets you use probability scales. All you need to do is import it.
Before importing, there is no probability scale available in matplotlib:
End of explanation
import probscale
fig, ax = pyplot.subplots(figsize=(8, 3))
ax.set_xscale('prob')
ax.set_xlim(left=0.5, right=99.5)
ax.set_xlabel('Normal probability scale (%)')
seaborn.despine(fig=fig)
Explanation: To access probability scales, simply import the probscale module.
End of explanation
fig, (ax1, ax2, ax3, ax4) = pyplot.subplots(figsize=(9, 5), nrows=4)
for ax in [ax1, ax2, ax3, ax4]:
ax.set_xlim(left=2, right=98)
ax.set_yticks([])
ax1.set_xscale('prob')
ax1.set_xlabel('Normal probability scale, as percents')
beta1 = stats.beta(a=3, b=2)
ax2.set_xscale('prob', dist=beta1)
ax2.set_xlabel('Beta probability scale (α=3, β=2)')
beta2 = stats.beta(a=2, b=7)
ax3.set_xscale('prob', dist=beta2)
ax3.set_xlabel('Beta probability scale (α=2, β=7)')
ax4.set_xticks(ax1.get_xticks()[12:-12])
ax4.set_xlabel('Linear scale (for reference)')
seaborn.despine(fig=fig, left=True)
Explanation: Probability scales default to the standard normal distribution (note that the formatting is a percentage-based probability)
You can even use different probability distributions, though it can be tricky. You have to pass a frozen distribution from either scipy.stats or paramnormal to the dist kwarg in ax.set_[x|y]scale.
Here's a standard normal scale with two different beta scales and a linear scale for comparison.
End of explanation
numpy.random.seed(0)
sample = numpy.random.normal(loc=4, scale=2, size=37)
fig = probscale.probplot(sample)
seaborn.despine(fig=fig)
Explanation: Ready-made probability plots
mpl-probscale ships with a small viz module that can help you make a probability plot of a sample.
With only the sample data, probscale.probplot will create a figure, compute the plotting position and non-exceedance probabilities, and plot everything:
End of explanation
fig, ax = pyplot.subplots(figsize=(7, 3))
probscale.probplot(sample, ax=ax)
ax.set_ylabel('Normal Values')
ax.set_xlabel('Non-exceedance probability')
ax.set_xlim(left=1, right=99)
seaborn.despine(fig=fig)
Explanation: You should specify the matplotlib axes on which the plot should occur if you want to customize the plot using matplotlib commands directly:
End of explanation
fig, ax = pyplot.subplots(figsize=(3, 7))
numpy.random.seed(0)
new_sample = numpy.random.lognormal(mean=2.0, sigma=0.75, size=37)
probscale.probplot(
new_sample,
ax=ax,
probax='y', # flip the plot
datascale='log', # scale of the non-probability axis
bestfit=True, # draw a best-fit line
estimate_ci=True,
datalabel='Lognormal Values', # labels and markers...
problabel='Non-exceedance probability',
scatter_kws=dict(marker='d', zorder=2, mew=1.25, mec='w', markersize=10),
line_kws=dict(color='0.17', linewidth=2.5, zorder=0, alpha=0.75),
)
ax.set_ylim(bottom=1, top=99)
seaborn.despine(fig=fig)
Explanation: Lots of other options are directly accessible from the probplot function signature.
End of explanation
fig, (ax1, ax2, ax3) = pyplot.subplots(nrows=3, figsize=(8, 7))
probscale.probplot(sample, ax=ax1, plottype='pp', problabel='Percentiles')
probscale.probplot(sample, ax=ax2, plottype='qq', problabel='Quantiles')
probscale.probplot(sample, ax=ax3, plottype='prob', problabel='Probabilities')
ax2.set_xlim(left=-2.5, right=2.5)
ax3.set_xlim(left=0.5, right=99.5)
fig.tight_layout()
seaborn.despine(fig=fig)
Explanation: Percentile and Quantile plots
For convenience, you can do percentile and quantile plots with the same function.
End of explanation
plot = (
seaborn.load_dataset("tips")
.assign(pct=lambda df: 100 * df['tip'] / df['total_bill'])
.pipe(seaborn.FacetGrid, hue='sex', col='time', row='smoker', margin_titles=True, aspect=1., size=4)
.map(probscale.probplot, 'pct', bestfit=True, scatter_kws=dict(alpha=0.75), probax='y')
.add_legend()
.set_ylabels('Non-Exceedance Probability')
.set_xlabels('Tips as percent of total bill')
.set(ylim=(0.5, 99.5), xlim=(0, 100))
)
Explanation: Working with seaborn FacetGrids
Good news, everyone. The probplot function generally works as expected with FacetGrids.
End of explanation |
3,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Effective Tensorflow 2
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Recommendations for idiomatic TensorFlow 2
Refactor your code into smaller modules
A good practice is to refactor your code into smaller functions that are called as needed. For best performance, you should try to decorate the largest blocks of computation that you can in a tf.function (note that the nested python functions called by a tf.function do not require their own separate decorations, unless you want to use different jit_compile settings for the tf.function). Depending on your use case, this could be multiple training steps or even your whole training loop. For inference use cases, it might be a single model forward pass.
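For instance, a generic sketch (not taken from the guide) of decorating only the outer block of computation:
import tensorflow as tf
def mean_squared_error(pred, target):        # plain helper; it is traced as part of its caller
    return tf.reduce_mean(tf.square(pred - target))
@tf.function                                 # decorate the largest block of computation
def compute_loss(x, w, target):
    return mean_squared_error(tf.matmul(x, w), target)
print(compute_loss(tf.random.normal([4, 3]), tf.random.normal([3, 1]), tf.zeros([4, 1])))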
Adjust the default learning rate for some tf.keras.optimizers
<a name="optimizer_defaults"></a>
Some Keras optimizers have different learning rates in TF2. If you see a change in convergence behavior for your models, check the default learning rates.
There are no changes for optimizers.SGD, optimizers.Adam, or optimizers.RMSprop.
The following default learning rates have changed
Step3: Then prepare the data for training
Step4: To keep the example short, trim the dataset to only return 5 batches
Step5: Use regular Python iteration to iterate over training data that fits in memory. Otherwise, tf.data.Dataset is the best way to stream training data from disk. Datasets are iterables (not iterators), and work just like other Python iterables in eager execution. You can fully utilize dataset async prefetching/streaming features by wrapping your code in tf.function, which replaces Python iteration with the equivalent graph operations using AutoGraph.
python
@tf.function
def train(model, dataset, optimizer)
Step6: <a name="custom_loop"></a>
Customize training and write your own loop
If Keras models work for you, but you need more flexibility and control of the training step or the outer training loops, you can implement your own training steps or even entire training loops. See the Keras guide on customizing fit to learn more.
You can also implement many things as a tf.keras.callbacks.Callback.
This method has many of the advantages mentioned previously, but gives you control of the train step and even the outer loop.
There are three steps to a standard training loop
Step7: Take advantage of tf.function with Python control flow
tf.function provides a way to convert data-dependent control flow into graph-mode
equivalents like tf.cond and tf.while_loop.
One common place where data-dependent control flow appears is in sequence
models. tf.keras.layers.RNN wraps an RNN cell, allowing you to either
statically or dynamically unroll the recurrence. As an example, you could reimplement dynamic unroll as follows.
Step8: Read the tf.function guide for a more information.
New-style metrics and losses
Metrics and losses are both objects that work eagerly and in tf.functions.
A loss object is callable, and expects (y_true, y_pred) as arguments
Step9: Use metrics to collect and display data
You can use tf.metrics to aggregate data and tf.summary to log summaries and redirect it to a writer using a context manager. The summaries are emitted directly to the writer which means that you must provide the step value at the callsite.
python
summary_writer = tf.summary.create_file_writer('/tmp/summaries')
with summary_writer.as_default()
Step10: Keras metric names
<a name="keras_metric_names"></a>
Keras models are consistent about handling metric names. When you pass a string in the list of metrics, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit, and in the logs passed to keras.callbacks. is set to the string you passed in the metric list. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: Effective Tensorflow 2
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/effective_tf2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/effective_tf2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/effective_tf2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/effective_tf2.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This guide provides a list of best practices for writing code using TensorFlow 2 (TF2). It is written for users who have recently switched over from TensorFlow 1 (TF1). Refer to the migrate section of the guide for more info on migrating your TF1 code to TF2.
Setup
Import TensorFlow and other dependencies for the examples in this guide.
End of explanation
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
Explanation: Recommendations for idiomatic TensorFlow 2
Refactor your code into smaller modules
A good practice is to refactor your code into smaller functions that are called as needed. For best performance, you should try to decorate the largest blocks of computation that you can in a tf.function (note that the nested python functions called by a tf.function do not require their own separate decorations, unless you want to use different jit_compile settings for the tf.function). Depending on your use case, this could be multiple training steps or even your whole training loop. For inference use cases, it might be a single model forward pass.
Adjust the default learning rate for some tf.keras.optimizers
<a name="optimizer_defaults"></a>
Some Keras optimizers have different learning rates in TF2. If you see a change in convergence behavior for your models, check the default learning rates.
There are no changes for optimizers.SGD, optimizers.Adam, or optimizers.RMSprop.
The following default learning rates have changed:
optimizers.Adagrad from 0.01 to 0.001
optimizers.Adadelta from 1.0 to 0.001
optimizers.Adamax from 0.002 to 0.001
optimizers.Nadam from 0.002 to 0.001
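If a model was tuned against one of the old defaults, the learning rate can simply be pinned explicitly instead of relying on the new default; for example (illustrative value only):
```python
# keep the TF1-era Adagrad default instead of the new 0.001
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.01)
```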
Use tf.Modules and Keras layers to manage variables
tf.Modules and tf.keras.layers.Layers offer the convenient variables and
trainable_variables properties, which recursively gather up all dependent
variables. This makes it easy to manage variables locally to where they are
being used.
Keras layers/models inherit from tf.train.Checkpointable and are integrated
with @tf.function, which makes it possible to directly checkpoint or export
SavedModels from Keras objects. You do not necessarily have to use Keras'
Model.fit API to take advantage of these integrations.
Read the section on transfer learning and fine-tuning in the Keras guide to learn how to collect a subset of relevant variables using Keras.
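As a small sketch of the variable tracking described above (not part of the original guide), a custom tf.Module automatically collects any tf.Variable assigned to it:
```python
class MyDense(tf.Module):
    def __init__(self, in_features, out_features, name=None):
        super().__init__(name=name)
        self.w = tf.Variable(tf.random.normal([in_features, out_features]), name='w')
        self.b = tf.Variable(tf.zeros([out_features]), name='b')

    def __call__(self, x):
        return tf.nn.relu(x @ self.w + self.b)

layer = MyDense(3, 2)
print([v.name for v in layer.trainable_variables])  # both variables are tracked automatically
```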
Combine tf.data.Datasets and tf.function
The TensorFlow Datasets package (tfds) contains utilities for loading predefined datasets as tf.data.Dataset objects. For this example, you can load the MNIST dataset using tfds:
End of explanation
BUFFER_SIZE = 10 # Use a much larger value for real code
BATCH_SIZE = 64
NUM_EPOCHS = 5
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
Explanation: Then prepare the data for training:
Re-scale each image.
Shuffle the order of the examples.
Collect batches of images and labels.
End of explanation
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)
STEPS_PER_EPOCH = 5
train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
image_batch, label_batch = next(iter(train_data))
Explanation: To keep the example short, trim the dataset to only return 5 batches:
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.02),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
# Model is the full model w/o custom layers
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)
print("Loss {}, Accuracy {}".format(loss, acc))
Explanation: Use regular Python iteration to iterate over training data that fits in memory. Otherwise, tf.data.Dataset is the best way to stream training data from disk. Datasets are iterables (not iterators), and work just like other Python iterables in eager execution. You can fully utilize dataset async prefetching/streaming features by wrapping your code in tf.function, which replaces Python iteration with the equivalent graph operations using AutoGraph.
python
@tf.function
def train(model, dataset, optimizer):
for x, y in dataset:
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
prediction = model(x, training=True)
loss = loss_fn(prediction, y)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
If you use the Keras Model.fit API, you won't have to worry about dataset
iteration.
python
model.compile(optimizer=optimizer, loss=loss_fn)
model.fit(dataset)
<a name="keras_training_loops"></a>
Use Keras training loops
If you don't need low-level control of your training process, using Keras' built-in fit, evaluate, and predict methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (sequential, functional, or sub-classed).
The advantages of these methods include:
They accept Numpy arrays, Python generators and, tf.data.Datasets.
They apply regularization, and activation losses automatically.
They support tf.distribute where the training code remains the same regardless of the hardware configuration.
They support arbitrary callables as losses and metrics.
They support callbacks like tf.keras.callbacks.TensorBoard, and custom callbacks.
They are performant, automatically using TensorFlow graphs.
Here is an example of training a model using a Dataset. For details on how this works, check out the tutorials.
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.02),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
@tf.function
def train_step(inputs, labels):
with tf.GradientTape() as tape:
predictions = model(inputs, training=True)
regularization_loss=tf.math.add_n(model.losses)
pred_loss=loss_fn(labels, predictions)
total_loss=pred_loss + regularization_loss
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for epoch in range(NUM_EPOCHS):
for inputs, labels in train_data:
train_step(inputs, labels)
print("Finished epoch", epoch)
Explanation: <a name="custom_loop"></a>
Customize training and write your own loop
If Keras models work for you, but you need more flexibility and control of the training step or the outer training loops, you can implement your own training steps or even entire training loops. See the Keras guide on customizing fit to learn more.
You can also implement many things as a tf.keras.callbacks.Callback.
This method has many of the advantages mentioned previously, but gives you control of the train step and even the outer loop.
There are three steps to a standard training loop:
Iterate over a Python generator or tf.data.Dataset to get batches of examples.
Use tf.GradientTape to collect gradients.
Use one of the tf.keras.optimizers to apply weight updates to the model's variables.
Remember:
Always include a training argument on the call method of subclassed layers and models.
Make sure to call the model with the training argument set correctly.
Depending on usage, model variables may not exist until the model is run on a batch of data.
You need to manually handle things like regularization losses for the model.
There is no need to run variable initializers or to add manual control dependencies. tf.function handles automatic control dependencies and variable initialization on creation for you.
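Picking up the earlier point about tf.keras.callbacks.Callback, a minimal custom callback could look like the sketch below (illustrative only; the class name is made up here):
```python
class EpochLossLogger(tf.keras.callbacks.Callback):
    # print the running loss at the end of every epoch
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: loss={logs.get('loss', float('nan')):.3f}")

# it would be passed to Keras training as, e.g.:
# model.fit(train_data, epochs=NUM_EPOCHS, callbacks=[EpochLossLogger()])
```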
End of explanation
class DynamicRNN(tf.keras.Model):
def __init__(self, rnn_cell):
super(DynamicRNN, self).__init__(self)
self.cell = rnn_cell
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.float32, shape=[None, None, 3])])
def call(self, input_data):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
timesteps = tf.shape(input_data)[0]
batch_size = tf.shape(input_data)[1]
outputs = tf.TensorArray(tf.float32, timesteps)
state = self.cell.get_initial_state(batch_size = batch_size, dtype=tf.float32)
for i in tf.range(timesteps):
output, state = self.cell(input_data[i], state)
outputs = outputs.write(i, output)
return tf.transpose(outputs.stack(), [1, 0, 2]), state
lstm_cell = tf.keras.layers.LSTMCell(units = 13)
my_rnn = DynamicRNN(lstm_cell)
outputs, state = my_rnn(tf.random.normal(shape=[10,20,3]))
print(outputs.shape)
Explanation: Take advantage of tf.function with Python control flow
tf.function provides a way to convert data-dependent control flow into graph-mode
equivalents like tf.cond and tf.while_loop.
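A much smaller illustration of that conversion (an editorial sketch, not from the original guide): a tensor-dependent if inside a tf.function is rewritten into tf.cond by AutoGraph.
```python
@tf.function
def zero_if_negative_sum(x):
    # both branches return the same shape/dtype, as tf.cond requires
    if tf.reduce_sum(x) < 0:
        return tf.zeros_like(x)
    return x

print(zero_if_negative_sum(tf.constant([-1.0, -2.0])))  # [0. 0.]
print(zero_if_negative_sum(tf.constant([1.0, 2.0])))    # [1. 2.]
```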
One common place where data-dependent control flow appears is in sequence
models. tf.keras.layers.RNN wraps an RNN cell, allowing you to either
statically or dynamically unroll the recurrence. As an example, you could reimplement dynamic unroll as follows.
End of explanation
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
Explanation: Read the tf.function guide for more information.
New-style metrics and losses
Metrics and losses are both objects that work eagerly and in tf.functions.
A loss object is callable, and expects (y_true, y_pred) as arguments:
End of explanation
# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
@tf.function
def train_step(inputs, labels):
with tf.GradientTape() as tape:
predictions = model(inputs, training=True)
regularization_loss=tf.math.add_n(model.losses)
pred_loss=loss_fn(labels, predictions)
total_loss=pred_loss + regularization_loss
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
loss_metric.update_state(total_loss)
accuracy_metric.update_state(labels, predictions)
for epoch in range(NUM_EPOCHS):
# Reset the metrics
loss_metric.reset_states()
accuracy_metric.reset_states()
for inputs, labels in train_data:
train_step(inputs, labels)
# Get the metric results
mean_loss=loss_metric.result()
mean_accuracy = accuracy_metric.result()
print('Epoch: ', epoch)
print(' loss: {:.3f}'.format(mean_loss))
print(' accuracy: {:.3f}'.format(mean_accuracy))
Explanation: Use metrics to collect and display data
You can use tf.metrics to aggregate data and tf.summary to log summaries and redirect it to a writer using a context manager. The summaries are emitted directly to the writer which means that you must provide the step value at the callsite.
python
summary_writer = tf.summary.create_file_writer('/tmp/summaries')
with summary_writer.as_default():
tf.summary.scalar('loss', 0.1, step=42)
Use tf.metrics to aggregate data before logging them as summaries. Metrics are stateful; they accumulate values and return a cumulative result when you call the result method (such as Mean.result). Clear accumulated values with the metric's reset_states method (for example, Mean.reset_states).
```python
def train(model, optimizer, dataset, log_freq=10):
avg_loss = tf.keras.metrics.Mean(name='loss', dtype=tf.float32)
for images, labels in dataset:
loss = train_step(model, optimizer, images, labels)
avg_loss.update_state(loss)
if tf.equal(optimizer.iterations % log_freq, 0):
tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
avg_loss.reset_states()
def test(model, test_x, test_y, step_num):
# training=False is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
loss = loss_fn(model(test_x, training=False), test_y)
tf.summary.scalar('loss', loss, step=step_num)
train_summary_writer = tf.summary.create_file_writer('/tmp/summaries/train')
test_summary_writer = tf.summary.create_file_writer('/tmp/summaries/test')
with train_summary_writer.as_default():
train(model, optimizer, dataset)
with test_summary_writer.as_default():
test(model, test_x, test_y, optimizer.iterations)
```
Visualize the generated summaries by pointing TensorBoard to the summary log
directory:
shell
tensorboard --logdir /tmp/summaries
Use the tf.summary API to write summary data for visualization in TensorBoard. For more info, read the tf.summary guide.
End of explanation
model.compile(
optimizer = tf.keras.optimizers.Adam(0.001),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])
history = model.fit(train_data)
history.history.keys()
Explanation: Keras metric names
<a name="keras_metric_names"></a>
Keras models are consistent about handling metric names. When you pass a string in the list of metrics, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit and in the logs passed to keras.callbacks; each metric's name is set to the string you passed in the metric list.
End of explanation |
3,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Avengers Data
You can also see this notebook rendered on github
Step1: Filter out the bad years
Since the data was collected from a community site, where most of the contributions came from individual users, there's room for errors to surface in the dataset. If you plot a histogram of the values in the Year column, which describe the year that Avenger was introduced, you'll immediately notice some oddities. There are quite a few Avengers who look like they were introduced in 1900, which we know is a little fishy. The Avengers weren't introduced in the comic series until the 1960's!
Step2: Consolidating deaths
We are interested in the number of total deaths each character experienced and we'd like a field containing that distilled information. Right now, there are 5 fields (Death1 to Death5) that each contain a binary value representing if a superhero experienced that death or not. For example, a superhero can experience Death1, then Death2, etc. until they were no longer brought back to life by the writers.
We'd like to coalesce that information into just one field so we can do numerical analysis more easily.
Create a new column, Deaths, that contains the number of times each superhero died. The possible values for each death field are YES, NO, and the Pandas NaN value used to represent missing data. Keep all of the the original columns (including Death1 to Death5) and update true_avengers with the new Deaths column.
Step3: I sorted the output by the new Deaths column and it looks like some character "Jocasta" has died 5 times! Followed by Mar-Vell with 4 deaths.
Years since joining
For the final task, we want to know if the Years since joining field accurately reflects the Year column. If an Avenger was introduced in Year 1960, is the Years since joined value for that Avenger 55?
Calculate the number of rows where Years since joined is accurate. This challenge was created in 2015, so use that as the reference year. We want to know for how many rows Years since joined was correctly calculated as Year value subtracted from 2015. | Python Code:
import pandas as pd
avengers = pd.read_csv("avengers.csv")
avengers.head(5)
Explanation: Avengers Data
You can also see this notebook rendered on github: https://github.com/eggie5/ipython-notebooks/blob/master/avengers/Avengers.ipynb
Life and Death of the Avengers
The Avengers are a well-known and widely loved team of superheroes in the Marvel universe that were introduced in the 1960's in the original comic book series. They've since become popularized again through the recent Disney movies as part of the new Marvel Cinematic Universe.
The team at FiveThirtyEight wanted to dissect the deaths of the Avengers in the comics over the years. The writers were known to kill off and revive many of the superheroes so they were curious to know what data they could grab from the Marvel Wikia site, a fan-driven community site, to explore further. To learn how they collected their data, available on their Github repo, read the writeup they published on their site.
Exploring the Data
While the FiveThirtyEight team has done a wonderful job acquiring this data, the data still has some inconsistencies. Your mission, if you choose to accept it, is to clean up their dataset so it can be more useful for analysis in Pandas. First things first, let's read our dataset into Pandas as a DataFrame and preview the first 5 rows to get a better sense of our data.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
true_avengers = pd.DataFrame()
avengers['Year'].hist()
# This is obviously a mistake in the data, so we remove all Avengers introduced before 1960 from the DataFrame.
# We only want to keep the Avengers who were introduced in 1960 or later, and store them in `true_avengers`.
selector = avengers['Year'] >= 1960
true_avengers = avengers[selector]
true_avengers['Year'].hist()
Explanation: Filter out the bad years
Since the data was collected from a community site, where most of the contributions came from individual users, there's room for errors to surface in the dataset. If you plot a histogram of the values in the Year column, which describe the year that Avenger was introduced, you'll immediately notice some oddities. There are quite a few Avengers who look like they were introduced in 1900, which we know is a little fishy. The Avengers weren't introduced in the comic series until the 1960's!
End of explanation
pd.options.mode.chained_assignment = None # default='warn'
columns = ['Death1', 'Death2', 'Death3', 'Death4', 'Death5']
true_avengers[columns]
def clean_row(row):
val = 0
for column in columns:
if(row[column] == "YES"):
val += 1
return val
death_column_vector = true_avengers.apply(lambda row: clean_row(row), axis=1)
true_avengers['Deaths']=death_column_vector
true_avengers.sort_values("Deaths", ascending=False)
Explanation: Consolidating deaths
We are interested in the number of total deaths each character experienced and we'd like a field containing that distilled information. Right now, there are 5 fields (Death1 to Death5) that each contain a binary value representing if a superhero experienced that death or not. For example, a superhero can experience Death1, then Death2, etc. until they were no longer brought back to life by the writers.
We'd like to coalesce that information into just one field so we can do numerical analysis more easily.
Create a new column, Deaths, that contains the number of times each superhero died. The possible values for each death field are YES, NO, and the Pandas NaN value used to represent missing data. Keep all of the original columns (including Death1 to Death5) and update true_avengers with the new Deaths column.
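As an editorial aside (not part of the original notebook), the same Deaths column can also be built without the row-wise apply, using a vectorized comparison:
```python
# count the 'YES' entries across the five death columns in one pass
true_avengers['Deaths'] = true_avengers[columns].eq('YES').sum(axis=1)
```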
End of explanation
joined_accuracy_count = int()
correct_joined_years = true_avengers[true_avengers['Years since joining'] == (2015 - true_avengers['Year'])]
joined_accuracy_count = len(correct_joined_years)
joined_accuracy_count
Explanation: I sorted the output by the new Deaths column and it looks like some character "Jocasta" has died 5 times! Followed by Mar-Vell with 4 deaths.
Years since joining
For the final task, we want to know if the Years since joining field accurately reflects the Year column. If an Avenger was introduced in Year 1960, is the Years since joined value for that Avenger 55?
Calculate the number of rows where Years since joined is accurate. This challenge was created in 2015, so use that as the reference year. We want to know for how many rows Years since joined was correctly calculated as Year value subtracted from 2015.
End of explanation |
3,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the FreeSurfer
segmentation file.
Step1: Setup the source spaces
Step2: Plot the positions of each source space | Python Code:
# Author: Alan Leggitt <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import setup_source_space, setup_volume_source_space
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
subject = 'sample'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'
Explanation: Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the FreeSurfer
segmentation file.
End of explanation
# setup a cortical surface source space and extract left hemisphere
surf = setup_source_space(subject, subjects_dir=subjects_dir, add_dist=False)
lh_surf = surf[0]
# setup a volume source space of the left cerebellum cortex
volume_label = 'Left-Cerebellum-Cortex'
sphere = (0, 0, 0, 0.12)
lh_cereb = setup_volume_source_space(
subject, mri=aseg_fname, sphere=sphere, volume_label=volume_label,
subjects_dir=subjects_dir, sphere_units='m')
# Combine the source spaces
src = surf + lh_cereb
Explanation: Setup the source spaces
End of explanation
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='head',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
Explanation: Plot the positions of each source space
End of explanation |
3,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Head model and forward computation
The aim of this tutorial is to be a getting started for forward computation.
For more extensive details and presentation of the general concepts for forward
modeling, see ch_forward.
Step1: Computing the forward operator
To compute a forward operator we need
Step2: Visualizing the coregistration
The coregistration is the operation that allows to position the head and the
sensors in a common coordinate system. In the MNE software the transformation
to align the head and the sensors in stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
Step3: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces
Step4: The surface based source space src contains two parts, one for the left
hemisphere (258 locations) and one for the right hemisphere (258
locations). Sources can be visualized on top of the BEM surfaces in purple.
Step5: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0) mm
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
Step6: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the
Step7: <div class="alert alert-info"><h4>Note</h4><p>Some sources may appear to be outside the BEM inner skull contour.
This is because the ``slices`` are decimated for plotting here.
Each slice in the figure actually represents several MRI slices,
but only the MRI voxels and BEM boundaries for a single (midpoint
of the given slice range) slice are shown, whereas the source space
points plotted on that midpoint slice consist of all points
for which that slice (out of all slices shown) was the closest.</p></div>
Now let's see how to view all sources in 3D.
Step8: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head the conductivities of the different tissues.
Step9: Note that the
Step10: <div class="alert alert-danger"><h4>Warning</h4><p>Forward computation can remove vertices that are too close to (or outside)
the inner skull surface. For example, here we have gone from 516 to 474
vertices in use. For many functions, such as
Step11: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
Step12: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following | Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
# the raw file containing the channel location + types
sample_dir = op.join(data_path, 'MEG', 'sample',)
raw_fname = op.join(sample_dir, 'sample_audvis_raw.fif')
# The paths to Freesurfer reconstructions
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
Explanation: Head model and forward computation
The aim of this tutorial is to be a getting started for forward computation.
For more extensive details and presentation of the general concepts for forward
modeling, see ch_forward.
End of explanation
plot_bem_kwargs = dict(
subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', orientation='coronal',
slices=[50, 100, 150, 200])
mne.viz.plot_bem(**plot_bem_kwargs)
Explanation: Computing the forward operator
To compute a forward operator we need:
a -trans.fif file that contains the coregistration info.
a source space
the :term:BEM surfaces
Compute and visualize BEM surfaces
The :term:BEM surfaces are the triangulations of the interfaces between
different tissues needed for forward computation. These surfaces are for
example the inner skull surface, the outer skull surface and the outer skin
surface, a.k.a. scalp surface.
Computing the BEM surfaces requires FreeSurfer and makes use of
the command-line tools mne watershed_bem or mne flash_bem, or
the related functions :func:mne.bem.make_watershed_bem or
:func:mne.bem.make_flash_bem.
Here we'll assume it's already computed. It takes a few minutes per subject.
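For reference, a typical call would look like the sketch below; it is not executed here because it needs a complete FreeSurfer reconstruction and takes several minutes:
```python
# sketch only: generate the BEM surfaces with the watershed algorithm
mne.bem.make_watershed_bem(subject=subject, subjects_dir=subjects_dir, overwrite=True)
```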
For EEG we use 3 layers (inner skull, outer skull, and skin) while for
MEG 1 layer (inner skull) is enough.
Let's look at these surfaces. The function :func:mne.viz.plot_bem
assumes that you have the bem folder of your subject's FreeSurfer
reconstruction, containing the necessary surface files. Here we use a smaller
than default subset of slices for speed.
End of explanation
# The transformation file obtained by coregistration
trans = op.join(sample_dir, 'sample_audvis_raw-trans.fif')
info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True,
meg=['helmet', 'sensors'], subjects_dir=subjects_dir,
surfaces='head-dense')
Explanation: Visualizing the coregistration
The coregistration is the operation that allows us to position the head and the
sensors in a common coordinate system. In the MNE software the transformation
to align the head and the sensors is stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
:func:mne.gui.coregistration (or its convenient command line
equivalent mne coreg), or mrilab if you're using a Neuromag
system.
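For reference, launching the coregistration GUI for this dataset might look like the sketch below (not executed in this tutorial):
```python
# sketch only: opens an interactive window to create or refine the -trans.fif file
mne.gui.coregistration(subject=subject, subjects_dir=subjects_dir, inst=raw_fname)
```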
Here we assume the coregistration is done, so we just visually check the
alignment with the following code.
End of explanation
src = mne.setup_source_space(subject, spacing='oct4', add_dist='patch',
subjects_dir=subjects_dir)
print(src)
Explanation: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces:
surface-based source space when the candidates are confined to a
surface.
volumetric or discrete source space when the candidates are discrete,
arbitrarily located source points bounded by the surface.
Surface-based source space is computed using
:func:mne.setup_source_space, while volumetric source space is computed
using :func:mne.setup_volume_source_space.
We will now compute a surface-based source space with an 'oct4'
resolution. See setting_up_source_space for details on source space
definition and spacing parameter.
<div class="alert alert-danger"><h4>Warning</h4><p>``'oct4'`` is used here just for speed, for real analyses the recommended
spacing is ``'oct6'``.</p></div>
End of explanation
mne.viz.plot_bem(src=src, **plot_bem_kwargs)
Explanation: The surface based source space src contains two parts, one for the left
hemisphere (258 locations) and one for the right hemisphere (258
locations). Sources can be visualized on top of the BEM surfaces in purple.
End of explanation
sphere = (0.0, 0.0, 0.04, 0.09)
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, sphere=sphere, sphere_units='m',
add_interpolator=False) # just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0) mm
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
End of explanation
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, surface=surface,
add_interpolator=False) # Just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the :term:BEM surfaces) you can use the
following.
End of explanation
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='mri',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Some sources may appear to be outside the BEM inner skull contour.
This is because the ``slices`` are decimated for plotting here.
Each slice in the figure actually represents several MRI slices,
but only the MRI voxels and BEM boundaries for a single (midpoint
of the given slice range) slice are shown, whereas the source space
points plotted on that midpoint slice consist of all points
for which that slice (out of all slices shown) was the closest.</p></div>
Now let's see how to view all sources in 3D.
End of explanation
conductivity = (0.3,) # for single layer
# conductivity = (0.3, 0.006, 0.3) # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
conductivity=conductivity,
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
Explanation: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head and the conductivities of the different tissues.
End of explanation
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
meg=True, eeg=False, mindist=5.0, n_jobs=1,
verbose=True)
print(fwd)
Explanation: Note that the :term:BEM does not involve any use of the trans file. The BEM
only depends on the head geometry and conductivities.
It is therefore independent from the MEG data and the head position.
Let's now compute the forward operator, commonly referred to as the
gain or leadfield matrix.
See :func:mne.make_forward_solution for details on the meaning of each
parameter.
End of explanation
print(f'Before: {src}')
print(f'After: {fwd["src"]}')
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>Forward computation can remove vertices that are too close to (or outside)
the inner skull surface. For example, here we have gone from 516 to 474
vertices in use. For many functions, such as
:func:`mne.compute_source_morph`, it is important to pass ``fwd['src']``
or ``inv['src']`` so that this removal is adequately accounted for.</p></div>
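For illustration only (a sketch, not part of the original tutorial), a morph built against the pruned source space stored in the forward solution would look something like this:
```python
morph = mne.compute_source_morph(fwd['src'], subject_from=subject,
                                 subjects_dir=subjects_dir)
```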
End of explanation
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
End of explanation
fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following:
End of explanation |
3,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> Scientific Programming in Python </h1>
<h2> Topic 4
Step4: En esta actividad implementaremos una conocida métrica para medir disimilitud entre conjuntos
Step8: Paso 2.
Notar que los cambios son mínimos
Step9: Paso 3. | Python Code:
import numba
import numpy as np
import numexpr as ne
import matplotlib.pyplot as plt
Explanation: <center>
<h1> Scientific Programming in Python </h1>
<h2> Topic 4: Just in Time Compilation: Numba and NumExpr </h2>
</center>
Notebook created by Martín Villanueva - [email protected] - DI UTFSM - April 2017.
End of explanation
def metric_python(x, y):
    """standard Euclidean distance"""
    ret = x - y
    ret *= ret
    return np.sqrt(ret.sum())  # square root of the sum of squares
def inf_dist_python(x, Y):
    """inf distance between row x and array Y"""
m = Y.shape[0]
inf = np.inf
for i in range(m):
dist = metric_python(x, Y[i])
if dist < inf:
inf = dist
return inf
def hausdorff_python(X, Y):
    """Hausdorff distance between arrays X and Y"""
m = X.shape[0]
n = Y.shape[0]
sup1 = -1.
sup2 = -1.
for i in range(m):
inf1 = inf_dist_python(X[i], Y)
if inf1 > sup1:
sup1 = inf1
for i in range(n):
inf2 = inf_dist_python(Y[i], X)
if inf2 > sup2:
sup2 = inf2
return max(sup1, sup2)
Explanation: In this activity we will implement a well-known metric for measuring dissimilarity between sets: the Hausdorff metric. It is a metric (distance) used to measure how dissimilar two given subsets are.
It has many applications, in particular for comparing how similar two images are. In the case where the sets are two-dimensional arrays, the definition is the following:
Let $X \in \mathbb{R}^{m \times 3}$ and $Y \in \mathbb{R}^{n \times 3}$ be two matrices; the Hausdorff metric/distance between them is defined as:
$$
d_H(X,Y) = \max \left(\ \max_{i\leq m} \min_{j \leq n} d(X[i],Y[j]), \ \max_{j\leq n} \min_{i \leq m} d(Y[j],X[i]) \ \right)
$$
where $d$ is the classic Euclidean distance. ($X[i]$ denotes the i-th row of X.)
One-dimensional illustration: distance between functions.
<img src='data/hausdorff.png' style="width: 600px;">
Implement the Hausdorff metric in plain Python.
Implement the Hausdorff metric using Numba (forcing nopython mode and explicitly declaring the function signatures).
Create 10 random arrays $X,Y$ with an increasing number of rows, and measure the execution times of the previous functions on these arrays.
Draw conclusions.
Step 1.
End of explanation
@numba.jit('float64 (float64[:], float64[:])')
def metric_numba(x, y):
    """standard Euclidean distance"""
    ret = x - y
    ret *= ret
    return np.sqrt(ret.sum())  # square root of the sum of squares
@numba.jit('float64 (float64[:], float64[:,:])', nopython=True)
def inf_dist_numba(x, Y):
    """inf distance between row x and array Y"""
m = Y.shape[0]
inf = np.inf
for i in range(m):
dist = metric_numba(x, Y[i])
if dist < inf:
inf = dist
return inf
@numba.jit('float64 (float64[:,:], float64[:,:])', nopython=True)
def hausdorff_numba(X, Y):
    """Hausdorff distance between arrays X and Y"""
m = X.shape[0]
n = Y.shape[0]
sup1 = -1.
sup2 = -1.
for i in range(m):
inf1 = inf_dist_numba(X[i], Y)
if inf1 > sup1:
sup1 = inf1
for i in range(n):
inf2 = inf_dist_numba(Y[i], X)
if inf2 > sup2:
sup2 = inf2
return max(sup1, sup2)
Explanation: Step 2.
Note that the changes are minimal: decorators + function names.
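As a quick, illustrative sanity check (not in the original activity; the array names Xs and Ys are new here), the two implementations can be compared on a small random input:
```python
Xs = np.random.random((50, 3))
Ys = np.random.random((50, 3))
assert abs(hausdorff_python(Xs, Ys) - hausdorff_numba(Xs, Ys)) < 1e-10
```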
End of explanation
#nrows = [10**n for n in range(10)]
nrows = np.linspace(100,5000,10).astype(int)
for nrow in nrows:
X = np.random.random((nrow,3))
Y = np.random.random((nrow,3))
tp = %timeit -o hausdorff_python(X,Y)
tn = %timeit -o hausdorff_numba(X,Y)
print("Number of rows: {0}".format(nrow))
print("Best time in Python: {0}".format(tp.best))
print("Best time in Numba: {0} \n".format(tn.best))
del X,Y
Explanation: Step 3.
End of explanation |
3,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Persistent homology
This demo explains how to use Dionysus for persistent homology computation. First necessary imports.
Step1: We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows
Step2: Now the persistent homology is computed.
Step3: Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf. | Python Code:
from dionysus import Simplex, Filtration, StaticPersistence, \
vertex_cmp, data_cmp, data_dim_cmp, \
DynamicPersistenceChains
from math import sqrt
Explanation: Persistent homology
This demo explains how to use Dionysus for persistent homology computation. First necessary imports.
End of explanation
scx = [Simplex((2,), 0), # C
Simplex((0,), 1), # A
Simplex((1,), 1), # B
Simplex((0,1), 2), # AB
Simplex((1,2), 3), # BC
Simplex((0,2), 3), # AC
Simplex((0,1,2), 4), # ABC
]
Explanation: We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows: first the top vertex (C) of the triangle is added, then the rest of vertices (A and B) followed by the the bottom edge (AB), then the rest of the edges (AC and BC) and finally the triangle is filled in (ABC).
End of explanation
f = Filtration(scx, data_cmp)
p = DynamicPersistenceChains(f)
p.pair_simplices()
smap = p.make_simplex_map(f)
Explanation: Now the persistent homology is computed.
End of explanation
print "{:>10}{:>10}{:>10}{:>10}".format("First", "Second", "Birth", "Death")
for i in (i for i in p if i.sign()):
b = smap[i]
if i.unpaired():
print "{:>10}{:>10}{:>10}{:>10}".format(b, '', b.data, "inf")
else:
d = smap[i.pair()]
print "{:>10}{:>10}{:>10}{:>10}".format(b, d, b.data, d.data)
Explanation: Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf.
End of explanation |
3,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time the functions
This notebooks measures the runtime of each functionality.
Step1: Binarization
Step2: Binary detection
Step3: MSER detection
Step4: Conclusion
The tophat operation (for protrusions and indentations) is the bottleneck, takes 4.9 (of in total 5.7) seconds.
The binarization is also quite slow because the function connectedComponentsWithStats is called for every threshold level.
The MSER detection is somewhat faster for a color image (about 2-3 times as fast). | Python Code:
import numpy as np
import cv2
import sys
import os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr
import cProfile
%pylab inline
#Load the image
path_to_image = 'images/graffiti.jpg'
img = cv2.imread(path_to_image)
sr.show_image(img)
%%timeit
#Time: creation of the detector
det = sr.SalientDetector(SE_size_factor=0.20,
lam_factor=4)
det = sr.SalientDetector(SE_size_factor=0.20,
lam_factor=4)
%%timeit
#Time: detect all regions in color image
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=True,
find_protrusions=True,
visualize=False)
cProfile.run('det.detect(img, find_holes=True, find_islands=True, find_indentations=True, \
find_protrusions=True, visualize=False)')
%%timeit
#Only holes and islands
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=False,
find_protrusions=False,
visualize=False)
Explanation: Time the functions
This notebook measures the runtime of each functionality.
End of explanation
lam_factor = 3
area_factor_large = 0.001
area_factor_verylarge = 0.1
lam = 50
connectivity = 4
weights=(0.33,0.33,0.33)
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
%%timeit
#Creation of the binarizer
binarizer = sr.DatadrivenBinarizer(area_factor_large=area_factor_large, area_factor_verylarge=area_factor_verylarge,
lam=lam, weights=weights, connectivity=connectivity)
binarizer = sr.DatadrivenBinarizer(area_factor_large=area_factor_large, area_factor_verylarge=area_factor_verylarge,
lam=lam, weights=weights, connectivity=connectivity)
%%timeit
#The binarization
binarized = binarizer.binarize(grayscale, visualize=False)
cProfile.run('binarizer.binarize(grayscale, visualize=False)')
Explanation: Binarization
End of explanation
binarized = binarizer.binarize(grayscale, visualize=False)
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=True,
find_protrusions=True,
visualize=False)
se = det.SE
area_factor=0.05
%%timeit
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
regions = detector.detect(binarized, visualize=False)
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
cProfile.run('detector.detect(binarized, visualize=False)')
#Only holes and islands
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
cProfile.run('detector.detect(binarized, find_indentations=False, \
find_protrusions=False, visualize=False)')
Explanation: Binary detection
End of explanation
mser = cv2.MSER_create()
%%timeit
regions = mser.detectRegions(img, None)
cProfile.run('mser.detectRegions(img, None)')
Explanation: MSER detection
End of explanation
%timeit cv2.morphologyEx(binarized, cv2.MORPH_TOPHAT, se)
%timeit cv2.morphologyEx(binarized, cv2.MORPH_OPEN, se)
%timeit cv2.erode(binarized, se)
%timeit cv2.dilate(binarized, se)
Explanation: Conclusion
The tophat operation (used for protrusions and indentations) is the bottleneck: it takes about 4.9 of the total 5.7 seconds.
The binarization is also quite slow because the function connectedComponentsWithStats is called for every threshold level.
The MSER detection is somewhat faster for a color image (about 2-3 times as fast).
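As a rough cross-check (an editorial aside, not part of the original notebook), the white top-hat equals the image minus its opening, which is why its cost tracks an opening plus a subtraction:
```python
opened = cv2.morphologyEx(binarized, cv2.MORPH_OPEN, se)
tophat_manual = cv2.subtract(binarized, opened)  # equivalent to MORPH_TOPHAT
```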
End of explanation |
3,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 1
Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file.
Get Data - We will learn how to read in the text file. The data consist of baby names and the number of baby names born in the year 1880.
Prepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalities. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records.
Analyze Data - We will simply find the most popular name in a specific year.
Present Data - Through tabular data and a graph, clearly show the end user what is the most popular name in a specific year.
The pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
Step1: Create Data
The data set will consist of 5 baby names and the number of births recorded for that year (1880).
Step2: To merge these two lists together we will use the zip function.
Step3: We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file.
df will be a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Lets take a look below at the contents inside df.
Step4: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.
Step5: The only parameters we will use is index and header. Setting these parameters to True will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
Step6: Get Data
To pull in the csv file, we will use the pandas function read_csv. Let us take a look at this function and what inputs it takes.
Step7: Even though this functions has many parameters, we will simply pass it the location of the text file.
Location = C
Step8: Notice the r before the string. Since the slashes are special characters, prefixing the string with a r will escape the whole string.
Step9: This brings us the our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
To correct this we will pass the header parameter to the read_csv function and set it to None (means null in python).
Step10: If we wanted to give the columns specific names, we would have to pass another paramter called names. We can also omit the header parameter.
Step11: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates.
[Names, Births] can be though of as column headers similar to the ones found in an Excel spreadsheet or sql database.
Delete the csv file now that we are done using it.
Step12: Prepare Data
The data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values).
The Names column at this point is of no concern since it most likely is just composed of alpha numeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The Births column should just contain integers representing the number of babies born in a specific year with a specific name. We can check if the all the data is of the data type integer. It would not make sense to have this column have a data type of float. I would not worry about any possible outliers at this point of the analysis.
Realize that aside from the check we did on the "Names" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.
Step13: As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.
Analyze Data
To find the most popular name or the baby name with the higest birth rate, we can do one of the following.
Sort the dataframe and select the top row
Use the max() attribute to find the maximum value
Step14: Present Data
Here we can plot the Births column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that Mel is the most popular baby name in the data set.
plot() is a convenient attribute where pandas lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now to find the actual baby name of the 973 value looks a bit tricky, so let's go over it.
Explain the pieces | Python Code:
# Import all libraries needed for the tutorial
# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv
# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number
# Enable inline plotting
%matplotlib inline
print('Python version ' + sys.version)
print( 'Pandas version ' + pd.__version__)
Explanation: Lesson 1
Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file.
Get Data - We will learn how to read in the text file. The data consist of baby names and the number of baby names born in the year 1880.
Prepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalities. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records.
Analyze Data - We will simply find the most popular name in a specific year.
Present Data - Through tabular data and a graph, clearly show the end user what is the most popular name in a specific year.
The pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
End of explanation
# The initial set of baby names and birth rates
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
Explanation: Create Data
The data set will consist of 5 baby names and the number of births recorded for that year (1880).
End of explanation
zip?
BabyDataSet = list(zip(names,births))
BabyDataSet
Explanation: To merge these two lists together we will use the zip function.
End of explanation
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
Explanation: We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file.
df will be a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Lets take a look below at the contents inside df.
End of explanation
df.to_csv?
Explanation: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.
End of explanation
df.to_csv('births1880.csv',index=False,header=False)
Explanation: The only parameters we will use are index and header. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
End of explanation
read_csv?
Explanation: Get Data
To pull in the csv file, we will use the pandas function read_csv. Let us take a look at this function and what inputs it takes.
End of explanation
Location = r'C:\Users\david\notebooks\pandas\births1880.csv'
df = pd.read_csv(Location)
Explanation: Even though this function has many parameters, we will simply pass it the location of the text file.
Location = C:\Users\ENTER_USER_NAME.xy\startups\births1880.csv
Note: Depending on where you save your notebooks, you may need to modify the location above.
End of explanation
df
Explanation: Notice the r before the string. Since backslashes are special characters in Python strings, prefixing the string with r makes it a raw string, so the backslashes are treated literally.
End of explanation
df = pd.read_csv(Location, header=None)
df
Explanation: This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
To correct this we will pass the header parameter to the read_csv function and set it to None (means null in python).
End of explanation
df = pd.read_csv(Location, names=['Names','Births'])
df
Explanation: If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.
End of explanation
import os
os.remove(Location)
Explanation: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates.
[Names, Births] can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database.
Delete the csv file now that we are done using it.
End of explanation
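# Aside (my own sketch, not part of the original lesson): unlike a sql primary key,
# a pandas index may contain duplicate labels. The tiny DataFrame below is
# hypothetical and used only for illustration.
dup = pd.DataFrame({'Births': [968, 155, 77]}, index=['a', 'a', 'b'])
dup.loc['a']  # returns both rows labeled 'a'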
# Check data type of the columns
df.dtypes
# Check data type of Births column
df.Births.dtype
Explanation: Prepare Data
The data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values).
The Names column at this point is of no concern since it most likely is just composed of alphanumeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The Births column should just contain integers representing the number of babies born in a specific year with a specific name. We can check whether all the data in this column is of the integer data type. It would not make sense for this column to have a data type of float. I would not worry about any possible outliers at this point of the analysis.
Realize that aside from the check we did on the "Names" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.
End of explanation
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)
Sorted.head(1)
# Method 2:
df['Births'].max()
Explanation: As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.
Analyze Data
To find the most popular name, or the baby name with the highest number of births, we can do one of the following.
Sort the dataframe and select the top row
Use the max() method to find the maximum value
End of explanation
# Create graph
df['Births'].plot()
# Maximum value in the data set
MaxValue = df['Births'].max()
# Name associated with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values[0]
# Text to display on graph
Text = str(MaxValue) + " - " + MaxName
# Add text to graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
print("The most popular name")
df[df['Births'] == df['Births'].max()]
#Sorted.head(1) can also be used
Explanation: Present Data
Here we can plot the Births column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that Mel is the most popular baby name in the data set.
plot() is a convenient method that lets pandas painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now, finding the actual baby name associated with the 973 value looks a bit tricky, so let's go over it.
Explain the pieces:
df['Names'] - This is the entire list of baby names, the entire Names column
df['Births'] - This is the entire list of Births in the year 1880, the entire Births column
df['Births'].max() - This is the maximum value found in the Births column
[df['Births'] == df['Births'].max()] IS EQUAL TO [Find all of the records in the Births column where it is equal to 973]
df['Names'][df['Births'] == df['Births'].max()] IS EQUAL TO Select all of the records in the Names column WHERE [The Births column is equal to 973]
An alternative way could have been to use the Sorted dataframe:
Sorted['Names'].head(1).values[0]
The str() function simply converts an object into a string.
End of explanation |
3,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AveragePooling1D
[pooling.AveragePooling1D.0] input 6x6, pool_size=2, strides=None, padding='valid'
Step1: [pooling.AveragePooling1D.1] input 6x6, pool_size=2, strides=1, padding='valid'
Step2: [pooling.AveragePooling1D.2] input 6x6, pool_size=2, strides=3, padding='valid'
Step3: [pooling.AveragePooling1D.3] input 6x6, pool_size=2, strides=None, padding='same'
Step4: [pooling.AveragePooling1D.4] input 6x6, pool_size=2, strides=1, padding='same'
Step5: [pooling.AveragePooling1D.5] input 6x6, pool_size=2, strides=3, padding='same'
Step6: [pooling.AveragePooling1D.6] input 6x6, pool_size=3, strides=None, padding='valid'
Step7: [pooling.AveragePooling1D.7] input 7x7, pool_size=3, strides=1, padding='same'
Step8: [pooling.AveragePooling1D.8] input 7x7, pool_size=3, strides=3, padding='same'
Step9: export for Keras.js tests | Python Code:
# Imports for these test-generation cells (the helper function `format_decimal`
# and the `DATA` dict used below are assumed to be provided by the notebook's
# earlier setup cells, which are not shown here).
import json
import numpy as np
from keras.models import Model
from keras.layers import Input, AveragePooling1D

data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=None, padding='valid')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(250)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: AveragePooling1D
[pooling.AveragePooling1D.0] input 6x6, pool_size=2, strides=None, padding='valid'
End of explanation
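# Aside (not part of the original fixtures): a minimal sketch of how the expected
# output length of a 1D pooling layer can be estimated, assuming the usual Keras
# conventions for 'valid' and 'same' padding. Useful for sanity-checking the
# output shapes printed by the cells below.
import math

def pool1d_output_length(input_length, pool_size, strides, padding):
    # 'valid' keeps only full pooling windows; 'same' pads the input so that
    # the output length depends only on the stride.
    if padding == 'valid':
        return (input_length - pool_size) // strides + 1
    return math.ceil(input_length / strides)

# e.g. input length 6, pool_size=2, strides=None (defaults to pool_size), 'valid' -> 3
print(pool1d_output_length(6, 2, 2, 'valid'))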
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=1, padding='valid')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(251)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.1] input 6x6, pool_size=2, strides=1, padding='valid'
End of explanation
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=3, padding='valid')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(252)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.2] input 6x6, pool_size=2, strides=3, padding='valid'
End of explanation
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=None, padding='same')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(253)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.3] input 6x6, pool_size=2, strides=None, padding='same'
End of explanation
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=1, padding='same')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(254)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.4] input 6x6, pool_size=2, strides=1, padding='same'
End of explanation
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=2, strides=3, padding='same')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(255)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.5] input 6x6, pool_size=2, strides=3, padding='same'
End of explanation
data_in_shape = (6, 6)
L = AveragePooling1D(pool_size=3, strides=None, padding='valid')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(256)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.6] input 6x6, pool_size=3, strides=None, padding='valid'
End of explanation
data_in_shape = (7, 7)
L = AveragePooling1D(pool_size=3, strides=1, padding='same')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(257)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.7] input 7x7, pool_size=3, strides=1, padding='same'
End of explanation
data_in_shape = (7, 7)
L = AveragePooling1D(pool_size=3, strides=3, padding='same')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(258)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling1D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling1D.8] input 7x7, pool_size=3, strides=3, padding='same'
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
3,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras for Text Classification
Learning Objectives
Learn how to create a text classification dataset using BigQuery.
Learn how to tokenize and integerize a corpus of text for training in Keras.
Learn how to do one-hot-encodings in Keras.
Learn how to use embedding layers to represent words in Keras.
Learn about the bag-of-word representation for sentences.
Learn how to use DNN/CNN/RNN model to classify text in keras.
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step10: Let's make sure we have roughly the same number of examples for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Our dataset consists of article titles along with a label indicating the source each article was taken from (GitHub, Tech-Crunch, or the New-York Times).
Step15: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that
Step16: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
Step17: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
Step18: Preparing the train/test splits
Let's split our data into train and test splits
Step19: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step20: Using create_sequence and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded list of integers and the labels will be one-hot-encoded 3D vectors.
Step21: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Step22: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
Step23: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Step24: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs)
Step25: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Step26: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps. | Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
Explanation: Keras for Text Classification
Learning Objectives
Learn how to create a text classification dataset using BigQuery.
Learn how to tokenize and integerize a corpus of text for training in Keras.
Learn how to do one-hot-encodings in Keras.
Learn how to use embedding layers to represent words in Keras.
Learn about the bag-of-word representation for sentences.
Learn how to use DNN/CNN/RNN model to classify text in keras.
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
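# Quick sanity check (my own addition, not part of the lab): the same regular
# expression can be exercised locally with Python's re module before running it
# at scale in BigQuery. The URL below is hypothetical.
import re
url = 'http://mobile.nytimes.com/2015/10/01/some-article.html'
host = re.search(r'.*://(.[^/]+)/', url).group(1)  # 'mobile.nytimes.com'
print(host.split('.')[-2])  # 'nytimes'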
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)

query = """
SELECT
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
  ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of examples for each of our three labels:
End of explanation
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow.keras.layers import (
Embedding,
Flatten,
GRU,
Conv1D,
Lambda,
Dense,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Let's write the sample dataset to disk.
End of explanation
LOGDIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
Our dataset consists of article titles along with a label indicating the source each article was taken from (GitHub, Tech-Crunch, or the New-York Times).
End of explanation
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
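# For intuition (my own addition): map a hypothetical title to integers with the
# fitted tokenizer and back with index_word. Words that never appeared in the
# corpus are simply dropped, since no oov_token was configured.
example_ids = tokenizer.texts_to_sequences(["github is trending today"])[0]
print(example_ids)
print([tokenizer.index_word[i] for i in example_ids])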
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = tokenizer.texts_to_sequences(texts)
padded_sequences = pad_sequences(sequences, max_len, padding='post')
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
Explanation: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
End of explanation
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
# TODO 2
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes)
return one_hots
encode_labels(titles_df.source[:4])
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded list of integers and the labels will be one-hot-encoded 3D vectors.
End of explanation
def build_dnn_model(embed_dim):
model = Sequential([
Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]), # TODO 3
Lambda(lambda x: tf.reduce_mean(x, axis=1)), # TODO 4
Dense(N_CLASSES, activation='softmax') # TODO 5
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
End of explanation
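# Toy illustration (my own addition) of what the Lambda layer computes: averaging
# over the sequence axis collapses each "sentence" of word vectors into a single
# bag-of-words vector. Shapes here are made up: (batch=2, sequence=3, embed_dim=2).
toy = tf.constant([[[1., 1.], [2., 2.], [3., 3.]],
                   [[0., 4.], [0., 4.], [0., 4.]]])
print(tf.reduce_mean(toy, axis=1))  # [[2. 2.], [0. 4.]]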
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'dnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 0
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()
dnn_model.summary()
Explanation: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
End of explanation
def build_rnn_model(embed_dim, units):
model = Sequential([
Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True), # TODO 3
GRU(units), # TODO 5
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
End of explanation
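# To see the masking in action (my own sketch, with a made-up padded sequence):
# an Embedding layer with mask_zero=True reports the positions that downstream
# layers such as the GRU should ignore.
masking_demo = Embedding(VOCAB_SIZE + 1, 3, mask_zero=True)
padded = tf.constant([[12, 7, 0, 0]])  # hypothetical padded title
print(masking_demo.compute_mask(padded))  # [[ True  True False False]]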
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'rnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 0
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
rnn_model.summary()
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs):
End of explanation
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential([
Embedding(
VOCAB_SIZE + 1,
embed_dim,
input_shape=[MAX_LEN],
mask_zero=True), # TODO 3
Conv1D( # TODO 5
filters=filters,
kernel_size=ksize,
strides=strides,
activation='relu',
),
Flatten(), # TODO 5
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
End of explanation
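# Rough sanity check (my own sketch, assuming Conv1D's default 'valid' padding):
# the number of positions coming out of the convolution, and hence the size of
# the tensor reaching the Flatten layer, can be estimated as follows.
def conv1d_output_length(input_length, ksize, strides):
    return (input_length - ksize) // strides + 1

# With the kernel size (3) and stride (2) used in the training cell below, each
# padded title of length MAX_LEN is reduced to this many feature positions.
print(conv1d_output_length(MAX_LEN, 3, 2))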
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'cnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 0
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()
cnn_model.summary()
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
End of explanation |
3,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the Shakespeare dataset
Change the following line to run this code on your own data.
Step3: Read the data
First, look in the text
Step4: Process the text
Vectorize the text
Before training, we need to map strings to a numerical representation. Create two lookup tables
Step5: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
Step6: Each index of these vectors are processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the RNN considers the previous step context in addition to the current input character.
Step7: Create training batches
We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
Step9: Build The Model
We manually implement the model from scratch, using tf.numpy and some low-level TF ops. A Model object has three layers
Step10: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output
Step11: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length
Step12: This gives us, at each timestep, a prediction of the next character index
Step13: Decode these to see the text predicted by this untrained model
Step14: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Loss function
We define the loss function from scratch, using tf.nn.log_softmax. (Our definition is the same as tf.keras.losses.sparse_categorical_crossentropy.)
Step15: Optimizer
Keeping the DIY spirit, we implement the Adam optimizer from scratch.
Step16: Training loop
Again, we write our training loop from scratch.
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
Step17: Generate text
The following code block generates the text | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import numpy as np
import os
import time
Explanation: Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Note: Enable GPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware accelerator > GPU. If running locally make sure TensorFlow version >= 2.4.
This tutorial includes runnable code implemented using tf.experimental.numpy. The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
Setup
Import TensorFlow and other libraries
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Download the Shakespeare dataset
Change the following line to run this code on your own data.
End of explanation
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
Explanation: Read the data
First, look in the text:
End of explanation
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
Explanation: Process the text
Vectorize the text
Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
End of explanation
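# Quick round-trip check (my own addition): encode a short slice of the corpus
# with char2idx and decode it back with idx2char.
sample = text[:13]
encoded = np.array([char2idx[c] for c in sample])
print(encoded)
print(repr(''.join(idx2char[encoded])))  # should reproduce the original 13 characters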
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
Explanation: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
End of explanation
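# The "Hello" example from the paragraph above, reproduced with the
# split_input_target helper (my own sketch; all five characters occur in the corpus).
hello_ids = np.array([char2idx[c] for c in 'Hello'])
inp, tgt = split_input_target(hello_ids)
print(''.join(idx2char[inp]))  # 'Hell'
print(''.join(idx2char[tgt]))  # 'ello'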
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
Explanation: Each index of these vectors are processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the RNN considers the previous step context in addition to the current input character.
End of explanation
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
Explanation: Create training batches
We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
End of explanation
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
class Embedding:
def __init__(self, vocab_size, embedding_dim):
self._vocab_size = vocab_size
self._embedding_dim = embedding_dim
self._built = False
def __call__(self, inputs):
if not self._built:
self.build(inputs)
return tnp.take(self.weights, inputs, axis=0)
def build(self, inputs):
del inputs
self.weights = tf.Variable(tnp.random.randn(
self._vocab_size, self._embedding_dim).astype(np.float32))
self._built = True
class GRUCell:
  """Builds a traditional GRU cell with dense internal transformations.

  Gated Recurrent Unit paper: https://arxiv.org/abs/1412.3555
  """
def __init__(self, n_units, forget_bias=0.0):
self._n_units = n_units
self._forget_bias = forget_bias
self._built = False
def __call__(self, inputs):
if not self._built:
self.build(inputs)
x, gru_state = inputs
# Dense layer on the concatenation of x and h.
y = tnp.dot(tnp.concatenate([x, gru_state], axis=-1), self.w1) + self.b1
# Update and reset gates.
u, r = tnp.split(tf.sigmoid(y), 2, axis=-1)
# Candidate.
c = tnp.dot(tnp.concatenate([x, r * gru_state], axis=-1), self.w2) + self.b2
new_gru_state = u * gru_state + (1 - u) * tnp.tanh(c)
return new_gru_state
def build(self, inputs):
# State last dimension must be n_units.
assert inputs[1].shape[-1] == self._n_units
# The dense layer input is the input and half of the GRU state.
dense_shape = inputs[0].shape[-1] + self._n_units
self.w1 = tf.Variable(tnp.random.uniform(
-0.01, 0.01, (dense_shape, 2 * self._n_units)).astype(tnp.float32))
self.b1 = tf.Variable((tnp.random.randn(2 * self._n_units) * 1e-6 + self._forget_bias
).astype(tnp.float32))
self.w2 = tf.Variable(tnp.random.uniform(
-0.01, 0.01, (dense_shape, self._n_units)).astype(tnp.float32))
self.b2 = tf.Variable((tnp.random.randn(self._n_units) * 1e-6).astype(tnp.float32))
self._built = True
@property
def weights(self):
return (self.w1, self.b1, self.w2, self.b2)
class GRU:
def __init__(self, n_units, forget_bias=0.0, stateful=False):
self._cell = GRUCell(n_units, forget_bias)
self._stateful = stateful
self._built = False
def __call__(self, inputs):
if not self._built:
self.build(inputs)
if self._stateful:
state = self.state.read_value()
else:
state = self._init_state(inputs.shape[0])
inputs = tnp.transpose(inputs, (1, 0, 2))
output = tf.scan(
lambda gru_state, x: self._cell((x, gru_state)),
inputs, state)
if self._stateful:
self.state.assign(output[-1, ...])
return tnp.transpose(output, [1, 0, 2])
def _init_state(self, batch_size):
return tnp.zeros([batch_size, self._cell._n_units], tnp.float32)
def reset_state(self):
if not self._stateful:
return
self.state.assign(tf.zeros_like(self.state))
def create_state(self, batch_size):
self.state = tf.Variable(self._init_state(batch_size))
def build(self, inputs):
s = inputs.shape[0:1] + inputs.shape[2:]
shapes = (s, s[:-1] + (self._cell._n_units,))
self._cell.build([tf.TensorSpec(x, tf.float32) for x in shapes])
if self._stateful:
self.create_state(inputs.shape[0])
else:
self.state = ()
self._built = True
@property
def weights(self):
return self._cell.weights
class Dense:
def __init__(self, n_units, activation=None):
self._n_units = n_units
self._activation = activation
self._built = False
def __call__(self, inputs):
if not self._built:
self.build(inputs)
    y = tnp.dot(inputs, self.w) + self.b
if self._activation != None:
y = self._activation(y)
return y
def build(self, inputs):
shape_w = (inputs.shape[-1], self._n_units)
lim = tnp.sqrt(6.0 / (shape_w[0] + shape_w[1]))
self.w = tf.Variable(tnp.random.uniform(-lim, lim, shape_w).astype(tnp.float32))
self.b = tf.Variable((tnp.random.randn(self._n_units) * 1e-6).astype(tnp.float32))
self._built = True
@property
def weights(self):
return (self.w, self.b)
class Model:
def __init__(self, vocab_size, embedding_dim, rnn_units, forget_bias=0.0, stateful=False, activation=None):
self._embedding = Embedding(vocab_size, embedding_dim)
self._gru = GRU(rnn_units, forget_bias=forget_bias, stateful=stateful)
self._dense = Dense(vocab_size, activation=activation)
self._layers = [self._embedding, self._gru, self._dense]
self._built = False
def __call__(self, inputs):
if not self._built:
self.build(inputs)
xs = inputs
for layer in self._layers:
xs = layer(xs)
return xs
def build(self, inputs):
self._embedding.build(inputs)
self._gru.build(tf.TensorSpec(inputs.shape + (self._embedding._embedding_dim,), tf.float32))
self._dense.build(tf.TensorSpec(inputs.shape + (self._gru._cell._n_units,), tf.float32))
self._built = True
@property
def weights(self):
return [layer.weights for layer in self._layers]
@property
def state(self):
return self._gru.state
def create_state(self, *args):
self._gru.create_state(*args)
def reset_state(self, *args):
self._gru.reset_state(*args)
model = Model(
vocab_size = vocab_size,
embedding_dim=embedding_dim,
rnn_units=rnn_units,
stateful=True)
Explanation: Build The Model
We manually implement the model from scratch, using tf.numpy and some low-level TF ops. A Model object has three layers: Embedding, GRU and Dense. Embedding and Dense are little more than just wrappers around tnp.take and tnp.dot, but we can use them to familiarize ourself with the structure of a layer. Each layer has two essential methods: build and __call__. build creates and initializes the layer's weights and state, which are things that change during the training process. __call__ is the forward function that calculates outputs given inputs, using the layer's weights and state internally.
Our model (more precisely the GRU layer) is stateful, because each call of __call__ will change its internal state, affecting the next call.
End of explanation
for input_example_batch, target_example_batch in dataset.take(1):
input_example_batch = tnp.asarray(input_example_batch)
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
End of explanation
example_batch_predictions[0]
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length:
To get actual predictions from the model we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
Try it for the first example in the batch:
End of explanation
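# For contrast (my own aside): greedy argmax decoding, which the note above warns
# against, would look like this. It tends to get stuck repeating high-probability
# characters once training has progressed.
greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1).numpy()
greedy_indices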
sampled_indices
Explanation: This gives us, at each timestep, a prediction of the next character index:
End of explanation
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
Explanation: Decode these to see the text predicted by this untrained model:
End of explanation
def one_hot(labels, n):
return (labels[..., np.newaxis] == tnp.arange(n)).astype(np.float32)
def loss_fn(labels, predictions):
predictions = tf.nn.log_softmax(predictions)
return -tnp.sum(predictions * one_hot(tnp.asarray(labels), predictions.shape[-1]), axis=-1)
example_batch_loss = loss_fn(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", tnp.mean(example_batch_loss))
Explanation: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Loss function
We define the loss function from scratch, using tf.nn.log_softmax. (Our definition is the same as tf.keras.losses.sparse_categorical_crossentropy.)
End of explanation
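# Optional sanity check, not in the original notebook: the hand-written loss should agree with
# Keras' sparse categorical cross-entropy when the model outputs are treated as logits.
keras_loss = tf.keras.losses.sparse_categorical_crossentropy(
    target_example_batch, example_batch_predictions, from_logits=True)
print("max abs difference: ", tnp.max(tnp.abs(tnp.asarray(keras_loss) - example_batch_loss)))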
class Adam:
def __init__(self, learning_rate=0.001, b1=0.9, b2=0.999, eps=1e-7):
self._lr = learning_rate
self._b1 = b1
self._b2 = b2
self._eps = eps
self._built = False
def build(self, weights):
self._m = tf.nest.map_structure(lambda x: tf.Variable(tnp.zeros_like(x)), weights)
self._v = tf.nest.map_structure(lambda x: tf.Variable(tnp.zeros_like(x)), weights)
self._step = tf.Variable(tnp.asarray(0, np.int64))
self._built = True
def _update(self, weights_var, grads, m_var, v_var):
b1 = self._b1
b2 = self._b2
eps = self._eps
step = tnp.asarray(self._step, np.float32)
lr = self._lr
weights = tnp.asarray(weights_var)
m = tnp.asarray(m_var)
v = tnp.asarray(v_var)
m = (1 - b1) * grads + b1 * m # First moment estimate.
v = (1 - b2) * (grads ** 2) + b2 * v # Second moment estimate.
mhat = m / (1 - b1 ** (step + 1)) # Bias correction.
vhat = v / (1 - b2 ** (step + 1))
weights_var.assign_sub((lr * mhat / (tnp.sqrt(vhat) + eps)).astype(weights.dtype))
m_var.assign(m)
v_var.assign(v)
def apply_gradients(self, weights, grads):
if not self._built:
self.build(weights)
tf.nest.map_structure(lambda *args: self._update(*args), weights, grads, self._m, self._v)
self._step.assign_add(1)
@property
def state(self):
return (self._step, self._m, self._v)
optimizer = Adam()
Explanation: Optimizer
Keeping the DIY spirit, we implement the Adam optimizer from scratch.
End of explanation
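# A tiny smoke test, for illustration only, using a throwaway Adam instance so the state of the
# real `optimizer` above is untouched: with zero-initialized moments, the first bias-corrected
# step moves a weight against its gradient by roughly the learning rate.
_w = [tf.Variable(tnp.asarray(1.0, np.float32))]
_opt = Adam(learning_rate=0.1)
_opt.apply_gradients(_w, [tnp.asarray(1.0, np.float32)])
print(_w[0].numpy())  # expected to be close to 1.0 - 0.1 = 0.9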
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
# tape.watch(tf.nest.flatten(weights))
predictions = model(inp)
loss = tnp.mean(loss_fn(target, predictions))
weights = model.weights
grads = tape.gradient(loss, weights)
optimizer.apply_gradients(weights, grads)
return loss
# Training step
EPOCHS = 10
model.create_state(BATCH_SIZE)
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
model.reset_state()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
print ('Epoch {} Loss {}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Training loop
Again, we write our training loop from scratch.
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
End of explanation
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.create_state(1)
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted character as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
Explanation: Generate text
The following code block generates the text:
It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.
Get the prediction distribution of the next character using the start string and the RNN state.
Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.
The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters.
Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
To keep this prediction step simple, use a batch size of 1.
End of explanation |
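# Illustration only: the `temperature` constant inside generate_text rescales the logits before
# sampling. Lower temperature sharpens the distribution (more predictable text), higher
# temperature flattens it (more surprising text).
example_logits = tnp.asarray([2.0, 1.0, 0.1], np.float32)
for temperature in (0.5, 1.0, 2.0):
    print(temperature, tf.nn.softmax(example_logits / temperature).numpy())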
3,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-hr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: AWI
Source ID: AWI-CM-1-0-HR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:37
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
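# Hypothetical example only; the names and e-mail addresses below are placeholders, not the real
# author list. Each author is registered with one DOC.set_author call, following the template above.
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_author("John Smith", "john.smith@example.org")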
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
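# Illustrative note on the property cells that follow (the values shown here are hypothetical and
# are not AWI-CM-1-0-HR's actual documentation): each cell pins a property with DOC.set_id(...)
# and is then answered with DOC.set_value(...), following that cell's "# Set as follows" template.
# For an ENUM property the value must be one of the listed "Valid Choices", e.g.
# DOC.set_value("TEOS-10")
# while STRING, FLOAT, INTEGER and BOOLEAN properties take a literal, e.g.
# DOC.set_value(-1.8)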
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
3,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[WIP] From Climatology Test to Anomaly Detection
Objective
Step1: Synthetic data
Let's create some synthetic data to illustrate some concepts.
Step3: What does this dataset look like?
Step4: Data Distribution
Let's plot a histogram
Step5: We know that this dataset has a normal distribution, so we can approximate it with a Gaussian.
Step6: Bad data
Let's add some bad measurements in random positions on our dataset
Step7: Climatology Test
Note that if the number of bad measurements is small, it doesn't compromise the estimate of the mean and standard deviation.
This is the concept of the climatology test. Any value beyond 3 standard deviations is still possible, but improbable. As long as the data are actually normally distributed and there are enough observations to estimate the mean and standard deviation, we can model it and easily predict how improbable a measurement would be.
This is a good solution, more restrictive than the Global Range test, but it doesn't cover everything: bad measurements within the range of feasible values are still possible.
Different perspectives from different tests
Let's consider another case where the data has some periodicity.
Step8: Most of the bad data is clearly distinct from the good data pattern, but is inside the feasible range so the climatology can't do much to distinguish the good from bad data.
Let's try a different test, the gradient check.
Step9: The spike projects the original data into a new space, and this projection is commonly called a "feature" in the Machine Learning world. Note that the spike feature allows us to better distinguish the good data from the bad data.
Gronell & Wijffels, 2008
Beyond the climatology of actual measurements, let's do climatologies of features, such as gradient and spike.
Step10: Climatology Test
Any value beyond 3 standard deviations is still possible, but improbable. This is the traditional climatology test. As long as the observations are actually normally distributed and there are enough observations to estimate the mean and standard deviation, we can model it and easily predict how improbable a measurement would be.
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
import numpy as np
from scipy import stats
import cotede
output_notebook()
Explanation: [WIP] From Climatology Test to Anomaly Detection
Objective:
Explain the concept of the Anomaly Detection approach to quality control
Create a synthetic conceptual case with random, normally distributed data on 3 dimensions. Each dimension is normal, so bad data cannot necessarily be seen in every dimension, but it might be visible in one single dimension, and it can explore the corners.
End of explanation
# Number of samples
N = 3000
# True mean and standard deviation of this dataset
mu, sigma = 0, 1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
x = np.random.normal(mu, sigma, N)
# w = np.blackman(11)
# x = np.convolve(x, w, 'same')
Explanation: Synthetic data
Let's create some synthetic data to illustrate some concepts.
End of explanation
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
def plot_hist(hist, edges):
"""Plot a histogram
Create a histogram figure from the output of numpy.histogram().
We will create several histograms in this notebook, so let's save this as a function to
reuse this code.
"""
#title = 'test'
# p = figure(title=title, tools='', background_fill_color="#fafafa")
p = figure(plot_width=750, plot_height=300,
tools='', background_fill_color="#fafafa")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],
fill_color="navy", line_color="white", alpha=0.5)
# p.line(x, pdf, line_color="#ff8888", line_width=4, alpha=0.7, legend_label="PDF")
# p.line(x, cdf, line_color="orange", line_width=2, alpha=0.7, legend_label="CDF")
p.y_range.start = 0
# p.legend.location = "center_right"
# p.legend.background_fill_color = "#fefefe"
p.xaxis.axis_label = 'x'
p.yaxis.axis_label = 'Pr(x)'
p.grid.grid_line_color="white"
return p
Explanation: What does this dataset look like?
End of explanation
hist, edges = np.histogram(x, density=True, bins=50)
p = plot_hist(hist, edges)
show(p)
Explanation: Data Distribution
Let's plot a histogram
End of explanation
mu_estimated, sigma_estimated = stats.norm.fit(x)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
# sf = stats.norm.sf(x_ref, loc=mu_estimated, scale=sigma_estimated)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
show(p)
Explanation: We know that this dataset has a normal distribution, so we can approximate it with a Gaussian.
End of explanation
N_bad = 5
idx = np.random.permutation(x.size)[:N_bad]
x[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
print(sorted(x[idx]))
idx_good = [tn not in idx for tn in t]
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Some bad measurements")
p.circle(t[idx_good], x[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
# p.line([0, N], 2*[-6 * sigma], line_color="orange", line_width=3, alpha=0.7)
# p.line([0, N], 2*[6 * sigma], line_color="orange", line_width=3, alpha=0.7)
show(p) # show the results
mu_estimated, sigma_estimated = stats.norm.fit(x)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
p.triangle(x[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
Explanation: Bad data
Let's add some bad measurements in random positions on our dataset
End of explanation
x2 = x + 2 * np.sin(2 * np.pi * t/1000)
x2[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x2[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x2[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
mu_estimated, sigma_estimated = stats.norm.fit(x2)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
hist, edges = np.histogram(x2, density=True, bins=50)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
p.triangle(x2[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
Explanation: Climatology Test
Note that if the number of bad measurements is small, it doesn't compromise the estimate of the mean and standard deviation.
This is the concept of the climatology test. Any value beyond 3 standard deviations is still possible, but improbable. As long as the data are actually normally distributed and there are enough observations to estimate the mean and standard deviation, we can model it and easily predict how improbable a measurement would be.
This is a good solution, more restrictive than the Global Range test, but it doesn't cover everything: bad measurements within the range of feasible values are still possible.
Different perspectives from different tests
Let's consider another case where the data has some periodicity.
End of explanation
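# A minimal sketch of the 3-standard-deviation rule described above (this is not CoTeDe's
# implementation): fit a Gaussian to the data and flag everything far outside it as improbable.
clim_mu, clim_sigma = stats.norm.fit(x)
climatology_flag = np.abs(x - clim_mu) > 3 * clim_sigma
print("Flagged as improbable:", int(climatology_flag.sum()), "of", x.size, "measurements")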
import cotede.qctests
y_gradient = cotede.qctests.gradient(x2)
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Spike")
p.circle(t[idx_good], y_gradient[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], y_gradient[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
import cotede.qctests
y_spike = np.abs(cotede.qctests.tukey53H(x2))
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Spike")
p.circle(t[idx_good], y_spike[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], y_spike[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
Explanation: Most of the bad data is clearly distinct from the good data pattern, but is inside the feasible range so the climatology can't do much to distinguish the good from bad data.
Let's try a different test, the gradient check.
End of explanation
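# Roughly what the gradient feature computes for an interior point: the difference between
# a value and the average of its two neighbours. This is a sketch only -- see the CoTeDe
# source for the exact sign convention and edge handling.
def gradient_sketch(v):
    g = np.full(v.shape, np.nan)
    g[1:-1] = v[1:-1] - (v[:-2] + v[2:]) / 2.0
    return g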
gradient_mu, gradient_sigma = stats.norm.fit(y_gradient[np.isfinite(y_gradient)])
gradient_mu, gradient_sigma
gradient_mu, gradient_sigma = stats.norm.fit(y_gradient[np.isfinite(y_gradient)])
y_ref = np.linspace(np.nanmin(y_gradient), np.nanmax(y_gradient), 50)
gradient_pdf = stats.norm.pdf(y_ref, loc=gradient_mu, scale=gradient_sigma)
gradient_hist, gradient_edges = np.histogram(y_gradient[np.isfinite(y_gradient)], density=True, bins=50)
p = plot_hist(gradient_hist, gradient_edges)
p.line(y_ref, gradient_pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
p.triangle(y_gradient[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
spike_mu, spike_sigma = stats.norm.fit(y_spike[np.isfinite(y_spike)])
y_ref = np.linspace(np.nanmin(y_spike), np.nanmax(y_spike), 50)
spike_pdf = stats.norm.pdf(y_ref, loc=spike_mu, scale=spike_sigma)
spike_hist, spike_edges = np.histogram(y_spike[np.isfinite(y_spike)], density=True, bins=50)
p = plot_hist(spike_hist, spike_edges)
p.line(y_ref, spike_pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
p.triangle(y_spike[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
y_gradient = cotede.qctests.gradient(x2)
p = figure(plot_width=750, plot_height=300, title="Spike")
p.circle(y[idx_good], y_gradient[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(y[idx], y_gradient[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
x3 = x/20 + 2 * np.sin(2 * np.pi * t/2000)
# x2[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x2[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x2[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
x3 = x/20 + 2 * np.cos(2 * np.pi * t/6000)
x3[1150:1250] += np.random.normal(0, .2, 100)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x3[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
# p.triangle(t[idx], x3[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
y4 = cotede.qctests.rate_of_change(x3)
p = figure(plot_width=750, plot_height=300)
p.circle(t, y4, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
# p.triangle(t[idx], x3[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
y.compressed()
import matplotlib.pyplot as plt
plt.hist(y)
spike_hist
stats.norm.pdf(x[idx], loc=mu_estimated, scale=sigma_estimated)
pdf = stats.norm.cdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
pdf
from seabird import fCNV
!pip install seabird
data = fCNV('/Users/castelao/work/science/articles/cotedepaper/data/dPIRX010.cnv')
p = figure(plot_width=500, plot_height=600)
p.circle(data['TEMP'], -data['PRES'], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)
plt.hist(cotede.qctests.rate_of_change(data['TEMP']), 50)
Explanation: The spike test projects the original data into a new space, and this projection is commonly called a "feature" in the Machine Learning world. Note that the spike feature allows us to better distinguish the good data from the bad data.
Gronell & Wijffels, 2008
Beyond the climatology of actual measurements, let's do climatologies of features, such as gradient and spike.
End of explanation
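# Putting the feature climatologies to work (sketch): flag the points whose gradient or spike
# value is improbable under the Gaussians fitted above. The 6-sigma cutoff is an arbitrary
# choice made here for illustration.
gradient_suspect = np.abs(y_gradient - gradient_mu) > 6 * gradient_sigma
spike_suspect = np.abs(y_spike - spike_mu) > 6 * spike_sigma
print(gradient_suspect.sum(), spike_suspect.sum())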
# Number of samples
N = 300
N_bad = 24
# True mean and standard deviation of this dataset
mu, sigma = 0, 0.1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
noise = np.random.normal(mu, sigma, N)
x = 3 * np.sin(2 * np.pi * t / 190 + 0.3) + noise
chunk = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
x[160:160+chunk.size] += chunk
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5, legend_label="Good values")
# p.triangle(data["epoch"][idx_bad], data["water_level"][idx_bad], size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
# Number of samples
N = 3000
# True mean and standard deviation of this dataset
mu, sigma = 0, 1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
x = np.random.normal(mu, sigma, N)
x = np.cumsum(x-np.mean(x))
np.mean(x)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
N_bad = 5
idx = np.random.permutation(x.size)[:N_bad]
x[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
print(sorted(x[idx]))
x[idx]
idx_good = [tn not in idx for tn in t]
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Some bad measurements")
p.circle(t[idx_good], x[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
# p.line([0, N], 2*[-6 * sigma], line_color="orange", line_width=3, alpha=0.7)
# p.line([0, N], 2*[6 * sigma], line_color="orange", line_width=3, alpha=0.7)
show(p) # show the results
Explanation: Climatology Test
Any value beyond 3 standard deviations is still possible, but improbable. This is the traditional climatology test. As long as the observations are actually normally distributed and there are enough observations to estimate the mean and standard deviation, we can model it and easily predict how improbable a measurement would be.
End of explanation |
3,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
# Setting the size of the image to be encode and decoded
img_size = 784
# Setting the learning rate
learning_rate = 0.01
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, img_size], name='inputs')
targets_ = tf.placeholder(tf.float32, [None, img_size], name='labels')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(inputs=encoded, units=img_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
3,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Precursors
Step1: SNP activity difference compute
Analyzing noncoding variation associated with disease is a major application of Basenji. I now offer several tools to enable that analysis. If you have a small set of variants and know what datasets are most relevant, basenji_sat_vcf.py lets you perform a saturation mutagenesis of the variant and surrounding region to see the relevant nearby motifs.
If you want scores measuring the influence of those variants on all datasets,
* basenji_sad.py computes my SNP activity difference (SAD) score--the predicted change in aligned fragments to the region.
* basenji_sed.py computes my SNP expression difference (SED) score--the predicted change in aligned fragments to gene TSS's.
Here, I'll demonstrate those two programs. You'll need
* Trained model
* Input file (FASTA or HDF5 with test_in/test_out)
First, you can either train your own model in the Train/test tutorial or use one that I pre-trained from the models subdirectory.
As an example, we'll study a prostate cancer susceptibility allele of rs339331 that increases RFX6 expression by modulating HOXB13 chromatin binding (http
Step2: SNP activity difference output
The output HDF5 stores the SNP and target information and predicted scores. | Python Code:
if not os.path.isfile('data/hg19.ml.fa'):
subprocess.call('curl -o data/hg19.ml.fa https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa', shell=True)
subprocess.call('curl -o data/hg19.ml.fa.fai https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa.fai', shell=True)
if not os.path.isdir('models/heart'):
os.mkdir('models/heart')
if not os.path.isfile('models/heart/model_best.h5'):
subprocess.call('curl -o models/heart/model_best.h5 https://storage.googleapis.com/basenji_tutorial_data/model_best.h5', shell=True)
lines = [['index','identifier','file','clip','sum_stat','description']]
lines.append(['0', 'CNhs11760', 'data/CNhs11760.bw', '384', 'sum', 'aorta'])
lines.append(['1', 'CNhs12843', 'data/CNhs12843.bw', '384', 'sum', 'artery'])
lines.append(['2', 'CNhs12856', 'data/CNhs12856.bw', '384', 'sum', 'pulmonic_valve'])
samples_out = open('data/heart_wigs.txt', 'w')
for line in lines:
print('\t'.join(line), file=samples_out)
samples_out.close()
Explanation: Precursors
End of explanation
! basenji_sad.py -f data/hg19.ml.fa -o output/rfx6_sad --rc --shift "1,0,-1" -t data/heart_wigs.txt models/params_small.json models/heart/model_best.h5 data/rs339331.vcf
Explanation: SNP activity difference compute
Analyzing noncoding variation associated with disease is a major application of Basenji. I now offer several tools to enable that analysis. If you have a small set of variants and know what datasets are most relevant, basenji_sat_vcf.py lets you perform a saturation mutagenesis of the variant and surrounding region to see the relevant nearby motifs.
If you want scores measuring the influence of those variants on all datasets,
* basenji_sad.py computes my SNP activity difference (SAD) score--the predicted change in aligned fragments to the region.
* basenji_sed.py computes my SNP expression difference (SED) score--the predicted change in aligned fragments to gene TSS's.
Here, I'll demonstrate those two programs. You'll need
* Trained model
* Input file (FASTA or HDF5 with test_in/test_out)
First, you can either train your own model in the Train/test tutorial or use one that I pre-trained from the models subdirectory.
As an example, we'll study a prostate cancer susceptibility allele of rs339331 that increases RFX6 expression by modulating HOXB13 chromatin binding (http://www.nature.com/ng/journal/v46/n2/full/ng.2862.html).
First, we'll use basenji_sad.py to predict across the region for each allele and compute stats about the mean and max differences.
The most relevant options are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| -f | data/hg19.ml.fa | Genome fasta. |
| -g | data/human.hg19.genome | Genome assembly chromosome length to bound gene sequences. |
| -o | output/rfx6_sad | Output plot directory. |
| --rc | True | Ensemble predictions for forward and reverse complement sequences. |
| --shift | 1,0,-1 | Ensemble predictions for sequences shifted by 1, 0, and -1 bp. |
| -t | data/heart_wigs.txt | Target labels. |
| params_file | models/params_small.json | JSON specified parameters to setup the model architecture and optimization. |
| model_file | models/heart/model_best.h5 | Trained saved model parameters. |
| vcf_file | data/rs339331.vcf | VCF file specifying variants to score. |
End of explanation
sad_h5 = h5py.File('output/rfx6_sad/sad.h5', 'r')
list(sad_h5.keys())
for snp_key in ['snp', 'chr', 'pos', 'ref_allele']:
print(snp_key, sad_h5[snp_key][:])
for ti in range(3):
cols = (ti, sad_h5['SAD'][0,ti], sad_h5['target_ids'][ti], sad_h5['target_labels'][ti])
print('%2d %7.4f %12s %s' % cols)
Explanation: SNP activity difference output
The output HDF5 stores the SNP and target information and predicted scores.
End of explanation |
3,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hello World!
Un-attributed images in the presentation are author's own creation. To use them, check the attributions here
Step1: Hmm.. DFs look similar to SQL Tables, don't they?
<span style="color
Step2: Subsetting DataSets
<center><img src="Github/khalaq/tech_quotes/datapoints.jpg" width="500" height="550" ></center>
Why do we need subsets?
We want to screen out anomalous, partial datapoints.
We want to divide our huge data into small chunks and apply different analysis on each set.
Divide data into train set and test set.
Ways of subsetting in Pandas
Step3: Merging Data Frames
How to merge?
concat()
merge()
append()
Vertically merge
Horizontal merge
Inner JOIN merge
Outer JOIN merge
Add new Keys while merging
<center><img src="Github/khalaq/tech_images/df_merge.jpg" width="700" height="700" ></center>
Real Power | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#so that we can view the graphs inside the notebook
df = pd.read_csv("wine.csv")
df.head(3)
Explanation: Hello World!
Un-attributed images in the presentation are author's own creation. To use them, check the attributions here:
https://github.com/sara-02/khalaq
Data Wrangling with Python Pandas
An Introduction for newbies
- [Sarah Masud](https://github.com/sara-02)
How is Data Wrangling different from Machine Learning?
<span style="color:red">JUNK IN</span> ==> ML Model ==> <span style="color:red">JUNK OUT</span>
Data preparation is the key to success in ML!
Data Analysis is a part and parcel of ML and the MOST important one.
Why do we need pandas, why not our dear excel?
Fundamental Data Types in Pandas
Series
Dataframes (focus of this presentation)
DataFrames are 2-D labeled arrays with indexing on both rows and columns
End of explanation
df.head()
df.tail()
df['deaths'].count()
df['deaths'].min()
df['deaths'].max()
df['deaths'].mean()
df['deaths'].describe()
df['deaths'].plot(kind='box')
Explanation: Hmm.. DFs look similar to SQL Tables, don't they?
<span style="color:green">Similarity</span>: The arrangement of data to in tabular format
<span style="color:green">Similarity</span>: Ability to perform JOIN operations
<span style="color:red">Disimilarity</span>: Pandas is not a language
<span style="color:red">Disimilarity</span>: You don't use Pandas as a datastore/backend
<span style="color:red">Disimilarity</span>: No need define schema in Pandas
Advantage of Pandas over SQL for data analysis
Performs automatic data alignment.
Performs faster subsetting.
Real Power Load data from various backend sources, making backend aganostic analysis.
Real Power Store the result in any and as many datasources as needed.
Basic Stats on Loaded Data
Getting to know how the data looks
End of explanation
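# Quick illustration of the "backend agnostic" point above: the same DataFrame API works
# regardless of where the data comes from. An in-memory SQLite database is used here purely
# for demonstration (the table name 'wine' is made up for this example).
import sqlite3
conn = sqlite3.connect(':memory:')
df.to_sql('wine', conn, index=False)
pd.read_sql('SELECT * FROM wine', conn).head(3)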
num = range(1,6)
mul2 = [x*2 for x in num]
mul3 = [x*3 for x in num]
mul4 = [x*4 for x in num]
mul5 = [x*35 for x in num]
data = [num, mul2, mul3, mul4, mul5]
df1 = pd.DataFrame(data, index=['v', 'w', 'x', 'y', 'z'], columns=['A', 'B','C','D', 'E'])
df1
#### Only Column
df1[['A']]
#### Only Row
df1.loc[['v']]
df1.loc[['v','w'],['A','B']] # rows and columns
df1.iloc[0:2, 0:2] #Using default index numbers
Explanation: Subsetting DataSets
<center><img src="Github/khalaq/tech_quotes/datapoints.jpg" width="500" height="550" ></center>
Why do we need subsets?
We want to screen out anomalous, partial datapoints.
We want to divide our huge data into small chunks and apply different analysis on each set.
Divide data into train set and test set.
Ways of subsetting in Pandas:
By Column names
By row labels
By row-column index
Combination of both labels and columns
End of explanation
df.groupby('country')['deaths'].mean().plot(kind='bar')
Explanation: Merging Data Frames
How to merge?
concat()
merge()
append()
Vertically merge
Horizontal merge
Inner JOIN merge
Outer JOIN merge
Add new Keys while merging
<center><img src="Github/khalaq/tech_images/df_merge.jpg" width="700" height="700" ></center>
Real Power: Merge from different datasources!
Inline Visualization
Visualize as you analyze.
End of explanation |
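# A small, self-contained demonstration of the merge/concat operations listed above.
# df_a and df_b are toy frames invented just for this example.
df_a = pd.DataFrame({'key': ['k1', 'k2', 'k3'], 'val_a': [1, 2, 3]})
df_b = pd.DataFrame({'key': ['k2', 'k3', 'k4'], 'val_b': [20, 30, 40]})
pd.concat([df_a, df_b])                       # vertical merge
pd.concat([df_a, df_b], axis=1)               # horizontal merge
pd.merge(df_a, df_b, on='key', how='inner')   # inner JOIN merge
pd.merge(df_a, df_b, on='key', how='outer')   # outer JOIN merge
df_a.append(df_b)                             # append() is a shortcut for a vertical concat
pd.concat([df_a, df_b], keys=['a', 'b'])      # add new keys while merging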
3,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Procedural Python and Unit Tests
In this section, our main goal will be to outline how to go from the kind of trial-and-error exploratory data analysis we explored this morning, into a nice, linear, reproducible analysis.
Step1: Step 1
Step2: Step 2
Step4: Use Python to unzip and load the data
Step5: (paste the above function in pronto_utils.py)
Step6: Step 3
Step7: (paste the above function in pronto_utils.py)
Step8: Breakout | Python Code:
import this
Explanation: Procedural Python and Unit Tests
In this section, our main goal will be to outline how to go from the kind of trial-and-error exploratory data analysis we explored this morning, into a nice, linear, reproducible analysis.
End of explanation
URL = "https://s3.amazonaws.com/pronto-data/open_data_year_one.zip"
import urllib.request
urllib.request.urlretrieve?
import os
os.path.exists('open_data_year_one.zip')
# Python 2:
# from urllib import urlretrieve
# Python 3:
from urllib.request import urlretrieve
import os
def download_if_needed(url, filename, force_download=False):
if force_download or not os.path.exists(filename):
urlretrieve(url, filename)
else:
pass
download_if_needed(URL, 'open_data_year_one.zip')
!ls
Explanation: Step 1: Downloading the Data
We want a function that will download the data automatically if it does not already exist.
End of explanation
from pronto_utils import download_if_needed
download_if_needed(URL, 'open_data_year_one.zip')
Explanation: Step 2: Make a Package
Now that this function works, let's create a Python package that we can import it from
(Use a text editor to edit pronto_utils.py)
End of explanation
import zipfile
import pandas as pd
def load_trip_data(filename='open_data_year_one.zip'):
    """Load trip data from the zipfile; return as DataFrame."""
download_if_needed(URL, filename)
zf = zipfile.ZipFile(filename)
return pd.read_csv(zf.open('2015_trip_data.csv'))
data = load_trip_data()
data.head()
Explanation: Use Python to unzip and load the data:
End of explanation
from pronto_utils import load_trip_data
data = load_trip_data()
data.head()
Explanation: (paste the above function in pronto_utils.py)
End of explanation
import pandas as pd
from pronto_utils import load_trip_data
def test_trip_data():
df = load_trip_data()
assert isinstance(df, pd.DataFrame)
assert df.shape == (142846, 12)
test_trip_data()
Explanation: Step 3: Write a Unit Test
Let's write a unit test to make sure our download script works properly. We will use pytest here.
End of explanation
!py.test pronto_utils.py
Explanation: (paste the above function in pronto_utils.py)
End of explanation
%matplotlib inline
def plot_totals_by_birthyear():
df = load_trip_data()
totals_by_birthyear = df.birthyear.value_counts().sort_index()
return totals_by_birthyear.plot(linestyle='steps')
plot_totals_by_birthyear()
def test_plot_totals():
ax = plot_totals_by_birthyear()
assert len(ax.lines) == 1
import numpy as np
import matplotlib as mpl
def test_plot_totals_by_birthyear():
ax = plot_totals_by_birthyear()
# Some tests of the output that dig into the
# matplotlib internals
assert len(ax.lines) == 1
line = ax.lines[0]
x, y = line.get_data()
assert np.all((x > 1935) & (x < 2000))
assert y.mean() == 1456
test_plot_totals_by_birthyear()
!py.test pronto_utils.py
Explanation: Breakout: Add functionality
Working in pairs, do the following:
create a function that will plot an interesting aspect of this data
once you are happy with the function, copy it into the Python package you have created
Write a "smoke-test" – this is a test that calls the function, but doesn't necessarily validate the output. This can be useful for "testing" plotting functions, because it's generally difficult to programatically evaluate the plot output itself
For step 3, you'll have to tell matplotlib not to invoke the graphical backend, which you can do by putting the following at the top of the test file:
python
import matplotlib as mpl
mpl.use('Agg') # Don't invoke graphical backend for plots
If you want to go farther with testing the output of your plot, matplotlib has some useful plot testing tools that you can use.
End of explanation |
3,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PMOD TC1 Sensor demonstration
This demonstration shows how to use the PmodTC1. You will also see how to plot a graph using matplotlib.
The PmodTC1 is required.
The thermocouple sensor is initialized and set to log a reading every 1 second. The temperature of the sensor
can be changed by touching it with warm fingers or by blowing on it.
1. Use TC1 read() to read the current temperature
Step1: 2. Starting logging temperature once every second
Step2: 3. Modifying the temperature
Touch the thermocouple with warm fingers; or
Blow on the thermocouple with cool air
Stop the logging whenever you are finished trying to change the sensor's value.
Step3: 4. Plot values over time | Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.iop import Pmod_TC1
from pynq.iop import PMODB
# TC1 sensor is on PMODB
my_tc1 = Pmod_TC1(PMODB)
r = my_tc1.read()
print('Raw Register Value: %08x hex' % r)
print('Ref Junction Temp: %.4f' % my_tc1.reg_to_ref(r))
print('Thermocouple Temp: %.2f' % my_tc1.reg_to_tc(r))
print('Alarm flags: %08x hex' % my_tc1.reg_to_alarms(r))
Explanation: PMOD TC1 Sensor demonstration
This demonstration shows how to use the PmodTC1. You will also see how to plot a graph using matplotlib.
The PmodTC1 is required.
The thermocouple sensor is initialized and set to log a reading every 1 second. The temperature of the sensor
can be changed by touching it with warm fingers or by blowing on it.
1. Use TC1 read() to read the current temperature
End of explanation
my_tc1.start_log()
Explanation: 2. Starting logging temperature once every second
End of explanation
my_tc1.stop_log()
log = my_tc1.get_log()
Explanation: 3. Modifying the temperature
Touch the thermocouple with warm fingers; or
Blow on the thermocouple with cool air
Stop the logging whenever you are finished trying to change the sensor's value.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
tc = [my_tc1.reg_to_tc(v) for v in log]
ref = [my_tc1.reg_to_ref(v) for v in log]
plt.plot(range(len(tc)), tc, 'ro', label='Thermocouple')
plt.plot(range(len(ref)), ref, 'bo', label='Ref Junction')
plt.title('TC1 Sensor log')
plt.axis([0, len(log), min(tc+ref)*0.9, max(tc+ref)*1.1])
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Temperature (C)')
plt.grid()
plt.show()
Explanation: 4. Plot values over time
End of explanation |
3,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sPlot
This notebook is devoted to explanation what is sPlot and how to use hep_ml.splot.
If you prefer explanation without code, find it here
sPlot is a way to reconstruct features of mixture components based on known properties of distributions. This method is frequently used in High Energy Physics.
Step1: Simple example of sPlot
First start from simple (and not very useful in practice) example.
Assume we have two types of particles (say, electrons and positrons).
Distribution of some characteristic is different for them (let this be px momentum projection).
Step2: Observed distributions
Picture above shows how this distibution should look like,
but due to inaccuracies during classification we will observe different picture.
Let's assume that with probability 80% particle is classified correctly (and we are not using px during classification).
And when we look at distribution of px for particles which were classified as electrons or positrons,
we see that they were distorted. We lost the original shapes of distributions.
Step3: Applying sWeights
We can think of it in the following way
Step4: Compare
let's compare reconstructed distribution for electrons with original
Step5: More complex case
In the case when we have only two 'bins' is simple and straightforward. But when there are more than two bins, the solution is not unique. There are many appropriate combinations of sWeights, which one to choose?
Step6: but things in practice are more complex. We have not bins, but continuos distribution (which can be treated as many bins).
Typically this is distribution over mass. By fitting mass we are able to split mixture into two parts
Step7: Of course we don't have labels which events are signal and which are background before we actually.
And we observe the mixture of two distributions
Step8: We have no information about real labels
But we know a priori that background is distributed as exponential distribution and signal - as gaussian (more complex models can be met in practice, but idea is the same).
After fitting the mixture (let me skip this process), we will get the following result
Step9: Fitting doesn't give us information about real labels
But it gives information about probabilities, thus now we can estimate number of signal and background events within each bin.
We won't use bins, but instead we will get for each event probability that it is signal or background
Step10: Appying sPlot
sPlot converts probabilities to sWeights, using implementation from hep_ml
Step11: Using sWeights to reconstruct initial distribution
Let's check that we achieved our goal and can reconstruct momentum distribution for signal and background
Step12: Important requirement of sPlot
Reconstructed variable (i.e. $p$) and splotted variable (i.e. mass) shall be statistically independent for each class.
Read the line above again. Reconstructed and splotted variable are correlated
Step13: But within each class there is no correlation, so the requirement is satisfied
Step14: as a demonstration why this is important let's use sweights to reconstruct mass (obviously mass is correlated with mass) | Python Code:
%matplotlib inline
import numpy
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = [15, 6]
size = 10000
sig_data = numpy.random.normal(-1, 1, size=size)
bck_data = numpy.random.normal(1, 1, size=size)
Explanation: sPlot
This notebook explains what sPlot is and how to use hep_ml.splot.
If you prefer explanation without code, find it here
sPlot is a way to reconstruct features of mixture components based on known properties of distributions. This method is frequently used in High Energy Physics.
End of explanation
plt.subplot(121)
plt.hist(sig_data, color='b', alpha=0.5, bins=30, label='electron')
plt.hist(bck_data, color='r', alpha=0.5, bins=30, label='positron')
plt.xlim(-5, 5), plt.xlabel('px')
plt.legend()
Explanation: Simple example of sPlot
First start from simple (and not very useful in practice) example.
Assume we have two types of particles (say, electrons and positrons).
Distribution of some characteristic is different for them (let this be px momentum projection).
End of explanation
n_sig1, n_bck1 = 8000, 2000
n_sig2, n_bck2 = 2000, 8000
first_bin = numpy.concatenate([sig_data[:n_sig1], bck_data[:n_bck1]])
second_bin = numpy.concatenate([sig_data[n_sig1:], bck_data[n_bck1:]])
plt.subplot(121)
plt.bar([0, 2], [n_sig1, n_sig2], width=1, color='b', alpha=0.5)
plt.bar([0, 2], [n_bck1, n_bck2], width=1, bottom=[n_sig1, n_sig2], color='r', alpha=0.5)
plt.xlim(-0.5, 3.5)
plt.axis('off')
plt.xticks([0.5, 2.5], ['as electrons', 'as positrons'])
plt.text(0.5, -300, 'as electron', horizontalalignment='center', verticalalignment='top', fontsize=20)
plt.text(2.5, -300, 'as positron', horizontalalignment='center', verticalalignment='top', fontsize=20)
plt.title('Proportion of events being classified as')
plt.subplot(122)
plt.hist(first_bin, alpha=0.5, bins=30, label='as electrons', color=(0.22, 0., 0.66))
plt.hist(second_bin, alpha=0.5, bins=30, label='as positrons', color=(0.66, 0., 0.22))
plt.legend()
plt.title('Distributions')
plt.xlim(-5, 5), plt.xlabel('px')
pass
Explanation: Observed distributions
Picture above shows how this distibution should look like,
but due to inaccuracies during classification we will observe different picture.
Let's assume that with probability 80% particle is classified correctly (and we are not using px during classification).
And when we look at distribution of px for particles which were classified as electrons or positrons,
we see that they were distorted. We lost the original shapes of distributions.
End of explanation
def plot_with_weights(datas, weights, **kargs):
assert len(datas) == len(weights)
data = numpy.concatenate(datas)
weight = numpy.concatenate([numpy.ones(len(d)) * w for d, w in zip(datas, weights) ])
plt.hist(data, weights=weight, alpha=0.5, bins=30, **kargs)
plt.subplot(121)
plot_with_weights([first_bin, second_bin], [n_bck2, -n_bck1], normed=True, label='reconstructed electron')
plot_with_weights([first_bin, second_bin], [-n_sig2, n_sig1], normed=True, color='r', label='reconstructed positron')
plt.xlabel('px')
plt.legend()
pass
Explanation: Applying sWeights
We can think of it in the following way: there are 2 bins. In the first, 80% are electrons and 20% are positrons, and vice versa in the second bin.
To reconstruct initial distribution, we can plot histogram, where each event from first bin has weight 0.8,
and each event from the second bin has weight -0.2. These numbers are called sWeights.
So, if we had 8000 $e^{-}$ + 2000 $e^{+}$ in the first bin and 8000 $e^{+}$ + 2000 $e^{-}$ in the second ($ e^-, e^+$ are electron and positron), then after summing with the introduced sWeights:
$$
\big[ 8000 e^{-} + 2000 e^{+} \big] \times 0.8 + \big[ 2000 e^{-} + 8000 e^{+} \big] \times (- 0.2) =
6000 e^{-}
$$
Positrons with positive and negative weights compensated each other, and we will get pure electrons.
At this moment we ignore normalization of sWeights (because it doesn't play role when we want to reconstruct shape).
End of explanation
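# Quick numeric check of the bookkeeping above: with weights 0.8 and -0.2 the positrons
# cancel out and 6000 electrons remain.
print(0.8 * 8000 - 0.2 * 2000)  # electrons kept
print(0.8 * 2000 - 0.2 * 8000)  # positrons kept (cancels to zero)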
plt.subplot(121)
plot_with_weights([first_bin, second_bin], [n_bck2, -n_bck1], normed=True, label='reconstructed electons', edgecolor='none')
plot_with_weights([sig_data], [1], normed=True, label='original electons', edgecolor='none')
plt.legend()
pass
Explanation: Compare
let's compare reconstructed distribution for electrons with original:
End of explanation
plt.subplot(121)
plt.bar([0, 2, 4], [3, 2, 1], width=1, color='b', alpha=0.5)
plt.bar([0, 2, 4], [1, 2, 3], width=1, bottom=[3, 2, 1], color='r', alpha=0.5)
plt.xlim(-1, 6)
plt.ylim(-0.5, 5)
plt.axis('off')
plt.text(0.5, -0.5, 'Bin 1', horizontalalignment='center', verticalalignment='top', fontsize=20)
plt.text(2.5, -0.5, 'Bin 2', horizontalalignment='center', verticalalignment='top', fontsize=20)
plt.text(4.5, -0.5, 'Bin 3', horizontalalignment='center', verticalalignment='top', fontsize=20)
Explanation: More complex case
The case when we have only two 'bins' is simple and straightforward. But when there are more than two bins, the solution is not unique: there are many appropriate combinations of sWeights, so which one should we choose?
End of explanation
from scipy.stats import norm, expon
size = 10000
sig_mass_distr = norm(loc=4, scale=1)
bck_mass_distr = expon(scale=4)
sig_mass = sig_mass_distr.rvs(size=size)
bck_mass = bck_mass_distr.rvs(size=size)
sig_p = numpy.random.normal(5, 1, size=size)
bck_p = numpy.random.normal(3, 1, size=size)
plt.subplot(121)
plt.hist(sig_mass, bins=20, normed=True)
plt.hist(bck_mass, bins=20, normed=True, range=(0, 10), alpha=0.5)
plt.xlabel('mass')
plt.subplot(122)
plt.hist(sig_p, bins=20, normed=True)
plt.hist(bck_p, bins=20, normed=True, range=(0, 10), alpha=0.5)
plt.xlabel('p')
Explanation: but things in practice are more complex. We don't have bins, but a continuous distribution (which can be treated as many bins).
Typically this is a distribution over mass. By fitting the mass we are able to split the mixture into two parts: the signal channel and everything else.
Building sPlot over mass
Let's show how this works. First we generate two fake distributions (signal and background) with 2 variables: mass and momentum.
End of explanation
mass = numpy.concatenate([sig_mass, bck_mass])
p = numpy.concatenate([sig_p, bck_p])
sorter = numpy.argsort(mass)
mass = mass[sorter]
p = p[sorter]
plt.subplot(121)
plt.hist(mass, bins=20, range=(0, 10))
plt.xlabel('mass')
plt.subplot(122)
plt.hist(p, bins=20)
plt.xlabel('p')
Explanation: Of course, we don't actually have labels telling us which events are signal and which are background.
And we observe the mixture of two distributions:
End of explanation
x = numpy.linspace(0, 10)
plt.hist(mass, bins=30, range=[0, 10], normed=True, alpha=0.4)
plt.plot(x, norm.pdf(x, loc=4, scale=1) / 2., label='signal')
plt.plot(x, expon.pdf(x, scale=4) / 2., label='bck')
plt.plot(x, 0.5 * (norm.pdf(x, loc=4, scale=1) + expon.pdf(x, scale=4)), label='sig + bck')
plt.legend(fontsize=20)
Explanation: We have no information about real labels
But we know a priori that the background follows an exponential distribution and the signal a Gaussian (more complex models can be met in practice, but the idea is the same).
After fitting the mixture (let me skip this process), we will get the following result:
End of explanation
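# The fit itself is skipped in the text. A minimal sketch of how it could be done when the
# two shapes are known and only the signal fraction is free (an assumption made here just to
# keep the example short):
from scipy.optimize import minimize_scalar

def mixture_nll(signal_fraction):
    pdf = signal_fraction * sig_mass_distr.pdf(mass) + (1 - signal_fraction) * bck_mass_distr.pdf(mass)
    return -numpy.sum(numpy.log(pdf))

fit_result = minimize_scalar(mixture_nll, bounds=(0.01, 0.99), method='bounded')
print(fit_result.x)  # should come out close to 0.5 for this toy sample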
import pandas
probs = pandas.DataFrame(dict(sig=sig_mass_distr.pdf(mass), bck=bck_mass_distr.pdf(mass)))
probs = probs.div(probs.sum(axis=1), axis=0)
plt.plot(mass, probs.sig, label='sig probability')
plt.plot(mass, probs.bck, label='bck probability')
plt.xlim(0, 10), plt.legend(), plt.xlabel('mass')
Explanation: Fitting doesn't give us information about real labels
But it gives information about probabilities, so now we can estimate the number of signal and background events within each bin.
We won't use bins, though; instead we will get, for each event, the probability that it is signal or background:
End of explanation
from hep_ml import splot
sWeights = splot.compute_sweights(probs)
plt.plot(mass, sWeights.sig, label='sig sWeight')
plt.plot(mass, sWeights.bck, label='bck sWeight')
plt.xlim(0, 10), plt.legend(), plt.xlabel('mass')
Explanation: Applying sPlot
sPlot converts probabilities to sWeights, using the implementation from hep_ml:
As you can see, there are also negative sWeights, which are needed to compensate the contributions of the other class.
End of explanation
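# Two standard sanity checks on sWeights (classic sPlot properties, assuming the mass model
# describes the whole sample): per event the sWeights of all classes sum to one, and the
# sWeights of a single class sum to the estimated yield of that class.
print(numpy.allclose(sWeights.sum(axis=1), 1))
print(sWeights.sum(axis=0))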
plt.subplot(121)
hist_conf = dict(bins=30, alpha=0.5, range=[0, 10])
plt.hist(sig_p, label='original sig p', **hist_conf)
plt.hist(p, weights=sWeights.sig, label='reconstructed sig p', **hist_conf)
plt.xlabel('p'), plt.legend()
plt.subplot(122)
plt.hist(bck_p, label='original bck p', **hist_conf)
plt.hist(p, weights=sWeights.bck, label='reconstructed bck p', **hist_conf)
plt.xlabel('p'), plt.legend()
Explanation: Using sWeights to reconstruct initial distribution
Let's check that we achieved our goal and can reconstruct momentum distribution for signal and background:
End of explanation
numpy.corrcoef(abs(mass - 4), p) [0, 1]
Explanation: Important requirement of sPlot
The reconstructed variable (here $p$) and the splotted variable (here the mass) must be statistically independent within each class.
Read the line above again. Overall, the reconstructed and splotted variables are correlated:
End of explanation
print(numpy.corrcoef(abs(sig_mass - 4), sig_p)[0, 1])
print(numpy.corrcoef(abs(bck_mass - 4), bck_p)[0, 1])
Explanation: But within each class there is no correlation, so the requirement is satisfied:
End of explanation
plt.subplot(121)
hist_conf = dict(bins=30, alpha=0.5, range=[-1, 7])
plt.hist(sig_mass, label='original sig mass', **hist_conf)
plt.hist(mass, weights=sWeights.sig, label='reconstructed sig mass', **hist_conf)
plt.xlabel('mass'), plt.legend()
Explanation: As a demonstration of why this is important, let's use sWeights to reconstruct the mass itself (obviously the mass is correlated with itself):
End of explanation |
3,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
Activation Functions
Auxiliary Functions
Cost Functions
Weight Initialization
Regularization
Learning Rate Decay
Batch Normalization
Batch Generator
Implementation
Testing the Implementation
Examples from the Intuition Notebook
Regression
Simple Linear Regression
Multivariate Linear Regression
Quadratic Regression
Cubic Regression
Logarithmic Regression
Exponential Regression
Binary Classification
AND/OR Gate
XOR Gate
2 Clusters
4 Clusters
Circles
Moons
Spiral
Multiclass Classification
3 Clusters, Multiclass
4 Clusters, Multiclass
Spiral - 5 Classes
Make Classification - 4 Classes
Iris Dataset
References
Imports and Settings
Step1: Activation Functions
Step2: Auxiliary Functions
Step3: Cost Functions
For Regression
For Binary Classification
For Multiclass Classification
Step4: Weight Initialization
Regularization
Batch Generator
Learning Rate Decay
Batch Normalization
Implementation
Examples from the Intuition Notebook
Example 1
Step5: Example 2
Step6: Gradient Checking
Regression
Simple Linear Regression - Perceptron Example
Step7: Multivariate Linear Regression - Perceptron Regression Exercise
Step8: Quadratic Regression
Step9: Cubic Regression
Step10: Logarithmic Regression
Step11: Exponential Regression
Step12: Binary Classification
AND/OR Gate
Step13: XOR Gate
Step14: 2 Clusters
Step15: 4 Clusters
Step16: Circles
Step17: Moons
Step18: Spiral
Step19: Multiclass Classification
3 Clusters, Multiclass
Step20: 4 Clusters, Multiclass
Step21: Spiral - 5 Classes
Step22: Make Classification - 4 Classes
Step23: Iris Dataset | Python Code:
import numpy as np
import _pickle as pkl
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.datasets.samples_generator import make_blobs, make_circles, make_moons, make_classification
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from utils import plot
from utils.samples_generator import make_spiral, make_square, make_cubic, make_exp, make_log10
%matplotlib inline
Explanation: Summary
Activation Functions
Auxiliary Functions
Cost Functions
Weight Initialization
Regularization
Learning Rate Decay
Batch Normalization
Batch Generator
Implementation
Testing the Implementation
Examples from the Intuition Notebook
Regression
Simple Linear Regression
Multivariate Linear Regression
Quadratic Regression
Cubic Regression
Logarithmic Regression
Exponential Regression
Binary Classification
AND/OR Gate
XOR Gate
2 Clusters
4 Clusters
Circles
Moons
Spiral
Multiclass Classification
3 Clusters, Multiclass
4 Clusters, Multiclass
Spiral - 5 Classes
Make Classification - 4 Classes
Iris Dataset
References
Imports and Settings
End of explanation
def linear(x, derivative=False):
return np.ones_like(x) if derivative else x
def sigmoid(x, derivative=False):
if derivative:
y = sigmoid(x)
return y*(1 - y)
return 1.0/(1.0 + np.exp(-x))
def tanh(x, derivative=False):
if derivative:
y = tanh(x)
return 1 - y**2
return (np.exp(x) - np.exp(-x))/(np.exp(x) + np.exp(-x))
def relu(x, derivative=False):
if derivative:
return np.where(x <= 0, 0, 1)
return np.maximum(0, x)
def leaky_relu(x, derivative=False):
alpha = 0.1
if derivative:
return np.where(x <= 0, alpha, 1)
return np.where(x <= 0, alpha*x, x)
def elu(x, derivative=False):
alpha = 1.0
if derivative:
y = elu(x)
return np.where(x <= 0, y + alpha, 1)
return np.where(x <= 0, alpha*(np.exp(x) - 1), x)
Explanation: Activation Functions
End of explanation
def softmax(x, y_oh=None, derivative=False):
if derivative:
y_pred = softmax(x)
k = np.nonzero(y_pred * y_oh)
pk = y_pred[k]
y_pred[k] = pk * (1.0 - pk)
return y_pred
exp = np.exp(x)
return exp / np.sum(exp, axis=1, keepdims=True)
Explanation: Auxiliary Functions
End of explanation
def neg_log_likelihood(y_oh, y_pred, derivative=False):
k = np.nonzero(y_pred * y_oh)
pk = y_pred[k]
if derivative:
y_pred[k] = (-1.0 / pk)
return y_pred
return np.mean(-np.log(pk))
def softmax_neg_log_likelihood(y_oh, y_pred, derivative=False):
y_softmax = softmax(y_pred)
if derivative:
return -(y_oh - y_softmax) / y_oh.shape[0]
return neg_log_likelihood(y_oh, y_softmax)
Explanation: Cost Functions
For Regression
For Binary Classification
For Multiclass Classification
End of explanation
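# The summary above also lists cost functions for regression and for binary classification,
# which are not implemented in this excerpt. A minimal sketch of what they could look like,
# following the same (y, y_pred, derivative) convention as neg_log_likelihood; the names and
# the exact 1/N scaling of the derivatives are assumptions, not the notebook's actual API.
def mse(y, y_pred, derivative=False):
    if derivative:
        return (y_pred - y) / y.shape[0]
    return 0.5 * np.mean((y - y_pred)**2)

def binary_cross_entropy(y, y_pred, derivative=False):
    if derivative:
        return -(y / y_pred - (1 - y) / (1 - y_pred)) / y.shape[0]
    return -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))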
x = np.array([[0.05, 0.10]])
y = np.array([[0.01, 0.99]])
w1 = np.array([[0.15, 0.20], [0.25, 0.30]])
b1 = np.array([[0.35]])
w2 = np.array([[0.40, 0.45], [0.50, 0.55]])
b2 = np.array([[0.60]])
# insira sua rede aqui!
Explanation: Weight Initialization
Regularization
Batch Generator
Learning Rate Decay
Batch Normalization
Implementation
Examples from the Intuition Notebook
Example 1
End of explanation
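# The headings above (weight initialization, regularization, learning rate decay, batch
# normalization) have no code in this excerpt. As an illustration, weight initialization is
# often written as small factory functions like these; the names and signatures are
# assumptions for the sketch, not the notebook's actual API.
def random_normal_init(rows, cols):
    return np.random.randn(rows, cols)

def glorot_normal_init(rows, cols):
    return np.random.randn(rows, cols) * np.sqrt(2.0 / (rows + cols))

def he_normal_init(rows, cols):
    return np.random.randn(rows, cols) * np.sqrt(2.0 / rows)

def time_based_lr_decay(lr0, epoch, decay_rate=0.01):
    return lr0 / (1.0 + decay_rate * epoch)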
x = np.array([[0.1, 0.2, 0.7]])
y = np.array([[1, 0, 0]])
D_in, D_out = x.shape[1], y.shape[1]
w1 = np.array([[0.1, 0.2, 0.3], [0.3, 0.2, 0.7], [0.4, 0.3, 0.9]])
b1 = np.ones((1,3))
w2 = np.array([[0.2, 0.3, 0.5], [0.3, 0.5, 0.7], [0.6, 0.4, 0.8]])
b2 = np.ones((1,3))
w3 = np.array([[0.1, 0.4, 0.8], [0.3, 0.7, 0.2], [0.5, 0.2, 0.9]])
b3 = np.ones((1,3))
# insira sua rede aqui!
Explanation: Example 2
End of explanation
data = np.loadtxt('data/medidas.csv', delimiter=',', skiprows=1)
print(data.shape)
x, y = data[:,0].reshape(-1,1), data[:,1].reshape(-1,1)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(), x.max())
plt.scatter(x, y)
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
Explanation: Gradient Checking
Regression
Simple Linear Regression - Perceptron Example
End of explanation
data = np.loadtxt('data/notas.csv', delimiter=',', skiprows=1)
print(data.shape)
x, y = data[:,:-1], data[:,-1].reshape(-1,1)
print(x.shape, y.shape)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
plt.scatter(x, y)
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
Explanation: Multivariate Linear Regression - Perceptron Regression Exercise
End of explanation
x, y = make_square(n_samples=100, x_min=-10, x_max=10, a=1, b=1, c=1, noise=10)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
Explanation: Quadratic Regression
End of explanation
x, y = make_cubic(n_samples=100, x_min=-4, x_max=4, a=1, b=0, c=-10, d=0, noise=3)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
Explanation: Cubic Regression
End of explanation
x, y = make_log10(n_samples=100, x_min=1, x_max=100, noise=0.3)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
Explanation: Logarithmic Regression
End of explanation
x, y = make_exp(n_samples=100, x_min=0, x_max=5, noise=10)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
Explanation: Exponential Regression
End of explanation
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1]).reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Predições:', y_pred, sep='\n')
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, cmap='bwr')
Explanation: Binary Classification
AND/OR Gate
End of explanation
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0]).reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Predições:', y_pred, sep='\n')
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, cmap='bwr')
Explanation: XOR Gate
End of explanation
x, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
threshold = 0 if nn.layers[-1].activation == linear else 0.5
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred >= threshold)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=threshold, cmap='bwr')
Explanation: 2 Clusters
End of explanation
x, y = make_blobs(n_samples=500, n_features=2, cluster_std=0.9, centers=[(-3, -3), (3, 3), (-3, 3), (3, -3)], random_state=1234)
y = y.reshape(-1, 1)
y = np.where(y >= 2, 1, 0)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
Explanation: 4 Clusters
End of explanation
x, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
Explanation: Circles
End of explanation
x, y = make_moons(200, noise=0.20)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
Explanation: Moons
End of explanation
x, y = make_spiral(n_samples=100, n_class=2, radius=5, laps=1.75)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
Explanation: Spiral
End of explanation
x, y = make_blobs(n_samples=300, n_features=2, centers=[(0, -3), (-3, 3), (3, 3)], random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::60])
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
Explanation: Multiclass Classification
3 Clusters, Multiclass
End of explanation
x, y = make_blobs(n_samples=400, n_features=2, centers=[(-3, 0), (3, 0), (0, 3), (0, -3)], random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::70])
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
Explanation: 4 Clusters, Multiclass
End of explanation
x, y = make_spiral(n_samples=100, n_class=5, radius=1, laps=0.5)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::100])
input_dim, output_dim = x.shape[1], y.shape[1]
# insira sua rede aqui!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
Explanation: Spiral - 5 Classes
End of explanation
x, y = make_classification(n_samples=100, n_classes=4, n_features=2, n_clusters_per_class=1, n_redundant=0, n_repeated=0, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::10])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Accuracy: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
Explanation: Make Classification - 4 Classes
End of explanation
data = load_iris()
x, y = data.data[:, 2:], data.target.reshape(-1,1)
print(data.feature_names)
print(data.target_names)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::20])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Accuracy: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
Explanation: Iris Dataset
End of explanation |
3,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook arguments
measurement_id (int)
Step1: Selecting a data file
Step2: Data load and Burst search
Load and process the data
Step3: Compute background and burst search
Step4: Let's take a look at the photon waiting times histograms and at the fitted background rates
Step5: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel.
Let's plot a timetrace for the background to see is there are significat variations during the measurement
Step6: We can look at the timetrace of the photon stream (binning)
Step7: Burst selection and FRET
Step8: Selecting bursts by size
Step9: 2-Gaussian peaks
Step11: Fit
Step12: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
$$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$
$$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$
$$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$
Step13: Kinetics
Definitions
Step14: Moving-window processing
Step15: Burst-data
Step16: Population fraction | Python Code:
import time
from pathlib import Path
import pandas as pd
from scipy.stats import linregress
from scipy import optimize
from IPython.display import display
from fretbursts import *
sns = init_notebook(fs=14)
import lmfit; lmfit.__version__
import phconvert; phconvert.__version__
Explanation: Notebook arguments
measurement_id (int): Select the measurement from the list. Valid values: 0 .. 3
1-spot realtime kinetics
<p class=lead>This notebook executes the realtime-kinetics analysis.</p>
The first cell of this notebook selects which measurement is analyzed.
Measurements can be processed one-by-one, by manually running this notebook,
or in batch by using the notebook: "1-spot bubble-bubble kinetics - Run-All".
Loading the software
End of explanation
path = Path('./data/')
pattern = 'singlespot*.hdf5'
filenames = list(str(f) for f in path.glob(pattern))
filenames
basenames = list(f.stem for f in path.glob(pattern))
basenames
start_times = [600, 900, 900,
600, 600, 600, 600, 600, 600,
600, 600, 600] # time of NTP injection and start of kinetics
filename = filenames[measurement_id]
start_time = start_times[measurement_id]
filename
import os
assert os.path.exists(filename)
Explanation: Selecting a data file
End of explanation
d = loader.photon_hdf5(filename)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d.time_max
Explanation: Data load and Burst search
Load and process the data:
End of explanation
d.calc_bg(bg.exp_fit, time_s=10, tail_min_us='auto', F_bg=1.7)
Explanation: Compute background and burst search:
End of explanation
dplot(d, hist_bg);
Explanation: Let's take a look at the photon waiting times histograms and at the fitted background rates:
End of explanation
dplot(d, timetrace_bg);
xlim(start_time - 150, start_time + 150)
Explanation: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel.
Let's plot a timetrace for the background to see is there are significat variations during the measurement:
End of explanation
#dplot(d, timetrace)
#xlim(2, 3); ylim(-100, 100);
Explanation: We can look at the timetrace of the photon stream (binning):
End of explanation
#%%timeit -n1 -r1
ddc = bext.burst_search_and_gate(d)
ds1 = ddc.select_bursts(select_bursts.size, th1=25)
ds = ds1.select_bursts(select_bursts.naa, th1=25)
Explanation: Burst selection and FRET
End of explanation
bpl.alex_jointplot(ds)
ds0 = ds.select_bursts(select_bursts.time, time_s1=0, time_s2=start_time-10)
dplot(ds0, hist_fret, pdf=False);
weights = 'size'
bext.bursts_fitter(ds0, weights=weights)
ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False)
dplot(ds0, hist_fret, show_model=True, weights=weights);
ds0.E_fitter.params
weights = None
bext.bursts_fitter(ds0, weights=weights)
ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False)
dplot(ds0, hist_fret, show_model=True, weights=weights);
ds0.E_fitter.params
Explanation: Selecting bursts by size
End of explanation
def gauss2(**params0):
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
model.set_param_hint('p1_center', **{'value': 0.6, 'min': 0.3, 'max': 0.8, **params0.get('p1_center', {})})
model.set_param_hint('p2_center', **{'value': 0.9, 'min': 0.8, 'max': 1.0, **params0.get('p2_center', {})})
for sigma in ['p%d_sigma' % i for i in (1, 2)]:
model.set_param_hint(sigma, **{'value': 0.02, 'min': 0.01, **params0.get(sigma, {})})
for ampl in ['p%d_amplitude' % i for i in (1, 2)]:
model.set_param_hint(ampl, **{'value': 0.5, 'min': 0.01, **params0.get(ampl, {})})
model.name = '3 gauss peaks'
return model
#%matplotlib notebook
#fig, ax = plt.subplots(figsize=(12, 8))
#dplot(dm0, scatter_fret_size, ax=ax)
bext.bursts_fitter(ds0, weights=None)
ds0.E_fitter.fit_histogram(gauss2(), verbose=False)
mfit.plot_mfit(ds0.E_fitter)
params_2gauss = ds0.E_fitter.params
plt.xlabel('E')
plt.ylabel('PDF')
plt.title('')
params_2gauss
ds_final = ds.select_bursts(select_bursts.time, time_s1=start_time+300, time_s2=ds.time_max + 1)
ds_final.num_bursts
bext.bursts_fitter(ds_final, weights=None)
model = gauss2()
model.set_param_hint('p2_center', value=params_2gauss.p2_center[0], vary=False)
ds_final.E_fitter.fit_histogram(model, verbose=False)
fig, ax = plt.subplots(figsize=(12, 6))
mfit.plot_mfit(ds_final.E_fitter, ax=ax)
params_2gauss1 = ds_final.E_fitter.params
params_2gauss1
#del params_2gauss0
is_runoff = 'runoff' in filename.lower()
if 'params_2gauss0' not in locals():
params_2gauss0 = params_2gauss.copy()
if is_runoff:
params_2gauss0.p2_center = params_2gauss1.p2_center
else:
params_2gauss0.p1_center = params_2gauss1.p1_center
params_2gauss0.p1_amplitude + params_2gauss0.p2_amplitude
'params_2gauss0' in locals()
Explanation: 2-Gaussian peaks
End of explanation
from scipy import optimize
params_fixed = dict(
mu1=float(params_2gauss0.p1_center),
mu2=float(params_2gauss0.p2_center),
sig1=float(params_2gauss0.p1_sigma),
sig2=float(params_2gauss0.p2_sigma),
)
def em_weights_2gauss(x, a2, mu1, mu2, sig1, sig2):
"""Responsibility function for a 2-Gaussian model.
Return 2 arrays of size = x.size: the responsibility of
each Gaussian population.
"""
a1 = 1 - a2
assert np.abs(a1 + a2 - 1) < 1e-3
f1 = a1 * gauss_pdf(x, mu1, sig1)
f2 = a2 * gauss_pdf(x, mu2, sig2)
γ1 = f1 / (f1 + f2)
γ2 = f2 / (f1 + f2)
return γ1, γ2
def em_fit_2gauss(x, a2_0, params_fixed, print_every=10, max_iter=100, rtol=1e-3):
a2_new = a2_0
rel_change = 1
i = 0
while rel_change > rtol and i < max_iter:
# E-step
γ1, γ2 = em_weights_2gauss(x, a2_new, **params_fixed)
assert np.allclose(γ1.sum() + γ2.sum(), x.size)
# M-step
a2_old = a2_new
a2_new = γ2.sum()/γ2.size
# Convergence
rel_change = np.abs((a2_old - a2_new)/a2_new)
i += 1
if (i % print_every) == 0:
print(i, a2_new, rel_change)
return a2_new, i
from scipy.stats import norm
gauss_pdf = norm.pdf  # matplotlib.pylab.normpdf was removed; scipy's norm.pdf(x, mu, sigma) is a drop-in replacement here
# Model PDF to be maximized
def model_pdf(x, a2, mu1, mu2, sig1, sig2):
a1 = 1 - a2
#assert np.abs(a1 + a2 + a3 - 1) < 1e-3
return (a1 * gauss_pdf(x, mu1, sig1) +
a2 * gauss_pdf(x, mu2, sig2))
def func2min_lmfit(params, x):
a2 = params['a2'].value
mu1 = params['mu1'].value
mu2 = params['mu2'].value
sig1 = params['sig1'].value
sig2 = params['sig2'].value
return -np.sqrt(np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2)))
def func2min_scipy(params_fit, params_fixed, x):
a2 = params_fit
mu1 = params_fixed['mu1']
mu2 = params_fixed['mu2']
sig1 = params_fixed['sig1']
sig2 = params_fixed['sig2']
return -np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2)).sum()
# create a set of Parameters
params = lmfit.Parameters()
params.add('a2', value=0.5, min=0)
for k, v in params_fixed.items():
params.add(k, value=v, vary=False)
Explanation: Fit
End of explanation
x = ds0.E_
#x
#result = lmfit.minimize(func2min_lmfit, params, args=(x,), method='nelder')
#lmfit.report_fit(result.params)
#optimize.brute(func2min_scipy, ranges=((0.01, 0.99), (0.01, 0.99)), Ns=101, args=(params, x))
res_em = em_fit_2gauss(x, 0.5, params_fixed)
res_em
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), method='Nelder-Mead')
res
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='SLSQP')
res
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='TNC')
res
bins = np.arange(-0.1, 1.1, 0.025)
plt.hist(x, bins, histtype='step', lw=2, density=True);  # `normed` was removed from Matplotlib; `density` is the replacement
xx = np.arange(-0.1, 1.1, 0.005)
#plt.plot(xx, model_pdf(xx, params))
plt.plot(xx, model_pdf(xx, a2=res_em[0], **params_fixed))
Explanation: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
$$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$
$$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$
$$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$
End of explanation
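Putting these formulas together: the quantity the fit actually maximizes is the mixture log-likelihood, which cannot be split into a sum of per-component logarithms because the log of a sum is not the sum of logs:
$$\log L(a_2) = \sum_i \log\big[(1 - a_2)\, f_1(x_i) + a_2\, f_2(x_i)\big]$$
Up to a sign, this is what `func2min_scipy` above returns, so minimizing it with `scipy.optimize.minimize` gives the maximum-likelihood estimate of the fraction $a_2$ with the Gaussian centers and widths held fixed.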
def _kinetics_fit_em(dx, a2_0, params_fixed, **kwargs):
kwargs = {'max_iter': 100, 'print_every': 101, **kwargs}
a2, i = em_fit_2gauss(dx.E_, a2_0, params_fixed, **kwargs)
return a2, i < kwargs['max_iter']
def _kinetics_fit_ll(dx, a2_0, params_fixed, **kwargs):
kwargs = {'method':'Nelder-Mead', **kwargs}
res = optimize.minimize(func2min_scipy, x0=[a2_0], args=(params_fixed, dx.E_),
**kwargs)
return res.x[0], res.success
def _kinetics_fit_hist(dx, a2_0, params_fixed):
E_fitter = bext.bursts_fitter(dx)
model = mfit.factory_two_gaussians()
model.set_param_hint('p1_center', value=params_fixed['mu1'], vary=False)
model.set_param_hint('p2_center', value=params_fixed['mu2'], vary=False)
model.set_param_hint('p1_sigma', value=params_fixed['sig1'], vary=False)
model.set_param_hint('p2_sigma', value=params_fixed['sig2'], vary=False)
E_fitter.fit_histogram(model, verbose=False)
return (float(E_fitter.params.p2_amplitude),
dx.E_fitter.fit_res[0].success)
def kinetics_fit(ds_slices, params_fixed, a2_0=0.5, method='em', **method_kws):
fit_func = {
'em': _kinetics_fit_em,
'll': _kinetics_fit_ll,
'hist': _kinetics_fit_hist}
fit_list = []
for dx in ds_slices:
a2, success = fit_func[method](dx, a2_0, params_fixed, **method_kws)
df_i = pd.DataFrame(data=dict(p2_amplitude=a2,
p1_center=params_fixed['mu1'], p2_center=params_fixed['mu2'],
p1_sigma=params_fixed['sig1'], p2_sigma=params_fixed['sig2'],
tstart=dx.slice_tstart, tstop=dx.slice_tstop,
tmean=0.5*(dx.slice_tstart + dx.slice_tstop)),
index=[0.5*(dx.slice_tstart + dx.slice_tstop)])
if not success:
print('* ', end='', flush=True)
continue
fit_list.append(df_i)
print(flush=True)
return pd.concat(fit_list)
start_time/60
Explanation: Kinetics
Definitions
End of explanation
def print_slices(moving_window_params):
msg = ' - Slicing measurement:'
for name in ('start', 'stop', 'step', 'window'):
msg += ' %s = %.1fs' % (name, moving_window_params[name])
print(msg, flush=True)
num_slices = len(bext.moving_window_startstop(**moving_window_params))
print(' Number of slices %d' % num_slices, flush=True)
t1 = time.time()
time.ctime()
ds.calc_max_rate(m=10)
ds_high = ds.select_bursts(select_bursts.E, E1=0.85)
step = 10
params = {}
for window in windows:
moving_window_params = dict(start=0, stop=ds.time_max, step=step, window=window)
print_slices(moving_window_params)
ds_slices = bext.moving_window_chunks(ds, time_zero=start_time, **moving_window_params)
for meth in ['em', 'll', 'hist']:
print(' >>> Fitting method %s ' % meth, end='', flush=True)
p = kinetics_fit(ds_slices, params_fixed, method=meth)
print(flush=True)
p['kinetics'] = p.p2_amplitude
p = p.round(dict(p1_center=3, p1_sigma=4, p2_amplitude=4, p2_center=3, p2_sigma=4, kinetics=4))
params[meth, window, step] = p
print('Moving-window processing duration: %d seconds.' % (time.time() - t1))
Explanation: Moving-window processing
End of explanation
#moving_window_params = dict(start=0, stop=dsc.time_max, step=1, window=30)
moving_window_params
ds_slices_high = bext.moving_window_chunks(ds_high, **moving_window_params)
df = bext.moving_window_dataframe(**moving_window_params) - start_time
df['size_mean'] = [di.nt_.mean() for di in ds_slices]
df['size_max'] = [di.nt_.max() for di in ds_slices]
df['num_bursts'] = [di.num_bursts[0] for di in ds_slices]
df['burst_width'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices]
df['burst_width_high'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices_high]
df['phrate_mean'] = [di.max_rate_.mean() for di in ds_slices]
df = df.round(dict(tmean=1, tstart=1, tstop=1, size_mean=2, size_max=1,
burst_width=2, burst_width_high=2, phrate_mean=1))
df
labels = ('num_bursts', 'burst_width', 'size_mean', 'phrate_mean',)
fig, axes = plt.subplots(len(labels), 1, figsize=(12, 3*len(labels)))
for ax, label in zip(axes, labels):
ax.plot('tstart', label, data=df)
ax.legend(loc='best')
#ax.set_ylim(0)
# %%timeit -n1 -r1
# meth = 'em'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'hist'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'll'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
out_fname = 'results/%s_burst_data_vs_time__window%ds_step%ds.csv' % (
Path(filename).stem, moving_window_params['window'], moving_window_params['step'])
out_fname
df.to_csv(out_fname)
Explanation: Burst-data
End of explanation
# np.abs((params['em', 30, 1] - params['ll', 30, 1]).p2_amplitude).max()
methods = ('em', 'll', 'hist')
for meth in methods:
plt.figure(figsize=(14, 3))
plt.plot(params[meth, windows[0], step].index, params[meth, windows[0], step].kinetics, 'h', color='gray', alpha=0.2)  # use the loop's method instead of hard-coded 'em'
plt.plot(params[meth, windows[1], step].index, params[meth, windows[1], step].kinetics, 'h', alpha=0.3)
# (params['em', 5, 1].kinetics - params['ll', 5, 1].kinetics).plot()
for window in windows:
for meth in methods:
out_fname = ('results/' + Path(filename).stem +
'_%sfit_ampl_only__window%ds_step%ds.csv' % (meth, window, step))
print('- Saving: ', out_fname)
params[meth, window, step].to_csv(out_fname)
d
Explanation: Population fraction
End of explanation |
3,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>CASE - Observation data</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Step1: Introduction
Observation data of species (when and where is a given species observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of these data is also openly available.
In this example, data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona. It is a long-term observation study in 24 different plots (each plot identified with a verbatimLocality identifier) and defines, apart from the species, location and date of the observations, also the sex and the weight (if available).
The data consists of two data sets
Step2: <div class="alert alert-success">
**EXERCISE 2**
Create a new column with the name `eventDate` which contains datetime-aware information of each observation. To do so, combine the columns `day`, `month` and `year` into a datetime-aware data type by using the `pd.to_datetime` function from Pandas (check the help of that function to see how multiple columns with the year, month and day can be converted).
<details><summary>Hints</summary>
- `pd.to_datetime` can automatically combine the information from multiple columns. To select multiple columns, use a list of column names, e.g. `df[["my_col1", "my_col2"]]`
- To create a new column, assign the result to new name, e.g. `df["my_new_col"] = df["my_col"] + 1`
</details>
Step3: <div class="alert alert-success">
**EXERCISE 3**
For convenience when this dataset will be combined with other datasets, add a new column, `datasetName`, to the survey data set with `"Ecological Archives E090-118-D1."` as value for each of the individual records (static value for the entire data set)
<details><summary>Hints</summary>
- When a column does not exist, a new `df["a_new_column"]` can be created by assigning a value to it.
- Pandas will automatically broadcast a single string value to each of the rows in the DataFrame.
</details>
Step4: Cleaning the verbatimSex column
Step5: For the further analysis (and the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex and convert the current values to the corresponding sex, taking into account the following mapping
Step6: Tackle missing values (NaN) and duplicate values
See pandas_07_missing_values.ipynb for an overview of functionality to work with missing values.
<div class="alert alert-success">
**EXERCISE 5**
How many records in the data set have no information about the `species`? Use the `isna()` method to find out.
<details><summary>Hints</summary>
- Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
Step7: <div class="alert alert-success">
**EXERCISE 6**
How many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate.
<details><summary>Hints</summary>
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
Step8: <div class="alert alert-success">
**EXERCISE 7**
- Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument define which duplicates (if any) to mark.
- Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records.
<details><summary>Hints</summary>
- Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data.
- `sort_values()` can work with a single columns name as well as a list of names.
</details>
Step9: <div class="alert alert-success">
**EXERCISE 8**
- Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `observations_unique`. Use the `drop duplicates()` method from Pandas.
- How many observations are still left in the data set?
<details><summary>Hints</summary>
- `keep='first'` is the default option for `drop_duplicates`
- The number of rows in a DataFrame is equal to the `len`gth
</details>
Step10: <div class="alert alert-success">
**EXERCISE 9**
Use the `dropna()` method to find out
Step11: <div class="alert alert-success">
**EXERCISE 10**
Filter the `observations` data and select only those records that do not have a `species_ID` while having information on the `sex`. Store the result as variable `not_identified`.
<details><summary>Hints</summary>
- To combine logical operators element-wise in Pandas, use the `&` operator.
- Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values.
</details>
Step12: Adding the names of the observed species
Step13: In the data set observations, the column specied_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv
Step14: The species names contains for each identifier in the ID column the scientific name of a species. The species_names data set contains in total 38 different scientific names
Step15: For further analysis, let's combine both in a single DataFrame in the following exercise.
<div class="alert alert-success">
**EXERCISE 11**
Combine the DataFrames `observations_data` and `species_names` by adding the corresponding species name information (name, class, kingdom,..) to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data`.
<details><summary>Hints</summary>
- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.
- Take into account that our key-column is different for `observations` and `species_names`, respectively `specied_ID` and `ID`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on.
</details>
Step16: Select subsets according to taxa of species
Step17: <div class="alert alert-success">
**EXERCISE 12**
- Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection.
<details><summary>Hints</summary>
- You do not have to combine three different conditions, but use the `isin` operator with a list of names.
</details>
Step18: <div class="alert alert-success">
**EXERCISE 13**
Select the observations for which the `name` starts with the characters 'r' (make sure it does not matter if a capital character is used in the 'name' value). Call the resulting variable `r_species`.
<details><summary>Hints</summary>
- Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other.
- If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`)
</details>
Step19: <div class="alert alert-success">
**EXERCISE 14**
Select the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>.
<details><summary>Hints</summary>
- Logical operators like `==`, `!=`, `>`,... can still be used.
</details>
Step20: <div class="alert alert-success">
**EXERCISE 15**
Select the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 using the `eventDate` column. Call the resulting variable `birds_85_89`.
<details><summary>Hints</summary>
- No hints, you can do this! (with the help of some `<=` and `&`, and don't forget the put brackets around each comparison that you combine)
</details>
Step21: <div class="alert alert-success">
**EXERCISE 16**
- Drop the observations for which no `weight` information is available.
- On the filtered data, compare the median weight for each of the species (use the `name` column)
- Sort the output from high to low median weight (i.e. descending)
__Note__ You can do this all in a single line statement, but don't have to do it as such!
<details><summary>Hints</summary>
- You will need `dropna`, `groupby`, `median` and `sort_values`.
</details>
Step22: Species abundance
<div class="alert alert-success">
**EXERCISE 17**
Which 8 species (use the `name` column to identify the different species) have been observed most over the entire data set?
<details><summary>Hints</summary>
- Pandas provide a function to combine sorting and showing the first n records, see [here](https
Step23: <div class="alert alert-success">
**EXERCISE 18**
- What is the number of different species (`name`) in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`.
- Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make an horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'.
<details><summary>Hints</summary>
- _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups.
- `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes.
</details>
Step24: <div class="alert alert-success">
**EXERCISE 19**
- What is the number of plots (`verbatimLocality`) each of the species (`name`) have been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high.
- Make an horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found (using the `n_plots_per_species` variable).
<details><summary>Hints</summary>
- Use the previous exercise to solve this one.
</details>
Step25: <div class="alert alert-success">
**EXERCISE 20**
- Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named "count".
- Use a `pivot_table` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`.
<details><summary>Hints</summary>
- _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time.
- If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method.
- `reset_index()` is useful function to convert multiple indices into columns again.
</details>
Step26: As such, we can use the variable pivoted to plot the result
Step27: <div class="alert alert-success">
**EXERCISE 21**
Recreate the previous plot with the `catplot` function from the Seaborn library directly starting from <code>survey_data</code>.
<details><summary>Hints</summary>
- Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value.
- To link a column to different colors, use the `hue` argument
- Using `height` and `aspect`, the figure size can be optimized.
</details>
Step28: <div class="alert alert-success">
**EXERCISE 22**
- Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations.
- Using the seaborn <a href="http
Step29: Remark that we started from a tidy data format (also called long format) and converted to short format with in the row index the years, in the column the months and the counts for each of these year/month combinations as values.
<div class="alert alert-success">
**EXERCISE 23**
- Make a summary table with the number of records of each of the species in each of the plots (called `verbatimLocality`)? Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name.
- Use the Seaborn <a href="http
Step30: <div class="alert alert-success">
**EXERCISE 24**
Make a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method.
<details><summary>Hints</summary>
- You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use.
- `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year.
</details>
Step31: (OPTIONAL SECTION) Evolution of species during monitoring period
In this section, all plots can be made with the embedded Pandas plot function, unless specifically asked otherwise.
<div class="alert alert-success">
**EXERCISE 25**
Plot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years.
<details><summary>Hints</summary>
- _...for each month of..._ requires `groupby`.
- `resample` is not useful here, as we do not want to change the time-interval, but look at month of the year (over all years)
</details>
Step32: <div class="alert alert-success">
**EXERCISE 26**
Plot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time for the whole monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale
<details><summary>Hints</summary>
- `isin` is useful to select from within a list of elements.
- `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters!
- `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https
Step33: <div class="alert alert-success">
**EXERCISE 27**
Recreate the same plot as in the previous exercise using Seaborn `relplot` functon with the `month_evolution` variable.
<details><summary>Hints</summary>
- We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively.
- To create subplots in Seaborn, the usage of _facetting_ (splitting data sets to multiple facets) is used by linking a column name to the `row`/`col` parameter.
- Using `height` and `aspect`, the figure size can be optimized.
</details>
Step34: <div class="alert alert-success">
**EXERCISE 28**
Plot the annual amount of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets.
<details><summary>Hints</summary>
- Combine `resample` and `groupby`!
- Check out the previous exercise for the plot function.
- Pass the `sharey=False` to the `facet_kws` argument as a dictionary.
</details>
Step35: <div class="alert alert-success">
**EXERCISE 29**
The observations where taken by volunteers. You wonder on which day of the week the most observations where done. Calculate for each day of the week (`weekday`) the number of observations and make a barplot.
<details><summary>Hints</summary>
- Did you know the Python standard Library has a module `calendar` which contains names of week days, month names,...?
</details> | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-whitegrid')
Explanation: <p><font size="6"><b>CASE - Observation data</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
# %load _solutions/case2_observations1.py
# %load _solutions/case2_observations2.py
# %load _solutions/case2_observations3.py
Explanation: Introduction
Observation data of species (when and where is a given species observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of these data is also openly available.
In this example, data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona. It is a long-term observation study in 24 different plots (each plot identified with a verbatimLocality identifier) and defines, apart from the species, location and date of the observations, also the sex and the weight (if available).
The data consists of two data sets:
observations.csv the individual observations.
species_names.csv the overview list of the species names.
Let's start with the observations data!
Reading in the observations data
<div class="alert alert-success">
**EXERCISE 1**
- Read in the `data/observations.csv` file with Pandas and assign the resulting DataFrame to a variable with the name `observations`.
- Make sure the 'occurrenceID' column is used as the index of the resulting DataFrame while reading in the data set.
- Inspect the first five rows of the DataFrame and the data types of each of the data columns.
<details><summary>Hints</summary>
- All read functions in Pandas start with `pd.read_...`.
- Setting a column as index can be done with an argument of the `read_csv` function. To check the documentation of a function, use the keystroke combination of SHIFT + TAB when the cursor is on the function.
- Remember `.head()` and `.info()`?
</details>
End of explanation
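A possible solution sketch (not the course's hidden solution cell), assuming the file lives at the path given above:
observations = pd.read_csv("data/observations.csv", index_col="occurrenceID")
observations.head()
observations.info()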
# %load _solutions/case2_observations4.py
Explanation: <div class="alert alert-success">
**EXERCISE 2**
Create a new column with the name `eventDate` which contains datetime-aware information of each observation. To do so, combine the columns `day`, `month` and `year` into a datetime-aware data type by using the `pd.to_datetime` function from Pandas (check the help of that function to see how multiple columns with the year, month and day can be converted).
<details><summary>Hints</summary>
- `pd.to_datetime` can automatically combine the information from multiple columns. To select multiple columns, use a list of column names, e.g. `df[["my_col1", "my_col2"]]`
- To create a new column, assign the result to new name, e.g. `df["my_new_col"] = df["my_col"] + 1`
</details>
End of explanation
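One way the solution could look — a sketch assuming the `observations` DataFrame from Exercise 1 is in memory and has `year`, `month` and `day` columns:
observations["eventDate"] = pd.to_datetime(observations[["year", "month", "day"]])
observations["eventDate"].head()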
# %load _solutions/case2_observations5.py
Explanation: <div class="alert alert-success">
**EXERCISE 3**
For convenience when this dataset will be combined with other datasets, add a new column, `datasetName`, to the survey data set with `"Ecological Archives E090-118-D1."` as value for each of the individual records (static value for the entire data set)
<details><summary>Hints</summary>
- When a column does not exist, a new `df["a_new_column"]` can be created by assigning a value to it.
- Pandas will automatically broadcast a single string value to each of the rows in the DataFrame.
</details>
End of explanation
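A minimal sketch — a single string value is broadcast to every row:
observations["datasetName"] = "Ecological Archives E090-118-D1."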
observations["verbatimSex"].unique()
Explanation: Cleaning the verbatimSex column
End of explanation
# %load _solutions/case2_observations6.py
# %load _solutions/case2_observations7.py
# %load _solutions/case2_observations8.py
Explanation: For the further analysis (and the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex and convert the current values to the corresponding sex, taking into account the following mapping:
* M -> male
* F -> female
* R -> male
* P -> female
* Z -> nan
<div class="alert alert-success">
**EXERCISE 4**
- Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`.
- Use the `sex_dict` dictionary to replace the values in the `verbatimSex` column to the new values and save the mapped values in a new column 'sex' of the DataFrame.
- Check the conversion by printing the unique values within the new column `sex`.
<details><summary>Hints</summary>
- A dictionary is a Python standard library data structure, see https://docs.python.org/3/tutorial/datastructures.html#dictionaries - no Pandas magic involved when you need a key/value mapping.
- When you need to replace values, look for the Pandas method `replace`.
</details>
End of explanation
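A possible sketch, assuming `observations` is still in memory:
sex_dict = {"M": "male", "F": "female", "R": "male", "P": "female", "Z": np.nan}
observations["sex"] = observations["verbatimSex"].replace(sex_dict)
observations["sex"].unique()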
# %load _solutions/case2_observations9.py
Explanation: Tackle missing values (NaN) and duplicate values
See pandas_07_missing_values.ipynb for an overview of functionality to work with missing values.
<div class="alert alert-success">
**EXERCISE 5**
How many records in the data set have no information about the `species`? Use the `isna()` method to find out.
<details><summary>Hints</summary>
- Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
End of explanation
# %load _solutions/case2_observations10.py
Explanation: <div class="alert alert-success">
**EXERCISE 6**
How many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate.
<details><summary>Hints</summary>
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
End of explanation
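A one-line sketch (True counts as 1, so the sum is the number of duplicate rows):
observations.duplicated().sum()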
# %load _solutions/case2_observations11.py
Explanation: <div class="alert alert-success">
**EXERCISE 7**
- Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument define which duplicates (if any) to mark.
- Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records.
<details><summary>Hints</summary>
- Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data.
- `sort_values()` can work with a single columns name as well as a list of names.
</details>
End of explanation
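A possible sketch — `keep=False` marks every occurrence of a duplicated row, not only the later ones:
duplicate_observations = observations[observations.duplicated(keep=False)]
duplicate_observations.sort_values(["eventDate", "verbatimLocality"]).head(9)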
# %load _solutions/case2_observations12.py
# %load _solutions/case2_observations13.py
Explanation: <div class="alert alert-success">
**EXERCISE 8**
- Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `observations_unique`. Use the `drop duplicates()` method from Pandas.
- How many observations are still left in the data set?
<details><summary>Hints</summary>
- `keep='first'` is the default option for `drop_duplicates`
- The number of rows in a DataFrame is equal to the `len`gth
</details>
End of explanation
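A sketch, keeping the first occurrence of each duplicate:
observations_unique = observations.drop_duplicates()
len(observations_unique)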
# %load _solutions/case2_observations14.py
# %load _solutions/case2_observations15.py
# %load _solutions/case2_observations16.py
Explanation: <div class="alert alert-success">
**EXERCISE 9**
Use the `dropna()` method to find out:
- For how many observations (rows) we have all the information available (i.e. no NaN values in any of the columns)?
- For how many observations (rows) we do have the `species_ID` data available ?
- Remove the data without `species_ID` data from the observations and assign the result to a new variable `observations_with_ID`
<details><summary>Hints</summary>
- By default, `dropna` removes all rows for which _any_ of the columns contains a `NaN` value.
- To specify which specific columns to check, use the `subset` argument
</details>
End of explanation
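A sketch building on `observations_unique` from the previous exercise:
len(observations_unique.dropna())                       # rows with no NaN in any column
len(observations_unique.dropna(subset=["species_ID"]))  # rows that do have a species_ID
observations_with_ID = observations_unique.dropna(subset=["species_ID"])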
# %load _solutions/case2_observations17.py
# %load _solutions/case2_observations18.py
Explanation: <div class="alert alert-success">
**EXERCISE 10**
Filter the `observations` data and select only those records that do not have a `species_ID` while having information on the `sex`. Store the result as variable `not_identified`.
<details><summary>Hints</summary>
- To combine logical operators element-wise in Pandas, use the `&` operator.
- Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values.
</details>
End of explanation
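A sketch combining the two element-wise conditions:
mask = observations["species_ID"].isna() & observations["sex"].notna()
not_identified = observations[mask]
not_identified.head()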
# Recap from previous exercises - remove duplicates and observations without species information
observations_unique_ = observations.drop_duplicates()
observations_data = observations_unique_.dropna(subset=['species_ID'])
Explanation: Adding the names of the observed species
End of explanation
species_names = pd.read_csv("data/species_names.csv")
species_names.head()
Explanation: In the data set observations, the column specied_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv:
End of explanation
species_names.shape
Explanation: The species names contains for each identifier in the ID column the scientific name of a species. The species_names data set contains in total 38 different scientific names:
End of explanation
# %load _solutions/case2_observations19.py
Explanation: For further analysis, let's combine both in a single DataFrame in the following exercise.
<div class="alert alert-success">
**EXERCISE 11**
Combine the DataFrames `observations_data` and `species_names` by adding the corresponding species name information (name, class, kingdom,..) to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data`.
<details><summary>Hints</summary>
- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.
- Take into account that our key-column is different for `observations` and `species_names`, respectively `specied_ID` and `ID`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on.
</details>
End of explanation
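A sketch of the join; the keyword choices (including the left join, which keeps every observation) are one reasonable option, not necessarily the course solution:
survey_data = pd.merge(observations_data, species_names, how="left",
                       left_on="species_ID", right_on="ID")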
survey_data['taxa'].value_counts()
#survey_data.groupby('taxa').size()
Explanation: Select subsets according to taxa of species
End of explanation
# %load _solutions/case2_observations20.py
# %load _solutions/case2_observations21.py
Explanation: <div class="alert alert-success">
**EXERCISE 12**
- Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection.
<details><summary>Hints</summary>
- You do not have to combine three different conditions, but use the `isin` operator with a list of names.
</details>
End of explanation
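A sketch using `isin`:
non_rodent_species = survey_data[survey_data["taxa"].isin(["Rabbit", "Bird", "Reptile"])]
len(non_rodent_species)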
# %load _solutions/case2_observations22.py
# %load _solutions/case2_observations23.py
r_species["name"].value_counts()
Explanation: <div class="alert alert-success">
**EXERCISE 13**
Select the observations for which the `name` starts with the characters 'r' (make sure it does not matter if a capital character is used in the 'name' value). Call the resulting variable `r_species`.
<details><summary>Hints</summary>
- Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other.
- If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`)
</details>
End of explanation
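A sketch that lowercases the names before comparing:
r_species = survey_data[survey_data["name"].str.lower().str.startswith("r")]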
# %load _solutions/case2_observations24.py
len(non_bird_species)
Explanation: <div class="alert alert-success">
**EXERCISE 14**
Select the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>.
<details><summary>Hints</summary>
- Logical operators like `==`, `!=`, `>`,... can still be used.
</details>
End of explanation
# %load _solutions/case2_observations25.py
# %load _solutions/case2_observations26.py
Explanation: <div class="alert alert-success">
**EXERCISE 15**
Select the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 using the `eventDate` column. Call the resulting variable `birds_85_89`.
<details><summary>Hints</summary>
- No hints, you can do this! (with the help of some `<=` and `&`, and don't forget the put brackets around each comparison that you combine)
</details>
End of explanation
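A sketch with the three conditions combined (datetime columns compare directly against date strings):
birds_85_89 = survey_data[(survey_data["taxa"] == "Bird")
                          & (survey_data["eventDate"] >= "1985-01-01")
                          & (survey_data["eventDate"] < "1990-01-01")]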
# %load _solutions/case2_observations27.py
# %load _solutions/case2_observations28.py
Explanation: <div class="alert alert-success">
**EXERCISE 16**
- Drop the observations for which no `weight` information is available.
- On the filtered data, compare the median weight for each of the species (use the `name` column)
- Sort the output from high to low median weight (i.e. descending)
__Note__ You can do this all in a single line statement, but don't have to do it as such!
<details><summary>Hints</summary>
- You will need `dropna`, `groupby`, `median` and `sort_values`.
</details>
End of explanation
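A single-statement sketch:
(survey_data.dropna(subset=["weight"])
            .groupby("name")["weight"]
            .median()
            .sort_values(ascending=False))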
# %load _solutions/case2_observations29.py
# %load _solutions/case2_observations30.py
Explanation: Species abundance
<div class="alert alert-success">
**EXERCISE 17**
Which 8 species (use the `name` column to identify the different species) have been observed most over the entire data set?
<details><summary>Hints</summary>
- Pandas provide a function to combine sorting and showing the first n records, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html)...
</details>
End of explanation
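A sketch (value_counts already sorts from high to low; `nlargest(8)` on a groupby size works as well):
survey_data["name"].value_counts().head(8)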
# %load _solutions/case2_observations31.py
# %load _solutions/case2_observations32.py
# %load _solutions/case2_observations33.py
Explanation: <div class="alert alert-success">
**EXERCISE 18**
- What is the number of different species (`name`) in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`.
- Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make an horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'.
<details><summary>Hints</summary>
- _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups.
- `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes.
</details>
End of explanation
# %load _solutions/case2_observations34.py
Explanation: <div class="alert alert-success">
**EXERCISE 19**
- What is the number of plots (`verbatimLocality`) each of the species (`name`) have been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high.
- Make an horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found (using the `n_plots_per_species` variable).
<details><summary>Hints</summary>
- Use the previous exercise to solve this one.
</details>
End of explanation
# %load _solutions/case2_observations35.py
# %load _solutions/case2_observations36.py
Explanation: <div class="alert alert-success">
**EXERCISE 20**
- Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named "count".
- Use a `pivot_table` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`.
<details><summary>Hints</summary>
- _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time.
- If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method.
- `reset_index()` is useful function to convert multiple indices into columns again.
</details>
End of explanation
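A sketch covering both steps, so that the `pivoted` variable used in the next cell exists:
n_plot_sex = (survey_data.groupby(["sex", "verbatimLocality"])
                         .size().rename("count").reset_index())
pivoted = n_plot_sex.pivot_table(index="verbatimLocality", columns="sex", values="count")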
pivoted.plot(kind='bar', figsize=(12, 6), rot=0)
Explanation: As such, we can use the variable pivoted to plot the result:
End of explanation
# %load _solutions/case2_observations37.py
Explanation: <div class="alert alert-success">
**EXERCISE 21**
Recreate the previous plot with the `catplot` function from the Seaborn library directly starting from <code>survey_data</code>.
<details><summary>Hints</summary>
- Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value.
- To link a column to different colors, use the `hue` argument
- Using `height` and `aspect`, the figure size can be optimized.
</details>
End of explanation
# %load _solutions/case2_observations38.py
Explanation: <div class="alert alert-success">
**EXERCISE 22**
- Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations.
- Using the seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a>, make a heatmap starting from the `heatmap_prep` variable.
<details><summary>Hints</summary>
- A `pivot_table` has an `aggfunc` parameter by which the aggregation of the cells combined into the year/month element are combined (e.g. mean, max, count,...).
- You can use the `ID` to count the number of observations.
- seaborn has an `heatmap` function which requires a short-form DataFrame, comparable to giving each element in a table a color value.
</details>
End of explanation
# %load _solutions/case2_observations39.py
# %load _solutions/case2_observations40.py
Explanation: Remark that we started from a tidy data format (also called long format) and converted to short format with in the row index the years, in the column the months and the counts for each of these year/month combinations as values.
<div class="alert alert-success">
**EXERCISE 23**
- Make a summary table with the number of records of each of the species in each of the plots (called `verbatimLocality`)? Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name.
- Use the Seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a> to make a heatmap.
<details><summary>Hints</summary>
- Make sure to pass the correct columns to respectively the `index`, `columns`, `values` and `aggfunc` parameters of the `pivot_table` function. You can use the `ID` to count the number of observations for each name/locality combination (when counting rows, the exact column doesn't matter).
</details>
End of explanation
# %load _solutions/case2_observations41.py
Explanation: <div class="alert alert-success">
**EXERCISE 24**
Make a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method.
<details><summary>Hints</summary>
- You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use.
- `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year.
</details>
End of explanation
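A sketch using `resample` on the `eventDate` column ('A' is the annual frequency alias):
survey_data.resample("A", on="eventDate").size().plot()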
# %load _solutions/case2_observations42.py
# %load _solutions/case2_observations43.py
Explanation: (OPTIONAL SECTION) Evolution of species during monitoring period
In this section, all plots can be made with the embedded Pandas plot function, unless specifically asked otherwise.
<div class="alert alert-success">
**EXERCISE 25**
Plot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years.
<details><summary>Hints</summary>
- _...for each month of..._ requires `groupby`.
- `resample` is not useful here, as we do not want to change the time-interval, but look at month of the year (over all years)
</details>
End of explanation
# %load _solutions/case2_observations44.py
# %load _solutions/case2_observations45.py
# %load _solutions/case2_observations46.py
Explanation: <div class="alert alert-success">
**EXERCISE 26**
Plot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time for the whole monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale
<details><summary>Hints</summary>
- `isin` is useful to select from within a list of elements.
- `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters!
- `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html) as it might be helpful for this exercise.
</details>
End of explanation
# Given as solution..
subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii',
'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]
month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size().rename("counts")
month_evolution = month_evolution.reset_index()
# %load _solutions/case2_observations47.py
Explanation: <div class="alert alert-success">
**EXERCISE 27**
Recreate the same plot as in the previous exercise using Seaborn `relplot` functon with the `month_evolution` variable.
<details><summary>Hints</summary>
- We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively.
- To create subplots in Seaborn, the usage of _facetting_ (splitting data sets to multiple facets) is used by linking a column name to the `row`/`col` parameter.
- Using `height` and `aspect`, the figure size can be optimized.
</details>
End of explanation
# %load _solutions/case2_observations48.py
# %load _solutions/case2_observations49.py
# %load _solutions/case2_observations50.py
Explanation: <div class="alert alert-success">
**EXERCISE 28**
Plot the annual amount of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets.
<details><summary>Hints</summary>
- Combine `resample` and `groupby`!
- Check out the previous exercise for the plot function.
- Pass the `sharey=False` to the `facet_kws` argument as a dictionary.
</details>
End of explanation
# %load _solutions/case2_observations51.py
Explanation: <div class="alert alert-success">
**EXERCISE 29**
The observations where taken by volunteers. You wonder on which day of the week the most observations where done. Calculate for each day of the week (`weekday`) the number of observations and make a barplot.
<details><summary>Hints</summary>
- Did you know the Python standard Library has a module `calendar` which contains names of week days, month names,...?
</details>
End of explanation |
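A sketch using the standard-library calendar module for readable weekday labels:
import calendar
weekday_counts = survey_data["eventDate"].dt.dayofweek.value_counts().sort_index()
weekday_counts.index = [calendar.day_name[i] for i in weekday_counts.index]
weekday_counts.plot(kind="barh")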
3,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executing Code
In this notebook we'll look at some of the issues surrounding executing
code in the notebook.
Backtraces
When you interrupt a computation, or if an exception is raised but not
caught, you will see a backtrace of what was happening when the program
halted. The backtrace is color highlighted to help you find the information
you need to debug the problem.
Step1: Python Debugging
You can also turn on the Python debugger inside a notebook using the
magic invocation %pdb on. When an exception occurs, the debugger
will activate inside the output cell. You can then type commands
and see responses from the stopped state of the program.
Some commands
Step2: Output
Normal output is shown after the In[] area. Output written to stdout is shown in one color,
while output written to stderr is shown with a red background.
Step3: Asynchronous Output
Output written to stdout and stderr shows up immediately in the notebook, you don't have
to wait for the evaluation to finish before you see anything. Here is demo.
Step4: Threads
You can start multiple threads and use the standard Python threading machinery, such as
the threading module, to coordinate between them.
Note that because of the global interpreter lock in CPython, two CPU-bound threads
will never execute Python bytecode at the same time.
Step5: Multiprocessing
It is possible to use the multiprocessing library inside Pineapple notebooks. The multiprocessing library spawns multiple interpreter processes,
which can actually run in parallel. Because of process startup and inter-process communication overhead, this is of course still no guarantee
of higher performance. | Python Code:
def f(x):
return 1.0 / x
def g(x):
return x - 1.0
f(g(1.0))
Explanation: Executing Code
In this notebook we'll look at some of the issues surrounding executing
code in the notebook.
Backtraces
When you interrupt a computation, or if an exception is raised but not
caught, you will see a backtrace of what was happening when the program
halted. The backtrace is color highlighted to help you find the information
you need to debug the problem.
End of explanation
%pdb on
f(g(1.0))
Explanation: Python Debugging
You can also turn on the Python debugger inside a notebook using the
magic invocation %pdb on. When an exception occurs, the debugger
will activate inside the output cell. You can then type commands
and see responses from the stopped state of the program.
Some commands:
- h help
- w print stack trace
- p expr print expressions
- q quit
- r restart
Full documentation on the debugger can be found at Python debugger pdb.
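As an illustrative aside that is not part of the original notebook, you can also pause execution at a line of your own choosing with the standard-library pdb module, instead of waiting for an exception:
import pdb

def demo(x):
    pdb.set_trace()   # execution pauses here; try commands such as 'p x', 'w', 'c'
    return x - 1.0

demo(1.0)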
End of explanation
import sys
print('Hello, world!')
sys.stdout.write('We meet again, stdout.')
sys.stderr.write('Error, you appear to have created a black hole.')
Explanation: Output
Normal output is shown after the In[] area. Output written to stdout is shown in one color,
while output written to stderr is shown with a red background.
End of explanation
import time
for i in range(10):
print(i)
time.sleep(0.5)
Explanation: Asynchronous Output
Output written to stdout and stderr shows up immediately in the notebook; you don't have
to wait for the evaluation to finish before you see anything. Here is a demo.
End of explanation
import threading
class SummingThread(threading.Thread):
def __init__(self, low, high):
super(SummingThread, self).__init__()
self.low = low
self.high = high
self.total = 0
def run(self):
for i in range(self.low, self.high):
self.total += i
def sequential_sum(n):
total = 0
for i in range(0, n):
total += i
return total
def parallel_sum(n):
thread1 = SummingThread(0, n//2)
thread2 = SummingThread(n//2, n)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
return thread1.total + thread2.total
# use the same n for both so the comparison is apples to apples
%timeit sequential_sum(1000000)
%timeit parallel_sum(1000000)
Explanation: Threads
You can start multiple threads and use the standard Python threading machinery, such as
the threading module, to coordinate between them.
Note that because of the global interpreter lock in CPython, two CPU-bound threads
will never execute Python bytecode at the same time.
End of explanation
from time import sleep
from multiprocessing import Pool
def f(p):
low, high = p
total = 0
for i in range(low, high):
total += i
return total
def sequential_sum(n):
total = 0
for i in range(0, n):
total += i
return total
def parallel_sum(n):
p = Pool(2)
results = p.map(f, [[0, n//2], [n//2, n]])
return results[0] + results[1]
if __name__ == "__main__":
    # use the same n for both so the comparison is apples to apples
    %timeit sequential_sum(100000)
    %timeit parallel_sum(100000)
Explanation: Multiprocessing
It is possible to use the multiprocessing library inside Pineapple notebooks. The multiprocessing library spawns multiple interpreter processes,
which can actually run in parallel. Because of process startup and inter-process communication overhead, this is of course still no guarantee
of higher performance.
End of explanation |
3,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Gradient-Boosted-Tree-Inferencing" data-toc-modified-id="Gradient-Boosted-Tree-Inferencing-1"><span class="toc-item-num">1 </span>Gradient Boosted Tree Inferencing</a></span><ul class="toc-item"><li><span><a href="#Translating-to-Production-Language" data-toc-modified-id="Translating-to-Production-Language-1.1"><span class="toc-item-num">1.1 </span>Translating to Production Language</a></span><ul class="toc-item"><li><span><a href="#Preparation" data-toc-modified-id="Preparation-1.1.1"><span class="toc-item-num">1.1.1 </span>Preparation</a></span></li><li><span><a href="#Regression" data-toc-modified-id="Regression-1.1.2"><span class="toc-item-num">1.1.2 </span>Regression</a></span></li><li><span><a href="#Binary-Classification" data-toc-modified-id="Binary-Classification-1.1.3"><span class="toc-item-num">1.1.3 </span>Binary Classification</a></span></li><li><span><a href="#Multiclass-Classification" data-toc-modified-id="Multiclass-Classification-1.1.4"><span class="toc-item-num">1.1.4 </span>Multiclass Classification</a></span></li><li><span><a href="#C++-Implementation" data-toc-modified-id="C++-Implementation-1.1.5"><span class="toc-item-num">1.1.5 </span>C++ Implementation</a></span></li></ul></li><li><span><a href="#ONNX" data-toc-modified-id="ONNX-1.2"><span class="toc-item-num">1.2 </span>ONNX</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Gradient Boosted Tree Inferencing
Once we train our machine learning model, depending on the use case, we may wish to operationize it by putting it behind a service for (near) real time inferencing. We can definitely generate predictions in batch offline, store them in some downstream tables or look up services, and pull out pre-computed predictions when needed. Although this batch prediction approach might sound easier to implement, and we might not have to worry about latency issues when it comes to real time services, this paradigm does come with its limitations. e.g.
Cold start problem, if a new entity, whether it's users coming to the website or items being listed on a marketplace, there will be no precomputed recommendations available.
Not having access to real time features. Dynamic features are features based on what’s happening right now – what a user is watching, what people just liked, knowing these will allow us to generate more accurate or relevant predictions based on latest information.
Potentially wasted computation/storage. If we generate predictions for every possible user each day, and only 5% of them login to use our website, then the compute used to generate 95% of our predictions will be wasted.
Translating to Production Language
It's very common in industry setting to prototype a machine learning model in Python and translate it into other languages such as C++, Java, etc, when it comes to deploying. This usually happens where the core application is written in other languages such as C++, Java, etc. and it is an extremely time sensitive application where we can't afford the cost of calling an external API to fetch the model prediction.
In this section, we'll be looking at how we can achieve this with Gradient Boosted Trees, specifically XGBoost. Different library might have different ways to doing this, but the concept should be similar.
Tree Structure
A typical model dump from XGBoost looks like the following
Step2: Binary Classification
Step3: Multiclass Classification
Step4: C++ Implementation
The rest of the content is about implementing the boosted tree inferencing logic in C++, all the code resides in the gbt_inference folder for those interested. In practice, we don't always have to rely on naive code that we've implemented to solidify our understanding. e.g. the m2cgen (Model 2 Code Generator) project is one of the many projects out there that focuses on converting a trained model into native code. If we export our regression model, we can see that the inferencing logic is indeed a bunch of if else statements followed by a summation at the very end.
Step5: ONNX
Another way to achieving this is through ONNX, directly quoting from its documentation.
ONNX Runtime provides an easy way to run machine learned models with high performance on CPU or GPU without dependencies on the training framework. Machine learning frameworks are usually optimized for batch training rather than for prediction, which is a more common scenario in applications, sites, and services
We'll walk through the process of converting our boosted tree model into ONNX format, and benchmark the inference runtime. Here, we are doing it for classification model, but the process should be similar for regression based models.
Step6: Upon porting our model to onnx format, we can use it for inferencing. This section uses the Python API for benchmarking. | Python Code:
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
import os
import numpy as np
import pandas as pd
import m2cgen as m2c
import sklearn.datasets as datasets
from xgboost import XGBClassifier, XGBRegressor
import onnxruntime as rt
from skl2onnx import convert_sklearn, update_registered_converter
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx.common.shape_calculator import calculate_linear_classifier_output_shapes
from onnxmltools.convert.xgboost.operator_converters.XGBoost import convert_xgboost
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -u -d -v -p numpy,pandas,sklearn,m2cgen,xgboost
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Gradient-Boosted-Tree-Inferencing" data-toc-modified-id="Gradient-Boosted-Tree-Inferencing-1"><span class="toc-item-num">1 </span>Gradient Boosted Tree Inferencing</a></span><ul class="toc-item"><li><span><a href="#Translating-to-Production-Language" data-toc-modified-id="Translating-to-Production-Language-1.1"><span class="toc-item-num">1.1 </span>Translating to Production Language</a></span><ul class="toc-item"><li><span><a href="#Preparation" data-toc-modified-id="Preparation-1.1.1"><span class="toc-item-num">1.1.1 </span>Preparation</a></span></li><li><span><a href="#Regression" data-toc-modified-id="Regression-1.1.2"><span class="toc-item-num">1.1.2 </span>Regression</a></span></li><li><span><a href="#Binary-Classification" data-toc-modified-id="Binary-Classification-1.1.3"><span class="toc-item-num">1.1.3 </span>Binary Classification</a></span></li><li><span><a href="#Multiclass-Classification" data-toc-modified-id="Multiclass-Classification-1.1.4"><span class="toc-item-num">1.1.4 </span>Multiclass Classification</a></span></li><li><span><a href="#C++-Implementation" data-toc-modified-id="C++-Implementation-1.1.5"><span class="toc-item-num">1.1.5 </span>C++ Implementation</a></span></li></ul></li><li><span><a href="#ONNX" data-toc-modified-id="ONNX-1.2"><span class="toc-item-num">1.2 </span>ONNX</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)
X = X[["age", "sex", "bmi", "bp"]]
X.head()
regression_model_params = {
'n_estimators': 2,
'max_depth': 3,
'base_score': 0.0
}
regression_model = XGBRegressor(**regression_model_params).fit(X, y)
regression_model
regression_model.get_booster().dump_model("regression.txt")
regression_model.predict(X.iloc[[0]])
Explanation: Gradient Boosted Tree Inferencing
Once we train our machine learning model, depending on the use case, we may wish to operationalize it by putting it behind a service for (near) real time inferencing. We can definitely generate predictions in batch offline, store them in downstream tables or lookup services, and pull out pre-computed predictions when needed. Although this batch prediction approach might sound easier to implement, and we might not have to worry about latency issues that come with real time services, this paradigm does come with its limitations, e.g.:
Cold start problem: if a new entity appears, whether it's a user coming to the website or an item being listed on a marketplace, there will be no precomputed recommendations available.
Not having access to real time features. Dynamic features are features based on what's happening right now, e.g. what a user is watching or what people just liked; knowing these allows us to generate more accurate or relevant predictions based on the latest information.
Potentially wasted computation/storage. If we generate predictions for every possible user each day, and only 5% of them login to use our website, then the compute used to generate 95% of our predictions will be wasted.
Translating to Production Language
It's very common in an industry setting to prototype a machine learning model in Python and translate it into other languages such as C++ or Java when it comes to deploying. This usually happens when the core application is written in one of those languages and the application is extremely time sensitive, so we can't afford the cost of calling an external API to fetch the model prediction.
In this section, we'll be looking at how we can achieve this with Gradient Boosted Trees, specifically XGBoost. Different libraries might have different ways of doing this, but the concept should be similar.
Tree Structure
A typical model dump from XGBoost looks like the following:
booster[0]:
0:[bmi<0.00942232087] yes=1,no=2,missing=1
1:[bmi<-0.0218342301] yes=3,no=4,missing=3
3:[bmi<-0.0584798381] yes=7,no=8,missing=7
7:leaf=25.84091
8:leaf=33.0292702
4:[bp<0.0270366594] yes=9,no=10,missing=9
9:leaf=38.7487526
10:leaf=51.0882378
2:[bp<0.0235937908] yes=5,no=6,missing=5
5:leaf=53.0696678
6:leaf=69.4000015
booster[1]:
0:[bmi<0.00511107268] yes=1,no=2,missing=1
1:[bp<0.0390867069] yes=3,no=4,missing=3
3:[bmi<-0.0207564179] yes=7,no=8,missing=7
7:leaf=21.0474758
8:leaf=27.7326946
4:[bmi<0.000799824367] yes=9,no=10,missing=9
9:leaf=36.1850548
10:leaf=14.9188232
2:[bmi<0.0730132312] yes=5,no=6,missing=5
5:[bp<6.75072661e-05] yes=11,no=12,missing=11
11:leaf=31.3889732
12:leaf=43.4056664
6:[bp<-0.0498541184] yes=13,no=14,missing=13
13:leaf=13.0395498
14:leaf=59.377037
There are 3 distinct pieces of information:
booster Gradient Boosting Tree is an ensemble tree method; each new booster indicates the start of a new tree. For a regression or binary classification model, the number of boosters in the model dump will be exactly equal to the number of trees we specified (e.g. for the sklearn XGBoost API, n_estimators controls this). For multi-class classification, it is the number of trees multiplied by the number of distinct classes: say we have 3 classes, then tree 0 contributes to the raw prediction of class 0, tree 1 to class 1, tree 2 to class 2, tree 3 to class 0, and so on.
node Following the booster is each tree's if-else structure. e.g. for node 0, if the feature bmi is less than a threshold, then it branches to node 1, else it branches to node 2.
leaf Once we reach the leaf, we can accumulate the response prediction. e.g. node 7 is a leaf, and the prediction for this node is 25.84091.
Raw Prediction
We mentioned that to get the prediction for a given input, we sum up the response predictions associated with each tree's leaf node. This holds true for regression models, but for other models we will need to perform a transformation on top of the raw prediction to get to probabilities: when building a binary classification model, a logistic (sigmoid) transformation is applied to the raw prediction, whereas for multi-class classification a softmax transformation is needed.
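To make the transformation concrete, here is a small illustrative sketch that is not part of the original notebook; it assumes the binary_model and inputs defined in the binary classification section, and uses the Booster predict method's output_margin flag to retrieve the summed leaf values:
import numpy as np
import xgboost as xgb

def sigmoid(margin):
    # logistic transformation used for binary classification
    return 1.0 / (1.0 + np.exp(-margin))

def softmax(margins):
    # row-wise softmax used for multi-class classification (one margin column per class)
    exp = np.exp(margins - margins.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

# raw margin = accumulated leaf values (plus the base score)
raw_margin = binary_model.get_booster().predict(xgb.DMatrix(inputs), output_margin=True)
print(sigmoid(raw_margin))  # should match binary_model.predict_proba(inputs)[:, 1]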
Preparation
All the examples below, be it regression, binary classification or multi class classification all follow the same structure.
We load some pre-processed data.
Train a quick XGBoost model.
Dump the raw model to disk.
Generate a sample prediction so we can later verify whether the prediction matches with the model converted to cpp.
Regression
End of explanation
X, y = datasets.make_classification(n_samples=10000, n_features=5, random_state=42, n_classes=2)
X
binary_model_params = {
'n_estimators': 3,
'max_depth': 3,
'tree_method': 'hist',
'grow_policy': 'lossguide'
}
binary_model = XGBClassifier(**binary_model_params).fit(X, y)
binary_model
binary_model.get_booster().dump_model("binary_class.txt")
inputs = np.array([[0.0, 0.2, 0.4, 0.6, 0.8]])
binary_model.predict_proba(inputs)
Explanation: Binary Classification
End of explanation
X, y = datasets.load_iris(return_X_y=True, as_frame=True)
X.head()
multi_class_model_params = {
'n_estimators': 2,
'max_depth': 3
}
multi_class_model = XGBClassifier(**multi_class_model_params).fit(X, y)
multi_class_model
multi_class_model.get_booster().dump_model("multi_class.txt")
inputs = np.array([[5.1, 3.5, 1.4, 0.2]])
multi_class_model.predict_proba(inputs)
Explanation: Multiclass Classification
End of explanation
code = m2c.export_to_c(regression_model)
print(code)
Explanation: C++ Implementation
The rest of the content is about implementing the boosted tree inferencing logic in C++; all the code resides in the gbt_inference folder for those interested. In practice, we don't always have to rely on naive code that we've implemented ourselves to solidify our understanding: e.g. the m2cgen (Model 2 Code Generator) project is one of many projects out there that focus on converting a trained model into native code. If we export our regression model, we can see that the inferencing logic is indeed a bunch of if/else statements followed by a summation at the very end.
End of explanation
n_features = 5
X, y = datasets.make_classification(n_samples=10000, n_features=n_features, random_state=42, n_classes=2)
feature_names = [f'f{i}' for i in range(n_features)]
print(f'num rows: {X.shape[0]}, num cols: {X.shape[1]}')
X
tree = XGBClassifier(
n_estimators=20,
max_depth=3,
learning_rate=0.2,
tree_method='hist',
verbosity=1
)
tree.fit(X, y, eval_set=[(X, y)])
tree.predict_proba(X[:1])
xgboost_checkpoint = 'model.json'
tree.save_model(xgboost_checkpoint)
tree_loaded = XGBClassifier()
tree_loaded.load_model(xgboost_checkpoint)
assert np.allclose(tree.predict_proba(X[:1]), tree_loaded.predict_proba(X[:1]))
input_payloads = [
{
'f0': -2.24456934,
'f1': -1.36232827,
'f2': 1.55433334,
'f3': -2.0869092,
'f4': -1.27760482
}
]
rows = []
for input_payload in input_payloads:
row = [input_payload[feature] for feature in feature_names]
rows.append(row)
np_rows = np.array(rows, dtype=np.float32)
tree.predict_proba(np_rows)[:, 1]
%%timeit
rows = []
for input_payload in input_payloads:
row = [input_payload[feature] for feature in feature_names]
rows.append(row)
np_rows = np.array(rows, dtype=np.float32)
tree.predict_proba(np_rows)[:, 1]
def convert_xgboost_to_onnx(model, num_features: int, checkpoint: str):
# boiler plate code for registering the xgboost converter
update_registered_converter(
XGBClassifier, 'XGBoostXGBClassifier',
calculate_linear_classifier_output_shapes, convert_xgboost,
options={'nocl': [True, False], 'zipmap': [True, False, 'columns']}
)
# perform the actual conversion specifying the types of our inputs,
# at the time of writing this, it doesn't support categorical types
# that are common in boosted tree libraries such as xgboost or lightgbm
model_onnx = convert_sklearn(
model, 'xgboost',
[('input', FloatTensorType([None, num_features]))],
target_opset={'': 15, 'ai.onnx.ml': 2}
)
with open(checkpoint, "wb") as f:
f.write(model_onnx.SerializeToString())
onnx_model_checkpoint = 'xgboost.onnx'
convert_xgboost_to_onnx(tree, len(feature_names), onnx_model_checkpoint)
Explanation: ONNX
Another way of achieving this is through ONNX; quoting directly from its documentation:
ONNX Runtime provides an easy way to run machine learned models with high performance on CPU or GPU without dependencies on the training framework. Machine learning frameworks are usually optimized for batch training rather than for prediction, which is a more common scenario in applications, sites, and services
We'll walk through the process of converting our boosted tree model into ONNX format and benchmark the inference runtime. Here, we are doing it for a classification model, but the process should be similar for regression-based models.
End of explanation
sess = rt.InferenceSession(onnx_model_checkpoint)
input_name = sess.get_inputs()[0].name
output_names = [output.name for output in sess.get_outputs()]
np_rows = np.array(rows, dtype=np.float32)
onnx_predict_label, onnx_predict_score = sess.run(output_names, {input_name: np_rows})
onnx_predict_score
%%timeit
rows = []
for input_payload in input_payloads:
row = [input_payload[feature] for feature in feature_names]
rows.append(row)
np_rows = np.array(rows, dtype=np.float32)
onnx_predict_label, onnx_predict_score = sess.run(output_names, {input_name: np_rows})
Explanation: Upon porting our model to onnx format, we can use it for inferencing. This section uses the Python API for benchmarking.
End of explanation |
3,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling Event Data
by
Step1: Let's inspect the small event log.
The first line (i.e., row) specifies the name of each column (i.e., event attribute).
Observe that, in the data table described by the file, we have 5 columns, being
Step2: Formatting Data Frames
Now we have loaded our first event log, it is time to put some pm4py into the mix.
pm4py uses standardized column names to represent the case identifier, the activity name and the timstamp.
These are, respectively, case
Step3: Observe that the column names are updated as expected.
Let us assume that we are not only interested in the number of events and cases, yet, we also want to figure out what
activities occur first, and what activities occur last in the traces described by the event log.
pm4py has a specific built-in function for this, i.e., pm4py.get_start_activities() and pm4py.get_end_activities() respectively.
Step4: The pm4py.get_start_activities() and pm4py.get_end_activities() both return a dictionary containing the activities
as a key, and, the number of observations (i.e., number of traces in which they occur first, respectively, last) in
the event log.
pm4py exploits a built-in pandas function to detect the format of the timestamps in the input data automatically.
However, pandas looks at the timestamp values in each row in isolation.
In some cases, this can lead to problems.
For example, if the provided value is 2020-01-18, i.e., first the year, then the month, and then the day of the date,
in some cases, a value of 2020-02-01 may be interpreted wrongly as January 2nd, i.e., rather than February 1st.
To alleviate this problem, an additional parameter can be provided to the format_dataframe() method, i.e.,
the timest_format parameter. The default Python timestamp format codes can be used to provide the timestamp format.
In this example, the timestamp format is %Y-%m-%d %H
Step5: Exporting Event Data
Now we have seen how to import event data into pm4py, let’s take a look at the opposite, i.e., exporting event data.
Exporting of event logs can be very useful, e.g., we might want to convert a .csv file into a .xes file or we might
want to filter out certain (noisy) cases and save the filtered event log. Like importing, exporting of event data is
possible in two ways, i.e., exporting to csv (using pandas) and exporting event logs to xes. In the upcoming
sections, we show how to export an event log stored as a pandas data frame into a csv file, a pandas data frame as an
xes file, a pm4py event log object as a csv file and finally, a pm4py event log object as an xes file.
Storing a Pandas Data Frame as a csv file
Storing an event log that is represented as a pandas dataframe is straightforward, i.e., we can directly use the to_csv
(full reference here) function
of the pandas DataFrame object. Consider the following example snippet of code, in which we show this functionality.
Step6: Storing a Pandas DataFrame as a .xes file
It is also possible to store a pandas data frame to a xes file. This is simply done by calling the pm4py.write_xes()
function. You can pass the dataframe as an input parameter to the function, i.e., pm4py handles the internal conversion
of the dataframe to an event log object prior to writing it to disk. Note that this construct only works if you have
formatted the data frame, i.e., as highlighted earlier in the importing CSV section.
Step7: Storing an Event Log object as a .csv file
In some cases, we might want to store an event log object, e.g., obtained by importing a .xes file, as a csv file.
For example, certain (commercial) process mining tools only support csv importing.
For this purpose, pm4py offers conversion functionality that allows you to convert your event log object into a data frame,
which you can subsequently export using pandas.
Step8: Storing an Event Log Object as a .xes File
Storing an event log object as a .xes file is rather straightforward. In pm4py, the write_xes() method allows us to do so.
Consider the simple example script below in which we show an example of this functionality. | Python Code:
import pandas as pd
df = pd.read_csv('data/running_example.csv', sep=';')
df
Explanation: Handling Event Data
by: Sebastiaan J. van Zelst
Process mining exploits Event Logs to generate knowledge of a process.
A wide variety of information systems, e.g., SAP, ORACLE, SalesForce, etc., allow us to extract, in one way or the other,
event logs similar to the example event logs.
All the examples we show in this notebook and all algorithms implemented in pm4py assume that we have already extracted
the event data into an appropriate event log format.
Hence, the core of pm4py does not support any data extraction features.
In order to support interoperability between different process mining tools and libraries, two standard data formats are
used to capture event logs, i.e., Comma Separated Value (CSV) files and eXtensible Event Stream (XES) files.
CSV files resemble the example tables shown in the previous section, i.e., Table 1 and Table 2. Each line in such a file
describes an event that occurred. The columns represent the same type of data, as shown in the examples, e.g., the case
for which the event occurred, the activity, the timestamp, the resource executing the activity, etc.
The XES file format is an XML-based format that allows us to describe process behavior.
We will not go into specific details w.r.t. the format of XES files, i.e., we refer to http://xes-standard.org/ for an
overview.
In this tutorial, we will use an oftenly used dummy example event log to explain the basic process mining operations.
The process that we are considering is a simplified process related to customer complaint handling, i.e., taken from the
book of van der Aalst (https://www.springer.com/de/book/9783662498507). The process, and the event data we are going to
use, looks as follows.
Importing CSV Files
Let’s get started!
We have prepared a small sample event log, containing behavior similar equal to the process model in Figure 3.
You can find the sample event log here.
We are going to load the event data, and, we are going to count how many cases are present in the event log, as well as
the number of events. Note that, for all this, we are effectively using a third-party library called pandas.
We do so because pandas is the de-facto standard of loading/manipulating csv-based data.
Hence, any process mining algorithm implemented in pm4py, using an event log as an input, can work directly with a
pandas file!
End of explanation
# number of cases
len(df['case_id'].unique())
# number of events
len(df)
Explanation: Let's inspect the small event log.
The first line (i.e., row) specifies the name of each column (i.e., event attribute).
Observe that, in the data table described by the file, we have 5 columns, being: case_id, activity,
timestamp, costs and org:resource.
The first column represents the case identifier, i.e., allowing us to identify what activity has been logged in the
context of what instance of the process.
The second column (activity) records the activity that has been performed.
The third column shows at what point in time the activity was recorded (timestamp).
In this example data, additional information is present as well.
In this case, the fourth column tracks the costs of the activity (costs attribute), whereas the fifth column tracks what
resource has performed the activity (org:resource).
Observe that rows 2-10 show the events that have been recorded for the process identified by case identifier 3.
We observe that first a register request activity was performed, followed by the examine casually, check ticket, decide,
reinitiate request, examine thoroughly, check ticket, decide, and finally, pay compensation activities.
Note that, in this case, the recorded process instance behaves as described by the model depicted in Figure 3.
Let's investigate some basic statistics of our log, e.g., the total number of cases described and the total number of events.
End of explanation
import pm4py
log = pm4py.format_dataframe(df, case_id='case_id',activity_key='activity',
timestamp_key='timestamp')
log
Explanation: Formatting Data Frames
Now we have loaded our first event log, it is time to put some pm4py into the mix.
pm4py uses standardized column names to represent the case identifier, the activity name and the timestamp.
These are, respectively, case:concept:name, concept:name and time:timestamp.
Hence, to make pm4py work with the provided csv file, we need to rename the case_id, activity and timestamp columns.
pm4py provides a dedicated utility function for this:
End of explanation
pm4py.get_start_activities(log)
pm4py.get_end_activities(log)
Explanation: Observe that the column names are updated as expected.
Let us assume that we are not only interested in the number of events and cases, yet, we also want to figure out what
activities occur first, and what activities occur last in the traces described by the event log.
pm4py has a specific built-in function for this, i.e., pm4py.get_start_activities() and pm4py.get_end_activities() respectively.
End of explanation
log_xes = pm4py.read_xes('data/running_example.xes')
pm4py.get_start_activities(log_xes)
pm4py.get_end_activities(log_xes)
Explanation: The pm4py.get_start_activities() and pm4py.get_end_activities() both return a dictionary containing the activities
as keys and the number of observations (i.e., the number of traces in which they occur first, respectively last) in
the event log.
pm4py exploits a built-in pandas function to detect the format of the timestamps in the input data automatically.
However, pandas looks at the timestamp values in each row in isolation.
In some cases, this can lead to problems.
For example, if the provided value is 2020-01-18, i.e., first the year, then the month, and then the day of the date,
in some cases, a value of 2020-02-01 may be interpreted wrongly as January 2nd, i.e., rather than February 1st.
To alleviate this problem, an additional parameter can be provided to the format_dataframe() method, i.e.,
the timest_format parameter. The default Python timestamp format codes can be used to provide the timestamp format.
In this example, the timestamp format is %Y-%m-%d %H:%M:%S%z.
In general, we advise always specifying the timestamp format.
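As a hedged illustration (this exact call is not in the original script, but the timest_format parameter is the one described above), specifying the format explicitly would look roughly like this:
log = pm4py.format_dataframe(df, case_id='case_id', activity_key='activity',
                             timestamp_key='timestamp',
                             timest_format='%Y-%m-%d %H:%M:%S%z')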
Importing XES Files
Next to CSV files, event data can also be stored in an XML-based format, i.e., in XES files.
In an XES file, we can describe a containment relation, i.e., a log contains a number of traces, which in turn contain several events.
Furthermore, an object, i.e., a log, trace, or event, is allowed to have attributes.
The advantage is that certain data attributes that are constant for a log or a trace, can be stored at that level.
For example, assume that we only know the total costs of a case, rather than the costs of the individual events.
If we want to store this information in a CSV file, we either need to replicate this information (i.e., we can only
store data in rows, which directly refer to events), or, we need to explicitly define that certain columns only get a
value once, i.e., referring to case-level attributes.
The XES standard more naturally supports the storage of this type of information.
Click here to obtain the .xes file of the running_example.
Importing an XES file is fairly straightforward.
pm4py has a special read_xes()-function that can parse a given xes file and load it in pm4py, i.e., as an Event Log object.
Consider the following code snippet, in which we show how to import an XES event log.
Like the previous example, the script outputs activities that can start and end a trace.
End of explanation
log.to_csv('running_example_exported.csv')
Explanation: Exporting Event Data
Now we have seen how to import event data into pm4py, let’s take a look at the opposite, i.e., exporting event data.
Exporting of event logs can be very useful, e.g., we might want to convert a .csv file into a .xes file or we might
want to filter out certain (noisy) cases and save the filtered event log. Like importing, exporting of event data is
possible in two ways, i.e., exporting to csv (using pandas) and exporting event logs to xes. In the upcoming
sections, we show how to export an event log stored as a pandas data frame into a csv file, a pandas data frame as an
xes file, a pm4py event log object as a csv file and finally, a pm4py event log object as an xes file.
Storing a Pandas Data Frame as a csv file
Storing an event log that is represented as a pandas dataframe is straightforward, i.e., we can directly use the to_csv
(full reference here) function
of the pandas DataFrame object. Consider the following example snippet of code, in which we show this functionality.
End of explanation
pm4py.write_xes(log, 'running_example_csv_exported_as_xes.xes')
Explanation: Storing a Pandas DataFrame as a .xes file
It is also possible to store a pandas data frame to a xes file. This is simply done by calling the pm4py.write_xes()
function. You can pass the dataframe as an input parameter to the function, i.e., pm4py handles the internal conversion
of the dataframe to an event log object prior to writing it to disk. Note that this construct only works if you have
formatted the data frame, i.e., as highlighted earlier in the importing CSV section.
End of explanation
df = pm4py.convert_to_dataframe(log_xes)
df.to_csv('running_example_xes_exported_as_csv.csv')
Explanation: Storing an Event Log object as a .csv file
In some cases, we might want to store an event log object, e.g., obtained by importing a .xes file, as a csv file.
For example, certain (commercial) process mining tools only support csv importing.
For this purpose, pm4py offers conversion functionality that allows you to convert your event log object into a data frame,
which you can subsequently export using pandas.
End of explanation
pm4py.write_xes(log_xes, 'running_example_exported.xes')
Explanation: Storing an Event Log Object as a .xes File
Storing an event log object as a .xes file is rather straightforward. In pm4py, the write_xes() method allows us to do so.
Consider the simple example script below in which we show an example of this functionality.
End of explanation |
3,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas 4
Step1: <a id=want></a>
The want operator
We need to know what we're trying to do -- what we want the data to look like. We say we apply the want operator.
Some problems we've run across that ask to be solved
Step2: Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We'd prefer to have it as a number, specifically a float.
Example
Step3: Comments. This is mostly text data, which means it's assigned the dtype object. Which is fine. But there are two things that would make the data easier to work with
Step4: Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
Example
Step5: Comments. Here we have a couple issues.
The first column includes a space and a number
Step6: Example
Step7: Comment. This has several issues
Step8: <a id='strings'></a>
String methods
We can treat variables as strings in Pandas in much the same way we dealt with strings in core Python. Run the code below to remind yourself how this works.
Step9: Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Example. Let's use a string method to fix the item_price variable in the Chipotle dataframe. This has three parts
Step10: Comment. We did everything here in one line
Step11: Comment. Not quite, we only want to split once.
Step12: Comments.
Note that we need two str's here
Step13: What to do. We use the replace method on the whole dataframe. To mark something as missing, we replace it as None, which Pandas interprets as missing and labels NaN.
Step14: Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
Step15: Comment. Some people prefer to use the numpy nan. Here's an example. The only advantage is that we avoid possible conflicts with other uses of the value None.
Step16: Comment. Unlike the string methods we described earlier, this use of replace affects complete entries, not elements of string entries. For example, suppose we tried to replace the periods in decimal numbers with an asterisk. We could try the following, but it doesn't work
Step17: Working with missing values
Step18: Comment. We usually don't have to worry about this, Pandas takes care of missing values automatically.
Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
Step19: <a id='selection'></a>
Selecting variables and observations
The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, or filtering. We'll treat the terms as synonymous.
There are lots of ways to do this. Mostly we do "Boolean" selection, which we address in the next section. We review more direct options here, mostly at high speed because they're not things we use much.
In the outline below, df is a dataframe, var and varn are variable names, vlist = ['var1', 'var2'] is a list of variable names, and nlist = [0, 3, 4] is a list of numerical variable or observation indexes, and n1 and n2 are integers. Some of the basic selection/indexing/slicing methods have the form
Step20: Exercise. Try each of these in a different cell and explain what they do
Step21: Find variable and country codes. Which ones do we want? Let's start by seeing that's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Note the use of the drop_duplicates method, which does what it sounds like.
Step22: Exercise.
Construct a list of countries with countries = weo[['ISO', 'Country']]; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped?
What are the country codes (ISO) for Argentina, Germany, and Greece?
What are the variable codes (WEO Subject Code) for government debt (gross debt, percent of GDP) and net lending/borrowing (also percent of GDP)?
Comment. Now that we have the country and variable codes, we can be more explicit about what we want. We want observations with those country and variable codes.
We work up to the solution one step at a time.
Comparisons for series
We can construct comparisons for dataframe columns much as we did with simple variables. The difference is that we get a complete column or True/False responses, not just one.
Mutiple comparisons have a different syntax than we saw earlier. and is replaced by &, and or is replaced by |. And when we have more than comparison, we need to enclose them in parentheses.
Here's an example.
Exercise. Compute and explain the comparisons
Step23: Exercise. Construct dataframes for which
small['Units'] does not equal 'National currency'.
small['Units'] equals 'National currency' and small['2011'] is greater than 100.
<a id='isin'></a>
The isin method
Pay attention now, this is really useful. Suppose we want to extract the data for which weo['ISO'] == 'ARG' (Argentina) or weo['ISO'] == 'GRC' (Greece). We could do that by combining the comparisons
Step24: Comment. We're choosing 2 variables from 45, so there are lots of Falses.
Step25: Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like
Step26: Comments.
We've now done what we described when we applied the want operator.
This is a go-to method. Circle it for later reference.
This is a go-to method. Circle it for later reference.
Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Assign the result to the dataframe gdp.
Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
<a id='contains'></a>
The contains method
Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True, if no, False. A little trick converts the True/False outcomes to ones and zeros.
We apply it to the Media variable of the Entry Poll dataframe ep. You may recall that this variable could have more than one response. We tease them apart with the contains method. Our want is to have a yes/no variable for each response.
Step27: Comment. That's pretty good, we now know which students mentioned Twitter and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick
Step28: Comment. Now let's do the same for some of the other entries and save them in new variables.
Step29: Exercise. What would you change in this graph? How would you do it? (Words are enough.)
Review
Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013 | Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Pandas 4: Describing data
More Pandas data management tools.
describe
mean, median, std etc
value_counts
groupby (***)
Apps
Poll
Chipotle
MovieLens (with requests worked in)
Note: requires internet access to run.
<!--
internal links http://sebastianraschka.com/Articles/2014_ipython_internal_links.html
-->
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
End of explanation
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv'
chp = pd.read_csv(url, sep='\t') # tab (\t) delimited
print('Variable dtypes:\n', chp.dtypes, sep='')
chp.head()
Explanation: <a id=want></a>
The want operator
We need to know what we're trying to do -- what we want the data to look like. We say we apply the want operator.
Some problems we've run across that ask to be solved:
Numerical data is contaminated by commas (marking thousands) or dollar signs.
Row and column labels are contaminated.
Missing values are marked erratically.
We have too much data, would prefer to choose a subset.
Variables run across rows rather than down columns.
What we want in each case is the opposite of what we have: we want nicely formatted numbers, clean row and column labels, and so on.
We'll solve the first four problems here, the last one in the next notebook.
Example: Chipotle data
This data comes from a New York Times story
End of explanation
pd.set_option("display.width", 80)
import pandas as pd
url1 = 'http://pages.stern.nyu.edu/~dbackus/Data/'
url2 = 'Data-Bootcamp-entry-poll_s16.csv'
url = url1 + url2
ep = pd.read_csv(url, header=0)
print('Dimensions:', ep.shape)
print('\nData types:\n', ep.dtypes, sep='')
ep.head(2)
Explanation: Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We'd prefer to have it as a number, specifically a float.
Example: Data Bootcamp entry poll
This is the poll we did at the start of the course. Responses were collected in a Google spreadsheet, which we converted to a csv and uploaded to our website.
End of explanation
# rename variables
newnames = ['time', 'program', 'career', 'programming', 'stats', 'media',
'other', 'major', 'data', 'why', 'topics']
newnames = [name.title() for name in newnames]
ep.columns = newnames
ep.head()
# check multi-response question to see what we're dealing with
ep['Media'].head(20)
Explanation: Comments. This is mostly text data, which means it's assigned the dtype object. Which is fine. But there are two things that would make the data easier to work with:
The column names are excessively verbose. This one's easy: We replace them with single words. Which we do below.
The second one is harder. Two of the questions -- social media and special topics -- say "mark all that apply." In the spreadsheet, we have a list of every choice the person checked. Our want is to count the number of each type of response. For example, we might want a bar chart that gives us the number of each response. The question is how we get there.
End of explanation
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2015-Frequently-Requested-Data.xls'
docs = pd.read_excel(url1+url2,
skiprows=3,
usecols=[0, 51, 52, 53, 54, 55, 57],
sheetname='Physicians',
# na_values=['..'],
skip_footer=21)
print('Dimensions:', docs.shape)
print('\nIndex', docs.index.tolist(), sep='')
print('\nVariable dtypes:\n', docs.dtypes.tail(8), sep='')
docs.head()
Explanation: Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
Example: OECD healthcare statistics
The OECD collects healthcare data on lots of (mostly rich) countries, which is helpful in producing comparisons. Here we use a spreadsheet linked in one of their documents.
End of explanation
names = list(docs)
docs = docs.rename(columns={names[0]: 'Country'})
docs.head(2)
Explanation: Comments. Here we have a couple issues.
The first column includes a space and a number: Australia 1, Chile 3, etc. We care about this because when we plot the data across countries, the country labels are going to be country names, so we want them in a better form than this.
The ..'s in the sheet lead us to label any column that includes them as dtype object. Here we want to label them as missing values.
If we want to plot each country against time, then we'll need to switch the rows and columns somehow, so that the x axis in the plot (the year) is the index and not the column label.
One more thing before we proceed: change the name of the country variable.
End of explanation
url1 = 'http://www.imf.org/external/pubs/ft/weo/2015/02/weodata/'
url2 = 'WEOOct2015all.xls'
url = url1 + url2
weo = pd.read_csv(url, sep='\t',
usecols=[1,2,3,4,6,40,41,42,43,44],
thousands=',',
na_values=['n/a', '--']
)
print('Variable dtypes:\n', weo.dtypes, sep='')
weo.head()
Explanation: Example: World Economic Outlook
The IMF's World Economic Outlook database contains a broad range of macroeconomic data for a large number of countries. It's updated twice a year and is a go-to source for things like current account balances (roughly, the trade balance) and government debt and deficits. It also has a few quirks, as we'll see.
Example. Run the following code as is, and with the thousands and na_values parameters commented out. How do the dtypes differ?
End of explanation
weo.T.head(10)
Explanation: Comment. This has several issues:
The variables run across rows with observations labeled 1980, 1981, etc across the top. We saw the same problem in the previous example.
If we run the first version of the read_csv statement, the data columns (1980, 1981, etc) have dtype object. A little work suggests that this is because they include commas marking thousands.
The entries labeled n/a need to be marked as missing values.
We can solve the last two in the read_csv function by deleting the hash -- which is what we see in the second read_csv statement. The other one takes some work.
Question. Can we transpose the whole thing to get the data running down columns?
End of explanation
dollars = '$123.45'
print('Type of variable dollars:', type(dollars))
num = dollars.replace('$', '')
num = float(num)
print('Type of variable num:', type(num))
Explanation: <a id='strings'></a>
String methods
We can treat variables as strings in Pandas in much the same way we dealt with strings in core Python. Run the code below to remind yourself how this works.
End of explanation
chp.head()
chpnum = chp.copy()
print('Original dtype:', chpnum['item_price'].dtype)
# create a copy of the df to play with
# delete dollar signs
chpnum['item_price'].str.replace('$', '').head()
# delete dollar signs, convert to float, and assign back to chpnum
chpnum['item_price'] = chpnum['item_price'].str.replace('$', '').astype(float)
print('New dtype:', chpnum['item_price'].dtype)
# assign back to chp for future use
chp = chpnum
Explanation: Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Example. Let's use a string method to fix the item_price variable in the Chipotle dataframe. This has three parts:
Use the method str to identify this as a string method.
Apply the string method of our choice (here replace) to fix the string.
Use the astype method to convert the fixed-up string to a float.
We start by making a copy of the chp dataframe that we can experiment with.
End of explanation
# try this with an example first
country = 'United States 1'
# get documentation for the rsplit method
#country.rsplit?
# an example
country.rsplit()
Explanation: Comment. We did everything here in one line: replace the dollar sign with a string method, then converted to float using astype. If you think this is too dense, you might break it into two steps.
Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation:
Play with the rsplit method to see how it works.
Apply rsplit to the example country = 'United States 1'.
Use a string method to do this to all the entries of the variable Country.
End of explanation
# what about this?
country.rsplit(maxsplit=1)
# one more step, we want the first component of the list
country.rsplit(maxsplit=1)[0]
docs["Country"].head()
# now do this for the variable Country
#docs['Country'].str.rsplit(maxsplit=1).str[0].head() # explain why this doesn't work
docs['Country'].str.rsplit(n=1).str[0].head()
# some people prefer the get method to slicing
docs['Country'].str.rsplit(n=1).str.get(0).head()
# now assign it to newdocs and see what we have
newdocs = docs.copy()
newdocs['Country'] = newdocs['Country'].str.rsplit(n=1).str.get(0)
newdocs.head()
# assign it back to docs for future use
docs = newdocs
Explanation: Comment. Not quite, we only want to split once.
End of explanation
docs = newdocs
docs.head()
Explanation: Comments.
Note that we need two str's here: one to do the split, the other to extract the first element.
For reasons that mystify us, we ran into problems when we used maxsplit=1, but it works with n=1.
This is probably more than you want to know, but file away the possibilities in case you need them.
<a id='missing'></a>
Missing values
It's important to label missing values, so that Pandas doesn't interpret entries as strings. Pandas is also smart enough to ignore things labeled missing when it does calculations or graphs. If we compute, for example, the mean of a variable, the default is to ignore missing values.
We've seen that we can label certain entries as missing values in read statements: read_csv, read_excel, and so on. Here we do it directly, mostly to remind ourselves what's involved.
Marking missing values
Example. The docs dataframe contains a number of instances of .. (double period). How can we mark them as missing values?
End of explanation
docs.replace(to_replace=['..'], value=[None]).head()
Explanation: What to do. We use the replace method on the whole dataframe. To mark something as missing, we replace it with None, which Pandas interprets as missing and labels NaN.
End of explanation
docs.dtypes.head()
docsna = docs.replace(to_replace=['..'], value=[None])
docsna.dtypes.head()
Explanation: Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
End of explanation
docs.replace(to_replace=['..'], value=[np.nan]).head()
# assign back to docs
docs = docs.replace(to_replace=['..'], value=[np.nan])
Explanation: Comment. Some people prefer to use the numpy nan. Here's an example. The only advantage is that we avoid possible conflicts with other uses of the value None.
End of explanation
docs.replace(to_replace=['.'], value=['*']).head()
Explanation: Comment. Unlike the string methods we described earlier, this use of replace affects complete entries, not elements of string entries. For example, suppose we tried to replace the periods in decimal numbers with an asterisk. We could try the following, but it doesn't work: the decimal numbers don't change.
End of explanation
# grab a variable to play with
var = docsna[2013].head(10)
var
# which ones are missing ("null")?
var.isnull()
# which ones are not missing ("not null")?
var.notnull()
# drop the missing
var.dropna()
Explanation: Working with missing values
End of explanation
docs[2013].plot.barh(figsize=(4, 12))
Explanation: Comment. We usually don't have to worry about this; Pandas takes care of missing values automatically.
Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
End of explanation
# we create a small dataframe to experiment with
small = weo.head()
small
Explanation: <a id='selection'></a>
Selecting variables and observations
The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, or filtering. We'll treat the terms as synonymous.
There are lots of ways to do this. Mostly we do "Boolean" selection, which we address in the next section. We review more direct options here, mostly at high speed because they're not things we use much.
In the outline below, df is a dataframe, var and varn are variable names, vlist = ['var1', 'var2'] is a list of variable names, and nlist = [0, 3, 4] is a list of numerical variable or observation indexes, and n1 and n2 are integers. Some of the basic selection/indexing/slicing methods have the form:
df[var] extracts a variable -- a series, in other words.
df[var][3] extracts observation 3 (starting at zero) from the series df[var].
df[vlist] extracts a new dataframe consisting of the variables in vlist.
df[nlist] does the same thing.
df[n1:n2] extracts observations n1 to n2-1, the traditional slicing syntax.
We find the last one confusing: it extracts rows, not columns. Pandas guru Wes McKinney notes: "This might seem inconsistent to some readers." Yup! We don't do it much, partly for that reason.
<!-- page 127 top -->
The Pandas docs push the loc and iloc methods. We'll ignore them -- we don't use them much -- but if you're interested, see the docs.
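A quick illustrative sketch of a few of these forms, using the small dataframe created above (the exercise that follows asks you to try the rest):
small['Country']            # one variable, returned as a series
small[['ISO', 'Country']]   # a new dataframe with two variables
small[0:2]                  # slicing selects observations (rows), not columns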
End of explanation
weo.head()
Explanation: Exercise. Try each of these in a different cell and explain what they do:
small[['ISO', 'Units']]
small[[0, 4]]
small['2011']
small['2011'][3]
small[1:3]
<a id='boolean'></a>
<a id='boolean'></a>
This is mostly what we do: we choose observations that satisfy one or more conditions. We work through this one step at a time:
Example: apply the want operator
Comparisons for dataframes
Boolean selection: select observations for which the comparison is True
The isin method
This is easier to describe with an example.
Example: Apply the want operator to WEO
Our want here is to take the weo dataframe and extract government debt and deficits for a given set of countries. Putting this to work involves several steps.
Here's the head of the dataframe to give us a sense of what we're dealing with.
End of explanation
variable_list = weo[['WEO Subject Code', 'Subject Descriptor', 'Units']].drop_duplicates()
print('Number of variables: ', variable_list.shape[0])
variable_list.head()
country_list = weo[['ISO', 'Country']].drop_duplicates()
print('Number of countries: ', country_list.shape[0])
country_list.head()
Explanation: Find variable and country codes. Which ones do we want? Let's start by seeing that's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Note the use of the drop_duplicates method, which does what it sounds like.
End of explanation
ncunits = small['Units'] == 'National currency'
small[ncunits]
small[small['Units'] == 'National currency']
Explanation: Exercise.
Construct a list of countries with countries = weo[['ISO', 'Country']]; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped?
What are the country codes (ISO) for Argentina, Germany, and Greece?
What are the variable codes (WEO Subject Code) for government debt (gross debt, percent of GDP) and net lending/borrowing (also percent of GDP)?
Comment. Now that we have the country and variable codes, we can be more explicit about what we want. We want observations with those country and variable codes.
We work up to the solution one step at a time.
Comparisons for series
We can construct comparisons for dataframe columns much as we did with simple variables. The difference is that we get a complete column or True/False responses, not just one.
Multiple comparisons have a different syntax than we saw earlier: and is replaced by &, and or is replaced by |. And when we have more than one comparison, we need to enclose each one in parentheses.
Here's an example.
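For instance, combining two comparisons on the small dataframe from above looks like this (note the parentheses around each comparison); the exercise that follows walks through the variations:
python
(small['Units'] == 'National currency') & (small['2011'] >= 100)   # True only where both comparisons are True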
Exercise. Compute and explain the comparisons:
small['Units'] == 'National currency'
small['2011'] >= 100
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
Boolean selection
Boolean selection simply chooses those observations for which a condition is True. Some people refer to this as filtering.
Example. We choose observations for which the units are 'National currency'. We do this first in two steps, then in one.
End of explanation
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo['WEO Subject Code'].isin(vlist).head(45)
Explanation: Exercise. Construct dataframes for which
small['Units'] does not equal 'National currency'.
small['Units'] equals 'National currency' and small['2011'] is greater than 100.
<a id='isin'></a>
The isin method
Pay attention now, this is really useful. Suppose we want to extract the data for which weo['ISO'] == 'ARG' (Argentina) or weo['ISO'] == 'GRC' (Greece). We could do that by combining the comparisons:
python
(weo['ISO'] == 'ARG') | (weo['ISO'] == 'GRC')
Remind yourself that | stands for "or." (What do we use for "and"?)
A simpler approach is to apply the isin method to a variable. This sets the comparison equal to True if the value of weo['ISO'] for that observation equals any element of the list. We could do the same thing using multiple comparisons, but this is a lot easier.
Let's see how this works.
Example. Let's apply the same logic to variable codes. If we want to extract the observations with codes
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
we would use
End of explanation
# this time let's use the result of isin for selection
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo[weo['WEO Subject Code'].isin(vlist)].head(6)
Explanation: Comment. We're choosing 2 variables from 45, so there are lots of Falses.
End of explanation
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['ARG', 'DEU', 'GRC']
weo_sub = weo[weo['WEO Subject Code'].isin(variables) & weo['ISO'].isin(countries)]
weo_sub
Explanation: Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like:
End of explanation
# recall
ep['Media'].head(10)
# the contains method
ep['Media'].str.contains('Twitter').head(10)
Explanation: Comments.
We've now done what we described when we applied the want operator.
This is a go-to method. Circle it for later reference.
Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Assign the result to the dataframe gdp.
Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
<a id='contains'></a>
The contains method
Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True, if no, False. A little trick converts the True/False outcomes to ones and zeros.
We apply it to the Media variable of the Entry Poll dataframe ep. You may recall that this variable could have more than one response. We tease them apart with the contains method. Our want is to have a yes/no variable for each response.
End of explanation
ep['Media'].str.contains('Twitter').head(10)*1
Explanation: Comment. That's pretty good, we now know which students mentioned Twitter and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick: we multiply by 1.
End of explanation
media = ['None', 'Twitter', 'Facebook', 'Blog']
oldep = ep.copy()
vnames = []
for x in media:
newname = 'Media' + ':' + x
vnames.append(newname)
ep[newname] = ep['Media'].str.contains(x)*1
vnames
media = ep[vnames]
media.head()
media_counts = media.sum()
media_counts
media_counts.plot.barh()
Explanation: Comment. Now let's do the same for some of the other entries and save them in new variables.
End of explanation
data = {'Size': ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49', 'e) 50 to 99',
'f) 100 to 249', 'g) 250 to 499', 'h) 500 to 999', 'i) 1000 to 2499',
'j) 2500 to 4999', 'k) 5000 to 9999', 'l) 10000+'],
'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,
19389, 9588, 6088, 2287, 1250, 1357],
'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341,
6611734, 6340775, 8321486, 6738218, 6559020, 32556671]}
bds = pd.DataFrame(data)
bds.head(3)
Explanation: Exercise. What would you change in this graph? How would you do it? (Words are enough.)
Review
Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013:
Size: size category of firms based on number of employees
Firms: number of firms in each size category
Emp: number of employees in each size category
Run the code cell below to load the data.
End of explanation |
3,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using pre-trained word embeddings in a Keras model
Based on https
Step1: Preparing the Embedding layer
Step2: Training a 1D convnet | Python Code:
from __future__ import print_function
import os
import sys
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Dense, Input, GlobalMaxPooling1D
from keras.layers import Conv1D, MaxPooling1D, Embedding, Flatten
from keras.models import Model
from sklearn.datasets import fetch_20newsgroups
data_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'),
shuffle=True, random_state=42)
data_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'),
shuffle=True, random_state=42)
texts = data_train.data
labels = data_train.target
labels_index = {}
for i,l in enumerate(data_train.target_names):
labels_index[i] = l
labels_index
data_train.data[0]
MAX_SEQUENCE_LENGTH = 1000
MAX_NB_WORDS = 20000
EMBEDDING_DIM = 100
VALIDATION_SPLIT = 0.2
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels))
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# split the data into a training set and a validation set
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
nb_validation_samples = int(VALIDATION_SPLIT * data.shape[0])
x_train = data[:-nb_validation_samples]
y_train = labels[:-nb_validation_samples]
x_val = data[-nb_validation_samples:]
y_val = labels[-nb_validation_samples:]
Explanation: Using pre-trained word embeddings in a Keras model
Based on https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
End of explanation
DATA_DIR = '/home/jorge/data/text'
embeddings_index = {}
f = open(os.path.join(DATA_DIR, 'glove.6B/glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
from keras.layers import Embedding
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
Explanation: Preparing the Embedding layer
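As a quick check on this step (a minimal sketch, not part of the original notebook), you can count how many words in the tokenizer's vocabulary actually received a pre-trained GloVe vector; the remaining rows of embedding_matrix stay all-zero:
python
n_found = sum(1 for word in word_index if word in embeddings_index)
print('Words with a GloVe vector: %d of %d' % (n_found, len(word_index)))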
End of explanation
from keras.optimizers import SGD
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x) # global max pooling
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)
model = Model(sequence_input, preds)
model.summary()
sgd_optimizer = SGD(lr=0.01, momentum=0.99, decay=0.001, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd_optimizer,
metrics=['acc'])
# happy learning!
model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=50, batch_size=128)
Explanation: Training a 1D convnet
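Note that the 20 Newsgroups test split (data_test) loaded at the top is never used above. A minimal sketch of how one might evaluate the trained model on it, reusing the fitted tokenizer and the same preprocessing, could look like this:
python
test_sequences = tokenizer.texts_to_sequences(data_test.data)
x_test = pad_sequences(test_sequences, maxlen=MAX_SEQUENCE_LENGTH)
y_test = to_categorical(np.asarray(data_test.target))
loss, acc = model.evaluate(x_test, y_test, batch_size=128)
print('Test accuracy: %.3f' % acc)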
End of explanation |
3,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>On Galerkin approximations for the QG equations</h1>
<h2>Supplementary material for subsection on the $\beta-$Eady model</h2>
<h3>Wave structure for Charney mode</h3>
<p></p>
<h3>Cesar B. Rocha*, William R. Young, and Ian Grooms</h3>
<p></p>
<h4>Winter 2015</h4>
<p></p>
*Scripps Institution of Oceanography, University of California, San Diego, 9500 Gilman Dr. MC 0213, La Jolla, CA/USA, crocha@ucsd.edu
Step1: A function to compute difference matrices
Step2: Load data
Step3: set up domain
Step4: compute wavestructure
We solved the problem for the streamfunction $\phi$ vertical structure. The streamfunction is then
\begin{equation}
\psi = |\phi(z)|\cos{(k x + P_{\psi}(z))}
\end{equation}
The associated PV is
\begin{equation}
q = \partial^2_{xx}\psi + \partial^2_{zz}\psi = \left(-k^2|\phi(z)| + \partial^2_{zz}|\phi(z)| - |\phi(z)|\,(\partial_z P_{\psi}(z))^2\right)\cos{(k x + P_{\psi}(z))} - \left(2\,\partial_z|\phi(z)|\,\partial_z P_{\psi}(z) + |\phi(z)|\,\partial^2_{zz} P_{\psi}(z)\right)\sin{(k x + P_{\psi}(z))}\,
\end{equation}
The phase is
\begin{equation}
P_{\psi}(z) = \tan^{-1}\frac{\text{Im}(\hat{\psi})}{\text{Re}(\hat{\psi})}\,.
\end{equation} | Python Code:
from __future__ import division
import numpy as np
from numpy import pi, sqrt,cos
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 25, 'legend.handlelength' : 1.25})
%matplotlib inline
import seaborn as sns
#sns.set(style="darkgrid")
sns.set_context("paper", font_scale=5, rc={"lines.linewidth": 1.5})
Explanation: <h1>On Galerkin approximations for the QG equations</h1>
<h2>Supplementary material for subsection on the $\beta-$Eady model</h2>
<h3>Wave structure for Charney mode</h3>
<p></p>
<h3>Cesar B. Rocha*, William R. Young, and Ian Grooms</h3>
<p></p>
<h4>Winter 2015</h4>
<p></p>
*Scripps Institution of Oceanography, University of California, San Diego, 9500 Gilman Dr. MC 0213, La Jolla, CA/USA, crocha@ucsd.edu
End of explanation
def D_matrices(N):
''' create N x N difference matrices
D1 :: first-order, centered difference
D2 :: second-order, centered difference
'''
D2 = np.zeros((N,N))
D1 = np.zeros((N,N))
for i in range(N):
D2[i,i] = -2.
if i<N-1:
D2[i,i+1],D1[i,i+1] = 1.,-1
if i>0:
D2[i,i-1],D1[i,i-1] = 1.,1.
return D1,D2
Explanation: A function to compute difference matrices
End of explanation
data_path = 'linear_charney_num_kappa_8.npz'
charney = np.load(data_path)
kappa = charney['kappa']
phi_max = charney['e_num'][1:-1] # do not consider ghost points
N = charney['N']
# the critical level
zc = charney['c_num'].real - 1. # recall the domain has depth 1
Explanation: Load data
End of explanation
# vertical coordinate
dz = 1./N # vertical resolution
z = np.arange(-dz/2,-1.-dz/2.,-dz) # level array
# horizontal coordinate
x = np.linspace(0,np.pi,100)
# grid
X,Z = np.meshgrid(x,z)
Explanation: set up domain
End of explanation
# wave structure in xz-plane
phi_max_abs = np.abs(phi_max)
phi_max_phase = np.arctan2(phi_max.imag,phi_max.real)
phase = np.repeat(phi_max_phase,x.size).reshape(z.size,x.size)
mag = np.repeat(phi_max_abs,x.size).reshape(z.size,x.size)
# wave structure
PSI = mag*np.cos( kappa*X + phase )
phi = charney['e_num'][:]
phi_abs = np.abs(phi)
phi_phase = np.arctan2(phi.imag,phi.real)
D1,D2 = D_matrices(N+2)
D1,D2 = np.matrix(D1),np.matrix(D2)
phi_abs_prime = np.array(D1*np.matrix(phi_abs).T)[1:-1]/(2*dz)
phi_abs_dprime = np.array(D2*np.matrix(phi_abs).T)[1:-1]/(dz**2)
phi_phase_prime = np.array(D1*np.matrix(phi_phase).T)[1:-1]/(2*dz)
phi_phase_dprime = np.array(D2*np.matrix(phi_phase).T)[1:-1]/(dz**2)
mag_prime = np.repeat(phi_abs_prime,x.size).reshape(z.size,x.size)
mag_dprime = np.repeat(phi_abs_dprime,x.size).reshape(z.size,x.size)
phase_prime = np.repeat(phi_phase_prime,x.size).reshape(z.size,x.size)
phase_dprime = np.repeat(phi_phase_dprime,x.size).reshape(z.size,x.size)
cost = np.cos( kappa*X + phase)
sint = np.sin( kappa*X + phase)
PV = (-(kappa**2)*mag + mag_dprime - mag*(phase_prime**2) )*cost \
- (2.*mag_prime*phase_prime + mag*phase_dprime)*sint
lw = 2.
aph = .5
# PV and psi wave structure
plt.figure(figsize=(12,9))
plt.contour(X,Z,1e2*PSI,np.linspace(-10,10,9),colors='k')
plt.contourf(X,Z,PV,np.linspace(-6.,6.,9),cmap='RdBu_r',extend='both')
#plt.plot(x,np.ones(x.size)*zc,'w--',linewidth=lw,alpha=1)
plt.text(-0.375,zc-.01,r' $z_c \rightarrow$',fontsize=35)
cb = plt.colorbar(extend='both',shrink=.9)
cb.ax.text(.0,1.075,'PV',rotation=0,fontsize=30)
plt.text(2.4, -.075, r"$\beta-$Eady Problem, $\kappa = 8$", size=25, rotation=0.,\
ha="center", va="center",\
bbox = dict(boxstyle="round",ec='k',fc='w'))
plt.xticks([0.,pi/4,pi/2,3*pi/4,pi],[r'$0$',r'$\pi/4$',r'$\pi/2$',\
r'$3\,\pi/4$',r'$\pi$'])
plt.ylabel('$z/H$')
plt.xlabel(r'$x/L_d$')
plt.savefig('figs/wave-structure_pv_psi_kappa_8_num.eps')
Explanation: compute wavestructure
We solved the problem for the streamfunction $\phi$ vertical structure. The streamfunction is then
\begin{equation}
\psi = |\phi(z)|\cos{(k x + P_{\psi}(z))}
\end{equation}
The associated PV is
\begin{equation}
q = \partial^2_{xx}\psi + \partial^2_{zz}\psi = \left(-k^2|\phi(z)| + \partial^2_{zz}|\phi(z)| - |\phi(z)|\,(\partial_z P_{\psi}(z))^2\right)\cos{(k x + P_{\psi}(z))} - \left(2\,\partial_z|\phi(z)|\,\partial_z P_{\psi}(z) + |\phi(z)|\,\partial^2_{zz} P_{\psi}(z)\right)\sin{(k x + P_{\psi}(z))}\,
\end{equation}
The phase is
\begin{equation}
P_{\psi}(z) = \tan^{-1}\frac{\text{Im}(\hat{\psi})}{\text{Re}(\hat{\psi})}\,.
\end{equation}
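For completeness, the vertical derivatives expand as follows (a short worked step; it is what the mag_prime, phase_prime and phase_dprime arrays in the accompanying code implement):
\begin{equation}
\partial_z \psi = \partial_z|\phi|\,\cos{(k x + P_{\psi})} - |\phi|\,\partial_z P_{\psi}\,\sin{(k x + P_{\psi})},
\end{equation}
and differentiating once more,
\begin{equation}
\partial^2_{zz} \psi = \left(\partial^2_{zz}|\phi| - |\phi|\,(\partial_z P_{\psi})^2\right)\cos{(k x + P_{\psi})} - \left(2\,\partial_z|\phi|\,\partial_z P_{\psi} + |\phi|\,\partial^2_{zz} P_{\psi}\right)\sin{(k x + P_{\psi})},
\end{equation}
which together with $\partial^2_{xx}\psi = -k^2\psi$ gives the expression for $q$ above.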
End of explanation |
3,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
output
Step1: I. prepare mapping_PDalpha file
calculate average PD alpha diversity at highest rarefaction 5870
Step2: Add PD alpha diversity into mapping file
Step3: output mapping file with PD alpha diversity
Step4: II. scatterplots of PD alpha vs. 5 vitaminD measurements
prepare file with only 5 VitD variables and PD
Step5: plot scatterplots separately due to different scale
Step6: III. linear regression of PD alpha diversity vs. 5 vitaminD measurements
Step7: diagnostic results | Python Code:
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from statsmodels.compat import lzip
import statsmodels.stats.api as sms
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: output: 'mapping_PDalpha.txt' (mapping file with PD alpha diversity)
scatterplots of PD alpha diversity vs. 5 vitaminD measurements (output: 'vitamin_pd.txt', the 5 VitD variables with PD)
linear regression of PD alpha diversity vs. 5 vitaminD measurements & diagnostic plots and tests
effect size analysis of alpha diversity with metadata (see notebook R_alpha_RDA)
references on statsmodels ols and diagnostic plots:
http://mpastell.com/pweave/_downloads/linear_regression.html (visualization)
http://www.statsmodels.org/dev/examples/notebooks/generated/regression_diagnostics.html (tests)
End of explanation
alpha = pd.read_csv('../data/shannon.txt', sep='\t')
alpha.tail(10)
alpha.shape
# look at only the highest rarefaction depth
alpha_high = alpha.loc[alpha['sequences per sample'] == 5870]
# take average of 10 iterations as alpha value
alpha_high = alpha_high.drop(['Unnamed: 0', 'sequences per sample', 'iteration'], axis=1)
alpha_high.head()
alpha_high.shape
alpha_avg = pd.DataFrame(alpha_high.mean(axis=0), columns=['alpha_shannon'])
Explanation: I. prepare mapping_PDalpha file
calculate average PD alpha diversity at highest rarefaction 5870
End of explanation
mf = pd.read_csv('../data/mapping_cleaned_MrOS.txt', sep='\t', dtype=str, index_col='#SampleID')
mf.head()
table = pd.merge(mf, alpha_avg, left_index=True, right_index=True)
table.head()
print(mf.shape, table.shape)
# check
print(alpha_avg.head())
print(table.loc[table.index=='SD8637'].alpha_shannon)
print(table.loc[table.index=='PO7016'].alpha_shannon)
print(table.loc[table.index=='MN1789'].alpha_shannon)
print(table.loc[table.index=='MN1868'].alpha_shannon)
print(table.loc[table.index=='PA3814'].alpha_shannon)
Explanation: Add PD alpha diversity into mapping file
End of explanation
table.to_csv('../data/mapping_Shannonalpha.txt', sep='\t')
Explanation: output mapping file with PD alpha diversity
End of explanation
df = table[['OHVD3', 'OHV1D3', 'OHV24D3', 'ratio_activation', 'ratio_catabolism', 'alpha_shannon']]
df = df.apply(pd.to_numeric, errors='coerce') # still need to convert, as their types changed in 'table'
print(df.shape)
df.head()
df.describe()
df.to_csv('../data/vitamin_shannon.txt', sep='\t')
Explanation: II. scatterplots of PD alpha vs. 5 vitaminD measurements
prepare file with only 5 VitD variables and PD
End of explanation
var = df.columns.drop('alpha_shannon')
i = 0
col_list_palette = sns.xkcd_palette(['sky blue'])
sns.set_palette(col_list_palette)
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i],data=df)
ax.set_xlabel('25(OH)2D3', fontsize=20)
ax.set_ylabel('Shannon alpha diversity', fontsize=20)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_VD3_reg.pdf')
ax.savefig('../figures/Shannon_VD3_reg.png')
i = 1
col_list_palette = sns.xkcd_palette(['aquamarine'])
sns.set_palette(col_list_palette)
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i],data=df)
ax.set_xlabel('1,25(OH)2D3', fontsize=20)
ax.set_ylabel('Shannon alpha diversity', fontsize=20)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_V1D3_reg.pdf')
ax.savefig('../figures/Shannon_V1D3_reg.png')
i = 2
col_list_palette = sns.xkcd_palette(['light orange'])
sns.set_palette(col_list_palette)
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i],data=df)
ax.set_xlabel('24,25(OH)2D3', fontsize=20)
ax.set_ylabel('Shannon alpha diversity', fontsize=20)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_V24D3_reg.pdf')
ax.savefig('../figures/Shannon_V24D3_reg.png')
i = 3
col_list_palette = sns.xkcd_palette(['pale purple'])
sns.set_palette(col_list_palette)
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i],data=df)
ax.set_xlabel(var[i], fontsize=20)
ax.set_ylabel('Shannon alpha diversity', fontsize=20)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_RatioAct_reg.pdf')
ax.savefig('../figures/Shannon_RatioAct_reg.png')
i = 4
col_list_palette = sns.xkcd_palette(['red'])
sns.set_palette(col_list_palette)
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i],data=df)
ax.set_xlabel(var[i], fontsize=20)
ax.set_ylabel('Shannon alpha diversity', fontsize=20)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_RatioCat_reg.pdf')
ax.savefig('../figures/Shannon_RatioCat_reg.png')
# all 5 VitD measurements
sns.set(color_codes=True)
var = df.columns.drop('alpha_shannon')
sns.set_palette("Set2", n_colors=len(var))
for i in range(len(var)):
ax = sns.regplot(x=var[i], y="alpha_shannon", label=var[i], data=df)
ax.set(xlabel='VitaminD measurement', ylabel='Shannon alpha diversity')
ax.legend()
plt.xlim(-5, 120)
ax = ax.get_figure()
ax.tight_layout()
ax.savefig('../figures/Shannon_5VitD_reg.pdf')
ax.savefig('../figures/Shannon_5VitD_reg.png')
Explanation: plot scatterplots separately due to different scale
End of explanation
out = []
for i in range(len(var)):
tmp = df[['alpha_shannon', var[i]]].dropna(axis=0, how='any')
y = tmp['alpha_shannon']
X = tmp[var[i]]
results = smf.OLS(y, sm.add_constant(X)).fit()
#print(results.summary())
# normality test
name = ['Chi^2', 'Two-tail probability']
test = sms.omni_normtest(results.resid)
normtest = lzip(name, test)[1][1]
# condition number
cn = np.linalg.cond(results.model.exog)
# heteroskedasticity tests (null: the residual variance does not depend on the variables in x)
name = ['Lagrange multiplier statistic', 'p-value']
test = sms.het_breuschpagan(results.resid, results.model.exog)
heter = lzip(name, test)[1][1]
# linearity test (null: is linear)
name = ['t value', 'p value']
test = sms.linear_harvey_collier(results)
linear = lzip(name, test)[1][1]
out.append(['alpha_shannon', var[i], results.params[1], results.pvalues[1],
results.rsquared_adj, normtest, cn, heter, linear])
out = pd.DataFrame(out, columns=['y', 'X', 'slope', 'pvalue', 'adjusted R-square',
'norm test P-val', 'condition number', 'hetero test P-val', 'linear P-val'])
out
# fine with non-normal residuals
# reference: https://stats.stackexchange.com/questions/29731/regression-when-the-ols-residuals-are-not-normally-distributed
Explanation: III. linear regression of PD alpha diversity vs. 5 vitaminD measurements
End of explanation
# plot the data and fit
plt.plot(X, y, 'ro')
plt.plot(X, results.fittedvalues, 'b')
plt.xlabel(var[i])
plt.ylabel('shannon Alpha')
# histogram of normalized residuals
plt.hist(results.resid_pearson)
plt.ylabel('Count')
plt.xlabel('Normalized residuals')
# normality of residual tests
# Jarque-Bera test
name = ['Jarque-Bera', 'Chi^2 two-tail prob.', 'Skew', 'Kurtosis']
test = sms.jarque_bera(results.resid)
print(lzip(name, test))
# Omni test
name = ['Chi^2', 'Two-tail probability']
test = sms.omni_normtest(results.resid)
print(lzip(name, test))
# cooks distance
influence = results.get_influence()
(c, p) = influence.cooks_distance # c is the distance and p is p-value
plt.stem(np.arange(len(c)), c, markerfmt=',')
# influence test
from statsmodels.stats.outliers_influence import OLSInfluence
test_class = OLSInfluence(results)
test_class.dfbetas[:5,:]
# residuals against leverage
from statsmodels.graphics.regressionplots import plot_leverage_resid2
fig, ax = plt.subplots(figsize=(8,6))
fig = plot_leverage_resid2(results, ax = ax)
# multicolinearity
np.linalg.cond(results.model.exog) # condition number
# heteroskedasticity tests (whether residuals have unequal variance)
# Breusch-Pagan test
name = ['Lagrange multiplier statistic', 'p-value',
'f-value', 'f p-value']
test = sms.het_breuschpagan(results.resid, results.model.exog)
print(lzip(name, test))
# Goldfeld-Quand test
name = ['F statistic', 'p-value']
test = sms.het_goldfeldquandt(results.resid, results.model.exog)
print(lzip(name, test))
# linearity test
name = ['t value', 'p value']
test = sms.linear_harvey_collier(results)
lzip(name, test)
Explanation: diagnostic results: only the 'normality' assumption is broken; all else holds
This is not a concern; see here (https://stats.stackexchange.com/questions/75054/how-do-i-perform-a-regression-on-non-normal-data-which-remain-non-normal-when-tr)
Diagnostics Plots and Tests
End of explanation |
3,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learn the standard library to at least know what's there
itertools and collections have very useful features
chain
product
permutations
combinations
izip
Step2: Challenge (Easy)
Write a function to return the total number of digits in a given string, and those digits.
Step3: Challenge (Tricky)
Same as above -- but where consecutive digits are available, return them as a single number.
Ex. "2a78b123" returns "3 numbers, they are
Step5: Challenge (Tricky)
Same as above, but do it a second way.
Step7: Challenge (Hard)
Same as above, but all valid numbers expressed in digits, commas, and decimal points.
Ex. "a23.42dx9,331nm87,55" -> 4; 23.42, 9331, 87, 55
Left as an exercise
Step8: Gotchas
Modifying a dictionary's keys while iterating over it.
python
for key in dictionary | Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina'
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
sns.set_style('darkgrid')
plt.rcParams['figure.figsize'] = 12, 8 # plotsize
import numpy as np
import pandas as pd
# plot residuals
from itertools import groupby # NOT REGULAR GROUPBY
from itertools import product, cycle, izip
import re # regular expressions
Explanation: Learn the standard library to at least know what's there
itertools and collections have very useful features
chain
product
permutations
combinations
izip
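A minimal sketch of what a few of these do (izip is the Python 2 spelling; Python 3's built-in zip is already lazy):
python
from itertools import chain, product, permutations, combinations
list(chain([1, 2], [3, 4]))        # [1, 2, 3, 4]
list(product('ab', [0, 1]))        # [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
list(permutations('abc', 2))       # ordered pairs: ('a','b'), ('a','c'), ('b','a'), ...
list(combinations('abc', 2))       # unordered pairs: ('a','b'), ('a','c'), ('b','c')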
End of explanation
test_string = 'de3456yghj87654edfghuio908ujhgyuY^YHJUi8ytgh gtyujnh y7'
count = 0
digits = []
for x in test_string:
try:
int(x)
count += 1
digits.append(int(x))
except:
pass
print("Number of digits:", str(count) + ";")
print("They are:", digits)
Explanation: Challenge (Easy)
Write a function to return the total number of digits in a given string, and those digits.
End of explanation
test_string
groups = []
uniquekeys = []
for k, g in groupby(test_string, lambda x: x.isdigit()):
groups.append(list(g))
uniquekeys.append(k)
print(groups)
print(uniquekeys)
numbers = []
for x, y in izip(groups, uniquekeys):
if y:
numbers.append(int(''.join([j for j in x])))
print("Number:", np.sum(uniquekeys))
print("They are:", numbers)
# In one cell
def solution_2(test_string):
groups = []
uniquekeys = []
for k, g in groupby(test_string, lambda x: x.isdigit()):
if k:
groups.append(int(''.join([j for j in g])))
return len(groups), groups
print(solution_2(test_string))
Explanation: Challenge (Tricky)
Same as above -- but where consecutive digits are available, return them as a single number.
Ex. "2a78b123" returns "3 numbers, they are: 2, 78, 123"
End of explanation
def solution_3(test_string):
'''Regular expressions can be a very powerful and useful tool.'''
groups = [int(j) for j in re.findall(r'\d+', test_string)]
return len(groups), groups
solution_3(test_string)
Explanation: Challenge (Tricky)
Same as above, but do it a second way.
End of explanation
def ex1(num):
'''A stupid example generator to prove a point.'''
while num > 1:
num += 1
yield num
hey = ex1(5)
hey.next()
hey.next()
Explanation: Challenge (Hard)
Same as above, but all valid numbers expressed in digits, commas, and decimal points.
Ex. "a23.42dx9,331nm87,55" -> 4; 23.42, 9331, 87, 55
Left as an exercise :)
Don't spend much time on this one.
Generators
End of explanation
even_better_name = 5
Explanation: Gotchas
Modifying a dictionary's keys while iterating over it.
python
for key in dictionary:
if key == "bat":
del dictionary[key]
If you have to do something like this:
python
list_of_keys = dictionary.keys()
for key in list_of_keys:
if key == "bat":
del dictionary[key]
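A minimal runnable sketch of the gotcha and the fix (the animals dictionary is made up for illustration; mutating a dict while iterating over it raises a RuntimeError):
python
animals = {'bat': 1, 'cat': 2, 'dog': 3}
# for key in animals:               # unsafe: the dict changes size mid-iteration
#     if key == 'bat':
#         del animals[key]
for key in list(animals.keys()):    # iterate over a copy of the keys instead
    if key == 'bat':
        del animals[key]
print(animals)                      # {'cat': 2, 'dog': 3}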
End of explanation |
3,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: VRESM-1-0
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
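# --- Illustrative sketch only (not a documented model entry) ---
# For a multi-valued (1.N) property, DOC.set_value is assumed to be called
# once per selected choice, e.g.:
#     DOC.set_value("surface pressure")
#     DOC.set_value("wind components")
#     DOC.set_value("temperature")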
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
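# --- Illustrative sketch only (not a documented model entry) ---
# As with other 1.N ENUMs, one DOC.set_value call per selected aerosol is
# assumed, e.g.:
#     DOC.set_value("sulphate")
#     DOC.set_value("sea salt")
#     DOC.set_value("dust")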
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
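# --- Illustrative sketch only (not a documented model entry) ---
# INTEGER properties take an unquoted number, e.g.:
#     DOC.set_value(6)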
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
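# --- Illustrative sketch only (not a documented model entry) ---
# BOOLEAN properties take an unquoted True or False, e.g.:
#     DOC.set_value(True)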
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
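# --- Illustrative sketch only (not a documented model entry) ---
# FLOAT properties take an unquoted number; a 94 GHz CloudSat-like radar,
# given here purely as an illustration, would be entered in Hz:
#     DOC.set_value(94.0e9)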
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building communities
micom will construct communities from a specification via a Pandas DataFrame. Here, the DataFrame needs at least two columns
Step1: As we see this specification contains the required fields and some more information. In fact the specification may contain any number of additional information which will be saved along with the community model. One special example is "abundance" which we will get to know soon
Step2: This includes the correctly scaled exchange reactions with the internal medium and initializes the external imports to the maximum found in all models. The original taxonomy is maintained in the com.taxonomy attribute.
Note that micom can figure out how to read a variety of different file types based on the extension. It curently supports
Step3: As you can notice we have gained a new column called abundance. This column quantifies the relative quantity of each individual in the community. Since we did not specify this in the original taxonomy micom assumes that all individuals are present in the same quantity.
The presented community here is pretty simplistic. For microbial communities micom includes a larger taxonomy for 773 microbial species from the AGORA paper. Here the "file" column only contains the base names for the files but you can easily prepend any path as demonstrated in the following | Python Code:
from micom.data import test_taxonomy
taxonomy = test_taxonomy()
taxonomy
Explanation: Building communities
micom will construct communities from a specification via a Pandas DataFrame. Here, the DataFrame needs at least two columns: "id" and "file" which specify the ID of the organism/tissue and a file containing the actual individual model.
To make more sense of that we can look at a small example. micom comes with a function that generates a simple example community specification consisting of several copies of a small E. coli model containing only the central carbon metabolism.
End of explanation
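As a minimal aside (not part of the original notebook), a hand-written specification could be built directly with pandas; the file paths and the extra column below are purely illustrative:
import pandas as pd

# Hypothetical specification: the two required columns ("id" and "file")
# plus one optional metadata column that micom will store with the community model.
custom_taxonomy = pd.DataFrame({
    "id": ["org_a", "org_b"],
    "file": ["models/org_a.xml", "models/org_b.xml"],   # assumed file locations
    "genus": ["Escherichia", "Bacteroides"],             # optional extra information
})
custom_taxonomy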
from micom import Community
com = Community(taxonomy)
print("Build a community with a total of {} reactions.".format(len(com.reactions)))
Explanation: As we can see, this specification contains the required fields and some additional information. In fact, the specification may contain any amount of additional information, which will be saved along with the community model. One special example is "abundance", which we will get to know soon :)
In order to convert the specification into a community model, we will use the Community class from micom, which derives from the cobrapy Model class.
End of explanation
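Since Community derives from the cobrapy Model class, the standard cobrapy workflow should carry over directly. A small sketch, assuming a recent cobrapy version:
# Community inherits the cobrapy Model interface, so a plain FBA can be run as usual.
solution = com.optimize()
print(solution)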
com.taxonomy
Explanation: This includes the correctly scaled exchange reactions with the internal medium and initializes the external imports to the maximum found in all models. The original taxonomy is maintained in the com.taxonomy attribute.
Note that micom can figure out how to read a variety of different file types based on the extension. It currently supports:
.pickle for pickled models
.xml or .gz for XML models
.json for JSON models
.mat for COBRAtoolbox models
End of explanation
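Because the reader is picked from the file extension, a single specification can in principle mix formats. A hypothetical illustration (file names are placeholders):
import pandas as pd

# Hypothetical mixed-format specification; micom would choose a reader per extension.
mixed_taxonomy = pd.DataFrame({
    "id": ["org_json", "org_xml", "org_mat"],
    "file": ["models/org1.json", "models/org2.xml", "models/org3.mat"],  # assumed paths
})
mixed_taxonomy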
from micom.data import agora
tax = agora.copy()
tax.file = "models/" + tax.file # assuming you have downloaded the AGORA models to the "models" folder
tax.head()
Explanation: As you may have noticed, we have gained a new column called abundance. This column quantifies the relative abundance of each individual in the community. Since we did not specify this in the original taxonomy, micom assumes that all individuals are present in equal quantities.
The community presented here is fairly simplistic. For microbial communities, micom includes a larger taxonomy of 773 microbial species from the AGORA paper. Here the "file" column only contains the base names of the files, but you can easily prepend any path, as demonstrated in the following:
End of explanation |
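A hypothetical sketch of building a community from a few AGORA entries with explicit abundances; this assumes the referenced model files have actually been downloaded to the "models" folder:
from micom import Community

# Take a small subset of the AGORA taxonomy and assign relative abundances.
# If the "abundance" column is omitted, micom falls back to equal quantities.
small_tax = tax.head(3).copy()
small_tax["abundance"] = [0.5, 0.3, 0.2]
agora_com = Community(small_tax)
agora_com.taxonomy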
3,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
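The property cells that follow all repeat one pattern: the provided DOC.set_id(...) call identifies the property, and the author adds DOC.set_value(...) with an entry from the listed valid choices (or free text / numbers where indicated). A hypothetical filled-in cell might look like the sketch below; the value is only a placeholder, not a recommendation for this model:
# Hypothetical illustration of a completed property cell (placeholder value).
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
DOC.set_value("Example atmosphere model v1.0")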
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Annotate movement artifacts and reestimate dev_head_t
Periods, where the participant moved considerably, are contaminated by low
amplitude artifacts. When averaging the magnetic fields, the more spread the
head position, the bigger the cancellation due to different locations.
Similarly, the covariance will also be affected by severe head movement,
and source estimation will suffer low/smeared coregistration accuracy.
This example uses the continuous head position indicators (cHPI) time series
to annotate periods of head movement, then the device to head transformation
matrix is estimated from the artifact-free segments. The new head position will
be more representative of the actual head position during the recording.
Step1: Plot continuous head position with respect to the mean recording position
Step2: Plot raw data with annotated movement
Step3: After checking the annotated movement artifacts, calculate the new transform
and plot it | Python Code:
# Authors: Adonay Nunes <[email protected]>
# Luke Bloy <[email protected]>
# License: BSD (3-clause)
import os.path as op
import mne
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.preprocessing import annotate_movement, compute_average_dev_head_t
# Load data
data_path = bst_auditory.data_path()
data_path_MEG = op.join(data_path, 'MEG')
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
raw_fname1 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_02.ds')
# read and concatenate two files
raw = read_raw_ctf(raw_fname1, preload=False)
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=False)])
raw.crop(350, 410).load_data()
raw.resample(100, npad="auto")
Explanation: Annotate movement artifacts and reestimate dev_head_t
Periods, where the participant moved considerably, are contaminated by low
amplitude artifacts. When averaging the magnetic fields, the more spread the
head position, the bigger the cancellation due to different locations.
Similarly, the covariance will also be affected by severe head movement,
and source estimation will suffer low/smeared coregistration accuracy.
This example uses the continuous head position indicators (cHPI) time series
to annotate periods of head movement, then the device to head transformation
matrix is estimated from the artifact-free segments. The new head position will
be more representative of the actual head position during the recording.
End of explanation
# Get cHPI time series and compute average
chpi_locs = mne.chpi.extract_chpi_locs_ctf(raw)
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs)
original_head_dev_t = mne.transforms.invert_transform(
raw.info['dev_head_t'])
average_head_dev_t = mne.transforms.invert_transform(
compute_average_dev_head_t(raw, head_pos))
fig = mne.viz.plot_head_positions(head_pos)
for ax, val, val_ori in zip(fig.axes[::2], average_head_dev_t['trans'][:3, 3],
original_head_dev_t['trans'][:3, 3]):
ax.axhline(1000 * val, color='r')
ax.axhline(1000 * val_ori, color='g')
# The green horizontal lines represent the original head position, whereas the
# red lines are the new head position averaged over all the time points.
Explanation: Plot continuous head position with respect to the mean recording position
End of explanation
mean_distance_limit = .0015 # in meters
annotation_movement, hpi_disp = annotate_movement(
raw, head_pos, mean_distance_limit=mean_distance_limit)
raw.set_annotations(annotation_movement)
raw.plot(n_channels=100, duration=20)
Explanation: Plot raw data with annotated movement
End of explanation
new_dev_head_t = compute_average_dev_head_t(raw, head_pos)
raw.info['dev_head_t'] = new_dev_head_t
mne.viz.plot_alignment(raw.info, show_axes=True, subject=subject,
trans=trans_fname, subjects_dir=subjects_dir)
Explanation: After checking the annotated movement artifacts, calculate the new transform
and plot it:
End of explanation |
3,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semi graphic displays and charsets
Some text or semi graphic displays included in stemgraphic.
imports
Step1: Loading some data
Step2: Heatmaps
These are stem-and-leaf heatmaps as introduced by stemgraphic. Columns are leaves, rows are stems.
Limited to 300 by default, random state for reproducibility
Step3: Limited to a sample of 300 by default (display= to modify), random state for reproducibility. heatmap is more readable for pattern than heatmatrix
Step4: Tally chart
Step5: Dot plot
With flip_axes (rotated 90) and symmetric options
Step6: Histogram
Basically a histogram where binning is based on the stem-and-leaf bins (decimal, or break on 2 or break on 5), but can also be zoomed in (-1, -2) or out (+1, +2). shade can be 'none', 'light', 'medium', 'dark' or 'full'.
Step7: Charset support
Some of these render slightly differently in the console (python, ipython) versus in the notebook.
Step8: The alignment might not be 100% based on the font available on your system, but in a terminal, alignment will be correct, which is where most people will use these. arabic and arabic_r are reversed (right to left, left to right) in the console compared to the notebook. | Python Code:
import pandas as pd
from stemgraphic.num import text_heatmap, heatmatrix, text_hist, text_dot, stem_tally, stem_text
from stemgraphic.helpers import available_charsets
Explanation: Semi graphic displays and charsets
Some text or semi graphic displays included in stemgraphic.
imports
End of explanation
df = pd.read_csv('../datasets/home_data.csv')
df.shape
df.head()
Explanation: Loading some data
End of explanation
heatmatrix(df.zipcode, charset='bold', random_state=42);
Explanation: Heatmaps
These are stem-and-leaf heatmaps as introduced by stemgraphic. Columns are leaves, rows are stems.
Limited to 300 by default, random state for reproducibility
End of explanation
text_heatmap(df.zipcode, charset='sansbold', random_state=42);
Explanation: Limited to a sample of 300 by default (display= to modify), random state for reproducibility. heatmap is more readable for pattern than heatmatrix
End of explanation
stem_tally(df.price)
Explanation: Tally chart
End of explanation
text_dot(df.price, symmetric=True, flip_axes=True)
Explanation: Dot plot
With flip_axes (rotated 90) and symmetric options
End of explanation
text_hist(df.bathrooms, display=100, zoom=1, random_state=42, shade='dark')
Explanation: Histogram
Basically a histogram where binning is based on the stem-and-leaf bins (decimal, or break on 2 or break on 5), but can also be zoomed in (-1, -2) or out (+1, +2). shade can be 'none', 'light', 'medium', 'dark' or 'full'.
End of explanation
available_charsets()
Explanation: Charset support
Some of these render slightly differently in the console (python, ipython) versus in the notebook.
End of explanation
for charset in available_charsets():
print('Using charset: {}'.format(charset))
stem_text(df.sqft_living, charset=charset, random_state=42);
print()
print('____________________________________________________________________')
print()
Explanation: The alignment might not be 100% based on the font available on your system, but in a terminal, alignment will be correct, which is where most people will use these. arabic and arabic_r are reversed (right to left, left to right) in the console compared to the notebook.
End of explanation |
3,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo Simulations
Calculating Pi
Step1: Calculating an integral
Step2: Drawing random numbers
Numpy has tons of random number functions.
See https
Step3: Hit-miss
Now let's draw from a weird distribution
Step4: Markov Chain Monte Carlo
Step5: 2D MCMC Visualised | Python Code:
import numpy as np
import matplotlib.pyplot as plt
def random_number_plusminus1(n):
return 2*np.random.random(n) - 1
x, y = random_number_plusminus1((2,1000))
plt.scatter(x, y)
plt.show()
area_of_square = 2*2
ratio_of_dart_inside = np.mean(x**2 + y**2 < 1)
pi_estimate = area_of_square * ratio_of_dart_inside
print(pi_estimate, np.pi)
x, y = random_number_plusminus1((2,10000000))
area_of_square = 2*2
ratio_of_dart_inside = np.mean(x**2 + y**2 < 1)
pi_estimate = area_of_square * ratio_of_dart_inside
print(pi_estimate, np.pi)
Explanation: Monte Carlo Simulations
Calculating Pi
End of explanation
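# Note added for clarity (not part of the original notebook): the cells above implement
# the classic dart-throwing estimator. The square [-1, 1] x [-1, 1] has area 4 and the
# unit disk has area pi, so
#     pi_estimate = 4 * (number of points with x**2 + y**2 < 1) / (total number of points),
# and, as for most Monte Carlo estimates, the statistical error shrinks roughly as 1/sqrt(N).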
def f(x):
return np.log(2*x) # Integral from 1 to 10 is 20.264
x = np.linspace(1,10,1000)
plt.plot(x, f(x))
plt.show()
n = 1000
x_draw = 1 + 9*np.random.random(n)
y_draw = 3.5 * np.random.random(n)
plt.scatter(x_draw, y_draw)
plt.plot(x, f(x), 'r', lw=3)
plt.show()
area_square = 3.5*9
ratio_inside = np.mean(y_draw < f(x_draw))
integral = area_square * ratio_inside
print(integral)
def calc_integral(n):
x_draw = 1 + 9*np.random.random(n)
y_draw = 3.5 * np.random.random(n)
ratio_inside = np.mean(y_draw < f(x_draw))
return area_square * ratio_inside
estimates = [calc_integral(100000) for i in range(100)]
print(np.mean(estimates), '+-', np.std(estimates)/np.sqrt(100))
Explanation: Calculating an integral
End of explanation
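# Added note (not from the original notebook) on the numbers printed above: the integral
# is estimated as (area of the bounding box) * (fraction of points under the curve), and
# with k = 100 independent repetitions, np.std(estimates)/np.sqrt(100) is the standard
# error of the mean, which is why the result is reported as mean +- that value.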
def exponential_numbers(a, n):
u = np.random.random(n)
return -1/a * np.log(u) # inverse method
x = exponential_numbers(1, 250)
plt.plot(x, '-o')
plt.show()
x = exponential_numbers(1, 10000)
plt.hist(x, bins=50)
plt.show()
Explanation: Drawing random numbers
Numpy has tons of random number functions.
See https://docs.scipy.org/doc/numpy/reference/routines.random.html
But it doesn't have everything.
It does have the exponential distributions, but let's try and make it ourselves.
End of explanation
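# Why the inverse method above works (an added sketch, not part of the original notebook):
# the Exp(a) CDF is F(x) = 1 - exp(-a*x), so F^{-1}(u) = -log(1 - u)/a; and since 1 - U is
# uniform on (0, 1) whenever U is, X = -log(U)/a follows an Exp(a) distribution.
a = 2.0
print(np.mean(exponential_numbers(a, 10**6)), 1/a)  # sample mean should be close to 1/a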
def f(x):
return np.exp(-x) * x**2/(2-10*np.exp(-2))
x = np.linspace(0, 2, 10000)
plt.plot(x, f(x))
plt.show()
def draw_random_number(f, minx, maxx, maxy):
while True:
x = minx + (maxx - minx) * np.random.random()
y = maxy * np.random.random()
if f(x) > y:
return x
x = [draw_random_number(f, 0, 2, 1) for i in range(100000)]
plt.hist(x, bins=50, normed=True)
plt.show()
Explanation: Hit-miss
Now let's draw from a weird distribution:
End of explanation
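# A vectorized variant of the same hit-miss idea (an added sketch, not in the original
# notebook; the helper name below is ours): propose many points at once and keep those
# under the curve. The expected acceptance rate is (area under f) / (box area) = 1/2 here,
# since f is a normalised density on [0, 2] and the box has area 2 * 1.
def draw_random_numbers_vectorized(f, minx, maxx, maxy, n):
    x = minx + (maxx - minx) * np.random.random(n)
    y = maxy * np.random.random(n)
    return x[y < f(x)]  # keep only the accepted proposals
accepted = draw_random_numbers_vectorized(f, 0, 2, 1, 100000)
print(len(accepted) / 100000)  # empirical acceptance rate, close to 0.5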
def metropolis(f, x0, n=1000, std=0.3):
current = f(x0)
x = [x0]
for i in range(1, n):
xn = x0 + std * np.random.randn()
new = f(xn)
if np.random.random() < new/current:
x0 = xn
current = new
x.append(x0)
return x
gauss = lambda x : np.exp(-x**2/2)
exp = lambda x : np.exp(-x) * (x>=0)
x = metropolis(gauss, 1, 250)
plt.plot(x, '-o')
plt.show()
x = metropolis(gauss, 1, 100000)
plt.hist(x, bins=30, normed=True)
plt.show()
x = metropolis(exp, 1, 100000)
plt.hist(x, bins=30, normed=True)
plt.show()
Explanation: Markov Chain Monte Carlo: Metropolis-Hastings
We give up the requirement that samples are independent.
End of explanation
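# Spelling out the acceptance rule used above (an added clarification, not from the
# original notebook): with the symmetric Gaussian proposal x_new = x_old + std*N(0, 1),
# the Metropolis rule accepts a move with probability min(1, f(x_new)/f(x_old)). The test
# "np.random.random() < new/current" implements exactly that, because any ratio >= 1 is
# always accepted; only ratios of f are needed, so f may be unnormalised.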
def some_2d_distribution(x, y): # doesn't have to be normalised
return x**2*np.exp(-y**2) * (x>=0) * (x<=10) * (y>=-5) * (y<=5)
X, Y = np.meshgrid(np.linspace(0,10,50), np.linspace(-5,5,50))
d = some_2d_distribution(X, Y)
plt.imshow(d, extent=(np.min(X),np.max(X),np.max(Y),np.min(Y)))
plt.show()
def metropolis(f, x0, y0, n=1000, std=1.0):
current = f(x0, y0)
x = [x0]
y = [y0]
plt.ion()
for i in range(1, n):
xn = x0 + std * np.random.randn()
yn = y0 + std * np.random.randn()
new = f(xn, yn)
if np.random.random() < new/current:
x0 = xn
y0 = yn
current = new
x.append(x0)
y.append(y0)
plt.clf()
plt.plot(x, y)
plt.axis([0,10,-5,5])
plt.pause(0.001)
return x
%matplotlib
metropolis(some_2d_distribution, 0, 0)
Explanation: 2D MCMC Visualised
End of explanation |
3,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing x**n as fast as possible
Step1: Problem statement
Since $n$ is an integer, the simplest way is to compute $x \cdot x \cdots x$, but is there a faster way?
Solution
The starting idea is to write $x^{2n}=(x^n)^2$. Extrapolating, we deduce that if $n=2^k$, then computing $x^n$ takes only $k$ iterations rather than $2^k$.
Step2: When $n$ is not a power of 2, it suffices to decompose it in binary. If $n = \sum_k a_k 2^k$, with $a_k \in \{0,1\}$, then $x^n = \prod_k x^{a_k 2^k}$. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Computing x**n as fast as possible
End of explanation
def puissance2k(x,k):
while k > 0 :
x *= x
k -= 1
return x
for i in range(0,4) :
print ( "2^(2^{0})=2^{1}={2}".format( i, 2**i, puissance2k ( 2, i ) ) )
Explanation: Problem statement
Since $n$ is an integer, the simplest way is to compute $x \cdot x \cdots x$, but is there a faster way?
Solution
The starting idea is to write $x^{2n}=(x^n)^2$. Extrapolating, we deduce that if $n=2^k$, then computing $x^n$ takes only $k$ iterations rather than $2^k$.
End of explanation
def puissance(x,n):
r = 1
while n > 0 :
if n % 2 == 1 : r *= x
x *= x
n //= 2
return r
for i in range(0,9) :
print ( "2^{0}={1}".format( i, puissance ( 2, i ) ) )
Explanation: When $n$ is not a power of 2, it suffices to decompose it in binary. If $n = \sum_k a_k 2^k$, with $a_k \in \{0,1\}$, then $x^n = \prod_k x^{a_k 2^k}$.
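For instance (a worked example added for illustration): $13 = 1101_2 = 8 + 4 + 1$, so $x^{13} = x^8 \cdot x^4 \cdot x$. The puissance loop squares $x$ at each step and multiplies it into $r$ whenever the low bit of $n$ is 1, so it needs on the order of $2\log_2 n$ multiplications instead of the $n-1$ of the naive product.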
End of explanation |
3,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 텍스트 생성을 위한 Federated Learning
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 사전 훈련된 모델 로드하기
TensorFlow 튜토리얼 즉시 실행되는 RNN을 사용한 텍스트 생성에 따라 사전 훈련된 모델을 로드합니다. 하지만, 셰익스피어의 전체 작품에 대한 훈련 대신 Charles Dickens의 A Tale of Two Cities 및 A Christmas Carol의 텍스트에 대해 모델을 사전 훈련했습니다.
어휘 확장 이외에는 원래 튜토리얼을 수정하지 않았기 때문에이 초기 모델은 최첨단이 아니지만 합리적인 예측을 생성하고 튜토리얼 목적에 충분합니다. 최종 모델은 tf.keras.models.save_model(include_optimizer=False) 로 저장되었습니다.
이 튜토리얼에서는 TFF에서 제공하는 데이터의 페더레이션 버전을 사용하여 셰익스피어에 대한 이 모델을 미세 조정하는 데 페더레이션 학습을 사용할 것입니다.
어휘 조회 테이블 생성하기
Step3: Load the pre-trained model and generate some text
Step4: Load and preprocess the federated Shakespeare data
The tff.simulation.datasets package provides a variety of datasets that are split into "clients", where each client corresponds to a dataset on a particular device that might participate in federated learning.
These datasets provide realistic non-IID data distributions that replicate in simulation the challenges of training on real decentralized data. Some of the preprocessing of this data was done using tools from the Leaf project (github).
Step5: The datasets provided by shakespeare.load_data() consist of a sequence of string Tensors, one for each line spoken by a particular character in a Shakespeare play. The client keys consist of the name of the play joined with the name of the character, so for example MUCH_ADO_ABOUT_NOTHING_OTHELLO corresponds to the lines for the character Othello in the play Much Ado About Nothing. Note that in a real federated learning scenario clients are never identified or tracked by ids, but for simulation it is useful to work with keyed datasets.
Here, for example, we can look at some data from King Lear.
Step6: Now we use tf.data.Dataset transformations to prepare this data for training the char RNN loaded above.
Step7: Note that in the formation of the original sequences and in the formation of batches above, we use drop_remainder=True for simplicity. This means that any characters (clients) that do not have at least (SEQ_LENGTH + 1) * BATCH_SIZE characters of text will have empty datasets. A typical approach to address this would be to pad the batches with a special token and then mask the loss so that the padding tokens are not taken into account.
This would complicate the example somewhat, so for this tutorial we only use full batches, as in the standard tutorial. However, in the federated setting this issue is more significant, because many users might have small datasets.
Now we can preprocess our raw_example_dataset, and check the types.
Step8: Compile the model and test on the preprocessed data
We loaded an uncompiled keras model, but in order to run keras_model.evaluate, we need to compile it with a loss and metrics. We will also compile in an optimizer, which will be used as the on-device optimizer in federated learning.
The original tutorial didn't have char-level accuracy (the fraction of predictions where the highest probability was put on the correct next char). This is a useful metric, so we add it. However, we need to define a new metric class for this because our predictions have rank 3 (a vector of logits for each of the BATCH_SIZE * SEQ_LENGTH predictions), and SparseCategoricalAccuracy expects only rank-2 predictions.
Step9: Now we can compile a model, and evaluate it on our example_dataset.
Step10: Fine-tune the model with federated learning
TFF serializes all TensorFlow computations so they can potentially be run in a non-Python environment (even though at the moment, only a simulation runtime implemented in Python is available). Even though we are running in eager mode (TF 2.0), currently TFF serializes TensorFlow computations by constructing the necessary ops inside the context of a "with tf.Graph.as_default()" statement. Thus, we need to provide a function that TFF can use to introduce our model into a graph it controls. We do this as follows.
Step11: Now we are ready to construct a Federated Averaging iterative process, which we will use to improve the model (for details on the Federated Averaging algorithm, see the paper Communication-Efficient Learning of Deep Networks from Decentralized Data).
We use a compiled Keras model to perform standard (non-federated) evaluation after each round of federated training. This is useful for research purposes when doing simulated federated learning and there is a standard test dataset.
In a realistic production setting this same technique might be used to take models trained with federated learning and evaluate them on a centralized benchmark dataset for testing or quality assurance purposes.
Step12: Here is the simplest possible loop, where we run federated averaging for one round on a single client on a single batch.
Step13: Now let's write a slightly more interesting training and evaluation loop.
So that this simulation still runs relatively quickly, we train on the same three clients each round, considering only two minibatches for each.
Step14: The initial state of the model produced by fed_avg.initialize() is based on the random initializers for the Keras model, not the weights that were loaded, since clone_model() does not clone the weights. To start training from a pre-trained model, we set the model weights in the server state directly from the loaded model.
Step15: With the default changes we haven't done enough training to make a big difference, but if you train longer on more Shakespeare data, you should see a difference in the style of the text generated with the updated model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import functools
import os
import time
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
np.random.seed(0)
# Test the TFF is working:
tff.federated_computation(lambda: 'Hello, World!')()
Explanation: Federated Learning for Text Generation
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소그 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/federated/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
Note: This colab has been verified to work with the latest released version of the tensorflow_federated pip package, but the Tensorflow Federated project is still in pre-release development and may not work on main.
This tutorial builds on the concepts in the Federated Learning for Image Classification tutorial, and demonstrates several other useful approaches for federated learning.
In particular, we load a previously trained Keras model, and refine it using federated training on a (simulated) decentralized dataset. This is practically important for several reasons. The ability to use serialized models makes it easy to mix federated learning with other ML approaches. Further, it allows an increasing range of pre-trained models to be used: for example, training language models from scratch is rarely necessary, as numerous pre-trained models are now widely available (see, e.g., TF Hub). Instead, it makes more sense to start from a pre-trained model and refine it using Federated Learning, adapting to the particular characteristics of the decentralized data for a particular application.
For this tutorial, we start with an RNN that generates ASCII characters, and refine it via federated learning. We also show how the final weights can be fed back to the original Keras model, allowing easy evaluation and text generation using standard tools.
End of explanation
# A fixed vocabularly of ASCII chars that occur in the works of Shakespeare and Dickens:
vocab = list('dhlptx@DHLPTX $(,048cgkoswCGKOSW[_#\'/37;?bfjnrvzBFJNRVZ"&*.26:\naeimquyAEIMQUY]!%)-159\r')
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
Explanation: Load a pre-trained model
We load a model that was pre-trained following the TensorFlow tutorial Text generation using an RNN with eager execution. However, rather than training on the complete works of Shakespeare, we pre-trained the model on the text from Charles Dickens' A Tale of Two Cities and A Christmas Carol.
Other than expanding the vocabulary, we didn't modify the original tutorial, so this initial model isn't state-of-the-art, but it produces reasonable predictions and is sufficient for our tutorial purposes. The final model was saved with tf.keras.models.save_model(include_optimizer=False).
In this tutorial, we will use federated learning to fine-tune this model for Shakespeare, using a federated version of the data provided by TFF.
Generate the vocab lookup tables
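As a small usage illustration (assuming the char2idx and idx2char tables built above), a string can be encoded to integer ids and decoded back:
# Illustrative round trip through the lookup tables defined above.
sample = 'Hello, TFF'
encoded = [char2idx[c] for c in sample]
decoded = ''.join(idx2char[i] for i in encoded)
assert decoded == sample
print(encoded)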
End of explanation
def load_model(batch_size):
urls = {
1: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch1.kerasmodel',
8: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch8.kerasmodel'}
assert batch_size in urls, 'batch_size must be in ' + str(urls.keys())
url = urls[batch_size]
local_file = tf.keras.utils.get_file(os.path.basename(url), origin=url)
return tf.keras.models.load_model(local_file, compile=False)
def generate_text(model, start_string):
# From https://www.tensorflow.org/tutorials/sequences/text_generation
num_generate = 200
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
text_generated = []
temperature = 1.0
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
predictions = tf.squeeze(predictions, 0)
predictions = predictions / temperature
predicted_id = tf.random.categorical(
predictions, num_samples=1)[-1, 0].numpy()
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
# Text generation requires a batch_size=1 model.
keras_model_batch1 = load_model(batch_size=1)
print(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))
Explanation: Load the pre-trained model and generate some text
End of explanation
train_data, test_data = tff.simulation.datasets.shakespeare.load_data()
Explanation: Load and preprocess the federated Shakespeare data
The tff.simulation.datasets package provides a variety of datasets that are split into "clients", where each client corresponds to a dataset on a particular device that might participate in federated learning.
These datasets provide realistic non-IID data distributions that replicate in simulation the challenges of training on real decentralized data. Some of the preprocessing of this data was done using tools from the Leaf project (github).
End of explanation
# Here the play is "The Tragedy of King Lear" and the character is "King".
raw_example_dataset = train_data.create_tf_dataset_for_client(
'THE_TRAGEDY_OF_KING_LEAR_KING')
# To allow for future extensions, each entry x
# is an OrderedDict with a single key 'snippets' which contains the text.
for x in raw_example_dataset.take(2):
print(x['snippets'])
Explanation: The datasets provided by shakespeare.load_data() consist of a sequence of string Tensors, one for each line spoken by a particular character in a Shakespeare play. The client keys consist of the name of the play joined with the name of the character, so for example MUCH_ADO_ABOUT_NOTHING_OTHELLO corresponds to the lines for the character Othello in the play Much Ado About Nothing. Note that in a real federated learning scenario clients are never identified or tracked by ids, but for simulation it is useful to work with keyed datasets.
Here, for example, we can look at some data from King Lear.
End of explanation
# Input pre-processing parameters
SEQ_LENGTH = 100
BATCH_SIZE = 8
BUFFER_SIZE = 100 # For dataset shuffling
# Construct a lookup table to map string chars to indexes,
# using the vocab loaded above:
table = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(
keys=vocab, values=tf.constant(list(range(len(vocab))),
dtype=tf.int64)),
default_value=0)
def to_ids(x):
s = tf.reshape(x['snippets'], shape=[1])
chars = tf.strings.bytes_split(s).values
ids = table.lookup(chars)
return ids
def split_input_target(chunk):
input_text = tf.map_fn(lambda x: x[:-1], chunk)
target_text = tf.map_fn(lambda x: x[1:], chunk)
return (input_text, target_text)
def preprocess(dataset):
return (
# Map ASCII chars to int64 indexes using the vocab
dataset.map(to_ids)
# Split into individual chars
.unbatch()
# Form example sequences of SEQ_LENGTH +1
.batch(SEQ_LENGTH + 1, drop_remainder=True)
# Shuffle and form minibatches
.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
# And finally split into (input, target) tuples,
# each of length SEQ_LENGTH.
.map(split_input_target))
Explanation: Now we use tf.data.Dataset transformations to prepare this data for training the char RNN loaded above.
End of explanation
example_dataset = preprocess(raw_example_dataset)
print(example_dataset.element_spec)
Explanation: Note that in the formation of the original sequences and in the formation of batches above, we use drop_remainder=True for simplicity. This means that any characters (clients) that do not have at least (SEQ_LENGTH + 1) * BATCH_SIZE characters of text will have empty datasets. A typical approach to address this would be to pad the batches with a special token and then mask the loss so that the padding tokens are not taken into account.
This would complicate the example somewhat, so for this tutorial we only use full batches, as in the standard tutorial. However, in the federated setting this issue is more significant, because many users might have small datasets.
Now we can preprocess our raw_example_dataset, and check the types.
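For reference, a padded variant of the pipeline might look roughly like the sketch below. This is only an illustration and is not used in the rest of the tutorial; it assumes a dedicated padding id (len(vocab)) that never occurs in real text, and reuses to_ids, split_input_target and the constants defined above.
# Sketch only: keep short remainders by padding them, and mask the padding in the loss.
PAD_ID = len(vocab)  # assumed to be reserved for padding

def pad_to_length(x):
    # Pad a (possibly short) id sequence up to SEQ_LENGTH + 1 with PAD_ID.
    pad_amount = SEQ_LENGTH + 1 - tf.shape(x)[0]
    padding = tf.fill(tf.stack([pad_amount]), tf.constant(PAD_ID, dtype=x.dtype))
    return tf.concat([x, padding], axis=0)

def padded_preprocess(dataset):
    return (dataset.map(to_ids)
            .unbatch()
            .batch(SEQ_LENGTH + 1, drop_remainder=False)
            .map(pad_to_length)
            .shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=False)
            .map(split_input_target))

def masked_loss(y_true, y_pred):
    # Per-token cross entropy, ignoring positions that hold the padding id.
    mask = tf.cast(tf.not_equal(y_true, PAD_ID), tf.float32)
    per_token = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, y_pred, from_logits=True)
    return tf.reduce_sum(per_token * mask) / tf.maximum(tf.reduce_sum(mask), 1.0)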
End of explanation
class FlattenedCategoricalAccuracy(tf.keras.metrics.SparseCategoricalAccuracy):
def __init__(self, name='accuracy', dtype=tf.float32):
super().__init__(name, dtype=dtype)
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.reshape(y_true, [-1, 1])
y_pred = tf.reshape(y_pred, [-1, len(vocab), 1])
return super().update_state(y_true, y_pred, sample_weight)
Explanation: Compile the model and test on the preprocessed data
We loaded an uncompiled keras model, but in order to run keras_model.evaluate, we need to compile it with a loss and metrics. We will also compile in an optimizer, which will be used as the on-device optimizer in federated learning.
The original tutorial didn't have char-level accuracy (the fraction of predictions where the highest probability was put on the correct next char). This is a useful metric, so we add it. However, we need to define a new metric class for this because our predictions have rank 3 (a vector of logits for each of the BATCH_SIZE * SEQ_LENGTH predictions), and SparseCategoricalAccuracy expects only rank-2 predictions.
End of explanation
BATCH_SIZE = 8 # The training and eval batch size for the rest of this tutorial.
keras_model = load_model(batch_size=BATCH_SIZE)
keras_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[FlattenedCategoricalAccuracy()])
# Confirm that loss is much lower on Shakespeare than on random data
loss, accuracy = keras_model.evaluate(example_dataset.take(5), verbose=0)
print(
'Evaluating on an example Shakespeare character: {a:3f}'.format(a=accuracy))
# As a sanity check, we can construct some completely random data, where we expect
# the accuracy to be essentially random:
random_guessed_accuracy = 1.0 / len(vocab)
print('Expected accuracy for random guessing: {a:.3f}'.format(
a=random_guessed_accuracy))
random_indexes = np.random.randint(
low=0, high=len(vocab), size=1 * BATCH_SIZE * (SEQ_LENGTH + 1))
data = collections.OrderedDict(
snippets=tf.constant(
''.join(np.array(vocab)[random_indexes]), shape=[1, 1]))
random_dataset = preprocess(tf.data.Dataset.from_tensor_slices(data))
loss, accuracy = keras_model.evaluate(random_dataset, steps=10, verbose=0)
print('Evaluating on completely random data: {a:.3f}'.format(a=accuracy))
Explanation: Now we can compile a model, and evaluate it on our example_dataset.
End of explanation
# Clone the keras_model inside `create_tff_model()`, which TFF will
# call to produce a new copy of the model inside the graph that it will
# serialize. Note: we want to construct all the necessary objects we'll need
# _inside_ this method.
def create_tff_model():
# TFF uses an `input_spec` so it knows the types and shapes
# that your model expects.
input_spec = example_dataset.element_spec
keras_model_clone = tf.keras.models.clone_model(keras_model)
return tff.learning.from_keras_model(
keras_model_clone,
input_spec=input_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[FlattenedCategoricalAccuracy()])
Explanation: Fine-tune the model with federated learning
TFF serializes all TensorFlow computations so they can potentially be run in a non-Python environment (even though at the moment, only a simulation runtime implemented in Python is available). Even though we are running in eager mode (TF 2.0), currently TFF serializes TensorFlow computations by constructing the necessary ops inside the context of a "with tf.Graph.as_default()" statement. Thus, we need to provide a function that TFF can use to introduce our model into a graph it controls. We do this as follows.
End of explanation
# This command builds all the TensorFlow graphs and serializes them:
fed_avg = tff.learning.build_federated_averaging_process(
model_fn=create_tff_model,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(lr=0.5))
Explanation: Now we are ready to construct a Federated Averaging iterative process, which we will use to improve the model (for details on the Federated Averaging algorithm, see the paper Communication-Efficient Learning of Deep Networks from Decentralized Data).
We use a compiled Keras model to perform standard (non-federated) evaluation after each round of federated training. This is useful for research purposes when doing simulated federated learning and there is a standard test dataset.
In a realistic production setting this same technique might be used to take models trained with federated learning and evaluate them on a centralized benchmark dataset for testing or quality assurance purposes.
End of explanation
state = fed_avg.initialize()
state, metrics = fed_avg.next(state, [example_dataset.take(5)])
train_metrics = metrics['train']
print('loss={l:.3f}, accuracy={a:.3f}'.format(
l=train_metrics['loss'], a=train_metrics['accuracy']))
Explanation: Here is the simplest possible loop, where we run federated averaging for one round on a single client on a single batch.
End of explanation
def data(client, source=train_data):
return preprocess(source.create_tf_dataset_for_client(client)).take(5)
clients = [
'ALL_S_WELL_THAT_ENDS_WELL_CELIA', 'MUCH_ADO_ABOUT_NOTHING_OTHELLO',
]
train_datasets = [data(client) for client in clients]
# We concatenate the test datasets for evaluation with Keras by creating a
# Dataset of Datasets, and then identity flat mapping across all the examples.
test_dataset = tf.data.Dataset.from_tensor_slices(
[data(client, test_data) for client in clients]).flat_map(lambda x: x)
Explanation: Now let's write a slightly more interesting training and evaluation loop.
So that this simulation still runs relatively quickly, we train on the same three clients each round, considering only two minibatches for each.
End of explanation
NUM_ROUNDS = 5
# The state of the FL server, containing the model and optimization state.
state = fed_avg.initialize()
# Load our pre-trained Keras model weights into the global model state.
state = tff.learning.state_with_new_model_weights(
state,
trainable_weights=[v.numpy() for v in keras_model.trainable_weights],
non_trainable_weights=[
v.numpy() for v in keras_model.non_trainable_weights
])
def keras_evaluate(state, round_num):
# Take our global model weights and push them back into a Keras model to
# use its standard `.evaluate()` method.
keras_model = load_model(batch_size=BATCH_SIZE)
keras_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[FlattenedCategoricalAccuracy()])
state.model.assign_weights_to(keras_model)
loss, accuracy = keras_model.evaluate(example_dataset, steps=2, verbose=0)
print('\tEval: loss={l:.3f}, accuracy={a:.3f}'.format(l=loss, a=accuracy))
for round_num in range(NUM_ROUNDS):
print('Round {r}'.format(r=round_num))
keras_evaluate(state, round_num)
state, metrics = fed_avg.next(state, train_datasets)
train_metrics = metrics['train']
print('\tTrain: loss={l:.3f}, accuracy={a:.3f}'.format(
l=train_metrics['loss'], a=train_metrics['accuracy']))
print('Final evaluation')
keras_evaluate(state, NUM_ROUNDS + 1)
Explanation: The initial state of the model produced by fed_avg.initialize() is based on the random initializers for the Keras model, not the weights that were loaded, since clone_model() does not clone the weights. To start training from a pre-trained model, we set the model weights in the server state directly from the loaded model.
End of explanation
# Set our newly trained weights back in the originally created model.
keras_model_batch1.set_weights([v.numpy() for v in keras_model.weights])
# Text generation requires batch_size=1
print(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))
Explanation: With the default changes we haven't done enough training to make a big difference, but if you train longer on more Shakespeare data, you should see a difference in the style of the text generated with the updated model.
End of explanation |
3,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We have some data providing the results of 30 coin tosses. We would like to estimate how fair the coin is, i.e. what is the probability of getting heads (1).
Step1: We build a probabilistic model of coin tossing.
All coin tosses are supposed to be independent tosses of the same coin, which always have the same probability of returning a head.
We want to perform Bayesian inference, therefore we need a prior.
For inference, we will be using Metropolis MCMC.
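To make the model concrete: if $h$ of the $n$ tosses come up heads and the probability of heads is $\theta$, the likelihood is $P(D \mid \theta) = \theta^h (1-\theta)^{n-h}$, and Bayes' rule gives the unnormalised posterior $P(\theta \mid D) \propto \theta^h (1-\theta)^{n-h} P(\theta)$, which is the quantity the Metropolis sampler explores below.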
Defining a prior
We need to put some prior probability on the fairness of the coin. For this, a beta distribution seems appropriate, as it is a continuous distribution between 0 and 1.
Let's display a beta distribution with various parameter values.
Step2: We choose to use a=2, b=2. This is a weakly informative prior that the coin should be fair, given our past experience with coins.
Inference of the fairness of the coin using MCMC
We build a MCMC chain to estimate the probability of heads for this coin.
First we define the model, with the prior, the likelihood and the posterior probability, then we implement a Metropolis MCMC inference mechanism.
Building of the model
Step3: Implementing the MCMC algorithm
Step4: Let's compare the posterior inference to the prior
Step5: What is the probability that the coin favours heads over tails?
Let's compute P(parameter > 0.5).
Step6: Our median estimate for the parameter is
Step7: Compared to the Maximum Likelihood estimate, the frequency of heads | Python Code:
data = [1,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,0,0,1,1,0,1,1,0,1,1,0,1,1]
print(len(data))
Explanation: We have some data providing the results of 30 coin tosses. We would like to estimate how fair the coin is, i.e. what is the probability of getting heads (1).
End of explanation
# Imports needed by the cells below (plotting, the beta prior and the sampler).
import random
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import beta

fig_size=[]
fig_size.append(15)
fig_size.append(9)
plt.rcParams["figure.figsize"] = fig_size
beginning = 0.0001
end = 0.9999
a = 1
b = 1
x = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)
plt.plot(x, beta.pdf(x, a, b),'r-', lw=5, alpha=0.6, label='Beta pdf, a=1, b=1')
a = 2
b = 2
x2 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)
plt.plot(x2, beta.pdf(x2, a, b),'b-', lw=5, alpha=0.6, label='Beta pdf, a=2, b=2')
a = 0.8
b = 0.8
x3 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)
plt.plot(x3, beta.pdf(x3, a, b),'g-', lw=5, alpha=0.6, label='Beta pdf, a=0.8, b=0.8')
a = 2
b = 0.8
x4 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)
plt.plot(x4, beta.pdf(x4, a, b),'p-', lw=5, alpha=0.6, label='Beta pdf, a=2, b=0.8')
plt.legend(loc='best', frameon=False)
plt.xlabel("Parameter value")
plt.ylabel("Density/Frequency")
Explanation: We build a probabilistic model of coin tossing.
All coin tosses are supposed to be independent tosses of the same coin, which always have the same probability of returning a head.
We want to perform Bayesian inference, therefore we need a prior.
For inference, we will be using Metropolis MCMC.
Defining a prior
We need to put some prior probability on the fairness of the coin. For this, a beta distribution seems appropriate, as it is a continuous distribution between 0 and 1.
Let's display a beta distribution with various parameter values.
End of explanation
# Function to compute the likelihood P(D|M)
def likelihood (data, parameter):
p = 1.0
for d in data:
if d == 0:
p *= 1-parameter
else:
p *= parameter
return p
# Function to compute the prior P(M)
def prior (parameter):
return beta.pdf(parameter, a=2, b=2)
# Function to compute the un-normalized posterior P(D|M) * P(M)
def unnormalized_posterior (data, parameter):
return likelihood(data, parameter) * prior(parameter)
Explanation: We choose to use a=2, b=2. This is a weakly informative prior that the coin should be fair, given our past experience with coins.
Inference of the fairness of the coin using MCMC
We build a MCMC chain to estimate the probability of heads for this coin.
First we define the model, with the prior, the likelihood and the posterior probability, then we implement a Metropolis MCMC inference mechanism.
Building of the model
End of explanation
# Function to propose a new parameter value, randomly drawn between 0 and 1
def propose_new_parameter_value():
return random.random()
# Function to run Metropolis MCMC inference
def MetropolisMCMC(data, number_iterations):
current_parameter_value = propose_new_parameter_value()
record_parameter = []
record_parameter.append(current_parameter_value)
print("Initial parameter value for the MCMC: "+str(current_parameter_value))
current_posterior = unnormalized_posterior(data, current_parameter_value)
print("Initial probability of the model: " + str(current_posterior))
record_posterior = []
record_posterior.append(current_posterior)
for i in range (number_iterations):
acceptance_threshold = random.random()
proposed_parameter_value = random.random()
proposed_posterior = unnormalized_posterior(data, proposed_parameter_value)
if (proposed_posterior / current_posterior > acceptance_threshold):
current_parameter_value = proposed_parameter_value
current_posterior = proposed_posterior
record_parameter.append(current_parameter_value)
record_posterior.append(current_posterior)
return record_parameter, record_posterior
params, posteriors = MetropolisMCMC(data, 10000)
plt.plot(posteriors)
plt.xlabel("Iteration")
plt.ylabel("Posterior probability")
plt.plot(params)
plt.xlabel("Iteration")
plt.ylabel("Parameter value")
Explanation: Implementing the MCMC algorithm
End of explanation
plt.rcParams["figure.figsize"] = fig_size
a = 2
b = 2
x2 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)
plt.plot(x2, beta.pdf(x2, a, b),'b-', lw=5, alpha=0.6, label='PRIOR: Beta pdf, a=2, b=2')
plt.hist(params, label='POSTERIOR', density=True, bins=200, color="lightgreen")
plt.legend(loc='best', frameon=False)
plt.xlabel("Parameter value")
plt.ylabel("Density/Frequency")
sns.kdeplot(np.array(params), bw=0.03, lw=5, color="green", shade=True)
Explanation: Let's compare the posterior inference to the prior: have we learned anything about our coin?
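Since the beta prior is conjugate to the Bernoulli likelihood, the exact posterior is Beta(2 + h, 2 + n - h) for h heads in n tosses, which gives an independent check on the sampler (an illustrative comparison, reusing data and params from above):
# Overlay the exact conjugate posterior on the MCMC samples (illustrative check).
h = sum(data)
n = len(data)
x_post = np.linspace(0.001, 0.999, 200)
plt.hist(params, bins=50, density=True, alpha=0.5, color="lightgreen", label='MCMC samples')
plt.plot(x_post, beta.pdf(x_post, 2 + h, 2 + n - h), 'k-', lw=3, label='Exact Beta posterior')
plt.legend(loc='best', frameon=False)
plt.xlabel("Parameter value")
plt.ylabel("Density")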
End of explanation
num = 0
for i in range(len(params)):
if params[i]>0.5:
num += 1
print("The probability that the coin favours heads is "+str(num / len(params)) + " vs "+str(1-num / len(params)) + " that it favours tails.")
Explanation: What is the probability that the coin favours heads over tails?
Let's compute P(parameter > 0.5).
End of explanation
median_param=np.median(params)
print(str(median_param))
Explanation: Our median estimate for the parameter is:
End of explanation
ratio_1=sum(data)/len(data)
print(str(ratio_1))
Explanation: Compared to the Maximum Likelihood estimate, the frequency of heads:
End of explanation |
3,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Catchy feature extraction
Outline
This notebook shows how to compute features for a set of presegmented audiofiles.
Extracting catchy features from a folder of such files involves three steps
Step1: Base features
Basic feature time series can be extracted using the base_features module.
The function compute_and_write() provides a convenient wrapper around most of the functionality in this module, reading audio and computing a set of basic, useful features.
The results will be written to a set of csv files in data_dir.
Currently this requires a directory to be made for each of the features.
Step2: Pitch Features
The pitch_features module provides code to compute, from the variable-length base features computed above, fixed-sized melody and harmony descriptors for each of the song sections.
pitch_features.compute_and_write() again provides a high-level wrapper function.
The features that it should compute must be provided in a dictionary of (feature_function, parameters) tuples, with some feature name of your choice for each as keys.
The result is again stored in a set of csv files. Directories are the feature names provided.
Step3: Feature Transforms
The feature_transforms module allows you to compute first- and second-order features based on any of the features above. The transforms to be applied must be passed to the compute() function using a special syntax. The syntax states a feature, a reference corpus, and an aggregation function.
From the doc string
Step4: The above tells the module where to look for base features.
Below, a set of tested first and second-order features is computed for the full dataset.
Step5: Output
Finally, output data to a single CSV file for use in another notebook or R. | Python Code:
audio_dir = '../Cogitch/Audio/Eurovision/'
euro_dict = utils.dataset_from_dir(audio_dir)
Explanation: Catchy feature extraction
Outline
This notebook shows how to compute features for a set of presegmented audiofiles.
Extracting catchy features from a folder of such files involves three steps:
1. Base feature extraction
Here, basic, familiar feature time series are extracted. The toolbox currently implements (wrappers for) MFCC, chroma, melody and perceptual feature extraction.
This part of the toolbox relies on a lot of external code, but it's also easy to work around: if you want to use other features, just save them to a set of csv files (1 per song section--see below) in some folder (1 per feature).
2. Pitch descriptor extraction
This part computes mid-level pitch descriptors from chroma and/or melody information computed in step one.
Essentially an implementation of several kinds of audio bigram descriptors.
See also [1] and [2].
3. Feature transforms
Compute 'first' and 'second order' aggregates of any of the features computed in step 1 and step 2.
See [2].
[1] Van Balen, J., Wiering, F., & Veltkamp, R. (2015). Audio Bigrams as a Unifying Model of Pitch-based Song Description. In Proc. 11th International Symposium on Computer Music Multidisciplinary Research (CMMR). Plymouth, United Kingdom.
[2] Van Balen, J., Burgoyne, J. A., Bountouridis, D., Müllensiefen, D., & Veltkamp, R. (2015). Corpus Analysis Tools for Computational Hook Discovery. In Proc. 16th International Society for Music Information Retrieval Conference (pp. 227–233). Malaga, Spain.
Dataset
Let's import some audio data and see how all of this works.
The CATCHY toolbox was designed for the analysis of a corpus of song sections.
CATCHY therefore requires data to be represented as a python dictionary of song section paths, grouped by song id.
utils.dataset_from_dir() makes such a dictionary given a folder of audio files, labeled songid-sectionid.ext where ext can be wav or mp3
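For illustration, the dictionary it builds is roughly of the following shape; this sketch only shows the expected structure (file names assumed to look like songid-sectionid.wav or .mp3) and is not a replacement for utils.dataset_from_dir().
# Rough sketch of the expected {song_id: [section paths]} structure (illustrative only).
import os
from collections import defaultdict

def sections_by_song(audio_dir):
    sections = defaultdict(list)
    for fname in sorted(os.listdir(audio_dir)):
        base, ext = os.path.splitext(fname)
        if ext.lower() not in ('.wav', '.mp3'):
            continue
        song_id = base.split('-')[0]  # 'songid-sectionid.ext' -> 'songid'
        sections[song_id].append(os.path.join(audio_dir, fname))
    return dict(sections)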
End of explanation
data_dir = '../Cogitch/Data/Eurovision/'
# base_features.compute_and_write(audio_dir, data_dir)
Explanation: Base features
Basic feature time series can be extracted using the base_features module.
The function compute_and_write() provides a convenient wrapper around most of the functionality in this module, reading audio and computing a set of basic, useful features.
The results will be written to a set of csv files in data_dir.
Currently this requires a directory to be made for each of the features.
End of explanation
pitch_features.melody_dir = data_dir + 'melody/'
pitch_features.chroma_dir = data_dir + 'hpcp/'
features = {'pitchhist3': (pitch_features.get_pitchhist3, {}),
'pitchhist3_int': (pitch_features.get_pitchhist3, {'intervals': True}),
'chromahist3': (pitch_features.get_chromahist3, {}),
'chromahist3_int': (pitch_features.get_chromahist3, {'intervals': True}),
'harmonisation': (pitch_features.get_harmonisation, {}),
'harmonisation_int': (pitch_features.get_harmonisation, {'intervals': True}) }
# pitch_features.compute_and_write(data_dir, features=features)
Explanation: Pitch Features
The pitch_features module provides code to compute, from the variable-length base features computed above, fixed-sized melody and harmony descriptors for each of the song sections.
pitch_features.compute_and_write() again provides a high-level wrapper function.
The features that it should compute must be provided in a dictionary of (feature_function, parameters) tuples, with some feature name of your choice for each as keys.
The result is again stored in a set of csv files. Directories are the feature names provided.
End of explanation
feature_transforms.data_dir = data_dir
Explanation: Feature Transforms
The feature_transforms module allows you to compute first- and second-order features based on any of the features above. The transforms to be applied must be passed to the compute() function using a special syntax. The syntax states a feature, a reference corpus, and an aggregation function.
From the doc string:
- feature name and aggregates are separated by dots, e.g. 'mfcc.entropy'
- feature name is first and contains no dots
- first order and second order aggregates are separated by one of 2 keywords:
'corpus' or 'song'
Ex.:
>>> parse_features('loudness.mean.song.pdf.log')
('loudness', ['mean'], ['song', 'pdf', 'log'])
The above shows how the transform names are read. In the example:
`loudness.mean.song.pdf.log`
computes the log of the probability density function of the distribution of the loudness features' mean within the song (i.e., across the sections of the song).
The result is returned in a Pandas dataframe.
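A minimal illustration of how such a name splits into (feature, first-order aggregates, second-order aggregates); the toolbox's own parse_features() is the authoritative implementation, so this is just a sketch of the convention.
# Split a transform name on the first 'song'/'corpus' keyword (illustrative only).
def split_feature_name(name):
    base, *rest = name.split('.')
    for i, token in enumerate(rest):
        if token in ('song', 'corpus'):
            return base, rest[:i], rest[i:]
    return base, rest, []

print(split_feature_name('loudness.mean.song.pdf.log'))
# ('loudness', ['mean'], ['song', 'pdf', 'log'])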
End of explanation
features = [
'harmonisation_int.corpus.information',
'harmonisation_int.corpus.tau',
'harmonisation_int.song.information',
'harmonisation_int.song.tau',
'harmonisation.normentropy.minlog',
'harmonisation.normentropy.minlog.corpus.pdf.rank.logit',
'harmonisation.normentropy.minlog.song.pdf.rank.logit',
'chromahist3_int.corpus.information',
'chromahist3_int.corpus.tau',
'chromahist3_int.song.information',
'chromahist3_int.song.tau',
'chromahist3.normentropy.minlog',
'chromahist3.normentropy.minlog.corpus.pdf.rank.logit',
'chromahist3.normentropy.minlog.song.pdf.rank.logit',
'loudness.mean',
'loudness.mean.corpus.pdf.rank.logit',
'loudness.mean.song.pdf.rank.logit',
'loudness.std',
'loudness.std.corpus.pdf.rank.logit',
'loudness.std.song.pdf.rank.logit',
'pitchhist3_int.corpus.information',
'pitchhist3_int.corpus.tau',
'pitchhist3_int.song.information',
'pitchhist3_int.song.tau',
'pitchhist3.normentropy.minlog',
'pitchhist3.normentropy.minlog.corpus.pdf.rank.logit',
'pitchhist3.normentropy.minlog.song.pdf.rank.logit',
'mfcc.mean.corpus.indeppdf.rank.logit',
'mfcc.mean.song.indeppdf.rank.logit',
'mfcc.totvar.log',
'mfcc.totvar.log.corpus.pdf.rank.logit',
'mfcc.totvar.log.song.pdf.rank.logit',
'melody.mean',
'melody.mean.corpus.pdf.rank.logit',
'melody.mean.song.pdf.rank.logit',
'melody.std.log',
'melody.std.log.corpus.pdf.rank.logit',
'melody.std.log.song.pdf.rank.logit',
'roughness.mean.log',
'roughness.mean.log.corpus.pdf.rank.logit',
'roughness.mean.log.song.pdf.rank.logit',
'sharpness.mean',
'sharpness.mean.corpus.pdf.rank.logit',
'sharpness.mean.song.pdf.rank.logit']
data = feature_transforms.compute(euro_dict, features)
Explanation: The above tells the module where to look for base features.
Below, a set of tested first and second-order features is computed for the full dataset.
End of explanation
# data.hist(figsize=(28,21));
data.to_csv('euro_features.csv', index=None)
Explanation: Output
Finally, output data to a single CSV file for use in another notebook or R.
End of explanation |
3,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
from collections import Counter
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
word_ctr = Counter(text)
sorted_vocab = sorted(word_ctr, key=word_ctr.get, reverse=True)
int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
vocab_to_int = {word: i for i, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
dict_punctuation_token = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'--' : '||Dash||',
'\n' : '||Return||'
}
return dict_punctuation_token
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
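As a small illustration of how a dictionary of this shape gets applied before splitting on whitespace (the project's helper code performs this step for you):
# Replace each symbol with ' token ' so punctuation becomes its own "word".
sample = 'Moe_Szyslak: Hey, bye!'
tokens = {',': '||Comma||', '!': '||Exclamation_Mark||'}
for symbol, token in tokens.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())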
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(tf.int32, shape=[None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm_layers = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform(shape=[vocab_size, embed_dim], minval=-1,maxval=1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, FinalState = build_rnn(cell, embed)
Logits = tf.contrib.layers.fully_connected(outputs,
vocab_size,
weights_initializer=tf.random_uniform_initializer(-1, 1),
biases_initializer=tf.zeros_initializer(),
activation_fn=None
)
return Logits, FinalState
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
x_data = np.array(int_text[:n_batches * batch_size * seq_length])
y_data = np.roll(x_data, -1)
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
batches = np.array(list(zip(x_batches, y_batches)))
return batches
# print('## I am using the example on this question:')
# print('get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2):')
# print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2))
# print('##')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
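Once get_batches is implemented, a quick shape check on the example above can catch most indexing mistakes (illustrative only):
# Expect (number of batches, 2, batch size, sequence length) == (3, 2, 3, 2).
example_batches = get_batches(list(range(1, 21)), 3, 2)
print(example_batches.shape)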
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
predicted_word = int_to_vocab[int(np.searchsorted(np.cumsum(probabilities),
np.sum(probabilities) * np.random.rand(1)))]
return predicted_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
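One straightforward way to do this (a sketch, not the only option) is to sample an index with np.random.choice in proportion to the normalised probabilities and look the word up:
# Illustrative sampler: draw an index according to the probability vector.
def sample_word(probabilities, int_to_vocab):
    probabilities = probabilities / np.sum(probabilities)
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[int(idx)]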
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
3,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialization
Step1: Algorithm
Step2: Save coefs (X) and Y, along with tests (to know what each row refers to) to work with it later
Step3: What without NEs
Step4: SGD
Step5: Simple trial
Step6: Run cross validation for each model
Step7: Scores of each model
Step8: Final results
Step9: The non-normalised model gets a lower error because larger absolute values push the sigmoid closer to 1 or 0, which reduces the squared error. In any case, the desired result is the normalised one (the values must sum to 1)
Step10: Notice how with not-ne-what, What becomes slightly more important (perhaps because it no longer loses accuracy to mishandled named entities). Even though the resulting accuracy is the same, just more stable, the original system makes What cheaper to compute, so it would still be the selected approach, because the algorithm is slow enough already. In later steps, once the algorithm is optimised, switching to not-ne-what would be a good idea.
Finally, by checking what would be the values with just Who and Where | Python Code:
folder = os.path.join('..', 'data')
newsbreaker.init(os.path.join(folder, 'topic_model'), 'topic_model.pkl', 'vocab.txt')
entries = load_entries(folder)
entries_dict = defaultdict(list)
for entry in entries:
entries_dict[entry.feed].append(entry)
client = MongoClient()
db = client.newstagger
Explanation: Initialization
End of explanation
def get_entry(s):
feedname, index = s.split('|')
try:
index = int(index)
except ValueError:
raise KeyError('Malformed entry %s' % s)
for feed, l in entries_dict.items():
if feed.name == feedname:
for entry in l:
if entry.index == index:
return entry
else:
break
raise KeyError('Entry %s not found' % s)
coefs = []
Y = []
tests = list(db.pairs.find())
for test in tests:
base = get_entry(test['base'])
e1 = get_entry(test['e1'])
e2 = get_entry(test['e2'])
coefs.append(
[
[
base.what_distance(e1),
base.who_distance(e1),
base.where_distance(e1)
],
[
base.what_distance(e2),
base.who_distance(e2),
base.where_distance(e2)
]
]
)
Y.append(float(test['res']))
Explanation: Algorithm
End of explanation
with open('X.txt', 'w') as f:
f.write('\n'.join(str(x) for x in coefs))
with open('Y.txt', 'w') as f:
f.write('\n'.join(str(x) for x in Y))
import json
with open('tests.json', 'w') as f:
f.write(
json.dumps(
[
{ k: v for k, v in d.items() if k != '_id' }
for d in tests
],
indent=2
)
)
Explanation: Save coefs (X) and Y, along with tests (to know what each row refers to) to work with it later
End of explanation
from collections import Counter
def what_without_ne(entry):
entry.doc(tag=True, parse=False, entity=True)
avoid_ent_cats = set(entry.who_ne_cats)
avoid_ent_cats.update(entry.where_ne_cats)
avoid_ents = [
(ent.start, ent.end)
for ent in entry.doc.ents
if ent.label_ in avoid_ent_cats
]
words = []
doc_words = list(entry.doc)
while doc_words and avoid_ents:
i = doc_words[0].i
low, high = avoid_ents[0]
if i < low:
words.append(doc_words.pop(0))
elif low <= i and i < high:
doc_words.pop(0) # but don't save it, since is part of NE
else: # low < high <= i
avoid_ents.pop(0) # delete ent, since we overpassed it
words += doc_words # no more ents to filter with
counter = Counter(
word.lower_
for word in words
)
entry._what = entry.topic_model.model.transform(
np.array([ counter[word] for word in entry.topic_model.vocab ])
)
not_ne_what_coefs = []
for test in tests:
base = get_entry(test['base'])
what_without_ne(base)
e1 = get_entry(test['e1'])
what_without_ne(e1)
e2 = get_entry(test['e2'])
what_without_ne(e2)
not_ne_what_coefs.append(
[
base.what_distance(e1),
base.what_distance(e2)
]
)
Explanation: What without NEs
End of explanation
with open('X.txt') as f:
coefs = [eval(x) for x in f.read().split('\n')]
with open('Y.txt') as f:
Y = [float(x) for x in f.read().split('\n')]
with open('tests.json') as f:
tests = json.loads(f.read())
X_copy = list(coefs); Y_copy = list(Y)
X = np.array(
[
[
v1[i] - v2[i]
for i in range(3)
]
for v1, v2 in coefs
]
)
Y = np.array(Y)
X_not_ne_what = np.array(
[
[
not_ne_what_coefs[n][0] - not_ne_what_coefs[n][1],
row[1], row[2]
]
for n, row in enumerate(X)
]
)
def sigmoid(x, gamma=1.):
return 1.0 / (1.0 + np.exp(-gamma * x))
def cost(theta, X=None, Y=None): # theta is np.array
return np.sum(
(sigmoid(np.dot(X, np.abs(theta))) - Y) ** 2
) / len(X)
grad_cost = grad(cost)
class SGD:
def __init__(self, learning=0.5, max_iters=10**5, prec=10**-3):
self.learning = learning
self.max_iters = max_iters
self.prec = prec
self.theta = None
self._iters = None
self._costs = None
def get_params(self, deep=True):
return {
'learning': self.learning,
'max_iters': self.max_iters,
'prec': self.prec
}
@property
def iters(self):
if self._iters is None:
raise Exception('SGD must be fitted to access iters')
return self._iters
@iters.setter
def iters(self, value): self._iters = value
@property
def costs(self):
if self._costs is None:
raise Exception('SGD must be fitted to access costs')
return self._costs
@costs.setter
def costs(self, value): self._costs = value
def fit(self, X, Y):
self.iters = 0
self.costs = []
theta = np.random.random(3)
while self.iters < self.max_iters:
self.iters += 1
self.costs.append(cost(theta, X=X, Y=Y))
prev_theta = theta.copy()
theta -= self.learning * grad_cost(theta, X=X, Y=Y)
if np.linalg.norm(theta - prev_theta) < self.prec:
break
self.costs.append(cost(theta, X=X, Y=Y))
self.theta = theta
return self
def score(self, X, Y):
return sum(
(not ((pred > 0.) ^ (cls > 0.))) if pred != 0. else 0.
for pred, cls in zip(np.dot(X, self.theta), Y)
) / len(Y)
class WhatSGD(SGD):
def fit(self, X, Y):
self.theta = np.array([1., 0., 0.])
return self
Explanation: SGD
End of explanation
threshold = int(len(X) * 0.9)
X_train, X_test = X[:threshold], X[threshold:]
Y_train, Y_test = Y[:threshold], Y[threshold:]
trained_sgd = SGD()
trained_sgd.fit(X_train, Y_train)
pd.Series(trained_sgd.costs).plot() # error on each iteration
Explanation: Simple trial
End of explanation
X_not_what = X.copy()
for row in X_not_what:
row[0] = 0
sgd = SGD()
what_sgd = WhatSGD()
sgd_not_what = SGD()
rows = []
for i in range(2, 20 + 1):
rows.append(
[
cross_validation.cross_val_score(
sgd, X, Y, cv=i
),
cross_validation.cross_val_score(
sgd, X_not_ne_what, Y, cv=i
),
cross_validation.cross_val_score(
what_sgd, X, Y, cv=i
),
cross_validation.cross_val_score(
what_sgd, X_not_ne_what, Y, cv=i
)
]
)
for n, i in enumerate(range(2, 20 + 1)):
rows[n].append(
cross_validation.cross_val_score(
sgd_not_what, X_not_what, Y, cv=i
)
)
df = pd.DataFrame(
[[s.mean() for s in row] for row in rows],
columns=['sgd, what with NE', 'sgd, what without NE', 'what with NE', 'what without NE', 'sgd without what'],
index=[100 - 100 // i for i in range(2, 20 + 1)]
)
df.plot(ylim=(0, 1))
df.plot()
df.mean()
df[df.index > 75].mean()
df[df.index > 90].mean()
Explanation: Run cross validation for each model
End of explanation
scores = cross_validation.cross_val_score(
sgd, X, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
sgd, X_not_ne_what, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
what_sgd, X, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
what_sgd, X_not_ne_what, Y, cv=10
)
scores, scores.mean()
sgd_not_what = SGD()
scores = cross_validation.cross_val_score(
sgd_not_what, X_not_what, Y, cv=10
)
scores, scores.mean()
Explanation: Scores of each model
End of explanation
sgd = SGD()
sgd.fit(X, Y)
cost(sgd.theta, X, Y), cost(sgd.theta / sgd.theta.sum(), X, Y)
Explanation: Final results
End of explanation
sgd.theta, sgd.theta / sgd.theta.sum()
sgd = SGD()
sgd.fit(X_not_ne_what, Y)
sgd.theta, sgd.theta / sgd.theta.sum()
Explanation: The non-normalised weights get a lower error because larger absolute values push the sigmoid closer to 0 or 1, which reduces the squared error. Still, the value we want is the normalised one (the weights must sum to 1)
End of explanation
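A two-line numeric illustration of the scaling effect described above (the inputs and targets here are arbitrary, chosen only to show that multiplying the weights shrinks the squared error without moving the decision boundary):
z = np.array([0.5, -0.5])
targets = np.array([1.0, 0.0])
for scale in (1.0, 5.0):
    print(scale, np.sum((sigmoid(scale * z) - targets) ** 2))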
sgd_not_what = SGD()
sgd_not_what.fit(X_not_what, Y)
(sgd_not_what.theta - np.array([sgd_not_what.theta[0], 0., 0.])) / sgd_not_what.theta[1:].sum()
Explanation: Notice how with not-ne-what the What component becomes slightly more important (maybe because it no longer loses accuracy to the mistreatment of named entities). Even though the resulting accuracy is the same, only more stable, the other system makes What cheaper to compute, so it would still be the selected approach, because the algorithm is already slow enough. In later steps, once the algorithm is optimised, working with not-ne-what would be a good idea.
Finally, we check what the values would be with just Who and Where:
End of explanation |
3,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Unsupervised dimensionality reduction using a 1 Hidden-layer perceptron where label == ground truth
For NLP, we can somewhat say that word2vec and autoencoders are similar.
Dimensionality reduction works only if the inputs are correlated (like images from the same domain). It fails if we pass in completely random inputs each time we train an autoencoder. So in the end, an autoencoder can produce lower dimensional output (at the encoder) given an input much like Principal Component Analysis (PCA). And since we don’t have to use any labels during training, it’s an unsupervised model as well.
Step2: Since our compressed data is in probabilities, we'll convert to whole nums to look up words
Step3: Tadaa!!! And here's our prediction
This shows how well our compression is able to recover the data
Remember that autoencoders are a lossy compression, which means you will never be able to fully reconstruct the data | Python Code:
import os
from random import randint
from collections import Counter
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import numpy as np
import tensorflow as tf
corpus = "the quick brown fox jumped over the lazy dog from the quick tall fox".split()
test_corpus = "the quick brown fox jumped over the lazy dog from the quick tall fox".split()
corpus[:10]
def build_vocab(words, vocab_size):
    """Build vocabulary of VOCAB_SIZE most frequent words."""
dictionary = dict()
count = [('UNK', -1)]
count.extend(Counter(words).most_common(vocab_size - 1))
index = 0
for word, _ in count:
dictionary[word] = index
index += 1
index_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return dictionary, index_dictionary
vocabulary, reverse_vocabulary = build_vocab(corpus, 100)
vocabulary
def index_words_in_corpus(corpus):
return [vocabulary[token] if token in vocabulary else 0 for token in corpus]
corpus = index_words_in_corpus(corpus)
test_corpus = index_words_in_corpus(test_corpus)
test_corpus
vocabulary_size = len(vocabulary)
vocabulary_size
def one_hot_encode(index):
row = np.zeros(vocabulary_size, dtype=np.int32)
row[index] = 1
return row
data = np.array([one_hot_encode(i) for i in corpus])
test_data = np.array([one_hot_encode(i) for i in test_corpus])
print("(TRAIN: Total number of words, Vocabulary size):", data.shape)
print("(TEST: Total number of words, Vocabulary size):", test_data.shape)
data[randint(1, data.shape[0])]
X = tf.placeholder(tf.float32, shape=(None, vocabulary_size))
Y = tf.placeholder(tf.float32, shape=(None, vocabulary_size))
w1 = tf.Variable(tf.random_normal(shape=(vocabulary_size, 1000), stddev=0.01), name='weights1')
b1 = tf.Variable(tf.zeros([1, 1000]), name="bias1")
layer1 = tf.nn.relu(tf.add(tf.matmul(X, w1), b1))
w2 = tf.Variable(tf.random_normal(shape=(1000, 250), stddev=0.01), name='weights2')
b2 = tf.Variable(tf.zeros([1, 250]), name="bias2")
layer2 = tf.nn.relu(tf.add(tf.matmul(layer1, w2), b2))
w = tf.Variable(tf.random_normal(shape=(250, 50), stddev=0.01), name='weights')
b = tf.Variable(tf.zeros([1, 50]), name="bias")
code = tf.nn.relu(tf.add(tf.matmul(layer2, w), b))
w3 = tf.Variable(tf.random_normal(shape=(50, 250), stddev=0.01), name='weights3')
b3 = tf.Variable(tf.zeros([1, 250]), name="bias3")
layer3 = tf.nn.relu(tf.add(tf.matmul(code, w3), b3))
w4 = tf.Variable(tf.random_normal(shape=(250, 1000), stddev=0.01), name='weights4')
b4 = tf.Variable(tf.zeros([1, 1000]), name="bias4")
layer4 = tf.nn.relu(tf.add(tf.matmul(layer3, w4), b4))
w5 = tf.Variable(tf.random_normal(shape=(1000, vocabulary_size), stddev=0.01), name='weights5')
b5 = tf.Variable(tf.zeros([1, vocabulary_size]), name="bias5")
decoder = tf.nn.sigmoid(tf.add(tf.matmul(layer4, w5), b5))
# entropy = tf.nn.softmax_cross_entropy_with_logits(logits=decoder, labels=Y)
# hyperparameters, defined before the optimizer that uses LEARNING_RATE
LEARNING_RATE = 0.01
NUM_TRAIN_STEPS = 1000
SKIP_STEP = 10 # how many steps to skip before reporting the loss
loss = tf.reduce_mean(tf.pow(X - decoder, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE).minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(NUM_TRAIN_STEPS):
_, loss_val = sess.run([optimizer, loss], feed_dict={X: data})
if i % SKIP_STEP == 0:
print("EPOCH {}/{}, LOSS {}".format(i , NUM_TRAIN_STEPS, loss_val))
test_data_compressed = sess.run(decoder, feed_dict={X: test_data})
# np.save(outfile, test_data_compressed)
test_data_compressed.shape
test_data_compressed
Explanation: Unsupervised dimensionality reduction using a 1 Hidden-layer perceptron where label == ground truth
For NLP, we can somewhat say that word2vec and autoencoders are similar.
Dimensionality reduction works only if the inputs are correlated (like images from the same domain). It fails if we pass in completely random inputs each time we train an autoencoder. So in the end, an autoencoder can produce lower dimensional output (at the encoder) given an input much like Principal Component Analysis (PCA). And since we don’t have to use any labels during training, it’s an unsupervised model as well.
End of explanation
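As a rough illustration of the PCA analogy above, a truncated SVD of the centered one-hot matrix gives a linear encoder/decoder pair; this is only a sketch for intuition (the component count is arbitrary), not part of the pipeline itself:
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k_components = 5                                              # size of the linear "code"
code_lin = centered.dot(Vt[:k_components].T)                  # encode
recon = code_lin.dot(Vt[:k_components]) + data.mean(axis=0)   # decode
print(np.mean((recon - data) ** 2))                           # reconstruction error of the linear model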
test_data_compressed[test_data_compressed>0] = 1
test_data_compressed
test_data
Explanation: Since our compressed data is in probabilities, we'll convert to whole nums to look up words
End of explanation
sent = np.ndarray.tolist(test_data_compressed)[0]
print(' '.join([reverse_vocabulary[i] if sent[i] == 1. else "" for i in range(len(sent))]))
Explanation: Tadaa!!! And here's our prediction
This shows how well our compression is able to recover the data
Remember that autoencoders are a lossy compression, which means you will never be able to fully reconstruct the data
End of explanation |
3,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This IPython notebook illustrates how to perform matching with an ML matcher. In particular we show examples with a decision tree matcher, but the same principles apply to all of the other ML matchers.
Step1: Read in the original tables and a set of labeled data into py_entitymatching.
Step2: Training the ML Matcher
Now, we can train our ML matcher. In this notebook we will demonstrate this process with a decision tree matcher. First, we need to split our labeled data into a training set and a test set. Then we will extract feature vectors from the training set and train our decision tree with the fit command.
Step3: Getting Predictions with the ML Matcher
Since we now have a trained decision tree, we can use our matcher to get predictions on the test set. Below, we will show four different ways to get the predictions with the predict command that will be useful in various contexts.
Getting a List of Predictions
First up, we will demonstrate how to get just a list of predictions using the predict command. This is the default method of getting predictions. As shown below, the resulting variable, predictions, is just an array containing the predictions for each of the feature vectors in the test set.
Step4: Getting a List of Predictions and a List of Probabilities
Next we will demonstrate how to get both a list of predictions for the test set, as well as a list of the associated probabilities for the predictions. This is done by setting the 'return_probs' argument to true. Note that the probabilities shown are the probability for a match.
Step5: Appending the Predictions to the Feature Vectors Table
Often, we want to include the predictions with the feature vector table. We can return predictions appended to a copy of the feature vector table if we set the 'append' argument to true. We can choose the name of the new predictions column using the 'target_attr' argument. We can also append the probabilities by setting 'return_probs' to true and setting the new probabilities column name with the 'probs_attr'.
Step6: Appending the Prediction to the Original Feature Vectors Table In-place
Lastly, we will show how to append the predictions to the original feature vector dataframe. We can accomplish this by setting the 'append' argument to true, setting the name of the new column with the 'target_attr' argument and then setting the 'inplace' argument to true. Again, we can include the probabilities with the 'return_probs' and 'probs_attr' arguments. This will append the predictions and probabilities to the original feature vector dataframe as opposed to the method used above which will create a copy of the feature vectors and append the predictions to that copy. | Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
Explanation: Introduction
This IPython notebook illustrates how to perform matching with an ML matcher. In particular we show examples with a decision tree matcher, but the same principles apply to all of the other ML matchers.
End of explanation
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
path_A = datasets_dir + os.sep + 'dblp_demo.csv'
path_B = datasets_dir + os.sep + 'acm_demo.csv'
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
# Load the pre-labeled data
S = em.read_csv_metadata(path_labeled_data,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
Explanation: Read in the original tables and a set of labeled data into py_entitymatching.
End of explanation
# Split S into I and J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
# Generate a set of features
F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
# Convert I into feature vectors using updated F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Instantiate the matcher to evaluate.
dt = em.DTMatcher(name='DecisionTree', random_state=0)
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
Explanation: Training the ML Matcher
Now, we can train our ML matcher. In this notebook we will demonstrate this process with a decision tree matcher. First, we need to split our labeled data into a training set and a test set. Then we will extract feature vectors from the training set and train our decision tree with the fit command.
End of explanation
# Convert J into a set of feature vectors using F
L1 = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions = dt.predict(table=L1, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'])
# Show the predictions
predictions[0:10]
Explanation: Getting Predictions with the ML Matcher
Since we now have a trained decision tree, we can use our matcher to get predictions on the test set. Below, we will show four different ways to get the predictions with the predict command that will be useful in various contexts.
Getting a List of Predictions
First up, we will demonstrate how to get just a list of predictions using the predict command. This is the default method of getting predictions. As shown below, the resulting variable, predictions, is just an array containing the predictions for each of the feature vectors in the test set.
End of explanation
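Because L1 keeps the true labels in its 'label' column, a quick ad-hoc sanity check of these predictions is possible with plain numpy (assuming, as is usual for py_entitymatching, that the feature table is a pandas DataFrame; this is only a spot check, not the toolkit's evaluation workflow):
import numpy as np
print(np.mean(np.asarray(predictions) == L1['label'].values))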
# Convert J into a set of feature vectors using F
L2 = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions, probs = dt.predict(table=L2, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], return_probs=True)
# Show the predictions and probabilities
print('Predictions for first ten entries: {0}'.format(predictions[0:10]))
print('Probabilities of a match for first ten entries: {0}'.format(probs[0:10]))
Explanation: Getting a List of Predictions and a List of Probabilities
Next we will demonstrate how to get both a list of predictions for the test set, as well as a list of the associated probabilities for the predictions. This is done by setting the 'return_probs' argument to true. Note that the probabilities shown are the probability for a match.
End of explanation
# Convert J into a set of feature vectors using F
L3 = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions = dt.predict(table=L3, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='prediction', append=True,
return_probs=True, probs_attr='probability')
# Show the predictions and probabilities
predictions[['_id', 'ltable_id', 'rtable_id', 'label', 'prediction', 'probability']].head()
Explanation: Appending the Predictions to the Feature Vectors Table
Often, we want to include the predictions with the feature vector table. We can return predictions appended to a copy of the feature vector table if we set the 'append' argument to true. We can choose the name of the new predictions column using the 'target_attr' argument. We can also append the probabilities by setting 'return_probs' to true and setting the new probabilities column name with the 'probs_attr'.
End of explanation
# Convert J into a set of feature vectors using F
L4 = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
dt.predict(table=L4, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='prediction', append=True,
return_probs=True, probs_attr='probabilities',
inplace=True)
# Show the predictions and probabilities
L4[['_id', 'ltable_id', 'rtable_id', 'label', 'prediction', 'probabilities']].head()
Explanation: Appending the Prediction to the Original Feature Vectors Table In-place
Lastly, we will show how to append the predictions to the original feature vector dataframe. We can accomplish this by setting the 'append' argument to true, setting the name of the new column with the 'target_attr' argument and then setting the 'inplace' argument to true. Again, we can include the probabilities with the 'return_probs' and 'probs_attr' arguments. This will append the predictions and probabilities to the original feature vector dataframe as opposed to the method used above which will create a copy of the feature vectors and append the predictions to that copy.
End of explanation |
3,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 3
Step1: First we load the iris data from task 1 and split it into training and validation sets.
Step2: Then we specify our parameter space and performance metric.
Step3: Next we run a performance test on GridSearchCV. Therefore we search multiple times to maximize the accuracy and save the best time for later comparison. Each time we use a different number of jobs.
Step4: Finally we evaluate our results | Python Code:
# imports
import pandas
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.grid_search import GridSearchCV
Explanation: Exercise 3: Cross Validation and Grid Search
We use sklearn's GridSearchCV and cross validation to search for an optimal value of n_neighbors for the KNeighborsClassifier to maximize the accuracy of the classification of the iris data from task 1.
End of explanation
# load dataset from task 1
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)
# split out features and class labels
array = dataset.values
X = array[:,0:4]
y = array[:,4]
Explanation: First we load the iris data from task 1 and split it into training and validation sets.
End of explanation
# specify parameter space and performance metric
max_n = 30
k = list(range(1, max_n + 1))
parameter_grid = {"n_neighbors": k}
scoring = "accuracy"
cross_val = 10
Explanation: Then we specify our parameter space and performance metric.
End of explanation
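Before running the full grid search, a single manual cross-validation run shows what GridSearchCV automates for every candidate value of n_neighbors (a minimal sketch; cross_val_score comes from the same pre-0.18 sklearn.cross_validation module already imported from above):
from sklearn.cross_validation import cross_val_score
single_k_scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y,
                                  cv=cross_val, scoring=scoring)
print(single_k_scores.mean())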
# parameter for performance test
max_jobs = 8
best_in = 3
# performance test
measurements = []
kneighbors = KNeighborsClassifier()
for i in range(max_jobs):
min_t = float("inf")
grid_search = GridSearchCV(kneighbors, parameter_grid, cv=cross_val, scoring=scoring, n_jobs=i + 1)
tr = %timeit -o grid_search.fit(X, y)
measurements.append(tr.best)
Explanation: Next we run a performance test on GridSearchCV. Therefore we search multiple times to maximize the accuracy and save the best time for later comparison. Each time we use a different number of jobs.
End of explanation
# best parameters found
print("Best parameters:")
print(grid_search.best_params_)
print("With accuracy:")
print(grid_search.best_score_)
fig, ax = plt.subplots()
ax.plot(range(1, max_jobs + 1), measurements, 'ro')
ax.set_xlim([0, max_jobs + 1])
#plt.axis([0, len(num_cpu_list)+1, 0, max(training_times)+1])
plt.title('Visualization of the runtime depending on the number of used jobs.')
plt.xlabel("#CPU Cores")
plt.ylabel("search time [s]")
plt.show()
scores_all_percent = [100 * grid_score[1] for grid_score in grid_search.grid_scores_]
params_all = [grid_score[0]["n_neighbors"] for grid_score in grid_search.grid_scores_]
N = max_n
ind = np.arange(N) # the x locations for bars
width = 0.5 # the width of the bars
fig, ax = plt.subplots()
ax.bar(ind + width/2, scores_all_percent, width)
ax.set_xticks(ind + width)
ax.set_xticklabels([str(i) for i in params_all])
ax.set_ylim([90,100])
plt.title("Accuracy of KNN vs n_neighbors param")
plt.xlabel("n_neighbors")
plt.ylabel("mean accuracy [%]")
plt.show()
Explanation: Finally we evaluate our results:
End of explanation |
3,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DCT-based Transform Coding of Images
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Show basis functions of the DCT
Step1: Functions that implement the 2D-DCT and IDCT of type-2 as they are being used in the JPG standard with (see also https
Step2: Verify that the DCT and IDCT are indeed inverses of each other
Step3: Show basis functions of DCT
Step4: Decompose image into blocks of size $8\times 8$ and display the DCT coefficients of these blocks
Step5: Fast DCT implementation using scipy | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from itertools import chain
from scipy import fftpack
import scipy as sp
from ipywidgets import interactive, HBox, Label
import ipywidgets as widgets
%matplotlib inline
Explanation: DCT-based Transform Coding of Images
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Show basis functions of the DCT
End of explanation
def dct(x):
retval = np.zeros_like(x)
for i in range(x.shape[0]):
for j in range(x.shape[1]):
for k in range(x.shape[0]):
for l in range(x.shape[1]):
retval[i,j] += x[k,l] * np.cos(np.pi*i*(2*k+1)/2/x.shape[0]) * np.cos(np.pi*j*(2*l+1)/2/x.shape[1])
if i == 0:
retval[i,j] *= np.sqrt(1/x.shape[0])
else:
retval[i,j] *= np.sqrt(2/x.shape[0])
            if j == 0:  # normalization along the second dimension
                retval[i,j] *= np.sqrt(1/x.shape[1])
            else:
                retval[i,j] *= np.sqrt(2/x.shape[1])
return retval
def idct(X):
temp = np.zeros_like(X)
# IDCT in horizontal direction
for i in range(X.shape[0]):
for j in range(X.shape[1]):
temp[i,j] = X[i,0] / np.sqrt(X.shape[1])
for k in range(1,X.shape[1]):
temp[i,j] += X[i,k] * np.cos(np.pi*k*(2*j+1)/2/X.shape[1]) * np.sqrt(2/X.shape[1])
# IDCT in vertical direction
retval = np.zeros_like(X)
for j in range(X.shape[1]):
for i in range(X.shape[0]):
retval[i,j] = temp[0,j] / np.sqrt(X.shape[0])
for k in range(1,X.shape[0]):
retval[i,j] += temp[k,j] * np.cos(np.pi*k*(2*i+1)/2/X.shape[0]) * np.sqrt(2/X.shape[0])
return retval
Explanation: Functions that implement the 2D-DCT and IDCT of type-2 as they are being used in the JPG standard with (see also https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.dct.html)
$$
X_{u,v} = f(u)f(v)\sum_{k=0}^{M-1}\sum_{\ell=0}^{M-1}x[k,\ell]\cos\left(\frac{\pi u(2k+1)}{2M}\right)\cos\left(\frac{\pi v(2\ell+1)}{2M}\right)
$$
with
$$
f(x) = \begin{cases}
\sqrt{\frac{1}{M}} & \text{if } x=0 \\
\sqrt{\frac{2}{M}} & \text{otherwise}
\end{cases}
$$
which is, up to a scaling factor, the DCT used in JPG.
Note that the 2D-DCT is decomposable, i.e., we can first execute it on a 2D block in one dimension and then execute the transform in another dimension
End of explanation
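The decomposability claim can be checked numerically: the naive 2D implementation above should agree with two passes of scipy's orthonormal type-II 1D-DCT, which is the same construction used for dct2 further below.
x_check = np.random.randn(8, 8)
X_separable = fftpack.dct(fftpack.dct(x_check, axis=0, norm='ortho'), axis=1, norm='ortho')
print(np.allclose(dct(x_check), X_separable))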
x = np.random.randn(8,8)
X = dct(x)
xh = idct(X)
print(x-xh)
Explanation: Verify that the DCT and IDCT are indeed inverses of each other
End of explanation
fig = plt.figure(1,figsize=(10,10))
fig.patch.set_facecolor('xkcd:beige')
idx = 1
for i in range(8):
for j in range(8):
X = np.zeros((8,8))
X[i,j] = 1
xh = idct(X)
plt.subplot(8,8,idx)
plt.imshow(xh, cmap='gray')
plt.axis('off')
idx += 1
plt.savefig('2dDCT_basis_functions.pdf', bbox_inches='tight', facecolor=fig.get_facecolor(), edgecolor='none')
Explanation: Show basis functions of DCT
End of explanation
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])
image = rgb2gray( mpimg.imread('Mandrill.png') )
# crop image to be a multiple of 8
image_len0 = int(np.floor(image.shape[0] / 8) * 8)
image_len1 = int(np.floor(image.shape[1] / 8) * 8)
image = image[0:image_len0, 0:image_len1]
Explanation: Decompose image into blocks of size $8\times 8$ and display the DCT coefficients of these blocks
End of explanation
def dct2(x):
return sp.fftpack.dct( sp.fftpack.dct( x, axis=0, norm='ortho' ), axis=1, norm='ortho' )
def idct2(x):
return sp.fftpack.idct( sp.fftpack.idct( x, axis=0 , norm='ortho'), axis=1 , norm='ortho')
# show DCT
image_dct = np.zeros_like(image)
for start0 in np.arange(0,image.shape[0], 8):
for start1 in np.arange(0,image.shape[1], 8):
# carry out transform
TC = dct2(image[start0:(start0+8), start1:(start1+8)] - 0.5)
image_dct[start0:(start0+8), start1:(start1+8)] = TC
plt.figure(1,figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(image, cmap='gray')
plt.title('Original image (monochrome)')
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(image_dct, cmap='gray')
plt.title('Image in DCT domain')
plt.axis('off')
plt.savefig('image_in_DCT_domain.pdf', bbox_inches='tight')
Explanation: Fast DCT implementation using scipy
End of explanation |
3,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Data's messy - clean it up!</h1>
Data cleaning is a critical process for improving data quality and ultimately the accuracy of machine learning model output. In this notebook we show how the GraphLab Create Data Matching toolkit can be used to get your data shiny clean.
Auto-tagging Stack Overflow questions and answers
Record linkage of house listings
Composite distances and choosing neighborhood parameters
Note
Step1: In the first section of this notebook we autotag posts from CrossValidated, the statistics section of the Stack Exchange network. Questions posted on this forum are typically annotated with tags by the authors but responses are not, making it more difficult to quickly scan responses for the most useful information. The raw data is available from the Stack Exchange data dump. For convenience we provide a preprocessed subsample (7.8MB) in the public Turi datasets bucket on Amazon S3, which is downloaded and saved locally with the first code snippet below.
For reference tags we use a lightly-curated list of statistics topics from Wikipedia. The preprocessed list is also available in the datasets S3 bucket.
A more extensive explanation of the code can be found in the Autotagger chapter of the User Guide.
<h3>Read in the metadata</h3>
The data is also saved locally to avoid repeated downloads.
Step2: <h3>Create the autotagger model</h3>
Step3: <h3>Read in the document data</h3>
Step4: <h3>Query the model</h3>
There are two key parameters when querying the model
Step5: <h2>Record linkage of house listings</h2>
To illustrate usage of the record linker tool, we use synthetic address data generated by and packaged with the FEBRL program, another data matching tool. For the sake of illustration suppose the dataset called "post_address" is a relatively error free set of reference addresses (say, from the Australian postal service). The dataset called "agent_listings" contains data with the same schema, but it has many errors; imagine this is data created by real estate agencies.
<h3>Read in the reference data</h3>
As with the autotagger data, the datasets downloaded in this section are saved locally for repeated usage. From prior experience, we know only a handful of features are useful for this illustration, and they are enumerated in the address_features list.
Step6: <h3>Create the record linker model</h3>
Step7: <h3>Read in the query data</h3>
Step8: <h3>Query the model</h3>
Results are obtained with the model's link method, which matches a new set of queries to the reference data passed in above to the create function. For our first pass, we set the radius parameter to 0.5, which means that matches must share at least roughly 50% of the information contained in both the post_address and agent_listings records.
Step9: <h3>Evaluate</h3>
The results mean that the address in query row 1 matches the address in refs row number 2438, although the Jaccard distance is relatively high at 0.42. Inspecting these records manually we see this is in fact not a good match.
Step10: On the other hand, the match between query number 3 and reference number 2947 has a distance of 0.045, indicating these two records are far more similar. By pulling these records we confirm this to be the case.
Step11: Unfortunately, these records are still not a true match because the street numbers are different (in a way that is not likely to be a typo). Ideally we would like street number differences to be weighted heavily in our distance function, while still allowing for typos and misspellings in the street and city names. To do this we can build a composite distance function.
<h2>Composite distances and choosing neighborhood parameters</h2>
<h3>Create a composite distance and a new model</h3>
In this case we'll use Levenshtein distance to measure the dissimilarity in street number, in addition to our existing Jaccard distance measured over all of the address features. Both of these components will be given equal weight. In the summary of the created model, we see the number of distance components is now two---Levenshtein and Jaccard distances---instead of one in our first model.
Step12: <h3>Query the model for a large number of neighbors</h3>
One tricky aspect of using a composite distance is figuring out the best threshold for match quality. A simple way to do this is to first return a relatively high number of matches for each query, then look at the distribution of distances for good thresholds using the radius parameter. For this notebook, I've captured a screenshot of the canvas output and displayed it below.
Step13: <h3>Calibrate the parameters for results quality</h3>
Here we see a stark jump at 0.636 in the distribution of distances for the 10-nearest neighbors of every query (remember this is no longer simple Jaccard distance, but a sum of Jaccard and Levenshtein distances over different sets of features). In our final pass, we set the k parameter to None, but enforce this distance threshold with the radius parameter.
Step14: There are far fewer results now, but they are much more likely to be true matches than with our first model, even while allowing for typos in many of the address fields. | Python Code:
import os
import graphlab as gl
Explanation: <h1>Data's messy - clean it up!</h1>
Data cleaning is a critical process for improving data quality and ultimately the accuracy of machine learning model output. In this notebook we show how the GraphLab Create Data Matching toolkit can be used to get your data shiny clean.
Auto-tagging Stack Overflow questions and answers
Record linkage of house listings
Composite distances and choosing neighborhood parameters
Note: this notebook requires GraphLab Create 1.6 or higher.
<h2>Auto-tagging Stack Overflow questions *and* answers</h2>
End of explanation
if os.path.exists('statistics_topics.csv'):
stats_topics = gl.SFrame.read_csv('statistics_topics.csv', header=False)
else:
stats_topics = gl.SFrame.read_csv('https://static.turi.com/datasets//statistics_topics.csv',
header=False)
stats_topics.save('statistics_topics', format='csv')
stats_topics = stats_topics.rename({'X1': 'tag'})
stats_topics.tail(10)
Explanation: In the first section of this notebook we autotag posts from CrossValidated, the statistics section of the Stack Exchange network. Questions posted on this forum are typically annotated with tags by the authors but responses are not, making it more difficult to quickly scan responses for the most useful information. The raw data is available from the Stack Exchange data dump. For convenience we provide a preprocessed subsample (7.8MB) in the public Turi datasets bucket on Amazon S3, which is downloaded and saved locally with the first code snippet below.
For reference tags we use a lightly-curated list of statistics topics from Wikipedia. The preprocessed list is also available in the datasets S3 bucket.
A more extensive explanation of the code can be found in the Autotagger chapter of the User Guide.
<h3>Read in the metadata</h3>
The data is also saved locally to avoid repeated downloads.
End of explanation
model = gl.autotagger.create(stats_topics)
model.list_fields()
model.tag?
Explanation: <h3>Create the autotagger model</h3>
End of explanation
if os.path.exists('stats_overflow_clean'):
posts = gl.SFrame('stats_overflow_clean')
else:
posts = gl.SFrame('https://static.turi.com/datasets/stats_overflow_clean')
posts.save('stats_overflow_clean')
print "Number of posts:", posts.num_rows()
posts[['Body', 'Title', 'PostTypeId', 'Tags']].tail(5)
posts['doc'] = posts['Title'] + ' ' + posts['Body']
Explanation: <h3>Read in the document data</h3>
End of explanation
tags = model.tag(posts, query_name='doc', k=5, similarity_threshold=0.1)
tags.print_rows(10, max_row_width=110, max_column_width=40)
Explanation: <h3>Query the model</h3>
There are two key parameters when querying the model: k, which indicates the maximum number of tags to return for each query, and similarity_threshold, which indicates the maximum distance from a query document to the tag. The most typical usage is to get preliminary results by setting k to 5 and leaving similarity_threshold unspecified. Use the similarity_threshold parameter to tune the final results for optimal precision and recall.
End of explanation
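A small sketch of that tuning loop: tightening similarity_threshold and counting how many suggested tags survive (the threshold values below are arbitrary examples):
for threshold in [0.05, 0.1, 0.2]:
    n_suggestions = model.tag(posts, query_name='doc', k=5,
                              similarity_threshold=threshold).num_rows()
    print threshold, n_suggestions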
col_types = {'street_number': str, 'postcode': str}
address_features = ['street_number', 'address_1', 'suburb', 'state', 'postcode']
if os.path.exists('febrl_F_org_5000.csv'):
post_address = gl.SFrame.read_csv('febrl_F_org_5000.csv', column_type_hints=col_types)
else:
url = 'https://static.turi.com/datasets/febrl_synthetic/febrl_F_org_5000.csv'
post_address = gl.SFrame.read_csv(url, column_type_hints=col_types)
post_address.save('febrl_F_org_5000.csv')
post_address = post_address[address_features]
post_address.print_rows(5)
Explanation: <h2>Record linkage of house listings</h2>
To illustrate usage of the record linker tool, we use synthetic address data generated by and packaged with the FEBRL program, another data matching tool. For the sake of illustration suppose the dataset called "post_address" is a relatively error free set of reference addresses (say, from the Australian postal service). The dataset called "agent_listings" contains data with the same schema, but it has many errors; imagine this is data created by real estate agencies.
<h3>Read in the reference data</h3>
As with the autotagger data, the datasets downloaded in this section are saved locally for repeated usage. From prior experience, we know only a handful of features are useful for this illustration, and they are enumerated in the address_features list.
End of explanation
model = gl.record_linker.create(post_address, distance='jaccard')
model.summary()
model.list_fields()
Explanation: <h3>Create the record linker model</h3>
End of explanation
if os.path.exists('febrl_F_dup_5000.csv'):
agent_listings = gl.SFrame.read_csv('febrl_F_dup_5000.csv',
column_type_hints=col_types)
else:
url = 'https://static.turi.com/datasets/febrl_synthetic/febrl_F_dup_5000.csv'
agent_listings = gl.SFrame.read_csv(url, column_type_hints=col_types)
agent_listings.save('febrl_F_dup_5000.csv')
agent_listings = agent_listings[address_features]
agent_listings.print_rows(5)
Explanation: <h3>Read in the query data</h3>
End of explanation
model.link?
matches = model.link(agent_listings, k=None, radius=0.5)
matches.head(5)
Explanation: <h3>Query the model</h3>
Results are obtained with the model's link method, which matches a new set of queries to the reference data passed in above to the create function. For our first pass, we set the radius parameter to 0.5, which means that matches must share at least roughly 50% of the information contained in both the post_address and agent_listings records.
End of explanation
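To make the 0.5 radius concrete, here is a toy token-level Jaccard computation on two made-up addresses (a simplification shown only for intuition, not the toolkit's exact featurization):
a = set('18 fanny avenue sawtell nsw 2785'.split())
b = set('18 fanny av sawtell nsw 2786'.split())
print 1 - len(a & b) / float(len(a | b))  # 0.5: four shared tokens out of eight distinct ones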
print agent_listings[1]
print post_address[2438]
Explanation: <h3>Evaluate</h3>
The results mean that the address in query row 1 matches the address in refs row number 2438, although the Jaccard distance is relatively high at 0.42. Inspecting these records manually we see this is in fact not a good match.
End of explanation
print agent_listings[3]
print post_address[2947]
Explanation: On the other hand, the match between query number 3 and reference number 2947 has a distance of 0.045, indicating these two records are far more similar. By pulling these records we confirm this to be the case.
End of explanation
address_dist = [
[['street_number'], 'levenshtein', 1],
[address_features, 'jaccard', 1]
]
model2 = gl.record_linker.create(post_address, distance=address_dist)
model2.summary()
model2['distance']
Explanation: Unfortunately, these records are still not a true match because the street numbers are different (in a way that is not likely to be a typo). Ideally we would like street number differences to be weighted heavily in our distance function, while still allowing for typos and misspellings in the street and city names. To do this we can build a composite distance function.
<h2>Composite distances and choosing neighborhood parameters</h2>
<h3>Create a composite distance and a new model</h3>
In this case we'll use Levenshtein distance to measure the dissimilarity in street number, in addition to our existing Jaccard distance measured over all of the address features. Both of these components will be given equal weight. In the summary of the created model, we see the number of distance components is now two---Levenshtein and Jaccard distances---instead of one in our first model.
End of explanation
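For intuition about the Levenshtein component, a minimal edit-distance sketch (not the toolkit's implementation) shows that a one-digit typo is cheap while a genuinely different street number is not:
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
print edit_distance('42', '46'), edit_distance('42', '171')  # 1 vs 3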
pre_match = model2.link(agent_listings, k=10, verbose=False)
pre_match['distance'].show()
from IPython.display import Image
Image(url='https://static.turi.com/datasets/house_link_distances.png')
Explanation: <h3>Query the model for a large number of neighbors</h3>
One tricky aspect of using a composite distance is figuring out the best threshold for match quality. A simple way to do this is to first return a relatively high number of matches for each query, then look at the distribution of distances for good thresholds using the radius parameter. For this notebook, I've captured a screenshot of the canvas output and displayed it below.
End of explanation
matches = model2.link(agent_listings, k=None, radius=0.64, verbose=False)
matches.head(5)
Explanation: <h3>Calibrate the parameters for results quality</h3>
Here we see a stark jump at 0.636 in the distribution of distances for the 10-nearest neighbors of every query (remember this is no longer simple Jaccard distance, but a sum of Jaccard and Levenshtein distances over different sets of features). In our final pass, we set the k parameter to None, but enforce this distance threshold with the radius parameter.
End of explanation
print agent_listings[6]
print post_address[1266]
Explanation: There are far fewer results now, but they are much more likely to be true matches than with our first model, even while allowing for typos in many of the address fields.
End of explanation |
3,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nipype Quickstart
This is a very quick non-imaging introduction to Nipype workflows. For a more comprehensive introduction, check the next section of the tutorial.
Existing documentation
Visualizing the evolution of Nipype
This notebook is taken from the reproducible-imaging repository
Import a few things from nipype
Step1: Creating Workflow with one Node that adds two numbers
Step2: Creating a second node and connecting to the hello Workflow
Step3: And we can check the results of our Workflow; we should see a list
Step4: We will try to add an additional Node that adds one
Step5: This time the workflow didn't execute cleanly and we got an error. We can use nipypecli to read the crashfile (note that if you have multiple crashfiles in the directory you'll have to provide a full name)
Step6: It clearly shows the problematic Node and its input. We tried to add an integer to a list, which is not allowed in Python.
Let's try using MapNode
Step7: Now the workflow finished without problems, let's see the results from hello.add_1
Step8: And now we will run the example with iterables
Step9: Now we have 6 nodes, we can check results for hello.add_1.a1
Step10: We can plot a general structure of the workflow
Step11: And more detailed structure with all nodes
Step12: We will introduce iterables on another node, the concater Node
Step13: Now we will introduce JoinNode that allows us to merge results together
Step14: Let's check the output of hello.join_scale_data.a0 node
Step15: Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.
Step16: let's print all nodes
Step17: the final result should be 10
Step18: we can also check the results of two other nodes
Step19: Exercise 2
Create a workflow to calculate the following sum for chosen $n$ and five different values of $x$
Step20: let's check all nodes
Step21: let's print all results of ex2.summing
Step22: Great, we just implemented a pretty good sine function! Those numbers should be approximately 0, 1, 0, -1 and 0. If they are not, try to increase $n_{max}$.
Exercise 2a
Use JoinNode to combine results from Exercise 2 in one container, e.g. a dictionary, that takes value $x$ as a key and the result from summing Node as a value.
Step23: let's print all nodes
Step24: and results from merge Node | Python Code:
import os
from nipype import Workflow, Node, Function
Explanation: Nipype Quickstart
This is a very quick non-imaging introduction to Nipype workflows. For a more comprehensive introduction, check the next section of the tutorial.
Existing documentation
Visualizing the evolution of Nipype
This notebook is taken from the reproducible-imaging repository
Import a few things from nipype
End of explanation
def sum(a, b):
return a + b
wf = Workflow('hello')
adder = Node(Function(input_names=['a', 'b'],
output_names=['sum'],
function=sum),
name='a_plus_b')
adder.inputs.a = 1
adder.inputs.b = 3
wf.add_nodes([adder])
wf.base_dir = os.getcwd()
eg = wf.run()
list(eg.nodes())[0].result.outputs
Explanation: Creating Workflow with one Node that adds two numbers
End of explanation
def concat(a, b):
return [a, b]
concater = Node(Function(input_names=['a', 'b'],
output_names=['some_list'],
function=concat),
name='concat_a_b')
wf.connect(adder, 'sum', concater, 'a')
concater.inputs.b = 3
eg = wf.run()
print(eg.nodes())
Explanation: Creating a second node and connecting to the hello Workflow
End of explanation
list(eg.nodes())[-1].result.outputs
Explanation: And we can check the results of our Workflow; we should see a list:
End of explanation
def plus_one(a):
return a + 1
plusone = Node(Function(input_names=['a'],
output_names=['out'],
function=plus_one),
name='add_1')
wf.connect(concater, 'some_list', plusone, 'a')
try:
eg = wf.run()
except(RuntimeError) as err:
print("RuntimeError:", err)
else:
raise
Explanation: We will try to add an additional Node that adds one:
End of explanation
!nipypecli crash crash*
Explanation: This time the workflow didn't execute cleanly and we got an error. We can use nipypecli to read the crashfile (note that if you have multiple crashfiles in the directory you'll have to provide a full name):
End of explanation
from nipype import MapNode
plusone = MapNode(Function(input_names=['a'],
output_names=['out'],
function=plus_one),
iterfield=['a'],
name='add_1')
wf = Workflow('hello_mapnode')
adder = Node(Function(input_names=['a', 'b'],
output_names=['sum'],
function=sum),
name='a_plus_b')
adder.inputs.a = 1
adder.inputs.b = 3
wf.connect(adder, 'sum', concater, 'a')
concater.inputs.b = 3
wf.connect(concater, 'some_list', plusone, 'a')
wf.base_dir = os.getcwd()
eg = wf.run()
print(eg.nodes())
Explanation: It clearly shows the problematic Node and its input. We tried to add an integer to a list, which is not allowed in Python.
Let's try using MapNode
End of explanation
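Conceptually, the MapNode applies plus_one to every element of the incoming list, much like a plain list comprehension (a rough analogy that ignores Nipype's caching and provenance handling):
inputs = concat(sum(1, 3), 3)         # [4, 3], what the upstream nodes produce
print([plus_one(a) for a in inputs])  # [5, 4]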
print(list(eg.nodes())[2].result.outputs)
Explanation: Now the workflow finished without problems, let's see the results from hello.add_1:
End of explanation
adder.iterables = ('a', [1, 2])
adder.inputs.b = 2
eg = wf.run()
print(eg.nodes())
Explanation: And now we will run the example with iterables:
End of explanation
list(eg.nodes())[5].result.outputs
wf.write_graph(graph2use='exec')
from IPython.display import Image
Explanation: Now we have 6 nodes, we can check results for hello.add_1.a1
End of explanation
Image("hello_mapnode/graph.png")
Explanation: We can plot a general structure of the workflow:
End of explanation
Image("hello_mapnode/graph_detailed.png")
Explanation: And more detailed structure with all nodes:
End of explanation
concater.iterables = ('b', [3, 4])
eg = wf.run()
eg.nodes();
wf.write_graph(graph2use='exec')
Image("hello_mapnode/graph_detailed.png")
Explanation: We will introduce iterables on another node, the concater Node:
End of explanation
def merge_and_scale_data(data2):
import numpy as np
return (np.array(data2) * 1000).tolist()
from nipype import JoinNode
joiner = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='join_scale_data',
joinsource=adder,
joinfield=['data2'])
wf.connect(plusone, 'out', joiner, 'data2')
eg = wf.run()
eg.nodes()
Explanation: Now we will introduce JoinNode that allows us to merge results together:
End of explanation
list(eg.nodes())[0].result.outputs
wf.write_graph(graph2use='exec')
Image("hello_mapnode/graph.png")
Image("hello_mapnode/graph_detailed.png")
%time eg = wf.run(plugin='MultiProc', plugin_args={'n_procs': 2})
wf.base_dir = os.path.join(os.getcwd(), 'alt')
%time eg = wf.run(plugin='MultiProc', plugin_args={'n_procs': 2})
%time eg = wf.run(plugin='MultiProc', plugin_args={'n_procs': 2})
Explanation: Let's check the output of hello.join_scale_data.a0 node:
End of explanation
#write your code here
# 1. write 3 functions: one that returns a list of number from a specific range,
# second that returns n! (you can use math.factorial) and third, that sums the elements from a list
# 2. create a workflow and define the working directory
# 3. define 3 nodes using Node and MapNode and connect them within the workflow
# 4. run the workflow and check the results
from nipype import Workflow, Node, MapNode, Function
import os
def range_fun(n_min, n_max):
return list(range(n_min, n_max+1))
def factorial(n):
# print("FACTORIAL, {}".format(n))
import math
return math.factorial(n)
def summing(terms):
return sum(terms)
wf_ex1 = Workflow('ex1')
wf_ex1.base_dir = os.getcwd()
range_nd = Node(Function(input_names=['n_min', 'n_max'],
output_names=['range_list'],
function=range_fun),
name='range_list')
factorial_nd = MapNode(Function(input_names=['n'],
output_names=['fact_out'],
function=factorial),
iterfield=['n'],
name='factorial')
summing_nd = Node(Function(input_names=['terms'],
output_names=['sum_out'],
function=summing),
name='summing')
range_nd.inputs.n_min = 0
range_nd.inputs.n_max = 3
wf_ex1.add_nodes([range_nd])
wf_ex1.connect(range_nd, 'range_list', factorial_nd, 'n')
wf_ex1.connect(factorial_nd, 'fact_out', summing_nd, "terms")
eg = wf_ex1.run()
Explanation: Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.:
$$\sum_{k=n_{min}}^{n_{max}} k! = 0! + 1! + 2! + 3! + \cdots$$
if $n_{min}=0$ and $n_{max}=3$
$$\sum _{k=0}^{3} k! = 0! + 1! +2! + 3! = 1 + 1 + 2 + 6 = 10$$
End of explanation
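A quick stand-alone check of the expected answer (written without the built-in sum, which the quickstart section above shadowed with a two-argument function):
import math
total = 0
for k in range(0, 3 + 1):
    total += math.factorial(k)
print(total)  # 10, which the workflow's summing node should reproduce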
eg.nodes()
Explanation: let's print all nodes:
End of explanation
list(eg.nodes())[2].result.outputs
Explanation: the final result should be 10:
End of explanation
print(list(eg.nodes())[0].result.outputs)
print(list(eg.nodes())[1].result.outputs)
Explanation: we can also check the results of two other nodes:
End of explanation
# write your solution here
# 1. write 3 functions: one that returns a list of number from a range between 0 and some n,
# second that returns a term for a specific k, and third, that sums the elements from a list
# 2. create a workflow and define the working directory
# 3. define 3 nodes using Node and MapNode and connect them within the workflow
# 4. use iterables for 4 values of x
# 5. run the workflow and check the final results for every value of x
# we can reuse function from previous exercise, but they need some edits
from nipype import Workflow, Node, MapNode, JoinNode, Function
import os
import math
def range_fun(n_max):
return list(range(n_max+1))
def term(k, x):
import math
fract = math.factorial(2 * k + 1)
polyn = x ** (2 * k + 1)
return (-1)**k * polyn / fract
def summing(terms):
return sum(terms)
wf_ex2 = Workflow('ex2')
wf_ex2.base_dir = os.getcwd()
range_nd = Node(Function(input_names=['n_max'],
output_names=['range_list'],
function=range_fun),
name='range_list')
term_nd = MapNode(Function(input_names=['k', 'x'],
output_names=['term_out'],
function=term),
iterfield=['k'],
name='term')
summing_nd = Node(Function(input_names=['terms'],
output_names=['sum_out'],
function=summing),
name='summing')
range_nd.inputs.n_max = 15
x_list = [0, 0.5 * math.pi, math.pi, 1.5 * math.pi, 2 * math.pi]
term_nd.iterables = ('x', x_list)
wf_ex2.add_nodes([range_nd])
wf_ex2.connect(range_nd, 'range_list', term_nd, 'k')
wf_ex2.connect(term_nd, 'term_out', summing_nd, "terms")
eg = wf_ex2.run()
Explanation: Exercise 2
Create a workflow to calculate the following sum for chosen $n$ and five different values of $x$: $0$, $\frac{1}{2} \pi$, $\pi$, $\frac{3}{2} \pi$, and $ 2 \pi$.
$\sum_{k=0}^{n} \frac{(-1)^{k}}{(2k+1)!} x^{2k+1} = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots$
End of explanation
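The same partial sums can be checked against math.sin directly; this stand-alone sketch mirrors the n_max = 15 and the x values used above:
import math
for x_val in [0, 0.5 * math.pi, math.pi, 1.5 * math.pi, 2 * math.pi]:
    partial = 0.0
    for k in range(15 + 1):
        partial += (-1) ** k * x_val ** (2 * k + 1) / math.factorial(2 * k + 1)
    print(x_val, abs(partial - math.sin(x_val)))  # differences should be tiny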
eg.nodes()
Explanation: let's check all nodes
End of explanation
print(list(eg.nodes())[2].result.outputs)
print(list(eg.nodes())[4].result.outputs)
print(list(eg.nodes())[6].result.outputs)
print(list(eg.nodes())[8].result.outputs)
print(list(eg.nodes())[10].result.outputs)
Explanation: let's print all results of ex2.summing
End of explanation
# write your code here
# 1. create an additional function that takes 2 lists and combines them into one container, e.g. dictionary
# 2. use JoinNode to define a new node that merges results from Exercise 2 and connect it to the workflow
# 3. run the workflow and check the results of the merging node
def merge_results(results, x):
return dict(zip(x, results))
join_nd = JoinNode(Function(input_names=['results', 'x'],
output_names=['results_cont'],
function=merge_results),
name='merge',
joinsource=term_nd, # this is the node that used iterables for x
joinfield=['results'])
# taking the list of arguments from the previous part
join_nd.inputs.x = x_list
# connecting a new node to the summing_nd
wf_ex2.connect(summing_nd, "sum_out", join_nd, "results")
eg = wf_ex2.run()
Explanation: Great, we just implemented a pretty good sine function! Those numbers should be approximately 0, 1, 0, -1 and 0. If they are not, try to increase $n_{max}$.
Exercise 2a
Use JoinNode to combine results from Exercise 2 in one container, e.g. a dictionary, that takes value $x$ as a key and the result from summing Node as a value.
End of explanation
eg.nodes()
Explanation: let's print all nodes
End of explanation
list(eg.nodes())[1].result.outputs
Explanation: and results from merge Node:
End of explanation |
3,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License
Step3: The World Cup Problem, Part One
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
Let's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game.
To represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play.
Here's what the prior looks like.
Step4: Now we can create a Soccer object and initialize it with the prior Pmf
Step5: Here's the update after the first goal at 11 minutes.
Step6: Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes).
Step7: We can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability.
Step9: MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions
Step10: Here's the result for the World Cup problem.
Step11: And here's what the mixture looks like.
Step12: Exercise
Step13: MCMC
Building the MCMC model incrementally, start with just the prior distribution for lam.
Step14: Let's look at the prior predictive distribution for the time between goals (in games).
Step15: Now we're ready for the inverse problem, estimating lam based on the first observed gap.
Step16: And here's the inverse problem with both observed gaps.
Step17: And we can generate a predictive distribution for the time until the next goal (in games).
Step18: Exercise
Step22: And we can generate a predictive distribution for the time until the next goal (in games). | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite
import thinkbayes2
import thinkplot
import numpy as np
from scipy.special import gamma
import pymc3 as pm
Explanation: Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 12, 101)
pmf_gamma = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf_gamma)
thinkplot.decorate(title='Gamma PDF',
xlabel='Goals per game',
ylabel='PDF')
pmf_gamma.Mean()
class Soccer(Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: scoring rate in goals per game
data: interarrival time in minutes
x = data / 90
lam = hypo
like = lam * np.exp(-lam * x)
return like
Explanation: The World Cup Problem, Part One
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
Let's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game.
To represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play.
Here's what the prior looks like.
End of explanation
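A brief aside (my note, not from the original text): the Likelihood method above uses the exponential density because, if goals arrive as a Poisson process with rate λ per game, the waiting time $x$ (measured in games) between goals is exponentially distributed,
\begin{equation}p(x \mid \lambda) = \lambda e^{-\lambda x}\end{equation}
which is exactly the quantity returned by lam * np.exp(-lam * x).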
prior = Soccer(pmf_gamma)
thinkplot.Pdf(prior)
thinkplot.decorate(title='Gamma prior',
xlabel='Goals per game',
ylabel='PDF')
prior.Mean()
Explanation: Now we can create a Soccer object and initialize it with the prior Pmf:
End of explanation
posterior1 = prior.Copy()
posterior1.Update(11)
thinkplot.Pdf(prior, color='0.7')
thinkplot.Pdf(posterior1)
thinkplot.decorate(title='Posterior after 1 goal',
xlabel='Goals per game',
ylabel='PDF')
posterior1.Mean()
Explanation: Here's the update after the first goal at 11 minutes.
End of explanation
posterior2 = posterior1.Copy()
posterior2.Update(12)
thinkplot.Pdf(prior, color='0.7')
thinkplot.Pdf(posterior1, color='0.7')
thinkplot.Pdf(posterior2)
thinkplot.decorate(title='Posterior after 2 goals',
xlabel='Goals per game',
ylabel='PDF')
posterior2.Mean()
from thinkbayes2 import MakePoissonPmf
Explanation: Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes).
End of explanation
rem_time = 90 - 23
metapmf = Pmf()
for lam, prob in posterior2.Items():
lt = lam * rem_time / 90
pred = MakePoissonPmf(lt, 15)
metapmf[pred] = prob
Explanation: We can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability.
End of explanation
def MakeMixture(metapmf, label='mix'):
Make a mixture distribution.
Args:
metapmf: Pmf that maps from Pmfs to probs.
label: string label for the new Pmf.
Returns: Pmf object.
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for x, p2 in pmf.Items():
mix[x] += p1 * p2
return mix
Explanation: MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions:
End of explanation
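In symbols (my gloss, added for clarity), the mixture built here is the posterior predictive distribution for the number of goals $k$ in the remaining fraction $t = 67/90$ of the game,
\begin{equation}P(k) = \sum_{\lambda} P(\lambda \mid \mathrm{data}) \, \mathrm{Poisson}(k \mid \lambda t)\end{equation}
which is what the nested loop in MakeMixture computes.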
mix = MakeMixture(metapmf)
mix.Print()
Explanation: Here's the result for the World Cup problem.
End of explanation
thinkplot.Hist(mix)
thinkplot.decorate(title='Posterior predictive distribution',
xlabel='Goals scored',
ylabel='PMF')
Explanation: And here's what the mixture looks like.
End of explanation
# Solution goes here
Explanation: Exercise: Compute the predictive mean and the probability of scoring 5 or more additional goals.
End of explanation
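One possible way to fill in the solution cell above (my sketch, not the original notebook's solution): both quantities can be read directly off mix.
print(mix.Mean())
print(sum(p for k, p in mix.Items() if k >= 5))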
cdf_gamma = pmf_gamma.MakeCdf();
mean_rate = 1.3
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
trace = pm.sample_prior_predictive(1000)
lam_sample = trace['lam']
print(lam_sample.mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(cdf_gamma, label='Prior grid')
thinkplot.Cdf(cdf_lam, label='Prior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: MCMC
Building the MCMC model incrementally, start with just the prior distribution for lam.
End of explanation
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam)
trace = pm.sample_prior_predictive(1000)
gap_sample = trace['gap']
print(gap_sample.mean())
cdf_lam = Cdf(gap_sample)
thinkplot.Cdf(cdf_lam)
thinkplot.decorate(xlabel='Time between goals (games)',
ylabel='Cdf')
Explanation: Let's look at the prior predictive distribution for the time between goals (in games).
End of explanation
first_gap = 11/90
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam, observed=first_gap)
trace = pm.sample(1000, tune=3000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
print(posterior1.Mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(posterior1.MakeCdf(), label='Posterior analytic')
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: Now we're ready for the inverse problem, estimating lam based on the first observed gap.
End of explanation
second_gap = 12/90
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam, observed=[first_gap, second_gap])
trace = pm.sample(1000, tune=2000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
print(posterior2.Mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(posterior2.MakeCdf(), label='Posterior analytic')
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: And here's the inverse problem with both observed gaps.
End of explanation
with model:
post_pred = pm.sample_ppc(trace, samples=1000)
gap_sample = post_pred['gap'].flatten()
print(gap_sample.mean())
cdf_gap = Cdf(gap_sample)
thinkplot.Cdf(cdf_gap)
thinkplot.decorate(xlabel='Time between goals (games)',
ylabel='Cdf')
Explanation: And we can generate a predictive distribution for the time until the next goal (in games).
End of explanation
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
goals = pm.Poisson('goals', lam, observed=1)
trace = pm.sample(3000, tune=3000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: Exercise: Use PyMC to write a solution to the second World Cup problem:
In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. How much evidence does this victory provide that Germany had the better team? What is the probability that Germany would win a rematch?
End of explanation
with model:
post_pred = pm.sample_ppc(trace, samples=3000)
goal_sample = post_pred['goals'].flatten()
print(goal_sample.mean())
pmf_goals = Pmf(goal_sample)
thinkplot.Hist(pmf_goals)
thinkplot.decorate(xlabel='Number of goals',
ylabel='Cdf')
from scipy.stats import poisson
class Soccer2(thinkbayes2.Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: goal rate in goals per game
data: goals scored in a game
return poisson.pmf(data, hypo)
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.decorate(xlabel='Goal-scoring rate (λ)',
ylabel='PMF')
pmf.Mean()
germany = Soccer2(pmf);
germany.Update(1)
def PredictiveDist(suite, duration=1, label='pred'):
Computes the distribution of goals scored in a game.
returns: new Pmf (mixture of Poissons)
metapmf = thinkbayes2.Pmf()
for lam, prob in suite.Items():
pred = thinkbayes2.MakePoissonPmf(lam * duration, 10)
metapmf[pred] = prob
mix = thinkbayes2.MakeMixture(metapmf, label=label)
return mix
germany_pred = PredictiveDist(germany, label='germany')
thinkplot.Hist(germany_pred, width=0.45, align='right')
thinkplot.Hist(pmf_goals, width=0.45, align='left')
thinkplot.decorate(xlabel='Predicted # goals',
ylabel='Pmf')
thinkplot.Cdf(germany_pred.MakeCdf(), label='Grid')
thinkplot.Cdf(Cdf(goal_sample), label='MCMC')
thinkplot.decorate(xlabel='Predicted # goals',
ylabel='Pmf')
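# One way to finish answering whether Germany would win a rematch (my sketch, not
# from the original notebook): build the same grid posterior for Argentina, which
# scored 0 goals, and compare the two predictive distributions goal by goal.
argentina = Soccer2(pmf)
argentina.Update(0)
argentina_pred = PredictiveDist(argentina, label='argentina')
p_win = sum(p1 * p2 for g1, p1 in germany_pred.Items()
                    for g2, p2 in argentina_pred.Items() if g1 > g2)
print(p_win)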
Explanation: And we can generate a predictive distribution for the number of goals Germany would score in a rematch.
End of explanation |
3,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Phylogenetic generalized least squares model fit for quartet data
Manuscript
Step1: We're gonna be running R code as well, so we need the following
Step3: To run parallel Python code using ipyparallel
In a separate terminal run the command below to start N engines for doing parallel computing. This requires the Python package 'ipyparallel'.
ipcluster start --n 20
Then we connect to these Engines and populate the namespace with our libraries
Step4: 1. An example pgls fit for simulated data.
The code to fit a phylogenetic generalized least squares model is adapted from http
Step5: Pass objects between R and Python
Step8: Model fitting functions
I know this is a bit convoluted, but these are Python functions which call mostly R code to do model fitting. This is because I couldn't find a Python library capable of doing pgls. The functions take in Python objects, convert them to R objects, compute results, and return values as Python objects. The lines with "%R" execute as R code in IPython. The function rModelFit uses a phylogeny (or a supplied covariance matrix) to build the covariance structure and perform model fitting using GLS.
Step9: Model fits to random data
Here we did four different model fits to check that our covariance structure is working correctly. We expect that in all model fits the two variables (inputreads and pdist) will be poor predictors of nloci, since all values were randomly generated. In the first model fit we use no covariance structure and the LL=158. In the second we enforce a covariance structure by getting a VCV from the phylogeny. This gives a much worse fit to the data, as we might expect since the data were not generated with any regard to the phylogeny. The third test checks that when we input a VCV directly we get the same answer as when we put in a tree. We do. This is good since in our quartet analysis below we will be entering just the VCV. Finally, we fit a model that includes a VCV, but where all off-diagonal elements are zero (no covariance). This is equivalent to transforming the tree by Pagel's lambda = 0. It is encouraging that when we remove the covariance structure in this way we get the same LL=158 as when no covariance structure is present at all. This means that in our real data set we can try to estimate Pagel's lambda for our covariance matrices computed from quartet data, and if the covariance structure we create does not fit our data at all then a zero lambda will be estimated and the effect of our covariance structure will be removed.
Step12: A function to optimize lambda
On this simulated data there is little power to detect any covariance structure (lambda fit) because the data are basically just noise. But you'll see below that on the simulated RAD data sets it fits very well.
Step13: Fit with lambda
When we fit the model using our estimated lambda, which in this case is
zero since the data were simulated random (no respect to phylogeny) the
model fit is the same as above when there is no covariance structure.
This is good news. We will penalize this model for the extra parameter
using AIC when comparing models.
Step15: 2. Write functions to get stats info from pyrad outputs
Here we build a large data frame that will store how many loci are shared among all sets of quartets, what the phylogenetic distance spanned by each quartet is (pdist), and how much input data each quartet sample had (inputreads).
Parse nloci (shared) from pyrad output
Step18: Get pdist and median inputreads for all(!) quartets
Step20: A simple Class object to store our results in
Step24: Calculate a covariance matrix from shared edges among quartets
This is of course different from the VCV we inferred from a tree structure in the example at the beginning of this notebook. Here our data points are not tips of a tree but rather quartets. And we create a covariance matrix that measures the amount of shared edges among sampled quartets.
Step26: Function to run model functions together
This allows us to make one call to run jobs in parallel
Step27: 3. Testing on a simulated RAD data set
In this case we know what the source of missing data is, either mutation (drop) or random (rand).
For a large tree like this (64) tips this takes quite a while to run (~20 minutes). This is the case even when we only randomly sample 200 quartets out of the possible ~650K. Our solution will be to do many repetitions of subsampling and to parallelize this to get a good estimate quickly. Fortunately our largest empirical data set is about the same size as these simulated data sets, so this gives us a good idea for run times (still pretty long). To repeat this analysis you will have to change the file paths. The files are created in the simulation notebook.
Simulate a new data set variable missing data
I'm simulating a new data set for this since the 'random' missing data sets we made in the sim notebook all have very similar amounts of input data. To get signal from the data we need the input data to vary more significantly among samples. So I simulate some new data here with greater variance.
More specifically, I will take data from the 'fullcov' data sets which had no missing data
and randomly remove some DIFFERENT proportion of data from each sample. This is different
from our low coverage simulations in which we removed the same proportion of data from
each sample, but it was randomly missing from different loci.
New balanced tree data set
Step28: For the mutation-disruption data set we can re-use the sim data from notebook 1
Step29: New Imbalanced tree data set
Step30: Imbalanced tree
Step31: Get array of shared loci for each data set
Step32: Build array of model stats for each data set
This takes a few minutes depending on how many CPUs you're running in parallel. One of the arguments to 'build_df4_parallel' is 'lbview', our load_balanced_view of the parallel processors.
Step33: Mean standardize the arrays
Step34: To parallelize the next step we need to send our functions to the remote namespace
A much cleaner way to do this would have been to collect all the functions into a Python module and then just import that. Since I'm writing everything out in this notebook to be more didactic, though, we need to perform this step instead.
Step35: Plot sim data set with a random 1000 quartets sampled
Step36: Run 100 replicate subsample models for each data set
Step37: Simulated data sets results
In all of the data sets the phylo corrected model was a better fit to the data by 30-90 AIC points. When data was missing from low sequence coverage it was best predicted by the inputreads, and when data was missing from mutation-disruption it was best explained by phylo distances.
Step38: confidence intervals
The fit for this data set yields a negative AIC both with and without a covariance matrix. This shows that the amount of input data (raw) is a better predictor of shared data between samples than is their phylogenetic distance. See the plot below.
Step39: How to deal with large matrices (absurd run times)
OK, so in our test example it takes about 10 minutes to compute a matrix with only 4000 elements, meaning we can expect that a matrix of several hundred thousand elements will pretty much never finish. One work around for this is to take a sub-sampling approach. The full matrix for 13 taxa is ~700 induced quartets, while the full data set for 65 taxa is ~700K. For the latter we will subsample 100 matrices composed of 1000 random quartets. Then we will compute the covariance matrix of the sampled quartets and fit a regression model. Finally, the results over all 100 subsampled replicates will be reported as a 95% confidence interval.
Get all 10 empirical data sets
Step40: Create large data frames for each data set
Step41: Collect the results
Step42: Print results means
Step43: So, for example, why is this one a poor fit for pdist?
There are three clouds of points corresponding to comparisons within and between the major clades. Some quartets separated by little phylo distance share tons of data, while some separated by large phylo distances share very few loci. It comes down to whether those data points include the same few samples or not. When we control for their non-independence, those clouds of points represent much less information, and in essence the pattern disappears.
Step44: Confidence intervals
Step45: Make all plots for supp fig 4 | Python Code:
## import Python libraries
from scipy.optimize import fminbound
import numpy as np
import pandas as pd
import itertools
import ete3
import rpy2
import copy
import glob
import gzip
import os
Explanation: Phylogenetic generalized least squares model fit for quartet data
Manuscript: "Misconceptions on Missing Data in RAD-seq Phylogenetics"
Notebook author: Deren Eaton
Contact: [email protected]
Date: 8/16/2016
This analysis is similar to a standard phylogenetic generalized least squares model fit in that we are trying to perform a regression on data for which there is a lot of non-independence of the data points. If we were comparing species the phylogeny would provide an estimate of the covariance structure where shared branch lengths among species represent their shared history. Here we are not comparing species as our data points but rather quartets of species. Our covariance structure is thus the length of shared branch lengths among sets of four taxa. The code below sets up a framework for doing this. Although the model is not perfect it provides a first step in the right direction and we show it performs better than when we do not account for the non-independence among quartets at all when analyzing them. Below you will find the following:
A test of pgls in R using a simulated tree and data.
Code for measuring a covariance matrix from quartet data.
Code for building a large data set of variables for our quartet regressions from pyrad outputs.
A test of our pgls quartet regression code on simulated RAD data.
Fits of our pgls quartet regression code on 10 empirical data sets.
Import Python libraries
End of explanation
## requires rpy2
%load_ext rpy2.ipython
%%R
## load a few R libraries
library(ape)
library(ade4)
library(nlme)
Explanation: We're gonna be running R code as well, so we need the following:
The R code is run using 'rmagic' commands in IPython, so to copy this code it would be easiest if you also ran it in a Jupyter/IPython notebook.
End of explanation
## import ipyparallel
import ipyparallel as ipp
## start a parallel client
ipyclient = ipp.Client()
## create a loadbalanced view to distribute jobs
lbview = ipyclient.load_balanced_view()
## import Python and R packages into parallel namespace
ipyclient[:].execute(
from scipy.optimize import fminbound
import numpy as np
import pandas as pd
import itertools
import ete3
import rpy2
import copy
import os
%load_ext rpy2.ipython
%R library(ape)
%R library(nlme)
)
Explanation: To run parallel Python code using ipyparallel
In a separate terminal run the command below to start N engines for doing parallel computing. This requires the Python package 'ipyparallel'.
ipcluster start --n 20
Then we connect to these Engines and populate the namespace with our libraries
End of explanation
%%R -w 400 -h 400
## matrix size (can it handle big data?)
n = 500
## simulate random data, log-transformed large values
set.seed(54321999)
simdata = data.frame('nloci'=log(rnorm(n, 50000, 10000)),
'pdist'=rnorm(n, 1, 0.2),
'inputreads'=log(abs(rnorm(n, 500000, 100000))))
## simulate a tree of same size
simtree = rtree(n)
## match names of tree to simdata idxs
simtree$tip.label = 1:n
## plot the data
plot(simtree)
plot(simdata)
Explanation: 1. An example pgls fit for simulated data.
The code to fit a phylogenetic generalized least squares model is adapted from http://www.mpcm-evolution.org/OPM/Chapter5_OPM/OPM_chap5.pdf. Below we first fit a model for simulated data to see how large of data matrices this method can handle. Then we will run on our real data, which is a more complex case. We'll get to that.
Generate some data and plot it
This cell is an example of R code (notice the %%R header). Google "R in Jupyter notebooks" for more info.
End of explanation
## as an example of what we'll be doing with the real data (Python objects)
## let's export them (-o) from R back to Python objects
%R newick <- write.tree(simtree)
%R -o newick
%R -o simdata
## Now we have the tree from R as a string in Python
## and the data frame from R as a pandas data frame in Python
newick = newick.tostring()
simdata = simdata
print simdata.head()
Explanation: Pass objects between R and Python
End of explanation
def rModelFit(pydat, covmat=np.zeros(0), newick=""):
send PyObjects to R and runs pgls using either an
input covariance matrix or an input tree. Returns
the model fit as a dataframe, and the Log likelhiood
## reconstitute Python data frame as R data frame
%R -i pydat
%R data <- data.frame(pydat)
## which model to use...
if (not np.any(covmat)) and (not newick):
%R fit <- gls(nloci ~ inputreads + pdist, data=data)
else:
## get covariance (correlation) matrix from tree
if newick:
%R -i newick
%R tre <- read.tree(text=newick)
%R simmat <- vcv(tre, corr=TRUE)
## get covariance matrix from input
else:
%R -i covmat
%R simmat <- cov2cor(covmat)
## fit the model
%R tip.heights <- diag(simmat)
%R fit <- gls(nloci ~ inputreads + pdist, data=data, \
correlation=corSymm(simmat[lower.tri(simmat)], fixed=TRUE), \
weights=varFixed(~tip.heights))
## return results as data frame
%R df <- as.data.frame(summary(fit)$tTable)
%R LL <- fit$logLik
%R -o df,LL
return df, LL
def rModelFit2(pydat, covmat=np.zeros(0), newick=""):
Send PyObjects to R and run pgls with covariance mat.
In contrast to the model above this one only fits the
model to inputreads, not pdist. We use the likelihood
fit of this model to estimate estimate lamda by using
maximum likelihood in the func estimate_lambda.
## reconstitute Python data frame as R data frame
%R -i pydat
%R data <- data.frame(pydat)
## which model to use...
if (not np.any(covmat)) and (not newick):
%R fit <- gls(nloci ~ inputreads, data=data)
else:
## get covariance (correlation) matrix from tree
if newick:
%R -i newick
%R tre <- read.tree(text=newick)
%R simmat <- vcv(tre, corr=TRUE)
## get covariance matrix from input
else:
%R -i covmat
%R simmat <- cov2cor(covmat)
## fit the model
%R tip.heights <- diag(simmat)
%R fit <- gls(nloci ~ inputreads, data=data, \
correlation=corSymm(simmat[lower.tri(simmat)], fixed=TRUE), \
weights=varFixed(~tip.heights))
## return results as data frame
%R df <- as.data.frame(summary(fit)$tTable)
%R LL <- fit$logLik
%R -o df,LL
return df, LL
Explanation: Model fitting functions
I know this is a bit convoluted, but these are Python functions which call mostly R code to do model fitting. This is because I couldn't find a Python library capable of doing pgls. The functions take in Python objects, convert them to R objects, compute results, and return values as Python objects. The lines with "%R" execute as R code in IPython. The function rModelFit uses a phylogeny (or a supplied covariance matrix) to build the covariance structure and perform model fitting using GLS.
End of explanation
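For reference (my summary, not part of the original notebook), the model being fit in both functions is a generalized least squares regression in which the residuals are allowed to covary according to the matrix built from the tree or supplied directly,
\begin{equation}\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, \qquad \boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^{2}\mathbf{V})\end{equation}
where $\mathbf{y}$ is nloci, the columns of $\mathbf{X}$ are inputreads (and pdist in rModelFit), and $\mathbf{V}$ is the correlation matrix passed to corSymm.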
print "\nno VCV"
df, LL = rModelFit(simdata)
print df
print "log-likelihood", LL
print "---"*20
print "\nVCV from tree -- entered as tree"
df, LL = rModelFit(simdata, newick=newick)
print df
print "log-likelihood", LL
print "---"*20
print "\nVCV from tree -- entered as VCV"
%R -o simmat
df, LL = rModelFit(simdata, covmat=simmat) ## <- uses the simmat from the tree
print df
print "log-likelihood", LL
print "---"*20
print "\nVCV from tree -- entered as VCV -- transformed so no VCV structure"
df, LL = rModelFit(simdata, covmat=np.eye(simdata.shape[0])) ## <- no covar == lambda=0
print df
print "log-likelihood", LL
print "---"*20
Explanation: Model fits to random data
Here we did four different model fits to check that our covariance structure is working correctly. We expect that in all model fits the two variables (inputreads and pdist) will be poor predictors of nloci, since all values were randomly generated. In the first model fit we use no covariance structure and the LL=158. In the second we enforce a covariance structure by getting a VCV from the phylogeny. This gives a much worse fit to the data, as we might expect since the data were not generated with any regard to the phylogeny. The third test checks that when we input a VCV directly we get the same answer as when we put in a tree. We do. This is good since in our quartet analysis below we will be entering just the VCV. Finally, we fit a model that includes a VCV, but where all off-diagonal elements are zero (no covariance). This is equivalent to transforming the tree by Pagel's lambda = 0. It is encouraging that when we remove the covariance structure in this way we get the same LL=158 as when no covariance structure is present at all. This means that in our real data set we can try to estimate Pagel's lambda for our covariance matrices computed from quartet data, and if the covariance structure we create does not fit our data at all then a zero lambda will be estimated and the effect of our covariance structure will be removed.
End of explanation
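As a reminder (my note), the lambda transform used below only rescales the off-diagonal elements of the correlation matrix, so $\lambda = 0$ removes all covariance and $\lambda = 1$ leaves it unchanged,
\begin{equation}V_{ij}(\lambda) = \lambda V_{ij} \;\; (i \neq j), \qquad V_{ii}(\lambda) = V_{ii}\end{equation}
which is exactly what covmat*lam followed by np.fill_diagonal(tmat, 1.0) does in the functions that follow.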
def get_lik_lambda(lam, data, covmat):
a function that can be optimized with ML to find lambda
tmat = covmat*lam
np.fill_diagonal(tmat, 1.0)
_, LL = rModelFit2(data, covmat=tmat)
## return as the NEGATIVE LL to minimze func
return -1*LL
def estimate_lambda(data, covmat):
uses fminbound to estimate lambda in [0, 1]
return fminbound(get_lik_lambda,
0, 1,
args=(data, covmat),
xtol=0.001, maxfun=25)
Explanation: A function to optimize lambda
On this simulated data there is little power to detect any covariance structure (lambda fit) because the data are basically just noise. But you'll see below that on the simulated RAD data sets it fits very well.
End of explanation
print "\nVCV from tree -- entered as VCV -- transformed by estimated lambda"
lam = estimate_lambda(simdata, simmat)
mat = simmat * lam
np.fill_diagonal(mat, 1.0)
df, LL = rModelFit(simdata, covmat=mat)
print df
print "lambda", lam
print "log-likelihood", LL
print "---"*20
Explanation: Fit with lambda
When we fit the model using our estimated lambda, which in this case is
zero since the data were simulated randomly (with no respect to phylogeny), the
model fit is the same as above when there is no covariance structure.
This is good news. We will penalize this model for the extra parameter
using AIC when comparing models.
End of explanation
def getarray(locifile, treefile):
get presence/absence matrix from .loci file
(pyrad v3 format) ordered by tips on the tree
## parse the loci file
infile = open(locifile)
loci = infile.read().split("\n//")[:-1]
## order (ladderize) the tree
tree = ete3.Tree(treefile, format=3)
tree.ladderize()
## assign numbers to internal nodes
nodes = tree.iter_descendants()
nodenum = 0
for node in nodes:
if not node.is_leaf():
node.name = nodenum
nodenum += 1
## get tip names
names = tree.get_leaf_names()
## make empty matrix
lxs = np.zeros((len(names), len(loci)), dtype=np.uint32)
## fill the matrix
for loc in xrange(len(loci)):
for seq in loci[loc].split("\n"):
if ">" in seq:
## drop _0 from sim names if present
seq = seq.rsplit("_0 ", 1)[0]
lxs[names.index(seq.split()[0][1:]),loc] += 1
infile.close()
return lxs, tree
Explanation: 2. Write functions to get stats info from pyrad outputs
Here we build a large data frame that will store how many loci are shared among all sets of quartets, what the phylogenetic distance spanned by each quartet is (pdist), and how much input data each quartet sample had (inputreads).
Parse nloci (shared) from pyrad output
End of explanation
def build_df4_parallel(tree, lxs, s2file, lbview):
Builds a data frame for quartets in parallel. A less generalized
form of the 'buildarray' function, and much faster. Returns a
data frame with n-shared-loci, median-input-reads, phylo-dist.
## get number of taxa
names = tree.get_leaf_names()
## read in step2 stats to get inputreads info, correct _0 in names for simdata
res2 = pd.read_table(s2file, header=0, index_col=0, nrows=len(names))
res2.index = [i[:-2] if i.endswith("_0") else i for i in res2.index]
inputdat = res2["passed.total"].reindex(tree.get_leaf_names())
## create empty array of nquart rows and 7 columns
nsets = sum(1 for _ in itertools.combinations(xrange(len(names)), 4))
taxonset = itertools.combinations(xrange(len(names)), 4)
arr = np.zeros((nsets, 7), dtype=np.float32)
## iterate over sampled sets and fill array.
asyncs = {}
sub = 0
while 1:
## grab 100 rows
hund = np.array(list(itertools.islice(taxonset, 100)))
## submit to engine
if np.any(hund):
asyncs[sub] = lbview.apply(fillrows, *[tree, names, inputdat, hund, lxs])
sub += 100
else:
break
## wait for jobs to finish and enter into the results array
lbview.wait()
taxonset = itertools.combinations(xrange(len(names)), 4)
for idx in xrange(nsets):
arr[idx, :4] = taxonset.next()
for idx in xrange(0, sub, 100):
arr[idx:idx+100, 4:] = asyncs[idx].get()
## dress up the array as a dataframe
columns=["p1", "p2", "p3", "p4", "nloci", "inputreads", "pdist"]
df = pd.DataFrame(arr, columns=columns)
## convert quartet indices to ints for prettier printing
df[["p1", "p2", "p3", "p4"]] = df[["p1", "p2", "p3", "p4"]].astype(int)
return df
def fillrows(tree, names, inputdat, tsets, lxs):
takes 100 row elements in build df4
## output array
arr = np.zeros((tsets.shape[0], 3), dtype=np.float64)
## get number of loci shared by the set
for ridx in xrange(tsets.shape[0]):
tset = tsets[ridx]
colsums = lxs[tset, :].sum(axis=0)
lshare = np.sum(colsums==4)
## get total tree length separating these four taxa
t = copy.deepcopy(tree)
t.prune([names[i] for i in tset], preserve_branch_length=True)
pdist = sum([i.dist for i in t])
## get min input reads
inputreads = np.median([inputdat[i] for i in tset])
## fill arr (+1 ensures no log(0) = -inf)
arr[ridx] = [np.log(lshare+1), np.log(inputreads+1), pdist]
## return array with 100 values
return arr
Explanation: Get pdist and median inputreads for all(!) quartets
End of explanation
## define a class object to store data in
class dataset():
def __init__(self, name):
self.name = name
self.files = fileset()
## define a class object to store file locations
class fileset(dict):
checks that data handles exist and stores them
def __getattr__(self, name):
if name in self:
return self[name]
else:
raise AttributeError("No such attribute: " + name)
def __setattr__(self, name, value):
if os.path.exists(value):
self[name] = value
else:
raise AttributeError("bad file name " + value)
Explanation: A simple Class object to store our results in
End of explanation
def get_path(node, mrca):
get branch length path from tip to chosen node (mrca)
path = set()
while 1:
## check that tips have not coalesced
if not node == mrca:
path.add((node, node.up))
node = node.up
else:
return path
def calculate_covariance(intree, maxarr=1000):
get covariance matrix measuring shared branch lengths among quartets,
if total number of quartets for a tree is >1000 then randomly sample
1000 quartets instead.
## tree copy
tree = copy.deepcopy(intree)
tree.unroot()
## create a large empty matrix
tt = tree.get_leaves()
nsets = sum(1 for _ in itertools.combinations(range(len(tt)), 4))
## we're not gonna worry about memory for now, and so make a list
## otherwise we would work with iterators
fullcombs = list(itertools.combinations(range(len(tt)), 4))
## either sample all quarets or a random maxarr
if nsets <= maxarr:
quarts = np.zeros((nsets, 4), dtype=np.uint16)
arr = np.zeros((nsets, nsets), dtype=np.float64)
ridx = np.arange(nsets)
for i in xrange(nsets):
quarts[i] = fullcombs[i]
arrsize = nsets
else:
quarts = np.zeros((maxarr, 4), dtype=np.uint16)
arr = np.zeros((maxarr, maxarr), dtype=np.float64)
## randomly sample 1000 indices within maxarr
ridx = np.random.choice(nsets, maxarr)
for i in xrange(maxarr):
quarts[i] = fullcombs[ridx[i]]
arrsize = maxarr
## iterate over each comb to compare
for idx in xrange(arrsize):
## get path to quartet tips
set1 = [tt[i] for i in quarts[idx]]
mrca1 = set1[0].get_common_ancestor(set1)
edges1 = [get_path(node, mrca1) for node in set1]
for cdx in xrange(idx, arrsize):
## get path to quartet tips
set2 = [tt[i] for i in quarts[cdx]]
mrca2 = set2[0].get_common_ancestor(set2)
edges2 = [get_path(node, mrca2) for node in set2]
## which edges are shared
a = set([tuple(i) for i in itertools.chain(*edges1)])
b = set([tuple(i) for i in itertools.chain(*edges2)])
shared = set.intersection(a, b)
## save the branch lengths
sumshare = sum([tree.get_distance(edg[0], edg[1]) for edg in shared])
arr[idx, cdx] = sumshare
#print idx,
return arr, ridx
def check_covariance(covar):
tests covariance matrix for positive definite
## fill the lower triangle of matrix symmetrically
mat = np.array(covar)
upidx = np.triu_indices_from(covar)
mat[(upidx[1], upidx[0])] = covar[upidx]
## is the matrix symmetric?
assert np.allclose(mat, mat.T), "matrix is not symmetric"
## is it positive definite?
    test = 0
    ## keep inflating the diagonal slightly until the matrix is positive
    ## definite, but give up after 20 attempts rather than looping forever
    while not np.all(np.linalg.eigvals(mat) > 0):
        didx = np.diag_indices_from(mat)
        mat[didx] = np.diag(mat) * 1.1
        test += 1
        assert test <= 20, "matrix is not positive definite"
assert np.all(np.linalg.eigvals(mat) > 0), "matrix is not positive definite"
return mat
Explanation: Calculate a covariance matrix from shared edges among quartets
This is of course different from the VCV we inferred from a tree structure in the example at the beginning of this notebook. Here our data points are not tips of a tree but rather quartets. And we create a covariance matrix that measures the amount of shared edges among sampled quartets.
End of explanation
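A toy check of the idea (my own illustration, not part of the original analysis): on a small 6-taxon tree the 15 possible quartets give a 15 x 15 matrix whose entries are the summed lengths of the branches shared by each pair of induced quartet subtrees.
## the newick string and names here are made up purely for this example
toytree = ete3.Tree("((a:1,b:1):1,((c:1,d:1):1,(e:1,f:1):1):1);")
toycov, toyidx = calculate_covariance(toytree, maxarr=15)
print toycov.shape, toycov[0, 0]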
def fitmodels(tree, df4, nsamples):
Calculates covar, checks matrix, fits models,
and return arrays for with and without covar
## calculate covariance of (nsamples) random data points
covar, ridx = calculate_covariance(tree, nsamples)
## get symmetric matrix and test for positive definite
mat = check_covariance(covar)
## subsample a reasonable number of data points
subdf4 = df4.loc[ridx, :]
## estimate lambda to be used in model fits
## I'm using R to convert cov2cor
%R -i mat
%R mm <-cov2cor(mat)
%R -o mm
lam = estimate_lambda(subdf4, mm)
## transform corr matrix with lambda
mat = mm*lam
np.fill_diagonal(mat, 1.0)
## fit models with covar
ndf, nLL = rModelFit(subdf4)
wdf, wLL = rModelFit(subdf4, covmat=mat)
## return two arrays
return wdf, wLL, nLL, lam #ndf, nLL, wdf, wLL
Explanation: Function to run model functions together
This allows us to make one call to run jobs in parallel
End of explanation
## make a new directory for the subsampled fastqs
! mkdir -p /home/deren/Documents/RADsims/Tbal_rad_varcov/fastq/
## grab the no-missing fastqs
fastqs = glob.glob("/home/deren/Documents/RADsims/Tbal_rad_covfull/fastq/s*")
for fastq in fastqs:
## create a new output file
_, handle = os.path.split(fastq)
outfile = gzip.open(
os.path.join(
"/home/deren/Documents/RADsims/Tbal_rad_varcov/fastq",
handle), 'w')
## grab a random proportion of reads from this data set (0-100%)
p = np.random.uniform(0.1, 0.9)
## iterate over file 4-lines at a time.
infile = gzip.open(fastq, 'r')
qiter = itertools.izip(*[iter(infile)]*4)
## sample read with probability p
kept = 0
while 1:
try:
if np.random.binomial(1, p):
outfile.write("".join(qiter.next()))
kept += 1
else:
_ = qiter.next()
except StopIteration:
break
print '{} sampled at p={:.2f} kept {} reads'.format(handle, p, kept)
infile.close()
outfile.close()
%%bash
## assemble the data set in pyrad
rm params.txt
pyrad -n >> log.txt 2>&1
sed -i '/## 1. /c\Tbal_rad_varcov ## 1. working dir ' params.txt
sed -i '/## 2. /c\ ## 2. data loc ' params.txt
sed -i '/## 3. /c\ ## 3. Bcode ' params.txt
sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt
sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt
sed -i '/## 11. /c\rad ## 11. datatype ' params.txt
sed -i '/## 12. /c\2 ## 12. minCov ' params.txt
sed -i '/## 13. /c\10 ## 13. maxSH' params.txt
sed -i '/## 14. /c\Tbal ## 14. outname' params.txt
sed -i '/## 18./c\Tbal_rad_varcov/fastq/*.gz ## sorted data ' params.txt
sed -i '/## 24./c\99 ## 24. maxH' params.txt
sed -i '/## 30./c\n,p,s ## 30. out format' params.txt
pyrad -p params.txt -s 234567 >> log.txt 2>&1
## load the data files
Tbalvarcov = dataset("Tbalvarcov")
Tbalvarcov.files.loci4 = "/home/deren/Documents/RADsims/Tbal_rad_varcov/outfiles/Tbal.loci"
Tbalvarcov.files.tree = "/home/deren/Documents/RADsims/Tbal.tre"
Tbalvarcov.files.s2 = "/home/deren/Documents/RADsims/Tbal_rad_varcov/stats/s2.rawedit.txt"
Explanation: 3. Testing on a simulated RAD data set
In this case we know what the source of missing data is, either mutation (drop) or random (rand).
For a large tree like this (64 tips) this takes quite a while to run (~20 minutes). This is the case even when we only randomly sample 200 quartets out of the possible ~650K. Our solution will be to do many repetitions of subsampling and to parallelize this to get a good estimate quickly. Fortunately our largest empirical data set is about the same size as these simulated data sets, so this gives us a good idea for run times (still pretty long). To repeat this analysis you will have to change the file paths. The files are created in the simulation notebook.
Simulate a new data set with variable missing data
I'm simulating a new data set for this since the 'random' missing data sets we made in the sim notebook all have very similar amounts of input data. To get signal from the data we need the input data to vary more significantly among samples. So I simulate some new data here with greater variance.
More specifically, I will take data from the 'fullcov' data sets which had no missing data
and randomly remove some DIFFERENT proportion of data from each sample. This is different
from our low coverage simulations in which we removed the same proportion of data from
each sample, but it was randomly missing from different loci.
New balanced tree data set
End of explanation
## balanced tree with only phylo missing data.
Tbaldrop = dataset("Tbaldrop")
Tbaldrop.files.loci4 = "/home/deren/Documents/RADsims/Tbal_rad_drop/outfiles/Tbal.loci"
Tbaldrop.files.tree = "/home/deren/Documents/RADsims/Tbal.tre"
Tbaldrop.files.s2 = "/home/deren/Documents/RADsims/Tbal_rad_drop/stats/s2.rawedit.txt"
Explanation: For the mutation-disruption data set we can re-use the sim data from notebook 1
End of explanation
## make a new directory for the subsampled fastqs
! mkdir -p /home/deren/Documents/RADsims/Timb_rad_varcov/fastq/
## grab the no-missing fastqs
fastqs = glob.glob("/home/deren/Documents/RADsims/Timb_rad_covfull/fastq/s*")
for fastq in fastqs:
## create a new output file
_, handle = os.path.split(fastq)
outfile = gzip.open(
os.path.join(
"/home/deren/Documents/RADsims/Timb_rad_varcov/fastq",
handle), 'w')
## grab a random proportion of reads from this data set (0-100%)
p = np.random.uniform(0.1, 0.9)
## iterate over file 4-lines at a time.
infile = gzip.open(fastq, 'r')
qiter = itertools.izip(*[iter(infile)]*4)
## sample read with probability p
kept = 0
while 1:
try:
if np.random.binomial(1, p):
outfile.write("".join(qiter.next()))
kept += 1
else:
_ = qiter.next()
except StopIteration:
break
print '{} sampled at p={:.2f} kept {} reads'.format(handle, p, kept)
infile.close()
outfile.close()
%%bash
## assemble the data set in pyrad
rm params.txt
pyrad -n >> log.txt 2>&1
sed -i '/## 1. /c\Timb_rad_varcov ## 1. working dir ' params.txt
sed -i '/## 2. /c\ ## 2. data loc ' params.txt
sed -i '/## 3. /c\ ## 3. Bcode ' params.txt
sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt
sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt
sed -i '/## 11. /c\rad ## 11. datatype ' params.txt
sed -i '/## 12. /c\2 ## 12. minCov ' params.txt
sed -i '/## 13. /c\10 ## 13. maxSH' params.txt
sed -i '/## 14. /c\Timb ## 14. outname' params.txt
sed -i '/## 18./c\Timb_rad_varcov/fastq/*.gz ## sorted data ' params.txt
sed -i '/## 24./c\99 ## 24. maxH' params.txt
sed -i '/## 30./c\n,p,s ## 30. out format' params.txt
pyrad -p params.txt -s 234567 >> log.txt 2>&1
## imbalanced tree with only phylo missind data.
Timbvarcov = dataset("Timbvarcov")
Timbvarcov.files.loci4 = "/home/deren/Documents/RADsims/Timb_rad_varcov/outfiles/Timb.loci"
Timbvarcov.files.tree = "/home/deren/Documents/RADsims/Timb.tre"
Timbvarcov.files.s2 = "/home/deren/Documents/RADsims/Timb_rad_varcov/stats/s2.rawedit.txt"
Explanation: New Imbalanced tree data set
End of explanation
## balanced tree with only phylo missind data.
Timbdrop = dataset("Timbdrop")
Timbdrop.files.loci4 = "/home/deren/Documents/RADsims/Timb_rad_drop/outfiles/Timb.loci"
Timbdrop.files.tree = "/home/deren/Documents/RADsims/Timb.tre"
Timbdrop.files.s2 = "/home/deren/Documents/RADsims/Timb_rad_drop/stats/s2.rawedit.txt"
## list of dsets
dsets = [Tbaldrop, Timbdrop, Tbalvarcov, Timbvarcov]
Explanation: Imbalanced tree
End of explanation
## submit parallel [getarray] jobs
asyncs = {}
for dset in dsets:
asyncs[dset.name] = lbview.apply(getarray, *[dset.files.loci4, dset.files.tree])
## collect results
ipyclient.wait()
for dset in dsets:
dset.lxs4, dset.tree = asyncs[dset.name].get()
print dset.name, "\n", dset.lxs4, "\n"
Explanation: Get array of shared loci for each data set
End of explanation
## submit parallel [buildarray] jobs
for dset in dsets:
dset.df4 = build_df4_parallel(dset.tree, dset.lxs4, dset.files.s2, lbview)
## peek at one of the data sets
print dsets[3].df4.head()
Explanation: Build array of model stats for each data set
This takes a few minutes depending on how many CPUs you're running in parallel. One of the arguments to 'build_df4_parallel' is 'lbview', our load_balanced_view of the parallel processors.
End of explanation
for dset in dsets:
for var in ["nloci", "inputreads", "pdist"]:
dset.df4[var] = (dset.df4[var] - dset.df4[var].mean()) / dset.df4[var].std()
## peek again
print dsets[3].df4.head()
Explanation: Mean standardize the arrays
End of explanation
ipyclient[:].push(
dict(
calculate_covariance=calculate_covariance,
check_covariance=check_covariance,
get_path=get_path,
rModelFit=rModelFit,
rModelFit2=rModelFit2,
estimate_lambda=estimate_lambda,
get_lik_lambda=get_lik_lambda
)
)
Explanation: To parallelize the next step we need to send our functions to the remote namespace
A much cleaner way to do this would have been to collect all the functions into a Python module and then just import that. Since I'm writing everything out in this notebook to be more didactic, though, we need to perform this step instead.
End of explanation
## pass objects into R
rdf0 = dsets[0].df4.loc[np.random.choice(range(630000), 1000), :]
rdf1 = dsets[1].df4.loc[np.random.choice(range(630000), 1000), :]
rdf2 = dsets[2].df4.loc[np.random.choice(range(630000), 1000), :]
rdf3 = dsets[3].df4.loc[np.random.choice(range(630000), 1000), :]
baltre = dsets[0].tree.write()
imbtre = dsets[1].tree.write()
%R -i rdf0,rdf1,rdf2,rdf3,baltre,imbtre
%%R -w 400 -h 400
## make tree and plot data
#pdf("simulation_model_fits.pdf")
tre <- read.tree(text=baltre)
plot(tre, 'u', no.margin=TRUE)
plot(rdf0[,c(5,6,7)], main="Balanced tree - phylo missing")
plot(rdf2[,c(5,6,7)], main="Balanced tree - low-cov missing")
tre <- read.tree(text=imbtre)
plot(tre, 'u', no.margin=TRUE)
plot(rdf1[,c(5,6,7)], main="Imbalanced tree - phylo missing")
plot(rdf3[,c(5,6,7)], main="Imbalanced tree - low-cov missing")
#dev.off()
dset = dsets[0]
print dset.name
fitmodels(dset.tree, dset.df4, nsamples=200)
dset = dsets[1]
print dset.name
fitmodels(dset.tree, dset.df4, nsamples=200)
dset = dsets[2]
print dset.name
fitmodels(dset.tree, dset.df4, nsamples=200)
dset = dsets[3]
print dset.name
fitmodels(dset.tree, dset.df4, nsamples=200)
def AICc(LL, k, n):
return (-2*LL) + 2*k + ((2*k * (k+1))/float(n-k-1))
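## Note (added): AICc is the small-sample corrected AIC, used to penalize the
## with-covariance model for its extra (lambda) parameter when comparing fits;
## k is the number of estimated parameters and n the number of sampled quartets.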
Explanation: Plot sim data set with a random 1000 quartets sampled
End of explanation
## store results in this array
ntests = 100
nsamples = 200
## for each test
for dset in dsets:
## create output storage arrays
dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64)
dset.LL = np.zeros((2, ntests), dtype=np.float64)
dset.lam = np.zeros(ntests, dtype=np.float64)
dset.asyncs = {}
## send jobs to get results in parallel
for tidx in xrange(ntests):
dset.asyncs[tidx] = lbview.apply(fitmodels, *[dset.tree, dset.df4, nsamples])
## check progress on running jobs
ipyclient.wait_interactive()
## enter results into results array when finished
for dset in dsets:
## create empty results arrays
dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64)
dset.LL = np.zeros((ntests, 2), dtype=np.float64)
dset.lam = np.zeros(ntests, dtype=np.float64)
for tidx in range(ntests):
if dset.asyncs[tidx].ready():
res = dset.asyncs[tidx].get()
dset.tab[tidx] = res[0]
dset.LL[tidx] = res[1], res[2]
dset.lam[tidx] = res[3]
else:
print "job: [{}, {}] is still running".format(dset.name, tidx)
Explanation: Run 100 replicate subsample models for each data set
End of explanation
def results_table(dset):
tdat = dset.tab.mean(axis=0)
df = pd.DataFrame(
index=["fit"],
data=[
pd.Series([np.mean(dset.LL[:, 0] - dset.LL[:, 1]),
dset.lam.mean(),
tdat[1, 0],
tdat[1, 3],
tdat[2, 0],
tdat[2, 3]],
index=["deltaAIC", "lambda",
"raw_coeff", "raw_P",
"phy_coeff", "phy_P"
]),
])
return df
for dset in dsets:
print dset.name, "---"*23
print results_table(dset)
print "---"*27, "\n"
Explanation: Simulated data sets results
In all of the data sets the phylo corrected model was a better fit to the data by 30-90 AIC points. When data was missing from low sequence coverage it was best predicted by the inputreads, and when data was missing from mutation-disruption it was best explained by phylo distances.
End of explanation
## get a stats module
import scipy.stats as st
def get_CI(a):
mean = np.mean(a)
interval = st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))
return mean, interval[0], interval[1]
for dset in dsets:
print dset.name
print "LL ", get_CI(dset.LL[:,0]-dset.LL[:,1])
print "lambda", get_CI(dset.lam)
print "raw_coeff", get_CI(dset.tab[:, 1, 0])
print "raw_P", get_CI(dset.tab[:, 1, 3])
print "phy_coeff", get_CI(dset.tab[:, 2, 0])
print "phy_P", get_CI(dset.tab[:, 2, 3])
print ""
Explanation: confidence intervals
The fit for this data set yields a negative AIC both with and without a covariance matrix. This shows that the amount of input data (raw) is a better predictor of shared data between samples than is their phylogenetic distance. See the plot below.
End of explanation
## data set 1 (Viburnum)
data1 = dataset("data1")
data1.files.loci4 = "/home/deren/Documents/RADmissing/empirical_1/fullrun/outfiles/empirical_1_full_m4.loci"
data1.files.tree = "/home/deren/Documents/RADmissing/empirical_1/fullrun/RAxML_bipartitions.empirical_1_full_m4"
data1.files.s2 = "/home/deren/Documents/RADmissing/empirical_1/fullrun/stats/s2.rawedit.txt"
## data set 2 (Phrynosomatidae)
data2 = dataset("data2")
data2.files.loci4 = "/home/deren/Documents/RADmissing/empirical_2/outfiles/empirical_2_m4.loci"
data2.files.tree = "/home/deren/Documents/RADmissing/empirical_2/RAxML_bipartitions.empirical_2"
data2.files.s2 = "/home/deren/Documents/RADmissing/empirical_2/stats/s2.rawedit.txt"
## data set 3 (Quercus)
data3 = dataset("data3")
data3.files.loci4 = "/home/deren/Documents/RADmissing/empirical_3/outfiles/empirical_3_m4.loci"
data3.files.tree = "/home/deren/Documents/RADmissing/empirical_3/RAxML_bipartitions.empirical_3"
data3.files.s2 = "/home/deren/Documents/RADmissing/empirical_3/stats/s2.rawedit.txt"
## data set 4 (Orestias)
data4 = dataset("data4")
data4.files.loci4 = "/home/deren/Documents/RADmissing/empirical_4/outfiles/empirical_4_m4.loci"
data4.files.tree = "/home/deren/Documents/RADmissing/empirical_4/RAxML_bipartitions.empirical_4"
data4.files.s2 = "/home/deren/Documents/RADmissing/empirical_4/stats/s2.rawedit.txt"
## data set 5 (Heliconius)
data5 = dataset("data5")
data5.files.loci4 = "/home/deren/Documents/RADmissing/empirical_5/outfiles/empirical_5_m4.loci"
data5.files.tree = "/home/deren/Documents/RADmissing/empirical_5/RAxML_bipartitions.empirical_5"
data5.files.s2 = "/home/deren/Documents/RADmissing/empirical_5/stats/s2.rawedit.txt"
## data set 6 (Finches)
data6 = dataset("data6")
data6.files.loci4 = "/home/deren/Documents/RADmissing/empirical_6/outfiles/empirical_6_m4.loci"
data6.files.tree = "/home/deren/Documents/RADmissing/empirical_6/RAxML_bipartitions.empirical_6"
data6.files.s2 = "/home/deren/Documents/RADmissing/empirical_6/stats/s2.rawedit.txt"
## data set 7 (Danio)
data7 = dataset("data7")
data7.files.loci4 = "/home/deren/Documents/RADmissing/empirical_7/outfiles/empirical_7_m4.loci"
data7.files.tree = "/home/deren/Documents/RADmissing/empirical_7/RAxML_bipartitions.empirical_7"
data7.files.s2 = "/home/deren/Documents/RADmissing/empirical_7/stats/s2.rawedit.txt"
## data set 8 (Barnacles)
data8 = dataset("data8")
data8.files.loci4 = "/home/deren/Documents/RADmissing/empirical_8/outfiles/empirical_8_m4.loci"
data8.files.tree = "/home/deren/Documents/RADmissing/empirical_8/RAxML_bipartitions.empirical_8"
data8.files.s2 = "/home/deren/Documents/RADmissing/empirical_8/stats/s2.rawedit.txt"
## data set 9 (Ohomopterus)
data9 = dataset("data9")
data9.files.loci4 = "/home/deren/Documents/RADmissing/empirical_9/outfiles/empirical_9_m4.loci"
data9.files.tree = "/home/deren/Documents/RADmissing/empirical_9/RAxML_bipartitions.empirical_9"
data9.files.s2 = "/home/deren/Documents/RADmissing/empirical_9/stats/s2.rawedit.txt"
## data set 10 (Pedicularis)
data10 = dataset("data10")
data10.files.loci4 = "/home/deren/Documents/RADmissing/empirical_10/outfiles/empirical_10_m4.loci"
data10.files.tree = "/home/deren/Documents/RADmissing/empirical_10/RAxML_bipartitions.empirical_10_m4"
data10.files.s2 = "/home/deren/Documents/RADmissing/empirical_10/stats/s2.rawedit.txt"
## put all in a list
datasets = [data1, data2, data3, data4, data5,
data6, data7, data8, data9, data10]
Explanation: How to deal with large matrices (absurd run times)
OK, so in our test example it takes about 10 minutes to compute a matrix with only 4000 elements, meaning we can expect that a matrix of several hundred thousand elements will pretty much never finish. One work around for this is to take a sub-sampling approach. The full matrix for 13 taxa is ~700 induced quartets, while the full data set for 65 taxa is ~700K. For the latter we will subsample 100 matrices composed of 1000 random quartets. Then we will compute the covariance matrix of the sampled quartets and fit a regression model. Finally, the results over all 100 subsampled replicates will be reported as a 95% confidence interval.
Get all 10 empirical data sets
End of explanation
## submit parallel [getarray] jobs
asyncs = {}
for dset in datasets:
asyncs[dset.name] = lbview.apply(getarray, *[dset.files.loci4, dset.files.tree])
## collect results
ipyclient.wait()
for dset in datasets:
dset.lxs4, dset.tree = asyncs[dset.name].get()
print dset.name, "\n", dset.lxs4, "\n"
## submit parallel [buildarray] jobs
for dset in datasets:
dset.df4 = build_df4_parallel(dset.tree, dset.lxs4, dset.files.s2, lbview)
## peek at one of the data sets
print datasets[0].df4.head()
for dset in datasets:
for var in ["nloci", "inputreads", "pdist"]:
dset.df4[var] = (dset.df4[var] - dset.df4[var].mean()) / dset.df4[var].std()
## peek again
print datasets[0].df4.head()
## pass objects into R
rdf0 = datasets[0].df4.loc[np.random.choice(range(630000), 1000), :]
rdftre = datasets[0].tree.write()
%R -i rdf0,rdftre
%%R -w 400 -h 400
## make tree and plot data
#pdf("simulation_model_fits.pdf")
tre <- read.tree(text=rdftre)
plot(tre, 'u', no.margin=TRUE, show.tip.label=FALSE)
plot(rdf0[,c(5,6,7)], main="Viburnum tree -- empirical")
## store results in this array
ntests = 100
nsamples = 200
## for each test
for dset in datasets:
## create output storage arrays
dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64)
dset.LL = np.zeros((2, ntests), dtype=np.float64)
dset.lam = np.zeros(ntests, dtype=np.float64)
dset.asyncs = {}
## send jobs to get results in parallel
for tidx in xrange(ntests):
dset.asyncs[tidx] = lbview.apply(fitmodels, *[dset.tree, dset.df4, nsamples])
ipyclient.wait_interactive()
Explanation: Create large data frames for each data set
End of explanation
## enter results into results array when finished
for dset in datasets:
## create empty results arrays
dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64)
dset.LL = np.zeros((ntests, 2), dtype=np.float64)
dset.lam = np.zeros(ntests, dtype=np.float64)
for tidx in range(ntests):
if dset.asyncs[tidx].ready():
res = dset.asyncs[tidx].get()
dset.tab[tidx] = res[0]
dset.LL[tidx] = res[1], res[2]
dset.lam[tidx] = res[3]
else:
print "job: [{}, {}] is still running".format(dset.name, tidx)
Explanation: Collect the results
End of explanation
for dset in datasets:
print dset.name, "---"*23
print results_table(dset)
print "---"*27, "\n"
Explanation: Print results means
End of explanation
## pass objects into R
rdf0 = datasets[5].df4.loc[np.random.choice(range(10000), 1000), :]
rdftre = datasets[5].tree.write()
%R -i rdf0,rdftre
%%R -w 400 -h 400
## make tree and plot data
#pdf("simulation_model_fits.pdf")
tre <- read.tree(text=rdftre)
plot(tre, 'u', no.margin=TRUE, show.tip.label=FALSE)
plot(rdf0[,c(5,6,7)], main="Finch tree -- empirical")
Explanation: So, for example, why is this one a poor fit for pdist?
There are three clouds of points corresponding to comparisons within and between the major clades. Some quartets separated by little phylo distance share tons of data, while some separated by large phylo distances share very few loci. It comes down to whether those data points include the same few samples or not. When we control for their non-independence, those clouds of points represent much less information, and in essence the pattern disappears.
End of explanation
for dset in datasets:
print dset.name
print "LL ", get_CI(dset.LL[:,0]-dset.LL[:,1])
print "lambda", get_CI(dset.lam)
print "raw_coeff", get_CI(dset.tab[:, 1, 0])
print "raw_P", get_CI(dset.tab[:, 1, 3])
print "phy_coeff", get_CI(dset.tab[:, 2, 0])
print "phy_P", get_CI(dset.tab[:, 2, 3])
print ""
Explanation: Confidence intervals
End of explanation
## pass objects into R
dset = datasets[0]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Vib.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Viburnum")
dev.off()
dset = datasets[1]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Phryn.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Phrynosomatidae")
dev.off()
dset = datasets[2]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Quer.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Quercus")
dev.off()
dset = datasets[3]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Orest.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Orestias")
dev.off()
dset = datasets[4]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Helic.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Heliconius")
dev.off()
dset = datasets[5]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Finch.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Finches")
dev.off()
dset = datasets[6]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Danio.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Danio")
dev.off()
dset = datasets[7]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Barnacles.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Barnacles")
dev.off()
dset = datasets[8]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Ohomo.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Ohomopterus")
dev.off()
dset = datasets[9]
rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :]
%R -i rdf
%%R -w 400 -h 400
## make tree and plot data
pdf("empscatter_Pedi.pdf", height=5, width=5)
plot(rdf[,c(5,6,7)], main="Pedicularis")
dev.off()
Explanation: Make all plots for supp fig 4
End of explanation |
3,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Polynomials
Some of the equations we've looked at so far include expressions that are actually polynomials; but what is a polynomial, and why should you care?
A polynomial is an algebraic expression containing one or more terms that each meet some specific criteria. Specifically
Step1: Adding Polynomials
When you add two polynomials, the result is a polynomial. Here's an example
Step2: Subtracting Polynomials
Subtracting polynomials is similar to adding them but you need to take into account that one of the polynomials is a negative.
Consider this expression
Step3: Multiplying Polynomials
To multiply two polynomials, you need to perform the following two steps
Step4: Dividing Polynomials
When you need to divide one polynomial by another, there are two approaches you can take depending on the number of terms in the divisor (the expression you're dividing by).
Dividing Polynomials Using Simplification
In the simplest case, division of a polynomial by a monomial, the operation is really just simplification of a fraction.
For example, consider the following expression
Step5: Dividing Polynomials Using Long Division
Things get a little more complicated for divisors with more than one term.
Suppose we have the following expression | Python Code:
from random import randint
x = randint(1,100)
(x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5)
Explanation: Polynomials
Some of the equations we've looked at so far include expressions that are actually polynomials; but what is a polynomial, and why should you care?
A polynomial is an algebraic expression containing one or more terms that each meet some specific criteria. Specifically:
- Each term can contain:
- Numeric values that are coefficients or constants (for example 2, -5, <sup>1</sup>/<sub>7</sub>)
- Variables (for example, x, y)
- Non-negative integer exponents (for example <sup>2</sup>, <sup>64</sup>)
- The terms can be combined using arithmetic operations - but not division by a variable.
For example, the following expression is a polynomial:
\begin{equation}12x^{3} + 2x - 16 \end{equation}
When identifying the terms in a polynomial, it's important to correctly interpret the arithmetic addition and subtraction operators as the sign for the term that follows. For example, the polynomial above contains the following three terms:
- 12x<sup>3</sup>
- 2x
- -16
The terms themselves include:
- Two coefficients (12 and 2) and a constant (-16)
- A variable (x)
- An exponent (<sup>3</sup>)
A polynomial that contains three terms is also known as a trinomial. Similarly, a polynomial with two terms is known as a binomial and a polynomial with only one term is known as a monomial.
So why do we care? Well, polynomials have some useful properties that make them easy to work with. For example, if you multiply, add, or subtract polynomials, the result is always another polynomial.
Standard Form for Polynomials
Technically, you can write the terms of a polynomial in any order; but the standard form for a polynomial is to start with the highest degree first and constants last. The degree of a term is the highest order (exponent) in the term, and the highest order in a polynomial determines the degree of the polynomial itself.
For example, consider the following expression:
\begin{equation}3x + 4xy^{2} - 3 + x^{3} \end{equation}
To express this as a polynomial in the standard form, we need to re-order the terms like this:
\begin{equation}x^{3} + 4xy^{2} + 3x - 3 \end{equation}
Simplifying Polynomials
We saw previously how you can simplify an equation by combining like terms. You can simplify polynomials in the same way.
For example, look at the following polynomial:
\begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \end{equation}
In this case, we can combine x<sup>3</sup> and 2x<sup>3</sup> by adding them to make 3x<sup>3</sup>. Then we can add -3x and -x (which is really just a shorthand way to say -1x) to get -4x, and then add 8 and -3 to get 5. Our simplified polynomial then looks like this:
\begin{equation}3x^{3} - 4x + 5 \end{equation}
We can use Python to compare the original and simplified polynomials to check them - using an arbitrary random value for x:
End of explanation
from random import randint
x = randint(1,100)
(3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7
Explanation: Adding Polynomials
When you add two polynomials, the result is a polynomial. Here's an example:
\begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation}
Because this is an addition operation, you can simply add all of the like terms from both polynomials. To make this clear, let's first put the like terms together:
\begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation}
This simplifies to:
\begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation}
We can verify this with Python:
End of explanation
from random import randint
x = randint(1,100)
(2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3
Explanation: Subtracting Polynomials
Subtracting polynomials is similar to adding them but you need to take into account that one of the polynomials is a negative.
Consider this expression:
\begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation}
The key to performing this calculation is to realize that the subtraction of the second polynomial is really an expression that adds -1(x<sup>2</sup> - 2x + 2); so you can use the distributive property to multiply each of the terms in the polynomial by -1 (which in effect simply reverses the sign for each term). So our expression becomes:
\begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation}
Which we can solve as an addition problem. First place the like terms together:
\begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation}
Which simplifies to:
\begin{equation}x^{2} - 2x + 3 \end{equation}
Let's check that with Python:
End of explanation
from random import randint
x = randint(1,100)
(x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6
Explanation: Multiplying Polynomials
To multiply two polynomials, you need to perform the following two steps:
1. Multiply each term in the first polynomial by each term in the second polynomial.
2. Add the results of the multiplication operations, combining like terms where possible.
For example, consider this expression:
\begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation}
Let's do the first step and multiply each term in the first polynomial by each term in the second polynomial. The first term in the first polynomial is x<sup>4</sup>, and the first term in the second polynomial is 2x<sup>2</sup>, so multiplying these gives us 2x<sup>6</sup>. Then we can multiply the first term in the first polynomial (x<sup>4</sup>) by the second term in the second polynomial (3x), which gives us 3x<sup>5</sup>, and so on until we've multiplied all of the terms in the first polynomial by all of the terms in the second polynomial, which results in this:
\begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation}
We can verify a match between this result and the original expression with the following Python code:
End of explanation
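If sympy is installed, the multiplication can also be checked symbolically rather than for a single random value of x (an optional extra, not part of the original exercise):
from sympy import symbols, expand
x = symbols('x')
# expand the product and compare it to the expected expanded polynomial
expand((x**4 + 2)*(2*x**2 + 3*x - 3)) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6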
from random import randint
x = randint(1,100)
(4*x + 6*x**2) / (2*x) == 2 + 3*x
Explanation: Dividing Polynomials
When you need to divide one polynomial by another, there are two approaches you can take depending on the number of terms in the divisor (the expression you're dividing by).
Dividing Polynomials Using Simplification
In the simplest case, division of a polynomial by a monomial, the operation is really just simplification of a fraction.
For example, consider the following expression:
\begin{equation}(4x + 6x^{2}) \div 2x \end{equation}
This can also be written as:
\begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation}
One approach to simplifying this fraction is to split it into a separate fraction for each term in the dividend (the expression we're dividing), like this:
\begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation}
Then we can simplify each fraction and add the results. For the first fraction, 2x goes into 4x twice, so the fraction simplifies to 2; and for the second, 6x<sup>2</sup> is 2x multiplied by 3x. So our answer is 2 + 3x:
\begin{equation}2 + 3x\end{equation}
Let's use Python to compare the original fraction with the simplified result for an arbitrary value of x:
End of explanation
from random import randint
x = randint(3,100)
(x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2))
Explanation: Dividing Polynomials Using Long Division
Things get a little more complicated for divisors with more than one term.
Suppose we have the following expression:
\begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation}
Another way of writing this is to use the long-division format, like this:
\begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
We begin long-division by dividing the highest order divisor into the highest order dividend - so in this case we divide x into x<sup>2</sup>. X goes into x<sup>2</sup> x times, so we put an x on top and then multiply it through the divisor:
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation} \;x^{2} -2x \end{equation}
Now we'll subtract the remaining dividend, and then carry down the -3 that we haven't used to see what's left:
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
OK, now we'll divide our highest order divisor into the highest order of the remaining dividend. In this case, x goes into 4x four times, so we'll add a 4 to the top line, multiply it through the divisor, and subtract the remaining dividend:
\begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
\begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation}
We're now left with just 5, which we can't divide further by x - 2; so that's our remainder, which we'll add as a fraction.
The solution to our division problem is:
\begin{equation}x + 4 + \frac{5}{x-2} \end{equation}
Once again, we can use Python to check our answer:
End of explanation |
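As an optional symbolic check of the long division above (assuming sympy is available), sympy's div function returns the quotient and remainder directly:
from sympy import symbols, div
x = symbols('x')
quotient, remainder = div(x**2 + 2*x - 3, x - 2)
print(quotient, remainder)  # expected: x + 4 and 5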
3,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scrapy 3
Step1: As you can see the very first (and brutal) approach can be adding the URLs one-by-one to the start_urls list. The good news is that all URLs are quite similar
Step2: The same, of course, could be achieved using a while loop as follows
Step3: This approach is easy and user firendly, yet it requires you to know the overall number of pages (10, in our case). A smarter solution would be the one that will not requore you to have this information. If you take an attentive look you will notice that there is a Next button on each single page and there is only one page which is missing the Next button | Python Code:
# -*- coding: utf-8 -*-
import scrapy
class QuoteSpider(scrapy.Spider):
name = "quote"
allowed_domains = ["quotes.toscrape.com"]
start_urls = ['http://quotes.toscrape.com/page/1/',
'http://quotes.toscrape.com/page/2/']
def parse(self, response):
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').extract_first(),
'author': quote.css('span small.author::text').extract_first(),
'tags': quote.css('div.tags a.tag::text').extract(),
}
Explanation: Scrapy 3: crawling all pages
The last notebook (scrapy 2) provided Scrapy code for scraping a page from quotes.toscrape.com. Yet, there are several other pages on this website that one may need to scrape. Which means, we have to actually create a Spider that does the same scraping tasks for all the URLs, not just one. That can be implemented in several ways, but first of all, let's start a new project and generate a new spider.
To start a new project, open the command prompt (move to the Data_Scraping folder, if you always do so) and run the following command:
scrapy startproject quote_pages
So now move to the newly created folder and generate a new spider (called quote_all) for getting data from quotes.toscrape.com as follows:
cd quote_pages
scrapy genspider quote_all quotes.toscrape.com
The spider we will create is basically the same one we had before (that scraped the same page and yielded a JSON file), just with some small changes. So let's copy the code from our spider and paste it inside the newly generated quote_all.py file.
End of explanation
# -*- coding: utf-8 -*-
import scrapy
class QuoteSpider(scrapy.Spider):
name = "quote_new"
allowed_domains = ["quotes.toscrape.com"]
start_urls = []
for i in range(1,11):
URL = 'http://quotes.toscrape.com/page/' + str(i) + '/'
start_urls.append(URL)
def parse(self, response):
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').extract_first(),
'author': quote.css('span small.author::text').extract_first(),
'tags': quote.css('div.tags a.tag::text').extract(),
}
Explanation: As you can see the very first (and brutal) approach can be adding the URLs one-by-one to the start_urls list. The good news is that all URLs are quite similar: the only difference is the page number. This means we can construct URLs from three components as follows:
URL = 'http://quotes.toscrape.com/page/' + '1' + '/'
where the 2nd component (in this case 1) is the only variable component. If you check manually, you will see that there are 10 pages overall that include quote data. This means we can create each separate link using the range() function and append them to the empty start_urls list as follows:
start_urls = []
for i in range(1,11):
URL = 'http://quotes.toscrape.com/page/' + str(i) + '/'
start_urls.append(URL)
Thus, the overall function after the abovementioned change will look like this (P.S. also, change the name variable value as we do not want to have 2 scrapers with the same name):
End of explanation
# -*- coding: utf-8 -*-
import scrapy
class QuoteSpider(scrapy.Spider):
name = "quote_new"
allowed_domains = ["quotes.toscrape.com"]
start_urls = []
i=1
while i<11:
URL = 'http://quotes.toscrape.com/page/' + str(i) + '/'
start_urls.append(URL)
i+=1
def parse(self, response):
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').extract_first(),
'author': quote.css('span small.author::text').extract_first(),
'tags': quote.css('div.tags a.tag::text').extract(),
}
Explanation: The same, of course, could be achieved using a while loop as follows:
End of explanation
# -*- coding: utf-8 -*-
import scrapy
class QuoteSpider(scrapy.Spider):
name = "quote_new"
allowed_domains = ["quotes.toscrape.com"]
start_urls = ["http://quotes.toscrape.com/"]
def parse(self, response):
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').extract_first(),
'author': quote.css('span small.author::text').extract_first(),
'tags': quote.css('div.tags a.tag::text').extract(),
}
next_page = response.css('li.next a::attr(href)').extract_first()
new_link = "http://quotes.toscrape.com" + next_page
if next_page is not None:
yield scrapy.Request(new_link)
Explanation: This approach is easy and user friendly, yet it requires you to know the overall number of pages (10, in our case). A smarter solution would be one that does not require this information. If you take an attentive look you will notice that there is a Next button on each single page, and there is only one page which is missing the Next button: the last page. The button includes a hyperlink to the next page. As there is no next page after the last one, there is no Next button on it. This means we can navigate over the pages by finding the hyperlink under the Next button. It can be found with the following code, which uses CSS selectors to find a list item (li) with a class next, then find an <a> tag inside the list item and get the value of its href attribute:
next_page = response.css('li.next a::attr(href)').extract_first()
If we are on the very first page, the value of the next_page variable will be /page/2/. Then this will be the absolute link of the 2nd page:
new_link = 'http://quotes.toscrape.com' + next_page
To finalize the code what we need to do is to first check whether there is any next button (any next_page url) and if so, then yield a new request to the new url as follows:
if next_page is not None:
yield scrapy.Request(new_link)
The code above must be added inside the defined parse() function (but outside the for loop). Thus, the full code will look like this.
End of explanation |
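As a side note (not part of the original tutorial), recent Scrapy versions also provide response.follow, which resolves relative URLs such as /page/2/ for you, so the manual string concatenation can be dropped:
# inside parse(), this replaces the last three lines of the spider above
next_page = response.css('li.next a::attr(href)').extract_first()
if next_page is not None:
    # response.follow builds the absolute URL and schedules the next request
    yield response.follow(next_page, callback=self.parse)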
3,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Step1: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
Step2: Lab Task 1
Step3: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
Step4: Lab Task 2
Step5: Lab Task 3
Step6: Lab Task 4
Step7: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
Step8: Lab Task 5 | Python Code:
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
Explanation: Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
!ls -l ../data/toy_data/*.csv
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
End of explanation
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
Explanation: Lab Task 1: Use tf.data to read the CSV files
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
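A quick, optional sanity check of the input pipeline (assuming TF 2.x eager execution) is to pull a single small batch and print it:
# peek at one batch of two examples to verify the features and label look sensible
tempds = load_dataset('../data/toy_data/taxi-traffic-train*', batch_size=2)
for features, label in tempds.take(1):
    print(label)
    print(features)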
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# TODO 2
# input layer
inputs = # TODO -- Your code here.
feature_columns = # TODO -- Your code here.
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = # TODO -- Your code here.
# two hidden layers of [32, 8] just like in the BQML DNN
h1 = # TODO -- Your code here.
h2 = # TODO -- Your code here.
# final output is a linear activation because this is regression
output = # TODO -- Your code here.
model = # TODO -- Your code here.
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
Explanation: Lab Task 2: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
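For reference, one possible way to fill in the TODOs above (a sketch only; the official solution notebook may differ) is:
# sketch of a completed build_dnn_model body (replaces the TODO lines)
inputs = {colname: tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
          for colname in INPUT_COLS}
feature_columns = {colname: tf.feature_column.numeric_column(colname)
                   for colname in INPUT_COLS}
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)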
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
Explanation: Lab Task 3: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 32 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
Explanation: Lab Task 4: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
# TODO 5
# TODO -- Your code here.
Explanation: Lab Task 5: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation |
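One possible completion of TODO 5 (a sketch; the ride below uses illustrative coordinates and must match the model's named inputs):
# predict the fare for a single, hard-coded cab ride
model.predict({
    'pickup_longitude': tf.convert_to_tensor([-73.982683]),
    'pickup_latitude': tf.convert_to_tensor([40.742104]),
    'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
    'dropoff_latitude': tf.convert_to_tensor([40.755174]),
    'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)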
3,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
alphabet = 'abcdefghijklmnopqrstuvwxyz'
dic = {}
for i in alphabet:
if i in s:
dic[i] = s.count(i)/len(s)
return dic
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
def entropy(d):
Compute the entropy of a dict d whose values are probabilities.
ps = np.array([d[x] for x in d])
H = -sum(ps*np.log2(ps))
return H
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \sum_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
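As a quick numeric illustration of the formula (optional, consistent with the asserts above), the entropy of a fair two-outcome distribution is exactly 1 bit:
import numpy as np
p = np.array([0.5, 0.5])
print(-np.sum(p * np.log2(p)))  # prints 1.0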
def calc(s):
print(entropy(char_probs(s)))
interact(calc,s='Your Text');
assert True # use this for grading the pi digits histogram
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
3,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planet Analytics API Tutorial
Summary Statistics
Step1: 2. Post a stats job request
a) Check API Connection
Note
Step2: b) Select your subscription
The analytics stats API enables you to create summary stats reports for your analytics subscriptions. You will need the id of a subscription of interest in order to make a stats request. This notebook uses the Singapore Strait ships subscription by default (f3aef23c-a540-458e-a3b5-979b7920d2ea)
Step3: d) Post a stats report job request to the AF API
Step4: 3. Poll the stats job endpoint
Step5: 4. Get the job report results
Step6: 5. Restructure the results into a pandas dataframe
Step7: 6. Visualize the time series
Step8: 7. Normalize and clean the report data
The graph above is likely very noisy due to clouds, haze, and a variation in the amount of imagery per day. The steps below normalize the object count by the estimated area of usable imagery that the model observed. Planet currently provides two versions of an unusable data mask (UDM) for most scenes. Udm (version 1) is less accurate but is available for every scene. Udm2 is more accurate but is sometimes unavailable. The steps below use udm2 to estimate the percentage of pixels that are usable (i.e. not cloudy), and the original udm to estimate the total imaged area per day.
Step9: a) Remove time points that contain < 50% clear imagery
On cloudy days results are less likely to be accurate.
Step10: b) Remove time points where imagery coverage is < 50%
If only a small section of the AOI contains imagery, inferring the object count for the whole AOI is less accurate.
Step11: c) Estimate usable area per time point
Models can often detect objects through light haze and sometimes through heavy haze, so we use that rough information to create an estimated "usable percentage" metric. You can adjust the parameters if you know the model your using performs better or worse in haze.
Step12: d) Normalize the object count
Create a normalized object count by getting the object count per usable square meter and multiplying by the total aoi size.
Step13: e) Vizualize the normalized data | Python Code:
!pip install hvplot
import os
import requests
import json
import pprint
import time
import pandas as pd
import holoviews as hv
import hvplot.pandas
from bokeh.models.formatters import DatetimeTickFormatter
from collections import defaultdict
Explanation: Planet Analytics API Tutorial
Summary Statistics: Ships
Overview
Introduction
Post a stats job request
Poll the stats job endpoint
Get the job report results
Restructure the results into a pandas dataframe
Visualize the time series
Normalize and clean the report data
1. Introduction
This notebook demonstrates how to request ship summary statistics for a subscription using the Analytics Feeds Stats API and visualize them as time series, enabling further analyses including patterns of life, development trends and anomaly detection. Access to an object detection subscription (ships or planes) is required to run the notebook.
The workflow involves:
- Posting a stats job request
- Polling the job stats endpoint
- Getting the job report results
- Restructuring the results into a pandas dataframe
- Normalizing and cleaning the report data
- Visualizing the time series
Import and install external dependencies
This notebook requires hvplot, which may not be available in the main notebook docker image.
End of explanation
ANALYTICS_BASE_URL = 'https://api.planet.com/analytics/'
# change this line if your API key is not set as an env var
API_KEY = os.environ['PL_API_KEY']
# alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# set up a reusable session with required headers
session = requests.Session()
session.headers.update({'content-type':'application/json','Authorization': 'api-key ' + API_KEY})
# make a request to the analytics api
resp = session.get(ANALYTICS_BASE_URL)
if resp.ok:
print("Yay, you are able to connect to the Planet Analytics API!")
else:
print("Something is wrong:", resp.content)
Explanation: 2. Post a stats job request
a) Check API Connection
Note: If you do not have access to the Analytics Feeds API, you may not be able to run through these examples. Contact Sales to learn more.
End of explanation
# Make sure you have access to the subscription
subscription_id = 'f3aef23c-a540-458e-a3b5-979b7920d2ea'
resp = session.get(f"{ANALYTICS_BASE_URL}subscriptions/{subscription_id}")
if not resp.ok:
raise Exception('Bad response:', resp.content)
else:
print("Subscription info:")
print(resp.json())
Explanation: b) Select your subscription
The analytics stats API enables you to create summary stats reports for your analytics subscriptions. You will need the id of a subscription of interest in order to make a stats request. This notebook uses the Singapore Strait ships subscription by default (f3aef23c-a540-458e-a3b5-979b7920d2ea)
End of explanation
request_body = {
"title": "Stats Demo - Ships",
"subscriptionID": subscription_id,
"interval": "day", # most object detection feeds generate results on a daily cadence
# "collection": collection, # remove this line if you want to use the default subscription geometry
# "startTime": start_time, # remove this line if you want to use the default subscription startTime
# "endTime": end_time # remove this line if you want to use the default subscription endTime
}
stats_post_url = ANALYTICS_BASE_URL + 'stats'
job_post_resp = session.post(
stats_post_url,
data=json.dumps(request_body)
)
pprint.pprint(job_post_resp.json())
Explanation: d) Post a stats report job request to the AF API
End of explanation
job_link = job_post_resp.json()['links'][0]['href']
status = "pending"
while status != "completed":
report_status_resp = session.get(
job_link,
)
status = report_status_resp.json()['status']
print(status)
time.sleep(2)
pprint.pprint(report_status_resp.json())
Explanation: 3. Poll the stats job endpoint
End of explanation
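An optional hardening of the polling loop above (purely illustrative) is to add a deadline so a stuck job doesn't block the notebook indefinitely:
# poll with a timeout instead of looping forever
deadline = time.time() + 15 * 60  # give up after 15 minutes
status = "pending"
while status != "completed" and time.time() < deadline:
    status = session.get(job_link).json()['status']
    print(status)
    time.sleep(2)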
report_results_link = report_status_resp.json()['links'][-1]['href']
report_results_link
results_resp = session.get(
report_results_link,
)
print(results_resp.status_code)
Explanation: 4. Get the job report results
End of explanation
def restructure_results(results_json):
cols = results_json['cols']
rows = results_json['rows']
records = []
for r in rows:
rec = defaultdict()
for i, cell in enumerate(r):
rec[cols[i]['label']] = cell
records.append(rec)
df = pd.DataFrame.from_records(records)
df['Start Time'] = pd.to_datetime(df['Start Time'])
df = df.set_index('Start Time')
return df
df = restructure_results(results_resp.json())
df.head()
Explanation: 5. Restructure the results into a pandas dataframe
End of explanation
hv.extension('bokeh')
formatter = DatetimeTickFormatter(months='%b %Y')
df['Total Object Count'].hvplot().options(xformatter=formatter, width=800)
Explanation: 6. Visualize the time series
End of explanation
pd.set_option('precision', 15)
# Get the total area of the subscription or submitted feature (sq m)
submitted_area = df['Submitted Area'][0]
Explanation: 7. Normalize and clean the report data
The graph above is likely very noisy due to clouds, haze, and a variation in the amount of imagery per day. The steps below normalize the object count by the estimated area of usable imagery that the model observed. Planet currently provides two versions of an unusable data mask (UDM) for most scenes. Udm (version 1) is less accurate but is available for every scene. Udm2 is more accurate but is sometimes unavailable. The steps below use udm2 to estimate the percentage of pixels that are usable (i.e. not cloudy), and the original udm to estimate the total imaged area per day.
End of explanation
df['Clear Percentage'] = df['Clear Area (udm2_band_1)'] / df['Total Area (udm2)']
df = df[df['Clear Percentage'] > 0.5]
Explanation: a) Remove time points that contain < 50% clear imagery
On cloudy days results are less likely to be accurate.
End of explanation
df['Imagery Coverage'] = df['Total Area (udm2)'] / submitted_area
df = df[df['Imagery Coverage'] > 0.5]
Explanation: b) Remove time points where imagery coverage is < 50%
If only a small section of the AOI contains imagery, inferring the object count for the whole AOI is less accurate.
End of explanation
# Count 100% of light haze area as usable
light_haze_weight = 1.0
# Count 50% of heavy haze area as usable
heavy_haze_weight = 0.5
# Create a column that estimates the percentage of imagery where the model is expected to perform.
df['Usable Percentage'] = (df['Clear Area (udm2_band_1)'] + (df['Light Haze Area (udm2_band_4)'] * light_haze_weight) + (df['Heavy Haze Area (udm2_band_5)'] * heavy_haze_weight)) / df['Total Area (udm2)']
# Create a column that estimates usable area. In some cases udm2 assets are missing, so the most accurate measurement of total area that the model has seen comes from the udm Total Area column.
df['Usable Area'] = df['Usable Percentage'] * df['Total Area (udm)']
Explanation: c) Estimate usable area per time point
Models can often detect objects through light haze and sometimes through heavy haze, so we use that rough information to create an estimated "usable percentage" metric. You can adjust the parameters if you know the model you're using performs better or worse in haze.
End of explanation
df['Normalized Count'] = round((df['Total Object Count'] / df['Usable Area']) * submitted_area).astype(int)
Explanation: d) Normalize the object count
Create a normalized object count by getting the object count per usable square meter and multiplying by the total aoi size.
End of explanation
max_count = df['Normalized Count'].max()
df['Normalized Count'].hvplot().options(xformatter=formatter, width=800, ylim=(0,max_count + (max_count * .1)))
Explanation: e) Visualize the normalized data
End of explanation |
3,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rich Output
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including
Step1: A few points
Step2: Images
To work with images (JPEG, PNG) use the Image class.
Step3: Returning an Image object from an expression will automatically display it
Step4: Or you can pass an object with a rich representation to display
Step5: An image can also be displayed from raw data or a URL.
Step6: SVG images are also supported out of the box.
Step7: Embedded vs non-embedded Images
By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley.
Step8: Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not todays image.
Step9: Here is today's image from same webcam at Berkeley, (refreshed every minutes, if you reload the notebook), visible only with an active internet connection, that should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
Step11: Of course, if you re-run this Notebook, the two images will be the same again.
HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
Step12: You can also use the %%html cell magic to accomplish the same thing.
Step13: JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
Step14: Pass a string of JavaScript source code to the JavaScript object and then display it.
Step15: The same thing can be accomplished using the %%javascript cell magic
Step17: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples.
Step18: LaTeX
The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax.
You can pass raw LaTeX test as a string to the Math object
Step20: With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray
Step21: Or you can enter LaTeX directly with the %%latex cell magic
Step22: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
Step23: A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as beats occur. This can be auralised as follows
Step24: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load
Step25: Using the nascent video capabilities of modern browsers, you may also be able to display local
videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you;
we will continue testing this and looking for ways to make it more robust.
The following cell loads a local file called animation.m4v, encodes the raw video as base64 for http
transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.
Step26: External sites
You can even embed an entire page from another site in an iframe; for example this is today's Wikipedia
page for mobile users
Step27: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object
Step28: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well. | Python Code:
from IPython.display import display
Explanation: Rich Output
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including:
HTML
JSON
PNG
JPEG
SVG
LaTeX
A single object can declare some or all of these representations; all are handled by IPython's display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
Explanation: A few points:
Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.
If you want to display a particular representation, there are specific functions for that:
End of explanation
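For example (an optional illustration), you can request just the HTML representation of a raw string:
# render only the HTML representation of a raw HTML snippet
display_html('<b>Hello</b> from <i>display_html</i>', raw=True)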
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
i
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
display(i)
Explanation: Or you can pass an object with a rich representation to display:
End of explanation
Image(url='http://python.org/images/python-logo.gif')
Explanation: An image can also be displayed from raw data or a URL.
End of explanation
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
Explanation: SVG images are also supported out of the box.
End of explanation
from IPython.display import Image
img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'
# by default Image data are embedded
Embed = Image(img_url)
# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
Explanation: Embedded vs non-embedded Images
By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley.
End of explanation
Embed
Explanation: Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image.
End of explanation
SoftLinked
Explanation: Here is today's image from the same webcam at Berkeley (refreshed every minute, if you reload the notebook), visible only with an active internet connection; it should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
End of explanation
from IPython.display import HTML
s = <table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
h = HTML(s)
display(h)
Explanation: Of course, if you re-run this Notebook, the two images will be the same again.
HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
from IPython.display import Javascript
Explanation: JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
js = Javascript('alert("hi")');
display(js)
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation
%%javascript
alert("hi");
Explanation: The same thing can be accomplished using the %%javascript cell magic:
End of explanation
Javascript(
$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("data/flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.
End of explanation
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
Explanation: LaTeX
The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax.
You can pass raw LaTeX text as a string to the Math object:
End of explanation
from IPython.display import Latex
Latex(r\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray})
Explanation: With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray:
End of explanation
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
Explanation: Or you can enter LaTeX directly with the %%latex cell magic:
End of explanation
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
Explanation: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
import numpy as np
max_time = 3
f1 = 220.0
f2 = 224.0
rate = 8000.0
L = 3
times = np.linspace(0, L, int(rate*L))  # the number of samples must be an integer
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
Explanation: A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs. This can be auralised as follows:
End of explanation
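As a small aside (my addition), the beat rate you hear is simply the difference between the two frequencies:
print(abs(f1 - f2), "beats per second")  # 4.0 for the tones above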
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
End of explanation
from IPython.display import HTML
from base64 import b64encode
video = open("../images/animation.m4v", "rb").read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
Explanation: Using the nascent video capabilities of modern browsers, you may also be able to display local
videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you;
we will continue testing this and looking for ways to make it more robust.
The following cell loads a local file called animation.m4v, encodes the raw video as base64 for http
transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.
End of explanation
from IPython.display import IFrame
IFrame('http://jupyter.org', width='100%', height=350)
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example, the cell below embeds the Jupyter project's homepage (jupyter.org):
End of explanation
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
FileLinks('.')
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation |
3,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6. ADI forward modeling of disks
Author
Step1: In the following box we import all the VIP routines that will be used in this tutorial.
The path to some routines has changed between versions 1.0.3 and 1.1.0, which saw a major revamp of the modular architecture, hence the if statements.
Step2: 6.1. Introduction
6.1.1. Overview
The functions implemented in vip_hci for disks are located in vip.metrics.scattered_light_disk. It contains the definition of a class called ScatteredLightDisk which can produce a synthetic image of a disk, and also utility functions to create cubes of images where a synthetic disk has been injected at specific position angles to simulate a real observation.
Currently there is no utility function to do forward modelling and try to find the best disk matching a given dataset as this is usually specific to each dataset.
Keep in mind that ScatteredLightDisk is only a ray-tracing approach and does not contain any physics in it (no radiative transfer, no particle cross-section). It assumes the particle number density around the star follows the mathematical prescription given in section 1.2 and uses a unity scattering cross-section for all particles (no particle size distribution, no cross-section dependent on particle size), so the flux of the synthetic disk cannot be converted into physical units (e.g. Jy).
6.1.2. Parametrisation of the density distribution of dust
The density distribution of dust particles is parametrized in a cylindrical coordinate system $\rho(r,\theta,z)$ and is described by the equation
Step3: 6.2.1. Symmetric pole-on disk
For a pole-on disk, $i_\text{tilt}=0^\circ$.
For a symmetric disk, $e=0$ and the position angle (pa) and argument of pericenter ($\omega$) have no impact.
We choose a semi-major axis of 70 a.u., a vertical profile with a gaussian distribution ($\gamma=2$), a reference scale height of 3 a.u. at the semi-major axis of the disk, and inner and outer exponent $\alpha_{in}=12$ and $\alpha_{out}=-12$
Step4: Then create your disk model
Step5: The method compute_scattered_light returns the synthetic image of the disk.
Step6: You can print some info on the geometrical properties of the model, the dust distribution parameters, the numerical integration parameters and the phase function parameters (detailed later).
This can be useful because, in addition to reminding all the parameters used in the model, it also computes some properties such as the radial FWHM of the disk.
Step7: As a side note, if $\alpha_{in} \ne \alpha_{out}$, then the peak surface density of the disk is not located at the reference radius $a$.
Step8: 6.2.2. Inclined symmetric disk
Step9: The position angle of the disk is 0 (i.e. north). The phase function is isotropic ($g=0$); the reason why the north and south ansae appear brighter is that the disk is not flat
Step10: Warning ! The code does not handle perfectly edge-on disks. There is a maximum inclination close to edge-on beyond which it cannot create an image. In practice this is not a limitation as the convolution by the PSF always makes it impossible to disentangle between a close to edge-on disk and a perfectly edge-on disk.
Step11: 6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
We parametrize the phase function by a Henyey Greenstein phase function, with an asymmetry parameter g. An isotropic phase function has $g=0$, forward scattering is represented by $0<g\leq1$ and backward scattering is represented by $-1\leq g<0$
Step12: You can plot what the phase function looks like
Step13: The forward side is brighter.
6.2.3.2. Double Henyey-Greenstein phase function
A double Henyey Greenstein (HG) phase function is simply a linear combination of 2 simple HG phase functions. It is therefore parametrized by $g_1$ and $g_2$, the 2 asymmetry parameters of each HG, and the weight (between 0 and 1) of the first HG phase function. Typically a double HG is used to represent a combination of forward scattering ($g_1>0$) and backward scattering ($g_2<0$)
Step14: 6.2.3.3. Custom phase function
In some cases, a HG phase function (simple or double) cannot represent well the behaviour of the dust. The code is modular and you can propose new prescriptions for the phase functions if you need, or you can also create a custom phase function.
Step15: 6.2.3.4. Representing a polarised phase function
If you are trying to reproduce the polarised intensity of a disk (for instance a Stokes $Q_\phi$ image), you may want to add, on top of the scattering phase function, a modulation representing the degree of linear polarisation.
This can be done by setting the polar keyword to True and in this case, the model assumes a Rayleigh-like degree of linear polarisation parametrized by $(1-(\cos \phi)^2) / (1+(\cos \phi)^2)$ where $\phi$ is the scattering angle.
Step16: You can combine this Rayleigh-like degree of linear polarisation with any phase function (simple HG, double HG or custom type).
6.2.4. Asymmetric disk
Be careful here !
There is no consensus in the community on how to parametrize an eccentric dust distribution, so keep in mind that the convention described in section 1.2 is only one way to do so; it does not mean the dust density distribution in an eccentric disk follows this prescription. For instance, around the pericenter particle velocities are higher and one expects more collisions to happen, which can create an overdensity of particles compared to other regions of the disk. Conversely, particles stay longer at the apocenter because of Kepler's third law, which means that one could also expect a higher density at apocenter... All these physical phenomena are not described in this model.
Let's start with a pole-on disk to be insensitive to phase function effects
Step17: The brightness asymmetry here is entirely due to the fact that the brightness at one point in the disk is inversely proportional to the squared distance to the star.
Once you incline the disk, you start seeing the competing effects of the phase function and eccentricity.
Step18: 6.3. Forward modeling of disks
Let's start from our inclined simple HG symmetric disk fake_disk3_map and assume we observe this disk as part of an ADI sequence of 30 images
Step19: cube_fake_disk3 is now a cube of 30 frames, where the disk has been injected at the correct position angle.
Step20: Let's visualize the first, middle and last image of the cube.
Step21: We can now process this cube with median-ADI for instance
Step22: The example above shows a typical bias that can be induced by ADI on extended disk signals (Milli et al. 2012).
So far we have not dealt with convolution effects. In practice the image of a disk is convolved by the instrumental PSF.
Let's assume here an instrument having a gaussian PSF with FWHM = 4px, and create a synthetic PSF using the create_synth_psf function
Step23: Then we inject the disk in the cube and convolve each frame by the PSF | Python Code:
%matplotlib inline
from hciplot import plot_frames, plot_cubes
from matplotlib.pyplot import *
from matplotlib import pyplot as plt
import numpy as np
from packaging import version
Explanation: 6. ADI forward modeling of disks
Author: Julien Milli
Last update: 23/03/2022
Suitable for VIP v1.0.0 onwards.
Table of contents
6.1. Introduction
6.1.1. Overview
6.1.2. Parametrisation of the density distribution of dust
6.2. Examples of disks
6.2.1. Symmetric pole-on disk
6.2.2. Inclined symmetric disk
6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
6.2.3.2. Double Henyey-Greenstein phase function
6.2.3.3. Custom phase function
6.2.3.4. Representing a polarised phase function
6.2.4. Asymmetric disk
6.3. Forward modeling of disks
This tutorial shows:
how to generate different models of synthetic (debris) disks;
how to inject model disks in ADI cubes, for forward modeling.
Let's first import a couple of external packages needed in this tutorial:
End of explanation
import vip_hci as vip
vvip = vip.__version__
print("VIP version: ", vvip)
if version.parse(vvip) < version.parse("1.0.0"):
msg = "Please upgrade your version of VIP"
msg+= "It should be 1.0.0 or above to run this notebook."
raise ValueError(msg)
elif version.parse(vvip) <= version.parse("1.0.3"):
from vip_hci.conf import time_ini, timing
from vip_hci.medsub import median_sub
from vip_hci.metrics import cube_inject_fakedisk, ScatteredLightDisk
else:
from vip_hci.config import time_ini, timing
from vip_hci.fm import cube_inject_fakedisk, ScatteredLightDisk
from vip_hci.psfsub import median_sub
# common to all versions:
from vip_hci.var import create_synth_psf
Explanation: In the following box we import all the VIP routines that will be used in this tutorial.
The path to some routines has changed between versions 1.0.3 and 1.1.0, which saw a major revamp of the modular architecture, hence the if statements.
End of explanation
pixel_scale=0.01225 # pixel scale in arcsec/px
dstar= 80 # distance to the star in pc
nx = 200 # number of pixels of your image in X
ny = 200 # number of pixels of your image in Y
Explanation: 6.1. Introduction
6.1.1. Overview
The functions implemented in vip_hci for disks are located in vip.metrics.scattered_light_disk. It contains the definition of a class called ScatteredLightDisk which can produce a synthetic image of a disk, and also utility functions to create cubes of images where a synthetic disk has been injected at specific position angles to simulate a real observation.
Currently there is no utility function to do forward modelling and try to find the best disk matching a given dataset as this is usually specific to each dataset.
Keep in mind that ScatteredLightDisk is only a ray-tracing approach and does not contain any physics in it (no radiative transfer, no particle cross-section). It assumes the particle number density around a star follows the mathematical prescription given in section 1.2 and uses a unity scattering cross-section for all particles (no particle size distribution and no size-dependent cross-section), so the flux of the synthetic disk cannot be converted into physical units (e.g. Jy)
6.1.2. Parametrisation of the density distribution of dust
The density distribution of dust particles is parametrized in a cylindrical coordinate system $\rho(r,\theta,z)$ and is described by the equation:
$\rho(r,\theta,z) = \rho_0 \times \left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2\alpha_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2\alpha_{out}} }\right)^{1/2} \times e^{\left[ -\left( \frac{z}{H(r)} \right)^\gamma \right]}$
where $R(\theta)$ is called the reference radius. It is simply the radius of the disk $a$ if the dust distribution is centrally symmetric (no eccentricity). If the disk is eccentric, then $R(\theta)$ depends on $\theta$ and is given by the equation of an ellipse in polar coordinates: $R(\theta) = \frac{a(1-e^2)}{1+e \cos{\theta}}$
This equation for $\rho(r,\theta,z)$ is the product of 3 terms:
1. a constant $\rho_0$, which is the surface density of the dust in the midplane at the reference radius $R(\theta)$.
2. the density distribution in the midplane $z=0$, defined as $\left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2\alpha_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2\alpha_{out}} }\right)^{1/2}$. This function ensures that when $r\ll R(\theta)$ the term scales as $r^{\alpha_{in}}$ (we typically use $\alpha_{in}>0$) and when $r\gg R(\theta)$ it scales as $r^{\alpha_{out}}$ (we typically use $\alpha_{out}<0$).
3. the vertical profile $e^{\left[ -\left( \frac{z}{H(r)} \right)^\gamma \right]}$, parametrized by an exponential decay of exponent $\gamma$ and scale height $H(r)$. If $\gamma=2$, the vertical profile is Gaussian (and $H(r)$ is proportional to, but not strictly equal to, the $\sigma$ or FWHM of the Gaussian). The scale height is further defined as $H(r)=\xi_0 \times \left( \frac{r}{R(\theta)} \right)^\beta$, where $\xi_0$ is the reference scale height at the reference radius $R(\theta)$ and $\beta$ is the flaring coefficient ($\beta=1$ means linear flaring: the scale height increases linearly with radius).
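To make this prescription concrete, here is a minimal NumPy sketch of the same density law for a circular (e=0) disk, so that $R(\theta)=a$. This is an illustration only (not the internal VIP implementation), and the normalisation rho0 is arbitrary:
import numpy as np
def dust_density(r, z, a=70., alpha_in=12., alpha_out=-12., ksi0=3., beta=1., gamma=2., rho0=1.):
    # two-power-law radial profile times exponential vertical profile, as described above
    radial = np.sqrt(2. / ((r / a)**(-2 * alpha_in) + (r / a)**(-2 * alpha_out)))
    scale_height = ksi0 * (r / a)**beta          # H(r) = ksi0 * (r/a)^beta
    vertical = np.exp(-(np.abs(z) / scale_height)**gamma)
    return rho0 * radial * vertical
r = np.linspace(30., 120., 10)
print(dust_density(r, z=0.))   # peaks near r = a and falls off steeply on both sides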
6.2. Examples of disks
Let's assume we want to create a synthetic image of 200px, containing a disk around a star located at 80 pc, observed with SPHERE/IRDIS (pixel scale 12.25 mas).
End of explanation
itilt = 0. # inclination of your disk in degrees
a = 70. # semimajoraxis of the disk in au
ksi0 = 3. # reference scale height at the semi-major axis of the disk
gamma = 2. # exponant of the vertical exponential decay
alpha_in = 12
alpha_out = -12
beta = 1
Explanation: 6.2.1. Symmetric pole-on disk
For a pole-on disk, $i_\text{tilt}=0^\circ$.
For a symmetric disk, $e=0$ and the position angle (pa) and argument of pericenter ($\omega$) have no impact.
We choose a semi-major axis of 70 a.u., a vertical profile with a Gaussian distribution ($\gamma=2$), a reference scale height of 3 a.u. at the semi-major axis of the disk, and inner and outer exponents $\alpha_{in}=12$ and $\alpha_{out}=-12$
End of explanation
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws','ain':alpha_in,'aout':alpha_out,
'a':a,'e':0.0,'ksi0':ksi0,'gamma':gamma,'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
Explanation: Then create your disk model
End of explanation
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
Explanation: The method compute_scattered_light returns the synthetic image of the disk.
End of explanation
fake_disk1.print_info()
Explanation: You can print some info on the geometrical properties of the model, the dust distribution parameters, the numerical integration parameters and the phase function parameters (detailed later).
This can be useful because, in addition to reminding all the parameters used in the model, it also computes some properties such as the radial FWHM of the disk.
End of explanation
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':-3,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
fake_disk1.print_info()
Explanation: As a side note, if $\alpha_{in} \ne \alpha_{out}$, then the peak surface density of the disk is not located at the reference radius $a$.
End of explanation
itilt = 76 # inclination of your disk in degrees
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
Explanation: 6.2.2. Inclined symmetric disk
End of explanation
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
Explanation: The position angle of the disk is 0 (i.e. north). The phase function is isotropic ($g=0$); the reason why the north and south ansae appear brighter is that the disk is not flat: it has a certain scale height and there is more dust intercepted along the line of sight in the ansae.
Note that we decided here to normalize the disk to a maximum brightness of 1, using the option flux_max=1.. This is not the only option available and you can decide to parametrize $\rho_0$ instead, using the keyword dens_at_r0 which directly specifies $\rho_0$.
End of explanation
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=90, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':2, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
Explanation: Warning ! The code does not handle perfectly edge-on disks. There is a maximum inclination close to edge-on beyond which it cannot create an image. In practice this is not a limitation as the convolution by the PSF always makes it impossible to disentangle between a close to edge-on disk and a perfectly edge-on disk.
End of explanation
g=0.4
fake_disk3 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
Explanation: 6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
We parametrize the phase function by a Henyey Greenstein phase function, with an asymmetry parameter g. An isotropic phase function has $g=0$, forward scattering is represented by $0<g\leq1$ and backward scattering is represented by $-1\leq g<0$
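For reference, the HG form can be written down in a few lines (a sketch for illustration; VIP's phase_function object evaluates this internally and its plotting normalisation may differ):
import numpy as np
def henyey_greenstein(phi, g):
    # HG phase function versus scattering angle phi (radians)
    return (1. - g**2) / (4. * np.pi * (1. + g**2 - 2. * g * np.cos(phi))**1.5)
phi = np.radians([0., 45., 90., 135., 180.])
print(henyey_greenstein(phi, g=0.4))   # forward scattering: largest at phi = 0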
End of explanation
fake_disk3.phase_function.plot_phase_function()
fake_disk3_map = fake_disk3.compute_scattered_light()
plot_frames(fake_disk3_map, grid=False, size_factor=6)
Explanation: You can plot what the phase function looks like:
End of explanation
g1=0.6
g2=-0.4
weight1=0.7
fake_disk4 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'DoubleHG', 'g':[g1,g2], 'weight':weight1,
'polar':False},
flux_max=1)
fake_disk4.phase_function.plot_phase_function()
fake_disk4_map = fake_disk4.compute_scattered_light()
plot_frames(fake_disk4_map, grid=False, size_factor=6)
Explanation: The forward side is brighter.
6.2.3.2. Double Henyey-Greenstein phase function
A double Henyey Greenstein (HG) phase function is simply a linear combination of 2 simple HG phase functions. It is therefore parametrized by $g_1$ and $g_2$, the 2 asymmetry parameters of each HG, and the weight (between 0 and 1) of the first HG phase function. Typically a double HG is used to represent a combination of forward scattering ($g_1>0$) and backward scattering ($g_2<0$)
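A corresponding sketch of the double HG, where weight applies to the first component (again an illustration, not the VIP internals):
import numpy as np
def hg(phi, g):
    return (1. - g**2) / (4. * np.pi * (1. + g**2 - 2. * g * np.cos(phi))**1.5)
def double_hg(phi, g1, g2, weight):
    # linear combination of two HG phase functions; 'weight' goes to the first (forward) component
    return weight * hg(phi, g1) + (1. - weight) * hg(phi, g2)
phi = np.radians([0., 90., 180.])
print(double_hg(phi, g1=0.6, g2=-0.4, weight=0.7))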
End of explanation
kind='cubic' #kind must be either "linear", "nearest", "zero", "slinear", "quadratic" or "cubic"
spf_dico = dict({'phi':[0, 60, 90, 120, 180],
'spf':[1, 0.4, 0.3, 0.3, 0.5],
'name':'interpolated', 'polar':False, 'kind':kind})
fake_disk5 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico=spf_dico, flux_max=1)
fake_disk5.phase_function.plot_phase_function()
fake_disk5_map = fake_disk5.compute_scattered_light()
plot_frames(fake_disk5_map, grid=False, size_factor=6)
Explanation: 6.2.3.3. Custom phase function
In some cases, a HG phase function (simple or double) cannot represent well the behaviour of the dust. The code is modular and you can propose new prescriptions for the phase functions if you need, or you can also create a custom phase function.
End of explanation
fake_disk6 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma,
'beta':beta, 'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':True})
fake_disk6.phase_function.plot_phase_function()
fake_disk6_map = fake_disk6.compute_scattered_light()
plot_frames(fake_disk6_map, grid=False, size_factor=6)
Explanation: 6.2.3.4. Representing a polarised phase function
If you are trying to reproduce the polarised intensity of a disk (for instance a Stokes $Q_\phi$ image), you may want to add, on top of the scattering phase function, a modulation representing the degree of linear polarisation.
This can be done by setting the polar keyword to True and in this case, the model assumes a Rayleigh-like degree of linear polarisation parametrized by $(1-(\cos \phi)^2) / (1+(\cos \phi)^2)$ where $\phi$ is the scattering angle.
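As a quick illustration of that modulation (a sketch; VIP applies it internally when polar=True):
import numpy as np
phi = np.radians(np.linspace(0., 180., 7))
dolp = (1. - np.cos(phi)**2) / (1. + np.cos(phi)**2)   # Rayleigh-like degree of linear polarisation
print(dolp)   # 0 at 0 and 180 deg scattering, maximal (1) at 90 deg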
End of explanation
e=0.4 # eccentricity (dimensionless)
omega=30 # argument of pericenter
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=0, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
Explanation: You can combine this Rayleigh-like degree of linear polarisation with any phase function (simple HG, double HG or custom type).
6.2.4. Asymmetric disk
Be careful here !
There is no consensus in the community on how to parametrize an eccentric dust distribution, so keep in mind that the convention described in section 1.2 is only one way to do so; it does not mean the dust density distribution in an eccentric disk follows this prescription. For instance, around the pericenter particle velocities are higher and one expects more collisions to happen, which can create an overdensity of particles compared to other regions of the disk. Conversely, particles stay longer at the apocenter because of Kepler's third law, which means that one could also expect a higher density at apocenter... All these physical phenomena are not described in this model.
Let's start with a pole-on disk to be insensitive to phase function effects
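As a quick sanity check of what this convention implies geometrically, here is the reference radius $R(\theta)$ evaluated at pericenter, quadrature and apocenter for the values of a and e adopted below (illustration only):
import numpy as np
a, e = 70., 0.4
theta = np.radians([0., 90., 180.])                 # pericenter, quadrature, apocenter
R = a * (1. - e**2) / (1. + e * np.cos(theta))
print(R)   # a(1-e) = 42 au at pericenter, a(1+e) = 98 au at apocenter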
End of explanation
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
Explanation: The brightness asymmetry here is entirely due to the fact that the brightness at one point in the disk is inversely proportional to the squared distance to the star.
Once you incline the disk, you start seeing the competing effects of the phase function and eccentricity.
End of explanation
plot_frames(fake_disk3_map, grid=False, size_factor=6)
nframes = 30
# we assume we have 60º of parallactic angle rotation centered around meridian
parang_amplitude = 60
derotation_angles = np.linspace(-parang_amplitude/2, parang_amplitude/2, nframes)
start = time_ini()
cube_fake_disk3 = cube_inject_fakedisk(fake_disk3_map, -derotation_angles, imlib='vip-fft')
timing(start)
Explanation: 6.3. Forward modeling of disks
Let's start from our inclined simple HG symmetric disk fake_disk3_map and assume we observe this disk as part of an ADI sequence of 30 images
End of explanation
cube_fake_disk3.shape
Explanation: cube_fake_disk3 is now a cube of 30 frames, where the disk has been injected at the correct position angle.
End of explanation
plot_frames((cube_fake_disk3[0], cube_fake_disk3[nframes//2], cube_fake_disk3[nframes-1]),
grid=False, size_factor=3)
Explanation: Let's visualize the first, middle and last image of the cube.
End of explanation
cadi_fake_disk3 = median_sub(cube_fake_disk3, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3), grid=False, size_factor=4)
Explanation: We can now process this cube with median-ADI for instance:
End of explanation
psf = create_synth_psf(model='gauss', shape=(11, 11), fwhm=4.)
plot_frames(psf, grid=True, size_factor=2)
Explanation: The example above shows a typical bias that can be induced by ADI on extended disk signals (Milli et al. 2012).
So far we have not dealt with convolution effects. In practice the image of a disk is convolved by the instrumental PSF.
Let's assume here an instrument having a Gaussian PSF with FWHM = 4px, and create a synthetic PSF using the create_synth_psf function:
End of explanation
cube_fake_disk3_convolved = cube_inject_fakedisk(fake_disk3_map, -derotation_angles,
psf=psf, imlib='vip-fft')
cadi_fake_disk3_convolved = median_sub(cube_fake_disk3_convolved, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3, cadi_fake_disk3_convolved), grid=False, size_factor=4)
Explanation: Then we inject the disk in the cube and convolve each frame by the PSF
End of explanation |
3,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Markdown 2 Reportlab
Markdown
Here we create some lorem ipsum markdown text for testing
Step3: ReportLab
import the necessary functions one by one
Step8: The ReportFactory class creates a ReportLab document / report object; the idea is that all style information as well as page layouts are collected in this object, so that when a different factory is passed to the writer object the report looks different.
Step13: The ReportWriter object executes the conversion from markdown to pdf. It is currently very simplistic - for example there is no entry hook for starting the conversion at the html level rather than at markdown, and only a few basic tags are implemented.
Step14: create a standard report (A4, black text etc)
Step15: create a second report with different parameters (A5, changed colors etc; the __dict__ method shows all the options that can be modified for changing styles) | Python Code:
from IPython.display import HTML
import markdown as md
l = """LOREM ipsum dolor sit amet, _consectetur_ adipiscing elit. Praesent dignissim orci a leo dapibus semper eget sed
sem. Pellentesque tellus nisl, condimentum nec libero id, __cursus consequat__ lectus. Ut quis nulla laoreet, efficitur
metus sit amet, <strike>viverra dui. Nam tempor ornare urna a consequat</strike>. Nulla dolor velit, sollicitudin sit
amet consectetur sed, interdum nec orci. Nunc suscipit tempus est ut porta. <u>Ut non felis a ligula suscipit
posuere quis sit amet elit</u>."""
markdown_text = """
# Heading1
## Heading 2
%s %s %s
## Heading 2
%s
- %s
- %s
- %s
## Heading 2
%s
4. %s
4. %s
4. %s
%s
""" % (l,l,l,l,l,l,l,l,l,l,l,l)
#HTML(md.markdown(markdown_text))
Explanation: Markdown 2 Reportlab
Markdown
Here we create some lorem ipsum markdown text for testing
End of explanation
from markdown import markdown as md_markdown
from xml.etree.ElementTree import fromstring as et_fromstring
from xml.etree.ElementTree import tostring as et_tostring
from reportlab.platypus import BaseDocTemplate as plat_BaseDocTemplate
from reportlab.platypus import Frame as plat_Frame
from reportlab.platypus import Paragraph as plat_Paragraph
from reportlab.platypus import PageTemplate as plat_PageTemplate
from reportlab.lib.styles import getSampleStyleSheet as sty_getSampleStyleSheet
from reportlab.lib.pagesizes import A4 as ps_A4
from reportlab.lib.pagesizes import A5 as ps_A5
from reportlab.lib.pagesizes import landscape as ps_landscape
from reportlab.lib.pagesizes import portrait as ps_portrait
from reportlab.lib.units import inch as un_inch
Explanation: ReportLab
import the necessary functions one by one
End of explanation
class ReportFactory():
create a Reportlab report object using BaseDocTemplate
the report creation is a two-step process
1. instantiate a ReportFactory object
2. retrieve the report using the report() method
note: as it currently stands the report object is remembered in the
factory object, so another call to report() returns the _same_ object;
this means that changing the parameters after report() has been called
for the first time will not have an impact
def __init__(self, filename=None):
if filename == None: filename = 'report_x1.pdf'
# f = open (filename,'wb') -> reports can take a file handle!
self.filename = filename
self.pagesize = ps_portrait(ps_A4)
self.showboundary = 0
#PAGE_HEIGHT=defaultPageSize[1]; PAGE_WIDTH=defaultPageSize[0]
self.styles=sty_getSampleStyleSheet()
self.bullet = "\u2022"
self._report = None
@staticmethod
def static_page(canvas,doc):
template for report page
this template defines how the standard page looks (header, footer, background
objects; it does _not_ define the flow objects though, as those are separately
passed to the PageTemplate() function)
canvas.saveState()
canvas.setFont('Times-Roman',9)
canvas.drawString(un_inch, 0.75 * un_inch, "Report - Page %d" % doc.page)
canvas.restoreState()
def refresh_styles(self):
refresh all styles
derived ReportLab styles need to be refreshed in case the parent style
has been modified; this does not really work though - it seems that the
styles are simply flattened....
style_names = self.styles.__dict__['byName'].keys()
for name in style_names:
self.styles[name].refresh()
def report(self):
initialise a report object
this function initialises and returns a report object, based on the properties
set on the factory object at this point (note: the report object is only generated
_once_ and subsequent calls return the same object;this implies that most property
changes after this function has been called are not taken into account)
if self._report == None:
rp = plat_BaseDocTemplate(self.filename,showBoundary=self.showboundary, pagesize=self.pagesize)
frame_page = plat_Frame(rp.leftMargin, rp.bottomMargin, rp.width, rp.height, id='main')
pagetemplates = [
plat_PageTemplate(id='Page',frames=frame_page,onPage=self.static_page),
]
rp.addPageTemplates(pagetemplates)
self._report = rp
return self._report
Explanation: The ReportFactory class creates a ReportLab document / report object; the idea is that all style information as well as page layouts are collected in this object, so that when a different factory is passed to the writer object the report looks different.
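As an illustration of that idea, a sketch of a variant factory configured before report() is first called (the filename is arbitrary; ps_landscape is already imported above):
rf_landscape = ReportFactory('report_landscape.pdf')
rf_landscape.pagesize = ps_landscape(ps_A4)   # change the page layout before report() is called
rf_landscape.showboundary = 1                 # draw frame boundaries, useful for debugging layouts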
End of explanation
class ReportWriter():
def __init__(self, report_factory):
self._simple_tags = {
'h1' : 'Heading1',
'h2' : 'Heading2',
'h3' : 'Heading3',
'h4' : 'Heading4',
'h5' : 'Heading5',
'p' : 'BodyText',
}
self.rf = report_factory
self.report = report_factory.report();
def _render_simple_tag(self, el, story):
style_name = self._simple_tags[el.tag]
el.tag = 'para'
text = et_tostring(el)
story.append(plat_Paragraph(text,self.rf.styles[style_name]))
def _render_ol(self, el, story):
return self._render_error(el, story)
def _render_ul(self, ul_el, story):
for li_el in ul_el:
li_el.tag = 'para'
text = et_tostring(li_el)
story.append(plat_Paragraph(text,self.rf.styles['Bullet'], bulletText=self.rf.bullet))
def _render_error(self, el, story):
story.append(plat_Paragraph(
"<para fg='#ff0000' bg='#ffff00'>cannot render '%s' tag</para>" % el.tag,self.rf.styles['Normal']))
@staticmethod
def html_from_markdown(mdown, remove_newline=True, wrap=True):
convert markdown to html
mdown - the markdown to be converted
remove_newline - if True, all \n characters are removed after conversion
wrap - if True, the whole html is wrapped in an <html> tag
html = md_markdown(mdown)
if remove_newline: html = html.replace("\n", "")
if wrap: html = "<html>"+html+"</html>"
return html
@staticmethod
def dom_from_html(html, wrap=False):
convert html into a dom tree
html - the html to be converted
wrap - if True, the whole html is wrapped in an <html> tag
if wrap: html = "<html>"+html+"</html>"
dom = et_fromstring(html)
return (dom)
@staticmethod
def dom_from_markdown(mdown):
convert markdown into a dom tree
mdown - the markdown to be converted
wrap - if True, the whole html is wrapped in an <html> tag
html = ReportWriter.html_from_markdown(mdown, remove_newline=True, wrap=True)
dom = ReportWriter.dom_from_html(html, wrap=False)
return (dom)
def create_report(self, mdown):
create report and write it do disk
mdown - markdown source of the report
dom = self.dom_from_markdown(mdown)
story = []
for el in dom:
if el.tag in self._simple_tags:
self._render_simple_tag(el, story)
elif el.tag == 'ul':
self._render_ul(el, story)
elif el.tag == 'ol':
self._render_ol(el, story)
else:
self._render_error(el, story)
self.report.build(story)
Explanation: The ReportWriter object executes the conversion from markdown to pdf. It is currently very simplistic - for example there is no entry hook for starting the conversion at the html level rather than at markdown, and only a few basic tags are implemented.
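Although there is no html-level entry hook, the intermediate element tree can still be inspected directly through the static helpers defined above (a small sketch; the exact tags depend on the markdown input):
dom = ReportWriter.dom_from_markdown("# Title\nSome *emphasis* in a paragraph.")
print([el.tag for el in dom])   # e.g. ['h1', 'p'] -- the tags create_report() dispatches on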
End of explanation
rfa4 = ReportFactory('report_a4.pdf')
pdfw = ReportWriter(rfa4)
pdfw.create_report(markdown_text*10)
Explanation: create a standard report (A4, black text etc)
End of explanation
#rfa5.styles['Normal'].__dict__
rfa5 = ReportFactory('report_a5.pdf')
rfa5.pagesize = ps_portrait(ps_A5)
#rfa5.styles['Normal'].textColor = '#664422'
#rfa5.refresh_styles()
rfa5.styles['BodyText'].textColor = '#666666'
rfa5.styles['Bullet'].textColor = '#666666'
rfa5.styles['Heading1'].textColor = '#000066'
rfa5.styles['Heading2'].textColor = '#000066'
rfa5.styles['Heading3'].textColor = '#000066'
pdfw = ReportWriter(rfa5)
pdfw.create_report(markdown_text*10)
Explanation: create a second report with different parameters (A5, changed colors etc; the __dict__ method shows all the options that can be modified for changing styles)
End of explanation |
3,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Get Started with TensorFlow 1.x
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers
Step3: Build the tf.keras model by stacking layers. Select an optimizer and loss function used for training
Step4: Train and evaluate model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow.compat.v1 as tf
Explanation: Get Started with TensorFlow 1.x
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
This is a Google Colaboratory notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To run the Colab notebook:
Connect to a Python runtime: At the top-right of the menu bar, select CONNECT.
Run all the notebook code cells: Select Runtime > Run all.
For more examples and guides (including details for this program), see Get Started with TensorFlow.
Let's get started, import the TensorFlow library into your program:
End of explanation
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
Explanation: Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:
End of explanation
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: Build the tf.keras model by stacking layers. Select an optimizer and loss function used for training:
End of explanation
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
Explanation: Train and evaluate model:
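As a small follow-up not present in the original notebook, the trained model can be used for inference; a minimal sketch:
predictions = model.predict(x_test[:5])   # class probabilities from the softmax layer
print(predictions.argmax(axis=1))         # predicted digits
print(y_test[:5])                         # ground-truth labels for comparison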
End of explanation |
3,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use Word2Vec in gensim to train a word embedding model using the content from NIPS papers.
Step1: Gensim word2vec
https
Step2: Train a word2vec model
Step3: Create a representation of each paper
The representation is simply a set of embedded words taken from the abstract and the title.
Step4: Load the saved pickle and check
Step5: filter words by DF | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#config InlineBackend.figure_format = 'pdf'
from IPython.core.display import HTML
import gensim as gen
import gensim.models.word2vec as w2v
import matplotlib.pyplot as plt
from nltk.tokenize import WhitespaceTokenizer
import numpy as np
import os
import pandas as pd
try:
import cPickle as pickle
except:
import pickle
import re
import scipy.stats as stats
import scipy.sparse as sp
import string
import sys
import csv
# load the pickle containing the document-term matrix,
# put the abstracts in, and dump it to a file.
fyear = 1988
tyear = 2015
dt_fpath = 'DT_%d_%d_wabs.p'%(fyear, tyear)
with open(dt_fpath, 'r') as f:
info = pickle.load(f)
info.keys()
list_abs = info['abstracts']
list_abs[:2]
# make each abstract a list of words
list_list_abs = [ab.split(' ') for ab in list_abs if ab is not None]
print list_list_abs[20]
Explanation: Use Word2Vec in gensim to train a word embedding model using the content from NIPS papers.
End of explanation
def paper_dataframe(fpath):
rows = []
with open(fpath, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='"')
# Each read gives ['Id', 'Title', 'EventType', 'PdfName', 'Abstract', 'PaperText']
reader.next()
for row in reader:
rows.append(tuple(row))
data = pd.DataFrame(rows, columns=['Id', 'Title', 'EventType',
'PdfName', 'Abstract', 'PaperText'])
return data
text = ',sdf,.-23\][](s)'
re.sub(r'([^\w])+', ' ', text, flags=re.DOTALL)
def tokenize_simple(text):
# replace spaces with one space
text = re.sub(r'\s+', ' ', text, flags=re.DOTALL)
# remove non-English words
text = re.sub(r'[^\w]+', ' ', text, flags=re.DOTALL)
# naive tokenization
tokens = [w.lower().strip() for w in text.split(' ') if len(w) > 1]
return tokens
dframe = paper_dataframe('Papers1988_2015.csv')
n_docs = dframe.shape[0]
tok_papers = []
tok_abstracts = []
for i in xrange(n_docs):
paper = dframe['PaperText'][i]
paper_tokens = tokenize_simple(paper)
tok_papers.append(paper_tokens)
ab = list_abs[i]
if ab is None:
ab_tokens = []
else:
ab_tokens = tokenize_simple(ab)
tok_abstracts.append(ab_tokens)
Explanation: Gensim word2vec
https://radimrehurek.com/gensim/models/word2vec.html#id6
End of explanation
# size means the latent dimension
# sentences = an iterable where each item is a list of words
size = 50
window = 5
dest_fname = 'w2v_size%d_win%d.p'%(size, window)
model = w2v.Word2Vec(tok_papers, size=size, window=window, min_count=5, workers=4)
model.save(dest_fname)
model.wv.similarity('neural', 'deep')
model.wv.similarity('neural', 'kernel')
model.wv.doesnt_match('supervised unsupervised neuron reinforcement'.split())
model.wv.doesnt_match('kernel gretton hsic mmd'.split())
model.wv['kernel']
'kernel' in model.wv
Explanation: Train a word2vec model
End of explanation
titles = info['titles']
# each element is the representation of the paper.
# This is a matrix with each row corresponding to the embedding
# of a word in the abstract and the title.
paper_reps = []
for i in xrange(n_docs):
title_tokens = tokenize_simple(titles[i])
rep_words = tok_abstracts[i] + title_tokens
# embed each word in rep_words (if in the vocabulary)
rep = []
for w in rep_words:
# only embed words that are in the vocabulary
if w in model.wv:
embed = model.wv[w]
rep.append(embed)
mat = np.vstack(rep)
paper_reps.append(mat)
len(paper_reps)
# save the pickle with the paper representations
dt_dest = 'DT_%d_%d_wembed.p'%(fyear, tyear)
info['paper_reps'] = paper_reps
with open(dt_dest, 'w') as f:
pickle.dump(info, f)
Explanation: Create a representation of each paper
The representation is simply a set of embedded words taken from the abstract and the title.
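If a single fixed-length vector per paper is preferred over this word-by-word matrix, one simple alternative (a sketch, not used in the rest of the notebook) is to average the rows of each matrix; papers whose abstract and title contain no in-vocabulary words would need special handling:
import numpy as np
# mean word embedding per paper -> one 50-dimensional vector per document
doc_vectors = np.vstack([mat.mean(axis=0) for mat in paper_reps])
print doc_vectors.shape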
End of explanation
with open('DT_%d_%d_wembed.p'%(fyear, tyear), 'r') as f:
info = pickle.load(f)
info.keys()
DT = info['DT']
abstracts = info['abstracts']
paper_reps = info['paper_reps']
titles = info['titles']
words = info['words']
Explanation: Load the saved pickle and check
End of explanation
# document frequency of each word
n_docs = DT.shape[0]
DF = np.array( (DT > 0).sum(0) )[0]
df_lb = 7
df_ub = int(0.15*n_docs)
print('n = #docs: %d'%n_docs)
print('original #words: %d'%len(words))
print('#words with %d <= df: %d'% (df_lb, np.sum(DF>=df_lb) ) )
print('#words with df <= %d: %d'% (df_ub, np.sum(DF<=df_ub) ) )
df_I = np.logical_and(DF>=df_lb, DF<=df_ub)
print('#words with %d <= df <= %d: %d'%
(df_lb, df_ub, np.sum( df_I) ) )
df_words = np.array(words)[df_I]
print df_words.tolist()
# filter out words
fDT = DT[:, df_I]
fwords = np.array(words)[df_I].tolist()
info['DT'] = fDT
info['words'] = fwords
dffiltered_fname = 'DT_%d_%d_wem_df%d_%d.p'%(fyear, tyear, df_lb, df_ub)
with open(dffiltered_fname, 'w') as f:
pickle.dump(info, f)
Explanation: filter words by DF
End of explanation |
3,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Si in FCC Ni
Based on data in hdl.handle.net/11115/239, "Data Citation
Step1: Create an FCC Ni crystal.
Step2: Next, we construct our diffuser. For this problem, our thermodynamic range is out to the fourth neighbor; hence, we construct a two shell thermodynamic range (that is, sums of two $\frac{a}{2}\langle 110\rangle$ vectors. That is, $N_\text{thermo}=2$ gives 4 stars
Step3: Below is an example of the above data translated into a dictionary corresponding to the data for Ni-Si; it is output into a JSON compliant file for reference. The strings are the corresponding tags in the diffuser. The first entry in each list is the prefactor (in THz) and the second is the corresponding energy (in eV). Note
Step4: Next, we convert our dictionary into the simpler form used by the diffuser.
Step5: We can now calculate the diffusion coefficients and drag ratio. Note
Step6: For direct comparison with the SCMF data in the 2013 Phys. Rev. B paper, we evaluate at 960K, 1060K (the predicted crossover temperature), and 1160K. The reported data is in units of mol/eV Å ns. | Python Code:
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import onsager.crystal as crystal
import onsager.OnsagerCalc as onsager
from scipy.constants import physical_constants
kB = physical_constants['Boltzmann constant in eV/K'][0]
import h5py, json
Explanation: Si in FCC Ni
Based on data in hdl.handle.net/11115/239, "Data Citation: Diffusion of Si impurities in Ni under stress: A first-principles study" by T. Garnier, V. R. Manga, P. Bellon, and D. R. Trinkle (2014). The transport coefficient results, using the self-consistent mean-field method, appear in T. Garnier, V. R. Manga, D. R. Trinkle, M. Nastar, and P. Bellon, "Stress-induced anisotropic diffusion in alloys: Complex Si solute flow near a dislocation core in Ni," Phys. Rev. B 88, 134108 (2013), doi:10.1103/PhysRevB.88.134108.
End of explanation
a0 = 0.343
Ni = crystal.Crystal.FCC(a0, chemistry="Ni")
print(Ni)
Explanation: Create an FCC Ni crystal.
End of explanation
chemistry = 0 # only one sublattice anyway
Nthermo = 2
NiSi = onsager.VacancyMediated(Ni, chemistry, Ni.sitelist(chemistry),
Ni.jumpnetwork(chemistry, 0.75*a0), Nthermo)
print(NiSi)
Explanation: Next, we construct our diffuser. For this problem, our thermodynamic range is out to the fourth neighbor; hence, we construct a two-shell thermodynamic range (that is, sums of two $\frac{a}{2}\langle 110\rangle$ vectors). That is, $N_\text{thermo}=2$ gives 4 stars: $\frac{a}2\langle110\rangle$, $a\langle100\rangle$, $\frac{a}2\langle112\rangle$, and $a\langle110\rangle$. For Si in Ni, the first three have non-zero interaction energies, while the fourth is zero. The states, as written, are the solute (basis index + lattice position) : vacancy (basis index + lattice position), and $dx$ is the (Cartesian) vector separating them.
End of explanation
NiSidata={
"v:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [1., -0.108],
"s:+0.000,+0.000,+0.000-v:-1.000,-1.000,+1.000": [1., +0.004],
"s:+0.000,+0.000,+0.000-v:+1.000,-2.000,+0.000": [1., +0.037],
"s:+0.000,+0.000,+0.000-v:+0.000,-2.000,+0.000": [1., -0.008],
"omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,-1.000": [4.8, 1.074],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+0.000,+0.000^v:-1.000,+1.000,-1.000": [5.2, 1.213-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^v:+0.000,+0.000,-1.000": [5.2, 1.003-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000^v:+0.000,+2.000,-2.000": [4.8, 1.128-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,-1.000": [5.2, 1.153-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,-1.000^v:+1.000,+0.000,-2.000": [4.8, 1.091+0.004],
"omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+1.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [5.1, 0.891-0.108]
}
NiSi2013data={
"v:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [1., -0.100],
"s:+0.000,+0.000,+0.000-v:-1.000,-1.000,+1.000": [1., +0.011],
"s:+0.000,+0.000,+0.000-v:+1.000,-2.000,+0.000": [1., +0.045],
"s:+0.000,+0.000,+0.000-v:+0.000,-2.000,+0.000": [1., 0],
"omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,-1.000": [4.8, 1.074],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+0.000,+0.000^v:-1.000,+1.000,-1.000": [5.2, 1.213-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^v:+0.000,+0.000,-1.000": [5.2, 1.003-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000^v:+0.000,+2.000,-2.000": [4.8, 1.128-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,-1.000": [5.2, 1.153-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,-1.000^v:+1.000,+0.000,-2.000": [4.8, 1.091+0.011],
"omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+1.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [5.1, 0.891-0.100]
}
print(json.dumps(NiSi2013data, sort_keys=True, indent=4))
Explanation: Below is an example of the above data translated into a dictionary corresponding to the data for Ni-Si; it is output into a JSON-compliant file for reference. The strings are the corresponding tags in the diffuser. The first entry in each list is the prefactor (in THz) and the second is the corresponding energy (in eV). Note: all jumps are defined as transition state energies, hence the reference energy is added / subtracted as needed. Also, there are "missing" transition states; these will have their energies defined using the LIMB (linear interpolation of migration barriers) approximation. This introduces an error of no more than 10 meV in any activation barrier.
End of explanation
preenedict = NiSi.tags2preene(NiSi2013data)
preenedict
Explanation: Next, we convert our dictionary into the simpler form used by the diffuser.
End of explanation
print("#T #Lss #Lsv #drag")
for T in np.linspace(300, 1400, 23):
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
print(T, Lss[0,0], Lsv[0,0], Lsv[0,0]/Lss[0,0])
Explanation: We can now calculate the diffusion coefficients and drag ratio. Note: the diffusion coefficients $L_\text{ss}$ and $L_\text{sv}$ both need to be multiplied by $c_\text{s}c_\text{v}/k_\text{B}T$ where $c_\text{s}$ is the solute concentration, $c_\text{v}$ the (equilibrium) vacancy concentration, and $k_\text{B}T$ is the thermal energy of the system. The current units shown below are in $\text{nm}^2\cdot\text{THz}$.
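For example, the scaling described in the note would look like the sketch below; the solute and vacancy concentrations used here are placeholders, not computed values:
T = 1060.             # K
cs, cv = 1e-2, 1e-6   # assumed solute and vacancy site fractions (placeholders)
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
print(Lss[0,0]*cs*cv/(kB*T), Lsv[0,0]*cs*cv/(kB*T))   # Lss and Lsv scaled by cs*cv/kBT as described above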
End of explanation
volume = 0.25*a0**3
conv = 1e3*0.1/volume # 10^3 for THz->ns^-1, 10^-1 for nm^-1 ->Ang^-1
# T: (L0vv, Lsv, Lss)
PRBdata = {960: (1.52e-1, 1.57e-1, 1.29e0),
1060: (4.69e-1, 0., 3.27e0),
1160: (1.18e0, -7.55e-1, 7.02e0)}
print("#T #Lvv #Lsv #Lss")
for T in (960, 1060, 1160):
c = conv/(kB*T)
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
vv, sv, ss = L0vv[0,0]*c, Lsv[0,0]*c, Lss[0,0]*c
vvref, svref, ssref = PRBdata[T]
print("{} {:.4g} ({:.4g}) {:.4g} ({:.4g}) {:.4g} ({:.4g})".format(T, vv, vvref/vv, sv, svref/sv, ss, ssref/ss))
# raw comparison data from 2013 paper
Tval = np.array([510, 530, 550, 570, 590, 610, 630, 650, 670, 690,
710, 730, 750, 770, 790, 810, 830, 850, 870, 890,
910, 930, 950, 970, 990, 1010, 1030, 1050, 1070, 1090,
1110, 1130, 1150, 1170, 1190, 1210, 1230, 1250, 1270, 1290,
1310, 1330, 1350, 1370, 1390, 1410, 1430, 1450, 1470, 1490])
fluxval = np.array([0.771344, 0.743072, 0.713923, 0.684066, 0.653661, 0.622858,
0.591787, 0.560983, 0.529615, 0.498822, 0.467298, 0.436502,
0.406013, 0.376193, 0.346530, 0.316744, 0.288483, 0.260656,
0.232809, 0.205861, 0.179139, 0.154038, 0.128150, 0.103273,
0.079025, 0.055587, 0.032558, 0.010136, -0.011727, -0.033069,
-0.053826, -0.074061, -0.093802, -0.113075, -0.132267, -0.149595,
-0.167389, -0.184604, -0.202465, -0.218904, -0.234157, -0.250360,
-0.265637, -0.280173, -0.294940, -0.308410, -0.322271, -0.335809,
-0.349106, -0.361605])
# Trange = np.linspace(300, 1500, 121)
Draglist = []
for T in Tval:
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
Draglist.append(Lsv[0,0]/Lss[0,0])
Drag = np.array(Draglist)
fig, ax1 = plt.subplots()
ax1.plot(Tval, Drag, 'k', label='GF')
ax1.plot(Tval, fluxval, 'r', label='SCMF (PRB 2013)')
ax1.set_ylabel('drag ratio $L^{\\rm{SiV}}/L^{\\rm{SiSi}}$', fontsize='x-large')
ax1.set_xlabel('$T$ [K]', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.5,0.6,0.5,0.2), ncol=1,
shadow=True, frameon=True, fontsize='x-large')
plt.show()
# plt.savefig('NiSi-drag.pdf', transparent=True, format='pdf')
Explanation: For direct comparison with the SCMF data in the 2013 Phys. Rev. B paper, we evaluate at 960K, 1060K (the predicted crossover temperature), and 1160K. The reported data is in units of mol/eV Å ns.
End of explanation |
3,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tailored constraints, variables and objectives
Thanks to the use of symbolic expressions via the optlang mathematical modeling package, it is relatively straightforward to add new variables, constraints and advanced objectives that cannot easily be formulated as a combination of different reactions and their corresponding upper and lower bounds. Here we demonstrate this optlang functionality, which is exposed via the model.solver.interface.
Constraints
Suppose we want to ensure that two reactions have the same flux in our model. We can add this criterion as a constraint to our model using the optlang solver interface by simply defining the relevant expression as follows.
Step1: The flux for our reaction of interest is obtained by the model.reactions.FBA.flux_expression which is simply the sum of the forward and reverse flux, i.e.,
Step2: Now I can maximize growth rate whilst the fluxes of reactions 'FBA' and 'NH4t' are constrained to be (near) identical.
Step3: Objectives
Simple objectives, such as the maximization of the flux through one or more reactions, can conveniently be set by simply
assigning to the model.objective property as we have seen in previous chapters, e.g.,
Step4: The objective's mathematical expression is seen by
Step5: But suppose we need a more complicated objective, such as minimizing the Euclidean distance of the solution to the origin minus another variable, while subject to additional linear constraints. This is an objective function with both linear and quadratic components.
Consider the example problem
Step6: We return to the textbook model and set the solver to one that can handle quadratic objectives such as cplex. We then add the linear constraint that the sum of our x and y reactions, that we set to FBA and NH4t, must equal 2.
Step7: Next we add the quadratic objective
Step8: Variables
We can also create additional variables to facilitate studying the effects of new constraints and variables. Suppose we want to study the difference in flux between nitrogen and carbon uptake whilst we block other reactions. For this it may help to add another variable representing this difference.
Step9: We use constraints to define what values this variable shall take
Step10: Now we can access that difference directly during our knock-out exploration by looking at its primal value. | Python Code:
import cobra.test
model = cobra.test.create_test_model('textbook')
same_flux = model.problem.Constraint(
model.reactions.FBA.flux_expression - model.reactions.NH4t.flux_expression,
lb=0,
ub=0)
model.add_cons_vars(same_flux)
Explanation: Tailored constraints, variables and objectives
Thanks to the use of symbolic expressions via the optlang mathematical modeling package, it is relatively straightforward to add new variables, constraints and advanced objectives that cannot easily be formulated as a combination of different reactions and their corresponding upper and lower bounds. Here we demonstrate this optlang functionality, which is exposed via the model.solver.interface.
Constraints
Suppose we want to ensure that two reactions have the same flux in our model. We can add this criterion as a constraint to our model using the optlang solver interface by simply defining the relevant expression as follows.
End of explanation
model.reactions.FBA.flux_expression
Explanation: The flux for our reaction of interest is obtained by the model.reactions.FBA.flux_expression which is simply the sum of the forward and reverse flux, i.e.,
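As a small aside, the two optlang variables behind that expression can be inspected directly (a sketch; not needed for the analysis):
fba = model.reactions.FBA
print(fba.forward_variable, fba.reverse_variable)   # the two optlang variables behind flux_expression
print(fba.flux_expression)                          # their (weighted) difference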
End of explanation
solution = model.optimize()
print(solution.fluxes['FBA'], solution.fluxes['NH4t'],
solution.objective_value)
Explanation: Now I can maximize growth rate whilst the fluxes of reactions 'FBA' and 'NH4t' are constrained to be (near) identical.
End of explanation
model = cobra.test.create_test_model('textbook')
with model:
model.objective = {model.reactions.Biomass_Ecoli_core: 1}
model.optimize()
print(model.reactions.Biomass_Ecoli_core.flux)
Explanation: Objectives
Simple objectives, such as the maximization of the flux through one or more reactions, can conveniently be set by simply
assigning to the model.objective property as we have seen in previous chapters, e.g.,
End of explanation
model.objective.expression
Explanation: The objective's mathematical expression is seen by
End of explanation
%matplotlib inline
import plot_helper
plot_helper.plot_qp2()
Explanation: But suppose we need a more complicated objective, such as minimizing the Euclidean distance of the solution to the origin minus another variable, while subject to additional linear constraints. This is an objective function with both linear and quadratic components.
Consider the example problem:
min $\frac{1}{2}\left(x^2 + y^2 \right) - y$
subject to
$x + y = 2$
$x \ge 0$
$y \ge 0$
This (admittedly very artificial) problem can be visualized graphically where the optimum is indicated by the blue dot on the line of feasible solutions.
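As an independent check of the toy problem (a sketch with scipy, outside the cobra workflow), the constrained optimum is x = 0.5, y = 1.5:
from scipy.optimize import minimize
objective = lambda v: 0.5 * (v[0]**2 + v[1]**2) - v[1]
res = minimize(objective, x0=[1., 1.],
               constraints=[{'type': 'eq', 'fun': lambda v: v[0] + v[1] - 2}],
               bounds=[(0, None), (0, None)])
print(res.x)   # approximately [0.5, 1.5]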
End of explanation
model.solver = 'cplex'
sum_two = model.problem.Constraint(
model.reactions.FBA.flux_expression + model.reactions.NH4t.flux_expression,
lb=2,
ub=2)
model.add_cons_vars(sum_two)
Explanation: We return to the textbook model and set the solver to one that can handle quadratic objectives such as cplex. We then add the linear constraint that the sum of our x and y reactions, that we set to FBA and NH4t, must equal 2.
End of explanation
quadratic_objective = model.problem.Objective(
0.5 * model.reactions.NH4t.flux_expression**2 + 0.5 *
model.reactions.FBA.flux_expression**2 -
model.reactions.FBA.flux_expression,
direction='min')
model.objective = quadratic_objective
solution = model.optimize(objective_sense=None)
print(solution.fluxes['NH4t'], solution.fluxes['FBA'])
Explanation: Next we add the quadratic objective
End of explanation
model = cobra.test.create_test_model('textbook')
difference = model.problem.Variable('difference')
Explanation: Variables
We can also create additional variables to facilitate studying the effects of new constraints and variables. Suppose we want to study the difference in flux between nitrogen and carbon uptake whilst we block other reactions. For this it may help to add another variable representing this difference.
End of explanation
constraint = model.problem.Constraint(
model.reactions.EX_glc__D_e.flux_expression -
model.reactions.EX_nh4_e.flux_expression - difference,
lb=0,
ub=0)
model.add_cons_vars([difference, constraint])
Explanation: We use constraints to define what values this variable shall take
End of explanation
for reaction in model.reactions[:5]:
with model:
reaction.knock_out()
model.optimize()
print(model.solver.variables.difference.primal)
Explanation: Now we can access that difference directly during our knock-out exploration by looking at its primal value.
End of explanation |
3,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a dashboard to plan a marketing campaign leveraging CARTO Data Observatory
Combining different data sources to identify some patterns or understand some behavior in a specific location is a very typical use case in Spatial Data Science.
In this notebook, we will build a dashboard combining different data from CARTO's Data Observatory to help identify the locations with specific characteristics described below.
Note
Step1: In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
Step2: Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
<a id='section1'></a>
1. Download all pharmacies in Philadelphia from the Data Observatory
Below is the bounding box of the area of study.
Step3: We can get the pharmacies from Pitney Bowes' Consumer Points of Interest dataset. This is a premium dataset, so we first need to check that we are subscribed to it.
Take a look at <a href='#example-access-premium-data-from-the-data-observatory' target='_blank'>this template</a> for more details on how to access and download a premium dataset.
Step4: Download and explore sample
Pitney Bowes POI's are hierarchically classified (levels
Step5: Let's now download a small sample to help us identify which of the four hierarchy variables gives us the pharmacies.
Step6: The class DRUG STORES AND PROPRIETARY STORES is the one we're looking for.
Step8: Download all pharmacies in the area of study
Step9: The dataset contains different versions of the POI's tagged by the do_date column. We are only interested in the latest version of each POI.
Step10: Visualize the dataset
Step11: <a id='section2'></a>
2. Calculate catchment areas
In order to know the characteristics of the potential customers of every pharmacy, we assume the majority of their clients live close by. Therefore we will calculate 5-minute-by-car isochrones and take them as their catchment areas.
Note that catchment areas usually depend on whether the store is in the downtown area or in the suburbs, and on whether it is reachable on foot or only by car. For this example, we will not make such a distinction between pharmacies, but we strongly encourage you to do so in your analyses. As an example, here we describe how to calculate catchment areas using human mobility data.
Step12: Visualize isochrones
We'll only visualize the first ten isochrones to get a clean visualization.
Step13: <a id='section3'></a>
3. Enrichment
Step14: Demographics
We will use AGS premium data. In particular, we will work with the dataset ags_sociodemogr_f510a947 which contains yearly demographics data from 2019.
Variable selection
Here we will enrich the pharmacies isochrones with
Step15: We explore the variables to identify the ones we're interested in.
Variables in a dataset are uniquely identified by their slug.
Step16: We'll select
Step17: Isochrone enrichment
Step18: Points of Interest
We will use Pitney Bowes' Consumer Points of Interest premium dataset.
Variable selection
We are interested in knowing how many of the following POIs can be found in each isochrone
Step19: Isochrone enrichment
In order to count only Beauty Shops/Salons and Gyms, we will apply a filter to the enrichment. All filters are applied with an AND-like relationship. This means we need to run two independent enrichment calls, one for the beauty shops/salons and another one for the gyms.
Step20: Consumer spending
For consumer spending, we will use AGS premium data. In particular, we will work with the dataset ags_consumer_sp_dbabddfb which contains the latest version of yearly consumer data.
Variable selection
We are interested in spending in
Step21: The variables we're interested in are
Step22: We rename the new columns to give them a more descriptive name.
Step23: <a id='section4'></a>
4. Dashboard
Finally, with all the data gathered, we will build the dashboard and publish it so we can share it with our client/manager/colleague for them to explore it.
This dashboard allows you to select a range of desired expenditure in care products, people aged 60+, household income, and so forth. Selecting the desired ranges will filter out pharmacies, so that in the end you can identify the target pharmacies for your marketing campaign.
Step24: Publish dashboard | Python Code:
import geopandas as gpd
import pandas as pd
from cartoframes.auth import set_default_credentials
from cartoframes.data.services import Isolines
from cartoframes.data.observatory import *
from cartoframes.viz import *
from shapely.geometry import box
pd.set_option('display.max_columns', None)
Explanation: Building a dashboard to plan a marketing campaign leveraging CARTO Data Observatory
Combining different data sources to identify some patterns or understand some behavior in a specific location is a very typical use case in Spatial Data Science.
In this notebook, we will build a dashboard combining different data from CARTO's Data Observatory to help identify the locations with specific characteristics described below.
Note: This use case leverages premium datasets from CARTO's Data Observatory.
Use case description
A pharmaceutical lab wants to launch a new marketing campaign to promote a new line of personal care products for senior people in the city of Philadelphia, PA. They know their target group is characterized by:
- People over 60
- Medium-high to high income
- High expenditure in personal care products and services
Given these characteristics, they would like to know which pharmacies and drug stores in the city of Philadelphia they should focus their efforts on.
In order to identify the target drug stores and pharmacies, we will take the following steps:
- Get all pharmacies in Philadelphia
- Calculate their cathment areas using isochrones
- Enrich the isochrones with demographic, POI's, and consumption data
- Build the dashboard to help identify the pharmacies where the campaign can be more successful given the characteristics of the population within their catchment area
0. Setup
Import the packages we'll use.
End of explanation
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
Explanation: In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
End of explanation
dem_bbox = box(-75.229353,39.885501,-75.061124,39.997898)
Explanation: Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
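As an illustration (field names assumed from the standard CARTOframes credentials format), such a file is just a small JSON document, e.g. a creds.json holding your username and api_key, which is what set_default_credentials('creds.json') reads.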
<a id='section1'></a>
1. Download all pharmacies in Philadelphia from the Data Observatory
Below is the bounding box of the area of study.
End of explanation
Catalog().subscriptions().datasets.to_dataframe()
Explanation: We can get the pharmacies from Pitney Bowes' Consumer Points of Interest dataset. This is a premium dataset, so we first need to check that we are subscribed to it.
Take a look at <a href='#example-access-premium-data-from-the-data-observatory' target='_blank'>this template</a> for more details on how to access and download a premium dataset.
End of explanation
dataset = Dataset.get('pb_consumer_po_62cddc04')
dataset.head()
Explanation: Download and explore sample
Pitney Bowes POI's are hierarchically classified (levels: trade division, group, class, sub class).
Since we might not know which level can help us identify all pharmacies, we can start by downloading a sample for a smaller area to explore the dataset. For calculating the bounding box we use bboxfinder.
We start by selecting our dataset and taking a quick look at its first 10 rows.
End of explanation
sql_query = "SELECT * except(do_label) FROM $dataset$ WHERE ST_IntersectsBox(geom, -75.161723,39.962019,-75.149535,39.968071)"
sample = dataset.to_dataframe(sql_query=sql_query)
sample.head()
sample['TRADE_DIVISION'].unique()
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION G. - RETAIL TRADE', 'GROUP'].unique()
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION G. - RETAIL TRADE', 'CLASS'].unique()
Explanation: Let's now download a small sample to help us identify which of the four hierarchy variables gives us the pharmacies.
End of explanation
sample.loc[sample['CLASS'] == 'DRUG STORES AND PROPRIETARY STORES', 'SUB_CLASS'].unique()
Explanation: The class DRUG STORES AND PROPRIETARY STORES is the one we're looking for.
End of explanation
sql_query = SELECT * except(do_label)
FROM $dataset$
WHERE CLASS = 'DRUG STORES AND PROPRIETARY STORES'
AND ST_IntersectsBox(geom, -75.229353,39.885501,-75.061124,39.997898)
ph_pharmacies = dataset.to_dataframe(sql_query=sql_query)
ph_pharmacies.head()
Explanation: Download all pharmacies in the area of study
End of explanation
ph_pharmacies = ph_pharmacies.sort_values(by='do_date', ascending=False).groupby('PB_ID').first().reset_index()
ph_pharmacies.shape
Explanation: The dataset contains different versions of the POI's tagged by the do_date column. We are only interested in the latest version of each POI.
End of explanation
Layer(ph_pharmacies,
geom_col='geom',
style=basic_style(opacity=0.75),
popup_hover=popup_element('NAME'))
Explanation: Visualize the dataset
End of explanation
iso_service = Isolines()
isochrones_gdf, _ = iso_service.isochrones(ph_pharmacies, [300], mode='car', geom_col='geom')
ph_pharmacies['iso_5car'] = isochrones_gdf.sort_values(by='source_id')['the_geom'].values
Explanation: <a id='section2'></a>
2. Calculate catchment areas
In order to know the characteristics of the potential customers of every pharmacy, we assume the majority of their clients live close by. Therefore we will calculate 5-minute-by-car isochrones and take them as their catchment areas.
Note that catchment areas usually depend on whether the store is in the downtown area or in the suburbs, and on whether it is reachable on foot or only by car. For this example, we will not make such a distinction between pharmacies, but we strongly encourage you to do so in your analyses. As an example, here we describe how to calculate catchment areas using human mobility data.
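For instance, a walking-based catchment could be computed with the same Isolines service by switching the travel mode (a sketch, assuming the service also accepts a 'walk' mode; the 10-minute range is an arbitrary choice):
# Hypothetical walking variant of the catchment areas (600 seconds = 10 minutes on foot)
walk_isochrones_gdf, _ = Isolines().isochrones(ph_pharmacies, [600], mode='walk', geom_col='geom')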
End of explanation
Map([Layer(ph_pharmacies.iloc[:10],
geom_col='iso_5car',
style=basic_style(opacity=0.1),
legends=basic_legend('Catchment Areas')),
Layer(ph_pharmacies.iloc[:10],
geom_col='geom',
popup_hover=popup_element('NAME'),
legends=basic_legend('Pharmacies'))])
Explanation: Visualize isochrones
We'll only visualize the first ten isochrones to get a clean visualization.
End of explanation
enrichment = Enrichment()
Explanation: <a id='section3'></a>
3. Enrichment: Characterize catchment areas
We'll now enrich the pharmacies catchment areas with demographics, POI's, and consumer spending data.
For the enrichment, we will use the CARTOframes Enrichment class. This class contains the functionality to enrich polygons and points.
Visit CARTOframes Guides for further detail.
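For instance, point-level data could be enriched in a similar way (a sketch; it assumes enrich_points mirrors the enrich_polygons call used for the isochrones, and reuses one of the demographic slugs selected later):
# Hypothetical point enrichment of the pharmacy locations themselves
enrichment.enrich_points(ph_pharmacies, variables=['AGECY6064_d54c2315'], geom_col='geom')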
End of explanation
Catalog().country('usa').category('demographics').provider('ags').datasets.to_dataframe().head()
dataset = Dataset.get('ags_sociodemogr_f510a947')
dataset.head()
Explanation: Demographics
We will use AGS premium data. In particular, we will work with the dataset ags_sociodemogr_f510a947 which contains yearly demographics data from 2019.
Variable selection
Here we will enrich the pharmacies isochrones with:
- Population aged 60+
- Household income
- Household income for population ages 65+
End of explanation
dataset.variables.to_dataframe().head()
Explanation: We explore the variables to identify the ones we're interested in.
Variables in a dataset are uniquely identified by their slug.
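For example, a single variable can be looked up by its slug and inspected (using one of the slugs selected below):
Variable.get('AGECY6064_d54c2315').to_dict()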
End of explanation
vars_enrichment = ['POPCY_5e23b8f4', 'AGECY6064_d54c2315', 'AGECY6569_ad369d43', 'AGECY7074_74eb7531',
'AGECY7579_c91cb67', 'AGECY8084_ab1079a8', 'AGECYGT85_a0959a08', 'INCCYMEDHH_b80a7a7b',
'HINCYMED65_37a430a4', 'HINCYMED75_2ebf01e5']
Explanation: We'll select:
- Population and population by age variables to identify number of people aged 60+ as a percentage of total population
- Average household income
- Average household income for population aged 65+
End of explanation
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=vars_enrichment,
geom_col='iso_5car'
)
ph_pharmacies_enriched.head()
ph_pharmacies = ph_pharmacies_enriched.copy()
ph_pharmacies['pop_60plus'] = ph_pharmacies[['AGECY8084', 'AGECYGT85', 'AGECY6569', 'AGECY7579', 'AGECY7074', 'AGECY6064']].sum(1)
ph_pharmacies.drop(columns=['AGECY8084', 'AGECYGT85', 'AGECY6569', 'AGECY7579', 'AGECY7074', 'AGECY6064'], inplace=True)
Explanation: Isochrone enrichment
End of explanation
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION I. - SERVICES', 'SUB_CLASS'].unique()
Explanation: Points of Interest
We will use Pitney Bowes' Consumer Points of Interest premium dataset.
Variable selection
We are interested in knowing how many of the following POIs can be found in each isochrone:
- Beauty shops and beauty salons
- Gyms and other sports centers
These POI's will be considered as an indicator of personal care awareness in a specific area.
The hierarchy classification variable SUB_CLASS allows us to identify beauty shops and salons (BEAUTY SHOPS/BEAUTY SALON) and gyms (MEMBERSHIP SPORTS AND RECREATION CLUBS/CLUB AND ASSOCIATION - UNSPECIFIED).
End of explanation
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['SUB_CLASS_10243439'],
aggregation='COUNT',
geom_col='iso_5car',
filters={Variable.get('SUB_CLASS_10243439').id : "= 'BEAUTY SHOPS/BEAUTY SALON'"}
)
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'SUB_CLASS_y':'n_beauty_pois'})
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['SUB_CLASS_10243439'],
aggregation='COUNT',
geom_col='iso_5car',
filters={Variable.get('SUB_CLASS_10243439').id : "= 'MEMBERSHIP SPORTS AND RECREATION CLUBS/CLUB AND ASSOCIATION - UNSPECIFIED'"}
)
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'SUB_CLASS':'n_gym_pois'})
ph_pharmacies['n_pois_personal_care'] = ph_pharmacies['n_beauty_pois'] + ph_pharmacies['n_gym_pois']
ph_pharmacies.drop(columns=['n_beauty_pois', 'n_gym_pois'], inplace=True)
Explanation: Isochrone enrichment
In order to count only Beauty Shops/Salons and Gyms, we will apply a filter to the enrichment. All filters are applied with an AND-like relationship. This means we need to run two independent enrichment calls, one for the beauty shops/salons and another one for the gyms.
End of explanation
dataset = Dataset.get('ags_consumer_sp_dbabddfb')
dataset.variables.to_dataframe().head()
Explanation: Consumer spending
For consumer spending, we will use AGS premium data. In particular, we will work with the dataset ags_consumer_sp_dbabddfb which contains the latest version of yearly consumer data.
Variable selection
We are interested in spending in:
- Personal care services
- Personal care products
- Health care services
End of explanation
Variable.get('XCYHC2_18141567').to_dict()
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['XCYPC3_7d26d739', 'XCYPC4_e342429a', 'XCYHC2_18141567'],
geom_col='iso_5car'
)
Explanation: The variables we're interested in are:
- XCYHC2 Health care services expenditure
- XCYPC3 Personal care services expenditure
- XCYPC4 Personal care products expenditure
End of explanation
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'XCYHC2':'health_care_services_exp',
'XCYPC3':'personal_care_services_exp',
'XCYPC4':'personal_care_products_exp'})
ph_pharmacies.head(2)
Explanation: We rename the new columns to give them a more descriptive name.
End of explanation
cmap = Map(Layer(ph_pharmacies,
geom_col='geom',
style=color_category_style('SIC8_DESCRIPTION', size=4, opacity=0.85, palette='safe', stroke_width=0.15),
widgets=[formula_widget(
'PB_ID',
operation='COUNT',
title='Total number of pharmacies',
description='Keep track of the total amount of pharmacies that meet the ranges selected on the widgets below'),
histogram_widget(
'pop_60plus',
title='Population 60+',
description='Select a range of values to filter',
buckets=15
),
histogram_widget(
'HINCYMED65',
title='Household income 65-74',
buckets=15
),
histogram_widget(
'HINCYMED75',
title='Household income 75+',
buckets=15
),
histogram_widget(
'n_pois_personal_care',
title='Number of personal care POIs',
buckets=15
),
histogram_widget(
'personal_care_products_exp',
title='Expenditure in personal care products ($)',
buckets=15
)],
legends=color_category_legend(
title='Pharmacies',
description='Type of store'),
popup_hover=[popup_element('NAME', title='Name')]
),
viewport={'zoom': 11}
)
cmap
Explanation: <a id='section4'></a>
4. Dashboard
Finally, with all the data gathered, we will build the dashboard and publish it so we can share it with our client/manager/colleague for them to explore it.
This dashboard allows you to select a range of desired expenditure in care products, people aged 60+, household income, and so forth. Selecting the desired ranges will filter out pharmacies, so that in the end you can identify the target pharmacies for your marketing campaign.
End of explanation
cmap.publish('ph_pharmacies_dashboard', password='MY_PASS', if_exists='replace')
Explanation: Publish dashboard
End of explanation |
3,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Grade: 5 / 6 -- search "TA-COMMENT" to check out a note on the last question!
Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
conn.rollback()
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
cursor = conn.cursor()
statement = "select movie_title from uitem where horror = '1' and scifi = '1' order by release_date DESC;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "select count(*) from uitem where musical = '1' or childrens = '1'"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "select occupation, count(*) from uuser group by occupation having count(*)>50"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
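For instance, the intermediate query suggested by the hint (counts per occupation, not yet filtered) could be written as:
statement = "select occupation, count(*) from uuser group by occupation"
and then extended with a HAVING clause once the per-occupation counts look right.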
End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "select distinct(uitem.movie_title) from uitem join udata on uitem.movie_id = udata.item_id where uitem.documentary = 1 and udata.rating = 5 and uitem.release_date < '1992-01-01';"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "select uitem.movie_title, avg(udata.rating)from uitem join udata on uitem.movie_id = udata.item_id where uitem.horror= 1 group by uitem.movie_title order by avg(udata.rating) limit 10; "
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = "select uitem.movie_title, avg(udata.rating) from uitem join udata on uitem.movie_id = udata.item_id where uitem.horror = 1 group by uitem.movie_title having count(udata.user_id)>10 order by avg(udata.rating) limit 10"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation |
3,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Javascript extension for a notebook
Play with Javascript extensions.
Step1: We install extensions in case it was not done before
Step2: We check the list of installed extensions (from IPython-notebook-extensions)
Step3: And then, we load one of them | Python Code:
from pyquickhelper.ipythonhelper import install_notebook_extension, get_installed_notebook_extension
Explanation: Javascript extension for a notebook
Play with Javascript extensions.
End of explanation
install_notebook_extension()
Explanation: We install extensions in case it was not done before:
End of explanation
from pyquickhelper.ipythonhelper.notebook_helper import get_jupyter_extension_dir
path = get_jupyter_extension_dir()
path
get_installed_notebook_extension()
import notebook
notebook.nbextensions.check_nbextension('autosavetime', user=True)
Explanation: We check the list of installed extensions (from IPython-notebook-extensions):
End of explanation
%%javascript
require(['base/js/utils'],
function(utils) {
utils.load_extensions('autosavetime/main');
});
print(3)
Explanation: And then, we load one of them:
End of explanation |
3,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
COSC Learning Lab
03_interface_properties.py
Related Scripts
Step1: Implementation
Step2: Execution
Step3: HTTP | Python Code:
help('learning_lab.03_interface_properties')
Explanation: COSC Learning Lab
03_interface_properties.py
Related Scripts:
* 03_interface_configuration.py
Table of Contents
Table of Contents
Documentation
Implementation
Execution
HTTP
Documentation
End of explanation
from importlib import import_module
script = import_module('learning_lab.03_interface_properties')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
Explanation: Implementation
End of explanation
run ../learning_lab/03_interface_properties.py
Explanation: Execution
End of explanation
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
Explanation: HTTP
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.