Unnamed: 0 (row index, int64, 0 to 16k) | text_prompt (string, 110 to 62.1k chars) | code_prompt (string, 37 to 152k chars)
---|---|---|
1,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QGrid
Interactive pandas dataframes
Step1: Github
https | Python Code:
# imports required by the cells below
import pandas as pd
import qgrid
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
df.columns
qgrid_widget = qgrid.show_grid(
df[["Year", "Mine_State", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
df2 = df.groupby('Mine_State').sum()
df3 = df.groupby('Mine_State').sum()
df2.loc['Wyoming', 'Production_short_tons'] = 5.181732e+08
# have to run the next line then restart your kernel
# !cd ../insight; python setup.py develop
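# Note (assumption, not part of the original notebook): %aimport is provided by
# IPython's autoreload extension, so it typically has to be enabled first, e.g.:
# %load_ext autoreload
# %autoreload 1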
%aimport insight.plotting
insight.plotting.plot_prod_vs_hours(df2, color_index=1)
insight.plotting.plot_prod_vs_hours(df3, color_index=0)
def plot_prod_vs_hours(
df, color_index=0, output_file="../img/production-vs-hours-worked.png"
):
fig, ax = plt.subplots(figsize=(10, 8))
sns.regplot(
df["Labor_Hours"],
df["Production_short_tons"],
ax=ax,
color=sns.color_palette()[color_index],
)
ax.set_xlabel("Labor Hours Worked")
ax.set_ylabel("Total Amount Produced")
x = ax.set_xlim(-9506023.213266129, 204993853.21326613)
y = ax.set_ylim(-51476801.43653282, 746280580.4034251)
fig.tight_layout()
fig.savefig(output_file)
plot_prod_vs_hours(df2, color_index=0)
plot_prod_vs_hours(df3, color_index=1)
# make a change via qgrid
df3 = qgrid_widget.get_changed_df()
Explanation: QGrid
Interactive pandas dataframes: https://github.com/quantopian/qgrid
End of explanation
qgrid_widget = qgrid.show_grid(
df2[["Year", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
Explanation: Github
https://github.com/jbwhit/jupyter-tips-and-tricks/commit/d3f2c0cef4dfd28eb3b9077595f14597a3022b1c?short_path=04303fc#diff-04303fce5e9bb38bcee25d12d9def22e
End of explanation |
1,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Task-1.-Compiling-Ebola-Data"><span class="toc-item-num">Task 1. </span>Compiling Ebola Data</a></div>
<div class="lev1"><a href="#Task-2.-RNA-Sequences"><span class="toc-item-num">Task 2. </span>RNA Sequences</a></div>
<div class="lev1"><a href="#Task-3.-Class-War-in-Titanic"><span class="toc-item-num">Task 3. </span>Class War in Titanic</a></div></p>
Step1: Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
Step2: Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
Step3: Creating and filling the DataFrame
In order to iterate only once over the data folder, we will attach the metadata to each excel spreadsheet right after creating a DataFrame with it. This will allow the code to be shorter and clearer, but also to iterate only once on every line and therefore be more efficient.
Step4: 3. Cleaning and reindexing
First we get rid of the NaN values by replacing them with "unknown". In order to have a more meaningful and single index, we will reset it to be the name of the RNA sequence.
Step5: Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
For each of the following questions state clearly your assumptions and discuss your findings
Step6: Question 3.2
"Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. "
assumptions
Step7: Question 3.3
Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
assumptions
Step8: Question 3.4
For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
assumptions
Step9: Question 3.5
"Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram."
assumptions
Step10: Question 3.6
"Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index."
assumptions | Python Code:
# Imports
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import csv
import calendar
import webbrowser
from datetime import datetime
# Constants
DATA_FOLDER = 'Data/'
Explanation: Table of Contents
<p><div class="lev1"><a href="#Task-1.-Compiling-Ebola-Data"><span class="toc-item-num">Task 1. </span>Compiling Ebola Data</a></div>
<div class="lev1"><a href="#Task-2.-RNA-Sequences"><span class="toc-item-num">Task 2. </span>RNA Sequences</a></div>
<div class="lev1"><a href="#Task-3.-Class-War-in-Titanic"><span class="toc-item-num">Task 3. </span>Class War in Titanic</a></div></p>
End of explanation
'''
Functions needed to solve task 1
'''
#function to import excel file into a dataframe
def importdata(path,date):
allpathFiles = glob.glob(DATA_FOLDER+path+'/*.csv')
list_data = []
for file in allpathFiles:
excel = pd.read_csv(file,parse_dates=[date])
list_data.append(excel)
return pd.concat(list_data)
#function to add the month on a new column of a DataFrame
def add_month(df):
copy_df = df.copy()
months = [calendar.month_name[x.month] for x in copy_df.Date]
copy_df['Month'] = months
return copy_df
#founction which loc only the column within a country and a specified month
#return a dataframe
def chooseCountry_month(dataframe,country,descr,month):
df = dataframe.loc[(dataframe['Country']==country) & (dataframe['Description']==descr)]
#df = add_month(df)
df_month = df.loc[(df['Month']==month)]
return df_month
# Create a dataframe with the number of death, the new cases and the daily infos for a country and a specified month
def getmonthresults(dataframe,country,month):
if country =='Liberia':
descr_kill ='Total death/s in confirmed cases'
descr_cases ='Total confirmed cases'
if country =='Guinea':
descr_kill ='Total deaths of confirmed'
descr_cases ='Total cases of confirmed'
if country == 'Sierra Leone':
descr_kill ='death_confirmed'
descr_cases ='cum_confirmed'
df_kill = chooseCountry_month(dataframe,country,descr_kill,month)
df_cases = chooseCountry_month(dataframe,country,descr_cases,month)
#calculate the number of new cases and of new deaths for the all month
res_kill = int(df_kill.iloc[len(df_kill)-1].Totals)-int(df_kill.iloc[0].Totals)
res_cases = int(df_cases.iloc[len(df_cases)-1].Totals)-int(df_cases.iloc[0].Totals)
#calculate the number of days counted which is last day of register - first day of register
nb_day = df_kill.iloc[len(df_kill)-1].Date.day-df_kill.iloc[0].Date.day
# Sometimes the values in the dataframe are wrong due to the excel files which are not all the same!
# We then get negative results. Therefore we replace them all by NaN!
# check nb_day first, to avoid dividing by zero in the branches below
if(nb_day == 0):
monthreport = pd.DataFrame({'New cases':'notEnoughdatas','Deaths':'notEnoughdatas','daily average of New cases':'notEnoughdatas','daily average of Deaths':'notEnoughdatas','month':[month],'Country':[country]})
elif(res_cases < 0)&(res_kill <0):
monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[np.nan],'daily average of New cases':[np.nan],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})
elif(res_cases >= 0) &( res_kill <0):
monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[np.nan],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})
elif(res_cases < 0) & (res_kill >= 0):
monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[res_kill],'daily average of New cases':[np.nan],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})
else:
monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[res_kill],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})
return monthreport
#check if the month and the country is in the dataframe df
def checkData(df,month,country):
check = df.loc[(df['Country']==country)& (df['Month']== month)]
return check
#return a dataframe with all the infos(daily new cases, daily death) for each month and each country
def getResults(data):
Countries = ['Guinea','Liberia','Sierra Leone']
Months = ['January','February','March','April','May','June','July','August','September','October','November','December']
results=[]
for country in Countries:
for month in Months:
if not(checkData(data,month,country).empty) : #check if the datas for the month and country exist
res = getmonthresults(data,country,month)
results.append(res)
return pd.concat(results)
# import data from guinea
path_guinea = 'Ebola/guinea_data/'
data_guinea = importdata(path_guinea,'Date')
# set the new order / change the columns / keep only the relevant data / add the name of the country
data_guinea = data_guinea[['Date', 'Description','Totals']]
data_guinea['Country'] = ['Guinea']*len(data_guinea)
#search for New cases and death!!
#descr(newcases): "Total cases of confirmed" // descr(deaths): "Total deaths of confirmed"
data_guinea = data_guinea.loc[(data_guinea.Description=='Total cases of confirmed')|(data_guinea.Description=='Total deaths of confirmed')]
#import data from liberia
path_liberia = 'Ebola/liberia_data/'
data_liberia = importdata(path_liberia,'Date')
# set the new order / change the columns / keep only the relevant data / add the name of the country
data_liberia = data_liberia[['Date', 'Variable','National']]
data_liberia['Country'] = ['Liberia']*len(data_liberia)
#search for New cases and death!!
#descr(newcases): "Total confirmed cases" // descr(deaths): "Total death/s in confirmed cases"
data_liberia = data_liberia.loc[(data_liberia.Variable=='Total confirmed cases')|(data_liberia.Variable=='Total death/s in confirmed cases')]
#change the name of the columns to be able merge the 3 data sets
data_liberia = data_liberia.rename(columns={'Date': 'Date', 'Variable': 'Description','National':'Totals'})
#import data from sierra leonne
path_sl = 'Ebola/sl_data/'
data_sl = importdata(path_sl,'date')
# set the new order / change the columns / keep only the relevant data / add the name of the country
data_sl = data_sl[['date', 'variable','National']]
data_sl['Country'] = ['Sierra Leone']*len(data_sl)
#search for new cases and death
#descr(newcases): "cum_confirmed" // descr(deaths): "death_confirmed"
data_sl = data_sl.loc[(data_sl.variable=='cum_confirmed')|(data_sl.variable=='death_confirmed')]
#change the name of the columns to be able merge the 3 data sets
data_sl = data_sl.rename(columns={'date': 'Date', 'variable': 'Description','National':'Totals'})
#merge the 3 dataframe into ONE which we'll apply our analysis
dataFrame = [data_guinea,data_liberia,data_sl]
data = pd.concat(dataFrame)
# Replace the NaN by 0;
data = data.fillna(0)
#add a column with the month
data = add_month(data)
#we now show the whole merged dataframe with the input of each file
data
#get the results from the data set -> see the function
results = getResults(data)
#print the results
results
Explanation: Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
End of explanation
Sheet10_Meta = pd.read_excel(DATA_FOLDER +'microbiome/metadata.xls')
allFiles = glob.glob(DATA_FOLDER + 'microbiome' + "/MID*.xls")
allFiles
Explanation: Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
End of explanation
#Creating an empty DataFrame to store our data and initializing a counter.
Combined_data = pd.DataFrame()
K = 0
while (K < int(len(allFiles))):
#Creating a DataFrame and filling it with the excel's data
df = pd.read_excel(allFiles[K], header=None)
#Getting the metadata of the corresponding spreadsheet
df['BARCODE'] = Sheet10_Meta.at[int(K), 'BARCODE']
df['GROUP'] = Sheet10_Meta.at[int(K), 'GROUP']
df['SAMPLE'] = Sheet10_Meta.at[int(K),'SAMPLE']
#Append the recently created DataFrame to our combined one
Combined_data = Combined_data.append(df)
K = K + 1
#Renaming the columns with meaningful names
Combined_data.columns = ['Name', 'Value','BARCODE','GROUP','SAMPLE']
Combined_data.head()
Explanation: Creating and filling the DataFrame
In order to iterate only once over the data folder, we will attach the metadata to each excel spreadsheet right after creating a DataFrame with it. This will allow the code to be shorter and clearer, but also to iterate only once on every line and therefore be more efficient.
End of explanation
#Replacing the NaN values with unknown
Combined_data = Combined_data.fillna('unknown')
#Resetting the index
Combined_data = Combined_data.set_index('Name')
#Showing the result
Combined_data
Explanation: 3. Cleaning and reindexing
First we get rid of the NaN values by replacing them with "unknown". In order to have a more meaningful and single index, we will reset it to be the name of the RNA sequence.
End of explanation
'''
Here is a sample of the information in the titanic dataframe
'''
# Importing titanic.xls info with Pandas
titanic = pd.read_excel('Data/titanic.xls')
# printing only the 30 first and last rows of information
print(titanic)
'''
To describe the INTENDED values and types of the data we will show you the titanic.html file that was provided to us
Notice:
- 'age' is of type double, so someone can be 17.5 years old, mostly used with babies that are 0.x years old
- 'cabin' is stored as integer, but it has characters and letters
- By this model, embarked is stored as an integer, which has to be interpreted as the 3 different embarkation ports
- It says that 'boat' is stored as an integer even though it has spaces and letters, it should be stored as string
PS: it might be that the information stored as integer is supposed to be categorical data,
...because they have a "small" amount of valid options
'''
# Display html info in Jupyter Notebook
from IPython.core.display import display, HTML
htmlFile = 'Data/titanic.html'
display(HTML(htmlFile))
'''
The default types of the data after import:
Notice:
- the strings and characters are imported as objects
- 'survived' is imported as int instead of double (which is in our opinion better since it's only 0 and 1)
- 'sex' is imported as object not integer because it is a string
'''
titanic.dtypes
'''
Below you can see the value range of the different numerical values.
name, sex, ticket, cabin, embarked, boat and home.dest are not included because they can't be quantified numerically.
'''
titanic.describe()
'''
Additional information that is important to remember when manipulating the data
is if/where there are NaN values in the dataset
'''
# This displays the number of NaN there is in different attributes
print(pd.isnull(titanic).sum())
'''
Some of this data is missing while some is meant to describe 'No' or something of meaning.
Example:
Cabin has 1014 NaN in its column, it might be that every passenger had a cabin and the data is missing.
Or it could mean that most passengers did not have a cabin, or a mix of both. The displayed titanic.html file
gives us some insight into whether this is correct. It says that there are 0 NaN in the column. This indicates that
there are 1014 people without a cabin. Boat also has 823 NaN's, while titanic.html lists 0 NaN's.
This is probably because most of those who died weren't in a boat.
'''
'''
What attributes should be stored as categorical information?
Categorical data is essentially 8-bit integers which means it can store up to 2^8 = 256 categories
Benefit is that it makes memory usage lower and it has a performance increase in calculations.
'''
print('Number of unique values in... :')
for attr in titanic:
print(" {attr}: {u}".format(attr=attr, u=len(titanic[attr].unique())))
'''
We think it will be smart to categorize: 'pclass', 'survived', 'sex', 'cabin', 'embarked' and 'boat'
because they have under 256 categories and don't have a strong numerical value like 'age'.
'survived' is a border case because it might be more practical to work with integers in some settings
'''
# changing the attributes to categorical data
titanic.pclass = titanic.pclass.astype('category')
titanic.survived = titanic.survived.astype('category')
titanic.sex = titanic.sex.astype('category')
titanic.cabin = titanic.cabin.astype('category')
titanic.embarked = titanic.embarked.astype('category')
titanic.boat = titanic.boat.astype('category')
#Illustrate the change by printing out the new types
titanic.dtypes
Explanation: Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
For each of the following questions state clearly your assumptions and discuss your findings:
Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals.
Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.
Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.
Question 3.1
Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
Assumptions:
- "For each exercise, please provide both a written explanation of the steps you will apply to manipulate the data, and the corresponding code." We assume that "written explanation can come in the form of commented code as well as text"
- We assume that we must not describe the value range of attributes that contain strings, as we don't feel the length of strings or ASCII values gives any insight
End of explanation
#Plotting how many passengers travelled in each class (1st, 2nd and 3rd class)
pc = titanic.pclass.value_counts().sort_index().plot(kind='bar')
pc.set_title('Travel classes')
pc.set_ylabel('Number of passengers')
pc.set_xlabel('Travel class')
pc.set_xticklabels(('1st class', '2nd class', '3rd class'))
plt.show(pc)
#Plotting the amount of people that embarked from different cities(C=Cherbourg, Q=Queenstown, S=Southampton)
em = titanic.embarked.value_counts().sort_index().plot(kind='bar')
em.set_title('Ports of embarkation')
em.set_ylabel('Number of passengers')
em.set_xlabel('Port of embarkation')
em.set_xticklabels(('Cherbourg', 'Queenstown', 'Southampton'))
plt.show(em)
#Plotting what sex the passengers are
sex = titanic.sex.value_counts().plot(kind='bar')
sex.set_title('Gender of the passengers')
sex.set_ylabel('Number of Passengers')
sex.set_xlabel('Gender')
sex.set_xticklabels(('Female', 'Male'))
plt.show(sex)
#Plotting agegroup of passengers
bins = [0,10,20,30,40,50,60,70,80]
age_grouped = pd.DataFrame(pd.cut(titanic.age, bins))
ag = age_grouped.age.value_counts().sort_index().plot.bar()
ag.set_title('Age of Passengers ')
ag.set_ylabel('Number of passengers')
ag.set_xlabel('Age groups')
plt.show(ag)
Explanation: Question 3.2
"Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. "
assumptions:
End of explanation
'''
Parsing the cabin floor into floors A, B, C, D, E, F, G, T and displaying the result in a pie chart
'''
#Dropping NaN (People without cabin)
cabin_floors = titanic.cabin.dropna()
# removes digits and spaces
cabin_floors = cabin_floors.str.replace(r'[\d ]+', '')
# removes duplicate letters and leave unique (CC -> C) (FG -> G)
cabin_floors = cabin_floors.str.replace(r'(.)(?=.*\1)', '')
# removes ambiguous data from the dataset (FE -> NaN)(FG -> NaN)
cabin_floors = cabin_floors.str.replace(r'([A-Z]{1})\w+', 'NaN' )
# Recategorizing (Since we altered the entries, we messed with the categories)
cabin_floors = cabin_floors.astype('category')
# Removing NaN (in this case ambiguous data)
cabin_floors = cabin_floors.cat.remove_categories('NaN')
cabin_floors = cabin_floors.dropna()
# Preparing data for plt.pie
numberOfCabinPlaces = cabin_floors.count()
grouped = cabin_floors.groupby(cabin_floors).count()
sizes = np.array(grouped)
labels = np.array(grouped.index)
# Plotting the pie chart
plt.pie(sizes, labels=labels, autopct='%1.1f%%', pctdistance=0.75, labeldistance=1.1)
print("There are {cabin} passengers that have cabins and {nocabin} passengers without a cabin"
.format(cabin=numberOfCabinPlaces, nocabin=(len(titanic) - numberOfCabinPlaces)))
Explanation: Question 3.3
Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
assumptions:
- Because we are tasked with categorizing persons by the floor of their cabin it was problematic that you had cabin input: "F E57" and "F G63". There were only 7 of these instances with conflicting cabin floors. We also presumed that there was a floor "T", even though there was only one instance, so it might have been a typo.
- We assume that you don't want to include people without a cabin floor
End of explanation
# function that returns the number of people that survived and died given a specific travelclass
def survivedPerClass(pclass):
survived = len(titanic.survived[titanic.survived == 1][titanic.pclass == pclass])
died = len(titanic.survived[titanic.survived == 0][titanic.pclass == pclass])
return [survived, died]
# Fixing the layout horizontal
the_grid = plt.GridSpec(1, 3)
labels = ["Survived", "Died"]
# Each iteration plots a pie chart
for p in titanic.pclass.unique():
sizes = survivedPerClass(p)
plt.subplot(the_grid[0, p-1], aspect=1 )
plt.pie(sizes, labels=labels, autopct='%1.1f%%')
plt.show()
Explanation: Question 3.4
For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
assumptions:
End of explanation
# group by selected data and get a count for each category
survivalrate = titanic.groupby(['pclass', 'sex', 'survived']).size()
# calculate percentage
survivalpercentage = survivalrate.groupby(level=['pclass', 'sex']).apply(lambda x: x / x.sum() * 100)
# plotting in a histogram
histogram = survivalpercentage.filter(like='1', axis=0).plot(kind='bar')
histogram.set_title('Proportion of the passengers that survived by travel class and sex')
histogram.set_ylabel('Percent likelihood of surviving the Titanic')
histogram.set_xlabel('class/gender group')
plt.show(histogram)
Explanation: Question 3.5
"Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram."
assumptions:
1. By "proportions" We assume it is a likelyhood-percentage of surviving
End of explanation
#drop NaN rows
age_without_nan = titanic.age.dropna()
#categorizing
age_categories = pd.qcut(age_without_nan, 2, labels=["Younger", "Older"])
#Numbers to explain difference
median = int(np.float64(age_without_nan.median()))
amount = int((age_without_nan == median).sum())
print("The Median age is {median} years old".format(median = median))
print("and there are {amount} passengers that are {median} year old \n".format(amount=amount, median=median))
print(age_categories.groupby(age_categories).count())
print("\nAs you can see the pd.qcut does not cut into entirely equal sized bins, because the age is of a discreet nature")
# imported for the sake of surpressing some warnings
import warnings
warnings.filterwarnings('ignore')
# extract relevant attributes
csas = titanic[['pclass', 'sex', 'age', 'survived']]
csas.dropna(subset=['age'], inplace=True)
# Defining the categories
csas['age_group'] = csas.age > csas.age.median()
csas['age_group'] = csas['age_group'].map(lambda age_category: 'older' if age_category else "younger")
# Converting to int to make it able to aggregate and give percentage
csas.survived = csas.survived.astype(int)
g_categories = csas.groupby(['pclass', 'age_group', 'sex'])
result = pd.DataFrame(g_categories.survived.mean()).rename(columns={'survived': 'survived proportion'})
# reset current index and specify the unique index
result.reset_index(inplace=True)
unique_index = result.pclass.astype(str) + ': ' + result.age_group.astype(str) + ' ' + result.sex.astype(str)
# Finalize the unique index dataframe
result_w_unique = result[['survived proportion']]
result_w_unique.set_index(unique_index, inplace=True)
print(result_w_unique)
Explanation: Question 3.6
"Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index."
assumptions:
1. By "proportions" we assume it is a likelyhood-percentage of surviving
2. To create 2 equally populated age categories; we will find the median and round up from the median to nearest whole year difference before splitting.
End of explanation |
1,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are
Step1: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
Step2: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
Step3: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
Step4: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail
Step5: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab
Step6: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include
Step7: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
Step8: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things
Step9: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we
Step10: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps. | Python Code:
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are:
While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution.
Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand.
You can get more details about various popular machine learning libraries in this comparison.
To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing ofcourse!). The common goal is to predict the median house value using the other columns as features.
This lab will conclude with two assignments:
Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model.
Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem.
Let's start by importing some of the libraries we will use for this tutorial:
End of explanation
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
End of explanation
print dataset['DESCR']
Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
End of explanation
# Create a datset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
End of explanation
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail:
End of explanation
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab:
We will first re-split the data into a feature set (X) and a target set (y)
Then we will normalize the feature set so that the values range from 0 to 1
Finally, we will split both data sets into a training and test set.
End of explanation
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 12
num_hidden_2 = 12
learning_rate = 0.0001
training_epochs = 100
dropout_keep_prob = 0.3 # probability of keeping each neuron during dropout (1.0 would mean no dropout)
# variable to control the resolution at which the training results are stored
display_step = 1
Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include:
batch size, which sets how many training samples are used at a time
learning rate which controls how quickly the gradient descent algorithm works
training epochs which sets how many rounds of training occurs
dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here.
End of explanation
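# Small illustration (not part of the lab, assumes the same TF API used below):
# tf.nn.dropout keeps each value with probability keep_prob and rescales the
# survivors by 1/keep_prob, so keep_prob=1.0 means no dropout at all.
with tf.Session() as demo_session:
    print(demo_session.run(tf.nn.dropout(tf.ones([1, 10]), 0.5)))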
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
End of explanation
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
Each layer of the neural network needs it's own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
# computing first hidden layer from input, using relu activation function
fc1 = tf.nn.relu(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
# computing second hidden layer from first hidden layer, using relu activation function
fc2 = tf.nn.relu(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
For the optimization function we will use basic Gradient Descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
Explanation: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things:
describes the architecture of the network, including how many layers it has and how many neurons are in each layer
initializes all the parameters of the network
describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result
defines the loss function which describes how well the model is performing
specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss
Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'.
End of explanation
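# A minimal aside (not part of the lab) showing the same graph/session pattern
# and the 'lazy evaluation' idea on a toy graph:
toy_graph = tf.Graph()
with toy_graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b  # nothing is computed yet; this only adds nodes to the graph
with tf.Session(graph=toy_graph) as toy_session:
    print(toy_session.run(c))  # only now are the ops needed for 'c' executed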
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as ouputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calcule the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we:
Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above.
Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow.
End of explanation
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
End of explanation |
1,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Customizing IPython - Magics
IPython extends Python by adding shell-like commands called magics.
Step2: Defining your own magic
As we have seen already, IPython has cell and line magics. You can define your own magics using any Python function and the register_magic_function method
Step3: Exercise
Define %tic and %toc magics, which can be used for simple timings, e.g. where
python
for p in range(1,4)
Step6: Cell Magic
Cell magics take two args
Step7: Exercise
Can you write and register a cell magic that automates the outer iteration,
timing a block for various values of a particular variable
Step9: Executing Notebooks
We can load a notebook into memory using IPython.nbformat.
Step10: A notebook is just a dictionary with attribute access for convenience.
Step11: We can see all the cells and their type
Step12: Now I can run all of the code cells with get_ipython().run_cell
Step13: And we can now use the function that was defined in that notebook
Step14: Exercise
Can you write and register an %nbrun line magic to run a notebook?
python
%nbrun Sample | Python Code:
%lsmagic
import numpy
%timeit A=numpy.random.random((1000,1000))
%%timeit -n 1
A=numpy.random.random((1000,1000))
b = A.sum()
Explanation: Customizing IPython - Magics
IPython extends Python by adding shell-like commands called magics.
End of explanation
ip = get_ipython()
import time
def sleep_magic(line):
"""A simple function for sleeping"""
t = float(line)
time.sleep(t)
ip.register_magic_function?
ip.register_magic_function(sleep_magic, "line", "sleep")
%sleep 2
%sleep?
Explanation: Defining your own magic
As we have seen already, IPython has cell and line magics. You can define your own magics using any Python function and the register_magic_function method:
End of explanation
%load soln/tictocf.py
import numpy as np
import sys
for p in range(1,4):
N = 10**p
print("N=%i" % N)
sys.stdout.flush()
%tic
A = np.random.random((N,N))
np.linalg.eigvals(A)
%toc
Explanation: Exercise
Define %tic and %toc magics, which can be used for simple timings, e.g. where
python
for p in range(1,4):
N = 10**p
print "N=%i" % N
%tic
A = np.random.random((N,N))
np.linalg.eigvals(A)
%toc
each %toc will print the time since the last %tic. Create separate tic and toc functions that read and write
a global time variable.
End of explanation
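# One possible implementation of the %tic/%toc magics loaded from soln/tictocf.py
# above (the actual solution file is not shown here, so this is only a sketch):
import time

_tic_time = None

def tic(line):
    """Store the current time in a global variable."""
    global _tic_time
    _tic_time = time.time()

def toc(line):
    """Print the time elapsed since the last %tic."""
    print("%.3f s elapsed" % (time.time() - _tic_time))

get_ipython().register_magic_function(tic, "line", "tic")
get_ipython().register_magic_function(toc, "line", "toc")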
def dummy_cell_magic(line, cell):
"""dummy cell magic for displaying the line and cell it is passed"""
print("line: %r" % line)
print("cell: %r" % cell)
ip.register_magic_function(dummy_cell_magic, "cell", "dummy")
%%dummy this is the line
this
is the
cell
def parse_magic_line(line):
"""parse a magic line into a name and eval'd expression"""
name, values_s = line.split(None, 1)
values = eval(values_s, get_ipython().user_ns)
return name, values
parse_magic_line("x range(5)")
Explanation: Cell Magic
Cell magics take two args:
the line on the same line of the magic
the cell the multiline body of the cell after the first line
End of explanation
%load soln/scalemagic.py
%%scale N [ int(10**p) for p in range(1,4) ]
A = np.random.random((N,N))
np.linalg.eigvals(A)
%%scale N [ int(2**p) for p in np.linspace(6, 11, 11) ]
A = np.random.random((N,N))
np.linalg.eigvals(A)
Explanation: Exercise
Can you write and register a cell magic that automates the outer iteration,
timing a block for various values of a particular variable:
End of explanation
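# A possible sketch of what soln/scalemagic.py might contain (the actual solution
# file is loaded above but not shown); it reuses parse_magic_line from earlier:
import time

def scale_magic(line, cell):
    """Run a cell once per value of a scaling variable, timing each run."""
    name, values = parse_magic_line(line)
    ip = get_ipython()
    for v in values:
        ip.user_ns[name] = v
        start = time.time()
        ip.run_cell(cell)
        print("%s=%r: %.3f s" % (name, v, time.time() - start))

get_ipython().register_magic_function(scale_magic, "cell", "scale")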
import io
import os
import IPython.nbformat as nbf
def load_notebook(filename):
"""load a notebook object from a filename"""
if not os.path.exists(filename) and not filename.endswith(".ipynb"):
filename = filename + ".ipynb"
with io.open(filename) as f:
return nbf.read(f, as_version=4)
nb = load_notebook("_Sample")
Explanation: Executing Notebooks
We can load a notebook into memory using IPython.nbformat.
End of explanation
nb.keys()
cells = nb.cells
cells
Explanation: A notebook is just a dictionary with attribute access for convenience.
End of explanation
for cell in cells:
print()
print('----- %s -----' % cell.cell_type)
print(cell.source)
Explanation: We can see all the cells and their type
End of explanation
for cell in cells:
ip = get_ipython()
if cell.cell_type == 'code':
ip.run_cell(cell.source, silent=True)
Explanation: Now I can run all of the code cells with get_ipython().run_cell
End of explanation
nb_info(nb)
Explanation: And we can now use the function that was defined in that notebook:
End of explanation
%load soln/nbrun.py
%nbrun _Sample
Explanation: Exercise
Can you write and register an %nbrun line magic to run a notebook?
python
%nbrun Sample
End of explanation |
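For reference, one possible implementation of such a line magic (a sketch building on the load_notebook helper defined above; not necessarily the contents of soln/nbrun.py):
def nbrun(line):
    """%nbrun <notebook> -- load a notebook and run all of its code cells."""
    ip = get_ipython()
    nb = load_notebook(line.strip())  # helper defined above
    for cell in nb.cells:
        if cell.cell_type == 'code':
            ip.run_cell(cell.source, silent=True)

get_ipython().register_magic_function(nbrun, "line", "nbrun")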
1,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 1
Imports
Step2: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0
Step3: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step4: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 1
Imports
End of explanation
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array"""
check = np.zeros((size,size),float)
check.fill(0.0)
n = 0
while n<(size):
if n % 2 == 0: #For even number rows, start filling 1's at position 0
p = 0
else: #For odd number rows, start filling 1's at position 1
p = 1
while p<(size):
check[n,p] = (1.0) #Fill 1's at position n,p
p = p + 2 #Skip one position in row before filling in a row (Key to the checkerboard pattern)
n = n + 1 #Move to next row
return check
#print (checkerboard(7)) #Was used to test output
#raise NotImplementedError()
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
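For comparison, the same board can be built without explicit loops. The following vectorized sketch (an alternative illustration, not the graded solution above) relies on NumPy slice assignment:
def checkerboard_vectorized(size):
    """Same checkerboard pattern, built with slicing instead of while loops."""
    board = np.zeros((size, size), dtype=float)
    board[::2, ::2] = 1.0    # even rows, even columns -> 1.0, so board[0, 0] == 1.0
    board[1::2, 1::2] = 1.0  # odd rows, odd columns -> 1.0
    return board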
va.set_block_size(10)
va.vizarray(checkerboard(20))
#raise NotImplementedError()
assert True
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
va.set_block_size(5)
va.vizarray(checkerboard(27))
#raise NotImplementedError()
assert True
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation |
1,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification datasets using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-word representation for sentences
1. Learn how to use DNN/CNN/RNN model to classify text in keras
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c
Step12: Let's write the sample dataset to disk.
Step13: Note
Step14: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step15: Loading the dataset
Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times).
Step16: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that
Step17: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens, padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
Lab Task #2
Step18: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
Step19: Lab Task #3
Step20: Preparing the train/test splits
Let's split our data into train and test splits
Step21: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since that is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step22: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
Step23: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6
Step24: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
Step25: Building an RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Step26: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs)
Step27: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the embedding output should next pass through a 1-dimensional convolution and, ultimately, the final fully connected dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
Step28: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps. | Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
Explanation: Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification datasets using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-word representation for sentences
1. Learn how to use DNN/CNN/RNN model to classify text in keras
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
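One possible completion of the query in the cell above (a sketch, not necessarily the official solution notebook) is:
%%bigquery --project $PROJECT
SELECT
    url, title, score
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    LENGTH(title) > 10
    AND score > 10
    AND LENGTH(url) > 0
LIMIT 10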
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
Lab task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex on the url of the article. To count the number of articles you'll use a GROUP BY in SQL, and we'll also restrict our attention to only those articles whose title has more than 10 characters.
End of explanation
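One possible completion of the query in the cell above (a sketch; the num_articles alias is an assumption consistent with the ORDER BY clause already present) is:
%%bigquery --project $PROJECT
SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    COUNT(title) AS num_articles
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
GROUP BY
    source
ORDER BY num_articles DESC
LIMIT 100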
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories.
End of explanation
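One possible completion for the sampling cell above (a sketch; passing random_state=SEED is an assumption made for reproducibility, not required by the lab):
sample_title_dataset = title_dataset.sample(n=1000, random_state=SEED)
sample_title_dataset.source.value_counts()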
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
Explanation: Let's write the sample dataset to disk.
End of explanation
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow.keras.layers import (
Embedding,
Flatten,
GRU,
Conv1D,
Lambda,
Dense,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Note: You can simply ignore the incompatibility error related
to tensorflow-serving-api and tensorflow-io.
When re-running the above cell you will see the output
tensorflow==2.0.0, which is the installed version of TensorFlow.
End of explanation
LOGDIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times).
End of explanation
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = # TODO: Your code goes here.
padded_sequences = # TODO: Your code goes here.
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
Explanation: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens, padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
Lab Task #2:
Complete the code in the create_sequences function below to
* create text sequences from texts using the tokenizer we created above
* pad the end of those text sequences to have length max_len
End of explanation
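One possible completion of create_sequences (a sketch, not necessarily the official solution notebook) uses the fitted tokenizer together with pad_sequences imported above; the _example suffix marks it as illustrative:
def create_sequences_example(texts, max_len=MAX_LEN):
    # integerize the titles with the fitted tokenizer, then pad the end of each sequence
    sequences = tokenizer.texts_to_sequences(texts)
    padded_sequences = pad_sequences(sequences, max_len, padding='post')
    return padded_sequences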
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
# TODO 2
def encode_labels(sources):
classes = # TODO: Your code goes here.
one_hots = # TODO: Your code goes here.
return one_hots
encode_labels(titles_df.source[:4])
Explanation: Lab Task #3:
Complete the code in the encode_labels function below to
* create a list that maps each source in sources to its corresponding numeric value using the dictionary CLASSES above
* use the Keras function to one-hot encode the variable classes
End of explanation
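A possible completion of encode_labels (again a hedged sketch with an _example suffix) maps each source through the CLASSES dictionary and one-hot encodes the result with to_categorical:
def encode_labels_example(sources):
    # map each source string to its integer class, then one-hot encode
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots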
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since that is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
End of explanation
# TODOs 4-6
def build_dnn_model(embed_dim):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6:
Create a Keras Sequential model with three layers:
* The first layer should be an embedding layer with output dimension equal to embed_dim.
* The second layer should use a Lambda layer to create a bag-of-words representation of the sentences by computing the mean.
* The last layer should use a Dense layer to predict which class the example belongs to.
End of explanation
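One way to fill in the three layers (a sketch following the lab instructions above, not necessarily the official solution notebook) is:
def build_dnn_model_example(embed_dim):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]),  # word integers -> dense vectors
        Lambda(lambda x: tf.reduce_mean(x, axis=1)),                  # average the word vectors (bag-of-words)
        Dense(N_CLASSES, activation='softmax'),                       # class probabilities
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy'])
    return model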
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'dnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 0
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()
dnn_model.summary()
Explanation: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
End of explanation
def build_rnn_model(embed_dim, units):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building an RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6:
Complete the code below to build an RNN model which predicts the article class. The code below is similar to the DNN you created above; however, here we do not need to use a bag-of-words representation of the sentence. Instead, you can pass the embedding layer directly to an RNN/LSTM/GRU layer.
End of explanation
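One possible completion (a sketch, not necessarily the official solution notebook): replace the bag-of-words average with a single GRU layer fed by the embedding:
def build_rnn_model_example(embed_dim, units):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True),
        GRU(units),                              # the recurrent layer keeps track of word order
        Dense(N_CLASSES, activation='softmax'),
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy'])
    return model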
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'rnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 0
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
rnn_model.summary()
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs):
End of explanation
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the embedding output should next pass through a 1-dimensional convolution and, ultimately, the final fully connected dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
End of explanation
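One possible completion (a sketch, not necessarily the official solution notebook). The mask_zero flag is omitted here because Conv1D does not accept a mask in every TensorFlow version; add it back if your version supports it:
def build_cnn_model_example(embed_dim, filters, ksize, strides):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]),
        Conv1D(filters=filters, kernel_size=ksize, strides=strides, activation='relu'),
        Flatten(),                               # flatten the feature maps before the classifier
        Dense(N_CLASSES, activation='softmax'),
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy'])
    return model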
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'cnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 0
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()
cnn_model.summary()
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
End of explanation |
1,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text mining
In this task we will use the nltk package to recognize and classify named entities in a given text (in this case, an article about the American Revolution from Wikipedia).
The nltk.ne_chunk function can be used for both recognition and classification of named entities. We will also implement a custom NER function to recognize entities, and a custom function to classify named entities using their Wikipedia articles.
Step1: Suppress wikipedia package warnings.
Step2: Helper functions to process output of nltk.ne_chunk and to count frequency of named entities in a given text.
Step3: Since nltk.ne_chunk tends to put the same named entities into more than one class (like 'American'
Step4: Our custom NER function from the example here.
Step5: Loading processed article, approximately 500 sentences. Regex substitution removes reference links (e.g. [12])
Step6: Now we try to recognize entities with both nltk.ne_chunk and our custom_NER function and print 10 most frequent entities.
Yielded results seem to be fairly similar. nltk.ne_chunk function also added basic classification tags.
Step7: Next we want to do our own classification, using Wikipedia articles for each named entity. The idea is to find an article matching the entity string (for example 'America') and then create a noun phrase from its first sentence. When no suitable article or description is found, the entity classification will be 'Thing'.
Step8: Obviously this classification is far more specific than the tags used by nltk.ne_chunk. We can also see that both NER methods mistook common words for entities unrelated to the article (for example 'New').
Since the custom_NER function relies on uppercase letters to recognize entities, this is commonly caused by the first words of sentences.
The lack of a description for the entity 'America' is caused by the simple way the get_noun_phrase function constructs descriptions. It looks for basic words like 'is', so more advanced language can throw it off. This could be fixed by searching the Simple English Wikipedia, or using it as a fallback when no suitable phrase is found on the normal English Wikipedia (for example, compare the article about the Americas on the simple and normal wikis).
I also tried to search for a more general verb (present tense verb, tag 'VBZ'), but this yielded worse results. Another improvement could be simply expanding the verb list in get_noun_phrase with other suitable verbs.
When no exact match for pair (entity, article) is found, wikipedia module raises DisambiguationError, which (same as disambiguation page on Wikipedia) offers possible matching pages. When this happens, first suggested page is picked. This however does not have to be the best page for given entity.
Step9: When searching simple wiki, entity 'Americas' gets fairly reasonable description. However there seems to be an issue with handling DisambiguationError in some cases when looking for first page in DisambiguationError.options raises another DisambiguationError (even if pages from .options should be guaranteed hit). | Python Code:
import nltk
import numpy as np
import wikipedia
import re
Explanation: Text mining
In this task we will use the nltk package to recognize and classify named entities in a given text (in this case, an article about the American Revolution from Wikipedia).
The nltk.ne_chunk function can be used for both recognition and classification of named entities. We will also implement a custom NER function to recognize entities, and a custom function to classify named entities using their Wikipedia articles.
End of explanation
import warnings
warnings.filterwarnings('ignore')
Explanation: Suppress wikipedia package warnings.
End of explanation
def count_entites(entity, text):
s = entity
if type(entity) is tuple:
s = entity[0]
return len(re.findall(s, text))
def get_top_n(entities, text, n):
a = [ (e, count_entites(e, text)) for e in entities]
a.sort(key=lambda x: x[1], reverse=True)
return a[0:n]
# For a list of entities found by nltk.ne_chunks:
# returns (entity, label) if it is a single word or
# concatenates multiple word named entities into single string
def get_entity(entity):
if isinstance(entity, tuple) and entity[1][:2] == 'NE':
return entity
if isinstance(entity, nltk.tree.Tree):
text = ' '.join([word for word, tag in entity.leaves()])
return (text, entity.label())
return None
Explanation: Helper functions to process output of nltk.ne_chunk and to count frequency of named entities in a given text.
End of explanation
# returns list of named entities in a form [(entity_text, entity_label), ...]
def extract_entities(chunk):
data = []
for entity in chunk:
d = get_entity(entity)
if d is not None and d[0] not in [e[0] for e in data]:
data.append(d)
return data
Explanation: Since nltk.ne_chunk tends to put the same named entity into more than one class (like 'American' : 'ORGANIZATION' and 'American' : 'GPE'), we want to filter out these duplicates.
End of explanation
def custom_NER(tagged):
entities = []
entity = []
for word in tagged:
if word[1][:2] == 'NN' or (entity and word[1][:2] == 'IN'):
entity.append(word)
else:
if entity and entity[-1][1].startswith('IN'):
entity.pop()
if entity:
s = ' '.join(e[0] for e in entity)
if s not in entities and s[0].isupper() and len(s) > 1:
entities.append(s)
entity = []
return entities
Explanation: Our custom NER function from the example here.
End of explanation
text = None
with open('text', 'r') as f:
text = f.read()
text = re.sub(r'\[[0-9]*\]', '', text)
Explanation: Loading processed article, approximately 500 sentences. Regex substitution removes reference links (e.g. [12])
End of explanation
tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
ne_chunked = nltk.ne_chunk(tagged, binary=False)
ex = extract_entities(ne_chunked)
ex_custom = custom_NER(tagged)
top_ex = get_top_n(ex, text, 20)
top_ex_custom = get_top_n(ex_custom, text, 20)
print('ne_chunked:')
for e in top_ex:
print('{} count: {}'.format(e[0], e[1]))
print()
print('custom NER:')
for e in top_ex_custom:
print('{} count: {}'.format(e[0], e[1]))
Explanation: Now we try to recognize entities with both nltk.ne_chunk and our custom_NER function and print 10 most frequent entities.
Yielded results seem to be fairly similar. nltk.ne_chunk function also added basic classification tags.
End of explanation
def get_noun_phrase(entity, sentence):
t = nltk.pos_tag([word for word in nltk.word_tokenize(sentence)])
phrase = []
stage = 0
for word in t:
if word[0] in ('is', 'was', 'were', 'are', 'refers') and stage == 0:
stage = 1
continue
elif stage == 1:
if word[1] in ('NN', 'JJ', 'VBD', 'CD', 'NNP', 'NNPS', 'RBS', 'IN', 'NNS'):
phrase.append(word)
elif word[1] in ('DT', ',', 'CC', 'TO', 'POS'):
continue
else:
break
if len(phrase) > 1 and phrase[-1][1] == 'IN':
phrase.pop()
phrase = ' '.join([ word[0] for word in phrase ])
if phrase == '':
phrase = 'Thing'
return {entity : phrase}
def get_wiki_desc(entity, wiki='en'):
wikipedia.set_lang(wiki)
try:
fs = wikipedia.summary(entity, sentences=1)
except wikipedia.DisambiguationError as e:
fs = wikipedia.summary(e.options[0], sentences=1)
except wikipedia.PageError:
return {entity : 'Thing'}
#fs = nltk.sent_tokenize(page.summary)[0]
return get_noun_phrase(entity, fs)
Explanation: Next we want to do our own classification, using Wikipedia articles for each named entity. The idea is to find an article matching the entity string (for example 'America') and then create a noun phrase from its first sentence. When no suitable article or description is found, the entity classification will be 'Thing'.
End of explanation
for entity in top_ex:
print(get_wiki_desc(entity[0][0]))
for entity in top_ex_custom:
print(get_wiki_desc(entity[0]))
Explanation: Obviously this classification is far more specific than the tags used by nltk.ne_chunk. We can also see that both NER methods mistook common words for entities unrelated to the article (for example 'New').
Since the custom_NER function relies on uppercase letters to recognize entities, this is commonly caused by the first words of sentences.
The lack of a description for the entity 'America' is caused by the simple way the get_noun_phrase function constructs descriptions. It looks for basic words like 'is', so more advanced language can throw it off. This could be fixed by searching the Simple English Wikipedia, or using it as a fallback when no suitable phrase is found on the normal English Wikipedia (for example, compare the article about the Americas on the simple and normal wikis).
I also tried to search for a more general verb (present tense verb, tag 'VBZ'), but this yielded worse results. Another improvement could be simply expanding the verb list in get_noun_phrase with other suitable verbs.
When no exact match for the pair (entity, article) is found, the wikipedia module raises a DisambiguationError, which (like a disambiguation page on Wikipedia) offers possible matching pages. When this happens, the first suggested page is picked. This, however, does not have to be the best page for the given entity.
End of explanation
get_wiki_desc('Americas', wiki='simple')
Explanation: When searching the simple wiki, the entity 'Americas' gets a fairly reasonable description. However, there seems to be an issue with handling DisambiguationError: in some cases, looking up the first page in DisambiguationError.options raises another DisambiguationError (even though pages from .options should be guaranteed hits).
End of explanation |
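A more defensive lookup could catch the nested DisambiguationError and fall back to the Simple English Wikipedia when no phrase is found. The sketch below illustrates the idea (the function name and the fallback policy are assumptions, not part of the original analysis):
def get_wiki_desc_safe(entity, wiki='en'):
    wikipedia.set_lang(wiki)
    try:
        fs = wikipedia.summary(entity, sentences=1)
    except wikipedia.DisambiguationError as e:
        try:
            fs = wikipedia.summary(e.options[0], sentences=1)
        except (wikipedia.DisambiguationError, wikipedia.PageError):
            return {entity: 'Thing'}            # give up instead of raising
    except wikipedia.PageError:
        return {entity: 'Thing'}
    desc = get_noun_phrase(entity, fs)
    if desc[entity] == 'Thing' and wiki == 'en':
        return get_wiki_desc_safe(entity, wiki='simple')   # fall back to Simple English Wikipedia
    return desc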
1,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter/IPython Notebook Quick Start Guide
The following is partially taken from the official documentation
Step1: This is an equation formatted in LaTeX $y = \sin(x)$
double-click this cell to edit it
2. Jupyter Notebook App
The Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser.
3. kernel
A notebook kernel is a “computational engine” that executes the code contained in a Notebook document. The ipython kernel executes python code. Kernels for many other languages exist (official kernels).
4. Notebook Dashboard
The Notebook Dashboard is the component which is shown first when you launch Jupyter Notebook App. The Notebook Dashboard is mainly used to open notebook documents, and to manage the running kernels (visualize and shutdown).
5. References
Jupyter project
Jupyter documentation
Jupyter notebooks documentation
(Very) Short Python Introduction
In addition to "standard" data types such as integers, floats and strings, Python knows a number of compound data types, used to group together other values. We will briefly see * Lists and Dictionaries*
1. Lists
https
Step2: Appending to a list or extending a list
Step3: Slicing
Step4: Looping over the elements of a list
Step5: 2. Dictionaries
https
Step6: Adding entries
Step7: Removing entries
Step8: Looping over entries
Step9: Accessing entries
Step10: NumPy
NumPy is the fundamental package for scientific computing with Python (http
Step11: And you can do much more with NumPy! see https | Python Code:
# press Shit+Enter to execute this cell
print('This is a cell containing python code')
#we can also make figures
import matplotlib.pyplot as plt
import numpy as np
% matplotlib inline
x = np.linspace(-np.pi, np.pi, 100)
plt.plot(x, np.sin(x))
# Use `Tab` for completion and `Shift-Tab` for code info
Explanation: Jupyter/IPython Notebook Quick Start Guide
The following is partially taken from the official documentation:
1. What is the Jupyter Notebook?
Notebook documents (or “notebooks”, all lower case) are documents produced by the Jupyter Notebook App, which contain both computer code (e.g. python) and rich text elements (paragraph, equations, figures, links, etc...). Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc..) as well as executable documents which can be run to perform data analysis.
End of explanation
# define a new list
zoo_animals = ["pangolin", "cassowary", "sloth", "tiger"]
if len(zoo_animals) > 3:
print("The first animal at the zoo is the " + zoo_animals[0])
print("The second animal at the zoo is the " + zoo_animals[1])
print("The third animal at the zoo is the " + zoo_animals[2])
print("The fourth animal at the zoo is the " + zoo_animals[3])
Explanation: This is an equation formatted in LaTeX $y = \sin(x)$
double-click this cell to edit it
2. Jupyter Notebook App
The Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser.
3. kernel
A notebook kernel is a “computational engine” that executes the code contained in a Notebook document. The ipython kernel executes python code. Kernels for many other languages exist (official kernels).
4. Notebook Dashboard
The Notebook Dashboard is the component which is shown first when you launch Jupyter Notebook App. The Notebook Dashboard is mainly used to open notebook documents, and to manage the running kernels (visualize and shutdown).
5. References
Jupyter project
Jupyter documentation
Jupyter notebooks documentation
(Very) Short Python Introduction
In addition to "standard" data types such as integers, floats and strings, Python knows a number of compound data types, used to group together other values. We will briefly see * Lists and Dictionaries*
1. Lists
https://docs.python.org/3/tutorial/introduction.html#lists
Accessing by index:
End of explanation
# empty list
suitcase = []
suitcase.append("sunglasses")
# Your code here!
suitcase.append("doll")
suitcase.append("ball")
suitcase.append("comb")
list_length = len(suitcase) # Set this to the length of suitcase
print("There are %d items in the suitcase." % (list_length))
print(suitcase)
# we can also append an other list to a list
# using the "extend" method
numbers = [42,7,12]
suitcase.extend(numbers)
print(suitcase)
Explanation: Appending to a list or extending a list:
End of explanation
suitcase = ["sunglasses", "hat", "passport", "laptop", "suit", "shoes"]
first = suitcase[0:2] # The first and second items (index zero and one)
middle = suitcase[2:4] # Third and fourth items (index two and three)
last = suitcase[4:6] # The last two items (index four and five)
print(last)
Explanation: Slicing:
End of explanation
my_list = [1,9,3,8,5,7]
for number in my_list:
print(2*number)
# Your code here
Explanation: Looping over the elements of a list:
End of explanation
# Assigning a dictionary with three key-value pairs to residents:
residents = {'Puffin' : 104, 'Sloth' : 105, 'Burmese Python' : 106}
print(residents['Puffin']) # Prints Puffin's room number
print(residents['Sloth'])
print(residents['Burmese Python'])
Explanation: 2. Dictionaries
https://docs.python.org/3/tutorial/datastructures.html#dictionaries
Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be strings or numbers.
It is best to think of a dictionary as an unordered set of key: value pairs, with the requirement that the keys are unique (within one dictionary).
Creating a Dictionary:
End of explanation
menu = {} # Empty dictionary
menu['Raclette'] = 14.50 # Adding new key-value pair
print(menu['Raclette'])
menu['Cheese Fondue'] = 10.50
menu['Muesli'] = 13.50
menu['Quiche'] = 19.50
menu['Cervela'] = 17.50 # Your code here: Add some dish-price pairs to menu!
print("There are " + str(len(menu)) + " items on the menu.")
print(menu)
Explanation: Adding entries:
End of explanation
del menu['Muesli']
print(menu)
Explanation: Removing entries:
End of explanation
for key, value in menu.items():
print(key, value)
Explanation: Looping over entries:
End of explanation
raclette = menu.get('Raclette')
print(raclette)
#accessing a non-existing key
burger = menu.get('Burger')
print(burger)
#listing keys
menu.keys()
#listing values
menu.values()
Explanation: Accessing entries:
End of explanation
# first import the package
import numpy as np
# numpy works with array similarly to MATLAB
x = np.array([1,2,4,5,6,8,10,4,3,5,6])
# but indexing starts at 0!!
print(x[0])
# arrays have useful methods
print(x.size)
print(x.mean())
# we can also use numpy functions
amax = np.max(x)
print(amax)
Explanation: NumPy
NumPy is the fundamental package for scientific computing with Python (http://www.numpy.org/).
It is part of SciPy (https://www.scipy.org/).
The numpy package provides functionality that makes Python similar to MATLAB.
End of explanation
# import matplotlib
import matplotlib.pyplot as plt
# make matplotlib create figures inline
%matplotlib inline
x = np.linspace(0,2,100)
y1 = np.exp(x)
y2 = np.sqrt(x)
plt.plot(x,y1, 'r-', label='y = exp(x)')
plt.plot(x,y2, 'b--', label='y = sqrt(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
# press Shift-Tab to see information about a function
norm = np.random.randn(200)
n, bins, patches = plt.hist(norm, normed=True, bins=20)
Explanation: And you can do much more with NumPy! see https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms (https://matplotlib.org/).
End of explanation |
1,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Parameters
Step3: Colab-only auth
Step4: tf.data.Dataset
Step5: Let's have a look at the data
Step6: Estimator model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course
Step7: Train and validate the model
Step8: Visualize predictions
Step9: Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
Step10: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https
Step11: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work. | Python Code:
import os, re, math, json, shutil, pprint, datetime
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/05_MNIST_Estimator_Tensorboard_solution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Please run this notebook on a GPU backend. Porting the model from Estimator to TPUEstimator is needed for it to work on TPU.
MNIST with Tensorboard, using the Estimator API
Fun with handwritten digits and tensorboard.
This notebook will show you how to follow your training and validation curves in Tensorboard and what you can do to address the issues you see there.
Imports
End of explanation
BATCH_SIZE = 32 #@param {type:"integer"}
BUCKET = 'gs://' #@param {type:"string"}
assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: Parameters
End of explanation
# backend identification
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
# Auth on Colab
# Little wrinkle: without auth, Colab will be extremely slow in accessing data from a GCS bucket, even public
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user()
#@title visualization utilities [RUN ME]
"""This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here."""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')

# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: Colab-only auth
End of explanation
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# In Estimator, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs
def model_fn(features, labels, mode):
x = features
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = features
y = tf.reshape(x, [-1, 28, 28, 1])
# little wrinkle: tf.keras.layers can normally be used in an Estimator but tf.keras.layers.BatchNormalization does not work
# in an Estimator environment. Using TF layers everywhere for consistency. tf.layers and tf.keras.layers are carbon copies of each other.
y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False)(y) # no bias necessary before batch norm
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training) # no batch norm scaling necessary before "relu"
y = tf.nn.relu(y) # activation after batch norm
y = tf.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Flatten()(y)
y = tf.layers.Dense(200, use_bias=False)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Dropout(0.5)(y, training=is_training)
logits = tf.layers.Dense(10)(y)
predictions = tf.nn.softmax(logits)
classes = tf.math.argmax(predictions, axis=-1)
if (mode != tf.estimator.ModeKeys.PREDICT):
loss = tf.losses.softmax_cross_entropy(labels, logits)
step = tf.train.get_or_create_global_step()
lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
tf.summary.scalar("learn_rate", lr)
optimizer = tf.train.AdamOptimizer(lr)
# little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
else:
loss = train_op = metrics = None # None of these can be computed in prediction mode because labels are not available
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"predictions": predictions, "classes": classes}, # name these fields as you like
loss=loss,
train_op=train_op,
eval_metric_ops=metrics
)
# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
# placeholder for the data received by the API (already parsed, no JSON decoding necessary,
# but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)
inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON
features = inputs['serving_input'] # no transformation needed
return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn
# Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
Explanation: Estimator model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
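The learning-rate schedule inside model_fn is exponential decay with a floor; this small stand-alone check (illustration only, mirroring the constants 0.0001, 0.01 and 2000 used above, and assuming math is already imported as in model_fn) shows how quickly it shrinks:
def decayed_lr(step, floor=0.0001, base=0.01, decay_steps=2000):
    # same curve as 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
    return floor + base * math.exp(-float(step) / decay_steps)
for step in [0, 1000, 2000, 4000, 8000]:
    print(step, round(decayed_lr(step), 6))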
EPOCHS = 8
steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist" # name for exporting saved model
tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now()
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)
training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
tf_logging.set_verbosity(tf_logging.WARN)
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
predictions = estimator.predict(validation_input_fn,
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
PROJECT = "" #@param {type:"string"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "estimator_mnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)
last_export = sorted(tf.gfile.ListDirectory(export_path))[-1]
export_path = os.path.join(export_path, last_export)
print('Saved model directory found: ', export_path)
Explanation: Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
End of explanation
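Before creating the version it can be useful to inspect what the exported SavedModel expects. The saved_model_cli tool that ships with TensorFlow can dump its signatures (an optional check, reusing the export_path computed above):
!saved_model_cli show --dir {export_path} --all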
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
End of explanation
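Once the deployment command finishes, a quick optional check with the same gcloud CLI confirms that the version is live:
!gcloud ml-engine versions list --model={MODEL_NAME} --project={PROJECT}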
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
display_top_unrecognized(digits, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
End of explanation |
1,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How the Length of a Jeopardy Question Relates to its Value?
The American television game show Jeopardy is probably one of the most famous shows ever aired on TV. A few years ago IBM's Watson conquered the show, and now it's time to conquer the dataset of all the questions asked over the years and see if any interesting relations lie behind them.
Thanks to Reddit user trexmatt for providing CSV data of the questions which can be found here.
Structure of the Dataset
As explained on the Reddit post given above, each row of the dataset contains information on a particular question
Step1: Apparently columns have a blank space in the beginning. Let's get rid of them
Step2: Hypothesis - "Value of the question is related to its length."
Let's make a copy of the dataframe so that the changes we make don't disturb further analysis.
Step3: There are some media-based questions, and also some questions with hyper-links. These can disturb our analysis so we should get rid of them.
Step4: We can add a column to the dataframe for the length of each question.
Step5: When we look at the "Value" column, we see that the entries are strings rather than integers, and there are also some "None" values. We should clean those values.
Step6: The "Value" column has 145 different values. For the sake of simplicity, let's keep the ones that are multiples of 100 and between 200 and 2500 (first round questions has range of 200-1000, second round questions has range of 500-2500).
Step7: It looks like there isn't a correlation, but this graph isn't structured well enough to draw conclusions. Instead, let's calculate average question length for each value and plot average length vs value. | Python Code:
# this line is required to see visualizations inline for Jupyter notebook
%matplotlib inline
# importing modules that we need for analysis
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import re
# read the data from file and print out first few rows
jeopardy = pd.read_csv("jeopardy.csv")
print(jeopardy.head(3))
print(jeopardy.columns)
Explanation: How the Length of a Jeopardy Question Relates to its Value?
The American television game show Jeopardy is probably one of the most famous shows ever aired on TV. A few years ago IBM's Watson conquered the show, and now it's time to conquer the dataset of all the questions asked over the years and see if any interesting relations lie behind them.
Thanks to Reddit user trexmatt for providing CSV data of the questions which can be found here.
Structure of the Dataset
As explained on the Reddit post given above, each row of the dataset contains information on a particular question:
* Category
* Value
* Question text
* Answer text
* Round of the game the question was asked
* Show number
* Date
Hypothesis
Before diving into analysis of the data let's come up with a relation between different columns:
Value of the question is related to its length.
Setting up Data
End of explanation
jeopardy.rename(columns = lambda x: x[1:] if x[0] == " " else x, inplace=True)
jeopardy.columns
Explanation: Apparently columns have a blank space in the beginning. Let's get rid of them:
End of explanation
data1 = jeopardy.copy()  # take an actual copy so edits to data1 never touch the original dataframe
data1["Question"].value_counts()[:10]
Explanation: Hypothesis - "Value of the question is related to its length."
Let's make a copy of the dataframe so that the changes we make don't disturb further analysis.
End of explanation
# regex pattern used to remove hyper-links
pattern = re.compile("^<a href")
# remove media clue questions
data1 = data1[data1["Question"].str.contains(pattern) == False]
data1 = data1[data1["Question"] != "[audio clue]"]
data1 = data1[data1["Question"] != "(audio clue)"]
data1 = data1[data1["Question"] != "[video clue]"]
data1 = data1[data1["Question"] != "[filler]"]
data1["Question"].value_counts()[:10]
Explanation: There are some media-based questions, and also some questions with hyper-links. These can disturb our analysis so we should get rid of them.
End of explanation
data1["Question Length"] = data1["Question"].apply(lambda x: len(x))
data1["Question Length"][:12]
Explanation: We can add a column to the dataframe for the length of each question.
End of explanation
data1["Value"].value_counts()[:15]
# get rid of None values
data1 = data1[data1["Value"] != "None"]
# parse integers from strings
pattern = "[0-9]"
data1["Value"] = data1["Value"].apply(lambda x: "".join(re.findall(pattern,x)))
data1["Value"] = data1["Value"].astype(int)
print(data1["Value"].value_counts()[:10])
print("Number of distinct values:" + str(len(data1["Value"].value_counts())))
Explanation: When we look at the "Value" column, we see that the entries are strings rather than integers, and there are also some "None" values. We should clean those values.
End of explanation
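An equivalent cleanup (shown only as a cross-check; it assumes the raw strings look like "$200") is to strip the dollar sign and the thousands separators and let pandas coerce the rest:
alt_value = jeopardy["Value"].str.lstrip("$").str.replace(",", "")
alt_value = pd.to_numeric(alt_value, errors="coerce")  # the string "None" simply becomes NaN
print(alt_value.dropna().astype(int).value_counts()[:5])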
data1 = data1[(data1["Value"]%100 == 0) & (data1["Value"]<= 2500)]
print(data1["Value"].value_counts())
print("Number of distinct values: " + str(len(data1["Value"].value_counts())))
# set up the figure and plot length vs value on ax1
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(1,1,1)
ax1.scatter(data1["Question Length"], data1["Value"])
ax1.set_xlim(0, 800)
ax1.set_ylim(0, 2700)
ax1.set_title("The Relation between Question Length and Value")
ax1.set_xlabel("Lenght of the Question")
ax1.set_ylabel("Value of the Question")
plt.show()
Explanation: The "Value" column has 145 different values. For the sake of simplicity, let's keep the ones that are multiples of 100 and between 200 and 2500 (first round questions has range of 200-1000, second round questions has range of 500-2500).
End of explanation
#find the average length for each value
average_lengths = []
values = data1["Value"].unique()
for value in values:
rows = data1[data1["Value"] == value]
average = rows["Question Length"].mean()
average_lengths.append(average)
print(average_lengths)
print(values)
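The same per-value averages can be computed in one line with a pandas groupby; this is included only as a cross-check of the loop above:
avg_by_value = data1.groupby("Value")["Question Length"].mean()
print(avg_by_value.sort_index())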
# set up the figure and plot average length vs value on ax1
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(1,1,1)
ax1.scatter(average_lengths, values)
ax1.set_title("The Relation between Average Question Length and Value")
ax1.set_xlabel("Average Question Length")
ax1.set_ylabel("Value")
ax1.set_xlim(70, 105)
ax1.set_ylim(0, 3000)
plt.show()
print("Correlation coefficient: " + str(np.corrcoef(average_lengths, values)[0,1]))
Explanation: It looks like there isn't a correlation, but this graph isn't structured well enough to draw conclusions. Instead, let's calculate average question length for each value and plot average length vs value.
End of explanation |
1,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Questions/Concerns
=======================================
Was/how was the main parachute bag held down for LV2?
How many lines were cut during LV2 recovery?
What if the drogue gets cut away but the main does not deploy? Then what?
Need property to do flight testing over
Possible resources
Step1: LV2 recovery system analysis
Step2: suggestions
Your $dt$ value is a bit off. Also, $dt$ changes, especially when the drogue is deploying.
The other thing that I'd be careful about is your assumption that the deployment force is just the $\Delta v$ for deployment divided by the time. That would mean the force is as spread out as possible, which isn't true.
Also, I think it's a reasonable guess that the IMU was zeroed on the launch pad. This means that
1. The acceleration data isn't quite representative of the load on the parachute/rocket.
2. We can get a calibration factor for the IMU data.
At apogee, right before the drogue deploys, the rocket is basically in freefall. Aka, any accelerometer "should" read $0$. So, whatever the IMU reads at apogee is equal to $1 g$. We can then subtract out that value and scale the data so that we get $9.81$ on the launch pad.
Step4: LV3 Recovery System
=======================================
Overview
Deployment design
<img src='Sketch_ Deployment_Design.jpg' width="450">
Top-level design
<img src='Sketch_ Top-Level_Design.jpg' width="450">
<img src='initial_idea_given_info.jpg' width="450">
Step-by-step
<img src='step_by_step.jpg' width="450">
Deciding on a parachute
Estimating necessary area | Python Code:
# General
########################################
# Gravity (m/sec^2)
g = 9.81
# Air density (kg/m^3)
p = 1.225
# LV2 given information
#######################################
print ("LV2 Given Information\n")
# Mass of parachute (kg)
# From OpenRocket LV2.3.ork
m_p2 = 2.118
# Mass of system (rocket + chute) (kg)
# From OpenRocket LV2.3.ork
m_tot2 = 34.982
# Weight of system (N)
w_tot2 = m_tot2 * g
print ("weight")
print ("w_tot2 (N) %3.2f \n" % w_tot2)
# Main parachute dimensions, x-shape
# Diameter, d2 = length of one cross strip (m)
# Width of cross strip, w2 (m)
# Almost the same w/d ratio as AIAA source [4] (0.263 compared to 0.260)
# Line length, l2 (m)
d2 = 5.38
w2 = 1.40
l2 = 5.03
# Area of parachute from LV2 system
A2 = ((d2*w2)*2)-(w2**2)
print ("diameter")
print ("d2 (m) %3.2f" %d2)
print ("area")
print ("A2 (m^2) %3.2f" % A2)
Explanation: Questions/Concerns
=======================================
Was/how was the main parachute bag held down for LV2?
How many lines were cut during LV2 recovery?
What if the drogue gets cut away but the main does not deploy? Then what?
Need property to do flight testing over
Possible resources: Glenn, Asa, Jorden
Next steps (as of 8/23)
Finalize surgical tubing ring
Design attachment ring
Email "The Rocketman" to see when parachutes will arrive
Get in contact with Tim to talk about LV2
LV2 Recovery System
=======================================
Given information
Drogue parachute
Hand measured
Diameter (length of cross) = 54"
Width = 17"
Longest line length = 56"
Main parachute
Reference Main.dxf
X-form shape
Dimensions, total outer: 216.54" x 218.54"
Dimensions, inner cross: 64.01"
Corner line length, skirt to confluence: 205"
Center lines lengthen 3%
Hand measured
Diameter = 212"
Width = 55"
Longest line length = 198"
Assumptions$^{[1]}$:
1. Linear motion
2. The deployment system is inelastic
3. The partially unfurled parachute is in tension during deployment
4. The deployment rate is much less than the vehicle velocity
The parachute depends on two main factors$^{[2]}$:
The weight of the payload and parachute
The speed upon impact when returning
End of explanation
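Every sizing step below comes from the steady-descent drag balance weight = 0.5 * p * Cd * A * v^2. The small helper below is added purely as an illustration of that relation (it is not part of the original analysis) and solves it for the canopy area:
def chute_area(weight_N, v_terminal, Cd, rho=1.225):
    # steady descent: weight = 0.5 * rho * Cd * A * v_terminal**2, solved for A (m^2)
    return (2.0 * weight_N) / (rho * Cd * v_terminal**2)
# example: LV2 weight with a guessed Cd of 0.60 and a 6 m/s touchdown speed
print ("example canopy area (m^2) %3.2f" % chute_area(w_tot2, 6.0, 0.60))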
# Analyzing LV2 Telemetry and IMU data
# From git, Launch 12
#######################################
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.integrate import simps
%matplotlib inline
# Graphing helper function
def setup_graph(title='',x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size != None:
fig.set_size_inches(fig_size[0],
fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
########################################
# Data from the IMU
data = pd.read_csv('IMU_data.csv')
time = data[' [1]Timestamp'].tolist()
time = np.array(time)
# Umblicial disconnect event
t_0 = 117853569585227
# Element wise subtraction
time = np.subtract(time, t_0)
# Convert from ns to s
time = np.divide(time, 1e9)
acceleration = data[' [6]Acc_X'].tolist()
acceleration = np.array(acceleration)
acceleration = np.subtract(acceleration,g)
########################################
# Data from the Telemetrum
data_tel = pd.read_csv('Telemetry_data.csv')
time_tel = data_tel['time'].tolist()
time_tel = np.array(time_tel)
acceleration_tel = data_tel['acceleration'].tolist()
acceleration_tel = np.array(acceleration_tel)
setup_graph('Accel vs. Time', 'Time (s)', 'Accel (m/s^2)', (16,7))
plt.plot(time,acceleration,'b-')
plt.plot(time_tel, acceleration_tel,'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
# Drogue analysis
# Plot of only the duration of impulse
setup_graph('Accel vs. Time of Drogue Impact', 'Time (s)', 'Accel (m/s^2)', (10,7))
plt.plot(time[len(time)-8700:35500], acceleration[len(acceleration)-8700:35500],'b-')
plt.plot(time_tel[len(time_tel)-9808:3377], acceleration_tel[len(acceleration_tel)-9808:3377],'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
Explanation: LV2 recovery system analysis
End of explanation
# how to get the values for dt (diffs is needed below by np.mean(diffs) and the deltaV sum):
diffs = [t2 - t1 for t1, t2 in zip(time[:-1], time[1:])]
# uncomment to plot the dt values:
#plt.figure()
#plt.plot(time[1:], diffs)
#plt.title('values of dt')
print('mean dt value:', np.mean(diffs))
# marginally nicer way to keep track of time windows:
ind_drogue= [i for i in range(len(time)) if ((time[i]>34.5) & (time[i]<39))]
# indices where we're basically in freefall:
ind_vomit= [i for i in range(len(time)) if ((time[i]>32) & (time[i]<34.5))]
offset_g= np.mean(acceleration[ind_vomit])
accel_nice = (acceleration-offset_g)*(-9.81/offset_g)
deltaV= sum([accel_nice[i]*diffs[i] for i in ind_drogue])
print('change in velocity (area under the curve):', deltaV)
plt.figure()
plt.plot(time[:2500], acceleration[:2500])
plt.title('launch pad acceleration')
print('"pretty much zeroed" value on the launch pad:', np.mean(acceleration[:2500]))
# Filtering out noise from IMU and Telemetry data w/ scipy
# Using these graphs to estimate max possible impulse felt during LV2
# From IMU
from scipy.signal import savgol_filter
accel_filtered = savgol_filter(acceleration, 201, 2)
# From telemetry
# ***Assuming the filter parameters will work for the telemetry data too***
accel_filtered_tel = savgol_filter(acceleration_tel, 201, 2)
# Plot of filtered IMU and telemetry for comparison
setup_graph('Accel vs. Time (filtered)', 'Time (s)', 'Accel_filtered (m/s^2)', (16,7))
plt.plot(time[1:], accel_filtered[1:],'b-')
plt.plot(time_tel[1:], accel_filtered_tel[1:],'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
# Looks pretty darn close
# Wanted the filtered telemetry instead of IMU because IMU does not include landing and we need terminal velocity
print ("Estimating max possible impulse during LV2 \n")
# Estimate Area Under Curve
areaCurve = simps(abs(accel_filtered), dx=0.00125)
print ("Area under entire filtered curve (m/s) %3.2f" %areaCurve)
# Calculating area under drogue acceleration curve to find impulse
areaCurvesmall = simps(abs(accel_filtered[len(accel_filtered)-8700:35500]), dx=0.00125)
print ("Area under accel curve during drogue impact (m/s) %3.2f" %areaCurvesmall)
# Impulse = mass * integral(accel dt)
impulse = m_tot2 * areaCurvesmall
print ("Impulse (N*s) %3.2f" %impulse)
# If deploy time = approx. 3.5 sec
deploy_time = 3.5
force = impulse/deploy_time # not a conservative assumption
print ("Force (N) %3.2f" %force)
# Finding terminal velocity with height and time
# Assuming average velocity is equivalent to terminal velocity
##because so much of the time after the main is deployed is at terminal velocity
# Using telemetry data
# Main 'chute deployment height (m)
main_deploy_height = 251.6
# Main 'chute deployment time (s)
main_deploy_time = 686.74
# Final height (m)
final_height = 0
# Final time
final_time = 728.51
# Average (terminal) velocity (m/s)
terminal_speed = abs((main_deploy_height - final_height)/(main_deploy_time - final_time))
print ("Terminal speed %3.2f" % terminal_speed)
# This seems accurate, 6 m/s is approx. 20 ft/s which is ideal
# LV2 drag calculations
#######################################
# If we say at terminal velocity, accel = 0,
# Therefore drag force (D2) = weight of system (w_tot2) (N)
# Because sum(forces) = ma = 0, then
D2 = w_tot2
print ("D2 (N) %3.2f" % D2)
# Calculated drag coefficient (Cd2), using D2
Cd2 = (2*D2)/(p*A2*(terminal_speed**2))
print ("Cd2 %3.2f" % Cd2)
# Drag coefficient (Cd_or), from OpenRocket LV2.3
# Compare to AIAA source [4], 0.60
# Compare to calculated from LV2 data, Cd2
Cd_or = 0.59
print ("Cd_or %3.2f" % Cd_or)
# Calculated drag (D_or) (N), from OpenRocket LV2.3
D_or = (Cd_or*p*A2*(terminal_speed**2))/2
print ("D_or (N) %3.2f" % D_or)
Explanation: suggestions
Your $dt$ value is a bit off. Also, $dt$ changes, especially when the drogue is deploying.
The other thing that I'd be careful about is your assumption that the deployment force is just the $\Delta v$ for deployment divided by the time. That would mean the force is as spread out as possible, which isn't true.
Also, I think it's a reasonable guess that the IMU was zeroed on the launch pad. This means that
1. The acceleration data isn't quite representative of the load on the parachute/rocket.
2. We can get a calibration factor for the IMU data.
At apogee, right before the drogue deploys, the rocket is basically in freefall. Aka, any accelerometer "should" read $0$. So, whatever the IMU reads at apogee is equal to $1 g$. We can then subtract out that value and scale the data so that we get $9.81$ on the launch pad.
End of explanation
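To make the calibration idea above concrete, the free-fall offset and the implied scale factor can be printed directly (offset_g was computed a few cells up; this is just a readout, not new analysis):
print ("raw IMU reading near apogee (should correspond to 1 g): %3.3f" % offset_g)
print ("implied scale factor applied to the IMU data: %3.3f" % (-9.81 / offset_g))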
## **Need to decide on FOS for the weight and then use estimator to calculate necessary area** ##
# Calculating area needed for LV3 parachutes
# Both drogue and main
# From OpenRocket file LV3_L13a_ideal.ork
# Total mass (kg), really rough estimate
m_tot3 = 27.667
# Total weight (N)
w_tot3 = m_tot3 * g
print (w_tot3)
# Assuming drag is equivalent to weight of rocket system
D3 = w_tot3
# v_f3 (m/s), ideal terminal(impact) velocity
v_f3 = 6
# Cd from AIAA example (source 4)
Cd_aiaa = 0.60
# Cd from OpenRocket LV3
Cd_or3 = 0.38
# Printing previous parachute area for reference
print ("Previous parachute area %3.2f" %A2)
# Need to work on this...
# Area needed using LV2 calculations (m^2)
A3_2 = (D3*2)/(Cd2*p*(v_f3**2))
print ("Area using Cd2 %3.2f" %A3_2)
# Area needed using LV2 OpenRocket (m^2)
A3_or2 = (D3*2)/(Cd_or*p*(v_f3**2))
print ("Area using Cd_or %3.2f" %A3_or2)
# Area needed using aiaa info (m^2)
A3_aiaa = (D3*2)/(Cd_aiaa*p*(v_f3**2))
print ("Area using Cd_aiaa %3.2f" %A3_aiaa)
# Area needed using LV3 OpenRocket
A3_or3 = (D3*2)/(Cd_or3*p*(v_f3**2))
print ("Area using Cd_or3 %3.2f" %A3_or3)
# Area estimater
A3 = (D3*2)/(1.5*p*(v_f3**2))
print ("Area estimate %3.2f" %A3)
import math
d_m = (math.sqrt(A3_or3/math.pi))*2
d_ft = d_m * 0.3048
print (d_ft)
Explanation: LV3 Recovery System
=======================================
Overview
Deployment design
<img src='Sketch_ Deployment_Design.jpg' width="450">
Top-level design
<img src='Sketch_ Top-Level_Design.jpg' width="450">
<img src='initial_idea_given_info.jpg' width="450">
Step-by-step
<img src='step_by_step.jpg' width="450">
Deciding on a parachute
Estimating necessary area
End of explanation |
1,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
This is a generalized notebook for computing grade statistics from the Ted Grade Center.
Step1: Load data from exported CSV from Ted Full Grade Center. Some sanitization is performed to remove non-ascii characters and cruft
Step3: Define lower grade cutoffs in terms of number of standard deviations from mean.
Step4: Overall grade
Overall points and assign overall grade. | Python Code:
#The usual imports
import math
from collections import OrderedDict
from pandas import read_csv
import numpy as np
from pymatgen.util.plotting_utils import get_publication_quality_plot
from monty.string import remove_non_ascii
import prettyplotlib as ppl
from prettyplotlib import brewer2mpl
import matplotlib.pyplot as plt
colors = brewer2mpl.get_map('Set1', 'qualitative', 8).mpl_colors
%matplotlib inline
Explanation: Overview
This is a generalized notebook for computing grade statistics from the Ted Grade Center.
End of explanation
d = read_csv("gc_NANO114_WI15_Ong_fullgc_2015-03-16-15-33-52.csv")
d.columns = [remove_non_ascii(c) for c in d.columns]
d.columns = [c.split("[")[0].strip().strip("\"") for c in d.columns]
Explanation: Load data from exported CSV from Ted Full Grade Center. Some sanitization is performed to remove non-ascii characters and cruft
End of explanation
grade_cutoffs = OrderedDict()
grade_cutoffs["A"] = 0.75
grade_cutoffs["B+"] = 0.5
grade_cutoffs["B"] = -0.25
grade_cutoffs["B-"] = -0.5
grade_cutoffs["C+"] = -0.75
grade_cutoffs["C"] = -1
grade_cutoffs["C-"] = -1.5
grade_cutoffs["F"] = float("-inf")
def bar_plot(dframe, data_key, offset=0):
    """Creates a histogram of the results.
Args:
dframe: DataFrame which is imported from CSV.
data_key: Specific column to plot
offset: Allows an offset for each grade. Defaults to 0.
Returns:
        dict of cutoffs, {grade: (lower, upper)}
    """
data = dframe[data_key]
d = filter(lambda x: (not np.isnan(x)) and x != 0, list(data))
heights, bins = np.histogram(d, bins=20, range=(0, 100))
bins = list(bins)
bins.pop(-1)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
ppl.bar(ax, bins, heights, width=5, color=colors[0], grid='y')
plt = get_publication_quality_plot(12, 8, plt)
plt.xlabel("Score")
plt.ylabel("Number of students")
#print len([d for d in data if d > 90])
mean = data.mean(0)
sigma = data.std()
maxy = np.max(heights)
prev_cutoff = 100
cutoffs = {}
grade = ["A", "B+", "B", "B-", "C+", "C", "C-", "F"]
for grade, cutoff in grade_cutoffs.items():
if cutoff == float("-inf"):
cutoff = 0
else:
cutoff = max(0, mean + cutoff * sigma) + offset
plt.plot([cutoff] * 2, [0, maxy], 'k--')
plt.annotate("%.1f" % cutoff, [cutoff, maxy - 1], fontsize=18, horizontalalignment='left', rotation=45)
n = len([d for d in data if cutoff <= d < prev_cutoff])
print "Grade %s (%.1f-%.1f): %d" % (grade, cutoff, prev_cutoff, n)
plt.annotate(grade, [(cutoff + prev_cutoff) / 2, maxy], fontsize=18, horizontalalignment='center')
cutoffs[grade] = (cutoff, prev_cutoff)
prev_cutoff = cutoff
plt.ylim([0, maxy * 1.1])
plt.annotate("$\mu = %.1f$\n$\sigma = %.1f$\n$max=%.1f$" % (mean, sigma, data.max()), xy=(10, 7), fontsize=30)
title = data_key.split("[")[0].strip()
plt.title(title, fontsize=30)
plt.tight_layout()
plt.savefig("%s.eps" % title)
return cutoffs
for c in d.columns:
if "PS" in c or "Mid-term" in c or "Final" in c:
if not all(np.isnan(d[c])):
print c
bar_plot(d, c)
Explanation: Define lower grade cutoffs in terms of number of standard deviations from mean.
End of explanation
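To see what those cutoffs mean in points, here is a quick illustration with made-up statistics (mean 70, standard deviation 12); the actual plots use each assignment's own mean and sigma:
example_mean, example_sigma = 70.0, 12.0
for g, k in grade_cutoffs.items():
    lower = 0 if k == float("-inf") else max(0, example_mean + k * example_sigma)
    print("%s cutoff: %.1f" % (g, lower))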
cutoffs = bar_plot(d, "Overall", offset=-2)
def assign_grade(pts):
for g, c in cutoffs.items():
if c[0] < pts <= c[1]:
return g
d["Final_Assigned_Egrade"] = map(assign_grade, d["Overall"])
d.to_csv("Overall grades.csv")
Explanation: Overall grade
Overall points and assign overall grade.
End of explanation |
1,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 14 – Recurrent Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 14.
<table align="left">
<td>
<a target="_blank" href="https
Step1: Then of course we will need TensorFlow
Step2: Basic RNNs
Manual RNN
Step3: Using static_rnn()
Note
Step4: Packing sequences
Step5: Using dynamic_rnn()
Step6: Setting the sequence lengths
Step7: Training a sequence classifier
Note
Step8: Warning
Step9: Multi-layer RNN
Step10: Time series
Step11: Using an OutputProjectionWrapper
Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value
Step12: At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.
Step13: Without using an OutputProjectionWrapper
Step14: Generating a creative new sequence
Step15: Deep RNN
MultiRNNCell
Step16: Distributing a Deep RNN Across Multiple GPUs
Do NOT do this
Step17: Instead, you need a DeviceCellWrapper
Step18: Alternatively, since TensorFlow 1.1, you can use the tf.contrib.rnn.DeviceWrapper class (alias tf.nn.rnn_cell.DeviceWrapper since TF 1.2).
Step19: Dropout
Step20: Note
Step21: Oops, it seems that Dropout does not help at all in this particular case.
Step23: Embeddings
This section is based on TensorFlow's Word2Vec tutorial.
Fetch the data
Step24: Build the dictionary
Step25: Generate batches
Step26: Build the model
Step27: Train the model
Step28: Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer)
Step29: Plot the embeddings
Step30: Machine Translation
The basic_rnn_seq2seq() function creates a simple Encoder/Decoder model
Step31: Exercise solutions
1. to 6.
See Appendix A.
7. Embedded Reber Grammars
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
Step32: Let's generate a few strings based on the default Reber grammar
Step33: Looks good. Now let's generate a few strings based on the embedded Reber grammar
Step34: Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character
Step35: Let's look at a few corrupted strings
Step36: It's not possible to feed a string directly to an RNN
Step37: We can now generate the dataset, with 50% good strings, and 50% bad strings
Step38: Let's take a look at the first training instances
Step39: It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?
Step40: What class is it?
Step41: Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences
Step42: Now let's generate a validation set so we can track progress during training
Step43: Now let's test our RNN on two tricky strings | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rnn"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
Explanation: Chapter 14 – Recurrent Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 14.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/14_recurrent_neural_networks.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
import tensorflow as tf
Explanation: Then of course we will need TensorFlow:
End of explanation
reset_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
import numpy as np
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
Explanation: Basic RNNs
Manual RNN
End of explanation
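The update rule implemented above is simply Y(t) = tanh(X(t) . Wx + Y(t-1) . Wy + b); here is the same two-step unrolling in plain NumPy (with its own random weights, purely to make the recurrence explicit):
Wx_np = np.random.randn(n_inputs, n_neurons)
Wy_np = np.random.randn(n_neurons, n_neurons)
b_np = np.zeros((1, n_neurons))
Y0_np = np.tanh(np.dot(X0_batch, Wx_np) + b_np)
Y1_np = np.tanh(np.dot(Y0_np, Wy_np) + np.dot(X1_batch, Wx_np) + b_np)
print(Y1_np.shape)  # (4, 5): one 5-neuron state vector per instance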
n_inputs = 3
n_neurons = 5
reset_graph()
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, [X0, X1],
dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
from datetime import datetime
root_logdir = os.path.join(os.curdir, "tf_logs")
def make_log_subdir(run_id=None):
if run_id is None:
run_id = datetime.utcnow().strftime("%Y%m%d%H%M%S")
return "{}/run-{}/".format(root_logdir, run_id)
def save_graph(graph=None, run_id=None):
if graph is None:
graph = tf.get_default_graph()
logdir = make_log_subdir(run_id)
file_writer = tf.summary.FileWriter(logdir, graph=graph)
file_writer.close()
return logdir
save_graph()
%load_ext tensorboard
%tensorboard --logdir {root_logdir}
Explanation: Using static_rnn()
Note: tf.contrib.rnn was partially moved to the core API in TensorFlow 1.2. Most of the *Cell and *Wrapper classes are now available in tf.nn.rnn_cell, and the tf.contrib.rnn.static_rnn() function is available as tf.nn.static_rnn().
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs,
dtype=tf.float32)
outputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])
init = tf.global_variables_initializer()
X_batch = np.array([
# t = 0 t = 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
print(np.transpose(outputs_val, axes=[1, 0, 2])[1])
Explanation: Packing sequences
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
save_graph()
%tensorboard --logdir {root_logdir}
Explanation: Using dynamic_rnn()
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
sequence_length=seq_length)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
Explanation: Setting the sequence lengths
End of explanation
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
Explanation: Training a sequence classifier
Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function. The main differences relevant to this chapter are:
* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
* the default activation is now None rather than tf.nn.relu.
End of explanation
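As a concrete example of the renaming described above, an old fully_connected call and its tf.layers.dense equivalent look roughly like this (sketch only, not executed here):
# old: tf.contrib.layers.fully_connected(states, n_outputs, activation_fn=None, scope="logits")
# new: tf.layers.dense(states, n_outputs, activation=None, name="logits")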
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
def shuffle_batch(X, y, batch_size):
rnd_idx = np.random.permutation(len(X))
n_batches = len(X) // batch_size
for batch_idx in np.array_split(rnd_idx, n_batches):
X_batch, y_batch = X[batch_idx], y[batch_idx]
yield X_batch, y_batch
X_test = X_test.reshape((-1, n_steps, n_inputs))
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
Explanation: Warning: tf.examples.tutorials.mnist is deprecated. We will use tf.keras.datasets.mnist instead.
End of explanation
reset_graph()
n_steps = 28
n_inputs = 28
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
n_neurons = 100
n_layers = 3
layers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons,
activation=tf.nn.relu)
for layer in range(n_layers)]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
Explanation: Multi-layer RNN
End of explanation
t_min, t_max = 0, 30
resolution = 0.1
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_plot")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
Explanation: Time series
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
Explanation: Using an OutputProjectionWrapper
Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value:
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, "./my_time_series_model") # not shown in the book
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_pred_plot")
plt.show()
Explanation: At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
n_outputs = 1
learning_rate = 0.001
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
saver.save(sess, "./my_time_series_model")
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Without using an OutputProjectionWrapper
End of explanation
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
sequence = [0.] * n_steps
for iteration in range(300):
X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence.append(y_pred[0, -1, 0])
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
with tf.Session() as sess:
saver.restore(sess, "./my_time_series_model")
sequence1 = [0. for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence1.append(y_pred[0, -1, 0])
sequence2 = [time_series(i * resolution + t_min + (t_max-t_min/3)) for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence2.append(y_pred[0, -1, 0])
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(t, sequence1, "b-")
plt.plot(t[:n_steps], sequence1[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.plot(t, sequence2, "b-")
plt.plot(t[:n_steps], sequence2[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
save_fig("creative_sequence_plot")
plt.show()
Explanation: Generating a creative new sequence
End of explanation
reset_graph()
n_inputs = 2
n_steps = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
n_neurons = 100
n_layers = 3
layers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.random.rand(2, n_steps, n_inputs)
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run([outputs, states], feed_dict={X: X_batch})
outputs_val.shape
Explanation: Deep RNN
MultiRNNCell
End of explanation
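A quick look at the fetched states_val array confirms the structure: with a MultiRNNCell, dynamic_rnn returns one final state per layer:
print(len(states_val))        # 3: one final state tensor per layer
print(states_val[0].shape)    # (2, 100): batch size x n_neurons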
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
Explanation: Distributing a Deep RNN Across Multiple GPUs
Do NOT do this:
End of explanation
import tensorflow as tf
class DeviceCellWrapper(tf.nn.rnn_cell.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
reset_graph()
n_inputs = 5
n_steps = 20
n_neurons = 100
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
devices = ["/cpu:0", "/cpu:0", "/cpu:0"] # replace with ["/gpu:0", "/gpu:1", "/gpu:2"] if you have 3 GPUs
cells = [DeviceCellWrapper(dev,tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons))
for dev in devices]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
Explanation: Instead, you need a DeviceCellWrapper:
End of explanation
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))
Explanation: Alternatively, since TensorFlow 1.1, you can use the tf.contrib.rnn.DeviceWrapper class (alias tf.nn.rnn_cell.DeviceWrapper since TF 1.2).
End of explanation
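A minimal sketch of that built-in wrapper (same idea as the custom DeviceCellWrapper above, assuming TF 1.2 or later and the devices list defined earlier):
wrapped_cells = [tf.nn.rnn_cell.DeviceWrapper(tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons), dev)
                 for dev in devices]
multi_cell_alt = tf.nn.rnn_cell.MultiRNNCell(wrapped_cells)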
reset_graph()
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
Explanation: Dropout
End of explanation
keep_prob = tf.placeholder_with_default(1.0, shape=())
cells = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
cells_drop = [tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)
for cell in cells]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells_drop)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
learning_rate = 0.01
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
train_keep_prob = 0.5
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
_, mse = sess.run([training_op, loss],
feed_dict={X: X_batch, y: y_batch,
keep_prob: train_keep_prob})
if iteration % 100 == 0: # not shown in the book
print(iteration, "Training MSE:", mse) # not shown
saver.save(sess, "./my_dropout_time_series_model")
with tf.Session() as sess:
saver.restore(sess, "./my_dropout_time_series_model")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Note: the input_keep_prob parameter can be a placeholder, making it possible to set it to any value you want during training, and to 1.0 during testing (effectively turning dropout off). This is a much more elegant solution than what was recommended in earlier versions of the book (i.e., writing your own wrapper class or having a separate model for training and testing). Thanks to Shen Cheng for bringing this to my attention.
End of explanation
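A short sketch illustrating that note (not part of the original notebook): the same graph can be evaluated with dropout off by simply omitting keep_prob (its default is 1.0), or with dropout on by feeding the training value. It reuses saver, loss, batch_size, n_steps, train_keep_prob and the next_batch() generator defined earlier in this notebook.
with tf.Session() as sess:
    saver.restore(sess, "./my_dropout_time_series_model")
    X_batch, y_batch = next_batch(batch_size, n_steps)
    mse_no_dropout = loss.eval(feed_dict={X: X_batch, y: y_batch})  # keep_prob defaults to 1.0
    mse_with_dropout = loss.eval(feed_dict={X: X_batch, y: y_batch, keep_prob: train_keep_prob})
    print(mse_no_dropout, mse_with_dropout)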
reset_graph()
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
lstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
states
top_layer_h_state
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=n_neurons, use_peepholes=True)
gru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
Explanation: Oops, it seems that Dropout does not help at all in this particular case. :/
LSTM
End of explanation
import urllib.request
import errno
import os
import zipfile
WORDS_PATH = "datasets/words"
WORDS_URL = 'http://mattmahoney.net/dc/text8.zip'
def mkdir_p(path):
'''Create directories, ok if they already exist.
This is for python 2 support. In python >=3.2, simply use:
>>> os.makedirs(path, exist_ok=True)
'''
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):
os.makedirs(words_path, exist_ok=True)
zip_path = os.path.join(words_path, "words.zip")
if not os.path.exists(zip_path):
urllib.request.urlretrieve(words_url, zip_path)
with zipfile.ZipFile(zip_path) as f:
data = f.read(f.namelist()[0])
return data.decode("ascii").split()
words = fetch_words_data()
words[:5]
Explanation: Embeddings
This section is based on TensorFlow's Word2Vec tutorial.
Fetch the data
End of explanation
from collections import Counter
vocabulary_size = 50000
vocabulary = [("UNK", None)] + Counter(words).most_common(vocabulary_size - 1)
vocabulary = np.array([word for word, _ in vocabulary])
dictionary = {word: code for code, word in enumerate(vocabulary)}
data = np.array([dictionary.get(word, 0) for word in words])
" ".join(words[:9]), data[:9]
" ".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])
words[24], data[24]
Explanation: Build the dictionary
End of explanation
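A tiny usage illustration (not in the original notebook) of the two structures built above: dictionary maps a word to its integer id and vocabulary maps the id back to the word; the example word is arbitrary and falls back to id 0 ("UNK") if it is not in the vocabulary.
word_id = dictionary.get("cycle", 0)  # 0 is the id of "UNK"
print(word_id, vocabulary[word_id])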
from collections import deque
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=[batch_size], dtype=np.int32)
labels = np.ndarray(shape=[batch_size, 1], dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = np.random.randint(0, span)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
np.random.seed(42)
data_index = 0
batch, labels = generate_batch(8, 2, 1)
batch, [vocabulary[word] for word in batch]
labels, [vocabulary[word] for word in labels[:, 0]]
Explanation: Generate batches
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64 # Number of negative examples to sample.
learning_rate = 0.01
reset_graph()
# Input data.
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
vocabulary_size = 50000
embedding_size = 150
# Look up embeddings for inputs.
init_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)
embeddings = tf.Variable(init_embeds)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
num_sampled, vocabulary_size))
# Construct the Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keepdims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
# Add variable initializer.
init = tf.global_variables_initializer()
Explanation: Build the model
End of explanation
num_steps = 10001
with tf.Session() as session:
init.run()
average_loss = 0
for step in range(num_steps):
print("\rIteration: {}".format(step), end="\t")
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
# We perform one update step by evaluating the training op (including it
# in the list of returned values for session.run()
_, loss_val = session.run([training_op, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = vocabulary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to %s:" % valid_word
for k in range(top_k):
close_word = vocabulary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
Explanation: Train the model
End of explanation
np.save("./my_final_embeddings.npy", final_embeddings)
Explanation: Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer):
End of explanation
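A sketch of the Saver-based alternative mentioned above (not from the original notebook); it assumes you run it inside the training session and that the embeddings variable from the model cell is what you want to persist. The checkpoint path is arbitrary.
emb_saver = tf.train.Saver({"embeddings": embeddings})
with tf.Session() as sess:
    init.run()
    # ... run the training loop from the cell above ...
    emb_saver.save(sess, "./my_word2vec_embeddings.ckpt")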
def plot_with_labels(low_dim_embs, labels):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
from sklearn.manifold import TSNE
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [vocabulary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)
Explanation: Plot the embeddings
End of explanation
import tensorflow as tf
reset_graph()
n_steps = 50
n_neurons = 200
n_layers = 3
num_encoder_symbols = 20000
num_decoder_symbols = 20000
embedding_size = 150
learning_rate = 0.01
X = tf.placeholder(tf.int32, [None, n_steps]) # English sentences
Y = tf.placeholder(tf.int32, [None, n_steps]) # French translations
W = tf.placeholder(tf.float32, [None, n_steps - 1, 1])
Y_input = Y[:, :-1]
Y_target = Y[:, 1:]
encoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors
decoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors
lstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
output_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
encoder_inputs,
decoder_inputs,
cell,
num_encoder_symbols,
num_decoder_symbols,
embedding_size)
logits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])
logits_flat = tf.reshape(logits, [-1, num_decoder_symbols])
Y_target_flat = tf.reshape(Y_target, [-1])
W_flat = tf.reshape(W, [-1])
xentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
Explanation: Machine Translation
The basic_rnn_seq2seq() function creates a simple Encoder/Decoder model: it first runs an RNN to encode encoder_inputs into a state vector, then runs a decoder initialized with the last encoder state on decoder_inputs. Encoder and decoder use the same RNN cell type but they don't share parameters.
End of explanation
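For comparison, a schematic usage sketch of basic_rnn_seq2seq() itself (not part of the original notebook): unlike the embedding variant above, it expects encoder_inputs and decoder_inputs to be lists of already-embedded 2-D float tensors of shape [batch_size, input_size]. The sizes below are arbitrary illustration values.
reset_graph()
n_steps_demo, input_size_demo, n_neurons_demo = 10, 8, 32
enc_inputs = [tf.placeholder(tf.float32, [None, input_size_demo]) for _ in range(n_steps_demo)]
dec_inputs = [tf.placeholder(tf.float32, [None, input_size_demo]) for _ in range(n_steps_demo)]
dec_outputs, dec_state = tf.contrib.legacy_seq2seq.basic_rnn_seq2seq(
    enc_inputs, dec_inputs, tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons_demo))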
np.random.seed(42)
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
index = np.random.randint(len(grammar[state]))
production, state = grammar[state][index]
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
Explanation: Exercise solutions
1. to 6.
See Appendix A.
7. Embedded Reber Grammars
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
End of explanation
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
Explanation: Let's generate a few strings based on the default Reber grammar:
End of explanation
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
Explanation: Looks good. Now let's generate a few strings based on the embedded Reber grammar:
End of explanation
def generate_corrupted_string(grammar, chars="BEPSTVX"):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = np.random.choice(sorted(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
Explanation: Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
End of explanation
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
Explanation: Let's look at a few corrupted strings:
End of explanation
def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
char_to_index = {char: index for index, char in enumerate(chars)}
output = np.zeros((n_steps, len(chars)), dtype=np.int32)
for index, char in enumerate(string):
output[index, char_to_index[char]] = 1.
return output
string_to_one_hot_vectors("BTBTXSETE", 12)
Explanation: It's not possible to feed a string directly to an RNN: we need to convert it to a sequence of vectors first. Each vector will represent a single letter, using a one-hot encoding. For example, the letter "B" will be represented as the vector [1, 0, 0, 0, 0, 0, 0], the letter E will be represented as [0, 1, 0, 0, 0, 0, 0] and so on. Let's write a function that converts a string to a sequence of such one-hot vectors. Note that if the string is shorter than n_steps, it will be padded with zero vectors (later, we will tell TensorFlow how long each string actually is using the sequence_length parameter).
End of explanation
def generate_dataset(size):
good_strings = [generate_string(embedded_reber_grammar)
for _ in range(size // 2)]
bad_strings = [generate_corrupted_string(embedded_reber_grammar)
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
n_steps = max([len(string) for string in all_strings])
X = np.array([string_to_one_hot_vectors(string, n_steps)
for string in all_strings])
seq_length = np.array([len(string) for string in all_strings])
y = np.array([[1] for _ in range(len(good_strings))] +
[[0] for _ in range(len(bad_strings))])
rnd_idx = np.random.permutation(size)
return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]
X_train, l_train, y_train = generate_dataset(10000)
Explanation: We can now generate the dataset, with 50% good strings, and 50% bad strings:
End of explanation
X_train[0]
Explanation: Let's take a look at the first training instances:
End of explanation
l_train[0]
Explanation: It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?
End of explanation
y_train[0]
Explanation: What class is it?
End of explanation
reset_graph()
possible_chars = "BEPSTVX"
n_inputs = len(possible_chars)
n_neurons = 30
n_outputs = 1
learning_rate = 0.02
momentum = 0.95
X = tf.placeholder(tf.float32, [None, None, n_inputs], name="X")
seq_length = tf.placeholder(tf.int32, [None], name="seq_length")
y = tf.placeholder(tf.float32, [None, 1], name="y")
gru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,
sequence_length=seq_length)
logits = tf.layers.dense(states, n_outputs, name="logits")
y_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name="y_pred")
y_proba = tf.nn.sigmoid(logits, name="y_proba")
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=momentum,
use_nesterov=True)
training_op = optimizer.minimize(loss)
correct = tf.equal(y_pred, y, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences:
* First, the input strings have variable length, so we need to specify the sequence_length when calling the dynamic_rnn() function.
* Second, this is a binary classifier, so we only need one output neuron that will output, for each input string, the estimated log probability that it is a good string. For multiclass classification, we used sparse_softmax_cross_entropy_with_logits() but for binary classification we use sigmoid_cross_entropy_with_logits().
End of explanation
X_val, l_val, y_val = generate_dataset(5000)
n_epochs = 50
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
X_batches = np.array_split(X_train, len(X_train) // batch_size)
l_batches = np.array_split(l_train, len(l_train) // batch_size)
y_batches = np.array_split(y_train, len(y_train) // batch_size)
for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):
loss_val, _ = sess.run(
[loss, training_op],
feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})
print("{:4d} Train loss: {:.4f}, accuracy: {:.2f}% Validation accuracy: {:.2f}%".format(
epoch, loss_val, 100 * acc_train, 100 * acc_val))
saver.save(sess, "./my_reber_classifier")
Explanation: Now let's generate a validation set so we can track progress during training:
End of explanation
test_strings = [
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE",
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE"]
l_test = np.array([len(s) for s in test_strings])
max_length = l_test.max()
X_test = [string_to_one_hot_vectors(s, n_steps=max_length)
for s in test_strings]
with tf.Session() as sess:
saver.restore(sess, "./my_reber_classifier")
y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, 100 * y_proba_val[index][0]))
Explanation: Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
End of explanation |
1,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a Heatmap of Vector Results
In this notebook, you'll learn how to use Planet's Analytics API to display a heatmap of vector analytic results, specifically building change detections. This can be used to identify where the most change is happening.
Setup
Install additional dependencies
Install cartopy v0.18 beta, so that we can render OSM tiles under the heatmap
Step1: API configuration
Before getting items from the API, you must set your API_KEY and the SUBSCRIPTION_ID of the change detection subscription to use.
If you want to limit the heatmap to a specific time range, also set TIMES to a valid time range.
Step2: Fetch Items
Next, we fetch the items from the API in batches of 500 items, and return only the relevant data - the centroid and the area. This might take a few minutes to run, as some change detection feeds have thousands of items.
Step3: Displaying the Heatmap
Once you've fetched all the items, you are nearly ready to display them as a heatmap.
Coordinate Systems
The items fetched from the API are in WGS84 (lat/lon) coordinates. However, it can be useful to display the data in an equal area projection like EPSG:3857 so that the heatmap shows change per square meter.
Step4: Colormap
Matplotlib provides a number of colormaps that are useful to render heatmaps. However, all of these are solid color - in order to see an underlying map, we need to add an alpha channel.
For this example, we will use the "plasma" colormap, and add a transparent gradient to the first half of the map, so that it starts out completely transparent, and gradually becomes opaque, such that all values above the midpoint have no transparency.
Step5: Heatmap configuration
Note | Python Code:
!pip install cython
!pip install https://github.com/SciTools/cartopy/archive/v0.18.0.zip
Explanation: Creating a Heatmap of Vector Results
In this notebook, you'll learn how to use Planet's Analytics API to display a heatmap of vector analytic results, specifically building change detections. This can be used to identify where the most change is happening.
Setup
Install additional dependencies
Install cartopy v0.18 beta, so that we can render OSM tiles under the heatmap:
End of explanation
import os
import requests
API_KEY = os.environ["PL_API_KEY"]
SUBSCRIPTION_ID = "..."
TIMES = None
planet = requests.session()
planet.auth = (API_KEY, '')
Explanation: API configuration
Before getting items from the API, you must set your API_KEY and the SUBSCRIPTION_ID of the change detection subscription to use.
If you want to limit the heatmap to a specific time range, also set TIMES to a valid time range.
End of explanation
import requests
import statistics
def get_next_url(result):
if '_links' in result:
return result['_links'].get('_next')
elif 'links' in result:
for link in result['links']:
if link['rel'] == 'next':
return link['href']
def get_items_from_sif():
url = 'https://api.planet.com/analytics/collections/{}/items?limit={}'.format(
SUBSCRIPTION_ID, 500)
if TIMES:
url += '&datetime={}'.format(TIMES)
print("Fetching items from " + url)
result = planet.get(url).json()
items = []
while len(result.get('features', [])) > 0:
for f in result['features']:
coords = f['geometry']['coordinates'][0]
items.append({
'lon': statistics.mean([c[0] for c in coords]),
'lat': statistics.mean([c[1] for c in coords]),
'area': f['properties']['object_area_m2']
})
url = get_next_url(result)
if not url:
return items
print("Fetching items from " + url)
result = planet.get(url).json()
items = get_items_from_sif()
print("Fetched " + str(len(items)) + " items")
# Get the bounding box coordinates of this AOI.
url = 'https://api.planet.com/analytics/subscriptions/{}'.format(SUBSCRIPTION_ID)
result = planet.get(url).json()
geometry = result['geometry']
Explanation: Fetch Items
Next, we fetch the items from the API in batches of 500 items, and return only the relevant data - the centroid and the area. This might take a few minutes to run, as some change detection feeds have thousands of items.
End of explanation
import pyproj
SRC_PROJ = 'EPSG:4326'
DEST_PROJ = 'EPSG:3857'
PROJ_UNITS = 'm'
transformer = pyproj.Transformer.from_crs(SRC_PROJ, DEST_PROJ, always_xy=True)
Explanation: Displaying the Heatmap
Once you've fetched all the items, you are nearly ready to display them as a heatmap.
Coordinate Systems
The items fetched from the API are in WGS84 (lat/lon) coordinates. However, it can be useful to display the data in an equal area projection like EPSG:3857 so that the heatmap shows change per square meter.
To do this, we use pyproj to transform the item coordinates between projections.
End of explanation
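As a quick illustration (not in the original notebook) of what the transformer does, a single lon/lat pair maps to Web Mercator metres as follows; the coordinates are just an arbitrary example point.
x_m, y_m = transformer.transform(-122.4194, 37.7749)  # example lon/lat (San Francisco)
print(x_m, y_m)  # roughly -1.36e7, 4.55e6 metres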
import matplotlib.pylab as pl
import numpy as np
from matplotlib.colors import ListedColormap
src_colormap = pl.cm.plasma
alpha_vals = src_colormap(np.arange(src_colormap.N))
alpha_vals[:int(src_colormap.N/2),-1] = np.linspace(0, 1, int(src_colormap.N/2))
alpha_vals[int(src_colormap.N/2):src_colormap.N,-1] = 1
alpha_colormap = ListedColormap(alpha_vals)
Explanation: Colormap
Matplotlib provides a number of colormaps that are useful to render heatmaps. However, all of these are solid color - in order to see an underlying map, we need to add an alpha channel.
For this example, we will use the "plasma" colormap, and add a transparent gradient to the first half of the map, so that it starts out completely transparent, and gradually becomes opaque, such that all values above the midpoint have no transparency.
End of explanation
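A quick sanity check of the colormap built above (illustrative only): the alpha channel ramps from 0 to 1 over the first half of the entries and stays fully opaque afterwards.
print(alpha_vals[0, -1], alpha_vals[src_colormap.N // 4, -1], alpha_vals[-1, -1])  # ~0.0, ~0.5, 1.0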
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import cartopy.io.img_tiles as cimgt
import cartopy.crs as ccrs
import shapely
# Heatmap Configuration
RAW_BOUNDS = shapely.geometry.shape(geometry).bounds
INTERVALS: int = 36
BOUNDS = [0.] * 4
BOUNDS[0],BOUNDS[2] = transformer.transform(RAW_BOUNDS[0],RAW_BOUNDS[1])
BOUNDS[1],BOUNDS[3] = transformer.transform(RAW_BOUNDS[2],RAW_BOUNDS[3])
# Categorization
# 1. Generate bins from bounds + intervals
aspect_ratio = (BOUNDS[1] - BOUNDS[0]) / (BOUNDS[3] - BOUNDS[2])
x_bins = np.linspace(BOUNDS[0], BOUNDS[1], INTERVALS, endpoint=False)
y_bins = np.linspace(BOUNDS[2], BOUNDS[3], int(INTERVALS/aspect_ratio), endpoint=False)
x_delta2 = (x_bins[1] - x_bins[0])/2
y_delta2 = (y_bins[1] - y_bins[0])/2
x_bins = x_bins + x_delta2
y_bins = y_bins + y_delta2
# 2. Categorize items in bins
binned = []
for f in items:
fx,fy = transformer.transform(f['lon'], f['lat'])
if (BOUNDS[0] < fx < BOUNDS[1]) and (BOUNDS[2] < fy < BOUNDS[3]):
binned.append({
'x': min(x_bins, key=(lambda x: abs(x - fx))),
'y': min(y_bins, key=(lambda y: abs(y - fy))),
'area': f['area']
})
# 3. Aggregate binned values
hist = pd.DataFrame(binned).groupby(['x', 'y']).sum().reset_index()
# 4. Pivot into an xy grid and fill in empty cells with 0.
hist = hist.pivot('y', 'x', 'area')
hist = hist.reindex(y_bins, axis=0, fill_value=0).reindex(x_bins, axis=1, fill_value=0).fillna(0)
# OSM Basemap
osm_tiles = cimgt.OSM()
carto_proj = ccrs.GOOGLE_MERCATOR
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=carto_proj)
ax.axis(BOUNDS)
tile_image = ax.add_image(osm_tiles, 8)
# Display Heatmap
heatmap = ax.imshow(hist.values, zorder=1, aspect='equal', origin='lower', extent=BOUNDS, cmap=alpha_colormap, interpolation='bicubic')
plt.colorbar(heatmap, ax=ax).set_label("Square meters of new buildings per {:.3e} {}²".format(4 * x_delta2 * y_delta2,PROJ_UNITS))
Explanation: Heatmap configuration
Note: These final four sections are presented together in one code block, to make it easier to re-run with different configurations of bounds or intervals.
Set BOUNDS to the area of interest to display (min lon,max lon,min lat,max lat). The default bounds are centered on Sydney, Australia - you should change this to match the AOI of your change detection subscription feed.
Set INTERVALS to the number of bins along the x-axis. Items are categorized into equal-size square bins based on this number of intervals and the aspect ratio of your bounds. For a square AOI, the default value of INTERVALS = 36 would give 36 * 36 = 1296 bins; an AOI with the same width that is half as tall would give 36 * 18 = 648 bins.
The area (in square meters) of each bin is displayed in the legend to the right of the plot.
Categorization
This configuration is used to categorize the items into bins for display as a heatmap.
The bounds and intervals are used to generate an array of midpoints representing the bins.
Categorize the items retrieved from the API into these bins based on which midpoint they are closest to.
Aggregate up the areas of all the items in each bin.
Convert the resulting data into an xy grid of areas and fill in missing cells with zeros.
OSM Basemap
So that we can see where our heatmap values actually are, we will use cartopy to display OSM tiles underneath the heatmap. Note that this requires an internet connection.
For an offline alternative, you could plot a vector basemap or imshow to display a local raster image.
Display Heatmap
The final step is to display the grid data as a heatmap, using imshow. You can use the parameters here to change how the heatmap is rendered. For example, choose a different cmap to change the color, or add the interpolation='bicubic' parameter to display smooth output instead of individual pixels.
To make it clear where the heatmap is being displayed, use Natural Earth 1:110m datasets to render a map alongside the heatmap data.
End of explanation |
1,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedforward Neural Network
Step1: <script type="text/javascript" src="https
Step2: It can be seen from the above figure that as we increase our input, our activation starts to saturate, which can in turn kill gradients. This can be mitigated using rectified activation functions. Another problem that we encounter in training deep neural networks during backpropagation is vanishing gradient and gradient explosion. It can be observed that the derivative of our nth activation- $\large\frac{\partial act_n}{\partial pre_act_n}$ - is largest near zero and never exceeds 1. Let's assume that the weights are $< 1$; this will usually satisfy $|w_{i}*tanh'(x)| < 1$. The successive product of such values in each layer will exponentially decrease the computed product, leading to vanishing gradient. This is not a rigorous explanation of the vanishing gradient problem. For more information refer to this article.
Similarly, if the weights are large (e.g., 100, 40, ...), we can formulate the gradient explosion problem.
Step3: Animate Training | Python Code:
# import feedforward neural net
from mlnn import neural_net
Explanation:
Feedforward Neural Network
End of explanation
# Visualize tanh and its derivative
x = np.linspace(-np.pi, np.pi, 120)
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.plot(x, np.tanh(x))
plt.title("tanh(x)")
plt.xlim(-3, 3)
plt.subplot(1, 2, 2)
plt.plot(x, 1 - np.square(np.tanh(x)))
plt.xlim(-3, 3)
plt.title("tanh\'(x)")
plt.show()
Explanation: <script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
Let's build a 4-layer neural network. Our network has one input layer, two hidden layers and one output layer. Our model can be represented as a directed acyclic graph wherein each node in a layer is connected to all other nodes in its successive layer. The neural net is shown below-
Each node in the hidden layer uses a nonlinear activation function $f(x)$, which computes the outputs from its inputs and transfers these outputs to successive layers. Here we've used $f(x)= tanh(x)$ as our non-linear activation. Its derivative is given by $f'(x)= 1-tanh(x)^2$.
Our network graph can be represented as-
| Layer No. | Notation | Value | Variable |
|----------:|-----------:|---------------------------------------------:|----------:|
| 1 | X | $X$| X|
| 2 | W1(~)+b1 | $W1X+b1$| pre_act1|
| 2 | tanh | $tanh(W1X+b1)$| act1|
| 3 | W2(~)+b2 | $W2(tanh(W1X+b1))+b2$| pre_act2|
| 3 | tanh | $tanh(W2(tanh(W1X+b1))+b2)$| act2|
| 4 | W3(~)+b3 | $W3(tanh(W2(tanh(W1X+b1))+b2))+b3$ | pre_act3|
| 4 | softmax |$softmax(W3(tanh(W2(tanh(W1X+b1))+b2))+b3)$| act3|
Backpropagation
Now we formulate the backpropagation algorithm or backprop for training the network. For derivation of the backprop, please see Dr. Hugo Larochelle's excellent course on neural networks.
$ \large\frac{\partial L}{\partial Pred} = \frac{\partial L}{\partial L} * \frac{\partial L}{\partial Pred} $
$ \large\frac{\partial L}{\partial act3} = \frac{\partial L}{\partial Pred} * \frac{\partial Pred}{\partial act3} $
$ \large\frac{\partial L}{\partial pre_act3} = \frac{\partial L}{\partial act3} * \frac{\partial act3}{\partial pre_act3}= \delta4$
$ \large\frac{\partial L}{\partial act2} = \frac{\partial L}{\partial pre_act3} * \frac{\partial pre_act3}{\partial act2} $
$ \large\frac{\partial L}{\partial pre_act2} = \frac{\partial L}{\partial act2} * \frac{\partial act2}{\partial pre_act2}= \delta3$
$ \large\frac{\partial L}{\partial act1} = \frac{\partial L}{\partial pre_act2} * \frac{\partial pre_act2}{\partial act1} $
$ \large\frac{\partial L}{\partial pre_act1} = \frac{\partial L}{\partial act1} * \frac{\partial act1}{\partial pre_act1}= \delta2$
$ \large\frac{\partial L}{\partial W3} = \delta4 * \frac{\partial pre_act3}{\partial W3}$
$ \large\frac{\partial L}{\partial W2} = \delta3 * \frac{\partial pre_act2}{\partial W2}$
$ \large\frac{\partial L}{\partial W1} = \delta2 * \frac{\partial pre_act1}{\partial W1}$
End of explanation
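To make the equations above concrete, here is an illustrative NumPy sketch of one forward and one backward pass for this network. This is not the mlnn implementation: it assumes a cross-entropy loss with one-hot targets Y, appropriately shaped weight matrices W1, W2, W3 and biases b1, b2, b3, and it omits the bias gradients and regularization for brevity.
import numpy as np

def forward_pass(X, W1, b1, W2, b2, W3, b3):
    act1 = np.tanh(X.dot(W1) + b1)                       # layer 2
    act2 = np.tanh(act1.dot(W2) + b2)                    # layer 3
    pre_act3 = act2.dot(W3) + b3                         # layer 4
    e = np.exp(pre_act3 - pre_act3.max(axis=1, keepdims=True))
    act3 = e / e.sum(axis=1, keepdims=True)              # softmax output
    return act1, act2, act3

def backward_pass(X, Y, act1, act2, act3, W2, W3):
    delta4 = act3 - Y                                    # dL/d(pre_act3) for softmax + cross-entropy
    dW3 = act2.T.dot(delta4)
    delta3 = delta4.dot(W3.T) * (1 - act2 ** 2)          # tanh'(x) = 1 - tanh(x)^2
    dW2 = act1.T.dot(delta3)
    delta2 = delta3.dot(W2.T) * (1 - act1 ** 2)
    dW1 = X.T.dot(delta2)
    return dW1, dW2, dW3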
# Training the neural network
my_nn = neural_net([2, 4, 2]) # [2,4,2] = [input nodes, hidden nodes, output nodes]
my_nn.train(X, y, 0.001, 0.0001) # weights regularization lambda= 0.001 , epsilon= 0.0001
### visualize predictions
my_nn.visualize_preds(X ,y)
Explanation: It can be seen from the above figure that as we increase our input, our activation starts to saturate, which can in turn kill gradients. This can be mitigated using rectified activation functions. Another problem that we encounter in training deep neural networks during backpropagation is vanishing gradient and gradient explosion. It can be observed that the derivative of our nth activation- $\large\frac{\partial act_n}{\partial pre_act_n}$ - is largest near zero and never exceeds 1. Let's assume that the weights are $< 1$; this will usually satisfy $|w_{i}*tanh'(x)| < 1$. The successive product of such values in each layer will exponentially decrease the computed product, leading to vanishing gradient. This is not a rigorous explanation of the vanishing gradient problem. For more information refer to this article.
Similarly, if the weights are large (e.g., 100, 40, ...), we can formulate the gradient explosion problem.
End of explanation
X_, y_ = sklearn.datasets.make_circles(n_samples=400, noise=0.18, factor=0.005, random_state=1)
plt.figure(figsize=(7, 5))
plt.scatter(X_[:, 0], X_[:, 1], s=15, c=y_, cmap=plt.cm.Spectral)
plt.show()
'''
Uncomment the code below to see classification process for above data.
To stop training early reduce no. of iterations.
'''
#new_nn = neural_net([2, 6, 2])
#new_nn.animate_preds(X_, y_, 0.001, 0.0001) # max iterations = 35000
Explanation: Animate Training:
End of explanation |
1,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NEST implementation of the aeif models
Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09
This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire
(AEIF) neuronal model and compares it with several numerical implementations using simpler solvers.
In particular this justifies the change of implementation in September 2016 to make the simulation
closer to the reference solution.
Position of the problem
Basics
The equations governing the evolution of the AEIF model are
$$\left\lbrace\begin{array}{rcl}
C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) -w\\
\tau_s\dot{w} &=& a(V-E_L) - w
\end{array}\right.$$
when $V < V_{peak}$ (threshold/spike detection).
Once a spike occurs, we apply the reset conditions
Step1: Scipy functions mimicking the NEST code
Right hand side functions
Step2: Complete model
Step6: LSODAR reference solution
Setting assimulo class
Step7: LSODAR reference model
Step8: Set the parameters and simulate the models
Params (choose a dictionary)
Step9: Simulate the 3 implementations
Step10: Plot the results
Zoom out
Step11: Zoom in
Step12: Compare properties at spike times
Step13: Size of minimal integration timestep
Step14: Convergence towards LSODAR reference with step size
Zoom out
Step15: Zoom in | Python Code:
# Install assimulo package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install assimulo
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 6)
Explanation: NEST implementation of the aeif models
Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09
This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire
(AEIF) neuronal model and compares it with several numerical implementations using simpler solvers.
In particular this justifies the change of implementation in September 2016 to make the simulation
closer to the reference solution.
Position of the problem
Basics
The equations governing the evolution of the AEIF model are
$$\left\lbrace\begin{array}{rcl}
C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) -w\\
\tau_s\dot{w} &=& a(V-E_L) - w
\end{array}\right.$$
when $V < V_{peak}$ (threshold/spike detection).
Once a spike occurs, we apply the reset conditions:
$$V = V_r \quad \text{and} \quad w = w + b$$
Divergence
In the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing (threshpassing), the argument of the exponential can become very large.
This can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\Delta_T$ is small.
Tested solutions
Old implementation (before September 2016)
The original solution was to bind the exponential argument to be smaller than 10 (an ad hoc value chosen to be close to the original implementation in BRIAN).
As will be shown in the notebook, this solution does not converge to the reference LSODAR solution.
New implementation
The new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.
We will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.
Reference solution
The reference solution is implemented using the LSODAR solver which is described and compared in the following references:
http://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)
http://www.sciencedirect.com/science/article/pii/S0377042712000684
http://www.radford.edu/~thompson/RP/rootfinding.pdf
https://computation.llnl.gov/casc/nsde/pubs/u88007.pdf
http://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf
http://www.sciencedirect.com/science/article/pii/0377042789903348
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf
https://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf
Technical details and requirements
Implementation of the functions
The old and new implementations are reproduced using Scipy and are called by the scipy_aeif function
The NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.
The reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.
Requirements
To run this notebook, you need:
numpy and scipy
assimulo
matplotlib
End of explanation
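A small numeric aside (not part of the reference implementation) showing the divergence described above: for V far above V_T the spike current overflows double precision, which is why the implementations bound either the exponential argument or V itself. The parameter values are the regular-spiking set used later in this notebook.
g_L, Delta_T, V_T = 11., 2., -50.
for V in (-45., 0., 30., 1500.):
    with np.errstate(over='ignore'):
        print(V, g_L * Delta_T * np.exp((V - V_T) / Delta_T))  # the last value overflows to inf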
def rhs_aeif_new(y, _, p):
'''
New implementation bounding V < V_peak
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = min(y[0], p.Vpeak)
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
def rhs_aeif_old(y, _, p):
'''
Old implementation bounding the argument of the
exponential function (e_arg < 10.).
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = y[0]
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
e_arg = min((v-p.vT)/p.DeltaT, 10.)
Ispike = p.gL * p.DeltaT * np.exp(e_arg)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
Explanation: Scipy functions mimicking the NEST code
Right hand side functions
End of explanation
def scipy_aeif(p, f, simtime, dt):
'''
Complete aeif model using scipy `odeint` solver.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
f : function
Right-hand side function (either `rhs_aeif_old`
or `rhs_aeif_new`)
simtime : double
Duration of the simulation (will run between
0 and tmax)
dt : double
Time increment.
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
fos : list
List of dictionaries containing additional output
information from `odeint`
'''
t = np.arange(0, simtime, dt) # time axis
n = len(t)
y = np.zeros((n, 2)) # V, w
y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)
y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)
s = [] # spike times
vs = [] # membrane potential at spike before reset
ws = [] # w at spike before step
fos = [] # full output dict from odeint()
# imitate NEST: update time-step by time-step
for k in range(1, n):
# solve ODE from t_k-1 to t_k
d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)
y[k, :] = d[1, :]
fos.append(fo)
# check for threshold crossing
if y[k, 0] >= p.Vpeak:
s.append(t[k])
vs.append(y[k, 0])
ws.append(y[k, 1])
y[k, 0] = p.Vreset # reset
y[k, 1] += p.b # step
return t, y, s, vs, ws, fos
Explanation: Complete model
End of explanation
from assimulo.solvers import LSODAR
from assimulo.problem import Explicit_Problem
class Extended_Problem(Explicit_Problem):
# need variables here for access
sw0 = [ False ]
ts_spikes = []
ws_spikes = []
Vs_spikes = []
def __init__(self, p):
self.p = p
self.y0 = [self.p.EL, 5.] # V, w
# reset variables
self.ts_spikes = []
self.ws_spikes = []
self.Vs_spikes = []
#The right-hand-side function (rhs)
def rhs(self, t, y, sw):
'''This is the function we are trying to simulate (aeif model).'''
V, w = y[0], y[1]
Ispike = 0.
if self.p.DeltaT != 0.:
Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)
dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm
dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w
return np.array([dotV, dotW])
# Sets a name to our function
name = 'AEIF_nosyn'
# The event function
def state_events(self, t, y, sw):
'''This is our function that keeps track of our events. When the sign
of any of the events has changed, we have an event.'''
event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike
if event_0 < 0:
if not self.ts_spikes:
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
return np.array([event_0])
#Responsible for handling the events.
def handle_event(self, solver, event_info):
'''Event handling. This function is called when Assimulo finds an event as
specified by the event functions.'''
ev = event_info
event_info = event_info[0] # only look at the state events information.
if event_info[0] > 0:
solver.sw[0] = True
solver.y[0] = self.p.Vreset
solver.y[1] += self.p.b
else:
solver.sw[0] = False
def initialize(self, solver):
solver.h_sol=[]
solver.nq_sol=[]
def handle_result(self, solver, t, y):
Explicit_Problem.handle_result(self, solver, t, y)
# Extra output for algorithm analysis
if solver.report_continuously:
h, nq = solver.get_algorithm_data()
solver.h_sol.extend([h])
solver.nq_sol.extend([nq])
Explanation: LSODAR reference solution
Setting assimulo class
End of explanation
def reference_aeif(p, simtime):
'''
Reference aeif model using LSODAR.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
f : function
Right-hand side function (either `rhs_aeif_old`
or `rhs_aeif_new`)
simtime : double
Duration of the simulation (will run between
0 and tmax)
dt : double
Time increment.
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
h : list
List of the minimal time increment at each step.
'''
#Create an instance of the problem
exp_mod = Extended_Problem(p) #Create the problem
exp_sim = LSODAR(exp_mod) #Create the solver
exp_sim.atol=1.e-8
exp_sim.report_continuously = True
exp_sim.store_event_points = True
exp_sim.verbosity = 30
#Simulate
t, y = exp_sim.simulate(simtime) #Simulate 10 seconds
return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol
Explanation: LSODAR reference model
End of explanation
# Regular spiking
aeif_param = {
'V_reset': -58.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 420.,
'g_L': 11.,
'tau_w': 300.,
'E_L': -70.,
'Delta_T': 2.,
'a': 3.,
'b': 0.,
'C_m': 200.,
'V_m': -70., #! must be equal to E_L
'w': 5., #! must be equal to 5.
'tau_syn_ex': 0.2
}
# Bursting
aeif_param2 = {
'V_reset': -46.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 500.0,
'g_L': 10.,
'tau_w': 120.,
'E_L': -58.,
'Delta_T': 2.,
'a': 2.,
'b': 100.,
'C_m': 200.,
'V_m': -58., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
# Close to chaos (use resolution < 0.005 and simtime = 200)
aeif_param3 = {
'V_reset': -48.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 160.,
'g_L': 12.,
'tau_w': 130.,
'E_L': -60.,
'Delta_T': 2.,
'a': -11.,
'b': 30.,
'C_m': 100.,
'V_m': -60., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
class Params:
'''
Class giving access to the neuronal
parameters.
'''
def __init__(self):
self.params = aeif_param
self.Vpeak = aeif_param["V_peak"]
self.Vreset = aeif_param["V_reset"]
self.gL = aeif_param["g_L"]
self.Cm = aeif_param["C_m"]
self.EL = aeif_param["E_L"]
self.DeltaT = aeif_param["Delta_T"]
self.tau_w = aeif_param["tau_w"]
self.a = aeif_param["a"]
self.b = aeif_param["b"]
self.vT = aeif_param["V_th"]
self.Ie = aeif_param["I_e"]
p = Params()
Explanation: Set the parameters and simulate the models
Params (choose a dictionary)
End of explanation
# Parameters of the simulation
simtime = 100.
resolution = 0.01
t_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resolution)
t_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resolution)
t_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)
Explanation: Simulate the 3 implementations
End of explanation
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="m", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="y", label="w new")
# Show
ax.set_xlim([0., simtime])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([-20., 20.])
ax2.set_ylabel("w (pA)")
ax.legend(loc=6)
ax2.legend(loc=2)
plt.show()
Explanation: Plot the results
Zoom out
End of explanation
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="y", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="m", label="w new")
ax.set_xlim([90., 92.])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([17.5, 18.5])
ax2.set_ylabel("w (pA)")
ax.legend(loc=5)
ax2.legend(loc=2)
plt.show()
Explanation: Zoom in
End of explanation
print("spike times:\n-----------")
print("ref", np.around(s_ref, 3)) # ref lsodar
print("old", np.around(s_old, 3))
print("new", np.around(s_new, 3))
print("\nV at spike time:\n---------------")
print("ref", np.around(vs_ref, 3)) # ref lsodar
print("old", np.around(vs_old, 3))
print("new", np.around(vs_new, 3))
print("\nw at spike time:\n---------------")
print("ref", np.around(ws_ref, 3)) # ref lsodar
print("old", np.around(ws_old, 3))
print("new", np.around(ws_new, 3))
Explanation: Compare properties at spike times
End of explanation
plt.semilogy(t_ref, h_ref, label='Reference')
plt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')
plt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')
plt.legend(loc=6)
plt.show();
Explanation: Size of minimal integration timestep
End of explanation
plt.plot(t_ref, y_ref[:,0], label="V ref.")
resolutions = (0.1, 0.01, 0.001)
di_res = {}
for resolution in resolutions:
t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resolution)
t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resolution)
di_res[resolution] = (t_old, y_old, t_new, y_new)
plt.plot(t_old, y_old[:,0], linestyle=":", label="V old, r={}".format(resolution))
plt.plot(t_new, y_new[:,0], linestyle="--", linewidth=1.5, label="V new, r={}".format(resolution))
plt.xlim(0., simtime)
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
Explanation: Convergence towards LSODAR reference with step size
Zoom out
End of explanation
plt.plot(t_ref, y_ref[:,0], label="V ref.")
for resolution in resolutions:
t_old, y_old = di_res[resolution][:2]
t_new, y_new = di_res[resolution][2:]
plt.plot(t_old, y_old[:,0], linestyle="--", label="V old, r={}".format(resolution))
plt.plot(t_new, y_new[:,0], linestyle="-.", linewidth=2., label="V new, r={}".format(resolution))
plt.xlim(90., 92.)
plt.ylim([-62., 2.])
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
Explanation: Zoom in
End of explanation |
1,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Words Associated to each Gender (through PMI)
In this notebook we compute PMI scores for the vocabulary obtained in the previous notebook.
By Eduardo Graells-Garrido.
Step1: First, we load a list of English stopwords. We also add some stopwords that we found on the dataset while exploring word frequency.
Note that we store a list of stopwords in the file stopwords_en.txt in our target folder (in the case of the English edition).
Step2: We also load our person data.
Step3: And our vocabulary. We will consider only words that appear in both genders (so it makes sense to compare association).
Step4: Now we estimate PMI. Recall that PMI is
Step5: Now we are ready to explore PMI. Recall that PMI overweights words that have extremely low frequencies. We need to set a threshold for it. For instance, in our previous paper we considered 1% of biographies as threshold. But this time we have more biographies, and with 1% we don't have 200 words for women.
Hence, this time we lower the bar up to 0.1%.
Step6: What we will do is to save both lists of top-200 words and then manually annotate them according to the following categories | Python Code:
from __future__ import print_function, unicode_literals, division
from cytoolz.dicttoolz import valmap
from collections import Counter
import pandas as pd
import json
import gzip
import numpy as np
import pandas as pd
import dbpedia_config
target_folder = dbpedia_config.TARGET_FOLDER
Explanation: Words Associated to each Gender (through PMI)
In this notebook we compute PMI scores for the vocabulary obtained in the previous notebook.
By Eduardo Graells-Garrido.
End of explanation
with open('{0}/stopwords_{1}.txt'.format(target_folder, dbpedia_config.MAIN_LANGUAGE), 'r') as f:
stopwords = f.read().split()
stopwords.extend('Monday Tuesday Wednesday Thursday Friday Saturday Sunday'.lower().split())
stopwords.extend('January February March April May June July August September October November December'.lower().split())
stopwords.extend('one two three four five six seven eight nine ten'.lower().split())
len(stopwords)
Explanation: First, we load a list of English stopwords. We also add some stopwords that we found on the dataset while exploring word frequency.
Note that we store a list of stopwords in the file stopwords_en.txt in our target folder (in the case of the English edition).
End of explanation
person_data = pd.read_csv('{0}/person_data_en.csv.gz'.format(target_folder), encoding='utf-8', index_col='uri')
N = person_data.gender.value_counts()
N
Explanation: We also load our person data.
End of explanation
with gzip.open('{0}/vocabulary.json.gz'.format(target_folder), 'rb') as f:
vocabulary = valmap(Counter, json.load(f))
common_words = list(set(vocabulary['male'].keys()) & set(vocabulary['female'].keys()))
len(common_words)
def word_iter():
for w in common_words:
if w in stopwords:
continue
yield {'male': vocabulary['male'][w], 'female': vocabulary['female'][w], 'word': w}
words = pd.DataFrame.from_records(word_iter(), index='word')
Explanation: And our vocabulary. We will consider only words that appear in both genders (so it makes sense to compare association).
End of explanation
p_c = N / N.sum()
p_c
words['p_w'] = (words['male'] + words['female']) / N.sum()
words['p_w'].head(5)
words['p_male_w'] = words['male'] / N.sum()
words['p_female_w'] = words['female'] / N.sum()
words['pmi_male'] = np.log(words['p_male_w'] / (words['p_w'] * p_c['male'])) / -np.log(words['p_male_w'])
words['pmi_female'] = np.log(words['p_female_w'] / (words['p_w'] * p_c['female'])) / -np.log(words['p_female_w'])
words.head()
Explanation: Now we estimate PMI. Recall that PMI is:
$$\mbox{PMI}(c, w) = \log \frac{p(c, w)}{p(c) p(w)}$$
Where c is a class (or gender) and w is a word (or bigram in our case). To normalize PMI we can divide by $-\log p(c,w)$.
End of explanation
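As a small worked example of the normalized PMI above (purely illustrative, with made-up counts): for a hypothetical word appearing in 300 male and 100 female biographies, the male-association score would be computed like this, reusing N and p_c from the cells above.
toy_p_w = 400 / N.sum()
toy_p_male_w = 300 / N.sum()
toy_npmi_male = np.log(toy_p_male_w / (toy_p_w * p_c['male'])) / -np.log(toy_p_male_w)
toy_npmi_male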
min_p = 0.001
top_female = words[words.p_w > min_p].sort_values(by=['pmi_female'], ascending=False)
top_female.head(10)
top_male = words[words.p_w > min_p].sort_values(by=['pmi_male'], ascending=False)
top_male.head(10)
Explanation: Now we are ready to explore PMI. Recall that PMI overweights words that have extremely low frequencies. We need to set a threshold for it. For instance, in our previous paper we considered 1% of biographies as threshold. But this time we have more biographies, and with 1% we don't have 200 words for women.
Hence, this time we lower the bar up to 0.1%.
End of explanation
top_male.head(200).to_csv('{0}/top-200-pmi-male.csv'.format(target_folder), encoding='utf-8')
top_female.head(200).to_csv('{0}/top-200-pmi-female.csv'.format(target_folder), encoding='utf-8')
Explanation: What we will do is to save both lists of top-200 words and then manually annotate them according to the following categories:
F: Family
R: Relationship
G: Gender
O: Other
We will add that categorization to the column "cat", and we will process it in the following notebook.
End of explanation |
1,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adaptive Filters Real-time Use with Padasip Module
This tutorial shows how to use the Padasip module for filtering and prediction with adaptive filters in real time.
Let's start with importing padasip. In the following examples we will also use numpy and matplotlib.
Step1: One Sample Ahead Prediction Example with the NLMS Filter
Consider measurement of a variable $\textbf{d}$ in time $k$. The inputs of the system which produces this variable are also measured at every sample $\textbf{x}(k)$. We will simulate the measurement via the following function
Step2: For prediction of the variable $d(k)$ it is possible to use any implemented filter (LMS, RLS, NLMS). In this case the NLMS filter is used. The filter (with a size of 3 in this example) can be created as follows
Step3: Now the created filter can be used in the loop in real time as
Step4: Now, according to logged values it is possible to display the learning process of the filter. | Python Code:
import numpy as np
import matplotlib.pylab as plt
import padasip as pa
%matplotlib inline
plt.style.use('ggplot') # nicer plots
np.random.seed(52102) # always use the same random seed to make results comparable
Explanation: Adaptive Filters Real-time Use with Padasip Module
This tutorial shows how to use the Padasip module for filtering and prediction with adaptive filters in real time.
Let's start with importing padasip. In the following examples we will also use numpy and matplotlib.
End of explanation
def measure_x():
# input vector of size 3
x = np.random.random(3)
return x
def measure_d(x):
# measure system output
d = 2*x[0] + 1*x[1] - 1.5*x[2]
return d
Explanation: One Sample Ahead Prediction Example with the NLMS Filter
Consider measurement of a variable $\textbf{d}$ in time $k$. The inputs of the system which produces this variable are also measured at every sample $\textbf{x}(k)$. We will simulate the measurement via the following function
End of explanation
filt = pa.filters.FilterNLMS(3, mu=1.)
Explanation: For prediction of the variable $d(k)$ it is possible to use any implemented filter (LMS, RLS, NLMS). In this case the NLMS filter is used. The filter (with a size of 3 in this example) can be created as follows
End of explanation
N = 100
log_d = np.zeros(N)
log_y = np.zeros(N)
for k in range(N):
# measure input
x = measure_x()
# predict new value
y = filt.predict(x)
# do the important stuff with prediction output
pass
# measure output
d = measure_d(x)
# update filter
filt.adapt(d, x)
# log values
log_d[k] = d
log_y[k] = y
Explanation: Now the created filter can be used in the loop in real time as
End of explanation
plt.figure(figsize=(12.5,6))
plt.plot(log_d, "b", label="target")
plt.plot(log_y, "g", label="prediction")
plt.xlabel("discrete time index [k]")
plt.legend()
plt.tight_layout()
plt.show()
Explanation: Now, according to logged values it is possible to display the learning process of the filter.
End of explanation |
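A small added illustration (not part of the original notebook): the same learning process can also be viewed through the squared prediction error, using only the values logged above.
plt.figure(figsize=(12.5,3))
plt.plot(10*np.log10((log_d-log_y)**2), "r", label="squared error [dB]")
plt.xlabel("discrete time index [k]")
plt.legend()
plt.tight_layout()
plt.show()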
1,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EventVestor
Step1: Let's go over the columns
Step2: Finally, suppose we want a DataFrame of all earnings calendar releases in February 2012, but we only want the event_headline and the calendar_time.
Step3: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows
Step4: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
Step5: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread
Step6: Taking what we've seen from above, let's see how we'd move that into the backtester. | Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import earnings_calendar as dataset
# or if you want to import the free dataset, use:
# from quantopian.data.eventvestor import earnings_calendar_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Explanation: EventVestor: Earnings Calendar
In this notebook, we'll take a look at EventVestor's Earnings Calendar dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents the quarterly earnings releases calendar indicating date and time of reporting.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Free samples and limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase access to the full set.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
# get apple's sid first
aapl_sid = symbols('AAPL').sid
aapl_earnings = earnings_calendar[('2011-12-31' < earnings_calendar['asof_date']) & (earnings_calendar['asof_date'] <'2013-01-01') & (earnings_calendar.sid==aapl_sid)]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
aapl_earnings.sort('asof_date')
Explanation: Let's go over the columns:
- event_id: the unique identifier for this event.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: this should always be Earnings Calendar.
- event_headline: a brief description of the event
- event_phase: the inclusion of this field is likely an error on the part of the data vendor. We're currently attempting to resolve this.
- calendar_date: proposed earnings reporting date
- calendar_time: earnings release time: before/after market hours, or other.
- event_rating: this is always 1. The meaning of this is uncertain.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all of Apple's entries from 2012.
End of explanation
# manipulate with Blaze first:
feb_2012 = earnings_calendar[(earnings_calendar['asof_date'] < '2012-03-01')&('2012-02-01' <= earnings_calendar['asof_date'])]
# now that we've got a much smaller object, we can convert it to a pandas DataFrame
feb_df = odo(feb_2012, pd.DataFrame)
reduced = feb_df[['event_headline','calendar_time']]
# When printed: pandas DataFrames display the head(30) and tail(30) rows, and truncate the middle.
reduced
Explanation: Finally, suppose we want a DataFrame of all earnings calendar releases in February 2012, but we only want the event_headline and the calendar_time.
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.eventvestor import EarningsCalendar
# To use built-in Pipeline factors for this dataset
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysUntilNextEarnings,
BusinessDaysSincePreviousEarnings
)
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.eventvestor import EarningsCalendar
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(EarningsCalendar.previous_announcement.latest, 'previous_announcement')
End of explanation
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (EarningsCalendar,):
_print_fields(data)
print "---------------------------------------------------\n"
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(EarningsCalendar.previous_announcement.latest, 'previous_announcement')
pipe.add(EarningsCalendar.next_announcement.latest, 'next_announcement')
pipe.add(BusinessDaysSincePreviousEarnings(), "business_days_since")
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & EarningsCalendar.previous_announcement.latest.notnull())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.eventvestor import EarningsCalendar
# To use built-in Pipeline factors for this dataset
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysUntilNextEarnings,
BusinessDaysSincePreviousEarnings
)
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add pipeline factors
pipe.add(EarningsCalendar.previous_announcement.latest, 'previous_announcement')
pipe.add(EarningsCalendar.next_announcement.latest, 'next_announcement')
pipe.add(BusinessDaysSincePreviousEarnings(), "business_days_since")
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
Explanation: Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation |
1,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: 3T_데이터베이스, 테이블 생성하고 데이터 추가하기
Create and store a "database", "tables", and "data" on our database server
Create a database ( using your own name )
Create a table ( "zigbang" )
Add data
Step9: Practice)
How should we store the data? - Is it scalable, ... ( JOIN, Merge, GROUP BY, ... )
provider ( "직방", "다방", "꿀방", "두더지방", ) - id, name
agency ( "real-estate agency name" ) - id, provider_id, phonenumber, name
room ( "listing" ) - address, deposit, rent
Step14: Creating the agency Table
agency_id, provider_id, phonenumber, name | Python Code:
zigbang_df = pd.read_csv("zigbang.csv")
zigbang_df.head()
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
# "sakila",
charset="utf8",
)
# Get a cursor object and use it to execute commands against the DB.
cursor = db.cursor()
# Command that lists the names of all existing databases
SQL_QUERY =
SHOW DATABASES;
# pd.read_sql(SQL_QUERY, db) => returns the result as a Pandas DataFrame. That works too, but let's use the cursor here
cursor.execute(SQL_QUERY)
cursor.fetchall()
# Create a database (like "world", "sakila", ...)
SQL_QUERY =
CREATE DATABASE kipoy;
cursor.execute(SQL_QUERY)
pd.read_sql("SHOW DATABASES;", db)
db.commit() # If the connection is not working, run this first and try again.
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"kipoy",
charset="utf8",
)
# SHOW DATABASES;
# After selecting a database ( USE _______; ) => SHOW TABLES;
SQL_QUERY =
SHOW TABLES;
pd.read_sql(SQL_QUERY, db)
Explanation: 3T_Creating a database and tables, and adding data
Create and store a "database", "tables", and "data" on our database server
Create a database ( using your own name )
Create a table ( "zigbang" )
Add data
End of explanation
# 1. Select the database to use
SQL_QUERY =
USE kipoy;
cursor.execute(SQL_QUERY)
# 2. Drop the existing table, if any
SQL_QUERY =
DROP TABLE IF EXISTS provider;
cursor.execute(SQL_QUERY)
# 3. Create the table
SQL_QUERY =
CREATE TABLE IF NOT EXISTS provider
(
provider_id int PRIMARY KEY,
name varchar(20)
)
;
cursor.execute(SQL_QUERY)
db.commit()
SQL_QUERY =
SELECT *
FROM provider
;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
INSERT INTO provider (provider_id, name)
VALUES (
1,
"다방"
);
cursor.execute(SQL_QUERY)
db.commit()
pd.read_sql("SELECT * FROM provider;", db) #받아오는 데 시간이 꽤 걸리네
Explanation: Practice)
How should we store the data? - Is it scalable, ... ( JOIN, Merge, GROUP BY, ... )
provider ( "직방", "다방", "꿀방", "두더지방", ) - id, name
agency ( "real-estate agency name" ) - id, provider_id, phonenumber, name
room ( "listing" ) - address, deposit, rent
End of explanation
db.commit()
SQL_QUERY =
DROP TABLE IF EXISTS agency;
cursor.execute(SQL_QUERY)
SQL_QUERY =
CREATE TABLE IF NOT EXISTS agency (
agency_id int,
provider_id int,
name varchar(100),
phonenumber varchar(20)
)
;
cursor.execute(SQL_QUERY)
db.commit()
pd.read_sql("SELECT * FROM agency;", db)
SQL_QUERY =
INSERT INTO agency (agency_id, provider_id, name, phonenumber)
VALUES (
4,
2,
"해퓌 부동산",
"010-6235-3317"
)
;
cursor.execute(SQL_QUERY)
db.commit()
pd.read_sql("SELECT * FROM agency;", db)
SQL_QUERY =
SELECT *
FROM agency a
JOIN provider p
ON a.provider_id = p.provider_id
;
pd.read_sql(SQL_QUERY, db)
Explanation: Creating the agency Table
agency_id, provider_id, phonenumber, name
End of explanation |
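The room table sketched in the schema above (address, deposit, rent) is never actually created in this notebook. As a rough illustration only, it could follow the same pattern as the provider and agency tables; the column types and the room_id / agency_id columns below are assumptions, not part of the original.
# drop and recreate an assumed "room" table tied to an agency
cursor.execute("DROP TABLE IF EXISTS room;")
cursor.execute(
    "CREATE TABLE IF NOT EXISTS room ("
    " room_id int,"
    " agency_id int,"
    " address varchar(200),"
    " deposit int,"
    " rent int"
    ");"
)
db.commit()
pd.read_sql("SELECT * FROM room;", db)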
1,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: The <span style="background-color
Step2: Creating placeholders
It's a best practice to create placeholders before variable assignments when using TensorFlow. Here we'll create placeholders for inputs ("Xs") and outputs ("Ys").
Placeholder 'X'
Step3: Assigning bias and weights to null tensors
Now we are going to create the weights and biases, for this purpose they will be used as arrays filled with zeros. The values that we choose here can be critical, but we'll cover a better way on the second part, instead of this type of initialization.
Step4: Execute the assignment operation
Before, we assigned the weights and biases but we did not initialize them with null values. For this reason, TensorFlow needs to initialize the variables that you assign.
Please notice that we're using this notation "sess.run" because we previously started an interactive session.
Step5: Adding Weights and Biases to input
The only difference from our next operation to the picture below is that we are using the mathematical convention for what is being executed in the illustration. The tf.matmul operation performs a matrix multiplication between x (inputs) and W (weights), and then the code adds the biases.
<img src="https
Step6: Softmax Regression
Softmax is an activation function that is normally used in classification problems. It generates the probabilities for the output. For example, our model will not be 100% sure that one digit is the number nine; instead, the answer will be a distribution of probabilities where, if the model is right, the nine digit will have the largest probability.
For comparison, below is the one-hot vector for a nine digit label
Step7: Logistic function output is used for the classification between two target classes 0/1. Softmax function is generalized type of logistic function. That is, Softmax can output a multiclass categorical probability distribution.
Cost function
It is a function that is used to minimize the difference between the right answers (labels) and estimated outputs by our Network.
Step8: Type of optimization
Step9: Training batches
Train using minibatch Gradient Descent.
In practice, Batch Gradient Descent is not often used because it is too computationally expensive. The good part about this method is that you have the true gradient, but with the expensive computing task of using the whole dataset at one time. Due to this problem, Neural Networks usually use minibatches to train.
Step10: Test
Step11: <a id="ref4"></a>
Evaluating the final result
Is the final result good?
Let's check the best algorithm available out there (10th june 2016)
Step12: The MNIST data
Step13: Initial parameters
Create general parameters for the model
Step14: Input and output
Create place holders for inputs and outputs
Step15: Converting images of the data set to tensors
The input image is a 28 pixels by 28 pixels and 1 channel (grayscale)
In this case the first dimension is the batch number of the image (position of the input on the batch) and can be of any size (due to -1)
Step16: Convolutional Layer 1
Defining kernel weight and bias
Size of the filter/kernel
Step17: <img src="https
Step18: <img src="https
Step19: Apply the max pooling
Use the max pooling operation already defined, so the output would be 14x14x32
Defining a function to perform max pooling. The maximum pooling is an operation that finds maximum values and simplifies the inputs using the spatial correlations between them.
Kernel size
Step20: First layer completed
Step21: Convolutional Layer 2
Weights and Biases of kernels
Filter/kernel
Step22: Convolve image with weight tensor and add biases.
Step23: Apply the ReLU activation Function
Step24: Apply the max pooling
Step25: Second layer completed
Step26: So, what is the output of the second layer, layer2?
- it is 64 matrix of [7x7]
Fully Connected Layer 3
Type
Step27: Weights and Biases between layer 2 and 3
Composition of the feature map from the last layer (7x7) multiplied by the number of feature maps (64); 1024 outputs to the Softmax layer
Step28: Matrix Multiplication (applying weights and biases)
Step29: Apply the ReLU activation Function
Step30: Third layer completed
Step31: Optional phase for reducing overfitting - Dropout 3
It is a phase where the network "forgets" some features. At each training step in a mini-batch, some units get switched off randomly so that they will not interact with the network. That is, their weights cannot be updated, nor affect the learning of the other network nodes. This can be very useful for very large neural networks to prevent overfitting.
Step32: Layer 4- Readout Layer (Softmax Layer)
Type
Step33: Matrix Multiplication (applying weights and biases)
Step34: Apply the Softmax activation Function
softmax allows us to interpret the outputs of fcl4 as probabilities. So, y_conv is a tensor of probablities.
Step35: <a id="ref7"></a>
Summary of the Deep Convolutional Neural Network
Now it is time to recall the structure of our network
0) Input - MNIST dataset
1) Convolutional and Max-Pooling
2) Convolutional and Max-Pooling
3) Fully Connected Layer
4) Processing - Dropout
5) Readout layer - Fully Connected
6) Outputs - Classified digits
<a id="ref8"></a>
Define functions and train the model
Define the loss function
We need to compare our output, layer4 tensor, with ground truth for all mini_batch. we can use cross entropy to see how bad our CNN is working - to measure the error at a softmax layer.
The following code shows a toy sample of cross-entropy for a mini-batch of size 2 whose items have been classified. You can run it (first change the cell type to code in the toolbar) to see how cross entropy changes.
reduce_sum computes the sum of elements of (y_ * tf.log(layer4) across second dimension of the tensor, and reduce_mean computes the mean of all elements in the tensor..
Step36: Define the optimizer
It is obvious that we want minimize the error of our network which is calculated by cross_entropy metric. To solve the problem, we have to compute gradients for the loss (which is minimizing the cross-entropy) and apply gradients to variables. It will be done by an optimizer
Step37: Define prediction
Do you want to know how many of the cases in a mini-batch have been classified correctly? Let's count them.
Step38: Define accuracy
It makes more sense to report accuracy using average of correct cases.
Step39: Run session, train
Step40: If you want a fast result (it might take sometime to train it)
Step41: <br>
You can run this cell if you REALLY have time to wait (change the type of the cell to code)
<br>
PS. If you have problems running this notebook, please shut down all your running Jupyter notebooks, clear all cell outputs and run each cell only after the completion of the previous cell.
<a id="ref9"></a>
Evaluate the model
Print the evaluation to the user
Step42: Visualization
Do you want to look at all the filters?
Step43: Do you want to see the output of an image passing through first convolution layer?
Step44: What about second convolution layer? | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Explanation: <a href="https://www.cognitiveclass.ai"><img src = "https://cognitiveclass.ai/wp-content/themes/bdu3.0/static/images/cc-logo.png" align = left></a>
<br>
<br>
Introduction to Convolutional Neural Networks
In this section, we will use the famous MNIST Dataset to build two Neural Networks capable of performing handwritten digit classification. The first Network is a simple Multi-layer Perceptron (MLP) and the second one is a Convolutional Neural Network (CNN from now on). In other words, our algorithm will say, with some associated error, what type of digit the presented input is.
This lesson is not intended to be a reference for machine learning, convolutions or TensorFlow. The intention is to give notions to the user about these fields and awareness of Data Scientist Workbench capabilities. We recommend that the students search for further references to understand completely the mathematical and theoretical concepts involved.
Table of contents
What is Deep Learning
Simple test: Is tensorflow working?
1st part: classify MNIST using a simple model
Evaluating the final result
How to improve our model?
2nd part: Deep Learning applied on MNIST
Summary of the Deep Convolutional Neural Network
Define functions and train the model
Evaluate the model
<a id="ref1"></a>
What is Deep Learning?
Brief Theory: Deep learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations.
<img src="https://ibm.box.com/shared/static/gcbbrh440604cj2nksu3f44be87b8ank.png" alt="HTML5 Icon" style="width:600px;height:450px;">
<div style="text-align:center">It's time for deep learning. Our brain doesn't work with one or three layers. Why would it be different with machines? </div>
In this tutorial, we first classify MNIST using a simple Multi-layer perceptron and then, in the second part, we use deep learning to improve the accuracy of our results.
<a id="ref3"></a>
1st part: classify MNIST using a simple model.
We are going to create a simple Multi-layer perceptron, a simple type of Neural Network, to perform classification tasks on the MNIST digits dataset. If you are not familiar with the MNIST dataset, please consider reading more about it: <a href="http://yann.lecun.com/exdb/mnist/">click here</a>
What is MNIST?
According to Lecun's website, the MNIST is a: "database of handwritten digits that has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image".
Import the MNIST dataset using TensorFlow built-in feature
It's very important to notice that MNIST is a highly optimized data-set and it does not contain images. You will need to build your own code if you want to see the real digits. Another important side note is the effort that the authors invested in this data-set with normalization and centering operations.
End of explanation
sess = tf.InteractiveSession()
Explanation: The <span style="background-color:#dcdcdc"> One-hot = True</span> argument only means that, in contrast to Binary representation, the labels will be presented in a way that only one bit will be on for a specific digit. For example, five and zero in a binary code would be:
<pre>
Number representation: 0
Binary encoding: [2^5] [2^4] [2^3] [2^2] [2^1] [2^0]
Array/vector: 0 0 0 0 0 0
Number representation: 5
Binary encoding: [2^5] [2^4] [2^3] [2^2] [2^1] [2^0]
Array/vector: 0 0 0 1 0 1
</pre>
Using a different notation, the same digits using one-hot vector representation can be shown as:
<pre>
Number representation: 0
One-hot encoding: [5] [4] [3] [2] [1] [0]
Array/vector: 0 0 0 0 0 1
Number representation: 5
One-hot encoding: [5] [4] [3] [2] [1] [0]
Array/vector: 1 0 0 0 0 0
</pre>
Understanding the imported data
The imported data can be divided as follows:
Training (mnist.train) >> Use the given dataset with inputs and related outputs for training of NN. In our case, if you give an image that you know that represents a "nine", this set will tell the neural network that we expect a "nine" as the output.
- 55,000 data points
- mnist.train.images for inputs
- mnist.train.labels for outputs
Validation (mnist.validation) >> The same as training, but now the data is used to generate model properties (classification error, for example) and, from these, tune parameters like the optimal number of hidden units or determine a stopping point for the back-propagation algorithm
- 5,000 data points
- mnist.validation.images for inputs
- mnist.validation.labels for outputs
Test (mnist.test) >> the model does not have access to this information prior to the test phase. It is used to evaluate the performance and accuracy of the model against "real life situations". No further optimization beyond this point.
- 10,000 data points
- mnist.test.images for inputs
- mnist.test.labels for outputs
Creating an interactive session
You have two basic options when using TensorFlow to run your code:
[Build graphs and run session] Do all the set-up and THEN execute a session to evaluate tensors and run operations (ops)
[Interactive session] create your coding and run on the fly.
For this first part, we will use the interactive session that is more suitable for environments like Jupyter notebooks.
End of explanation
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
Explanation: Creating placeholders
It's a best practice to create placeholders before variable assignments when using TensorFlow. Here we'll create placeholders for inputs ("Xs") and outputs ("Ys").
Placeholder 'X': represents the "space" allocated for the input, that is, the images.
* Each input has 784 pixels distributed by a 28 width x 28 height matrix
* The 'shape' argument defines the tensor size by its dimensions.
* 1st dimension = None. Indicates that the batch size, can be of any size.
* 2nd dimension = 784. Indicates the number of pixels on a single flattened MNIST image.
Placeholder 'Y': represents the final output or the labels.
* 10 possible classes (0,1,2,3,4,5,6,7,8,9)
* The 'shape' argument defines the tensor size by its dimensions.
* 1st dimension = None. Indicates that the batch size, can be of any size.
* 2nd dimension = 10. Indicates the number of targets/outcomes
dtype for both placeholders: if you not sure, use tf.float32. The limitation here is that the later presented softmax function only accepts float32 or float64 dtypes. For more dtypes, check TensorFlow's documentation <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/framework.html#tensor-types">here</a>
End of explanation
# Weight tensor
W = tf.Variable(tf.zeros([784,10],tf.float32))
# Bias tensor
b = tf.Variable(tf.zeros([10],tf.float32))
Explanation: Assigning bias and weights to null tensors
Now we are going to create the weights and biases; for this purpose they will be initialized as arrays filled with zeros. The values that we choose here can be critical, but we'll cover a better way of initializing them in the second part.
End of explanation
# run the op initialize_all_variables using an interactive session
sess.run(tf.initialize_all_variables())
Explanation: Execute the assignment operation
Before, we assigned the weights and biases but we did not initialize them with null values. For this reason, TensorFlow needs to initialize the variables that you assign.
Please notice that we're using this notation "sess.run" because we previously started an interactive session.
End of explanation
#mathematical operation to add weights and biases to the inputs
tf.matmul(x,W) + b
Explanation: Adding Weights and Biases to input
The only difference from our next operation to the picture below is that we are using the mathematical convention for what is being executed in the illustration. The tf.matmul operation performs a matrix multiplication between x (inputs) and W (weights), and then the code adds the biases.
<img src="https://ibm.box.com/shared/static/88ksiymk1xkb10rgk0jwr3jw814jbfxo.png" alt="HTML5 Icon" style="width:400px;height:350px;">
<div style="text-align:center">Illustration showing how weights and biases are added to neurons/nodes. </div>
End of explanation
y = tf.nn.softmax(tf.matmul(x,W) + b)
Explanation: Softmax Regression
Softmax is an activation function that is normally used in classification problems. It generates the probabilities for the output. For example, our model will not be 100% sure that one digit is the number nine; instead, the answer will be a distribution of probabilities where, if the model is right, the nine digit will have the largest probability.
For comparison, below is the one-hot vector for a nine digit label:
A machine does not have all this certainty, so we want to know what is the best guess, but we also want to understand how sure it was and what was the second better option. Below is an example of a hypothetical distribution for a nine digit:
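The original notebook showed these two examples as images; as a plain stand-in (the probability values below are made up purely for illustration):
one_hot_nine = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # positions 0..9, only the "nine" slot is on
softmax_nine = [0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.03, 0.10, 0.80]  # sums to 1, nine gets the largest probability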
End of explanation
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
Explanation: Logistic function output is used for the classification between two target classes, 0/1. The Softmax function is a generalized type of logistic function. That is, Softmax can output a multiclass categorical probability distribution.
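Concretely, for a vector of scores $z$ the softmax of component $j$ is:
$$\mbox{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k} e^{z_k}}$$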
Cost function
It is a function that is used to minimize the difference between the right answers (labels) and estimated outputs by our Network.
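The specific cost used here (cross_entropy in the code) is the cross-entropy, which for true labels $y'$ and predicted probabilities $y$ is averaged over the mini-batch:
$$H_{y'}(y) = -\sum_i y'_i \log(y_i)$$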
End of explanation
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
Explanation: Type of optimization: Gradient Descent
This is the part where you configure the optimizer for your Neural Network. There are several optimizers available; in our case we will use Gradient Descent, which is very well established.
End of explanation
batch = mnist.train.next_batch(50)
batch[0].shape
type(batch[0])
mnist.train.images.shape
#Load 50 training examples for each training iteration
for i in range(1000):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
Explanation: Training batches
Train using minibatch Gradient Descent.
In practice, Batch Gradient Descent is not often used because it is too computationally expensive. The good part about this method is that you have the true gradient, but with the expensive computing task of using the whole dataset at one time. Due to this problem, Neural Networks usually use minibatches to train.
End of explanation
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
acc = accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}) * 100
print("The final accuracy for the simple ANN model is: {} % ".format(acc) )
sess.close() #finish the session
Explanation: Test
End of explanation
import tensorflow as tf
# finish possible remaining session
sess.close()
#Start interactive session
sess = tf.InteractiveSession()
Explanation: <a id="ref4"></a>
Evaluating the final result
Is the final result good?
Let's check the best algorithm available out there (10th june 2016):
Result: 0.21% error (99.79% accuracy)
<a href="http://cs.nyu.edu/~wanli/dropc/">Reference here</a>
<a id="ref5"></a>
How to improve our model?
Several options as follow:
Regularization of Neural Networks using DropConnect
Multi-column Deep Neural Networks for Image Classification
APAC: Augmented Pattern Classification with Neural Networks
Simple Deep Neural Network with Dropout
In the next part we are going to explore the option:
Simple Deep Neural Network with Dropout (more than 1 hidden layer)
<a id="ref6"></a>
2nd part: Deep Learning applied on MNIST
In the first part, we learned how to use a simple ANN to classify MNIST. Now we are going to expand our knowledge using a Deep Neural Network.
Architecture of our network is:
(Input) -> [batch_size, 28, 28, 1] >> Apply 32 filter of [5x5]
(Convolutional layer 1) -> [batch_size, 28, 28, 32]
(ReLU 1) -> [?, 28, 28, 32]
(Max pooling 1) -> [?, 14, 14, 32]
(Convolutional layer 2) -> [?, 14, 14, 64]
(ReLU 2) -> [?, 14, 14, 64]
(Max pooling 2) -> [?, 7, 7, 64]
[fully connected layer 3] -> [1x1024]
[ReLU 3] -> [1x1024]
[Drop out] -> [1x1024]
[fully connected layer 4] -> [1x10]
The next cells will explore this new architecture.
Starting the code
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Explanation: The MNIST data
End of explanation
width = 28 # width of the image in pixels
height = 28 # height of the image in pixels
flat = width * height # number of pixels in one image
class_output = 10 # number of possible classifications for the problem
Explanation: Initial parameters
Create general parameters for the model
End of explanation
x = tf.placeholder(tf.float32, shape=[None, flat])
y_ = tf.placeholder(tf.float32, shape=[None, class_output])
Explanation: Input and output
Create place holders for inputs and outputs
End of explanation
x_image = tf.reshape(x, [-1,28,28,1])
x_image
Explanation: Converting images of the data set to tensors
The input image is a 28 pixels by 28 pixels and 1 channel (grayscale)
In this case the first dimension is the batch number of the image (position of the input on the batch) and can be of any size (due to -1)
End of explanation
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32])) # need 32 biases for 32 outputs
Explanation: Convolutional Layer 1
Defining kernel weight and bias
Size of the filter/kernel: 5x5;
Input channels: 1 (greyscale);
32 feature maps (here, 32 feature maps means 32 different filters are applied on each image. So, the output of convolution layer would be 28x28x32). In this step, we create a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels]
End of explanation
convolve1= tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1
Explanation: <img src="https://ibm.box.com/shared/static/f4touwscxlis8f2bqjqg4u5zxftnyntc.png" style="width:800px;height:400px;" alt="HTML5 Icon" >
Convolve with weight tensor and add biases.
Defining a function to create convolutional layers. To create a convolutional layer, we use tf.nn.conv2d. It computes a 2-D convolution given 4-D input and filter tensors.
Inputs:
- tensor of shape [batch, in_height, in_width, in_channels]. x of shape [batch_size,28 ,28, 1]
- a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels]. W is of size [5, 5, 1, 32]
- stride which is [1, 1, 1, 1]
Process:
- change the filter to a 2-D matrix with shape [5*5*1,32]
- Extracts image patches from the input tensor to form a virtual tensor of shape [batch, 28, 28, 5*5*1].
- For each patch, right-multiplies the filter matrix and the image patch vector.
Output:
- A Tensor (a 2-D convolution) of size <tf.Tensor 'add_7:0' shape=(?, 28, 28, 32)- Notice: the output of the first convolution layer is 32 [28x28] images. Here 32 is considered as volume/depth of the output image.
End of explanation
h_conv1 = tf.nn.relu(convolve1)
Explanation: <img src="https://ibm.box.com/shared/static/brosafd4eaii7sggpbeqwj9qmnk96hmx.png" style="width:800px;height:400px;" alt="HTML5 Icon" >
Apply the ReLU activation Function
In this step, we just go through all the outputs of the convolution layer, convolve1, and wherever a negative number occurs, we swap it out for a 0. This is called the ReLU activation function.
End of explanation
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2
Explanation: Apply the max pooling
Use the max pooling operation already defined, so the output would be 14x14x32
Defining a function to perform max pooling. The maximum pooling is an operation that finds maximum values and simplifies the inputs using the spatial correlations between them.
Kernel size: 2x2 (if the window is a 2x2 matrix, it would result in one output pixel)
Strides: dictates the sliding behaviour of the kernel. In this case it will move 2 pixels every time, thus not overlapping.
<img src="https://ibm.box.com/shared/static/awyoq0e2r3hfx3n7xrvhw4y7gly683p4.png" alt="HTML5 Icon" style="width:800px;height:400px;">
End of explanation
layer1= h_pool1
Explanation: First layer completed
End of explanation
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64])) #need 64 biases for 64 outputs
Explanation: Convolutional Layer 2
Weights and Biases of kernels
Filter/kernel: 5x5 (25 pixels) ; Input channels: 32 (from the 1st Conv layer, we had 32 feature maps); 64 output feature maps
Notice: here, the input is 14x14x32, the filter is 5x5x32, we use 64 filters, and the output of the convolutional layer would be 14x14x64.
Notice: the convolution result of applying a filter of size [5x5x32] on image of size [14x14x32] is an image of size [14x14x1], that is, the convolution is functioning on volume.
End of explanation
convolve2= tf.nn.conv2d(layer1, W_conv2, strides=[1, 1, 1, 1], padding='SAME')+ b_conv2
Explanation: Convolve image with weight tensor and add biases.
End of explanation
h_conv2 = tf.nn.relu(convolve2)
Explanation: Apply the ReLU activation Function
End of explanation
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2
Explanation: Apply the max pooling
End of explanation
layer2= h_pool2
Explanation: Second layer completed
End of explanation
layer2_matrix = tf.reshape(layer2, [-1, 7*7*64])
Explanation: So, what is the output of the second layer, layer2?
- it is 64 matrices of [7x7]
Fully Connected Layer 3
Type: Fully Connected Layer. You need a fully connected layer to use the Softmax and create the probabilities in the end. Fully connected layers take the high-level filtered images from the previous layer, that is, all 64 matrices, and convert them to an array.
So, each [7x7] matrix will be converted to a matrix of [49x1], and then all of the 64 matrices will be connected, which makes an array of size [3136x1]. We will connect it into another layer of size [1024x1]. So, the weight between these 2 layers will be [3136x1024]
<img src="https://ibm.box.com/shared/static/hvbegd0lfr1maxpq2gpq3g8ibvk8d2eo.png" alt="HTML5 Icon" style="width:800px;height:400px;">
Flattening Second Layer
End of explanation
W_fc1 = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024])) # need 1024 biases for 1024 outputs
Explanation: Weights and Biases between layer 2 and 3
Composition of the feature map from the last layer (7x7) multiplied by the number of feature maps (64); 1024 outputs to the Softmax layer
End of explanation
fcl3=tf.matmul(layer2_matrix, W_fc1) + b_fc1
Explanation: Matrix Multiplication (applying weights and biases)
End of explanation
h_fc1 = tf.nn.relu(fcl3)
Explanation: Apply the ReLU activation Function
End of explanation
layer3= h_fc1
layer3
Explanation: Third layer completed
End of explanation
keep_prob = tf.placeholder(tf.float32)
layer3_drop = tf.nn.dropout(layer3, keep_prob)
Explanation: Optional phase for reducing overfitting - Dropout 3
It is a phase where the network "forgets" some features. At each training step in a mini-batch, some units get switched off randomly so that they will not interact with the network. That is, their weights cannot be updated, nor affect the learning of the other network nodes. This can be very useful for very large neural networks to prevent overfitting.
End of explanation
W_fc2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1)) #1024 neurons
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10])) # 10 possibilities for digits [0,1,2,3,4,5,6,7,8,9]
Explanation: Layer 4- Readout Layer (Softmax Layer)
Type: Softmax, Fully Connected Layer.
Weights and Biases
In last layer, CNN takes the high-level filtered images and translate them into votes using softmax.
Input channels: 1024 (neurons from the 3rd Layer); 10 output features
End of explanation
fcl4=tf.matmul(layer3_drop, W_fc2) + b_fc2
Explanation: Matrix Multiplication (applying weights and biases)
End of explanation
y_conv= tf.nn.softmax(fcl4)
layer4= y_conv
layer4
Explanation: Apply the Softmax activation Function
softmax allows us to interpret the outputs of fcl4 as probabilities. So, y_conv is a tensor of probabilities.
End of explanation
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(layer4), reduction_indices=[1]))
Explanation: <a id="ref7"></a>
Summary of the Deep Convolutional Neural Network
Now it is time to recall the structure of our network
0) Input - MNIST dataset
1) Convolutional and Max-Pooling
2) Convolutional and Max-Pooling
3) Fully Connected Layer
4) Processing - Dropout
5) Readout layer - Fully Connected
6) Outputs - Classified digits
<a id="ref8"></a>
Define functions and train the model
Define the loss function
We need to compare our output, the layer4 tensor, with the ground truth for the whole mini-batch. We can use cross entropy to see how badly our CNN is doing - to measure the error at the softmax layer.
The following code shows a toy sample of cross-entropy for a mini-batch of size 2 whose items have been classified. You can run it (first change the cell type to code in the toolbar) to see how cross entropy changes.
reduce_sum computes the sum of the elements of (y_ * tf.log(layer4)) across the second dimension of the tensor, and reduce_mean computes the mean of all elements in the tensor.
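A minimal stand-in for that toy sample (the original raw cell is not reproduced here; the two items and their predicted distributions below are made-up values):
import numpy as np
# a hypothetical mini-batch of 2 items over 3 classes: one confident prediction, one less confident
y_truth = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
y_pred  = np.array([[0.05, 0.90, 0.05], [0.60, 0.20, 0.20]])
# sum over classes for each item, then average over the mini-batch - the same computation as the cross_entropy cell
toy_cross_entropy = np.mean(-np.sum(y_truth * np.log(y_pred), axis=1))
print(toy_cross_entropy)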
End of explanation
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
Explanation: Define the optimizer
It is obvious that we want to minimize the error of our network, which is measured by the cross_entropy metric. To solve the problem, we have to compute gradients for the loss (which is minimizing the cross-entropy) and apply gradients to variables. This will be done by an optimizer: GradientDescent or Adagrad.
End of explanation
correct_prediction = tf.equal(tf.argmax(layer4,1), tf.argmax(y_,1))
Explanation: Define prediction
Do you want to know how many of the cases in a mini-batch have been classified correctly? Let's count them.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Define accuracy
It makes more sense to report accuracy using average of correct cases.
End of explanation
sess.run(tf.initialize_all_variables())
Explanation: Run session, train
End of explanation
for i in range(1100):
batch = mnist.train.next_batch(50)
if i%100 == 0:
#train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
loss, train_accuracy = sess.run([cross_entropy, accuracy], feed_dict={x: batch[0],y_: batch[1],keep_prob: 1.0})
print("step %d, loss %g, training accuracy %g"%(i, float(loss),float(train_accuracy)))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
Explanation: If you want a fast result (it might take sometime to train it)
End of explanation
print("test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Explanation: <br>
You can run this cell if you REALLY have time to wait (change the type of the cell to code)
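A sketch of what that longer run might look like (the 20000-iteration count is only an assumption - any large number of iterations follows the same pattern as the fast loop above):
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i%100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g"%(i, float(train_accuracy)))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})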
<br>
PS. If you have problems running this notebook, please shut down all your running Jupyter notebooks, clear all cell outputs and run each cell only after the completion of the previous cell.
<a id="ref9"></a>
Evaluate the model
Print the evaluation to the user
End of explanation
kernels = sess.run(tf.reshape(tf.transpose(W_conv1, perm=[2, 3, 0,1]),[32,-1]))
from utils import tile_raster_images
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline
image = Image.fromarray(tile_raster_images(kernels, img_shape=(5, 5) ,tile_shape=(4, 8), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
Explanation: Visualization
Do you want to look at all the filters?
End of explanation
import numpy as np
plt.rcParams['figure.figsize'] = (5.0, 5.0)
sampleimage = mnist.test.images[1]
plt.imshow(np.reshape(sampleimage,[28,28]), cmap="gray")
ActivatedUnitsL1 = sess.run(convolve1,feed_dict={x:np.reshape(sampleimage,[1,784],order='F'),keep_prob:1.0})
ActivatedUnitsL1.shape  # compute the layer-1 activations before inspecting their shape
filters = ActivatedUnitsL1.shape[3]
plt.figure(1, figsize=(20,20))
n_columns = 6
n_rows = np.math.ceil(filters / n_columns) + 1
for i in range(filters):
plt.subplot(n_rows, n_columns, i+1)
plt.title('Cov1_ ' + str(i))
plt.imshow(ActivatedUnitsL1[0,:,:,i], interpolation="nearest", cmap="gray")
Explanation: Do you want to see the output of an image passing through first convolution layer?
End of explanation
ActivatedUnitsL2 = sess.run(convolve2,feed_dict={x:np.reshape(sampleimage,[1,784],order='F'),keep_prob:1.0})
filters = ActivatedUnitsL2.shape[3]
plt.figure(1, figsize=(20,20))
n_columns = 8
n_rows = np.math.ceil(filters / n_columns) + 1
for i in range(filters):
plt.subplot(n_rows, n_columns, i+1)
plt.title('Conv_2 ' + str(i))
plt.imshow(ActivatedUnitsL2[0,:,:,i], interpolation="nearest", cmap="gray")
sess.close() #finish the session
Explanation: What about second convolution layer?
End of explanation |
1,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I've implemented the integral of wt in pearce. This notebook verifies it works as I believe it should.
Step1: Load up the tptY3 buzzard mocks.
Step2: Load up a snapshot at a redshift near the center of this bin.
Step3: This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
Step4: Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculates the redshift distribution prefactor
$$ W = \frac{2}{c}\int_0^{\infty} dz H(z) \left(\frac{dN}{dz} \right)^2 $$
Step5: If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now.
Step6: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
Step7: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
Step8: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin
Step9: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
Step10: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason. | Python Code:
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
Explanation: I've implemented the integral of wt in pearce. This notebook verifies it works as I believe it should.
End of explanation
fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'
hdulist = fits.open(fname)
hdulist.info()
hdulist[0].header
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0
Explanation: Load up the tptY3 buzzard mocks.
End of explanation
print z
Explanation: Load up a snapshot at a redshift near the center of this bin.
End of explanation
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a, particles = True)
cat.load_model(a, 'redMagic')
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0 = 100, Om0 = 0.3, Tcmb0=2.725)
#cat.cosmology = cosmo # set to the "standard" one
#cat.h = cat.cosmology.h
Explanation: This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
End of explanation
hdulist[8].columns
nz_zspec = hdulist[8]
zbin_edges = [row[0] for row in nz_zspec.data]
zbin_edges.append(nz_zspec.data[-1][2]) # add the last bin edge
zbin_edges = np.array(zbin_edges)
Nz = np.array([row[2+zbin] for row in nz_zspec.data])
N_total = np.sum(Nz)
dNdz = Nz/N_total
W = cat.compute_wt_prefactor(zbin_edges, dNdz)
print W
Explanation: Take the zspec in our selected zbin to calculate the dN/dz distribution. The cell below calculates the redshift distribution prefactor
$$ W = \frac{2}{c}\int_0^{\infty} dz H(z) \left(\frac{dN}{dz} \right)^2 $$
End of explanation
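As a rough cross-check on that prefactor, the integral can be approximated directly from the quantities above. This is only a sketch under simple assumptions (midpoint evaluation per redshift bin, cosmology taken straight from cat.cosmology, no extra h bookkeeping) and is not pearce's compute_wt_prefactor implementation:
from astropy import constants as const
zbin_mids = (zbin_edges[1:] + zbin_edges[:-1]) / 2.0
dz = np.diff(zbin_edges)
dNdz_density = dNdz / dz  # turn the per-bin fractions into a density in z
H_of_z = cat.cosmology.H(zbin_mids).value  # km / (Mpc s) by default
W_rough = (2.0 / const.c.to('km/s').value) * np.sum(H_of_z * dNdz_density**2 * dz)
print(W_rough)  # compare against the W printed above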
params = cat.model.param_dict.copy()
params['mean_occupation_centrals_assembias_param1'] = 0
params['mean_occupation_satellites_assembias_param1'] = 0
params['logMmin'] = 12.0
params['sigma_logM'] = 0.2
params['f_c'] = 0.19
params['alpha'] = 1.21
params['logM1'] = 13.71
params['logM0'] = 11.39
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
cat.cosmology
area = 4635.4 #sq degrees
full_sky = 41253 #sq degrees
volIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])
fullsky_volume = volOut-volIn
survey_volume = fullsky_volume*area/full_sky
nd_mock = N_total/survey_volume
print nd_mock
volIn.value, volOut
correct_nds = np.array([1e-3, 1e-3, 1e-3, 4e-4, 1e-4])
%%bash
ls ~jderose/public_html/bcc/catalog/redmagic/y3/buzzard/flock/buzzard-0/a/buzzard-0_1.6_y3_run_redmapper_v6.4.20_redmagic_*vlim_area.fit
vol_fname = '/u/ki/jderose/public_html/bcc/catalog/redmagic/y3/buzzard/flock/buzzard-0/a/buzzard-0_1.6_y3_run_redmapper_v6.4.20_redmagic_highlum_1.0_vlim_area.fit'
vol_hdulist = fits.open(vol_fname)
nd_mock.value/nd_cat
#compute the mean mass
mf = cat.calc_mf()
HOD = cat.calc_hod()
mass_bin_range = (9,16)
mass_bin_size = 0.01
mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
print mean_host_mass
theta_bins = np.logspace(np.log10(2.5), np.log10(2000), 25)/60 #binning used in buzzard mocks
tpoints = (theta_bins[1:]+theta_bins[:-1])/2
r_bins = np.logspace(-0.5, 1.7, 16)/cat.h
rpoints = (r_bins[1:]+r_bins[:-1])/2
r_bins
wt = cat.calc_wt(theta_bins, r_bins, W)
wt
r_bins
Explanation: If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now.
End of explanation
xi = cat.calc_xi(r_bins, do_jackknife=False)
Explanation: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
End of explanation
import george
from george.kernels import ExpSquaredKernel
kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
from scipy.stats import linregress
m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
plt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))
#plt.plot(rpoints, b2*(rpoints**m2))
plt.scatter(rpoints, xi)
plt.loglog();
plt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
#plt.loglog();
print m,b
rpoints_dense = np.logspace(-0.5, 2, 500)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.loglog();
Explanation: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
End of explanation
print zbin
#a subset of the data from above. I've verified it's correct, but we can look again.
zbin = 1
wt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))
Explanation: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin:
$$ w(\theta) = W \int_0^\infty du \xi \left(r = \sqrt{u^2 + \bar{x}^2(z)\theta^2} \right) $$
Where $\bar{x}$ is the median comoving distance to z.
End of explanation
from scipy.special import gamma
def wt_analytic(m,b,t,x):
return W*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )
theta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks
tpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2
plt.plot(tpoints, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
print bias2
plt.plot(rpoints, xi/xi_mm)
plt.plot(rpoints, cat.calc_bias(r_bins))
plt.plot(rpoints, bias2*np.ones_like(rpoints))
plt.xscale('log')
plt.plot(rpoints, xi, label = 'Galaxy')
plt.plot(rpoints, xi_mm, label = 'Matter')
plt.loglog()
plt.legend(loc ='best')
from astropy import units
from scipy.interpolate import interp1d
cat.cosmology
import pyccl as ccl
ob = 0.047
om = cat.cosmology.Om0
oc = om - ob
sigma_8 = 0.82
h = cat.h
ns = 0.96
cosmo = ccl.Cosmology(Omega_c =oc, Omega_b=ob, h=h, n_s=ns, sigma8=sigma_8 )
big_rbins = np.logspace(1, 2.1, 21)
big_rbc = (big_rbins[1:] + big_rbins[:-1])/2.0
xi_mm2 = ccl.correlation_3d(cosmo, cat.a, big_rbc)
plt.plot(rpoints, xi)
plt.plot(big_rbc, xi_mm2)
plt.vlines(30, 1e-3, 1e2)
plt.loglog()
plt.plot(np.logspace(0,1.5, 20), xi_interp(np.log10(np.logspace(0,1.5,20))))
plt.plot(np.logspace(1.2,2.0, 20), xi_mm_interp(np.log10(np.logspace(1.2,2.0,20))))
plt.vlines(30, -3, 2)
#plt.loglog()
plt.xscale('log')
xi_interp = interp1d(np.log10(rpoints), np.log10(xi))
xi_mm_interp = interp1d(np.log10(big_rbc), np.log10(xi_mm2))
print xi_interp(np.log10(30))/xi_mm_interp(np.log10(30))
#xi = cat.calc_xi(r_bins)
xi_interp = interp1d(np.log10(rpoints), np.log10(xi))
xi_mm_interp = interp1d(np.log10(big_rbc), np.log10(xi_mm2))
#xi_mm = cat._xi_mm#self.calc_xi_mm(r_bins,n_cores='all')
#if precomputed, will just load the cache
bias2 = np.mean(xi[-3:]/xi_mm[-3:]) #estimate the large scale bias from the box
#print bias2
#note i don't use the bias builtin cuz i've already computed xi_gg.
#Assume xi_mm doesn't go below 0; will fail catastrophically if it does. but if it does we can't hack around it.
#idx = -3
#m,b,_,_,_ =linregress(np.log10(rpoints), np.log10(xi))
#large_scale_model = lambda r: bias2*(10**b)*(r**m) #should i use np.power?
large_scale_model = lambda r: (10**b)*(r**m) #should i use np.power?
tpoints = (theta_bins[1:] + theta_bins[:-1])/2.0
wt_large = np.zeros_like(tpoints)
wt_small = np.zeros_like(tpoints)
x = cat.cosmology.comoving_distance(cat.z)*cat.a/cat.h
assert tpoints[0]*x.to("Mpc").value/cat.h >= r_bins[0]
#ubins = np.linspace(10**-6, 10**4.0, 1001)
ubins = np.logspace(-6, 3.0, 1001)
ubc = (ubins[1:]+ubins[:-1])/2.0
def integrate_xi(bin_no):#, w_theta, bin_no, ubc, ubins)
int_xi = 0
t_med = np.radians(tpoints[bin_no])
for ubin_no, _u in enumerate(ubc):
_du = ubins[ubin_no+1]-ubins[ubin_no]
u = _u*units.Mpc*cat.a/cat.h
du = _du*units.Mpc*cat.a/cat.h
r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > (units.Mpc)*cat.Lbox/10:
try:
int_xi+=du*bias2*(np.power(10, \
xi_mm_interp(np.log10(r.value))))
except ValueError:
int_xi+=du*0
#else:
#int_xi+=du*0#(np.power(10, \
#xi_interp(np.log10(r.value))))
wt_large[bin_no] = int_xi.to("Mpc").value/cat.h
def integrate_xi_small(bin_no):#, w_theta, bin_no, ubc, ubins)
int_xi = 0
t_med = np.radians(tpoints[bin_no])
for ubin_no, _u in enumerate(ubc):
_du = ubins[ubin_no+1]-ubins[ubin_no]
u = _u*units.Mpc*cat.a/cat.h
du = _du*units.Mpc*cat.a/cat.h
r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > (units.Mpc)*cat.Lbox/10:
#int_xi+=du*large_scale_model(r.value)
#else:
try:
int_xi+=du*(np.power(10, \
xi_interp(np.log10(r.value))))
except ValueError:
try:
int_xi+=du*bias2*(np.power(10, \
xi_mm_interp(np.log10(r.value))))
except ValueError:
int_xi+=0*du
wt_small[bin_no] = int_xi.to("Mpc").value/cat.h
#Currently this doesn't work cuz you can't pickle the integrate_xi function.
#I'll just ignore for now. This is why i'm making an emulator anyway
#p = Pool(n_cores)
map(integrate_xi, range(tpoints.shape[0]));
map(integrate_xi_small, range(tpoints.shape[0]));
#wt_large[wt_large<1e-10] = 0
wt_small[wt_small<1e-10] = 0
wt_large
plt.plot(tpoints, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints, W*wt_large, label = 'LS')
plt.plot(tpoints, W*wt_small, label = "My Calculation")
#plt.plot(tpoints, wt+W*wt_large, label = "both")
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt/wt_redmagic
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h
Explanation: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
End of explanation
x = cat.cosmology.comoving_distance(z)*a
#ubins = np.linspace(10**-6, 10**2.0, 1001)
ubins = np.logspace(-6, 2.0, 51)
ubc = (ubins[1:]+ubins[:-1])/2.0
#NLL
def liklihood(params, wt_redmagic,x, tpoints):
#print _params
#prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])
#print param_names
#print prior
#if not np.all(prior):
# return 1e9
#params = {p:v for p,v in zip(param_names, _params)}
#cat.populate(params)
#nd_cat = cat.calc_analytic_nd(parmas)
#wt = np.zeros_like(tpoints_rm[:-5])
#xi = cat.calc_xi(r_bins, do_jackknife=False)
#m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
#if np.any(xi < 0):
# return 1e9
#kernel = ExpSquaredKernel(0.05)
#gp = george.GP(kernel)
#gp.compute(np.log10(rpoints))
#for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):
# int_xi = 0
# for ubin_no, _u in enumerate(ubc):
# _du = ubins[ubin_no+1]-ubins[ubin_no]
# u = _u*unit.Mpc*a
# du = _du*unit.Mpc*a
#print np.sqrt(u**2+(x*t_med)**2)
# r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model.
# int_xi+=du*0
#else:
# the GP predicts in log, so i predict in log and re-exponate
# int_xi+=du*(np.power(10, \
# gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))
# int_xi+=du*(10**b)*(r.to("Mpc").value**m)
#print (((int_xi*W))/wt_redmagic[0]).to("m/m")
#break
# wt[bin_no] = int_xi*W.to("1/Mpc")
wt = wt_analytic(params[0],params[1], tpoints, x.to("Mpc").value)
chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )
#chi2=0
#print nd_cat
#print wt
#chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)
#mf = cat.calc_mf()
#HOD = cat.calc_hod()
#mass_bin_range = (9,16)
#mass_bin_size = 0.01
#mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
#mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
# np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
#chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)
print chi2
return chi2 #nll
print nd_mock
print wt_redmagic[:-5]
import scipy.optimize as op
results = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))
results
#plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
plt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to("Mpc").value), label = 'Mathematica Calc')
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
plt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
np.array([v for v in params.values()])
Explanation: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.
End of explanation |
1,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<CENTER>
<a href="http
Step1: In order to activate the interactive visualisation of the histogram that is later created we can use the JSROOT magic
Step2: Next we have to open the data that we want to analyze. As described above the data is stored in a *.root file.
Step3: After the data is opened we create a canvas on which we can draw a histogram. If we do not have a canvas we cannot see our histogram at the end. Its name is Canvas and its header is c. The two following arguments define the width and the height of the canvas.
Step4: The next step is to define a tree named t to get the data out of the .root file.
Step5: Now we define a histogram that will later be placed on this canvas. Its name is variable, the header of the histogram is Mass of the Z boson, the x axis is named mass [GeV] and the y axis is named events. The three following arguments indicate that this histogram contains 30 bins which have a range from 40 to 140.
Step6: Time to fill our above defined histogram. At first we define some variables and then we loop over the data. We also make some cuts as you can see in the # comments.
Step7: After filling the histogram we want to see the results of the analysis. First we draw the histogram on the canvas and then the canvas on which the histogram lies. | Python Code:
import ROOT
Explanation: <CENTER>
<a href="http://opendata.atlas.cern" class="icons"><img src="http://opendata.atlas.cern/DataAndTools/pictures/opendata-top-transblack.png" style="width:40%"></a>
</CENTER>
A more difficult notebook in python
In this notebook you can find a more difficult program that shows further high energy physics (HEP) analysis techniques.
The following analysis is searching for events where Z bosons decay to two leptons of same flavour and opposite charge (to be seen for example in the Feynman diagram).
<CENTER><img src="../images/Z_ElectronPositron.png" style="width:40%"></CENTER>
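For reference (standard relativistic kinematics, not spelled out in the original notebook), the quantity that will be histogrammed below is the invariant mass of the two-lepton system,
$$ M_{\ell\ell} = \sqrt{(E_1 + E_2)^2 - |\vec{p}_1 + \vec{p}_2|^2} , $$
which is what invmass.M() returns once the two TLorentzVectors are added.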
First of all - like we did it in the first notebook - ROOT is imported to read the files in the .root data format.
End of explanation
##%jsroot on
Explanation: In order to activate the interactive visualisation of the histogram that is later created we can use the JSROOT magic:
End of explanation
f = ROOT.TFile.Open("/home/student/datasets/MC/mc_105986.ZZ.root")
#f = ROOT.TFile.Open("http://opendata.atlas.cern/release/samples/MC/mc_147770.Zee.root")
Explanation: Next we have to open the data that we want to analyze. As described above the data is stored in a *.root file.
End of explanation
canvas = ROOT.TCanvas("Canvas","c",800,600)
Explanation: After the data is opened we create a canvas on which we can draw a histogram. If we do not have a canvas we cannot see our histogram at the end. Its name is Canvas and its header is c. The two following arguments define the width and the height of the canvas.
End of explanation
tree = f.Get("mini")
Explanation: The next step is to define a tree named t to get the data out of the .root file.
End of explanation
hist = ROOT.TH1F("variable","Mass of the Z boson; mass [GeV]; events",30,40,140)
Explanation: Now we define a histogram that will later be placed on this canvas. Its name is variable, the header of the histogram is Mass of the Z boson, the x axis is named mass [GeV] and the y axis is named events. The three following arguments indicate that this histogram contains 30 bins which have a range from 40 to 140.
End of explanation
leadLepton = ROOT.TLorentzVector()
trailLepton = ROOT.TLorentzVector()
for event in tree:
# Cut #1: At least 2 leptons
if tree.lep_n == 2:
# Cut #2: Leptons with opposite charge
if (tree.lep_charge[0] != tree.lep_charge[1]):
# Cut #3: Leptons of the same family (2 electrons or 2 muons)
if (tree.lep_type[0] == tree.lep_type[1]):
# Let's define one TLorentz vector for each, i.e. two vectors!
leadLepton.SetPtEtaPhiE(tree.lep_pt[0]/1000., tree.lep_eta[0], tree.lep_phi[0], tree.lep_E[0]/1000.)
trailLepton.SetPtEtaPhiE(tree.lep_pt[1]/1000., tree.lep_eta[1], tree.lep_phi[1], tree.lep_E[1]/1000.)
# Next line: addition of the two TLorentz vectors above --> asking for the invariant mass is then very easy (pt and E were already divided by 1000 above to work in GeV)
invmass = leadLepton + trailLepton
hist.Fill(invmass.M())
Explanation: Time to fill our above defined histogram. At first we define some variables and then we loop over the data. We also make some cuts as you can see in the # comments.
End of explanation
hist.Draw()
canvas.Draw()
Explanation: After filling the histogram we want to see the results of the analysis. First we draw the histogram on the canvas and then the canvas on which the histogram lies.
End of explanation |
1,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The use of watermark (above) is optional, and we use it to keep track of the changes while developing the tutorial material. (You can install this IPython extension via "pip install watermark". For more information, please see
Step1: The resulting dataset is a Bunch object
Step2: The features of each sample flower are stored in the data attribute of the dataset
Step3: The information about the class of each sample is stored in the target attribute of the dataset
Step4: Using the NumPy's bincount function (above), we can see that the classes are distributed uniformly in this dataset - there are 50 flowers from each species, where
class 0
Step5: This data is four dimensional, but we can visualize one or two of the dimensions
at a time using a simple histogram or scatter-plot. Again, we'll start by enabling
matplotlib inline mode
Step6: Quick Exercise
Step7: Other Available Data
Scikit-learn makes available a host of datasets for testing learning algorithms.
They come in three flavors
Step8: The data downloaded using the fetch_ scripts are stored locally,
within a subdirectory of your home directory.
You can use the following to determine where it is
Step9: Be warned
Step10: The target here is just the digit represented by the data. The data is an array of
length 64... but what does this data mean?
There's a clue in the fact that we have two versions of the data array
Step11: We can see that they're related by a simple reshaping
Step12: Let's visualize the data. It's a little bit more involved than the simple scatter-plot
we used above, but we can do it rather quickly.
Step13: We see now what the features mean. Each feature is a real-valued quantity representing the
darkness of a pixel in an 8x8 image of a hand-written digit.
Even though each sample has data that is inherently two-dimensional, the data matrix flattens
this 2D data into a single vector, which can be contained in one row of the data matrix.
Generated Data
Step14: This example is typically used with an unsupervised learning method called Locally
Linear Embedding. We'll explore unsupervised learning in detail later in the tutorial.
Exercise
Step15: Solution | Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
Explanation: The use of watermark (above) is optional, and we use it to keep track of the changes while developing the tutorial material. (You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark).
SciPy 2016 Scikit-learn Tutorial
Representation and Visualization of Data
Machine learning is about fitting models to data; for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Data in scikit-learn
Data in scikit-learn, with very few exceptions, is assumed to be stored as a
two-dimensional array, of shape [n_samples, n_features]. Many algorithms also accept scipy.sparse matrices of the same shape.
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in a database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be Boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being "zeros" for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than NumPy arrays.
As we recall from the previous section (or Jupyter notebook), we represent samples (data points or instances) as rows in the data array, and we store the corresponding features, the "dimensions," as columns.
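As a minimal toy illustration of this layout (made-up numbers, not part of the tutorial data), a matrix holding 2 samples with 4 features each would be built like this:
import numpy as np
X = np.array([[5.1, 3.5, 1.4, 0.2],
              [4.9, 3.0, 1.4, 0.2]])
print(X.shape)  # (n_samples, n_features) -> (2, 4)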
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the iris data stored by scikit-learn.
The data consists of measurements of three different iris flower species. There are three different species of iris
in this particular dataset as illustrated below:
Iris Setosa
<img src="figures/iris_setosa.jpg" width="50%">
Iris Versicolor
<img src="figures/iris_versicolor.jpg" width="50%">
Iris Virginica
<img src="figures/iris_virginica.jpg" width="50%">
Quick Question:
Let's assume that we are interested in categorizing new observations; we want to predict whether unknown flowers are Iris-Setosa, Iris-Versicolor, or Iris-Virginica flowers, respectively. Based on what we've discussed in the previous section, how would we construct such a dataset?*
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number j must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-learn
For future experiments with machine learning algorithms, we recommend you to bookmark the UCI machine learning repository, which hosts many of the commonly used datasets that are useful for benchmarking machine learning algorithms -- a very popular resource for machine learning practitioners and researchers. Conveniently, some of these datasets are already included in scikit-learn so that we can skip the tedious parts of downloading, reading, parsing, and cleaning these text/CSV files. You can find a list of available datasets in scikit-learn at: http://scikit-learn.org/stable/datasets/#toy-datasets.
For example, scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
<img src="figures/petal_sepal.jpg" alt="Sepal" style="width: 50%;"/>
(Image: "Petal-sepal". Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg#/media/File:Petal-sepal.jpg)
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
iris.keys()
Explanation: The resulting dataset is a Bunch object: you can see what's available using
the method keys():
End of explanation
n_samples, n_features = iris.data.shape
print('Number of samples:', n_samples)
print('Number of features:', n_features)
# the sepal length, sepal width, petal length and petal width of the first sample (first flower)
print(iris.data[0])
Explanation: The features of each sample flower are stored in the data attribute of the dataset:
End of explanation
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
import numpy as np
np.bincount(iris.target)
Explanation: The information about the class of each sample is stored in the target attribute of the dataset:
End of explanation
print(iris.target_names)
Explanation: Using the NumPy's bincount function (above), we can see that the classes are distributed uniformly in this dataset - there are 50 flowers from each species, where
class 0: Iris-Setosa
class 1: Iris-Versicolor
class 2: Iris-Virginica
These class names are stored in the last attribute, namely target_names:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
x_index = 3
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.hist(iris.data[iris.target==label, x_index],
label=iris.target_names[label],
color=color)
plt.xlabel(iris.feature_names[x_index])
plt.legend(loc='upper right')
plt.show()
x_index = 3
y_index = 0
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.scatter(iris.data[iris.target==label, x_index],
iris.data[iris.target==label, y_index],
label=iris.target_names[label],
c=color)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.legend(loc='upper left')
plt.show()
Explanation: This data is four dimensional, but we can visualize one or two of the dimensions
at a time using a simple histogram or scatter-plot. Again, we'll start by enabling
matplotlib inline mode:
End of explanation
import pandas as pd
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
pd.plotting.scatter_matrix(iris_df, figsize=(8, 8));
Explanation: Quick Exercise:
Change x_index and y_index in the above script
and find a combination of two parameters
which maximally separate the three classes.
This exercise is a preview of dimensionality reduction, which we'll see later.
An aside: scatterplot matrices
Instead of looking at the data one plot at a time, a common tool that analysts use is called the scatterplot matrix.
Scatterplot matrices show scatter plots between all features in the data set, as well as histograms to show the distribution of each feature.
End of explanation
from sklearn import datasets
Explanation: Other Available Data
Scikit-learn makes available a host of datasets for testing learning algorithms.
They come in three flavors:
Packaged Data: these small datasets are packaged with the scikit-learn installation,
and can be downloaded using the tools in sklearn.datasets.load_*
Downloadable Data: these larger datasets are available for download, and scikit-learn
includes tools which streamline this process. These tools can be found in
sklearn.datasets.fetch_*
Generated Data: there are several datasets which are generated from models based on a
random seed. These are available in the sklearn.datasets.make_*
You can explore the available dataset loaders, fetchers, and generators using IPython's
tab-completion functionality. After importing the datasets submodule from sklearn,
type
datasets.load_<TAB>
or
datasets.fetch_<TAB>
or
datasets.make_<TAB>
to see a list of available functions.
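Outside of IPython, a rough equivalent of that tab-completion (an assumed snippet, not part of the original tutorial) is:
print([name for name in dir(datasets) if name.startswith(('load_', 'fetch_', 'make_'))])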
End of explanation
from sklearn.datasets import get_data_home
get_data_home()
Explanation: The data downloaded using the fetch_ scripts are stored locally,
within a subdirectory of your home directory.
You can use the following to determine where it is:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print((n_samples, n_features))
print(digits.data[0])
print(digits.target)
Explanation: Be warned: many of these datasets are quite large, and can take a long time to download!
If you start a download within the IPython notebook
and you want to kill it, you can use ipython's "kernel interrupt" feature, available in the menu or using
the shortcut Ctrl-m i.
You can press Ctrl-m h for a list of all ipython keyboard shortcuts.
Loading Digits Data
Now we'll take a look at another dataset, one where we have to put a bit
more thought into how to represent the data. We can explore the data in
a similar manner as above:
End of explanation
print(digits.data.shape)
print(digits.images.shape)
Explanation: The target here is just the digit represented by the data. The data is an array of
length 64... but what does this data mean?
There's a clue in the fact that we have two versions of the data array:
data and images. Let's take a look at them:
End of explanation
import numpy as np
print(np.all(digits.images.reshape((1797, 64)) == digits.data))
Explanation: We can see that they're related by a simple reshaping:
End of explanation
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
Explanation: Let's visualize the data. It's a little bit more involved than the simple scatter-plot
we used above, but we can do it rather quickly.
End of explanation
from sklearn.datasets import make_s_curve
data, colors = make_s_curve(n_samples=1000)
print(data.shape)
print(colors.shape)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)
ax.view_init(10, -60)
Explanation: We see now what the features mean. Each feature is a real-valued quantity representing the
darkness of a pixel in an 8x8 image of a hand-written digit.
Even though each sample has data that is inherently two-dimensional, the data matrix flattens
this 2D data into a single vector, which can be contained in one row of the data matrix.
Generated Data: the S-Curve
One dataset often used as an example of a simple nonlinear dataset is the S-curve:
End of explanation
from sklearn.datasets import fetch_olivetti_faces
# fetch the faces data
# Use a script like above to plot the faces image data.
# hint: plt.cm.bone is a good colormap for this data
Explanation: This example is typically used with an unsupervised learning method called Locally
Linear Embedding. We'll explore unsupervised learning in detail later in the tutorial.
Exercise: working with the faces dataset
Here we'll take a moment for you to explore the datasets yourself.
Later on we'll be using the Olivetti faces dataset.
Take a moment to fetch the data (about 1.4MB), and visualize the faces.
You can copy the code used to visualize the digits above, and modify it for this data.
End of explanation
# %load solutions/03A_faces_plot.py
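# (The packaged solution file above is not reproduced here; the lines below are
#  only a possible sketch of such a plot, not the official solution.)
faces = fetch_olivetti_faces()
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(faces.images[i], cmap=plt.cm.bone, interpolation='nearest')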
Explanation: Solution:
End of explanation |
1,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are an interesting application of deep learning that allow models to predict the future. While regression models attempt to fit an equation to existing data and extend the predictive power of the equation into the future, RNNs fit a model and use sequences of time series data to make step-by-step predictions about the next most likely output of the model.
In this colab we will create a recurrent neural network that can predict engine vibrations.
Exploratory Data Analysis
We'll use the Engine Vibrations data from Kaggle. This dataset contains artificial engine vibration values we will use to train a model that can predict future values.
To load the data, upload your kaggle.json file and run the code block below.
Step2: Next, download the data from Kaggle.
Step3: Now load the data into a DataFrame.
Step4: We know the data contains readings of engine vibration over time. Let's see how that looks on a line chart.
Step5: That's quite a tough chart to read. Let's sample it.
Step6: See if any of the data is missing.
Step7: Finally, we'll do a box plot to see if the data is evenly distributed, which it is.
Step8: There is not much more EDA we need to do at this point. Let's move on to modeling.
Preparing the Data
Currently we have a series of data that contains a single list of vibration values over time. When training our model and when asking for predictions, we'll want to instead feed the model a subset of our sequence.
We first need to determine our subsequence length and then create in-order subsequences of that length.
We'll create a list of lists called X that contains subsequences. We'll also create a list called y that contains the next value after each subsequence stored in X.
Step9: We also need to explicitly set the final dimension of the data in order to have it pass through our model.
Step10: We'll also standardize our data for the model. Note that we don't normalize here because we need to be able to reproduce negative values.
Step11: And for final testing after model training, we'll split off 20% of the data.
Step12: Setting a Baseline
We are only training with 50 data points at a time. This is well within the bounds of what a standard deep neural network can handle, so let's first see what a very simple neural network can do.
Step13: We quickly converged and, when we ran the model, we got a baseline quality value of 0.03750885081060467.
The Most Basic RNN
Let's contrast a basic feedforward neural network with a basic RNN. To do this we simply need to use the SimpleRNN layer in our network in place of the Dense layer in our network above. Notice that, in this case, there is no need to flatten the data before we feed it into the model.
Step14: Our model converged a little more slowly, but it got an error of only 0.8974118571865628, which is not an improvement over the baseline model.
A Deep RNN
Let's try to build a deep RNN and see if we can get better results.
In the model below, we stick together four layers ranging in width from 50 nodes to our final output of 1.
Notice all of the layers except the output layer have return_sequences=True set. This causes the layer to pass outputs for all timestamps to the next layer. If you don't include this argument, only the output for the last timestamp is passed, and intermediate layers will complain about the wrong shape of input.
Step15: Woah! What happened? Our MSE during training looked nice
Step16: Even with these measures, we still seem to be overfitting a bit. We could keep tuning, but let's instead look at some other types of neurons found in RNNs.
Long Short Term Memory
The RNN layers we've been using are basic neurons that have a very short memory. They tend to learn patterns that they have recently seen, but they quickly forget older training data.
The Long Short Term Memory (LSTM) neuron was built to combat this forgetfulness. The neuron outputs values for the next layer in the network, and it also outputs two other values
Step17: We got a test RMSE of 0.8989123704842217, which is still not better than our SimpleRNN. And in the more complex model below, we got close to the baseline but still didn't beat it.
Step18: LSTM neurons can be very useful, but as we have seen, they aren't always the best option.
Let's look at one more neuron commonly found in RNN models, the GRU.
Gated Recurrent Unit
The Gated Recurrent Unit (GRU) is another special neuron that often shows up in Recurrent Neural Networks. The GRU is similar to the LSTM in that it feeds output back into itself. The difference is that the GRU feeds a single weight back into itself and then makes long- and short-term state adjustments based on that single backfeed.
The GRU tends to train faster than LSTM and has similar performance. Let's see how a network containing one GRU performs.
Step19: We got a RMSE of 0.9668634342193015, which isn't bad, but it still performs worse than our baseline.
Convolutional Layers
Convolutional layers aren't limited to image classification models. They can also be really handy when training RNNs. For training on a sequence of data, we use the Conv1D class as shown below.
Step20: Recurrent Neural Networks are a powerful tool for sequence generation and prediction. But they aren't the only mechanism for sequence prediction. If the sequence you are predicting is short enough, then a standard deep neural network might be able to provide the predictions you are looking for.
Also note that we created a model that took a series of data and output one value. It is possible to create RNNs that input one or more values and output one or more values. Each use case is different.
Exercises
Exercise 1
Step21: Exercise 2 | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
Explanation: Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are an interesting application of deep learning that allow models to predict the future. While regression models attempt to fit an equation to existing data and extend the predictive power of the equation into the future, RNNs fit a model and use sequences of time series data to make step-by-step predictions about the next most likely output of the model.
In this colab we will create a recurrent neural network that can predict engine vibrations.
Exploratory Data Analysis
We'll use the Engine Vibrations data from Kaggle. This dataset contains artificial engine vibration values we will use to train a model that can predict future values.
To load the data, upload your kaggle.json file and run the code block below.
End of explanation
!kaggle datasets download joshmcadams/engine-vibrations
!ls
Explanation: Next, download the data from Kaggle.
End of explanation
import pandas as pd
df = pd.read_csv('engine-vibrations.zip')
df.describe()
Explanation: Now load the data into a DataFrame.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(24, 8))
plt.plot(list(range(len(df['mm']))), df['mm'])
plt.show()
Explanation: We know the data contains readings of engine vibration over time. Let's see how that looks on a line chart.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(24, 8))
plt.plot(list(range(100)), df['mm'].iloc[:100])
plt.show()
Explanation: That's quite a tough chart to read. Let's sample it.
End of explanation
df.isna().any()
Explanation: See if any of the data is missing.
End of explanation
import seaborn as sns
_ = sns.boxplot(df['mm'])
Explanation: Finally, we'll do a box plot to see if the data is evenly distributed, which it is.
End of explanation
import numpy as np
X = []
y = []
sseq_len = 50
for i in range(0, len(df['mm']) - sseq_len - 1):
X.append(df['mm'][i:i+sseq_len])
y.append(df['mm'][i+sseq_len+1])
y = np.array(y)
X = np.array(X)
X.shape, y.shape
Explanation: There is not much more EDA we need to do at this point. Let's move on to modeling.
Preparing the Data
Currently we have a series of data that contains a single list of vibration values over time. When training our model and when asking for predictions, we'll want to instead feed the model a subset of our sequence.
We first need to determine our subsequence length and then create in-order subsequences of that length.
We'll create a list of lists called X that contains subsequences. We'll also create a list called y that contains the next value after each subsequence stored in X.
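As a tiny toy check (made-up numbers, not the engine data), the same windowing logic as the loop above gives:
toy = [10, 20, 30, 40, 50, 60]
k = 3
toy_X = [toy[i:i+k] for i in range(0, len(toy) - k - 1)]
toy_y = [toy[i+k+1] for i in range(0, len(toy) - k - 1)]
# toy_X -> [[10, 20, 30], [20, 30, 40]] and toy_y -> [50, 60]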
End of explanation
X = np.expand_dims(X, axis=2)
y = np.expand_dims(y, axis=1)
X.shape, y.shape
Explanation: We also need to explicitly set the final dimension of the data in order to have it pass through our model.
End of explanation
data_std = df['mm'].std()
data_mean = df['mm'].mean()
X = (X - data_mean) / data_std
y = (y - data_mean) / data_std
X.max(), y.max(), X.min(), y.min()
Explanation: We'll also standardize our data for the model. Note that we don't normalize here because we need to be able to reproduce negative values.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
Explanation: And for final testing after model training, we'll split off 20% of the data.
End of explanation
import math
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[sseq_len, 1]),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: Setting a Baseline
We are only training with 50 data points at a time. This is well within the bounds of what a standard deep neural network can handle, so let's first see what a very simple neural network can do.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(1, input_shape=[None, 1])
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: We quickly converged and, when we ran the model, we got a baseline quality value of 0.03750885081060467.
The Most Basic RNN
Let's contrast a basic feedforward neural network with a basic RNN. To do this we simply need to use the SimpleRNN layer in our network in place of the Dense layer in our network above. Notice that, in this case, there is no need to flatten the data before we feed it into the model.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(50, return_sequences=True, input_shape=[None, 1]),
keras.layers.SimpleRNN(20, return_sequences=True),
keras.layers.SimpleRNN(10, return_sequences=True),
keras.layers.SimpleRNN(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: Our model converged a little more slowly, and its error of 0.8974118571865628 is not an improvement over the baseline model.
A Deep RNN
Let's try to build a deep RNN and see if we can get better results.
In the model below, we stick together four layers ranging in width from 50 nodes to our final output of 1.
Notice all of the layers except the output layer have return_sequences=True set. This causes the layer to pass outputs for all timestamps to the next layer. If you don't include this argument, only the output for the last timestamp is passed, and intermediate layers will complain about the wrong shape of input.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(2, return_sequences=True, input_shape=[None, 1]),
keras.layers.Dropout(0.3),
keras.layers.SimpleRNN(1),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: Woah! What happened? Our MSE during training looked nice: 0.0496. But our final testing didn't perform much better than our simple model. We seem to have overfit!
We can try to simplify the model and add dropout layers to reduce overfitting, but even with a very basic model like the one below, we still get very different MSE between the training and test datasets.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.LSTM(1, input_shape=[None, 1]),
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=100, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: Even with these measures, we still seem to be overfitting a bit. We could keep tuning, but let's instead look at some other types of neurons found in RNNs.
Long Short Term Memory
The RNN layers we've been using are basic neurons that have a very short memory. They tend to learn patterns that they have recently seen, but they quickly forget older training data.
The Long Short Term Memory (LSTM) neuron was built to combat this forgetfulness. The neuron outputs values for the next layer in the network, and it also outputs two other values: one for short-term memory and one for long-term memory. These weights are then fed back into the neuron at the next iteration of the network. This backfeed is similar to that of a SimpleRNN, except the SimpleRNN only has one backfeed.
We can replace the SimpleRNN with an LSTM layer, as you can see below.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]),
keras.layers.Dropout(0.2),
keras.layers.LSTM(10),
keras.layers.Dropout(0.2),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: We got a test RMSE of 0.8989123704842217, which is still not better than our SimpleRNN. And in the more complex model below, we got close to the baseline but still didn't beat it.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.GRU(1),
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: LSTM neurons can be very useful, but as we have seen, they aren't always the best option.
Let's look at one more neuron commonly found in RNN models, the GRU.
Gated Recurrent Unit
The Gated Recurrent Unit (GRU) is another special neuron that often shows up in Recurrent Neural Networks. The GRU is similar to the LSTM in that it feeds output back into itself. The difference is that the GRU feeds a single weight back into itself and then makes long- and short-term state adjustments based on that single backfeed.
The GRU tends to train faster than LSTM and has similar performance. Let's see how a network containing one GRU performs.
End of explanation
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.Conv1D(filters=20, kernel_size=4, strides=2, padding="valid",
input_shape=[None, 1]),
keras.layers.GRU(2, input_shape=[None, 1], activation='relu'),
keras.layers.Dropout(0.2),
keras.layers.Dense(1),
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])
y_pred = model.predict(X_test)
rmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))
print("RMSE Scaled: {}\nRMSE Base Units: {}".format(
rmse, rmse * data_std + data_mean))
plt.figure(figsize=(10,10))
plt.plot(list(range(len(history.history['mse']))), history.history['mse'])
plt.show()
Explanation: We got a RMSE of 0.9668634342193015, which isn't bad, but it still performs worse than our baseline.
Convolutional Layers
Convolutional layers aren't limited to image classification models. They can also be really handy when training RNNs. For training on a sequence of data, we use the Conv1D class as shown below.
End of explanation
# Your code goes here
Explanation: Recurrent Neural Networks are a powerful tool for sequence generation and prediction. But they aren't the only mechanism for sequence prediction. If the sequence you are predicting is short enough, then a standard deep neural network might be able to provide the predictions you are looking for.
Also note that we created a model that took a series of data and output one value. It is possible to create RNNs that input one or more values and output one or more values. Each use case is different.
Exercises
Exercise 1: Visualization
Create a plot containing a series of at least 50 predicted points. Plot that series against the actual.
Hint: Pick a sequence of 100 values from the original data. Plot data points 50-100 as the actual line. Then predict 50 single values starting with the features 0-49, 1-50, etc.
Student Solution
End of explanation
# Your code goes here
Explanation: Exercise 2: Stock Price Prediction
Using the Stonks! dataset, create a recurrent neural network that can predict the stock price for the 'AAA' ticker. Calculate your RMSE with some holdout data.
Use as many text and code cells as you need to complete this exercise.
Hint: if predicting absolute prices doesn't yield a good model, look into other ways to represent the day-to-day change in data.
End of explanation |
1,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nipype Quickstart
Existing documentation
Visualizing the evolution of Nipype
This notebook is taken from reproducible-imaging repository
Import a few things from nipype and external libraries
Step1: Interfaces
Interfaces are the core pieces of Nipype. The interfaces are python modules that allow you to use various external packages (e.g. FSL, SPM or FreeSurfer), even if they themselves are written in a programming language other than Python.
Let's try to use bet from FSL
Step2: If you're lost the code is here
Step3: let's check the output
Step4: and we can plot the output file
Step5: you can always check the list of arguments using help method
Step6: Exercise 1a
Import IsotropicSmooth from nipype.interfaces.fsl and find out the FSL command that is being run. What are the mandatory inputs for this interface?
Step7: Exercise 1b
Run the IsotropicSmooth for /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz file with a smoothing kernel 4mm
Step8: Nodes and Workflows
Interfaces are the core pieces of Nipype that run the code of your desire. But to streamline your analysis and to execute multiple interfaces in a sensible order, you have to put them in something that we call a Node and create a Workflow.
In Nipype, a node is an object that executes a certain function. This function can be anything from a Nipype interface to a user-specified function or an external script. Each node consists of a name, an interface, and at least one input field and at least one output field.
Once you have multiple nodes, you can use a Workflow to connect them with each other and create a directed graph. The Nipype workflow will take care of the inputs and outputs of each interface and arrange the execution of each interface in the most efficient way.
Let's create the first node using BET interface
Step9: If you're lost the code is here
Step10: Exercise 2
Create a Node for IsotropicSmooth interface.
Step11: We will now create one more Node for our workflow
Step12: Let's check the interface
Step13: As you can see the interface takes two mandatory inputs
Step14: if you're lost, the full code is here
Step15: It's very important to specify base_dir (as absolute path), because otherwise all the outputs would be saved somewhere in the temporary files.
let's connect the bet_node output to the mask_node input
Step16: if you're lost, the code is here
Step17: Exercise 3
Connect out_file of smooth_node to in_file of mask_node.
Step18: Let's see a graph describing our workflow
Step19: you can also plot a more detailed graph
Step20: and now let's run the workflow
Step21: if you're lost, the full code is here
Step22: and let's look at the results
Step23: we can see the file structure that has been created
Step24: and we can plot the results
Step25: Iterables
Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin for Workflow, called iterables.
<img src="../static/images/iterables.png" width="240">
Let's assume we have a workflow with two nodes, node (A) does simple skull stripping, and is followed by a node (B) that does isometric smoothing. Now, let's say, that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm, and 16mm.
let's just modify smooth_node
Step26: if you're lost the code is here
Step27: we will define again bet and smooth nodes
Step28: will create a new workflow with a new base_dir
Step29: let's run the workflow and check the output
Step30: let's see the graph
Step31: We can see the file structure that was created
Step32: you have now 7 nodes instead of 3!
MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow
Step33: If I want to know the results only for one x we can use Node
Step34: let's try to ask for more values of x
Step35: It will give an error since square_func does not accept a list. But we can try MapNode
import os
from os.path import abspath
from nipype import Workflow, Node, MapNode, Function
from nipype.interfaces.fsl import BET, IsotropicSmooth, ApplyMask
from nilearn.plotting import plot_anat
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Nipype Quickstart
Existing documentation
Visualizing the evolution of Nipype
This notebook is taken from reproducible-imaging repository
Import a few things from nipype and external libraries
End of explanation
# will use a T1w from ds000114 dataset
input_file = abspath("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")
# we will be typing here
Explanation: Interfaces
Interfaces are the core pieces of Nipype. The interfaces are python modules that allow you to use various external packages (e.g. FSL, SPM or FreeSurfer), even if they themselves are written in a programming language other than Python.
Let's try to use bet from FSL:
End of explanation
bet = BET()
bet.inputs.in_file = input_file
bet.inputs.out_file = "/output/T1w_nipype_bet.nii.gz"
res = bet.run()
Explanation: If you're lost the code is here:
End of explanation
res.outputs
Explanation: let's check the output:
End of explanation
plot_anat('/output/T1w_nipype_bet.nii.gz',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
Explanation: and we can plot the output file
End of explanation
BET.help()
Explanation: you can always check the list of arguments using help method
End of explanation
# type your code here
from nipype.interfaces.fsl import IsotropicSmooth
# all this information can be found when we run `help` method.
# note that you can either provide `in_file` and `fwhm` or `in_file` and `sigma`
IsotropicSmooth.help()
Explanation: Exercise 1a
Import IsotropicSmooth from nipype.interfaces.fsl and find out the FSL command that is being run. What are the mandatory inputs for this interface?
End of explanation
# type your solution here
smoothing = IsotropicSmooth()
smoothing.inputs.in_file = "/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz"
smoothing.inputs.fwhm = 4
smoothing.inputs.out_file = "/output/T1w_nipype_smooth.nii.gz"
smoothing.run()
# plotting the output
plot_anat('/output/T1w_nipype_smooth.nii.gz',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
Explanation: Exercise 1b
Run the IsotropicSmooth for /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz file with a smoothing kernel 4mm:
End of explanation
# we will be typing here
Explanation: Nodes and Workflows
Interfaces are the core pieces of Nipype that run the code of your desire. But to streamline your analysis and to execute multiple interfaces in a sensible order, you have to put them in something that we call a Node and create a Workflow.
In Nipype, a node is an object that executes a certain function. This function can be anything from a Nipype interface to a user-specified function or an external script. Each node consists of a name, an interface, and at least one input field and at least one output field.
Once you have multiple nodes, you can use a Workflow to connect them with each other and create a directed graph. The Nipype workflow will take care of the inputs and outputs of each interface and arrange the execution of each interface in the most efficient way.
Let's create the first node using BET interface:
End of explanation
# Create Node
bet_node = Node(BET(), name='bet')
# Specify node inputs
bet_node.inputs.in_file = input_file
bet_node.inputs.mask = True
# bet node can be also defined this way:
#bet_node = Node(BET(in_file=input_file, mask=True), name='bet_node')
Explanation: If you're lost the code is here:
End of explanation
# Type your solution here:
# smooth_node =
smooth_node = Node(IsotropicSmooth(in_file=input_file, fwhm=4), name="smooth")
Explanation: Exercise 2
Create a Node for IsotropicSmooth interface.
End of explanation
mask_node = Node(ApplyMask(), name="mask")
Explanation: We will now create one more Node for our workflow
End of explanation
ApplyMask.help()
Explanation: Let's check the interface:
End of explanation
# will be writing the code here:
Explanation: As you can see the interface takes two mandatory inputs: in_file and mask_file. We want to use the output of smooth_node as in_file and one of the outputs of bet_node (the mask_file) as the mask_file input.
Let's initialize a Workflow:
End of explanation
# Initiation of a workflow
wf = Workflow(name="smoothflow", base_dir="/output/working_dir")
Explanation: if you're lost, the full code is here:
End of explanation
# we will be typing here:
Explanation: It's very important to specify base_dir (as an absolute path), because otherwise all the outputs would be saved somewhere in a temporary directory.
let's connect the bet_node output to the mask_node input:
End of explanation
wf.connect(bet_node, "mask_file", mask_node, "mask_file")
Explanation: if you're lost, the code is here:
End of explanation
# type your code here
wf.connect(smooth_node, "out_file", mask_node, "in_file")
Explanation: Exercise 3
Connect out_file of smooth_node to in_file of mask_node.
End of explanation
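For reference (my addition): wf.connect also accepts a bundled list form, which is handy once a workflow has many edges. A sketch equivalent to the two connections made above, left commented out because the nodes are already connected:
# wf.connect([(bet_node, mask_node, [("mask_file", "mask_file")]),
#             (smooth_node, mask_node, [("out_file", "in_file")])])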
wf.write_graph("workflow_graph.dot")
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/workflow_graph.png")
Explanation: Let's see a graph describing our workflow:
End of explanation
wf.write_graph(graph2use='flat')
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/graph_detailed.png")
Explanation: you can also plot a more detailed graph:
End of explanation
# we will type our code here:
Explanation: and now let's run the workflow
End of explanation
# Execute the workflow
res = wf.run()
Explanation: if you're lost, the full code is here:
End of explanation
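Optional (my addition): the same workflow can also be executed in parallel by passing an execution plugin to run, for example:
# res = wf.run(plugin='MultiProc', plugin_args={'n_procs': 4})  # use up to 4 cores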
# we can check the output of specific nodes from workflow
list(res.nodes)[0].result.outputs
Explanation: and let's look at the results
End of explanation
! tree -L 3 /output/working_dir/smoothflow/
Explanation: we can see the file structure that has been created:
End of explanation
import numpy as np
import nibabel as nb
#import matplotlib.pyplot as plt
# Let's create a short helper function to plot 3D NIfTI images
def plot_slice(fname):
# Load the image
img = nb.load(fname)
data = img.get_data()
# Cut in the middle of the brain
cut = int(data.shape[-1]/2) + 10
# Plot the data
plt.imshow(np.rot90(data[..., cut]), cmap="gray")
plt.gca().set_axis_off()
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
"/output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz",
"/output/working_dir/smoothflow/bet/sub-01_ses-test_T1w_brain_mask.nii.gz",
"/output/working_dir/smoothflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz"]):
f.add_subplot(1, 4, i + 1)
plot_slice(img)
Explanation: and we can plot the results:
End of explanation
# we will type the code here
Explanation: Iterables
Some steps in a neuroimaging analysis are repetitive: running the same preprocessing on multiple subjects, or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin for Workflow, called iterables.
<img src="../static/images/iterables.png" width="240">
Let's assume we have a workflow with two nodes: node (A) does simple skull stripping, and is followed by a node (B) that does isotropic smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm, and 16mm.
let's just modify smooth_node:
End of explanation
smooth_node_it = Node(IsotropicSmooth(in_file=input_file), name="smooth")
smooth_node_it.iterables = ("fwhm", [4, 8, 16])
Explanation: if you're lost the code is here:
End of explanation
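A common related pattern (my addition, not used in this tutorial) is to iterate over subject labels with an IdentityInterface node and connect it to the rest of the workflow:
from nipype.interfaces.utility import IdentityInterface
# 'sub-02' is a hypothetical second subject, only for illustration
infosource = Node(IdentityInterface(fields=['subject_id']), name='infosource')
infosource.iterables = ('subject_id', ['sub-01', 'sub-02'])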
bet_node_it = Node(BET(in_file=input_file, mask=True), name='bet_node')
mask_node_it = Node(ApplyMask(), name="mask")
Explanation: we will define again bet and smooth nodes:
End of explanation
# Initiation of a workflow
wf_it = Workflow(name="smoothflow_it", base_dir="/output/working_dir")
wf_it.connect(bet_node_it, "mask_file", mask_node_it, "mask_file")
wf_it.connect(smooth_node_it, "out_file", mask_node_it, "in_file")
Explanation: will create a new workflow with a new base_dir:
End of explanation
res_it = wf_it.run()
Explanation: let's run the workflow and check the output
End of explanation
list(res_it.nodes)
Explanation: let's see the graph
End of explanation
! tree -L 3 /output/working_dir/smoothflow_it/
Explanation: We can see the file structure that was created:
End of explanation
def square_func(x):
return x ** 2
square = Function(input_names=["x"], output_names=["f_x"], function=square_func)
Explanation: you have now 7 nodes instead of 3!
MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow:
<img src="../static/images/mapnode.png" width="325">
Node A outputs a list of files, but node B accepts only one file. Additionally, C expects a list of files. What you would like is to run B for every file in the output of A and collect the results as a list and feed it to C.
Let's run a simple numerical example using nipype Function interface
End of explanation
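Before the numerical example, here is a hedged sketch of the file-based use case described above (the file names are placeholders, not files from this dataset):
smooth_many = MapNode(IsotropicSmooth(fwhm=4), name="smooth_many", iterfield=["in_file"])
smooth_many.inputs.in_file = ["/data/subA_T1w.nii.gz", "/data/subB_T1w.nii.gz"]
# each input file is smoothed separately and the outputs are collected into a list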
square_node = Node(square, name="square")
square_node.inputs.x = 2
res = square_node.run()
res.outputs
Explanation: If I want to know the results only for one x we can use Node:
End of explanation
# NBVAL_SKIP
square_node = Node(square, name="square")
square_node.inputs.x = [2, 4]
res = square_node.run()
res.outputs
Explanation: let's try to ask for more values of x
End of explanation
square_mapnode = MapNode(square, name="square", iterfield=["x"])
square_mapnode.inputs.x = [2, 4]
res = square_mapnode.run()
res.outputs
Explanation: It will give an error since square_func does not accept a list. But we can try MapNode:
End of explanation |
1,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-7S
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
1,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Suggestions for lab exercises.
Variables and assignment
Exercise 1
Remember that $n! = n \times (n - 1) \times \dots \times 2 \times 1$. Compute $15!$, assigning the result to a sensible variable name.
Solution
Step1: Exercise 2
Using the math module, check your result for $15$ factorial. You should explore the help for the math library and its functions, using eg tab-completion, the spyder inspector, or online sources.
Solution
Step2: Exercise 3
Stirling's approximation gives that, for large enough $n$,
\begin{equation}
n! \simeq \sqrt{2 \pi} n^{n + 1/2} e^{-n}.
\end{equation}
Using functions and constants from the math library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?
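A minimal sketch of one way to set up the comparison (my own illustration, not the model solution):
import math
for n in (5, 10, 15, 20):
    exact = math.factorial(n)
    stirling = math.sqrt(2.0 * math.pi) * n**(n + 0.5) * math.exp(-n)
    print(n, exact, stirling, abs(exact - stirling) / exact)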
Solution
Step4: We see that the relative error decreases, whilst the absolute error grows (significantly).
Basic functions
Exercise 1
Write a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as
$a=1, b=1, c=1$ (result should be $1$);
$a=1, b=2, c=3.5$ (result should be $7.0$);
$a=0, b=1, c=1$ (result should be $0$);
$a=2, b=-1, c=1$ (what do you think the result should be?).
Solution
Step6: In later cases, after having covered exceptions, I would suggest raising a NotImplementedError for negative edge lengths.
Exercise 2
Write a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula
\begin{equation}
h(t) = \frac{1}{2} g t^2.
\end{equation}
Use the value of the acceleration due to gravity $g$ from scipy.constants.g. Test your code on sample values such as
$H = 1$m (result should be $\approx 0.452$s);
$H = 10$m (result should be $\approx 1.428$s);
$H = 0$m (result should be $0$s);
$H = -1$m (what do you think the result should be?).
Solution
Step8: Exercise 3
Write a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula
\begin{equation}
A = \sqrt{s (s - a) (s - b) (s - c)}, \qquad s = \frac{a + b + c}{2}.
\end{equation}
Construct your own test cases to cover a range of possibilities.
Step9: Floating point numbers
Exercise 1
Computers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. For example, if
\begin{equation}
x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}
\end{equation}
then it should be true that
\begin{equation}
10^{14} (y - x) = \sqrt{3}.
\end{equation}
Check how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.
Solution
Step10: We see that the first three digits are correct. This isn't too surprising
Step12: There is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the larger root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).
In the second case we have divided by a very small number to get the big number, which loses accuracy.
Exercise 5
The standard definition of the derivative of a function is
\begin{equation}
\left. \frac{\text{d} f}{\text{d} x} \right|{x=X} = \lim{\delta \to 0} \frac{f(X + \delta) - f(X)}{\delta}.
\end{equation}
We can approximate this by computing the result for a finite value of $\delta$
Step13: Exercise 6
The function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\delta = 10^{-2 n}$ with $n = 1, \dots, 7$. You should see the results initially improve, then get worse. Why is this?
Solution
Step15: We have a combination of floating point inaccuracies
Step16: Exercise 2
500 years ago some believed that the number $2^n - 1$ was prime for all primes $n$. Use your function to find the first prime $n$ for which this is not true.
Solution
We could do this many ways. This "elegant" solution says
Step17: Exercise 3
The Mersenne primes are those that have the form $2^n-1$, where $n$ is prime. Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.
Solution
Step19: Exercise 4
Write a function to compute all prime factors of an integer $n$, including their multiplicities. Test it by printing the prime factors (without multiplicities) of $n = 17, \dots, 20$ and the multiplicities (without factors) of $n = 48$.
Note
One effective solution is to return a dictionary, where the keys are the factors and the values are the multiplicities.
Solution
This solution uses the trick of immediately dividing $n$ by any divisor
Step21: Exercise 5
Write a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \dots, 20$.
Note
You could use the prime factorization from the previous exercise, or you could do it directly.
Solution
Here we will do it directly.
Step23: Exercise 6
A perfect number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).
Solution
We can do this much more efficiently than the code below using packages such as numpy, but this is a "bare python" solution.
Step24: Exercise 7
Using your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.
Solution
In fact we did this above already
Step25: It's worth thinking about the operation counts of the various functions implemented here. The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.
Logistic map
Partly taken from Newman's book, p 120.
The logistic map builds a sequence of numbers ${ x_n }$ using the relation
\begin{equation}
x_{n+1} = r x_n \left( 1 - x_n \right),
\end{equation}
where $0 \le x_0 \le 1$.
Exercise 1
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Solution
Step26: Exercise 2
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$ Plot the last 100 members of the sequence in both cases.
What does this suggest about the long-term behaviour of the sequence?
Solution
Step27: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
Exercise 3
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
Solution
Step28: Exercise 4
For iterative maps such as the logistic map, one of three things can occur
Step29: Exercise 2
Check the points $c=0$ and $c=\pm 2 \pm 2 \text{i}$ and ensure they do what you expect. (What should you expect?)
Solution
Step30: Exercise 3
Write a function that, given $N$
generates an $N \times N$ grid spanning $c = x + \text{i} y$, for $-2 \le x \le 2$ and $-2 \le y \le 2$;
returns an $N\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.
Solution
Step31: Exercise 4
Using the function imshow from matplotlib, plot the resulting array for a $100 \times 100$ array to make sure you see the expected shape.
Solution
Step32: Exercise 5
Modify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.
Solution
Step33: Exercise 6
Try some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!
Solution
This is a simple example
Step34: Equivalence classes
An equivalence class is a relation that groups objects in a set into related subsets. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \sim 10$ to denote two objects within the same equivalence class.
Here, we are going to define the positive integers programmatically from equivalent sequences.
Exercise 1
Define a python class Eqint. This should be
Initialized by a sequence;
Store the sequence;
Define its representation (via the __repr__ function) to be the integer length of the sequence;
Redefine equality (via the __eq__ function) so that two eqints are equal if their sequences have same length.
Solution
Step35: Exercise 2
Define a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example
python
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
Check that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.
Solution
Step36: Exercise 3
Redefine the class by including an __add__ method that combines the two sequences. That is, if a and b are Eqints then a+b should return an Eqint defined from combining a and bs sequences.
Note
Adding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.
Solution
Step37: Exercise 4
Check your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.
Solution
Step38: Exercise 5
We will sketch a construction of the positive integers from nothing.
Define an empty list positive_integers.
Define an Eqint called zero from the empty list. Append it to positive_integers.
Define an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers)). Append it to positive_integers.
Repeat step 3 as often as needed.
Use this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check.
Solution
Step39: Rational numbers
Instead of working with floating point numbers, which are not "exact", we could work with the rational numbers $\mathbb{Q}$. A rational number $q \in \mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).
Exercise 1
Find a python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \frac{3}{2}$, $q = \frac{15}{3}$, and $q = \frac{20}{42}$.
Solution
Step41: Exercise 2
Define a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\frac{n}{d}$ (hint
Step43: Exercise 3
Overload the __add__ function so that you can add two rational numbers. Test it on $\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 1$.
Solution
Step45: Exercise 4
Overload the __mul__ function so that you can multiply two rational numbers. Test it on $\frac{1}{3} \times \frac{15}{2} \times \frac{2}{5} = 1$.
Solution
Step47: Exercise 5
Overload the __rmul__ function so that you can multiply a rational by an integer. Check that $\frac{1}{2} \times 2 = 1$ and $\frac{1}{2} + (-1) \times \frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) so that you can subtract rational numbers and check that $\frac{1}{2} - \frac{1}{2} = 0$.
Solution
Step49: Exercise 6
Overload the __float__ function so that float(q) returns the floating point approximation to the rational number q. Test this on $\frac{1}{2}, \frac{1}{3}$, and $\frac{1}{11}$.
Solution
Step51: Exercise 7
Overload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. Use the sorted function on that list (which relies on the __lt__ function).
Solution
Step53: Exercise 8
The Wallis formula for $\pi$ is
\begin{equation}
\pi = 2 \prod_{n=1}^{\infty} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.
\end{equation}
We can define a partial product $\pi_N$ as
\begin{equation}
\pi_N = 2 \prod_{n=1}^{N} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},
\end{equation}
each of which are rational numbers.
Construct a list of the first 20 rational number approximations to $\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\pi$ to see how accurate they are.
Solution
Step54: The shortest published Mathematical paper
A candidate for the shortest mathematical paper ever shows the following result
Step55: Exercise 2
The more interesting statement in the paper is that
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
[is] the smallest instance in which four fifth powers sum to a fifth power.
Interpreting "the smallest instance" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.
You may find the combinations function from the itertools package useful.
Step56: The combinations function returns all the combinations (ignoring order) of r elements from a given list. For example, take a list of length 6, [1, 2, 3, 4, 5, 6] and compute all the combinations of length 4
Step57: We can already see that the number of terms to consider is large.
Note that we have used the list function to explicitly get a list of the combinations. The combinations function returns a generator, which can be used in a loop as if it were a list, without storing all elements of the list.
How fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are
\begin{equation}
\begin{pmatrix} n \\ k \end{pmatrix} = \frac{n!}{k! (n-k)!}
\end{equation}
combinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have
Step58: Exercise 2a
Show, by getting python to compute the number of combinations $N = \begin{pmatrix} n \\ 4 \end{pmatrix}$ that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \le 50$.
Solution
Step59: With 17 million combinations to work with, we'll need to be a little careful how we compute.
One thing we could try is to loop through each possible "smallest instance" (the term on the right hand side) in increasing order. We then check all possible combinations of left hand sides.
This is computationally very expensive as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.
Instead, let us try creating the list of all combinations of powers once.
Exercise 2b
Construct a numpy array containing all integers in $1, \dots, 144$ to the fifth power.
Construct a list of all combinations of four elements from this array.
Construct a list of sums of all these combinations.
Loop over one list and check if the entry appears in the other list (ie, use the in keyword).
Solution
Step60: Then calculate the sums
Step61: Finally, loop through the sums and check to see if it matches any possible term on the RHS
Step63: Lorenz attractor
The Lorenz system is a set of ordinary differential equations which can be written
\begin{equation}
\frac{\text{d} \vec{v}}{\text{d} t} = \vec{f}(\vec{v})
\end{equation}
where the variables in the state vector $\vec{v}$ are
\begin{equation}
\vec{v} = \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix}
\end{equation}
and the function defining the ODE is
\begin{equation}
\vec{f} = \begin{pmatrix} \sigma \left( y(t) - x(t) \right) \\ x(t) \left( \rho - z(t) \right) - y(t) \\ x(t) y(t) - \beta z(t) \end{pmatrix}.
\end{equation}
The parameters $\sigma, \rho, \beta$ are all real numbers.
Exercise 1
Write a function dvdt(v, t, params) that returns $\vec{f}$ given $\vec{v}, t$ and the parameters $\sigma, \rho, \beta$.
Solution
Step64: Exercise 2
Fix $\sigma=10, \beta=8/3$. Set initial data to be $\vec{v}(0) = \vec{1}$. Using scipy, specifically the odeint function of scipy.integrate, solve the Lorenz system up to $t=100$ for $\rho=13, 14, 15$ and $28$.
Plot your results in 3d, plotting $x, y, z$.
Solution
Step65: Exercise 3
Fix $\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\vec{v}(0) = \vec{1}$ and $\vec{v}(0) = \vec{1} + \vec{10^{-5}}$.
Show four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. Each plot should show $10$ units of time, ie the first shows $t \in [0, 10]$, the second shows $t \in [10, 20]$, and so on.
Solution
Step66: This shows the sensitive dependence on initial conditions that is characteristic of chaotic behaviour.
Systematic ODE solving with sympy
We are interested in the solution of
\begin{equation}
\frac{\text{d} y}{\text{d} t} = e^{-t} - y^n, \qquad y(0) = 1,
\end{equation}
where $n > 1$ is an integer. The "minor" change from the above examples means that sympy can only give the solution as a power series.
Exercise 1
Compute the general solution as a power series for $n = 2$.
Solution
Step67: Exercise 2
Investigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \dots, 10$.
Solution
Step68: Exercise 3
Using the removeO command, plot each of these solutions for $t \in [0, 1]$.
Step70: Twin primes
A twin prime is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.
Exercise 1
Write a generator that returns twin primes. You can use the generators above, and may want to look at the itertools module together with its recipes, particularly the pairwise recipe.
Solution
Note
Step71: Now we can generate pairs using the pairwise recipe
Step73: We could examine the results of the two primes directly. But an efficient solution is to use python's filter function. To do this, first define a function checking if the pair are twin primes
Step75: Then use the filter function to define another generator
Step76: Now check by finding the twin primes with $N<20$
Step78: Exercise 2
Find how many twin primes there are with $p_2 < 1000$.
Solution
Again there are many solutions, but the itertools recipes has the quantify pattern. Looking ahead to exercise 3 we'll define
Step79: Exercise 3
Let $\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \dots 16$. (You should use a logarithmic scale where appropriate!)
Solution
We've now done all the hard work and can use the solutions above.
Step80: For those that have checked Wikipedia, you'll see Brun's theorem which suggests a specific scaling, that $\pi_N$ is bounded by $C N / \log(N)^2$. Checking this numerically on this data
Step83: A basis for the polynomials
In the section on classes we defined a Monomial class to represent a polynomial with leading coefficient $1$. As the $N+1$ monomials $1, x, x^2, \dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\mathbb{P}^N$, we can use the Monomial class to return this basis.
Exercise 1
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^3$.
Solution
Again we first take the definition of the crucial class from the notes.
Step85: Now we can define the first basis
Step86: Then test it on $\mathbb{P}^N$
Step88: This looks horrible, but is correct. To really make this look good, we need to improve the output. If we use
Step89: then we can deal with the uglier cases, and re-running the test we get
Step91: An even better solution would be to use the numpy.unique function as in this stackoverflow answer (the second one!) to get the frequency of all the roots.
Exercise 2
An alternative basis is given by the monomials
\begin{align}
p_0(x) &= 1, \\ p_1(x) &= 1-x, \\ p_2(x) &= (1-x)(2-x), \\ \dots & \quad \dots, \\ p_N(x) &= \prod_{n=1}^N (n-x).
\end{align}
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^4$.
Solution
Step93: I am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!
Exercise 3
Use these generators to write another generator that produces a basis of $\mathbb{P^3} \times \mathbb{P^4}$.
Solution
Hopefully by now you'll be aware of how useful itertools is!
Step95: I've cheated here as I haven't introduced the yield from syntax (which returns an iterator from a generator). We could write this out instead as | Python Code:
fifteen_factorial = 15*14*13*12*11*10*9*8*7*6*5*4*3*2*1
print(fifteen_factorial)
Explanation: Suggestions for lab exercises.
Variables and assignment
Exercise 1
Remember that $n! = n \times (n - 1) \times \dots \times 2 \times 1$. Compute $15!$, assigning the result to a sensible variable name.
Solution
End of explanation
import math
print(math.factorial(15))
print("Result correct?", math.factorial(15) == fifteen_factorial)
Explanation: Exercise 2
Using the math module, check your result for $15$ factorial. You should explore the help for the math library and its functions, using eg tab-completion, the spyder inspector, or online sources.
Solution
End of explanation
print(math.factorial(5), math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))
print(math.factorial(10), math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))
print(math.factorial(15), math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))
print(math.factorial(20), math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))
print("Absolute differences:")
print(math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))
print(math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))
print(math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))
print(math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))
print("Relative differences:")
print((math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5)) / math.factorial(5))
print((math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10)) / math.factorial(10))
print((math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15)) / math.factorial(15))
print((math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20)) / math.factorial(20))
Explanation: Exercise 3
Stirling's approximation gives that, for large enough $n$,
\begin{equation}
n! \simeq \sqrt{2 \pi} n^{n + 1/2} e^{-n}.
\end{equation}
Using functions and constants from the math library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?
Solution
End of explanation
def cuboid_volume(a, b, c):
Compute the volume of a cuboid with edge lengths a, b, c.
Volume is abc. Only makes sense if all are non-negative.
Parameters
----------
a : float
Edge length 1
b : float
Edge length 2
c : float
Edge length 3
Returns
-------
volume : float
The volume a*b*c
if (a < 0.0) or (b < 0.0) or (c < 0.0):
print("Negative edge length makes no sense!")
return 0
return a*b*c
print(cuboid_volume(1,1,1))
print(cuboid_volume(1,2,3.5))
print(cuboid_volume(0,1,1))
print(cuboid_volume(2,-1,1))
Explanation: We see that the relative error decreases, whilst the absolute error grows (significantly).
Basic functions
Exercise 1
Write a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as
$a=1, b=1, c=1$ (result should be $1$);
$a=1, b=2, c=3.5$ (result should be $7.0$);
$a=0, b=1, c=1$ (result should be $0$);
$a=2, b=-1, c=1$ (what do you think the result should be?).
Solution
End of explanation
def fall_time(H):
Give the time in seconds for an object to fall to the ground
from H metres.
Parameters
----------
H : float
Starting height (metres)
Returns
-------
T : float
Fall time (seconds)
from math import sqrt
from scipy.constants import g
if (H < 0):
print("Negative height makes no sense!")
return 0
return sqrt(2.0*H/g)
print(fall_time(1))
print(fall_time(10))
print(fall_time(0))
print(fall_time(-1))
Explanation: In later cases, after having covered exceptions, I would suggest raising a NotImplementedError for negative edge lengths.
Exercise 2
Write a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula
\begin{equation}
h(t) = \frac{1}{2} g t^2.
\end{equation}
Use the value of the acceleration due to gravity $g$ from scipy.constants.g. Test your code on sample values such as
$H = 1$m (result should be $\approx 0.452$s);
$H = 10$m (result should be $\approx 1.428$s);
$H = 0$m (result should be $0$s);
$H = -1$m (what do you think the result should be?).
Solution
End of explanation
def triangle_area(a, b, c):
Compute the area of a triangle with edge lengths a, b, c.
Area is sqrt(s (s-a) (s-b) (s-c)).
s is (a+b+c)/2.
Only makes sense if all are non-negative.
Parameters
----------
a : float
Edge length 1
b : float
Edge length 2
c : float
Edge length 3
Returns
-------
area : float
The triangle area.
from math import sqrt
if (a < 0.0) or (b < 0.0) or (c < 0.0):
print("Negative edge length makes no sense!")
return 0
s = 0.5 * (a + b + c)
return sqrt(s * (s-a) * (s-b) * (s-c))
print(triangle_area(1,1,1)) # Equilateral; answer sqrt(3)/4 ~ 0.433
print(triangle_area(3,4,5)) # Right triangle; answer 6
print(triangle_area(1,1,0)) # Not a triangle; answer 0
print(triangle_area(-1,1,1)) # Not a triangle; exception or 0.
Explanation: Exercise 3
Write a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula
\begin{equation}
A = \sqrt{s (s - a) (s - b) (s - c)}, \qquad s = \frac{a + b + c}{2}.
\end{equation}
Construct your own test cases to cover a range of possibilities.
End of explanation
from math import sqrt
x = 1.0
y = 1.0 + 1e-14 * sqrt(3.0)
print("The calculation gives {}".format(1e14*(y-x)))
print("The result should be {}".format(sqrt(3.0)))
Explanation: Floating point numbers
Exercise 1
Computers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. For example, if
\begin{equation}
x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}
\end{equation}
then it should be true that
\begin{equation}
10^{14} (y - x) = \sqrt{3}.
\end{equation}
Check how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.
Solution
End of explanation
a = 1e-3
b = 1e3
c = a
formula1_n3_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula1_n3_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula2_n3_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))
formula2_n3_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))
print("For n=3, first formula, solutions are {} and {}.".format(formula1_n3_plus,
formula1_n3_minus))
print("For n=3, second formula, solutions are {} and {}.".format(formula2_n3_plus,
formula2_n3_minus))
a = 1e-4
b = 1e4
c = a
formula1_n4_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula1_n4_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula2_n4_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))
formula2_n4_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))
print("For n=4, first formula, solutions are {} and {}.".format(formula1_n4_plus,
formula1_n4_minus))
print("For n=4, second formula, solutions are {} and {}.".format(formula2_n4_plus,
formula2_n4_minus))
Explanation: We see that the first three digits are correct. This isn't too surprising: we expect 16 digits of accuracy for a floating point number, but $x$ and $y$ are identical for the first 14 digits.
Exercise 2
The standard quadratic formula gives the solutions to
\begin{equation}
a x^2 + b x + c = 0
\end{equation}
as
\begin{equation}
x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}.
\end{equation}
Show that, if $a = 10^{-n} = c$ and $b = 10^n$ then
\begin{equation}
x = \frac{10^{2 n}}{2} \left( -1 \pm \sqrt{1 - 4 \times 10^{-4n}} \right).
\end{equation}
Using the expansion (from Taylor's theorem)
\begin{equation}
\sqrt{1 - 4 \times 10^{-4 n}} \simeq 1 - 2 \times 10^{-4 n} + \dots, \qquad n \gg 1,
\end{equation}
show that
\begin{equation}
x \simeq -10^{2 n} + 10^{-2 n} \quad \text{and} \quad -10^{-2n}, \qquad n \gg 1.
\end{equation}
Solution
This is pen-and-paper work; each step should be re-arranging.
Exercise 3
By multiplying and dividing by $-b \mp \sqrt{b^2 - 4 a c}$, check that we can also write the solutions to the quadratic equation as
\begin{equation}
x = \frac{2 c}{-b \mp \sqrt{b^2 - 4 a c}}.
\end{equation}
Solution
Using the difference of two squares we get
\begin{equation}
x = \frac{b^2 - \left( b^2 - 4 a c \right)}{2a \left( -b \mp \sqrt{b^2 - 4 a c} \right)}
\end{equation}
which re-arranges to give the required solution.
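(As a quick cross-check, not part of the original solution, sympy can verify the equivalence of the two formulas symbolically — a minimal sketch, assuming sympy is available:)
import sympy
a, b, c = sympy.symbols('a, b, c', positive=True)
s = sympy.sqrt(b**2 - 4*a*c)
# The '+' root of the standard formula should equal the '-' branch of the alternative form.
difference = (-b + s)/(2*a) - 2*c/(-b - s)
print(sympy.simplify(difference))  # expect 0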
Exercise 4
Using Python, calculate both solutions to the quadratic equation
\begin{equation}
10^{-n} x^2 + 10^n x + 10^{-n} = 0
\end{equation}
for $n = 3$ and $n = 4$ using both formulas. What do you see? How has floating point accuracy caused problems here?
Solution
End of explanation
def g(f, X, delta):
Approximate the derivative of a given function at a point.
Parameters
----------
f : function
Function to be differentiated
X : real
Point at which the derivative is evaluated
delta : real
Step length
Returns
-------
g : real
Approximation to the derivative
return (f(X+delta) - f(X)) / delta
Explanation: There is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the larger root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).
In the second case we have divided by a very small number to get the big number, which loses accuracy.
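(A common numerically stable recipe, sketched here as an aside rather than as part of the original solution, is to compute the root that avoids cancellation from the standard formula and then recover the other root from the product of the roots:)
from math import sqrt

def stable_roots(a, b, c):
    # Avoid cancellation: choose the sign so that b and the square root do not nearly cancel.
    if b >= 0:
        q = -0.5 * (b + sqrt(b**2 - 4.0*a*c))
    else:
        q = -0.5 * (b - sqrt(b**2 - 4.0*a*c))
    return q / a, c / q

print(stable_roots(1e-3, 1e3, 1e-3))
print(stable_roots(1e-4, 1e4, 1e-4))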
Exercise 5
The standard definition of the derivative of a function is
\begin{equation}
\left. \frac{\text{d} f}{\text{d} x} \right|_{x=X} = \lim_{\delta \to 0} \frac{f(X + \delta) - f(X)}{\delta}.
\end{equation}
We can approximate this by computing the result for a finite value of $\delta$:
\begin{equation}
g(x, \delta) = \frac{f(x + \delta) - f(x)}{\delta}.
\end{equation}
Write a function that takes as inputs a function of one variable, $f(x)$, a location $X$, and a step length $\delta$, and returns the approximation to the derivative given by $g$.
Solution
End of explanation
from math import exp
for n in range(1, 8):
print("For n={}, the approx derivative is {}.".format(n, g(exp, 0.0, 10**(-2.0*n))))
Explanation: Exercise 6
The function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\delta = 10^{-2 n}$ with $n = 1, \dots, 7$. You should see the results initially improve, then get worse. Why is this?
Solution
End of explanation
def isprime(n):
Checks to see if an integer is prime.
Parameters
----------
n : integer
Number to check
Returns
-------
isprime : Boolean
If n is prime
# No number less than 2 can be prime
if n < 2:
return False
# We only need to check for divisors up to sqrt(n)
for m in range(2, int(n**0.5)+1):
if n%m == 0:
return False
# If we've got this far, there are no divisors.
return True
for n in range(50):
if isprime(n):
print("Function says that {} is prime.".format(n))
Explanation: We have a combination of floating point inaccuracies: in the numerator we have two terms that are nearly equal, leading to a very small number. We then divide two very small numbers. This is inherently inaccurate.
This does not mean that you can't calculate derivatives to high accuracy, but alternative approaches are definitely recommended.
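(One standard alternative, sketched here and not required by the exercise, is the central difference, whose error is of order $\delta^2$ and so reaches much better accuracy before round-off dominates:)
from math import exp

def g_central(f, X, delta):
    # Central difference approximation to the derivative of f at X.
    return (f(X + delta) - f(X - delta)) / (2.0 * delta)

for n in range(1, 8):
    print("For n={}, the central difference approximation is {}.".format(
        n, g_central(exp, 0.0, 10**(-2.0*n))))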
Prime numbers
Exercise 1
Write a function that tests if a number is prime. Test it by writing out all prime numbers less than 50.
Solution
This is a "simple" solution, but not efficient.
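(If efficiency mattered, a sieve would be the usual choice; a minimal sketch, not needed for this exercise:)
def primes_up_to(N):
    # Sieve of Eratosthenes: repeatedly cross off multiples of each prime.
    is_prime = [True] * (N + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for multiple in range(p*p, N + 1, p):
                is_prime[multiple] = False
    return [m for m, flag in enumerate(is_prime) if flag]

print(primes_up_to(50))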
End of explanation
n = 2
while (not isprime(n)) or (isprime(2**n-1)):
n += 1
print("The first n such that 2^n-1 is not prime is {}.".format(n))
Explanation: Exercise 2
500 years ago some believed that the number $2^n - 1$ was prime for all primes $n$. Use your function to find the first prime $n$ for which this is not true.
Solution
We could do this many ways. This "elegant" solution says:
Start from the smallest possible $n$ (2).
Check if $n$ is prime. If not, add one to $n$.
If $n$ is prime, check if $2^n-1$ is prime. If it is, add one to $n$.
If both those logical checks fail, we have found the $n$ we want.
End of explanation
for n in range(2, 41):
if isprime(n) and isprime(2**n-1):
print("n={} is such that 2^n-1 is prime.".format(n))
Explanation: Exercise 3
The Mersenne primes are those that have the form $2^n-1$, where $n$ is prime. Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.
Solution
End of explanation
def prime_factors(n):
Generate all the prime factors of n.
Parameters
----------
n : integer
Number to be checked
Returns
-------
factors : dict
Prime factors (keys) and multiplicities (values)
factors = {}
m = 2
while m <= n:
if n%m == 0:
factors[m] = 1
n //= m
while n%m == 0:
factors[m] += 1
n //= m
m += 1
return factors
for n in range(17, 21):
print("Prime factors of {} are {}.".format(n, prime_factors(n).keys()))
print("Multiplicities of prime factors of 48 are {}.".format(prime_factors(48).values()))
Explanation: Exercise 4
Write a function to compute all prime factors of an integer $n$, including their multiplicities. Test it by printing the prime factors (without multiplicities) of $n = 17, \dots, 20$ and the multiplicities (without factors) of $n = 48$.
Note
One effective solution is to return a dictionary, where the keys are the factors and the values are the multiplicities.
Solution
This solution uses the trick of immediately dividing $n$ by any divisor: this means we never have to check the divisor for being prime.
End of explanation
def divisors(n):
Generate all integer divisors of n.
Parameters
----------
n : integer
Number to be checked
Returns
-------
divs : list
All integer divisors, including 1.
divs = [1]
m = 2
while m <= n/2:
if n%m == 0:
divs.append(m)
m += 1
return divs
for n in range(16, 21):
print("The divisors of {} are {}.".format(n, divisors(n)))
Explanation: Exercise 5
Write a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \dots, 20$.
Note
You could use the prime factorization from the previous exercise, or you could do it directly.
Solution
Here we will do it directly.
End of explanation
def isperfect(n):
Check if a number is perfect.
Parameters
----------
n : integer
Number to check
Returns
-------
isperfect : Boolean
Whether it is perfect or not.
divs = divisors(n)
sum_divs = 0
for d in divs:
sum_divs += d
return n == sum_divs
for n in range(2,10000):
if (isperfect(n)):
factors = prime_factors(n)
print("{} is perfect.\n"
"Divisors are {}.\n"
"Prime factors {} (multiplicities {}).".format(
n, divisors(n), factors.keys(), factors.values()))
Explanation: Exercise 6
A perfect number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).
Solution
We can do this much more efficiently than the code below using packages such as numpy, but this is a "bare python" solution.
End of explanation
%timeit isperfect(2**(3-1)*(2**3-1))
%timeit isperfect(2**(5-1)*(2**5-1))
%timeit isperfect(2**(7-1)*(2**7-1))
%timeit isperfect(2**(13-1)*(2**13-1))
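(%timeit is IPython magic; outside a notebook the standard timeit module does the same job — a sketch, assuming isperfect is defined as above:)
import timeit
# Time ten calls of the k=7 case using the timeit module directly.
t = timeit.timeit(lambda: isperfect(2**(7-1)*(2**7-1)), number=10)
print("10 calls took {:.3f} seconds.".format(t))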
Explanation: Exercise 7
Using your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.
Solution
In fact we did this above already:
$6 = 2^{2-1} \times (2^2 - 1)$. 2 is the first number on our Mersenne list.
$28 = 2^{3-1} \times (2^3 - 1)$. 3 is the second number on our Mersenne list.
$496 = 2^{5-1} \times (2^5 - 1)$. 5 is the third number on our Mersenne list.
$8128 = 2^{7-1} \times (2^7 - 1)$. 7 is the fourth number on our Mersenne list.
Exercise 8 (bonus)
Investigate the timeit function in python or IPython. Use this to measure how long your function takes to check that, if $k$ on the Mersenne list then $n = 2^{k-1} \times (2^k - 1)$ is a perfect number, using your functions. Stop increasing $k$ when the time takes too long!
Note
You could waste considerable time on this, and on optimizing the functions above to work efficiently. It is not worth it, other than to show how rapidly the computation time can grow!
Solution
End of explanation
def logistic(x0, r, N = 1000):
sequence = [x0]
xn = x0
for n in range(N):
xnew = r*xn*(1.0-xn)
sequence.append(xnew)
xn = xnew
return sequence
Explanation: It's worth thinking about the operation counts of the various functions implemented here. The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.
Logistic map
Partly taken from Newman's book, p 120.
The logistic map builds a sequence of numbers $\{ x_n \}$ using the relation
\begin{equation}
x_{n+1} = r x_n \left( 1 - x_n \right),
\end{equation}
where $0 \le x_0 \le 1$.
Exercise 1
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Solution
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
x0 = 0.5
N = 2000
sequence1 = logistic(x0, 1.5, N)
sequence2 = logistic(x0, 3.5, N)
pyplot.plot(sequence1[-100:], 'b-', label = r'$r=1.5$')
pyplot.plot(sequence2[-100:], 'k-', label = r'$r=3.5$')
pyplot.xlabel(r'$n$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: Exercise 2
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases.
What does this suggest about the long-term behaviour of the sequence?
Solution
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
r_values = numpy.linspace(1.0, 4.0, 401)
x0 = 0.5
N = 2000
for r in r_values:
sequence = logistic(x0, r, N)
pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.')
pyplot.xlabel(r'$r$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
Exercise 3
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
Solution
End of explanation
def in_Mandelbrot(c, n_iterations = 100):
z0 = 0.0 + 0j
in_set = True
n = 0
zn = z0
while in_set and (n < n_iterations):
n += 1
znew = zn**2 + c
in_set = abs(znew) < 2.0
zn = znew
return in_set
Explanation: Exercise 4
For iterative maps such as the logistic map, one of three things can occur:
The sequence settles down to a fixed point.
The sequence rotates through a finite number of values. This is called a limit cycle.
The sequence generates an infinite number of values. This is called deterministic chaos.
Using just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.
Solution
The first transition is at $r \approx 3$, the next at $r \approx 3.45$, the next at $r \approx 3.55$. The transition to chaos appears to happen before $r=4$, but it's not obvious exactly where.
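(To check these estimates one could zoom in on the bifurcation diagram with a finer grid of $r$ values — a sketch reusing the logistic function defined above:)
import numpy
from matplotlib import pyplot

r_zoom = numpy.linspace(2.9, 3.6, 141)
for r in r_zoom:
    sequence = logistic(0.5, r, 2000)
    pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.', markersize=1)
pyplot.xlabel(r'$r$')
pyplot.ylabel(r'$x$')
pyplot.show()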
Mandelbrot
The Mandelbrot set is also generated from a sequence, $\{ z_n \}$, using the relation
\begin{equation}
z_{n+1} = z_n^2 + c, \qquad z_0 = 0.
\end{equation}
The members of the sequence, and the constant $c$, are all complex. The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. In reality, checking the first 100 iterations is sufficient.
Note: the python notation for a complex number $x + \text{i} y$ is x + yj: that is, j is used to indicate $\sqrt{-1}$. If you know the values of x and y then x + yj constructs a complex number; if they are stored in variables you can use complex(x, y).
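(A small illustration of the two forms:)
c1 = 1.5 - 0.5j           # literal form
x, y = 1.5, -0.5
c2 = complex(x, y)        # built from variables
print(c1 == c2, abs(c1))  # abs() gives the modulus |c|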
Exercise 1
Write a function that checks if the point $c$ is in the Mandelbrot set.
Solution
End of explanation
c_values = [0.0, 2+2j, 2-2j, -2+2j, -2-2j]
for c in c_values:
print("Is {} in the Mandelbrot set? {}.".format(c, in_Mandelbrot(c)))
Explanation: Exercise 2
Check the points $c=0$ and $c=\pm 2 \pm 2 \text{i}$ and ensure they do what you expect. (What should you expect?)
Solution
End of explanation
import numpy
def grid_Mandelbrot(N):
x = numpy.linspace(-2.0, 2.0, N)
X, Y = numpy.meshgrid(x, x)
C = X + 1j*Y
grid = numpy.zeros((N, N), int)
for nx in range(N):
for ny in range(N):
grid[nx, ny] = int(in_Mandelbrot(C[nx, ny]))
return grid
Explanation: Exercise 3
Write a function that, given $N$
generates an $N \times N$ grid spanning $c = x + \text{i} y$, for $-2 \le x \le 2$ and $-2 \le y \le 2$;
returns an $N\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.
Solution
End of explanation
from matplotlib import pyplot
%matplotlib inline
pyplot.imshow(grid_Mandelbrot(100))
Explanation: Exercise 4
Using the function imshow from matplotlib, plot the resulting array for a $100 \times 100$ array to make sure you see the expected shape.
Solution
End of explanation
from math import log
def log_Mandelbrot(c, n_iterations = 100):
z0 = 0.0 + 0j
in_set = True
n = 0
zn = z0
while in_set and (n < n_iterations):
n += 1
znew = zn**2 + c
in_set = abs(znew) < 2.0
zn = znew
return log(n)
def log_grid_Mandelbrot(N):
x = numpy.linspace(-2.0, 2.0, N)
X, Y = numpy.meshgrid(x, x)
C = X + 1j*Y
    grid = numpy.zeros((N, N))  # float array, so the log values are not truncated to integers
for nx in range(N):
for ny in range(N):
grid[nx, ny] = log_Mandelbrot(C[nx, ny])
return grid
from matplotlib import pyplot
%matplotlib inline
pyplot.imshow(log_grid_Mandelbrot(100))
Explanation: Exercise 5
Modify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.
Solution
End of explanation
pyplot.imshow(log_grid_Mandelbrot(1000)[600:800,400:600])
Explanation: Exercise 6
Try some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!
Solution
This is a simple example:
End of explanation
class Eqint(object):
def __init__(self, sequence):
self.sequence = sequence
def __repr__(self):
return str(len(self.sequence))
def __eq__(self, other):
return len(self.sequence)==len(other.sequence)
Explanation: Equivalence classes
An equivalence class is a relation that groups objects in a set into related subsets. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \sim 10$ to denote two objects within the same equivalence class.
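(For concreteness, a one-line check of the modulo-$7$ example:)
print([m % 7 for m in (1, 8, 15, 22)])  # all in the same class: remainder 1
print(3 % 7 == 10 % 7)                  # 3 ~ 10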
Here, we are going to define the positive integers programmatically from equivalent sequences.
Exercise 1
Define a python class Eqint. This should be
Initialized by a sequence;
Store the sequence;
Define its representation (via the __repr__ function) to be the integer length of the sequence;
Redefine equality (via the __eq__ function) so that two eqints are equal if their sequences have same length.
Solution
End of explanation
zero = Eqint([])
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
print("Is zero equivalent to one? {}, {}, {}".format(zero == one_list,
zero == one_tuple,
zero == one_string))
print("Is one equivalent to one? {}, {}, {}.".format(one_list == one_tuple,
one_list == one_string,
one_tuple == one_string))
print(zero)
print(one_list)
print(one_tuple)
print(one_string)
Explanation: Exercise 2
Define a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
Check that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.
Solution
End of explanation
class Eqint(object):
def __init__(self, sequence):
self.sequence = sequence
def __repr__(self):
return str(len(self.sequence))
def __eq__(self, other):
return len(self.sequence)==len(other.sequence)
def __add__(a, b):
return Eqint(tuple(a.sequence) + tuple(b.sequence))
Explanation: Exercise 3
Redefine the class by including an __add__ method that combines the two sequences. That is, if a and b are Eqints then a+b should return an Eqint defined from combining a and bs sequences.
Note
Adding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.
Solution
End of explanation
zero = Eqint([])
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
sum_eqint = zero + one_list + one_tuple + one_string
print("The sum is {}.".format(sum_eqint))
print("The internal sequence is {}.".format(sum_eqint.sequence))
Explanation: Exercise 4
Check your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.
Solution
End of explanation
positive_integers = []
zero = Eqint([])
positive_integers.append(zero)
N = 10
for n in range(1,N+1):
positive_integers.append(Eqint(list(positive_integers)))
print("The 'final' Eqint is {}".format(positive_integers[-1]))
print("Its sequence is {}".format(positive_integers[-1].sequence))
print("That is, it contains all Eqints with length less than 10.")
Explanation: Exercise 5
We will sketch a construction of the positive integers from nothing.
1. Define an empty list positive_integers.
2. Define an Eqint called zero from the empty list. Append it to positive_integers.
3. Define an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers))). Append it to positive_integers.
4. Repeat step 3 as often as needed.
Use this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check.
Solution
End of explanation
def normal_form(numerator, denominator):
    from math import gcd  # fractions.gcd was removed in Python 3.9; math.gcd does the same job
factor = gcd(numerator, denominator)
return numerator//factor, denominator//factor
print(normal_form(3, 2))
print(normal_form(15, 3))
print(normal_form(20, 42))
Explanation: Rational numbers
Instead of working with floating point numbers, which are not "exact", we could work with the rational numbers $\mathbb{Q}$. A rational number $q \in \mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).
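(As an aside, the standard library already provides this behaviour: fractions.Fraction stores numbers in exactly this coprime normal form, e.g.)
from fractions import Fraction
print(Fraction(20, 42))  # stored as 10/21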
Exercise 1
Find a python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \frac{3}{2}$, $q = \frac{15}{3}$, and $q = \frac{20}{42}$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
q1 = Rational(3, 2)
print(q1)
q2 = Rational(15, 3)
print(q2)
q3 = Rational(20, 42)
print(q3)
Explanation: Exercise 2
Define a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\frac{n}{d}$ (hint: use len(str(number)) to find the number of digits of an integer). Test it on the cases above.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(Rational(1,2) + Rational(1,3) + Rational(1,6))
Explanation: Exercise 3
Overload the __add__ function so that you can add two rational numbers. Test it on $\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 1$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(Rational(1,3)*Rational(15,2)*Rational(2,5))
Explanation: Exercise 4
Overload the __mul__ function so that you can multiply two rational numbers. Test it on $\frac{1}{3} \times \frac{15}{2} \times \frac{2}{5} = 1$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
half = Rational(1,2)
print(2*half)
print(half+(-1)*half)
print(half-half)
Explanation: Exercise 5
Overload the __rmul__ function so that you can multiply a rational by an integer. Check that $\frac{1}{2} \times 2 = 1$ and $\frac{1}{2} + (-1) \times \frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) so that you can subtract rational numbers and check that $\frac{1}{2} - \frac{1}{2} = 0$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __float__(a):
return float(a.numerator) / float(a.denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(float(Rational(1,2)))
print(float(Rational(1,3)))
print(float(Rational(1,11)))
Explanation: Exercise 6
Overload the __float__ function so that float(q) returns the floating point approximation to the rational number q. Test this on $\frac{1}{2}, \frac{1}{3}$, and $\frac{1}{11}$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __float__(a):
return float(a.numerator) / float(a.denominator)
def __lt__(a, b):
return a.numerator * b.denominator < a.denominator * b.numerator
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = '\n'+str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
q_list = [Rational(n//2, n) for n in range(2, 12)]
print(sorted(q_list))
Explanation: Exercise 7
Overload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. Use the sorted function on that list (which relies on the __lt__ function).
Solution
End of explanation
def wallis_rational(N):
The partial product approximation to pi using the first N terms of Wallis' formula.
Parameters
----------
N : int
Number of terms in product
Returns
-------
partial : Rational
A rational number approximation to pi
partial = Rational(2,1)
for n in range(1, N+1):
partial = partial * Rational((2*n)**2, (2*n-1)*(2*n+1))
return partial
pi_list = [wallis_rational(n) for n in range(1, 21)]
print(pi_list)
print(sorted(pi_list))
import numpy
print(numpy.pi-numpy.array(list(map(float, pi_list))))
Explanation: Exercise 8
The Wallis formula for $\pi$ is
\begin{equation}
\pi = 2 \prod_{n=1}^{\infty} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.
\end{equation}
We can define a partial product $\pi_N$ as
\begin{equation}
\pi_N = 2 \prod_{n=1}^{N} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},
\end{equation}
each of which are rational numbers.
Construct a list of the first 20 rational number approximations to $\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\pi$ to see how accurate they are.
Solution
End of explanation
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print("Does the LHS {} equal the RHS {}? {}".format(lhs, rhs, lhs==rhs))
Explanation: The shortest published Mathematical paper
A candidate for the shortest mathematical paper ever shows the following result:
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
This is interesting as
This is a counterexample to a conjecture by Euler ... that at least $n$ $n$th powers are required to sum to an $n$th power, $n > 2$.
Exercise 1
Using python, check the equation above is true.
Solution
End of explanation
import numpy
import itertools
Explanation: Exercise 2
The more interesting statement in the paper is that
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
[is] the smallest instance in which four fifth powers sum to a fifth power.
Interpreting "the smallest instance" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.
You may find the combinations function from the itertools package useful.
End of explanation
input_list = numpy.arange(1, 7)
combinations = list(itertools.combinations(input_list, 4))
print(combinations)
Explanation: The combinations function returns all the combinations (ignoring order) of r elements from a given list. For example, take a list of length 6, [1, 2, 3, 4, 5, 6] and compute all the combinations of length 4:
End of explanation
n_combinations = 144*143*142*141//24  # integer division: the count is an exact integer
print("Number of combinations of 4 objects from 144 is {}".format(n_combinations))
Explanation: We can already see that the number of terms to consider is large.
Note that we have used the list function to explicitly get a list of the combinations. The combinations function returns a generator, which can be used in a loop as if it were a list, without storing all elements of the list.
How fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are
\begin{equation}
\begin{pmatrix} n \\ k \end{pmatrix} = \frac{n!}{k! (n-k)!}
\end{equation}
combinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have
End of explanation
from matplotlib import pyplot
%matplotlib inline
n = numpy.arange(5, 51)
N = numpy.zeros_like(n)
for i, n_c in enumerate(n):
combinations = list(itertools.combinations(numpy.arange(1,n_c+1), 4))
N[i] = len(combinations)
pyplot.figure(figsize=(12,6))
pyplot.loglog(n, N, linestyle='None', marker='x', color='k', label='Combinations')
pyplot.loglog(n, n**4, color='b', label=r'$n^4$')
pyplot.xlabel(r'$n$')
pyplot.ylabel(r'$N$')
pyplot.legend(loc='upper left')
pyplot.show()
Explanation: Exercise 2a
Show, by getting python to compute the number of combinations $N = \begin{pmatrix} n \\ 4 \end{pmatrix}$ that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \le 50$.
Solution
End of explanation
nmax=145
range_to_power = numpy.arange(1, nmax)**5
lhs_combinations = list(itertools.combinations(range_to_power, 4))
Explanation: With 17 million combinations to work with, we'll need to be a little careful how we compute.
One thing we could try is to loop through each possible "smallest instance" (the term on the right hand side) in increasing order. We then check all possible combinations of left hand sides.
This is computationally very expensive as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.
Instead, let us try creating the list of all combinations of powers once.
Exercise 2b
Construct a numpy array containing all integers in $1, \dots, 144$ to the fifth power.
Construct a list of all combinations of four elements from this array.
Construct a list of sums of all these combinations.
Loop over one list and check if the entry appears in the other list (ie, use the in keyword).
Solution
End of explanation
lhs_sums = []
for lhs_terms in lhs_combinations:
lhs_sums.append(numpy.sum(numpy.array(lhs_terms)))
Explanation: Then calculate the sums:
End of explanation
for i, lhs in enumerate(lhs_sums):
if lhs in range_to_power:
rhs_primitive = int(lhs**(0.2))
lhs_primitive = (numpy.array(lhs_combinations[i])**(0.2)).astype(int)
print("The LHS terms are {}.".format(lhs_primitive))
print("The RHS term is {}.".format(rhs_primitive))
Explanation: Finally, loop through the sums and check to see if it matches any possible term on the RHS:
End of explanation
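(An aside on performance, not part of the original solution: membership tests against a Python set are O(1) on average, so the same check could be written as below, assuming range_to_power and lhs_sums from the cells above:)
fifth_powers = set(int(p) for p in range_to_power)
for i, lhs in enumerate(lhs_sums):
    if int(lhs) in fifth_powers:
        print("Combination {} sums to a fifth power.".format(i))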
def dvdt(v, t, sigma, rho, beta):
Define the Lorenz system.
Parameters
----------
v : list
State vector
t : float
Time
sigma : float
Parameter
rho : float
Parameter
beta : float
Parameter
Returns
-------
dvdt : list
RHS defining the Lorenz system
x, y, z = v
return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]
Explanation: Lorenz attractor
The Lorenz system is a set of ordinary differential equations which can be written
\begin{equation}
\frac{\text{d} \vec{v}}{\text{d} t} = \vec{f}(\vec{v})
\end{equation}
where the variables in the state vector $\vec{v}$ are
\begin{equation}
\vec{v} = \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix}
\end{equation}
and the function defining the ODE is
\begin{equation}
\vec{f} = \begin{pmatrix} \sigma \left( y(t) - x(t) \right) \\ x(t) \left( \rho - z(t) \right) - y(t) \\ x(t) y(t) - \beta z(t) \end{pmatrix}.
\end{equation}
The parameters $\sigma, \rho, \beta$ are all real numbers.
Exercise 1
Write a function dvdt(v, t, params) that returns $\vec{f}$ given $\vec{v}, t$ and the parameters $\sigma, \rho, \beta$.
Solution
End of explanation
import numpy
from scipy.integrate import odeint
v0 = [1.0, 1.0, 1.0]
sigma = 10.0
beta = 8.0/3.0
t_values = numpy.linspace(0.0, 100.0, 5000)
rho_values = [13.0, 14.0, 15.0, 28.0]
v_values = []
for rho in rho_values:
params = (sigma, rho, beta)
v = odeint(dvdt, v0, t_values, args=params)
v_values.append(v)
%matplotlib inline
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = pyplot.figure(figsize=(12,6))
for i, v in enumerate(v_values):
ax = fig.add_subplot(2,2,i+1,projection='3d')
ax.plot(v[:,0], v[:,1], v[:,2])
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
ax.set_title(r"$\rho={}$".format(rho_values[i]))
pyplot.show()
Explanation: Exercise 2
Fix $\sigma=10, \beta=8/3$. Set initial data to be $\vec{v}(0) = \vec{1}$. Using scipy, specifically the odeint function of scipy.integrate, solve the Lorenz system up to $t=100$ for $\rho=13, 14, 15$ and $28$.
Plot your results in 3d, plotting $x, y, z$.
Solution
End of explanation
t_values = numpy.linspace(0.0, 40.0, 4000)
rho = 28.0
params = (sigma, rho, beta)
v_values = []
v0_values = [[1.0,1.0,1.0],
[1.0+1e-5,1.0+1e-5,1.0+1e-5]]
for v0 in v0_values:
v = odeint(dvdt, v0, t_values, args=params)
v_values.append(v)
fig = pyplot.figure(figsize=(12,6))
line_colours = 'by'
for tstart in range(4):
ax = fig.add_subplot(2,2,tstart+1,projection='3d')
for i, v in enumerate(v_values):
ax.plot(v[tstart*1000:(tstart+1)*1000,0],
v[tstart*1000:(tstart+1)*1000,1],
v[tstart*1000:(tstart+1)*1000,2],
color=line_colours[i])
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
ax.set_title(r"$t \in [{},{}]$".format(tstart*10, (tstart+1)*10))
pyplot.show()
Explanation: Exercise 3
Fix $\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\vec{v}(0) = \vec{1}$ and $\vec{v}(0) = \vec{1} + \vec{10^{-5}}$.
Show four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. Each plot should show $10$ units of time, ie the first shows $t \in [0, 10]$, the second shows $t \in [10, 20]$, and so on.
Solution
End of explanation
import sympy
sympy.init_printing()
y, t = sympy.symbols('y, t')
sympy.dsolve(sympy.diff(y(t), t) + y(t)**2 - sympy.exp(-t), y(t))
Explanation: This shows the sensitive dependence on initial conditions that is characteristic of chaotic behaviour.
Systematic ODE solving with sympy
We are interested in the solution of
\begin{equation}
\frac{\text{d} y}{\text{d} t} = e^{-t} - y^n, \qquad y(0) = 1,
\end{equation}
where $n > 1$ is an integer. The "minor" change from the above examples means that sympy can only give the solution as a power series.
Exercise 1
Compute the general solution as a power series for $n = 2$.
Solution
End of explanation
for n in range(2, 11):
ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t),
ics = {y(0) : 1})
print(ode_solution)
Explanation: Exercise 2
Investigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \dots, 10$.
Solution
End of explanation
%matplotlib inline
for n in range(2, 11):
ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t),
ics = {y(0) : 1})
sympy.plot(ode_solution.rhs.removeO(), (t, 0, 1));
Explanation: Exercise 3
Using the removeO command, plot each of these solutions for $t \in [0, 1]$.
End of explanation
def all_primes(N):
Return all primes less than or equal to N.
Parameters
----------
N : int
Maximum number
Returns
-------
prime : generator
Prime numbers
primes = []
for n in range(2, N+1):
is_n_prime = True
for p in primes:
if n%p == 0:
is_n_prime = False
break
if is_n_prime:
primes.append(n)
yield n
Explanation: Twin primes
A twin prime is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.
Exercise 1
Write a generator that returns twin primes. You can use the generators above, and may want to look at the itertools module together with its recipes, particularly the pairwise recipe.
Solution
Note: we need to first pull in the generators introduced in that notebook
End of explanation
from itertools import tee
def pair_primes(N):
"Generate consecutive prime pairs, using the itertools recipe"
a, b = tee(all_primes(N))
next(b, None)
return zip(a, b)
Explanation: Now we can generate pairs using the pairwise recipe:
End of explanation
def check_twin(pair):
Take in a pair of integers, check if they differ by 2.
p1, p2 = pair
return p2-p1 == 2
Explanation: We could examine the results of the two primes directly. But an efficient solution is to use python's filter function. To do this, first define a function checking if the pair are twin primes:
End of explanation
def twin_primes(N):
Return all twin primes
return filter(check_twin, pair_primes(N))
Explanation: Then use the filter function to define another generator:
End of explanation
for tp in twin_primes(20):
print(tp)
Explanation: Now check by finding the twin primes with $N<20$:
End of explanation
def pi_N(N):
Use the quantify pattern from itertools to count the number of twin primes.
return sum(map(check_twin, pair_primes(N)))
pi_N(1000)
Explanation: Exercise 2
Find how many twin primes there are with $p_2 < 1000$.
Solution
Again there are many solutions, but the itertools recipes page has the quantify pattern. Looking ahead to exercise 3 we'll define:
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
N = numpy.array([2**k for k in range(4, 17)])
twin_prime_fraction = numpy.array(list(map(pi_N, N))) / N
pyplot.semilogx(N, twin_prime_fraction)
pyplot.xlabel(r"$N$")
pyplot.ylabel(r"$\pi_N / N$")
pyplot.show()
Explanation: Exercise 3
Let $\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \dots 16$. (You should use a logarithmic scale where appropriate!)
Solution
We've now done all the hard work and can use the solutions above.
End of explanation
pyplot.semilogx(N, twin_prime_fraction * numpy.log(N)**2)
pyplot.xlabel(r"$N$")
pyplot.ylabel(r"$\pi_N \times \log(N)^2 / N$")
pyplot.show()
Explanation: For those that have checked Wikipedia, you'll see Brun's theorem which suggests a specific scaling, that $\pi_N$ is bounded by $C N / \log(N)^2$. Checking this numerically on this data:
End of explanation
class Polynomial(object):
Representing a polynomial.
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def __mul__(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
return None
class Monomial(Polynomial):
Representing a monomial, which is a polynomial with leading term 1.
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
Explanation: A basis for the polynomials
In the section on classes we defined a Monomial class to represent a polynomial with leading coefficient $1$. As the $N+1$ monomials $1, x, x^2, \dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\mathbb{P}^N$, we can use the Monomial class to return this basis.
Exercise 1
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^3$.
Solution
Again we first take the definition of the crucial class from the notes.
End of explanation
def basis_pN(N):
A generator for the simplest basis of P^N.
for n in range(N+1):
yield Monomial(n*[0])
Explanation: Now we can define the first basis:
End of explanation
for poly in basis_pN(3):
print(poly)
Explanation: Then test it on $\mathbb{P}^N$:
End of explanation
class Monomial(Polynomial):
Representing a monomial, which is a polynomial with leading term 1.
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
if len(self.roots):
string = ""
n_zero_roots = len(self.roots) - numpy.count_nonzero(self.roots)
if n_zero_roots == 1:
string = "x"
elif n_zero_roots > 1:
string = "x^{}".format(n_zero_roots)
else: # Monomial degree 0.
string = "1"
for root in self.roots:
if root > 0:
string = string + "(x - {})".format(root)
elif root < 0:
string = string + "(x + {})".format(-root)
return string
Explanation: This looks horrible, but is correct. To really make this look good, we need to improve the output. If we use
End of explanation
for poly in basis_pN(3):
print(poly)
Explanation: then we can deal with the uglier cases, and re-running the test we get
End of explanation
def basis_pN_variant(N):
    """A generator for the 'sum' basis of P^N."""
for n in range(N+1):
yield Monomial(range(n+1))
for poly in basis_pN_variant(4):
print(poly)
Explanation: An even better solution would be to use the numpy.unique function as in this stackoverflow answer (the second one!) to get the frequency of all the roots.
Exercise 2
An alternative basis is given by the monomials
\begin{align}
p_0(x) &= 1, \\
p_1(x) &= 1-x, \\
p_2(x) &= (1-x)(2-x), \\
\dots & \quad \dots, \\
p_N(x) &= \prod_{n=1}^N (n-x).
\end{align}
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^4$.
Solution
End of explanation
from itertools import product
def basis_product():
    """Basis of the product space."""
yield from product(basis_pN(3), basis_pN_variant(4))
for p1, p2 in basis_product():
print("Basis element is ({}) X ({}).".format(p1, p2))
Explanation: I am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!
Exercise 3
Use these generators to write another generator that produces a basis of $\mathbb{P^3} \times \mathbb{P^4}$.
Solution
Hopefully by now you'll be aware of how useful itertools is!
End of explanation
def basis_product_long_form():
    """Basis of the product space (without using yield from)."""
    prod = product(basis_pN(3), basis_pN_variant(4))
    for p in prod:
        yield p
for p1, p2 in basis_product_long_form():
    print("Basis element is ({}) X ({}).".format(p1, p2))
Explanation: I've cheated here as I haven't introduced the yield from syntax (which returns an iterator from a generator). We could write this out instead as
End of explanation |
1,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised learning
Step1: First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's linkage and dendrogram functions
Step2: Next, let's use the AgglomerativeClustering estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?
Step3: Density-based Clustering - DBSCAN
Another useful approach to clustering is Density-based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups based on dense regions of points.
In DBSCAN, we distinguish between 3 different "points"
Step4: Exercise
<div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape
plt.scatter(X[:, 0], X[:, 1], c=y);
Explanation: Unsupervised learning: Hierarchical and density-based clustering algorithms
In a previous notebook, "08 Unsupervised Learning - Clustering.ipynb", we introduced one of the essential and widely used clustering algorithms, K-means. One of the advantages of K-means is that it is extremely easy to implement, and it is also computationally very efficient compared to other clustering algorithms. However, we've seen that one of the weaknesses of K-Means is that it only works well if the data can be grouped into a globular or spherical shape. Also, we have to assign the number of clusters, k, a priori -- this can be a problem if we have no prior knowledge about how many clusters we expect to find.
In this notebook, we will take a look at 2 alternative approaches to clustering, hierarchical clustering and density-based clustering.
Hierarchical Clustering
One nice feature of hierarchical clustering is that we can visualize the results as a dendrogram, a hierarchical tree. Using the visualization, we can then decide how "deep" we want to cluster the dataset by setting a "depth" threshold. Or in other words, we don't need to make a decision about the number of clusters upfront.
Agglomerative and divisive hierarchical clustering
Furthermore, we can distinguish between 2 main approaches to hierarchical clustering: Divisive clustering and agglomerative clustering. In agglomerative clustering, we start with a single sample from our dataset and iteratively merge it with other samples to form clusters -- we can see it as a bottom-up approach for building the clustering dendrogram.
In divisive clustering, however, we start with the whole dataset as one cluster, and we iteratively split it into smaller subclusters -- a top-down approach.
In this notebook, we will use agglomerative clustering.
Single and complete linkage
Now, the next question is how we measure the similarity between samples. One approach is the familiar Euclidean distance metric that we already used via the K-Means algorithm. As a refresher, the distance between 2 m-dimensional vectors $\mathbf{p}$ and $\mathbf{q}$ can be computed as:
\begin{align} \mathrm{d}(\mathbf{q},\mathbf{p}) & = \sqrt{(q_1-p_1)^2 + (q_2-p_2)^2 + \cdots + (q_m-p_m)^2} \\[8pt]
& = \sqrt{\sum_{j=1}^m (q_j-p_j)^2}.\end{align}
However, that's the distance between 2 samples. Now, how do we compute the similarity between subclusters of samples in order to decide which clusters to merge when constructing the dendrogram? I.e., our goal is to iteratively merge the most similar pairs of clusters until only one big cluster remains. There are many different approaches to this, for example single and complete linkage.
In single linkage, we take the pair of the most similar samples (based on the Euclidean distance, for example) in each cluster, and merge the two clusters which have the most similar 2 members into one new, bigger cluster.
In complete linkage, we compare the pairs of the two most dissimilar members of each cluster with each other, and we merge the 2 clusters where the distance between its 2 most dissimilar members is smallest.
To see the agglomerative, hierarchical clustering approach in action, let us load the familiar Iris dataset -- pretending we don't know the true class labels and want to find out how many different flower species it consists of:
End of explanation
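To make the effect of the linkage criterion concrete before running the cell below, here is a minimal added sketch (assuming the iris feature matrix X defined above) that builds the hierarchy with single and with complete linkage and compares the resulting flat clusterings:
from scipy.cluster.hierarchy import linkage, fcluster
import numpy as np

single_linkage = linkage(X, method='single', metric='euclidean')
complete_linkage = linkage(X, method='complete', metric='euclidean')

# Cut both dendrograms into 3 flat clusters and compare the cluster sizes
labels_single = fcluster(single_linkage, t=3, criterion='maxclust')
labels_complete = fcluster(complete_linkage, t=3, criterion='maxclust')
print("Cluster sizes (single linkage):  ", np.bincount(labels_single)[1:])
print("Cluster sizes (complete linkage):", np.bincount(labels_complete)[1:])
Single linkage tends to chain points into one large cluster plus a few very small ones, while complete linkage usually gives more balanced groups.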
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
clusters = linkage(X,
metric='euclidean',
method='complete')
dendr = dendrogram(clusters)
plt.ylabel('Euclidean Distance');
Explanation: First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's linkage and dendrogram functions:
End of explanation
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3,
affinity='euclidean',
linkage='complete')
prediction = ac.fit_predict(X)
print('Cluster labels: %s\n' % prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
Explanation: Next, let's use the AgglomerativeClustering estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?
End of explanation
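Because the true iris species are available in y here, one optional sanity check (an added sketch, not part of the original notebook) is to compare the agglomerative cluster assignment prediction from the cell above against them:
from sklearn.metrics import adjusted_rand_score, confusion_matrix

# 1.0 would mean a perfect match up to relabelling of the clusters; 0.0 is chance-level agreement
print("Adjusted Rand index:", adjusted_rand_score(y, prediction))
print(confusion_matrix(y, prediction))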
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=400,
noise=0.1,
random_state=1)
plt.scatter(X[:,0], X[:,1])
plt.show()
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2,
min_samples=10,
metric='euclidean')
prediction = db.fit_predict(X)
print("Predicted labels:\n", prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
Explanation: Density-based Clustering - DBSCAN
Another useful approach to clustering is Density-based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups based on dense regions of points.
In DBSCAN, we distinguish between 3 different "points":
Core points: A core point is a point that has at least a minimum number of other points (MinPts) in its radius epsilon.
Border points: A border point is a point that is not a core point, since it doesn't have enough MinPts in its neighborhood, but lies within the radius epsilon of a core point.
Noise points: All other points that are neither core points nor border points.
A nice feature about DBSCAN is that we don't have to specify a number of clusters upfront. However, it requires the setting of additional hyperparameters such as the value for MinPts and the radius epsilon.
End of explanation
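As a small illustration of those hyperparameters (an added sketch, assuming X, db and prediction from the cells above), we can count how many points DBSCAN labelled as noise -- they get the label -1 -- and see how sensitive the result is to the radius epsilon:
import numpy as np

print("Noise points with eps=0.2:", np.sum(prediction == -1))
print("Core samples:", len(db.core_sample_indices_))

# Re-fit with a few different radii to see how the clustering reacts
for eps in [0.05, 0.1, 0.2, 0.5]:
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print("eps=%.2f -> %d clusters, %d noise points" % (eps, n_clusters, np.sum(labels == -1)))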
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1500,
factor=.4,
noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
# %load solutions/20_clustering_comparison.py
Explanation: Exercise
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Using the following toy dataset, two concentric circles, experiment with the three different clustering algorithms that we used so far: `KMeans`, `AgglomerativeClustering`, and `DBSCAN`.
Which clustering algorithm reproduces or discovers the hidden structure (pretending we don't know `y`) best?
Can you explain why this particular algorithm is a good choice while the other 2 "fail"?
</li>
</ul>
</div>
End of explanation |
1,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inspecting the PubMed Paper Dataset
(Adapted from
Step1: To make it easier to access the data, we convert here paper entries into named tuples. This will allow us to refer to fields by keyword, rather than index.
Step2: Dataset statistics
Plotting relies on matplotlib, which you can download from here (NumPy is also required, and can be downloaded here).
Step3: Papers per year
Here, we will get information on how many papers in the dataset were published per year.
We'll be using the Counter class to determine the number of papers per year.
Step4: Filtering results, to obtain only papers since 1950
Step5: Creating a bar plot to visualize the results (using matplotlib.pyplot.bar)
Step6: Papers per author
Here, we will obtain the distribution characterizing the number of papers published by an author.
Step7: Creating a histogram to visualize the results (using matplotlib.pyplot.hist)
Step8: Authors per paper
Step9: Most frequently occurring words in paper titles
Step10: Assignments
Your name
Step11: Calculate and plot (e.g. using plt.plot) a graph of the frequency of the 50 most frequent words in titles of papers, from most frequent to least frequent.
Step12: While keeping in mind that we are dealing with a biased (preselected) dataset about air-related papers, what do you notice when looking at the top 10 most frequent words?
[Write your answer text here] | Python Code:
import pickle, bz2
Summaries_file = 'data/air__Summaries.pkl.bz2'
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
Explanation: Inspecting the PubMed Paper Dataset
(Adapted from: Inspecting the dataset - Luís F. Simões. Assignments added by J.E. Hoeksema, 2014-10-16. Converted to Python 3 and minor changes by Tobias Kuhn, 2015-10-23.)
This notebook's purpose is to provide a basic illustration of how to handle data in the PubMed dataset, as well as to provide some basic assignments about this dataset. Make sure you download all the dataset files (air__Summaries.pkl.bz2, etc.) from Blackboard and save them in a directory called data, which should be a sub-directory of the one that contains this notebook file (or adjust the file path in the code). The dataset consists of information about scientific papers from the PubMed dataset that contain the word "air" in the title or abstract.
Note that you can run all of this code from a normal python or ipython shell, except for certain magic codes (marked with %) used for display within a notebook.
Loading the dataset
End of explanation
from collections import namedtuple
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
Summaries[26488732]
Summaries[26488732].title
Explanation: To make it easier to access the data, we convert here paper entries into named tuples. This will allow us to refer to fields by keyword, rather than index.
End of explanation
import matplotlib.pyplot as plt
# show plots inline within the notebook
%matplotlib inline
# set plots' resolution
plt.rcParams['savefig.dpi'] = 100
Explanation: Dataset statistics
Plotting relies on matplotlib, which you can download from here (NumPy is also required, and can be downloaded here).
End of explanation
from collections import Counter
paper_years = [ p.year for p in Summaries.values() ]
papers_per_year = sorted( Counter(paper_years).items() )
print('Number of papers in the dataset per year for the past decade:')
print(papers_per_year[-10:])
Explanation: Papers per year
Here, we will get information on how many papers in the dataset were published per year.
We'll be using the Counter class to determine the number of papers per year.
End of explanation
papers_per_year = [ (y,count) for (y,count) in papers_per_year if y >= 1950 ]
years = [ y for (y,count) in papers_per_year ]
nr_papers = [ count for (y,count) in papers_per_year ]
print('Number of papers in the dataset published since 1950: %d.' % sum(nr_papers))
Explanation: Filtering results, to obtain only papers since 1950:
End of explanation
plt.bar( left=years, height=nr_papers, width=1.0 )
plt.xlim(1950,2016)
plt.xlabel('year')
plt.ylabel('number of papers');
Explanation: Creating a bar plot to visualize the results (using matplotlib.pyplot.bar):
End of explanation
# flattening out of the list of lists of authors
authors_expanded = [
auth
for paper in Summaries.values()
for auth in paper.authors
]
nr_papers_by_author = Counter( authors_expanded )
print('There are %d authors in the dataset with distinct names.\n' % len(nr_papers_by_author))
print('50 authors with greatest number of papers:')
print(sorted(nr_papers_by_author.items(), key=lambda i:i[1] )[-50:])
Explanation: Papers per author
Here, we will obtain the distribution characterizing the number of papers published by an author.
End of explanation
plt.hist( x=list(nr_papers_by_author.values()), bins=range(51), histtype='step' )
plt.yscale('log')
plt.xlabel('number of papers authored')
plt.ylabel('number of authors');
Explanation: Creating a histogram to visualize the results (using matplotlib.pyplot.hist):
End of explanation
plt.hist( x=[ len(p.authors) for p in Summaries.values() ], bins=range(20), histtype='bar', align='left', normed=True )
plt.xlabel('number of authors in one paper')
plt.ylabel('fraction of papers')
plt.xlim(0,15);
Explanation: Authors per paper
End of explanation
# assemble list of words in paper titles, convert them to lowercase, and remove trailing '.'
title_words = Counter([
( word if word[-1] != '.' else word[:-1] ).lower()
for paper in Summaries.values()
for word in paper.title.split(' ')
if word != '' # discard empty strings that are generated when consecutive spaces occur in the title
])
print(len(title_words), 'distinct words occur in the paper titles.\n')
print('50 most frequently occurring words:')
print(sorted( title_words.items(), key=lambda i:i[1] )[-50:])
Explanation: Most frequently occurring words in paper titles
End of explanation
# Add your code here
Explanation: Assignments
Your name: ...
Create a plot for the years from 1970 until 2015 that shows how many authors published at least one paper.
Hint: use a defaultdict with a default value of set. You can retrieve the number of unique items in a set s with len(s). See also the documentation for set and defaultdict
End of explanation
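Before you start, here is a tiny added illustration of the hinted-at pattern (deliberately on toy data, and not the assignment solution): a defaultdict whose default value is an empty set, keyed by year.
from collections import defaultdict

toy = [(2001, 'A'), (2001, 'B'), (2001, 'A'), (2002, 'C')]
authors_by_year = defaultdict(set)
for year, author in toy:
    authors_by_year[year].add(author)   # sets ignore duplicate authors automatically
print({year: len(authors) for year, authors in authors_by_year.items()})  # {2001: 2, 2002: 1}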
# Add your code here
Explanation: Calculate and plot (e.g. using plt.plot) a graph of the frequency of the 50 most frequent words in titles of papers, from most frequent to least frequent.
End of explanation
# Add your code here
Explanation: While keeping in mind that we are dealing with a biased (preselected) dataset about air-related papers, what do you notice when looking at the top 10 most frequent words?
[Write your answer text here]
End of explanation |
1,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Real World Tutorial 3
Step3: We define a function that computes the sum of all primes below a certain integer n, and don't try to be smart about it; the point is that it needs a lot of computation. These functions are designated nogil, so that we can be certain no Python objects are accessed. Finally we create a single Python exposed function that uses the
Step4: In fact, we only loaded the multiprocessing module to get the number of CPUs on this machine. We also get a decent amount of work to do in the input_range.
Step5: Single thread
Let's first run our tests in a single thread
Step6: Multi-threading
Step7: Using Noodles
On my laptop, a dual-core hyper-threaded Intel(R) Core(TM) i5-5300U CPU, this runs just over two times faster than the single threaded code. However, setting up a queue and a pool of workers is quite cumbersome. Also, this approach doesn't scale up if the dependencies between our computations get more complex. Next we'll use Noodles to provide the multi-threaded environment to execute our work. We'll need three functions | Python Code:
%load_ext cython
import multiprocessing
import threading
import queue
Explanation: Real World Tutorial 3: Parallel Number crunching using Cython
Python was not designed to be very good at parallel processing. There are two major problems at the core of the language that make it hard to implement parallel algorithms.
The Global Interpreter Lock
Flexible object model
The first of these issues is the most famous obstacle towards a convincing multi-threading approach, where a single instance of the Python interpreter runs in several threads. The second point is more subtle, but makes it harder to do multi-processing, where several independent instances of the Python interpreter work together to achieve parallelism. We will first explain an elegant way to work around the Global Interpreter Lock, or GIL: use Cython.
Using Cython to lift the GIL
The GIL means that the Python interpreter will only operate on one thread at a time. Even when we think we run in a gazillion threads, Python itself uses only one. Multi-threading in Python is only useful to wait for I/O and to perform system calls. To do useful CPU intensive work in multi-threaded mode, we need to develop functions that are implemented in C, and tell Python when we call these functions not to worry about the GIL. The easiest way to achieve this is to use Cython. We develop a number-crunching prime adder, and have it run in parallel threads.
We'll load the multiprocessing, threading and queue modules to do our plumbing, and the cython extension so we can do the number crunching, as is shown in this blog post.
End of explanation
%%cython
from libc.math cimport ceil, sqrt
cdef inline int _is_prime(int n) nogil:
return a boolean, is the input integer a prime?
if n == 2:
return True
cdef int max_i = <int>ceil(sqrt(n))
cdef int i = 2
while i <= max_i:
if n % i == 0:
return False
i += 1
return True
cdef unsigned long _sum_primes(int n) nogil:
    # return the sum of all primes less than n
cdef unsigned long i = 0
cdef int x
for x in range(2, n):
if _is_prime(x):
i += x
return i
def sum_primes(int n):
with nogil:
result = _sum_primes(n)
return result
Explanation: We define a function that computes the sum of all primes below a certain integer n, and don't try to be smart about it; the point is that it needs a lot of computation. These functions are designated nogil, so that we can be certain no Python objects are accessed. Finally we create a single Python-exposed function that uses the `with nogil:` statement. This is a context manager that lifts the GIL for the duration of its contents.
End of explanation
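For reference, here is an added plain-Python sketch of the same computation; it is intentionally written in the same naive way, so timing it against the Cython version shows how much the compiled, GIL-free code buys us.
from math import ceil, sqrt

def sum_primes_py(n):
    # same brute-force logic as the Cython version, but interpreted and GIL-bound
    def is_prime(x):
        if x == 2:
            return True
        return all(x % i != 0 for i in range(2, int(ceil(sqrt(x))) + 1))
    return sum(x for x in range(2, n) if is_prime(x))

# e.g. compare %timeit sum_primes_py(10**5) against %timeit sum_primes(10**5)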
input_range = range(int(1e6), int(2e6), int(5e4))
ncpus = multiprocessing.cpu_count()
print("We have {} cores to work on!".format(ncpus))
Explanation: In fact, we only loaded the multiprocessing module to get the number of CPUs on this machine. We also get a decent amount of work to do in the input_range.
End of explanation
%%time
for i in input_range:
print(sum_primes(i), end=' ', flush=True)
print()
Explanation: Single thread
Let's first run our tests in a single thread:
End of explanation
%%time
### We need to define a worker function that fetches jobs from the queue.
def worker(q):
while True:
try:
x = q.get(block=False)
print(sum_primes(x), end=' ', flush=True)
except queue.Empty:
break
### Create the queue, and fill it with input values
work_queue = queue.Queue()
for i in input_range:
work_queue.put(i)
### Start a number of threads
threads = [
threading.Thread(target=worker, args=(work_queue,))
for i in range(ncpus)]
for t in threads:
t.start()
### Wait until all of them are done
for t in threads:
t.join()
print()
Explanation: Multi-threading: Worker pool
We can do better than that! We now create a queue containing the work to be done, and a pool of threads eating from this queue. The workers will keep on working as long as the queue has work for them.
End of explanation
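A lighter-weight variant of the same worker-pool idea (an added sketch using only the standard library) is concurrent.futures, which hides the queue and thread management behind a single map call; because sum_primes releases the GIL, the threads really do run in parallel:
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=ncpus) as pool:
    for result in pool.map(sum_primes, input_range):
        print(result, end=' ', flush=True)
print()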
from noodles import (schedule, run_parallel, gather)
%%time
@schedule
def s_sum_primes(n):
result = sum_primes(n)
print(result, end=' ', flush=True)
return result
p_prime_sums = gather(*(s_sum_primes(i) for i in input_range))
prime_sums = run_parallel(p_prime_sums, n_threads=ncpus)
print()
Explanation: Using Noodles
On my laptop, a dual-core hyper-threaded Intel(R) Core(TM) i5-5300U CPU, this runs just over two times faster than the single threaded code. However, setting up a queue and a pool of workers is quite cumbersome. Also, this approach doesn't scale up if the dependencies between our computations get more complex. Next we'll use Noodles to provide the multi-threaded environment to execute our work. We'll need three functions:
schedule to decorate our work function
run_parallel to run the work in parallel
gather to collect our work into a workflow
End of explanation |
1,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of two means (T-test)
Step1: In this notebook we demo two equivalent ways of performing a two-sample Bayesian t-test to compare the mean value of two Gaussian populations using Bambi.
Generate data
We generate 160 values from a Gaussian with $\mu=6$ and $\sigma=2.5$ and another 120 values from a Gaussian with $\mu=8$ and $\sigma=2$.
Step2: When we carry out a two sample t-test we are implicitly using a linear model that can be specified in different ways. One of these approaches is the following
Step3: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters. We can inspect both the setup and the priors as follows
Step4: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
<!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here -->
Step5: In the summary table we can see the 94% highest density interval for $\beta_1$ ranges from 1.511 to 2.499. Thus, according to the data and the model used, we conclude the difference between the two population means is somewhere between 1.5 and 2.5, and hence we support the hypothesis that $\beta_1 \ne 0$.
Similar conclusions can be made with the density estimate for the posterior distribution of $\beta_1$. As seen in the table, most of the probability for the difference in the means roughly ranges from 1.5 to 2.5.
Step6: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different.
Step7: The linear model implicit in the t-test can also be specified without an intercept term, as is the case in Model 2.
Model 2
When we carry out a two sample t-test we're implicitly using the following model
Step8: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters. We can inspect both the setup and the priors as follows
Step9: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
<!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here -->
Step10: In this summary we can observe the estimated distribution of means for each population. A simple way to compare them is to subtract one from the other. In the next plot we can see that the entire distribution of differences lies above zero and that the mean of population 2 is higher than the mean of population 1 by about 2 on average.
Step11: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 - \beta_0 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different.
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
az.style.use("arviz-darkgrid")
np.random.seed(1234)
Explanation: Comparison of two means (T-test)
End of explanation
a = np.random.normal(6, 2.5, 160)
b = np.random.normal(8, 2, 120)
df = pd.DataFrame({"Group": ["a"] * 160 + ["b"] * 120, "Val": np.hstack([a, b])})
df.head()
az.plot_violin({"a": a, "b": b});
Explanation: In this notebook we demo two equivalent ways of performing a two-sample Bayesian t-test to compare the mean value of two Gaussian populations using Bambi.
Generate data
We generate 160 values from a Gaussian with $\mu=6$ and $\sigma=2.5$ and another 120 values from a Gaussian with $\mu=8$ and $\sigma=2$.
End of explanation
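Before fitting any model it can help to look at plain per-group summary statistics (an added sketch using the df defined above):
# Sample mean, standard deviation and size per group
print(df.groupby("Group")["Val"].agg(["mean", "std", "count"]))

group_means = df.groupby("Group")["Val"].mean()
print("Observed difference in sample means:", group_means["b"] - group_means["a"])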
model_1 = bmb.Model("Val ~ Group", df)
results_1 = model_1.fit()
Explanation: When we carry out a two sample t-test we are implicitly using a linear model that can be specified in different ways. One of these approaches is the following:
Model 1
$$
\mu_i = \beta_0 + \beta_1 (i) + \epsilon_i
$$
where $i = 0$ represents the population 1, $i = 1$ the population 2 and $\epsilon_i$ is a random error with mean 0. If we replace the indicator variables for the two groups we have
$$
\mu_0 = \beta_0 + \epsilon_i
$$
and
$$
\mu_1 = \beta_0 + \beta_1 + \epsilon_i
$$
if $\mu_0 = \mu_1$ then
$$
\beta_0 + \epsilon_i = \beta_0 + \beta_1 + \epsilon_i \\
0 = \beta_1
$$
Thus, we can see that testing whether the means of the two populations are equal is equivalent to testing whether $\beta_1$ is 0.
Analysis
We start by instantiating our model and specifying the model previously described.
End of explanation
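For comparison, the classical frequentist two-sample t-test on the same data points in the same direction (an added side note, assuming the arrays a and b generated earlier):
from scipy import stats

# Welch's t-test, which does not assume equal variances in the two groups
t_stat, p_value = stats.ttest_ind(b, a, equal_var=False)
print("t = %.2f, p = %.2g" % (t_stat, p_value))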
model_1
model_1.plot_priors();
Explanation: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters. We can inspect both the setup and the priors as follows:
End of explanation
az.plot_trace(results_1, kind="rank_vlines");
az.summary(results_1)
Explanation: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
<!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here -->
End of explanation
# Grab just the posterior of the term of interest (group)
group_posterior = results_1.posterior['Group']
az.plot_posterior(group_posterior, ref_val=0);
Explanation: In the summary table we can see the 94% highest density interval for $\beta_1$ ranges from 1.511 to 2.499. Thus, according to the data and the model used, we conclude the difference between the two population means is somewhere between 1.5 and 2.5, and hence we support the hypothesis that $\beta_1 \ne 0$.
Similar conclusions can be made with the density estimate for the posterior distribution of $\beta_1$. As seen in the table, most of the probability for the difference in the means roughly ranges from 1.5 to 2.5.
End of explanation
# Probability that the posterior is > 0
(group_posterior.values > 0).mean()
Explanation: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different.
End of explanation
model_2 = bmb.Model("Val ~ 0 + Group", df)
results_2 = model_2.fit()
Explanation: The linear model implicit in the t-test can also be specified without an intercept term, as is the case in Model 2.
Model 2
When we carry out a two sample t-test we're implicitly using the following model:
$$
\mu_i = \beta_i + \epsilon_i
$$
where $i = 0$ represents the population 1, $i = 1$ the population 2 and $\epsilon$ is a random error with mean 0. If we replace the indicator variables for the two groups we have
$$
\mu_0 = \beta_0 + \epsilon
$$
and
$$
\mu_1 = \beta_1 + \epsilon
$$
if $\mu_0 = \mu_1$ then
$$
\beta_0 + \epsilon = \beta_1 + \epsilon
$$
Thus, we can see that testing whether the means of the two populations are equal is equivalent to testing whether $\beta_0 = \beta_1$.
Analysis
We start by instantiating our model and specifying the model previously described. In this model we will bypass the intercept that Bambi adds by default by setting it to zero, even though setting it to -1 has the same effect.
End of explanation
model_2
model_2.plot_priors();
Explanation: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters. We can inspect both the setup and the priors as follows:
End of explanation
az.plot_trace(results_2, kind="rank_vlines");
az.summary(results_2)
Explanation: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
<!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here -->
End of explanation
# Grab just the posterior of the term of interest (group)
group_posterior = results_2.posterior['Group'][:,:,1] - results_2.posterior['Group'][:,:,0]
az.plot_posterior(group_posterior, ref_val=0);
Explanation: In this summary we can observe the estimated distribution of means for each population. A simple way to compare them is to subtract one from the other. In the next plot we can see that the entire distribution of differences lies above zero and that the mean of population 2 is higher than the mean of population 1 by about 2 on average.
End of explanation
# Probability that the posterior is > 0
(group_posterior.values > 0).mean()
%load_ext watermark
%watermark -n -u -v -iv -w
Explanation: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 - \beta_0 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different.
End of explanation |
1,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: b. Checking for Normality
As an extra all-purpose check, and one that is often done on series, check whether the above series is normally distributed using the Jarque-Bera test.
Step3: c. Constructing Examples I
Create/provide a series that is stationary and different from any covered so far in the exercise or the lecture.
Step4: d. Constructing Examples II
Create/provide a series that is non-stationary and different from any covered so far in the exercise or the lecture.
Step5: Exercise 2
Step6: Exercise 3 | Python Code:
# Useful Functions
def check_for_stationarity(X, cutoff=0.01):
# H_0 in adfuller is unit root exists (non-stationary)
# We must observe significant p-value to convince ourselves that the series is stationary
pvalue = adfuller(X)[1]
if pvalue < cutoff:
print 'p-value = ' + str(pvalue) + ' The series is likely stationary.'
return True
else:
print 'p-value = ' + str(pvalue) + ' The series is likely non-stationary.'
return False
def generate_datapoint(params):
mu = params[0]
sigma = params[1]
return np.random.normal(mu, sigma)
# Useful Libraries
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
import matplotlib.pyplot as plt
Explanation: Exercises: Integration, Cointegration, and Stationarity - Answer Key
by Delaney Granizo-Mackenzie and Maxwell Margenot
Lecture Link :
https://www.quantopian.com/lectures/integration-cointegration-and-stationarity
IMPORTANT NOTE:
This lecture corresponds to the Integration, Cointegration, and Stationarity lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Helper Functions
End of explanation
QQQ = get_pricing("QQQ", start_date='2014-1-1', end_date='2015-1-1', fields='price')
QQQ.name = QQQ.name.symbol
check_for_stationarity(QQQ)
Explanation: Exercise 1: Stationarity Testing
a. Checking For Stationarity
Check whether the following series is stationary using the tests from the lecture.
End of explanation
from statsmodels.stats.stattools import jarque_bera
jarque_bera(QQQ)
Explanation: b. Checking for Normality
As an extra all-purpose check, and one that is often done on series, check whether the above series is normally distributed using the Jarque-Bera test.
End of explanation
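The bare tuple returned above is easier to read with its components named; this added sketch assumes statsmodels' jarque_bera, which returns the test statistic, its p-value, and the sample skewness and kurtosis:
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(QQQ)
print 'JB statistic: %f, p-value: %f' % (jb_stat, jb_pvalue)
print 'skewness: %f, kurtosis: %f' % (skew, kurtosis)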
X = np.random.normal(0, 1, 100)
check_for_stationarity(X)
Explanation: c. Constructing Examples I
Create/provide a series that is stationary and different from any covered so far in the exercise or the lecture.
End of explanation
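Another stationary series that differs from the plain white noise above (an added alternative answer) is a mean-reverting AR(1) process with its coefficient well inside the unit circle:
# AR(1): X_t = 0.5 * X_{t-1} + noise, stationary because |0.5| < 1
ar = pd.Series(index=range(100))
ar[0] = 0.0
for t in range(1, 100):
    ar[t] = 0.5 * ar[t - 1] + np.random.normal(0, 1)
check_for_stationarity(ar)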
# Set the number of datapoints
T = 100
B = pd.Series(index=range(T))
B.name = 'B'
for t in range(T):
# Now the parameters are dependent on time
# Specifically, the mean of the series changes over time
params = (np.power(t, 2), 1)
B[t] = generate_datapoint(params)
plt.plot(B);
check_for_stationarity(B)
Explanation: d. Constructing Examples II
Create/provide a series that is non-stationary and different from any covered so far in the exercise or the lecture.
End of explanation
QQQ = get_pricing("QQQ", start_date='2014-1-1', end_date='2015-1-1', fields='price')
QQQ.name = QQQ.name.symbol
# Write code to estimate the order of integration of QQQ.
# Feel free to sample from the code provided in the lecture.
QQQ = QQQ.diff()[1:]
QQQ.name = QQQ.name + ' Additive Returns'
check_for_stationarity(QQQ)
plt.plot(QQQ.index, QQQ.values)
plt.ylabel('Additive Returns')
plt.legend([QQQ.name]);
Explanation: Exercise 2: Estimate Order of Integration
Use the techniques laid out in the lecture notebook to estimate the order of integration for the following timeseries.
End of explanation
T = 500
X1 = pd.Series(index=range(T))
X1.name = 'X1'
for t in range(T):
# Now the parameters are dependent on time
# Specifically, the mean of the series changes over time
params = (t * 0.1, 1)
X1[t] = generate_datapoint(params)
X2 = np.power(X1, 2) + X1
X3 = np.power(X1, 3) + X1
X4 = np.sin(X1) + X1
# We now have 4 time series, X1, X2, X3, X4
# Determine a linear combination of the 4 that is stationary over the
# time period shown using the techniques in the lecture.
X1 = sm.add_constant(X1)
results = sm.OLS(X4, X1).fit()
# Get rid of the constant column
X1 = X1['X1']
results.params
plt.plot(X4-0.99 * X1);
check_for_stationarity(X4 - 0.99*X1)
Explanation: Exercise 3: Find a Stationary Linear (Cointegrated) Combination
Use the techniques laid out in the lecture notebook to find a linear combination of the following timeseries that is stationary.
End of explanation |
1,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Abstract
Title
Step1: Introduction to<br/> Software Analytics
<b>Markus Harrer</b>, Software Development Analyst
@feststelltaste
<small>ML Summit 2019, October 14, 2019</small>
<img src="../../demos/resources/innoq_logo.jpg" width=20% height="20%" align="right"/>
Workshop structure (1/2)
Part 1
Step2: "100" == maximum popularity!
My "bias"
Master's student
Step3: We take a look at basic information about the dataset.
Step4: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> entries (= rows)
We convert the timestamps from text into objects.
Step5: We look only at the most recent changes.
Step6: We only want to work with Java code.
Step7: III. Formal modeling
Create new views
Join in additional data
We count the number of changes per file.
Step8: We bring in information about the lines of code...
Step9: ...and join it with the existing data.
Step10: IV. Interpretation
Work out the core result of the analysis
Make the central message / new insights clear
We show only the TOP 10 hotspots in the code.
Step11: V. Communication
Transform the insights into an understandable visualization
Communicate the next steps after the analysis
We create an XY plot from the TOP 10 list. | Python Code:
%matplotlib inline
import pandas as pd
Explanation: Abstract
Title: Introduction to Software Analytics
Description
Companies make heavy use of data analysis to gain valuable insights from their business data. So why don't we, as software developers, apply data analysis to our own data as well?
In this workshop I present the approach and best practices of Software Analytics. We look at the accompanying open-source tools that let us analyze and communicate problems in software development in a targeted way.
In the hands-on part with Jupyter, pandas, jQAssistant, Neo4j & Co. we jointly work out valuable insights from data sources such as Git repositories, performance data, quality reports, or the program code itself. We hunt for particularly error-prone code, map out no-go areas in legacy applications, and prioritize clean-up work along the most important parts of the program.
You are welcome to work along during this interactive workshop. A notebook with internet access is all you need.
End of explanation
pd.read_csv("../datasets/google_trends_datascience.csv").plot();
Explanation: Introduction to<br/> Software Analytics
<b>Markus Harrer</b>, Software Development Analyst
@feststelltaste
<small>ML Summit 2019, October 14, 2019</small>
<img src="../../demos/resources/innoq_logo.jpg" width=20% height="20%" align="right"/>
Workshop structure (1/2)
Part 1: Theory & hands-on
Introduction to the topic "Software Analytics"
An approach for data analyses in software development
Tools for lightweight Software Analytics
Workshop structure (2/2)
Part 2: Practice
Carrying out first analyses together
Working on tasks in small groups
Questions & answers
About me
<img src="../../demos/resources/ueber_mich.png" style="width:85%;" >
Data analysis in software development?
... a typical project history
... a typical project history
<img src="../../demos/resources/schuld1.png" style="width:95%;" align="center"/>
... a typical project history
<img src="../../demos/resources/schuld2.png" style="width:95%;" align="center"/>
... a typical project history
<img src="../../demos/resources/schuld3.png" style="width:95%;" align="center"/>
... a typical project history
<img src="../../demos/resources/schuld4.png" style="width:95%;" align="center"/>
"The definition of insanity is doing the same thing over and over again and expecting different results."
<br/>
<div align="right">– Albert Einstein</div>
The never-ending sore spot
<img src="../../demos/resources/kombar0.png" style="width:95%;" align="center"/>
The never-ending sore spot
<img src="../../demos/resources/kombar4.png" style="width:95%;" align="center"/>
"Software Analytics" to the rescue?
Definition of "Software Analytics"
"Software Analytics is analytics on <b>software data</b> for managers and <b class="green">software engineers</b> with the aim of empowering software development individuals and teams to gain and share insight from their data to <b>make better decisions</b>."
<br/>
<div align="right"><small>Tim Menzies and Thomas Zimmermann</small></div>
What kinds of software data?
Everything that accrues from developing and operating software systems:
* Static data
* Runtime data
* Chronological data
* Data from the software community
Which tools, data sources and data?
<img src="../../demos/resources/softwaredaten.png" style="width:85%;" align="center"/>
<b>A very large selection == very large possibilities?</b>
(M)y problem with (classic) Software Analytics
(M)y problem with Software Analytics
<img src="../../demos/resources/freq1_en.png" style="width:80%;" align="center"/>
(M)y problem with Software Analytics
<img src="../../demos/resources/freq1_en.png" style="width:80%;" align="center"/>
(M)y problem with Software Analytics
<img src="../../demos/resources/freq2_en.png" style="width:80%;" align="center"/>
(M)y problem with Software Analytics
<img src="../../demos/resources/freq3_en.png" style="width:80%;" align="center"/>
(M)y problem with Software Analytics
<img src="../../demos/resources/freq4_en.png" style="width:80%;" align="center"/>
(M)y problem with Software Analytics
<img src="../../demos/resources/freq5_en.png" style="width:80%;" align="center"/>
Others see this problem too!
Thomas Zimmermann in "One size does not fit all":
<br/><br/>
<div style="font-size:70%;" align="center">
"The main lesson: There is no one size fits all model. Even if you find models that work for most, they will not work for everyone. There is much <strong>academic research</strong> into <strong>general models</strong>. In contrast, <b><span class="green">industrial practitioners</span></b> are often fine with <b><span class="green">models that just work for their data</span></b> if the model provides some insight or allows them to work more efficiently."<br/>
</div>
But: "... the methods typically are applicable on different datasets."<br/>
<b>=> Analysis ideas are reusable!</b>
"It depends!" aka context
<div style="margin-left:160px;margin-top:80px;">
<img src="../../demos/resources/context.png" style="width:70%;" /></div>
<b>Individual systems == individual problems => individual analyses => individual insights!</b>
How, then, to put Software Analytics into practice?
<br/><br/>
<div align="center">
<h1><strong>Data Science</strong>
<br/><br/></h1>
<h2> <i style="font-weight: normal;">A lightweight implementation of </i><b><span class="blue">Software Analytics</span></b></h2>
</div>
Data Science
What is Data Science?
"Statistics on a <b><span class="green">Mac</span></b>."
<br/>
<br/>
<div align="right"><small>https://twitter.com/cdixon/status/428914681911070720</small></div>
My definition
What does "data" mean?
"Without data you‘re just another person with an opinion."
<br/>
<div align="right"><small>W. Edwards Deming</small></div>
<b>=> Deliver reliable insights based on <span class="green">facts</span></b>
What does "science" mean?
"The aim of science is to seek the simplest explanations of complex facts."
<br/>
<div align="right"><small>Albert Einstein</small></div>
<b>=> Work out new insights in an <span class="green">understandable</span> way</b>
Why Data Science?
A large (online) community
Free online courses, videos and tutorials (e.g. DataCamp with more than 4.6 million members)
Direct help (e.g. Stack Overflow or blog articles)
Learning, and learning from others, through online competitions (e.g. Kaggle)
Free and easy-to-use tools!
<br/>
<br/>
<img src="../../demos/resources/rvspy.png" style="width:95%;" >
Data Science is still trending!
End of explanation
log = pd.read_csv("../datasets/git_log_intellij.csv.gz")
log.head()
Explanation: "100" == max. Beliebtheit!
Mein "Bias"
Masterand: Schnelle Ergebnisse notwendig
Enterprise Java-Entwickler: Abends noch was Richtiges zu Stande bekommen
Allgemein: Weitere Standbeine "Data Science" und "Graphdatenbanken"
Wie weit weg sind <span class="green">SoftwareentwicklerInnen</span></b><br/> von <strong>Data Science</strong>?
Was ist ein Data Scientist?
"A data scientist is someone who<br/>
is better at statistics<br/>
than any <b><span class="green">software engineer</span></b><br/>
and better at <b><span class="green">software engineering</span></b><br/>
than any statistician."
<br/>
<br/>
<div align="right"><small>From https://twitter.com/cdixon/status/428914681911070720</small></div>
<b>Nicht so weit weg wie gedacht!</b>
Wie <span class="blue">Software Analytics</span> mit <strong>Data Science</strong> beginnen?
Bewährte Ansätze nutzen
<small>Roger Pengs "Stages of Data Analysis"</small>
I. Fragestellung
II. Explorative Datenanalyse
III. Formale Modellierung
IV. Interpretation
V. Kommunikation
<b>=> von der <strong>Frage</strong> über die <span class="green">Daten</span> zur <span class="blue" style="background-color: #FFFF00">Erkenntnis</span>!</b>
"Seven principles...
<small>...of inductive software engineering" (Tim Menzies)</small>
<div style="margin-top:20px">
<ol>
<li>Human before algorithms</li>
<li>Plan for Scale</li>
<li>Get Early Feedback</li>
<li>Be Open Minded</li>
<li>Be Smart with Your Learning</li>
<li>Live with the Data You Have</li>
<li>Develop a Broad Skill Set That Uses a Big Toolkit</li>
</ol>
</div>
Structuring your thoughts about an analysis
<br/>
<img src="../../demos/resources/canvas.png" style="width:85%;" >
How to make it traceable and reproducible?
Use Literate Statistical Programming
(Intent + Code + Data + Results)<br />
* Logical Step<br />
+ Automation<br />
= Literate Statistical Programming
Vehicle: computational notebooks
Example "Computational Notebook"
<br/>
<div align="center"><img src="../../demos/resources/notebook_approach.jpg"></div>
Apply best practices in notebooks
<br/>
<table align="center">
<tr><td><img src="../../demos/resources/sym_cols.png" style="width:30%;"/></td>
<td style="text-align: left;">One column per variable</td>
</tr>
<tr><td><img src="../../demos/resources/sym_rows.png" style="width:30%;"/></td>
<td style="text-align: left;">One row per observation</td>
</tr>
<tr><td><img src="../../demos/resources/sym_table.png" style="width:30%;"/></td>
<td style="text-align: left;">One table for all variables that belong together</td>
</tr>
<tr><td><img src="../../demos/resources/sym_link.png" style="width:30%;"/></td>
<td style="text-align: left;">One linking column for every table of an analysis</td>
</tr>
</table>
<br/>
<div align="right">
<small>Jeff Leek: The Elements of Data Analytic Style</small>
</div>
Use standard Data Science tools
e.g. one of the most popular stacks
Jupyter Notebook
Python 3
pandas
matplotlib
Jupyter Notebook
Interactive notebook
* Document-oriented analyses
* Executable code blocks
* Results visible immediately
* Everything in one place
* Every analysis step is visible
<b><span class="green">=> Work out new insights in an understandable way!</span></b>
Python 3
A popular programming language in data science
* Simple
* Effective
* Fast
* Fun
* Automation
<b><span class="green">=> Data analyses become repeatable</span></b>
pandas
A pragmatic data analysis tool
* Table-like data structures ("a programmable Excel worksheet")
* Very fast computations
* Flexible
* Expressive API
<b><span class="green">=> A good integration point for data sources!</span></b>
matplotlib
Programmable visualization library
Pragmatic creation of charts
Charts such as line, bar and XY plots and many others
Well integrated with pandas
<b><span class="green">=> Direct visualization of the charts / results!</span></b>
The Python ecosystem
<br/>
<div class="row">
<div class="column">
<b>Data Analysis</b>
<ul>
<li>NumPy</li>
<li>scikit-learn</li>
<li>TensorFlow</li>
<li>SciPy</li>
<li>PySpark</li>
<li>py2neo</li>
</ul>
</div>
<div class="column">
<b>Visualization and more</b>
<ul>
<li>pygal</li>
<li>Bokeh</li>
<li>python-pptx</li>
<li>RISE</li>
<li>Requests, xmldataset, Selenium, Flask...</li>
</ul>
</div>
</div>
<b><span class="green">=> Bietet in ganz individuellen Situationen die notwendige Flexibilität!</span></b>
Andere Technologien
Jupyter Notebook arbeitet auch mit anderen Technologieplattformen zusammen, z. B. mit
* jQAssistant Scanner / Neo4j Graphdatenbank
* JVM-Sprachen via beakerx / Tablesaw
* bash
<b><span class="green">=> Spezielle Technologie? Wird (meist) unterstützt!</span></b>
Meine Empfehlungen zum Einstieg
Meine TOP 5's*
https://www.feststelltaste.de/category/top5/
Kurse, Videos, Blogs, Bücher und mehr...
<small>*einige Seiten befinden sich noch in der Entwicklung</small>
My book recommendations
Adam Tornhill: Software Design X-Ray
Wes McKinney: Python For Data Analysis
Jeff Leek: The Elements of Data Analytic Style
Tim Menzies, Laurie Williams, Thomas Zimmermann: Perspectives on Data Science for Software Engineering
Mini-tutorial at <small><code>https://github.com/feststelltaste/software-analytics-workshop</code></small>
Hands-On
Some examples from practice
Analyzing an existing modularization cut
Identifying performance problems in distributed systems
Determining potential knowledge loss
Evaluating the open-source projects in use
...
What are your analyses from practice?
<img src="../../demos/resources/vorerfahrung.png" style="width:95%;" align="center"/>
Programming example
Case study
IntelliJ IDEA
IDE for Java developers
Written almost entirely in Java
A large and long-active project
I. Question (1/3)
Write the question down explicitly
Explain the analysis idea understandably
I. Question (2/3)
<b>Question</b>
* Which source code files are particularly complex and changed frequently in recent times?
I. Question (3/3)
Implementation ideas
Tools: Jupyter, Python, pandas, matplotlib
Heuristics:
"complex": many lines of source code
"changes ... frequently": high number of commits
"in recent times": the last 90 days
Meta-goal: get to know the basic mechanics.
II. Exploratory data analysis
Find and load potential software data
Clean and filter the raw data
We load a data export from a Git repository.
End of explanation
log.info()
Explanation: We take a look at basic information about the dataset.
End of explanation
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
Explanation: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> entries (= rows)
We convert the timestamps from text into objects.
End of explanation
# use log['timestamp'].max() instead of pd.Timedelta('today') to avoid outdated data in the future
recent = log[log['timestamp'] > log['timestamp'].max() - pd.Timedelta('90 days')]
recent.head()
Explanation: We look only at the most recent changes.
End of explanation
java = recent[recent['filename'].str.endswith(".java")].copy()
java.head()
Explanation: We only want to work with Java code.
End of explanation
changes = java.groupby('filename')[['sha']].count()
changes.head()
Explanation: III. Formal modeling
Create new views
Join in additional data
We count the number of changes per file.
End of explanation
loc = pd.read_csv("../datasets/cloc_intellij.csv.gz", index_col=1)
loc.head()
Explanation: We bring in information about the lines of code...
End of explanation
hotspots = changes.join(loc[['code']]).dropna(subset=['code'])
hotspots.head()
Explanation: ...and join it with the existing data.
End of explanation
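One optional way to condense the two signals into a single ranking (an added sketch, not part of the original workshop material) is to combine the percentile ranks of change frequency and file size:
# Rank-based "hotspot score": values near 1.0 are both frequently changed and large
hotspots_scored = hotspots.copy()
hotspots_scored['change_rank'] = hotspots_scored['sha'].rank(pct=True)
hotspots_scored['size_rank'] = hotspots_scored['code'].rank(pct=True)
hotspots_scored['hotspot_score'] = (hotspots_scored['change_rank'] + hotspots_scored['size_rank']) / 2
hotspots_scored.sort_values(by='hotspot_score', ascending=False).head()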
top10 = hotspots.sort_values(by="sha", ascending=False).head(10)
top10
Explanation: IV. Interpretation
Work out the core result of the analysis
Make the central message / new insights clear
We show only the TOP 10 hotspots in the code.
End of explanation
ax = top10.plot.scatter('sha', 'code');
for k, v in top10.iterrows():
ax.annotate(k.split("/")[-1], v)
Explanation: V. Communication
Transform the insights into an understandable visualization
Communicate the next steps after the analysis
We create an XY plot from the TOP 10 list.
End of explanation |
1,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Fluorinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
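# A hypothetical illustration only -- placeholder name and email,
# not real author details:
# DOC.set_author("Jane Doe", "jane.doe@example.org")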
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
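# Hedged sketch (placeholder details; repeat the call once per contributor):
# DOC.set_contributor("John Smith", "john.smith@example.org")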
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
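# Illustrative sketch only -- assumed choices, not a real model description.
# For cardinality 1.N, call set_value once per selected choice:
# DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")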
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
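# Hypothetical example (the level count below is an assumption for
# illustration, not a documented value):
# DOC.set_value(85)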
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
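# Illustrative sketch, assuming a high-top configuration:
# DOC.set_value(True)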
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high top? High-top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
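# Hypothetical example in the "30 min" style suggested by the description:
# DOC.set_value("30 minutes")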
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
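# Illustrative sketch (an assumed choice from the valid list above):
# DOC.set_value("hybrid sigma-pressure")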
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
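# Hypothetical sketch of a multi-valued answer (the variable set is assumed):
# DOC.set_value("surface pressure")
# DOC.set_value("wind components")
# DOC.set_value("temperature")
# DOC.set_value("water vapour")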
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
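# Illustrative only -- assumed characteristics picked from the valid choices:
# DOC.set_value("semi-Lagrangian")
# DOC.set_value("mass-conserving")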
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
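# Hypothetical sketch (the aerosol species listed are assumptions):
# DOC.set_value("sulphate")
# DOC.set_value("sea salt")
# DOC.set_value("dust")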
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
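# Hypothetical example (the interval count is illustrative only):
# DOC.set_value(14)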
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
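# Illustrative sketch (an assumed gas list chosen from the valid choices):
# DOC.set_value("CO2")
# DOC.set_value("CH4")
# DOC.set_value("O3")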
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
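# Hypothetical example (single choice for cardinality 1.1; value assumed):
# DOC.set_value("Monte Carlo Independent Column Approximation")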
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
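# Illustrative example (the closure order below is an assumption):
# DOC.set_value(1)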
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
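# Hypothetical multi-valued sketch (the process list is assumed):
# DOC.set_value("entrainment")
# DOC.set_value("detrainment")
# DOC.set_value("updrafts")
# DOC.set_value("downdrafts")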
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation have an impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
1,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is an effort to replicate the lessons found here
Step1: We're going to fetch the data file we need for this exercise from the following URL
Step2: Let's make a plot of the data, so we know what we're dealing with.
Step3: Let's verify that our data is distributed normally.
Step4: What is the optimal "lag" distance between points? Use the utilities scattergram() function to help determine that distance.
Step5: Here, we plot the semivariogram and overlay a horizontal line for the sill, $c$.
Step6: Looking at the figure above, we can say that the semivariogram levels off around 4000, so we can set the range, $a$, to that value and model the covariance function.
Step7: We can visualize the distribution of the lagged distances with the laghistogram() function.
Step8: If we want to perform anisotropic kriging, we can visualize the distribution of the anisotropic lags using the anisotropiclags() function. Note that we use the bearing, which is measured in degrees, clockwise from North. | Python Code:
import numpy as np
import pandas as pd
from geostatsmodels import utilities, kriging, variograms, model, geoplot
import matplotlib.pyplot as plt
from scipy.stats import norm
Explanation: This notebook is an effort to replicate the lessons found here:
http://people.ku.edu/~gbohling/cpe940/Variograms.pdf
We'll do all of our imports here at the top.
Note: This is a work in progress, and is in the process of being updated to support Python 3+.
End of explanation
cluster_file = 'ZoneA.dat'
column_names = ["x", "y", "thk", "por", "perm", "log-perm", "log-perm-prd", "log-perm-rsd"]
z = pd.read_csv(cluster_file, sep=" ", skiprows=10, names=column_names)
P = np.array(z[['x','y','por']])
Explanation: We're going to fetch the data file we need for this exercise from the following URL:
http://people.ku.edu/~gbohling/geostats/WGTutorial.zip
Subsequent runs of this Notebook should use a local copy, saved in the current directory.
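If the file is not present yet, a minimal fetch-and-extract sketch (assuming the zip at the URL above unpacks ZoneA.dat into the working directory) is:
import urllib.request, zipfile
urllib.request.urlretrieve("http://people.ku.edu/~gbohling/geostats/WGTutorial.zip", "WGTutorial.zip")
with zipfile.ZipFile("WGTutorial.zip") as zf:
    zf.extractall()   # should leave ZoneA.dat alongside this notebook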
End of explanation
fig, ax = plt.subplots()
fig.set_size_inches(8,8)
cmap = geoplot.YPcmap
ax.scatter(z.x/1000, z.y/1000, c=z.por, s=64, cmap=cmap)
ax.set_aspect(1)
plt.xlim(-2, 22)
plt.ylim(-2, 17.5)
plt.xlabel('Easting [m]')
plt.ylabel('Northing [m]')
th=plt.title('Porosity %')
Explanation: Let's make a plot of the data, so we know what we're dealing with.
End of explanation
hrange = (12, 17.2)
mu, std = norm.fit(z.por)
ahist=plt.hist(z.por, bins=7, density=True, alpha=0.6, color='c', range=hrange)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: mu = %.2f, std = %.2f" % (mu, std)
th=plt.title(title)
xh=plt.xlabel('Porosity (%)')
yh=plt.ylabel('Density')
xl=plt.xlim(11.5, 17.5)
yl=plt.ylim(-0.02, 0.45)
import scipy.stats as stats
qqdata = stats.probplot(z.por, dist="norm", plot=plt, fit=False)
xh=plt.xlabel('Standard Normal Quantiles')
yh=plt.ylabel('Sorted Porosity Values')
fig=plt.gcf()
fig.set_size_inches(8, 8)
th=plt.title('')
Explanation: Let's verify that our data is distributed normally.
End of explanation
pw = utilities.pairwise(P)
geoplot.hscattergram(P, pw, 1000, 500)
geoplot.hscattergram(P, pw, 2000, 500)
geoplot.hscattergram(P, pw, 3000, 500)
Explanation: What is the optimal "lag" distance between points? Use the utilities scattergram() function to help determine that distance.
End of explanation
tolerance = 250
lags = np.arange(tolerance, 10000, tolerance*2)
sill = np.var(P[:, 2])
geoplot.semivariogram(P, lags, tolerance)
Explanation: Here, we plot the semivariogram and overlay a horizontal line for the sill, $c$.
End of explanation
svm = model.semivariance(model.spherical, [4000, sill])
geoplot.semivariogram(P, lags, tolerance, model=svm)
Explanation: Looking at the figure above, we can say that the semivariogram levels off around 4000, so we can set the range, $a$, to that value and model the covariance function.
End of explanation
geoplot.laghistogram(P, pw, lags, tolerance)
Explanation: We can visualize the distribution of the lagged distances with the laghistogram() function.
End of explanation
geoplot.anisotropiclags(P, pw, lag=2000, tol=250, angle=45, atol=15)
geoplot.anisotropiclags(P, pw, lag=2000, tol=250, angle=135, atol=15)
Explanation: If we want to perform anisotropic kriging, we can visualize the distribution of the anisotropic lags using the anisotropiclags() function. Note that we use the bearing, which is measured in degrees, clockwise from North.
End of explanation |
1,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 2
Step1: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
from scipy.interpolate import interp2d
Explanation: Interpolation Exercise 2
End of explanation
x=np.array([5,5,5,5,5,5,4,3,2,1,0,-1,-2,-3,-4,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-4,-3,-2,-1,0,1,2,3,4,5,5,5,5,5,0])
y=np.array([0,1,2,3,4,5,5,5,5,5,5,5,5,5,5,5,4,3,2,1,0,-1,-2,-3,-4,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-4,-3,-2,-1,0])
f=np.zeros(len(x))
f[len(x)-1]=1.0
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
xnew=np.linspace(-5,5,100)
ynew=np.linspace(-5,5,100)
Xnew,Ynew=np.meshgrid(xnew,ynew)
fnew=interp2d(x,y,f,kind='cubic')
Fnew=fnew(xnew,ynew)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
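A sketch of the griddata route named in the instructions (the accompanying solution uses interp2d, which newer SciPy releases deprecate); it reuses the x, y, f, Xnew, Ynew defined in the cells above:
# cubic interpolation of the scattered samples onto the meshgrid points
Fnew_gd = griddata((x, y), f, (Xnew, Ynew), method='cubic')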
End of explanation
plt.contourf(xnew,ynew,Fnew)
plt.set_cmap('RdBu')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Cubic Interpolation')
plt.tick_params(direction='out')
plt.colorbar()
assert True # leave this to grade the plot
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation |
1,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression and Subset Selection
StatML
Step2: Recall LinearRegression.fit
Throughout let $p < n$
Fit in OLS solves the following, on training set
$$
\hat \beta = (X^\top X)^{-1} X^\top y
$$
where $X,y$ are $n \times p$ and $n$ arrays.
Linear solve
Step4: STOP
Step5: Solution to Exercise 3.2.1
In Gram-Schmidt the projection operator is $$\frac{\langle x, z\rangle}{\langle z, z \rangle} z,$$ and so the projection of $x_j$ onto the space spanned by $z_k$ is
$$ \frac{x_j^\top z_k}{z_k^\top z_k} z_k = \hat \gamma_{j,k} z_k.$$
The result is immediate from algorithm
Step6: STOP | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
import scipy as sc
Explanation: Linear Regression and Subset Selection
StatML: Lecture 3
Prof. James Sharpnack
Reading: "The Elements of Statistical Learning," Hastie, Tibshirani, Friedman, Ch. 3 (ESL)
End of explanation
def sim_corr_lm(n,p,rho,beta,sigma):
"""Simulate a design matrix with all columns having marginal correlation rho."""
assert p < n and rho < 1 and rho >= 0, "p must be less than n and rho in [0,1)"
Sigma = (1 - rho)*np.eye(p) + rho*np.ones((p,p))
X = np.random.multivariate_normal(np.zeros(p),Sigma,n)
y = X @ beta + np.random.normal(0,sigma,n)
return X,y
n, p, rho = 100, 2, .8
beta = np.random.normal(0,1,p)
sigma = 1.
X, y = sim_corr_lm(n,p,rho,beta,sigma)
plt.plot(X[:,0],X[:,1],'.')
Explanation: Recall LinearRegression.fit
Throughout let $p < n$
Fit in OLS solves the following, on training set
$$
\hat \beta = (X^\top X)^{-1} X^\top y
$$
where $X,y$ are $n \times p$ and $n$ arrays.
Linear solve:
$$
(X^\top X) \hat \beta = X^\top y
$$
Recall LinearRegression.predict
Apply predict to training set then
$$
\hat y = X \hat \beta = X (X^\top X)^{-1} X^\top y
$$
is a projection of $y$ onto the column space of $X$. Projection in $n$-D space.
Projections are idempotent,
$$
P := X (X^\top X)^{-1} X^\top
$$
has
$$
P P = X (X^\top X)^{-1} X^\top X (X^\top X)^{-1} X^\top = X (X^\top X)^{-1} X^\top.
$$
<img src="projection.png" width=70%>
Image from wikipedia.
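A quick numerical check of these claims (a sketch reusing the X, y simulated above):
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least squares solution of the normal equations
P = X @ np.linalg.inv(X.T @ X) @ X.T               # projection onto the column space of X
print(np.allclose(P @ P, P))                       # idempotent: P P = P
print(np.allclose(P @ y, X @ beta_hat))            # \hat y = P y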
Exercise 3.1
Suppose that we have a perfectly reasonable $n \times p$ design matrix $X$, and $n$ response vector $y$ such that $X^\top X$ is invertible. Suppose that we duplicate the columns of $X$ to make an $n \times (2p)$ matrix. Suppose that we want to run OLS with the new data by finding solutions to the normal equation.
1. From the projection intuition above, what is the impact on $\hat y$?
2. Show that a valid solution to the new normal equations is $\tilde \beta = \frac 12 [\hat \beta; \hat \beta]$?
3. What form do other valid solutions take?
STOP
Answer to 3.1
No impact on $\hat y$, because there is no change in the column space of $X$.
Let $\tilde X = (X,X)$ then
$$\tilde X^\top \tilde X = [X^\top X, X^\top X; X^\top X, X^\top X]$$
This is not invertible! But normal equations are...
$$ \tilde X^\top \tilde X \tilde \beta = \tilde X^\top y = [X^\top y; X^\top y] $$
Which gives,
$$
[X^\top X, X^\top X; X^\top X, X^\top X] \frac{[\hat \beta ; \hat \beta]}{2} = \frac 12 [X^\top X \hat \beta + X^\top X \hat \beta; X^\top X \hat \beta + X^\top X \hat \beta] = [X^\top X \hat \beta;X^\top X \hat \beta]
$$
but because $\hat \beta$ solves the original normal equations this is equal to
$$ = [X^\top y; X^\top y] = \tilde X^\top y.$$
There was nothing special about $1/2$ and we could repeat the arguments above with $[\theta \hat \beta; (1-\theta) \hat \beta]$
Regression by Successive Orthogonalization
ESL pg. 54
Input $x_0=1, x_1, \ldots, x_p$ columns of design matrix.
Init $z_0 = x_0 = 1$
For $j = 1,\ldots,p$
Regress $x_j$ on $z_0,\ldots,z_{j-1}$ giving $$\hat \gamma_{j,l} = \frac{z_l^\top x_j}{z_l^\top z_l},$$ $l = 0, \ldots, j-1$ and $z_j = x_j - \sum_{k=0}^{j-1} \hat \gamma_{j,k} z_k$
Regress $y$ on the residual $z_p$ to give $\hat \beta_p$
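A minimal NumPy sketch of the algorithm above (assuming X holds the columns $x_1, \ldots, x_p$):
def successive_orthogonalization(X, y):
    n, p = X.shape
    Z = np.ones((n, p + 1))                     # z_0 = x_0 = 1
    for j in range(1, p + 1):
        xj = X[:, j - 1]
        zj = xj.copy()
        for k in range(j):
            gamma_jk = (Z[:, k] @ xj) / (Z[:, k] @ Z[:, k])
            zj = zj - gamma_jk * Z[:, k]        # subtract the projection onto z_k
        Z[:, j] = zj
    zp = Z[:, p]
    return (y @ zp) / (zp @ zp)                 # regress y on the residual z_p -> beta_p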
Regression by Successive Orthogonalization
What does "regress onto" mean?
Solving the normal equation
$$
Z^\top Z \hat \gamma_j = Z^\top x_j
$$
where $Z$ has columns $z_0, \ldots z_{j-1}$.
Why is this any easier?
$Z$ is orthogonal, i.e. the columns are orthogonal,
$$
z_j^\top z_k = 0, j\ne k
$$
which means $Z^\top Z$ is diagonal (easy to invert).
Exercise 3.2.1. Show that Successive Orthogonalization is equivalent to the Gram-Schmidt procedure for finding an orthonormal basis of the column space of $X$.
Regression by Successive Orthogonalization
Regress $y$ on the residual $z_p$ to give $\hat \beta_p$?
We know that $z_p$ is the only basis element that contains $x_p$ and that regressing $y$ onto $Z$ is equivalent to regressing $y$ onto $X$.
Why?
We can write these matrices as
$$
X = Z \Gamma
$$
where $Z$ is orthogonal and $\Gamma$ is upper triangular.
Let $D$ be the diagonal matrix with $\| z_j\|$ on diagonal.
Then
$$
X = Z D^{-1} D \Gamma = Q R
$$
where $Q = Z D^{-1}$ is $n \times p$ and $R = D \Gamma$ is $p \times p$.
$Q$ is orthonormal ($Q^\top Q = I$) and $R$ is upper triangular.
Regression by Successive Orthogonalization
Normal eqn is
$$
X^\top X \hat \beta = X^\top y \equiv R^\top R \hat \beta = R^\top (Q^\top y)
$$
Upper triangular matrices are easy to invert!
$$\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \end{bmatrix}$$
One of many decompositions that can make linear regression easy (after the decomposition is made).
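For illustration, a minimal sketch of the QR route (reusing the simulated X, y):
from scipy.linalg import solve_triangular
Q, R = np.linalg.qr(X)                      # X = Q R, Q orthonormal, R upper triangular
beta_qr = solve_triangular(R, Q.T @ y)      # one back-substitution, no explicit inverse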
Regression by Successive Orthogonalization
Input $x_0=1, x_1, \ldots, x_p$ columns of design matrix.
Init $z_0 = x_0 = 1$
For $j = 1,\ldots,p$
Regress $x_j$ on $z_0,\ldots,z_{j-1}$ giving $$\hat \gamma_{j,l} = \frac{z_l^\top x_j}{z_l^\top z_l},$$ $l = 0, \ldots, j-1$ and $z_j = x_j - \sum_{k=0}^{j-1} \hat \gamma_{j,k} z_k$
Regress $y$ on the residual $z_p$ to give $\hat \beta_p$
Only gives us $\hat \beta_p$! Not a great algorithm in its current form.
Algorithm exposes the effect of correlated input $x_j$.
Suppose that $y$ follows the linear model
$$
y = X \beta + \epsilon
$$
where $\epsilon_i$ is iid normal$(0,\sigma^2)$.
Then
Regress $y$ on the residual $z_p$ to give $\hat \beta_p$
Means $$
\hat \beta_p = \frac{y^\top z_p}{z_p^\top z_p}
$$
If $\|z_p\|$ is small then this is unstable (high variance). When does this happen?
$$
z_j = x_j - \sum_{k=0}^{j-1} \hat \gamma_{j,k} z_k
$$
If $x_p$ is correlated with $x_0,\ldots,x_{p-1}$ (small residual when regressed onto) then $\hat \beta_p$ is unstable.
Exercise 3.2.2
Below is some code for generating a design matrix with correlated X variables, and the response vector. The rho parameter is the correlation of the X variables, so that $rho = 1$ means that they are perfectly correlated. Use the code below to generate the true beta once (also a parameter of sim_corr_lm) and set sigma=1. Choose a sequence of rho's: Rhos = [0,.2,.4,.6,.8,.9,.95,.99] and for each one run 100 trials of the following: simulate from the linear model, fit OLS, and save the first coefficient beta_1. For each rho in the list, calculate the variance of beta_1 and plot the variance as a function of rho.
End of explanation
## Solution to 3.2.2
def sample_coef_corr_lm(trials,**kwargs):
"""Sample the OLS coefficients for rho-correlated input."""
beta_sim = []
for t in range(trials):
X,y = sim_corr_lm(**kwargs)
beta_sim += [linear_model.LinearRegression(fit_intercept=False).fit(X,y).coef_]
return np.array(beta_sim)
## Sample coefficients with different rho and plot variance of beta_1 as rho increases
Rhos = [0,.2,.4,.6,.8,.9,.95,.99]
coef_vars = [sample_coef_corr_lm(100,n=n,p=p,rho=rho,beta=beta,sigma=sigma)[:,0].var(axis=0) for rho in Rhos]
plt.plot(Rhos,coef_vars)
plt.xlabel('X correlation')
plt.ylabel('beta variance')
Explanation: STOP
End of explanation
## Simulate again
n, p, rho = 100, 8, .8
beta = np.random.normal(0,1,p)
sigma = 1.
X, y = sim_corr_lm(n,p,rho,beta,sigma)
## SVD
U,d,Vt = sc.linalg.svd(X)
## Check to make sure that we understand
print(U.shape, Vt.shape, d.shape)
np.abs(X - U[:,:p] @ np.diag(d) @ Vt).sum()
Explanation: Solution to Exercise 3.2.1
In Gram-Schmidt the projection operator is $$\frac{\langle x, z\rangle}{\langle z, z \rangle} z,$$ and so the projection of $x_j$ onto the space spanned by $z_k$ is
$$ \frac{x_j^\top z_k}{z_k^\top z_k} z_k = \hat \gamma_{j,k} z_k.$$
The result is immediate from algorithm:
$$z_j = x_j - \sum_{k=0}^{j-1} \hat \gamma_{j,k} z_k$$
Singular value decomposition
Recall that QR decomposition was computed to make LinearRegression.fit easier.
There are other decompositions that can also be used: Cholesky and Singular Value.
Singular Value Decomposition (for $n > p$ and $X^\top X$ invertible)
$$
X = U D V^\top,
$$
- U is orthonormal ($U^\top U = I$) $n \times p$
- V is orthonormal $p \times p$
- D is diagonal
If $X$ is singular, there is a non-zero vector $z$ such that $Xz = 0$, and so one of the singular values is $0$. This is equivalent to a residual in Succ. Ortho. being zero.
Computing the SVD is more expensive than QR in general.
Singular Value Decomposition
Suppose that we precomputed the SVD,
$$
X = UDV^\top.
$$
Then the Gram matrix is
$$
X^\top X = V D U^\top U D V^\top = V D^2 V^\top,
$$
the Spectral decomposition.
You should derive that
$$
\hat \beta = V D^{-1} U^\top y.
$$
Singular Value Decomposition
The coefficient formula
$$
\hat \beta = V D^{-1} U^\top y
$$
means that
$$
\hat \beta = \sum_{j=0}^p d_j^{-1} (u_j^\top y) v_j
$$
which means that
$\hat \beta$ is unstable (high variance in some direction) if the singular values $d_j$ are very small.
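A one-line sketch of this formula, reusing the U, d, Vt returned by sc.linalg.svd(X) in the SVD cell:
beta_svd = Vt.T @ ((U[:, :p].T @ y) / d)    # V D^{-1} U^T y; a tiny d_j inflates its term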
Exercise 3.3
Below you can see some code for computing the SVD of $X^\top X$. Using the same selection of Rhos for each rho compute the singular values and plot them in order from largest to smallest. You should get one line for each rho. What does this say about the stability of $\hat \beta$ as rho approaches 1.
End of explanation
## Sample coefficients with different rho and store eigenvalues and beta variances
Rhos = [0,.2,.4,.6,.8,.9,.95,.99]
res_mat = []
for rho in Rhos:
X, y = sim_corr_lm(n,p,rho,beta,sigma)
U,d,Vt = sc.linalg.svd(X)
res_mat += [d]
## plot the ordered eigenvalues for each rho
colors = plt.cm.jet(np.linspace(0,1,len(Rhos)))
for col, rho, res_vec in zip(colors,Rhos,res_mat):
plt.plot(res_vec,label=str(rho),c=col)
plt.legend()
_ = plt.xlabel('eigenvalue index')
Explanation: STOP
End of explanation |
1,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pyplot
pyplot is a context based functional API offering meaningful defaults. It's a concise API and very similar to matplotlib's pyplot. Users new to bqplot should use pyplot as a starting point. Users create figure and mark objects using pyplot functions.
Steps for building plots in pyplot
Step1: For creating other marks (like scatter, pie, bars, etc.), only step 2 needs to be changed. Let's look at a simple example to create a bar chart
Step2: Multiple marks can be rendered in a figure. It's as easy as creating marks one after another. They'll all be added to the same figure! | Python Code:
import bqplot.pyplot as plt
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the figure object
fig = plt.figure(title="Simple Line Chart")
# 2. By default axes are created with basic defaults. If you want to customize the axes create
# a dict and pass it to `axes_options` argument in the marks
axes_opts = {"x": {"label": "X"}, "y": {"label": "Y"}}
# 3. Create a Lines mark by calling plt.plot function
line = plt.plot(
x=x, y=y, axes_options=axes_opts
) # note that custom axes options are passed here
# 4. Render the figure using plt.show()
plt.show()
Explanation: Pyplot
pyplot is a context based functional API offering meaningful defaults. It's a concise API and very similar to matplotlib's pyplot. Users new to bqplot should use pyplot as a starting point. Users create figure and mark objects using pyplot functions.
Steps for building plots in pyplot:
1. Create a figure object using plt.figure()
* (Optional steps)
* Scales can be customized using plt.scales function
* Axes options can be customized by passing a dict to axes_options argument in the marks' functions
* Create marks using pyplot functions like plot, bar, scatter etc. (All the marks created will be automatically added to the figure object created in step 1)
* Render the figure object using the following approaches:
* Using plt.show function which renders the figure in the current context along with toolbar for panzoom etc.
* Using display on the figure object created in step 1 (toolbar doesn't show up in this case)
pyplot also offers many helper functions. A few are listed here:
* plt.xlim: sets the domain bounds of the current 'x' scale
* plt.ylim: sets the domain bounds of the current 'y' scale
* plt.grids: shows/hides the axis grid lines
* plt.xlabel: sets the X-Axis label
* plt.ylabel: sets the Y-Axis label
* plt.hline: draws a horizontal line at a specified level
* plt.vline: draws a vertical line at a specified level
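A rough sketch of how a few of these helpers are typically called once a figure and mark exist (argument conventions assumed from the list above, not verified):
plt.xlabel("X")        # axis labels, as in the examples below
plt.ylabel("Y")
plt.xlim(-10, 10)      # domain bounds of the current 'x' scale
plt.ylim(-1.5, 1.5)    # domain bounds of the current 'y' scale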
Let's look at the same examples which were created in the Object Model Notebook
End of explanation
# first, let's create two vectors x and y to plot a bar chart
x = list("ABCDE")
y = np.random.rand(5)
# 1. Create the figure object
fig = plt.figure(title="Simple Bar Chart")
# 2. Customize the axes options
axes_opts = {
"x": {"label": "X", "grid_lines": "none"},
"y": {"label": "Y", "tick_format": ".0%"},
}
# 3. Create a Bars mark by calling plt.bar function
bar = plt.bar(x=x, y=y, padding=0.2, axes_options=axes_opts)
# 4. directly display the figure object created in step 1 (note that the toolbar no longer shows up)
fig
Explanation: For creating other marks (like scatter, pie, bars, etc.), only step 2 needs to be changed. Let's look at a simple example to create a bar chart:
End of explanation
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the figure object
fig = plt.figure(title="Scatter and Line")
# 3. Create line and scatter marks
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = plt.plot(x=x, y=y, colors=["green"], stroke_width=3)
scatter = plt.scatter(x=x, y=y_noise, colors=["red"], stroke="black")
# setting x and y axis labels using pyplot functions. Note that these functions
# should be called only after creating the marks
plt.xlabel("X")
plt.ylabel("Y")
# 4. render the figure
fig
Explanation: Multiple marks can be rendered in a figure. It's as easy as creating marks one after another. They'll all be added to the same figure!
End of explanation |
1,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Final Exam Review
CSCI 1360E
Step1: Another example is Part G, asking you to write a function that reverses the elements of an array. According to the instructions, you were not allowed to use either the [
Step2: Polite imports
This isn't technically an error, but rather a very strong convention. Whenever you are importing modules, these imports should all go in one place
Step3: if statements are adults; they can handle being short-staffed, as it were. If there's literally nothing to do in an else clause, you're perfectly able to omit it entirely
Step4: From A7 (but also A4, so this is copy/pasted from the Midterm Review)
len(ndarray) versus ndarray.shape
For the question about checking that the lengths of two NumPy arrays were equal, a lot of people chose this route
Step5: which works, but only for one-dimensional arrays.
For anything other than 1-dimensional arrays, things get problematic
Step6: These definitely are not equal in length. But that's because len doesn't measure length of matrices...it only measures the number of rows (i.e., the first axis--which in this case is 5 in both, hence it thinks they're equal).
You definitely want to get into the habit of using the .shape property of NumPy arrays
Step7: We get the answer we expect.
The dangers of in
Step8: The keyword in is a great tool for testing if some value is present in a collection. But it has a potential downside.
Think of it this way
Step9: Note that this function requires a loop.
So here, then, is the danger
Step10: Of course, that code isn't complete | Python Code:
def dot(arr1, arr2):
if arr1.shape[0] != arr2.shape[0]:
return None
p = arr1 * arr2 # Multiplies corresponding elements of the two arrays...no loops needed!
s = p.sum() # Computes the sum of all the elements...still no loops needed!
return s
Explanation: Final Exam Review
CSCI 1360E: Foundations for Informatics and Analytics
Material
Anything in all lectures is fair game!
Anything in all assignments is fair game!
...but there will be a heavy preference for everything after the midterm!
Topics
Data Science
- Definition
- Intrinsic interdisciplinarity
- "Greater Data Science"
Python Language
- Philosophy
- Compiled vs Interpreted
- Variables, literals, types, operators (arithmetic and comparative)
- Casting, typing system
- Syntax (role of whitespace)
Data Structures
- Collections (lists, sets, tuples, dictionaries)
- Iterators, generators, and list comprehensions
- Loops (for, while), loop control (break, continue), and utility looping functions (zip, enumerate)
- Variable unpacking
- Indexing and slicing
- Differences in indexing between collection types (tuples versus sets, lists versus dictionaries)
Conditionals
- if / elif / else structure
- Boolean algebra (stringing together multiple conditions with or and and)
Exception handling
- try / except structure, and what goes in each block
Functions
- Defining functions
- Philosophy of a function
- Defining versus calling (invoking) a function
- Positional (required) versus default (optional) arguments
- Keyword arguments
- Functions that take any number of arguments
- Object references, and their behaviors in Python
NumPy
- Importing external libraries
- The NumPy ndarray, its properties (.shape), and indexing
- NumPy submodules
- Vectorized arithmetic in lieu of explicit loops
- NumPy array dimensions, or axes, and how they relate to the .shape property
- Array broadcasting, uses and rules
- Fancy indexing with boolean and integer arrays
File I/O
- Reading from / writing to files
- Gracefully handling file-related errors
Linear Algebra
- Vectors and matrices
- Dot products and matrix multiplication
- Dimensions of data versus dimensionality of NumPy arrays
Probability and Statistics
- Axioms of Probability
- Dependence and independence
- Conditional probability
- Bayes' Theorem
- Probability distributions
- First-, second-, and higher-order statistics
Data Visualization and Exploration
- Plotting data (line plots, scatter plots, histograms)
- Plotting images, or matrices as images (colormaps)
- Strategies for visualizing data of different dimensions
- Accounting for missing data
- Matplotlib, pandas
Natural Language Processing
- Bag of words model
- Preprocessing (stop words, stemming)
- TF-IDF
Image Processing
- Pixel representation (RGB, grayscale)
- Thresholding
- Convolutional filters (blurring and sharpening)
- Segmentation
Machine Learning
- Supervised versus unsupervised learning
- Classification algorithms (KNN, SVM)
- Clustering algorithms (K-means, Spectral)
- Bias-variance trade-off
- Cross-validation
- Training and testing
Open Data Science
- Pillars of Open Science
- Anatomy of an open source project
- Importance of licensing
Final Exam Logistics
The format will be just like the midterm. That is to say: very close to that of JupyterHub assignments (there may or may not be autograders to help).
It will be 180 minutes. Don't expect any flexibility in this time limit, so plan accordingly.
You are NOT allowed to use internet resources or collaborate with your classmates (enforced by the honor system), but you ARE allowed to use lecture and assignment materials from this course, as well as terminals in the JupyterHub environment or on your local machine.
I will be available on Slack for questions most of the day tomorrow, from 9am until about 3pm (then will be back online around 4pm until 5pm). Shoot me a direct message if you have a conceptual / technical question relating to the final, and I'll do my best to answer ASAP.
JupyterHub Logistics
The final will be released on JupyterHub at 12:00am on Friday, July 28.
It will be collected at 12:00am on Saturday, July 29. The release and collection will be done by automated scripts, so believe me when I say there won't be any flexibility on the parts of these mechanisms.
Within that 24-hour window, you can start the exam (by "Fetch"-ing it on JupyterHub) whenever you like.
ONCE YOU FETCH THE FINAL, YOU WILL HAVE 180 MINUTES--3 HOURS--FROM THAT MOMENT TO SUBMIT THE COMPLETED FINAL BACK.
Furthermore, it's up to you to keep track of that time. Look at your system clock when you click "Fetch", or use the timer app on your smartphone, to help you track your time use. Once the 180 minutes are up, the exam is considered late.
In theory, this should allow you to take the final when it is most convenient for you. Obviously you should probably start no later than 9:00PM on Friday, since any submissions after midnight will be considered late, even if you started at 11:58PM.
Tough Assignment Questions and Concepts
From A5
Do as I say, and not as I do?
Ok, for some reason, a lot of people outright ignored directions on this homework and lost points as a result.
In Part A, you computed the dot product of two arrays. Some people used loops; this was both explicitly disallowed in the directions, and it's more work than using NumPy array broadcasting!
End of explanation
def reverse_array(arr):
start = arr.shape[0] - 1 # We're STARTING at the END
stop = -1 # Recall that range() always stops at "index - 1"
step = -1 # Since we start at the end, we increment by -1
# np.arange is the same as range(), just auto-returns a NumPy array
# You could use range() here, too; you'd have to wrap it in list()
indices = np.arange(start, stop, step)
print(indices)
reversed = arr[indices] # Literally just...fancy index.
return reversed
import numpy as np
before = np.array([10, 20, 30, 40, 50])
after = reverse_array(before)
print("Before reversal: ", before)
print("After reversal: ", after)
Explanation: Another example is Part G, asking you to write a function that reverses the elements of an array. According to the instructions, you were not allowed to use either the [::-1] notation, or the .reversed() function. Doing so anyway means you missed this bit of range cleverness:
Recall that range can take up to three arguments: a starting point, an ending point, and an increment. 99% of the time, you're starting at 0 and incrementing by 1, so the only argument you ever really give it is the length.
But if you think about it, you could use this to start at the end, end at the start, and increment by -1.
Even cooler: you could turn this range object into a list of indices, and then use that to fancy index the original array, effectively reversing it in one step. Observe:
End of explanation
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
else:
pass # Why are we here? What is our purpose? Do we even exist?
return indices
Explanation: Polite imports
This isn't technically an error, but rather a very strong convention. Whenever you are importing modules, these imports should all go in one place: the very top of the file.
There was a common tendency to have import statements inside of functions. This won't cause problems in terms of bugs, but does get very, very confusing if you're dealing with more than 1 function. After all, if you import numpy as np inside of one function, it may or may not be available in another function.
So, rather than try to figure out if you need to import NumPy over and over in every single function... just do all your import statements once at the very top of your program.
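For instance, here's a small sketch of the convention (the function names are just made up for illustration):
```
# All imports live together at the very top, once.
import numpy as np

def column_means(matrix):
    # Both functions can use np without re-importing it.
    return matrix.mean(axis = 0)

def column_sums(matrix):
    return matrix.sum(axis = 0)
```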
try/except overenthusiasm
It's great to see you know when to use try/except blocks! However--and this is very common--there's such a thing as too much use of these blocks.
Here's what happens if you put too much of your code inside a try block: it catches, and therefore effectively HIDES, errors and bugs in your code that have nothing to do with the errors you're trying to handle gracefully.
Two guidelines for using try/except blocks:
Keep the amount of code under a try statement to an absolute minimum. For example, if you're reading from a file, put only the calls to open() and read() inside the try block.
Always give an error type that you're trying to handle; that way, if an unexpected error of a different type crops up, the block won't inadvertently hide it. It's easy enough to simply say except:, but I strongly urge you to specify the error type you're trying to handle.
One final note here: please, please, please, do not EVER nest try/except statements. This will only multiply the problems I've described. If you're trying to handle multiple types of errors, use multiple except statements.
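Here's a small sketch of both guidelines in action (the filename is just a placeholder):
```
try:
    # Only the file operations live under try.
    f = open("some_file.txt", "r")
    contents = f.read()
    f.close()
except FileNotFoundError:
    # Name the one error you actually expect.
    print("Couldn't find the file!")
    contents = ""

# Code that *uses* contents stays outside the try block,
# so unrelated bugs in it aren't silently hidden.
num_lines = len(contents.split("\n"))
```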
From A6 (but also A3, so this is copy/pasted from the Midterm Review)
if statements don't always need an else.
I saw this a lot:
End of explanation
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
return indices
Explanation: if statements are adults; they can handle being short-staffed, as it were. If there's literally nothing to do in an else clause, you're perfectly able to omit it entirely:
End of explanation
# Some test data
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
len(x) == len(y)
Explanation: From A7 (but also A4, so this is copy/pasted from the Midterm Review)
len(ndarray) versus ndarray.shape
For the question about checking that the lengths of two NumPy arrays were equal, a lot of people chose this route:
End of explanation
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
len(x) == len(y)
Explanation: which works, but only for one-dimensional arrays.
For anything other than 1-dimensional arrays, things get problematic:
End of explanation
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
x.shape == y.shape
Explanation: These definitely are not equal in length. But that's because len doesn't measure length of matrices...it only measures the number of rows (i.e., the first axis--which in this case is 5 in both, hence it thinks they're equal).
You definitely want to get into the habit of using the .shape property of NumPy arrays:
End of explanation
l = [2985, 42589, 13, 574, 57425, 574]
225 in l
Explanation: We get the answer we expect.
The dangers of in
End of explanation
def in_function(haystack, needle):
for item in haystack:
if item == needle:
return True
return False
h1 = [10, 20, 30, 40, 50]
n1 = 20
print(in_function(h1, n1))
n2 = 60
print(in_function(h1, n2))
Explanation: The keyword in is a great tool for testing if some value is present in a collection. But it has a potential downside.
Think of it this way: how would you implement in yourself? Probably something like this:
End of explanation
import numpy as np
def featurize(*books):
rows = len(books) # This is your number of rows.
vocabulary = global_vocabulary(books) # Your function from Part E!
cols = len(vocabulary) # This is your number of columns.
feature_matrix = np.zeros(shape = (rows, cols)) # Here's your feature matrix.
Explanation: Note that this function requires a loop.
So here, then, is the danger: if you're searching for a specific item in a very large collection, this can take awhile. Even more dangerous if you're running this search inside of another loop.
This created a few failed autograder tests in A7, because JupyterHub automatically kills a test if it takes too long. It's entirely possible your code was correct in theory--and I tried to distribute points accordingly--but it nonetheless lost you points at first.
Something to keep in mind: if you're already using a loop, chances are you probably don't need to use in.
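As a quick sketch of that idea (this example isn't from the assignment): if you already have a loop over the collection, do the comparison inside it instead of calling in separately.
```
def first_index_of(haystack, needle):
    # One pass over the data; no extra `needle in haystack` scan needed.
    for index, item in enumerate(haystack):
        if item == needle:
            return index
    return -1   # not found

print(first_index_of([10, 20, 30, 40, 50], 30))   # 2
print(first_index_of([10, 20, 30, 40, 50], 60))   # -1
```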
Dependent or Independent?
The question related to coin flips--two random variables $X$ and $Y$, counting the number of heads and tails, respectively--confused a lot of folks.
Several said they were independent variables because "each coin flip is an independent event." That is true, but it is the correct answer to the wrong question. The question isn't whether each coin flip is independent, but whether the numbers of heads and tails in 1000 flips are independent.
Let's say, out of 1000 flips, you observed 600 heads, so you know $X = 600$. Does this give you any information about $Y$, the number of tails?
Oh yes: you know immediately that $Y = 400$. This is the very definition of dependent variables: if you can directly compute one from the other, that means knowing one gives you information about the other.
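If you want to convince yourself numerically, here's a tiny simulation sketch (not part of the assignment):
```
import numpy as np

flips = np.random.randint(0, 2, size = 1000)   # 1 = heads, 0 = tails
X = (flips == 1).sum()   # number of heads
Y = (flips == 0).sum()   # number of tails
print(X, Y, X + Y)       # X + Y is always 1000, so knowing X tells you Y exactly
```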
From A8
Features, features everywhere
There was a lot of trouble with Part F of the homework: implementing featurize, which entailed computing a matrix of word frequencies.
It was explained that the number of rows of this matrix should correspond to the number of documents (books), and the number of columns correspond with the number of unique words across all the documents.
To compute the dimensions of this matrix took two steps:
1. Iterate through the number of books/documents.
2. Combine all the words from the books/documents together, and count how many unique words there are.
End of explanation
import numpy as np
def featurize(*books):
rows = len(books) # This is your number of rows.
vocabulary = global_vocabulary(books) # Your function from Part E!
cols = len(vocabulary) # This is your number of columns.
feature_matrix = np.zeros(shape = (rows, cols)) # Here's your feature matrix.
# Loop through the books, generating a word count dictionary for each.
for row_index, book in enumerate(books): # enumerate() is very helpful here
wc = word_counts(book)
for word, count in wc.items(): # This loops through the words IN THIS BOOK.
# Which column does this go in?
col_index = vocabulary.index(word)
# Insert the count!
feature_matrix[row_index, col_index] = count
return feature_matrix
Explanation: Of course, that code isn't complete: you still have to fill in the features, which are the word counts.
To do this, you can use your word_counts function from Part B. But you have to build a word count dictionary for each book, one at a time.
Once you have the word count dictionary for a book, you'll then have to loop through the words in that book and add the counts to the right column of the feature matrix you built.
End of explanation |
1,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Optimizers
A Code Along of Kerem Turgutlu's Notebook
Step1: This notebook is inspired by Sebastian Ruder's awesome work from http
Step2: Training With Different Optimizers
Step3: Vanilla Mini-Batch -- SGD
```
Algorithm
for i in range(epochs)
Step4: PyTorch Built-In
Here you can test various optimizers by simply changing optim.(optimizer). It's also interesting to see each optimizer has its own nature and therefore kind of needs a unique λr. This demonstrates the importance of a funciton like lr_find().
Step5: SGD with Momentum
We'd like to pass saddle points with the use of momentum
Step6: Nesterov
NOTE
Step7: Adagrad
Adagrad adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent paramters. In its update rule, Adagrad modifies the general learning rate ηη at each time step tt for every paramter θiθi based on the past gradients that've been computed for θiθi
Step8: Adadelta/RMSPropr
Adadelta restricts the window of accumulated past gradients to some fixed size ww.
Instead of inefficiently storing ww previous squared gradients, the Σ of gradients is recursvively defined as a decaying average of all past squared gradients. The running average E[g2]tE[g2]t at time step tt then depends (as a fraction γγ similarly to the Momentum term) only on the previous average and the current gradient
Step9: Adam
In addition to storing an exponentially decaying average of past squared gradients vtvt like Adadelta and RMSProp, Adam keeps an expontnially decaying average of past gradients mtmt, similar to momentum. As mtmt and vtvt are initialized as vectors of zeros, the authors of Adam observed that they're biased towards zero, especially during the iniital time steps, and especially when the decay rates are small (ie | Python Code:
# Classical
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# PyTorch
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from torch import optim
# Misc
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = 15, 7
Explanation: Exploring Optimizers
A Code Along of Kerem Turgutlu's Notebook
End of explanation
# you can just wget the files from his github (use github download link)
data = pd.read_csv('./data/MNIST/train.csv')
data = np.array(data)
class MNIST(Dataset):
def __init__(self, data):
self.data = data
def __getitem__(self, index):
X = self.data[index][1:]
y = self.data[index][0]
return torch.from_numpy(X).float() / 256, torch.LongTensor(np.array([y]))
def __len__(self):
return len(self.data)
class SimpleNet(nn.Module):
def __init__(self, layers):
super().__init__()
self.linears = nn.ModuleList([nn.Linear(layers[i], layers[i + 1])
for i in range(len(layers) - 1)])
def forward(self, x):
for lin in self.linears:
lin_x = lin(x)
x = F.relu(lin_x)
return F.log_softmax(lin_x)
# Create dataset
mnist = MNIST(data)
data_dl = DataLoader(mnist, batch_size=256, shuffle=True, num_workers=0)
Explanation: This notebook is inspired by Sebastian Ruder's awesome work from http://ruder.io/optimizing-gradient-descent/. The only thing that won't be implemented in this notebook are gradient calculations - which are provided by PyTorch.
References:
http://www.fast.ai/, Fast.ai ML, Fast.ai DL
http://ruder.io/optimizing-gradient-descent/
http://cs231n.github.io/neural-networks-3/#sgd
End of explanation
epochs = 3 # set epochs
criterion = nn.NLLLoss() # define loss function
#NOTE: http://pytorch.org/docs/master/nn.html?highlight=nllloss#torch.nn.NLLLoss
Explanation: Training With Different Optimizers
End of explanation
# init architecture
snet = SimpleNet([784, 100, 100, 10])
# get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
# keep track of training loss
losses = []
# pars
lr = 1e-2
# Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
for w, b in wbs:
w.data -= lr * w.grad.data
b.data -= lr * b.grad.data
# zero the gradients
w.grad.data.zero_()
b.grad.data.zero_()
sgd_losses_ = [(l.data.numpy()[0]) for l in losses]
sgd_log_losses_ = [np.log(l) for l in sgd_losses_]
plt.plot(sgd_log_losses_)
title = plt.title("Vanilla SGD")
Explanation: Vanilla Mini-Batch -- SGD
```
Algorithm
for i in range(epochs):
shuffled = np.random.shuffle(data)
for batch in get.batch(shuffled, bs):
grads = compute.grads(batch, weight, loss_func)
params -= lr*grads
```
End of explanation
# Training PyTorch
# init architecture
snet = SimpleNet([784, 100, 100, 10])
optimizer = optim.Adam(lr = lr, params=snet.parameters())
losses = []
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
optimizer.step()
optimizer.zero_grad()
pyadam_losses_ = [(λ.data.numpy()[0]) for λ in losses]
pyadam_log_losses_ = [np.log(λ) for λ in pyadam_losses_]
plt.plot(pyadam_log_losses_)
# plt.plot(adam_log_losses_)
# title = plt.title("PyTorch Adam vs Custom Adam")
title = plt.title("PyTorch Adam")
plt.axis([-20, 520, -4, 1])
Explanation: PyTorch Built-In
Here you can test various optimizers by simply changing optim.(optimizer). It's also interesting to see that each optimizer has its own nature and therefore kind of needs a unique lr. This demonstrates the importance of a function like lr_find().
End of explanation
### init architecture
snet = SimpleNet([784, 100, 100, 10])
### get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
### keep track of training loss
losses = []
### pars
lr = 1e-3
rho = 0.9
weight_v_prev = [0 for i in range(len(wbs))] # initialize momentum term
bias_v_prev = [0 for i in range(len(wbs))] # initialize momentum term
epochs = 3
### Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
for i, wb in enumerate(wbs):
w = wb[0]
b = wb[1]
weight_v_new = rho * weight_v_prev[i] + lr * w.grad.data
bias_v_new = rho * bias_v_prev[i] + lr * b.grad.data
weight_v_prev[i] = weight_v_new
bias_v_prev[i] = bias_v_new
w.data -= weight_v_new
b.data -= bias_v_new
# zero the gradients
w.grad.data.zero_()
b.grad.data.zero_()
sgdmom_losses_ = [(λ.data.numpy()[0]) for λ in losses]
sgdmom_log_losses_ = [np.log(λ) for λ in sgdmom_losses_]
plt.plot(sgd_log_losses_)
plt.plot(sgdmom_log_losses_)
title = plt.title("Evolution of Optimizers")
plt.legend(('SGD', 'SGD Momentum'))
Explanation: SGD with Momentum
We'd like to pass saddle points with the use of momentum:
```
Algorithm
v_prev = 0 # init update
rho = 0.9 # set rho
for i in range(epochs):
shuffled = np.random.shuffle(data)
for batch in get.batch(shuffled, bs):
v_new = rho * v_prev + lr * compute.grads(batch, weight, loss_func)
params -= v_new
v_prev = v_new
```
End of explanation
### init architecture
snet = SimpleNet([784, 100, 100, 10])
### get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
### keep track of training loss
losses = []
### pars
lr = 1e-3
rho = 0.9
weight_v_prev = [0 for i in range(len(wbs))] # initlz momentum term
bias_v_prev = [0 for i in range(len(wbs))] # initlz momentum term
epochs = 3
### Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for n, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
# update weights
for i, wb in enumerate(wbs):
w = wb[0]
b = wb[1]
### WEIGHT UPDATE
# take a step in future as if we are updating
w_original = w.data
w.data -= rho*weight_v_prev[i]
# calculate loss and gradients, to see how it'd
future_outputs = snet(inputs)
future_loss = criterion(future_outputs, labels.squeeze(-1))
future_loss.backward(retain_graph=True)
future_grad = w.grad.data
weight_v_new = rho*weight_v_prev[i] + lr*future_grad # future grad
weight_v_prev[i] = weight_v_new
w.data = w_original # get the original weight data
w.data -= weight_v_new # update
# zero all gradients
snet.zero_grad()
### BIAS UPDATE
# take a step in future as if we are updating
b_original = b.data
b.data -= rho*bias_v_prev[i]
# calculate loss and gradient, to see how it'd be
future_outputs = snet(inputs)
future_loss = criterion(future_outputs, labels.squeeze(-1))
future_loss.backward(retain_graph=True)
future_grad = b.grad.data
bias_v_new = rho*bias_v_prev[i] + lr*future_grad
bias_v_prev[i] = bias_v_new
b.data = b_original
b.data -= bias_v_new
# zero all gradients
snet.zero_grad()
nesterov_losses_ = [(λ.data.numpy()[0]) for λ in losses]
nesterov_log_losses_ =[np.log(λ) for λ in nesterov_losses_]
plt.plot(sgd_log_losses_)
plt.plot(sgdmom_log_losses_)
plt.plot(nesterov_log_losses_)
title = plt.title("Loss Plot")
plt.legend(('SGD', 'SGD Momentum', 'Nesterov'))
Explanation: Nesterov
NOTE: this may be wrong; see the Nesterov section: "not sure if giving correct results, as we expect to see this to be faster in terms of convergence"
We'd like to have a smarter ball, a ball that has a notion of where it's going so that it knows to slow down before the hill slopes up again. While computing grads w.r.t. weight - ρ * v_prev, we have a sense of what'll be the next position of our ball, so we leverage this and make a better update.
```
Algorithm
v_prev = 0 # init update
rho = 0.9 # set ρ
for i in range(epochs):
shuffled = np.random.shuffle(data)
for batch in get.batch(shuffled, bs):
v_new = rho * v_prev + lr * compute.grads(batch, params - rho * v_prev, loss_func)
params -= v_new
v_prev = v_new
```
End of explanation
### init architecture
snet = SimpleNet([784, 100, 100, 10])
### get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
### keep track of training loss
losses = []
### pars
lr = 1e-3
grads_squared = [[torch.zeros(wb[0].size()), torch.zeros(wb[1].size())] for wb in wbs]
noise = 1e-8
epochs = 3
### Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
for i, wb in enumerate(wbs):
w = wb[0]
b = wb[1]
w.data -= lr*w.grad.data / torch.sqrt(grads_squared[i][0] + noise)
b.data -= lr*b.grad.data / torch.sqrt(grads_squared[i][1] + noise)
grads_squared[i][0] += w.grad.data*w.grad.data
grads_squared[i][1] += b.grad.data*b.grad.data
# zero the gradients
w.grad.data.zero_()
b.grad.data.zero_()
adagrad_losses_ = [(λ.data.numpy()[0]) for λ in losses]
adagrad_log_losses_ = [np.log(λ) for λ in adagrad_losses_]
plt.plot(sgd_log_losses_)
plt.plot(sgdmom_log_losses_)
plt.plot(nesterov_log_losses_)
plt.plot(adagrad_log_losses_)
title = plt.title("Loss Plot")
plt.legend(('SGD', 'SGD Momentum', 'Nesterov', 'Adagrad'))
Explanation: Adagrad
Adagrad adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters. In its update rule, Adagrad modifies the general learning rate $\eta$ at each time step $t$ for every parameter $\theta_i$ based on the past gradients that've been computed for $\theta_i$: $\theta_{t+1, i} = \theta_{t, i} - \frac{\eta}{\sqrt{G_{t, ii} + \epsilon}} \cdot g_{t, i}$, where $G_{t, ii}$ is the sum of squares of the past gradients w.r.t. $\theta_i$ and $g_{t, i}$ is the current gradient.
One of Adagrad's main benefits is that it eliminates the need to manually tune the learning rate. Most implementations use a default value of 0.01 and leave it at that.
Adagrad's main weakness is its accumulation of the squared gradients in the denominator: since every added term is positive, the accumulated Σ keeps growing during training. This in turn causes the learning rate to shrink and eventually become infinitesimally (1/∞) small, at which point the algorithm is no longer able to acquire additional knowledge. The following algorithms aim to resolve this flaw.
```
Algorithm
grad_squared = 0
noise = 1e-8
for i in range(epochs):
shuffled = np.random.shuffle(Data)
for batch in get.batch(shuffled, bs):
grads = compute.grads(batch, weight, loss_func)
params -= lr * (grads / (np.sqrt(grads_squared) + noise))
grads_squared += grads*grads
```
End of explanation
### init architecture
snet = SimpleNet([784, 100, 100, 10])
### get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
### keep track of training loss
losses = []
### pars
lr = 1e-3
grads_squared = [[torch.zeros(wb[0].size()), torch.zeros(wb[1].size())] for wb in wbs]
noise = 1e-8
rho = 0.9
epochs = 3
### Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
for i, wb in enumerate(wbs):
w = wb[0]
b = wb[1]
w.data -= lr*w.grad.data / torch.sqrt(grads_squared[i][0] + noise)
b.data -= lr*b.grad.data / torch.sqrt(grads_squared[i][1] + noise)
grads_squared[i][0] = rho*grads_squared[i][0] + (1-rho)*w.grad.data*w.grad.data
grads_squared[i][1] = rho*grads_squared[i][1] + (1-rho)*b.grad.data*b.grad.data
# zero the gradients
w.grad.data.zero_()
b.grad.data.zero_()
rmsprop_losses_ = [(λ.data.numpy()[0]) for λ in losses]
rmsprop_log_losses_ = [np.log(λ) for λ in rmsprop_losses_]
plt.plot(sgd_log_losses_)
plt.plot(sgdmom_log_losses_)
plt.plot(nesterov_log_losses_)
plt.plot(adagrad_log_losses_)
plt.plot(rmsprop_log_losses_)
title = plt.title("Evolution of Optimizers")
plt.legend(('SGD', 'SGD Momentum', 'Nesterov', 'Adagrad', 'RMSProp'))
Explanation: Adadelta/RMSProp
Adadelta restricts the window of accumulated past gradients to some fixed size $w$.
Instead of inefficiently storing $w$ previous squared gradients, the Σ of gradients is recursively defined as a decaying average of all past squared gradients. The running average $E[g^2]_t$ at time step $t$ then depends (as a fraction $\gamma$, similarly to the Momentum term) only on the previous average and the current gradient: $E[g^2]_t = \gamma E[g^2]_{t-1} + (1 - \gamma) g^2_t$
RMSProp and Adadelta have both been developed independently around the same time stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSProp is in fact identical to the first update vector of Adadelta that we derived above.
```
Algorithm
grad_squared = 0
rho = 0.9 # par for exponential smoothing on grad squares
for i in range(epochs):
shuffled = np.random.shuffle(data)
for batch in get.batch(shuffled, bs):
grads = compute.grads(batch, weight, loss_func)
grads_squared = rho * grads_squared + (1 - rho) * grads * grads
params -= lr * (grads / (np.sqrt(grads_squared) + noise))
```
End of explanation
### init architecture
snet = SimpleNet([784, 100, 100, 10])
### get weight, bias objects
wbs = [(lin.weight, lin.bias) for lin in snet.linears]
### keep track of training loss
losses = []
### params
lr = 1e-3
### [((m, v), (m, v))] weight and bias m v prevs
m_v_prevs = [[[0, 0], [0, 0]] for i in range(len(wbs))]
noise = 1e-8
beta1 = 0.9
beta2 = 0.999
epochs = 3
t = 0
### Training
for epoch in range(epochs):
print(f'epoch {epoch}')
for i, batch in enumerate(data_dl):
# keep track of time
t += 1
inputs, labels = batch
inputs, labels = Variable(inputs), Variable(labels)
outputs = snet(inputs)
# compute loss and gradients
loss = criterion(outputs, labels.squeeze(-1))
losses.append(loss)
loss.backward()
# update weights
for i, wb in enumerate(wbs):
w = wb[0]
b = wb[1]
# update weight component
w_m_v_t_prev = m_v_prevs[i][0]
w_m_t_prev = w_m_v_t_prev[0]
w_v_t_prev = w_m_v_t_prev[1]
w_m_t_new = beta1*w_m_t_prev + (1 - beta1)*w.grad.data
w_v_t_new = beta2*w_v_t_prev + (1 - beta2)*(w.grad.data*w.grad.data)
w_m_t_new_hat = w_m_t_new / (1 - beta1**t)
w_v_t_new_hat = w_v_t_new / (1 - beta2**t)
m_v_prevs[i][0][0] = w_m_t_new
m_v_prevs[i][0][1] = w_v_t_new
w.data -= lr*w_m_t_new_hat / (torch.sqrt(w_v_t_new_hat) + noise)
# update bias component
b_m_v_t_prev = m_v_prevs[i][1]
b_m_t_prev = b_m_v_t_prev[0]
b_v_t_prev = b_m_v_t_prev[1]
b_m_t_new = beta1*b_m_t_prev + (1 - beta1)*b.grad.data
b_v_t_new = beta2*b_v_t_prev + (1 - beta2)*(b.grad.data*b.grad.data)
b_m_t_new_hat = b_m_t_new / (1 - beta1**t)
b_v_t_new_hat = b_v_t_new / (1 - beta2**t)
m_v_prevs[i][1][0] = b_m_t_new
m_v_prevs[i][1][1] = b_v_t_new
b.data -= lr*b_m_t_new_hat / (torch.sqrt(b_v_t_new_hat) + noise)
# zero the gradients
w.grad.data.zero_()
b.grad.data.zero_()
adam_losses_ = [(λ.data.numpy()[0]) for λ in losses]
adam_log_losses_ = [np.log(λ) for λ in adam_losses_]
plt.plot(sgd_log_losses_)
plt.plot(sgdmom_log_losses_)
plt.plot(nesterov_log_losses_)
plt.plot(adagrad_log_losses_)
plt.plot(rmsprop_log_losses_)
plt.plot(adam_log_losses_)
title = plt.title("Evolution of Optimizers")
plt.legend(('SGD', 'SGD Momentum', 'Nesterov', 'Adagrad', 'RMSProp', 'Adam'))
### ADAMAX NADAM may be next
Explanation: Adam
In addition to storing an exponentially decaying average of past squared gradients $v_t$ like Adadelta and RMSProp, Adam keeps an exponentially decaying average of past gradients $m_t$, similar to momentum. As $m_t$ and $v_t$ are initialized as vectors of zeros, the authors of Adam observed that they're biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. $\beta_1$ and $\beta_2$ are close to 1).
They counteract these biases by computing bias-corrected first and second moment estimates: $\hat{m}_t = m_t / (1 - \beta_1^t)$ and $\hat{v}_t = v_t / (1 - \beta_2^t)$, which are then used in the update $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \, \hat{m}_t$.
```
Algorithm
m = 0
v = 0
beta1 = 0.9
beta2 = 0.999
noise = 1e-8
t = 0
for i in range(epochs):
t += 1
shuffled = np.random.shuffle(data)
for batch in get.batch(shuffled, bs):
grads = compute.grads(batch, weight, loss_func)
m = beta1*m + (1 - beta1)*grads
v = beta2*v + (1 - beta2)*grads*grads
m_hat = m / (1 - beta1**t) # bias correction for first moment
v_hat = v / (1 - beta2**t) # bias correction for second moment
params -= lr*m_hat/(np.sqrt(v_hat) + noise)
```
End of explanation |
1,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: TODO
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = tf.contrib.layers.flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the 2nd fully connected layer.
End of explanation
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
1,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
======================================================
Compute source power spectral density (PSD) in a label
======================================================
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
Step1: Set parameters
Step2: View PSD of sources in label | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
Explanation: ======================================================
Compute source power spectral density (PSD) in a label
======================================================
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label,
dB=True)
stc.save('psd_dSPM')
Explanation: Set parameters
End of explanation
plt.plot(1e3 * stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
Explanation: View PSD of sources in label
End of explanation |
1,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resampling data
When performing experiments where timing is critical, a signal with a high
sampling rate is desired. However, having a signal with a much higher sampling
rate than is necessary needlessly consumes memory and slows down computations
operating on the data.
This example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold
reduction in data size, at the cost of an equal loss of temporal resolution.
Step1: Setting up data paths and loading raw data (skip some data for speed)
Step2: Since downsampling reduces the timing precision of events, we recommend
first extracting epochs and downsampling the Epochs object
Step3: When resampling epochs is unwanted or impossible, for example when the data
doesn't fit into memory or your analysis pipeline doesn't involve epochs at
all, the alternative approach is to resample the continuous data. This
can only be done on loaded or pre-loaded data.
Step4: Because resampling also affects the stim channels, some trigger onsets might
be lost in this case. While MNE attempts to downsample the stim channels in
an intelligent manner to avoid this, the recommended approach is to find
events on the original data before downsampling. | Python Code:
# Authors: Marijn van Vliet <[email protected]>
#
# License: BSD (3-clause)
from matplotlib import pyplot as plt
import mne
from mne.datasets import sample
Explanation: Resampling data
When performing experiments where timing is critical, a signal with a high
sampling rate is desired. However, having a signal with a much higher sampling
rate than is necessary needlessly consumes memory and slows down computations
operating on the data.
This example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold
reduction in data size, at the cost of an equal loss of temporal resolution.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()
Explanation: Setting up data paths and loading raw data (skip some data for speed)
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)
# Downsample to 100 Hz
print('Original sampling rate:', epochs.info['sfreq'], 'Hz')
epochs_resampled = epochs.copy().resample(100, npad='auto')
print('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')
# Plot a piece of data to see the effects of downsampling
plt.figure(figsize=(7, 3))
n_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data
plt.plot(epochs.times[:n_samples_to_plot],
epochs.get_data()[0, 0, :n_samples_to_plot], color='black')
n_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])
plt.plot(epochs_resampled.times[:n_samples_to_plot],
epochs_resampled.get_data()[0, 0, :n_samples_to_plot],
'-o', color='red')
plt.xlabel('time (s)')
plt.legend(['original', 'downsampled'], loc='best')
plt.title('Effect of downsampling')
mne.viz.tight_layout()
Explanation: Since downsampling reduces the timing precision of events, we recommend
first extracting epochs and downsampling the Epochs object:
End of explanation
# Resample to 300 Hz
raw_resampled_300 = raw.copy().resample(300, npad='auto')
Explanation: When resampling epochs is unwanted or impossible, for example when the data
doesn't fit into memory or your analysis pipeline doesn't involve epochs at
all, the alternative approach is to resample the continuous data. This
can only be done on loaded or pre-loaded data.
End of explanation
print('Number of events before resampling:', len(mne.find_events(raw)))
# Resample to 100 Hz (suppress the warning that would be emitted)
raw_resampled_100 = raw.copy().resample(100, npad='auto', verbose='error')
print('Number of events after resampling:',
len(mne.find_events(raw_resampled_100)))
# To avoid losing events, jointly resample the data and event matrix
events = mne.find_events(raw)
raw_resampled, events_resampled = raw.copy().resample(
100, npad='auto', events=events)
print('Number of events after resampling:', len(events_resampled))
Explanation: Because resampling also affects the stim channels, some trigger onsets might
be lost in this case. While MNE attempts to downsample the stim channels in
an intelligent manner to avoid this, the recommended approach is to find
events on the original data before downsampling.
End of explanation |
1,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFLearn [Participle Phrase] Fragment Detection
This notebook is based off the original fragment detection notebook, but specific to detection of participle phrase fragments. As our trainin
g data we will use a datafile of 2,651 sentences with a participle phrase contained in them at the begining, middle, or end of the sentence, and 2,651 partiple phrases extracted from the sentences -- these raw participle phrases will always be fragments.The labels will be either a 1 or 0, where 1 indicates a partiple phrase fragment and 0 indicates that it is NOT a participle phrase fragment.
Install Dependencies
Step1: Create combined data
Step2: Load Datafiles
Step3: Shuffle the data
Step4: Get parts of speech for text string
Step5: Get POS trigrams for a text string
Step6: Turn Trigrams into Dict keys
Step7: Take the trigrams and index them
Step8: Chunking the data for TF
Step9: Setting up TF
Step10: Initialize
Step11: Training
Step12: Playground
Step13: Save the vocab | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
import spacy
nlp = spacy.load('en')
import re
from nltk.util import ngrams, trigrams
import csv
Explanation: TFLearn [Participle Phrase] Fragment Detection
This notebook is based off the original fragment detection notebook, but specific to detection of participle phrase fragments. As our trainin
g data we will use a datafile of 2,651 sentences with a participle phrase contained in them at the begining, middle, or end of the sentence, and 2,651 partiple phrases extracted from the sentences -- these raw participle phrases will always be fragments.The labels will be either a 1 or 0, where 1 indicates a partiple phrase fragment and 0 indicates that it is NOT a participle phrase fragment.
Install Dependencies
End of explanation
import subprocess
subprocess.Popen("python combine.py childrens_fragments".split(), cwd='../data/fragments/participle-phrases')
Explanation: Create combined data
End of explanation
texts = []
labels = []
with open("../data/fragments/participle-phrases/childrens_fragments.combined.txt","r") as f:
for i, sentence_or_fragment in enumerate(f):
if i % 2 == 0:
labels.append(0)
else:
labels.append(1)
texts.append(sentence_or_fragment.strip())
print(texts[-10:])
Explanation: Load Datafiles
End of explanation
import random
combined = list(zip(texts,labels))
random.shuffle(combined)
texts[:], labels[:] = zip(*combined)
print(texts[-10:])
print(labels[-10:])
Explanation: Shuffle the data
End of explanation
def textStringToPOSArray(text):
doc = nlp(text)
tags = []
for word in doc:
tags.append(word.tag_)
return tags
textStringToPOSArray(texts[3])
Explanation: Get parts of speech for text string
End of explanation
def find_ngrams(input_list, n):
return zip(*[input_list[i:] for i in range(n)])
def getPOSTrigramsForTextString(text):
tags = textStringToPOSArray(text)
tgrams = list(trigrams(tags))
return tgrams
print("Text: ", texts[3], labels[3])
getPOSTrigramsForTextString(texts[3])
Explanation: Get POS trigrams for a text string
End of explanation
def trigramsToDictKeys(trigrams):
keys = []
for trigram in trigrams:
keys.append('>'.join(trigram))
return keys
print(texts[2])
print(trigramsToDictKeys(getPOSTrigramsForTextString(texts[2])))
from collections import Counter
c = Counter()
for textString in texts:
c.update(trigramsToDictKeys(getPOSTrigramsForTextString(textString)))
total_counts = c
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: Turn Trigrams into Dict keys
End of explanation
word2idx = {n: i for i, n in enumerate(vocab)}## create the word-to-index dictionary here
print(word2idx)
def textToTrigrams(text):
return trigramsToDictKeys(getPOSTrigramsForTextString(text))
def text_to_vector(text):
wordVector = np.zeros(len(vocab))
for word in textToTrigrams(text):
index = word2idx.get(word, None)
if index != None:
wordVector[index] += 1
return wordVector
text_to_vector('Donald, standing on the precipice, began to dance.')[:65]
word_vectors = np.zeros((len(texts), len(vocab)), dtype=np.int_)
for ii, text in enumerate(texts):
word_vectors[ii] = text_to_vector(text)
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Take the trigrams and index them
End of explanation
records = len(labels)
test_fraction = 0.9
train_split, test_split = int(records*test_fraction), int(records*(1-test_fraction))
print(train_split, test_split)
trainX, trainY = word_vectors[:train_split], to_categorical(labels[:train_split], 2)
testX, testY = word_vectors[test_split:], to_categorical(labels[test_split:], 2)
trainX[-1], trainY[-1]
len(trainY), len(testY), len(trainY) + len(testY)
Explanation: Chunking the data for TF
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 25, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
len(vocab)
Explanation: Setting up TF
End of explanation
model = build_model()
Explanation: Initialize
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
# Testing
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
w = csv.writer(open("../models/participlevocabindex.csv", "w"))
for key, val in word2idx.items():
w.writerow([key, val])
model.save("../models/participle_model.tfl")
Explanation: Training
End of explanation
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence)])[0][1]
print('Is this a participle phrase fragment?\n {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Yes' if positive_prob > 0.5 else 'No')
test_sentence("Neglecting to recognize the horrors those people endure allow people to go to war more easily.")
test_sentence("Katherine, gesticulating wildly and dripping in sweat, kissed him on the cheek.")
test_sentence("Working far into the night in an effort to salvage her little boat.")
test_sentence("Working far into the night in an effort to salvage her little boat, she slowly grew tired.")
test_sentence("Rushing to the rescue with his party.")
test_sentence("Isobel was about thirteen now, and as pretty a girl, according to Buzzby, as you could meet with in any part of Britain.")
test_sentence("Being of a modest and retiring disposition, Mr. Hawthorne avoided publicity.")
test_sentence("Clambering to the top of a bridge, he observed a great rainbow")
test_sentence("Clambering to the top of a bridge.")
test_sentence("He observed a great rainbow.")
test_sentence("Sitting on the iron throne, Joffry looked rather fat.")
test_sentence("Worrying that a meteor or chunk of space debris will conk her on the head.")
test_sentence("Aunt Olivia always wears a motorcycle helmet, worrying that a meteor or chunk of space debris will conk her on the head")
test_sentence("Affecting the lives of many students in New York City.")
test_sentence("Quill was a miracle, affecting the lives of many students in New York City.")
test_sentence("Standing on the edge of the cliff looking down.")
test_sentence("Emilia, standing on the edge of the cliff and looking down, began to weep.")
test_sentence("Standing on the edge of the cliff and looking down, Emilia began to weep.")
Explanation: Playground
End of explanation
vocab
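# A minimal way to also write the vocab list itself to disk
# (the filename below is just an example path):
with open("../models/participlevocabulary.txt", "w") as vocab_file:
    for trigram_key in vocab:
        vocab_file.write(trigram_key + "\n")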
Explanation: Save the vocab
End of explanation |
1,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 5
Step1: Example 1
Step2: Example 2
Step3: Example 3
Step4: Example 4
Step5: Example 5 | Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from NPTFit import psf_correction as pc # Module for determining the PSF correction
from __future__ import print_function
Explanation: Example 5: NPTF Correction for the Point Spread Function (PSF)
In this example we show how to account for the PSF correction using psf_correction.py
Fundamentally the presence of a non-zero PSF implies that the photons from any point source will be smeared out into some region around its true location. This effect must be accounted for in the NPTF. This is achieved via a function $\rho(f)$. In the code we discretize $\rho(f)$ as an approximation to the full function.
The two outputs of an instance of psf_correction are: 1. f_ary, an array of f values; and 2. df_rho_div_f_ary, an associated array of $\Delta f \rho(f)/f$ values, where $\Delta f$ is the width of the f_ary bins.
If the angular reconstruction of the data is perfect, then $\rho(f) = \delta(f-1)$. In many situations, such as for the Fermi data at higher energies, a Gaussian approximation of the PSF will suffice. Even then there are a number of variables that go into evaluating the correction, as shown below. Finally we will show how the code can be used for the case of non-Gaussian PSFs.
As the calculation of $\rho(f)$ can be time consuming, we always save the output to avoid recomputing the same correction twice. Consequently it can be convenient to have a common psf_dir where all PSF corrections for the runs are stored.
End of explanation
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary_1 = pc_inst.f_ary
df_rho_div_f_ary_1 = pc_inst.df_rho_div_f_ary
print('f_ary:', f_ary_1)
print('df_rho_div_f_ary:', df_rho_div_f_ary_1)
plt.plot(f_ary_1,f_ary_1**2*df_rho_div_f_ary_1/(f_ary_1[1]-f_ary_1[0]),color='black', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
plt.title('Gaussian PSF, $\sigma_\mathrm{PSF} = 0.1812$', y=1.04)
Explanation: Example 1: Default Gaussian PSF
We start by showing the PSF correction for a Gaussian PSF - that is the PSF as a function of $r$ is $\exp \left[ -r^2 / (2\sigma^2) \right]$ - with $\sigma$ set to the value of the 68% containment radius for the PSF of the Fermi dataset we will use in later examples.
End of explanation
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.05)
f_ary_2 = pc_inst.f_ary
df_rho_div_f_ary_2 = pc_inst.df_rho_div_f_ary
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.4)
f_ary_3 = pc_inst.f_ary
df_rho_div_f_ary_3 = pc_inst.df_rho_div_f_ary
plt.plot(f_ary_1,f_ary_1**2*df_rho_div_f_ary_1/(f_ary_1[1]-f_ary_1[0]),color='cornflowerblue',label='0.18', lw = 1.5)
plt.plot(f_ary_2,f_ary_2**2*df_rho_div_f_ary_2/(f_ary_2[1]-f_ary_2[0]),color='forestgreen',label='0.05', lw = 1.5)
plt.plot(f_ary_3,f_ary_3**2*df_rho_div_f_ary_3/(f_ary_3[1]-f_ary_3[0]),color='maroon',label='0.4', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
plt.legend(loc='upper right', fancybox=True)
plt.title('Varying $\sigma_\mathrm{PSF}$', y=1.04)
Explanation: Example 2: Impact of changing $\sigma$
Here we show the impact on the PSF of changing $\sigma$. From the plot we can see that for a small PSF, $\rho(f)$ approaches the no PSF case of $\delta(f-1)$ implying that the flux fractions are concentrated at a single large value. As $\sigma$ increases we move away from this idealized scenario and the flux becomes more spread out, leading to a $\rho(f)$ peaked at lower flux values.
End of explanation
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812,num_f_bins=20)
f_ary_4 = pc_inst.f_ary
df_rho_div_f_ary_4 = pc_inst.df_rho_div_f_ary
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812,n_psf=5000,n_pts_per_psf=100)
f_ary_5 = pc_inst.f_ary
df_rho_div_f_ary_5 = pc_inst.df_rho_div_f_ary
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812,f_trunc=0.1)
f_ary_6 = pc_inst.f_ary
df_rho_div_f_ary_6 = pc_inst.df_rho_div_f_ary
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812,nside=64)
f_ary_7 = pc_inst.f_ary
df_rho_div_f_ary_7 = pc_inst.df_rho_div_f_ary
plt.plot(f_ary_1,f_ary_1**2*df_rho_div_f_ary_1/(f_ary_1[1]-f_ary_1[0]),color='black',label=r'Default', lw=2.2)
plt.plot(f_ary_4,f_ary_4**2*df_rho_div_f_ary_4/(f_ary_4[1]-f_ary_4[0]),color='forestgreen',label=r'more f\_bins', lw = 1.5)
plt.plot(f_ary_5,f_ary_5**2*df_rho_div_f_ary_5/(f_ary_5[1]-f_ary_5[0]),color='cornflowerblue',label=r'fewer points', lw = 1.5)
plt.plot(f_ary_6,f_ary_6**2*df_rho_div_f_ary_6/(f_ary_6[1]-f_ary_6[0]),color='salmon',label=r'larger f\_trunc', lw = 1.5)
plt.plot(f_ary_7,f_ary_7**2*df_rho_div_f_ary_7/(f_ary_7[1]-f_ary_7[0]),color='orchid',label=r'lower nside', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fancybox=True)
Explanation: Example 3: Changing the default options for determining $\rho(f)$
In this example we show how, for a given PSF, the other parameters associated with how accurately we calculate $\rho(f)$ can impact what we get back. The parameters that can be changed are:
| Argument | Defaults | Purpose |
| ------------- | ------------- | ------------- |
| num_f_bins | 10 | number of f_bins used |
| n_psf | 50000 | number of PSFs placed down when calculating |
| n_pts_per_psf | 1000 | number of points to place per psf in the calculation |
| f_trunc | 0.01 | minimum flux fraction to keep track of |
| nside | 128 | nside of the map the PSF is used on |
The default parameters have been chosen to be accurate enough for the Fermi analyses we will perform later. But if the user changes the PSF (even just $\sigma$), it is important to be sure that the above parameters are chosen so that $\rho(f)$ is evaluated accurately enough.
In general, increasing num_f_bins, n_psf, and n_pts_per_psf, whilst decreasing f_trunc, leads to a more accurate $\rho(f)$. But each will also slow down the evaluation of $\rho(f)$, and in the case of num_f_bins, slow down the subsequent non-Poissonian likelihood evaluation.
nside should be set to the value of the map being analysed, but we also highlight the impact of changing it below. For an analysis on a non-HEALPix grid, the PSF can often be approximated by an appropriate HEALPix binning. If this is not the case, however, a different approach must be pursued in calculating $\rho(f)$.
End of explanation
pixarea = 4*np.pi/(12*128*128)
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812, healpix_map=False, pixarea=pixarea, gridsize=100)
f_ary_8 = pc_inst.f_ary
df_rho_div_f_ary_8 = pc_inst.df_rho_div_f_ary
plt.plot(f_ary_1,f_ary_1**2*df_rho_div_f_ary_1/(f_ary_1[1]-f_ary_1[0]),color='black', label=r'healpix', lw = 1.5)
plt.plot(f_ary_8,f_ary_8**2*df_rho_div_f_ary_8/(f_ary_8[1]-f_ary_8[0]),color='forestgreen', label=r'cartesian', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
Explanation: Example 4: PSF on a Cartesian Grid
For some applications, particularly when analyzing smaller regions of the sky, it may be desirable to work with data on a Cartesian grid rather than a healpix map. Note that, for larger regions, a healpix pixelization is generally recommended in order to account for curvature on the sky. Code to convert from Cartesian grids to healpix can be found here: https://github.com/nickrodd/grid2healpix
In order to calculate the appropriate PSF correction for Cartesian maps the general syntax is the same, except now the healpix_map keyword should be set to False and the pixarea keyword set to the area in sr of each pixel of the Cartesian map. In addition the gridsize keyword determines how large the map is, and flux that falls outside the map is lost in the Cartesian case.
As an example of this syntax we calculate the PSF correction on a Cartesian map that has pixels the same size as an nside=128 healpix map, and compare the two PSF corrections. Note they are essentially identical.
End of explanation
# Fermi-LAT PSF at 2 GeV
# Calculate the appropriate Gaussian approximation to the PSF for 2 GeV
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.2354)
f_ary_9 = pc_inst.f_ary
df_rho_div_f_ary_9 = pc_inst.df_rho_div_f_ary
# Define parameters that specify the Fermi-LAT PSF at 2 GeV
fcore = 0.748988248179
score = 0.428653790656
gcore = 7.82363229341
stail = 0.715962650769
gtail = 3.61883748683
spe = 0.00456544262478
# Define the full PSF in terms of two King functions
def king_fn(x, sigma, gamma):
return 1./(2.*np.pi*sigma**2.)*(1.-1./gamma)*(1.+(x**2./(2.*gamma*sigma**2.)))**(-gamma)
def Fermi_PSF(r):
return fcore*king_fn(r/spe,score,gcore) + (1-fcore)*king_fn(r/spe,stail,gtail)
# Modify the relevant parameters in pc_inst and then make or load the PSF
pc_inst = pc.PSFCorrection(delay_compute=True)
pc_inst.psf_r_func = lambda r: Fermi_PSF(r)
pc_inst.sample_psf_max = 10.*spe*(score+stail)/2.
pc_inst.psf_samples = 10000
pc_inst.psf_tag = 'Fermi_PSF_2GeV'
pc_inst.make_or_load_psf_corr()
# Extract f_ary and df_rho_div_f_ary as usual
f_ary_10 = pc_inst.f_ary
df_rho_div_f_ary_10 = pc_inst.df_rho_div_f_ary
plt.plot(f_ary_9,f_ary_9**2*df_rho_div_f_ary_9/(f_ary_9[1]-f_ary_9[0]),color='maroon',label='Gauss PSF', lw = 1.5)
plt.plot(f_ary_10,f_ary_10**2*df_rho_div_f_ary_10/(f_ary_10[1]-f_ary_10[0]),color='forestgreen',label='Fermi PSF', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
plt.legend(loc='upper right', fancybox=True)
# Fermi-LAT PSF at 20 GeV
# Calculate the appropriate Gaussian approximation to the PSF for 20 GeV
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.05529)
f_ary_11 = pc_inst.f_ary
df_rho_div_f_ary_11 = pc_inst.df_rho_div_f_ary
# Define parameters that specify the Fermi-LAT PSF at 20 GeV
fcore = 0.834725201378
score = 0.498192326976
gcore = 6.32075520959
stail = 1.06648424558
gtail = 4.49677834267
spe = 0.000943339426754
# Define the full PSF in terms of two King functions
def king_fn(x, sigma, gamma):
return 1./(2.*np.pi*sigma**2.)*(1.-1./gamma)*(1.+(x**2./(2.*gamma*sigma**2.)))**(-gamma)
def Fermi_PSF(r):
return fcore*king_fn(r/spe,score,gcore) + (1-fcore)*king_fn(r/spe,stail,gtail)
# Modify the relevant parameters in pc_inst and then make or load the PSF
pc_inst = pc.PSFCorrection(delay_compute=True)
pc_inst.psf_r_func = lambda r: Fermi_PSF(r)
pc_inst.sample_psf_max = 10.*spe*(score+stail)/2.
pc_inst.psf_samples = 10000
pc_inst.psf_tag = 'Fermi_PSF_20GeV'
pc_inst.make_or_load_psf_corr()
# Extract f_ary and df_rho_div_f_ary as usual
f_ary_12 = pc_inst.f_ary
df_rho_div_f_ary_12 = pc_inst.df_rho_div_f_ary
plt.plot(f_ary_11,f_ary_11**2*df_rho_div_f_ary_11/(f_ary_11[1]-f_ary_11[0]),color='maroon',label='Gauss PSF', lw = 1.5)
plt.plot(f_ary_12,f_ary_12**2*df_rho_div_f_ary_12/(f_ary_12[1]-f_ary_12[0]),color='forestgreen',label='Fermi PSF', lw = 1.5)
plt.xlabel('$f$')
plt.ylabel('$f \\times \\rho(f)$')
plt.legend(loc='upper left', fancybox=True)
Explanation: Example 5: Custom PSF
In addition to the default Gaussian PSF, psf_correction.py also has the option of taking in a custom PSF. In order to use this ability, the user needs to initialise psf_correction with delay_compute=True, manually define the parameters that define the PSF and then call make_or_load_psf_corr.
The variables that need to be redefined in the instance of psf_correction are:
| Argument | Purpose |
| ------------- | ------------- |
| psf_r_func | the psf as a function of r, distance in radians from the center of the point source |
| sample_psf_max | maximum distance to sample the psf from the center, should be larger for psfs with long tails |
| psf_samples | number of samples to make from the psf (linearly spaced) from 0 to sample_psf_max, should be large enough to adequately represent the full psf |
| psf_tag | label the psf is saved with |
As an example of a more complicated PSF we consider the full Fermi-LAT PSF. The PSF of Fermi is approximately Gaussian near the core, but has larger tails. To model this a pair of King functions are used to describe the radial distribution. Below we show a comparison between the Gaussian approximation and the full PSF, for two different energies. As shown, for low energies where the Fermi PSF is larger, the difference between the two can be significant. For higher energies where the PSF becomes smaller, however, the difference is marginal.
For the full details of the Fermi-LAT PSF, see:
http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/IRF_PSF.html
End of explanation |
1,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step2: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
Step4: The results of the trapezoidal method and the integrate function are very close with a difference ~$10^{-7}$. This shows that the integrate function works. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
    """Integrate the function f(x) over the range [a,b] with N points."""
h=(b-a)/N
k=np.arange(1,N)
I=h*(0.5*f(a)+0.5*f(b)+sum(f(a+k*h)))
return I
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
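Concretely, writing $x_k = a + kh$, the composite trapezoidal estimate is
$$ I(a,b) \approx h \left[ \tfrac{1}{2} f(a) + \sum_{k=1}^{N-1} f(x_k) + \tfrac{1}{2} f(b) \right] $$
which is exactly the sum the requested function should compute.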
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
I,e=integrate.quad(f,0,1)
print('Result:',I,"error:",e)
I,e=integrate.quad(g,0,np.pi)
print('Result:',I,"error:",e)
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
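One way to make the comparison explicit (a small sketch reusing the trapz function and the quad results computed above):
# Sketch: absolute difference between the trapezoidal and quad estimates for f on [0, 1]
I_trapz = trapz(f, 0, 1, 1000)
I_quad, err = integrate.quad(f, 0, 1)
print('trapz:', I_trapz, 'quad:', I_quad, 'abs diff:', abs(I_trapz - I_quad))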
End of explanation
assert True # leave this cell to grade the previous one
Explanation: The results of the trapezoidal method and the integrate function are very close with a difference ~$10^{-7}$. This shows that the integrate function works.
End of explanation |
1,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Randomly Generate A Stock Price & View The Data
Here, we create an application which is submitted to a remote host, yet we retrieve its data remotely via views. This way, we can graph remote data inside of Jupyter without needing to run the application on the local host.
First, we create an application which generates a random stock price by using the jsonRandomWalk class. After we create the stream, we create a view object. This can later be used to retrieve the remote data.
Step1: Submit To Remote Streams Install
Then, we submit the application to the default domain.
Step2: Begin Retrieving The Data In A Blocking Queue
Using the view object, we can call the start_data_fetch method. This kicks off a background thread which, once per second, queries the remote view REST endpoint and inserts the data into a queue. The queue is returned from start_data_fetch.
Step3: Print Data to Screen
The queue is a blocking queue, so every time queue.get() is invoked, it will wait until there is more data on the stream. The following is one way of iterating over the queue.
Step4: Stop Fetching The Data, Cancelling The Background Thread
To stop the background thread from fetching data, invoke the stop_data_fetch method on the view.
Step5: Graph The Live Feed Using Matplotlib
One of Jupyter's strengths is its capacity for data visualization. Here, we can use Matplotlib to interactively update the graph when new data is (or is not) available. | Python Code:
from streamsx.topology.topology import Topology
from streamsx.topology import context
from some_module import jsonRandomWalk
#from streamsx import rest
import json
import logging
# Define topology & submit
rw = jsonRandomWalk()
top = Topology("myTop")
stock_data = top.source(rw)
# The view object can be used to retrieve data remotely
view = stock_data.view()
stock_data.print()
Explanation: Randomly Generate A Stock Price & View The Data
Here, we create an application which is submitted to a remote host, yet we retrieve its data remotely via views. This way, we can graph remote data inside of Jupyter without needing to run the application on the local host.
First, we create an application which generates a random stock price by using the jsonRandomWalk class. After we create the stream, we create a view object. This can later be used to retrieve the remote data.
End of explanation
context.submit("DISTRIBUTED", top.graph, username = "streamsadmin", password = "passw0rd")
Explanation: Submit To Remote Streams Install
Then, we submit the application to the default domain.
End of explanation
from streamsx import rest
queue = view.start_data_fetch()
Explanation: Begin Retrieving The Data In A Blocking Queue
Using the view object, we can call the start_data_fetch method. This kicks off a background thread which, once per second, queries the remote view REST endpoint and inserts the data into a queue. The queue is returned from start_data_fetch.
End of explanation
for i in iter(queue.get, None):
print(i)
Explanation: Print Data to Screen
The queue is a blocking queue, so every time queue.get() is invoked, it will wait until there is more data on the stream. The following is one way of iterating over the queue.
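Another option (a sketch relying only on standard blocking-queue semantics) is to pull a fixed number of tuples instead of looping indefinitely:
# Sketch: consume a fixed number of tuples from the blocking queue
for _ in range(10):
    tup = queue.get()  # blocks until the next tuple arrives
    print(tup)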
End of explanation
view.stop_data_fetch()
Explanation: Stop Fetching The Data, Cancelling The Background Thread
To stop the background thread from fetching data, invoke the stop_data_fetch method on the view.
End of explanation
%matplotlib inline
%matplotlib notebook
from streamsx import rest
rest.graph_every(view, 'val', 1.0)
Explanation: Graph The Live Feed Using Matplotlib
One of Jupyter's strengths is its capacity for data visualization. Here, we can use Matplotlib to interactively update the graph when new data is (or is not) available.
End of explanation |
1,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization of mechanical systems
Given a mass-spring-damper system like the one in the following figure
Step1: Plots
Once we have the data, we set out to plot it. This time we will use a matplotlib style; to see the available styles we only have to ask
Step2: Now we just set the figure size and get the plot's axis object with the gca method so we can plot
Step3: However, the behavior of the system is barely visible; a position of $0.00006 m$ (about $0.06 mm$) is practically nothing. Remember that we only used a force of $1 N$; to amplify this system input we use another block with a gain equal to the magnitude of the force to apply
Step4: Very good, our suspension has moved $10 cm$ with a force comparable to the weight of a person. We now notice empty space on the right, so we will set a limit on the $x$ axis
Step5: And the suspension does not move from the bottom up but the other way around, so we multiply the $y$ data by $-1$
Step6: To see this plot at a reasonable scale we will set a limit on the $y$ axis as well
Step7: If we now put all this code inside a plotting function that depends on the problem data, it will look like this
Step8: Exercise
How much do you think a wrecking ball weighs? $500 kg$? $1000 kg$? Plot the behavior of the system under a force of these magnitudes, using the function we just created.
Step9: Interactivity
This is the fun part; to interact with the problem data we can use a specific IPython function, we just have to provide a function and the variation ranges of its arguments to get
Step10: GIFs
Who doesn't like GIFs?
Even Chuck Norris knows it...
Anyway, today we will make our own GIF... of the behavior of the system!
But first we have to define how we will draw our elements; let's start with the spring. It is basically a zigzag line, so we can simply give the coordinates and tell plot to draw a line between them
Step11: Once we have this, we can put it into a function that returns the coordinates, and the code becomes simpler
Step12: The damper is a bit more complex; it is made of 3 lines, 2 for the small legs and one for the body
Step13: In the same way we put it into a function, so that our code becomes
Step14: Finally, for the mass we only need a rectangle, which we have to import from matplotlib's patches module
Step15: Putting it all together, the code is
Step16: Finally, to animate this we have to obtain the points of the trajectory
Step17: We import the animation module from the matplotlib library
Step18: And we use the following code to create each of the frames of the GIF | Python Code:
from control import step, tf
m = 1200
k = 15000
c = 1500
F = 1
G = tf([0, 0, 1/m], [1, c/m, k/m])
G
y, t = step(G)
Explanation: Visualization of mechanical systems
Given a mass-spring-damper system like the one in the following figure:
Plot the trajectory of the system and animate the physical components schematically.
Transfer functions
Recalling the previous session, we can obtain the behavior of the system:
$$
F(t) - k x(t) - c \dot{x}(t) = m \ddot{x}(t)
$$
through its transfer function:
$$
G(s) = \frac{\frac{1}{m}}{s^2 + \frac{c}{m} s + \frac{k}{m}}
$$
End of explanation
%matplotlib inline
from matplotlib.pyplot import figure, style, plot
style.use("ggplot")
style.available
Explanation: Plots
Once we have the data, we set out to plot it. This time we will use a matplotlib style; to see the available styles we only have to ask:
End of explanation
f = figure(figsize=(8, 8))
axi = f.gca()
axi.plot(t, y);
Explanation: Now we just set the figure size and get the plot's axis object with the gca method so we can plot:
End of explanation
K = tf([1000], [1])
K
y, t = step(K*G)
f = figure(figsize=(8, 8))
axi = f.gca()
axi.plot(t, y);
Explanation: However, the behavior of the system is barely visible; a position of $0.00006 m$ (about $0.06 mm$) is practically nothing. Remember that we only used a force of $1 N$; to amplify this system input we use another block with a gain equal to the magnitude of the force to apply:
End of explanation
f = figure(figsize=(16, 8))
axi = f.gca()
axi.plot(t, y)
axi.set_xlim(0, 10);
Explanation: Very good, our suspension has moved $10 cm$ with a force comparable to the weight of a person. We now notice empty space on the right, so we will set a limit on the $x$ axis:
End of explanation
f = figure(figsize=(16, 8))
axi = f.gca()
axi.plot(t, -y)
axi.set_xlim(0, 10);
Explanation: And the suspension does not move from the bottom up but the other way around, so we multiply the $y$ data by $-1$:
End of explanation
from numpy import linspace
t = linspace(0, 10, 1000)
y, t = step(K*G, t)
f = figure(figsize=(16, 8))
axi = f.gca()
axi.plot(t, -y)
axi.set_xlim(0, 10)
axi.set_ylim(-1.2, 0);
Explanation: To see this plot at a reasonable scale we will set a limit on the $y$ axis as well:
End of explanation
def suspension(m, c, k, F):
G = tf([0, 0, 1/m], [1, c/m, k/m])
K = tf([F], [1])
t = linspace(0, 10, 1000)
y, t = step(K*G, t)
f = figure(figsize=(16, 8))
axi = f.gca()
axi.plot(t, -y)
axi.set_xlim(0, 10)
axi.set_ylim(-1.2, 0);
    # The following line returns the numerical results of the simulation;
    # if you put a semicolon (;) after calling the function, or if you store
    # them in variables, you will only see the plot
return t, y
suspension(1200, 1500, 15000, 100*9.81);
Explanation: If we now put all this code inside a plotting function that depends on the problem data, it will look like this:
End of explanation
t1, x1 = suspension(1200, 1500, 15000, ) # Add the force needed for the first simulation
t2, x2 = # Complete the code for the second simulation
from pruebas_3 import prueba_3_1
prueba_3_1(t1, x1, t2, x2)
Explanation: Exercise
How much do you think a wrecking ball weighs? $500 kg$? $1000 kg$? Plot the behavior of the system under a force of these magnitudes, using the function we just created.
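A possible completion (one choice among the suggested masses, written as weights in newtons):
# Sketch: wrecking-ball masses of 500 kg and 1000 kg expressed as forces
t1, x1 = suspension(1200, 1500, 15000, 500*9.81)
t2, x2 = suspension(1200, 1500, 15000, 1000*9.81)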
End of explanation
# Import IPython widgets to interact with the function
from IPython.html.widgets import interact, fixed
# Call the interactive function
interact(suspension, m=(1000, 2500), c=(0, 2500), k=(0, 20000), F=(1, 10000));
Explanation: Interactivity
This is the fun part; to interact with the problem data we can use a specific IPython function, we just have to provide a function and the variation ranges of its arguments to get:
End of explanation
from numpy import zeros, arange
coordenadas_resorte = zeros(20)
coordenadas_resorte[5:15] = 0.5*(-1)**arange(10)
coordenadas_resorte
plot(coordenadas_resorte, arange(20), lw = 2, color="gray")
Explanation: GIFs
Who doesn't like GIFs?
Even Chuck Norris knows it...
Anyway, today we will make our own GIF... of the behavior of the system!
But first we have to define how we will draw our elements; let's start with the spring. It is basically a zigzag line, so we can simply give the coordinates and tell plot to draw a line between them:
End of explanation
def coordenadas_resorte(y0, yf, x, N):
ancho = (y0 - yf)/5
xs = zeros(N) + x
    xs[N//4:N*3//4] = ancho*0.5*(-1)**arange(N//2) + x
return xs, linspace(y0, yf, N)
x, y = coordenadas_resorte(-5, 5, 0, 20)
f = figure(figsize=(8, 8))
axi = f.gca()
axi.plot(x, y, lw=2, color="gray")
axi.set_xlim(-10, 10)
axi.set_ylim(-10, 10);
Explanation: Once we have this, we can put it into a function that returns the coordinates, and the code becomes simpler:
End of explanation
y0 = -5
yf = 5
alto = yf - y0
cuerpo = alto/2
pata = alto/4
ancho = (yf - y0)/5
x = 2
f = figure(figsize=(8, 8))
axi = f.gca()
axi.plot([-0.5*ancho + x, -0.5*ancho + x, 0.5*ancho + x, 0.5*ancho + x],
[yf - pata, y0 + pata, y0 + pata, yf - pata],
[x, x], [y0 + cuerpo, yf],
[x, x], [y0, y0 + pata], lw=2, color="gray")
axi.set_xlim(-10, 10)
axi.set_ylim(-10, 10);
Explanation: The damper is a bit more complex; it is made of 3 lines, 2 for the small legs and one for the body:
End of explanation
def coordenadas_amortiguador(y0, yf, x):
ancho = (y0 - yf)/5
alto = yf - y0
cuerpo = alto/2
pata = alto/4
ax = [-0.5*ancho + x, -0.5*ancho + x, 0.5*ancho + x, 0.5*ancho + x]
ay = [yf - pata, y0 + pata, y0 + pata, yf - pata]
bx = [x, x]
by = [y0 + cuerpo, yf]
cx = [x, x]
cy = [y0, y0 + pata]
return ax, ay, bx, by, cx, cy
ax, ay, bx, by, cx, cy = coordenadas_amortiguador(-4, 6, 1)
f = figure(figsize=(8, 8))
axi = f.gca()
axi.plot(ax, ay, bx, by, cx, cy, lw=2, color="gray")
axi.set_xlim(-10, 10)
axi.set_ylim(-10, 10);
Explanation: In the same way we put it into a function, so that our code becomes:
End of explanation
from matplotlib.patches import Rectangle
f = figure(figsize=(8, 8))
axi = f.add_subplot(111, autoscale_on=False, xlim=(-3, 3), ylim=(-3, 3))
carro = Rectangle((0,0), 1, 1, lw=0.5)
carro.set_xy((-0.5, -0.5))
axi.add_patch(carro);
Explanation: Finally, for the mass we only need a rectangle, which we have to import from matplotlib's patches module:
End of explanation
f = figure(figsize=(8, 8))
axi = f.add_subplot(111, autoscale_on=False, xlim=(-2, 2), ylim=(-3, 1))
carro = Rectangle((0,0), 1, 1, lw=0.5)
carro.set_xy((-0.5, -3))
axi.add_patch(carro)
x, y = coordenadas_resorte(0, -2, -0.25, 20)
axi.plot(x, y, lw=2, color="gray")
ax, ay, bx, by, cx, cy = coordenadas_amortiguador(-2, 0, 0.25)
axi.plot(ax, ay, bx, by, cx, cy, lw=2, color="gray");
axi.set_ylim(-4, 0);
Explanation: Putting it all together, the code is:
End of explanation
m = 1200
c = 1500
k = 15000
F = 4000
G = tf([0, 0, 1/m], [1, c/m, k/m])
K = tf([F], [1])
t = linspace(0, 10, 1000)
y, t = step(K*G, t)
Explanation: Finally, to animate this we have to obtain the points of the trajectory:
End of explanation
from matplotlib import animation
Explanation: We import the animation module from the matplotlib library:
End of explanation
# Set the figure size
fig = figure(figsize=(8, 8))
# Define a single axes in the figure and set the limits of the x and y axes
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.5, 1.5), ylim=(-2, 1))
# Hide the axes
axi.axes.get_xaxis().set_visible(False)
axi.axes.get_yaxis().set_visible(False)
# Remove the plot borders (spines)
axi.axes.spines["right"].set_color("none")
axi.axes.spines["left"].set_color("none")
axi.axes.spines["top"].set_color("none")
axi.axes.spines["bottom"].set_color("none")
# Line plots are used for the spring and the damper
resorte, = axi.plot([], [], lw=2, color='gray')
amor1, = axi.plot([], [], lw=2, color='gray')
amor2, = axi.plot([], [], lw=2, color='gray')
amor3, = axi.plot([], [], lw=2, color='gray')
# A rectangle is used for the mass
masa = Rectangle((10,10), 1, 1, lw=0.5)
def init():
    # This function runs only once and initializes the system
resorte.set_data([], [])
amor1.set_data([], [])
amor2.set_data([], [])
amor3.set_data([], [])
masa.set_xy((0, 0))
axi.add_patch(masa)
return resorte, amor1, amor2, amor3, masa
def animate(i):
    # This function runs for every frame of the GIF
    # Get the spring coordinates and load the data into its line plot
xs, ys = coordenadas_resorte(-y[i], 1, -0.25, 20)
resorte.set_data(xs, ys)
    # Get the damper coordinates and load the data into each of
    # its line plots
ax, ay, bx, by, cx, cy = coordenadas_amortiguador(-y[i], 1, 0.25)
amor1.set_data(ax, ay)
amor2.set_data(bx, by)
amor3.set_data(cx, cy)
    # Set the position data of the mass
masa.set_xy((-0.5, -1 - y[i]))
return resorte, amor1, amor2, amor3, masa
# Build the animation, passing the figure defined at the beginning, the function
# to run for each frame, the number of frames to produce, the period of each
# frame, and the initialization function
ani = animation.FuncAnimation(fig, animate, arange(1, len(y)), interval=25,
blit=True, init_func=init)
# Save the GIF to the indicated file
ani.save('./imagenes/masa-resorte-amortiguador.gif', writer='imagemagick');
Explanation: And we use the following code to create each of the frames of the GIF:
End of explanation |
1,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data preprocessing methods
Step1: 1. Data preprocessing
1.1. The dataset.
A key component of any data processing method or any machine learning algorithm is the dataset, i.e., the set of data that will be the input to the method or algorithm.
The dataset collects information extracted from a population (of objects, entities, individuals,...). For instance, we can measure the weight and height of students from a class and collect this information in a dataset ${\cal S} = {{\bf x}k, k=0, \ldots, K-1}$ where $K$ is the number of students, and each sample is a 2 dimensional vector, ${\bf x}_k= (x{k0}, x_{k1})$, with the height and the weight in the first and the second component, respectively. These components are usually called features. In other datasets, the number of features can be arbitrarily large.
1.1. Data preprocessing
The aim of data preprocessing methods is to transform the data into a form that is ready to apply machine learning algorithms. This may include
Step2: We can see that the first data feature ($x_0$) has a much large range of variation than the second ($x_1$). In practice, this may be problematic
Step3: We can test if the transformed features have zero-mean and unit variance
Step4: (note that the results can deviate from 0 or 1 due to finite precision errors)
Step5: 2.1.1. Implementation in sklearn
The sklearn package contains a method to perform the standard scaling over a given data matrix.
Step6: Note that, once we have defined the scaler object in Python, you can apply the scaling transformation to other datasets. This will be useful in further topics, when the dataset may be split in several matrices and we may be interested in defining the transformation using some matrix, and apply it to others
2.2. Other normalizations.
The are some alternatives to the standard scaling that may be interesting for some datasets. Here we show some of them, available at the preprocessing module in sklearn | Python Code:
# Some libraries that will be used along the notebook.
import numpy as np
import matplotlib.pyplot as plt
Explanation: Data preprocessing methods: Normalization
Notebook version:
* 1.0 (Sep 15, 2020) - First version
* 1.1 (Sep 15, 2021) - Exercises
Authors: Jesús Cid Sueiro ([email protected])
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=0.60)
X = X @ np.array([[30, 4], [-8, 1]]) + np.array([90, 10])
plt.figure(figsize=(12, 3))
plt.scatter(X[:, 0], X[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show()
Explanation: 1. Data preprocessing
1.1. The dataset.
A key component of any data processing method or any machine learning algorithm is the dataset, i.e., the set of data that will be the input to the method or algorithm.
The dataset collects information extracted from a population (of objects, entities, individuals,...). For instance, we can measure the weight and height of students from a class and collect this information in a dataset ${\cal S} = \{{\bf x}_k, k=0, \ldots, K-1\}$ where $K$ is the number of students, and each sample is a 2 dimensional vector, ${\bf x}_k = (x_{k0}, x_{k1})$, with the height and the weight in the first and the second component, respectively. These components are usually called features. In other datasets, the number of features can be arbitrarily large.
1.2. Data preprocessing
The aim of data preprocessing methods is to transform the data into a form that is ready to apply machine learning algorithms. This may include:
Data normalization: transform the individual features to ensure a proper range of variation
Data imputation: assign values to features that may be missed for some data samples
Feature extraction: transform the original data to compute new features that are more appropriate for a specific prediction task
Dimensionality reduction: remove features that are not relevant for the prediction task.
Outlier removal: remove samples that may contain errors and are not reliable for the prediction task.
Clustering: partition the data into smaller subsets, that could be easier to process.
In this notebook we will focus on data normalization.
2. Data normalization
All samples in the dataset can be arranged by rows in a $K \times m$ data matrix ${\bf X}$, where $m$ is the number of features (i.e. the dimension of the vector space containing the data). Each one of the $m$ data features may represent variables of very different nature (e.g. time, distance, price, volume, pixel intensity,...). Thus, the scale and the range of variation of each feature can be completely different.
As an illustration, consider the 2-dimensional dataset in the figure
End of explanation
# Compute the sample mean
# m = <FILL IN>
m = np.mean(X, axis=0) # Compute the sample mean
print(f'The sample mean is m = {m}')
# Compute the standard deviation of each feature
# s = <FILL IN>
s = np.std(X, axis=0) # Compute the standard deviation of each feature
# Normalize the data matrix
# T = <FILL IN>
T = (X-m)/s # Normalize
Explanation: We can see that the first data feature ($x_0$) has a much larger range of variation than the second ($x_1$). In practice, this may be problematic: the convergence properties of some machine learning algorithms may depend critically on the feature distributions and, in general, feature sets ranging over similar scales usually offer better performance.
For this reason, transforming the data in order to get similar ranges of variation for all features is desirable. This can be done in several ways.
2.1. Standard scaling.
A common normalization method consists in applying an affine transformation
$$
{\bf t}_k = {\bf D}({\bf x}_k - {\bf m})
$$
where ${\bf D}$ is a diagonal matrix, in such a way that the transformed dataset ${\cal S}' = \{{\bf t}_k, k=0, \ldots, K-1\}$ has zero sample mean, i.e.,
$$
\frac{1}{K} \sum_{k=0}^{K-1} {\bf t}_k = 0
$$
and unit sample variance, i.e.,
$$
\frac{1}{K} \sum_{k=0}^{K-1} t_{ki}^2 = 1
$$
It is not difficult to verify that this can be done by taking ${\bf m}$ equal to the sample mean
$$
{\bf m} = \frac{1}{K} \sum_{k=0}^{K-1} {\bf x}_k
$$
and taking the diagonal components of ${\bf D}$ equal to the inverse of the standard deviation of each feature, i.e.,
$$
d_{ii} = \frac{1}{\sqrt{\frac{1}{K} \sum_{k=0}^{K-1} (x_{ki} - m_i)^2}}
$$
Using the data matrix ${\bf X}$ and the broadcasting property of the basic mathematical operators in Python, the implementation of this normalization is straightforward.
Exercise 1: Apply a standard scaling to the data matrix. To do so:
Compute the mean, and store it in variable m (you can use method mean from numpy)
Compute the standard deviation of each feature, and store the result in variable s (you can use method std from numpy)
Take advantage of the broadcasting property to normalize the data matrix in a single line of code. Save the result in variable T.
End of explanation
# Testing mean
print(f"- The mean of the transformed features are: {np.mean(T, axis=0)}")
print(f"- The standard deviation of the transformed features are: {np.std(T, axis=0)}")
Explanation: We can test if the transformed features have zero-mean and unit variance:
End of explanation
# Now you can visually check your solution
plt.figure(figsize=(4, 4))
plt.scatter(T[:, 0], T[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show()
Explanation: (note that the results can deviate from 0 or 1 due to finite precision errors)
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
print(f'The sample mean is m = {scaler.mean_}')
T2 = scaler.transform(X)
plt.figure(figsize=(4, 4))
plt.scatter(T2[:, 0], T2[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show()
Explanation: 2.1.1. Implementation in sklearn
The sklearn package contains a method to perform the standard scaling over a given data matrix.
End of explanation
# Write your solution here
# <SOL>
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(2, 4))
scaler.fit(X)
T24 = scaler.transform(X)
# </SOL>
# We can visually check that the transformed data features lie in the selected range.
plt.figure(figsize=(4, 4))
plt.scatter(T24[:, 0], T24[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show()
Explanation: Note that, once we have defined the scaler object in Python, we can apply the scaling transformation to other datasets. This will be useful in further topics, when the dataset may be split into several matrices and we may be interested in defining the transformation using one matrix and applying it to others.
2.2. Other normalizations.
There are some alternatives to the standard scaling that may be interesting for some datasets. Here we show some of them, available in the preprocessing module of sklearn:
preprocessing.MaxAbsScaler: Scale each feature by its maximum absolute value. As a result, all feature values will lie in the interval [-1, 1].
preprocessing.MinMaxScaler: Transform features by scaling each feature to a given range. Also, all feature values will lie in the specified interval.
preprocessing.Normalizer: Normalize samples individually to unit norm. That is, it applies the transformation ${\bf t}_k = \frac{1}{\|{\bf x}_k\|} {\bf x}_k$
preprocessing.PowerTransformer: Apply a power transform featurewise to make data more Gaussian-like.
preprocessing.QuantileTransformer: Transform features using quantile information. The transformed features follow a specific target distribution (uniform or normal).
preprocessing.RobustScaler: Scale features using statistics that are robust to outliers. This way, anomalous values in one or very few samples cannot have a strong influence in the normalization.
You can find a more detailed explanation of these transformations in the sklearn documentation.
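As a minimal illustration of one of these alternatives (a sketch applied to the same data matrix X; picking RobustScaler here is only an example):
from sklearn.preprocessing import RobustScaler
T_robust = RobustScaler().fit_transform(X)  # scales each feature using its median and interquartile range
print(T_robust.mean(axis=0), T_robust.std(axis=0))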
Exercise 2: Use sklearn to transform the data matrix X into a matrix T24 such that the minimum feature value is 2 and the maximum is 4.
(Hint: select and import the appropriate preprocessing module from sklearn and follow the same steps used in the code cell above for the standard scaler)
End of explanation |
1,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artifact Correction with ICA
ICA finds directions in the feature space
corresponding to projections with high non-Gaussianity. We thus obtain
a decomposition into independent components, and the artifact's contribution
is typically localized in only a small number of components.
These components have to be correctly identified and removed.
If EOG or ECG recordings are available, they can be used in ICA to
automatically select the corresponding artifact components from the
decomposition. To do so, you have to first build an
Step1: Before applying artifact correction please learn about your actual artifacts
by reading sphx_glr_auto_tutorials_plot_artifacts_detection.py.
<div class="alert alert-danger"><h4>Warning</h4><p>ICA is sensitive to low-frequency drifts and therefore
requires the data to be high-pass filtered prior to fitting.
Typically, a cutoff frequency of 1 Hz is recommended. Note that
FIR filters prior to MNE 0.15 used the ``'firwin2'`` design
method, which generally produces rather shallow filters that
might not work for ICA processing. Therefore, it is recommended
to use IIR filters for MNE up to 0.14. In MNE 0.15, FIR filters
can be designed with the ``'firwin'`` method, which generally
produces much steeper filters. This method will be the default
FIR design method in MNE 0.16. In MNE 0.15, you need to
explicitly set ``fir_design='firwin'`` to use this method. This
is the recommended filter method for ICA preprocessing.</p></div>
Fit ICA
First, choose the ICA method. There are currently four possible choices
Step2: Define the ICA object instance
Step3: we avoid fitting ICA on crazy environmental artifacts that would
dominate the variance and decomposition
Step4: Plot ICA components
Step5: Component properties
Let's take a closer look at properties of first three independent components.
Step6: we can see that the data were filtered so the spectrum plot is not
very informative, let's change that
Step7: we can also take a look at multiple different components at once
Step8: Instead of opening individual figures with component properties, we can
also pass an instance of Raw or Epochs in inst argument to
ica.plot_components. This would allow us to open component properties
interactively by clicking on individual component topomaps. In the notebook
this works only when running matplotlib in interactive mode
(%matplotlib).
Step9: Advanced artifact detection
Let's use a more efficient way to find artifacts
Step10: We can take a look at the properties of that component, now using the
data epoched with respect to EOG events.
We will also use a little bit of smoothing along the trials axis in the
epochs image
Step11: That component is showing a prototypical average vertical EOG time course.
Pay attention to the labels, a customized read-out of the
mne.preprocessing.ICA.labels_
Step12: These labels were used by the plotters and are added automatically
by artifact detection functions. You can also manually edit them to annotate
components.
Now let's see how we would modify our signals if we removed this component
from the data.
Step13: Note that nothing is yet removed from the raw data. To remove the effects of
the rejected components,
Step14: Exercise
Step15: What if we don't have an EOG channel?
We could either
Step16: The idea behind corrmap is that artifact patterns are similar across subjects
and can thus be identified by correlating the different patterns resulting
from each solution with a template. The procedure is therefore
semi-automatic.
Step17: Remember, don't do this at home! Start by reading in a collection of ICA
solutions instead. Something like
Step18: We use our original ICA as reference.
Step19: Investigate our reference ICA
Step20: Which one is the bad EOG component?
Here we rely on our previous detection algorithm. You would need to decide
yourself if no automatic detection was available.
Step21: Indeed it looks like an EOG, also in the average time course.
We construct a list where our reference run is the first element. Then we
can detect similar components from the other runs (the other ICA objects)
using
Step22: Now we can run the CORRMAP algorithm.
Step23: Nice, we have found similar ICs from the other (simulated) runs!
In this way, you can detect a type of artifact semi-automatically for example
for all subjects in a study.
The detected template can also be retrieved as an array and stored; this
array can be used as an alternative template to | Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import ICA
from mne.preprocessing import create_eog_epochs, create_ecg_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# 1Hz high pass is often helpful for fitting ICA (already lowpassed @ 40 Hz)
raw.filter(1., None, n_jobs=1, fir_design='firwin')
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
Explanation: Artifact Correction with ICA
ICA finds directions in the feature space
corresponding to projections with high non-Gaussianity. We thus obtain
a decomposition into independent components, and the artifact's contribution
is typically localized in only a small number of components.
These components have to be correctly identified and removed.
If EOG or ECG recordings are available, they can be used in ICA to
automatically select the corresponding artifact components from the
decomposition. To do so, you have to first build an :class:mne.Epochs object
around blink or heartbeat events.
ICA is implemented in MNE using the :class:mne.preprocessing.ICA class,
which we will review here.
End of explanation
method = 'fastica'
# Choose other parameters
n_components = 25 # if float, select n_components by explained variance of PCA
decim = 3 # we need sufficient statistics, not all time points -> saves time
# we will also set state of the random number generator - ICA is a
# non-deterministic algorithm, but we want to have the same decomposition
# and the same order of components each time this tutorial is run
random_state = 23
Explanation: Before applying artifact correction please learn about your actual artifacts
by reading sphx_glr_auto_tutorials_plot_artifacts_detection.py.
<div class="alert alert-danger"><h4>Warning</h4><p>ICA is sensitive to low-frequency drifts and therefore
requires the data to be high-pass filtered prior to fitting.
Typically, a cutoff frequency of 1 Hz is recommended. Note that
FIR filters prior to MNE 0.15 used the ``'firwin2'`` design
method, which generally produces rather shallow filters that
might not work for ICA processing. Therefore, it is recommended
to use IIR filters for MNE up to 0.14. In MNE 0.15, FIR filters
can be designed with the ``'firwin'`` method, which generally
produces much steeper filters. This method will be the default
FIR design method in MNE 0.16. In MNE 0.15, you need to
explicitly set ``fir_design='firwin'`` to use this method. This
is the recommended filter method for ICA preprocessing.</p></div>
Fit ICA
First, choose the ICA method. There are currently four possible choices:
fastica, picard, infomax and extended-infomax.
<div class="alert alert-info"><h4>Note</h4><p>The default method in MNE is FastICA, which along with Infomax is
one of the most widely used ICA algorithms. Picard is a
new algorithm that is expected to converge faster than FastICA and
Infomax, especially when the aim is to recover accurate maps with
a low tolerance parameter, see [1]_ for more information.</p></div>
End of explanation
ica = ICA(n_components=n_components, method=method, random_state=random_state)
print(ica)
Explanation: Define the ICA object instance
End of explanation
reject = dict(mag=5e-12, grad=4000e-13)
ica.fit(raw, picks=picks_meg, decim=decim, reject=reject)
print(ica)
Explanation: we avoid fitting ICA on crazy environmental artifacts that would
dominate the variance and decomposition
End of explanation
ica.plot_components() # can you spot some potential bad guys?
Explanation: Plot ICA components
End of explanation
# first, component 0:
ica.plot_properties(raw, picks=0)
Explanation: Component properties
Let's take a closer look at properties of first three independent components.
End of explanation
ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.})
Explanation: we can see that the data were filtered so the spectrum plot is not
very informative, let's change that:
End of explanation
ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.})
Explanation: we can also take a look at multiple different components at once:
End of explanation
# uncomment the code below to test the interactive mode of plot_components:
# ica.plot_components(picks=range(10), inst=raw)
Explanation: Instead of opening individual figures with component properties, we can
also pass an instance of Raw or Epochs in inst argument to
ica.plot_components. This would allow us to open component properties
interactively by clicking on individual component topomaps. In the notebook
this works only when running matplotlib in interactive mode
(%matplotlib).
End of explanation
eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13),
picks=picks_meg).average()
eog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials
eog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation
ica.plot_scores(scores, exclude=eog_inds) # look at r scores of components
# we can see that only one component is highly correlated and that this
# component got detected by our correlation analysis (red).
ica.plot_sources(eog_average, exclude=eog_inds) # look at source time course
Explanation: Advanced artifact detection
Let's use a more efficient way to find artifacts
End of explanation
ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.},
image_args={'sigma': 1.})
Explanation: We can take a look at the properties of that component, now using the
data epoched with respect to EOG events.
We will also use a little bit of smoothing along the trials axis in the
epochs image:
End of explanation
print(ica.labels_)
Explanation: That component is showing a prototypical average vertical EOG time course.
Pay attention to the labels, a customized read-out of the
mne.preprocessing.ICA.labels_:
End of explanation
ica.plot_overlay(eog_average, exclude=eog_inds, show=False)
# red -> before, black -> after. Yes! We remove quite a lot!
# to definitely register this component as a bad one to be removed
# there is the ``ica.exclude`` attribute, a simple Python list
ica.exclude.extend(eog_inds)
# from now on the ICA will reject this component even if no exclude
# parameter is passed, and this information will be stored to disk
# on saving
# uncomment this for reading and writing
# ica.save('my-ica.fif')
# ica = read_ica('my-ica.fif')
Explanation: These labels were used by the plotters and are added automatically
by artifact detection functions. You can also manually edit them to annotate
components.
Now let's see how we would modify our signals if we removed this component
from the data.
End of explanation
raw_copy = raw.copy().crop(0, 10)
ica.apply(raw_copy)
raw_copy.plot() # check the result
Explanation: Note that nothing is yet removed from the raw data. To remove the effects of
the rejected components,
:meth:the apply method <mne.preprocessing.ICA.apply> must be called.
Here we apply it on the copy of the first ten seconds, so that the rest of
this tutorial still works as intended.
End of explanation
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.})
Explanation: Exercise: find and remove ECG artifacts using ICA!
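A possible completion of this exercise (a sketch that mirrors the EOG workflow used above):
# Mark the detected ECG components and remove their contribution from a copy of the data
ica.exclude.extend(ecg_inds)
raw_ecg_clean = raw.copy().crop(0, 10)
ica.apply(raw_ecg_clean)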
End of explanation
from mne.preprocessing.ica import corrmap # noqa
Explanation: What if we don't have an EOG channel?
We could either:
make a bipolar reference from frontal EEG sensors and use as virtual EOG
channel. This can be tricky though as you can only hope that the frontal
EEG channels only reflect EOG and not brain dynamics in the prefrontal
cortex.
go for a semi-automated approach, using template matching.
In MNE-Python option 2 is easily achievable and it might give better results,
so let's have a look at it.
End of explanation
# We'll start by simulating a group of subjects or runs from a subject
start, stop = [0, raw.times[-1]]
intervals = np.linspace(start, stop, 4, dtype=np.float)
icas_from_other_data = list()
raw.pick_types(meg=True, eeg=False) # take only MEG channels
for ii, start in enumerate(intervals):
if ii + 1 < len(intervals):
stop = intervals[ii + 1]
print('fitting ICA from {0} to {1} seconds'.format(start, stop))
this_ica = ICA(n_components=n_components, method=method).fit(
raw, start=start, stop=stop, reject=reject)
icas_from_other_data.append(this_ica)
Explanation: The idea behind corrmap is that artifact patterns are similar across subjects
and can thus be identified by correlating the different patterns resulting
from each solution with a template. The procedure is therefore
semi-automatic. :func:mne.preprocessing.corrmap hence takes a list of
ICA solutions and a template, that can be an index or an array.
As we don't have different subjects or runs available today, here we will
simulate ICA solutions from different subjects by fitting ICA models to
different parts of the same recording. Then we will use one of the components
from our original ICA as a template in order to detect sufficiently similar
components in the simulated ICAs.
The following block of code simulates having ICA solutions from different
runs/subjects so it should not be used in real analysis - use independent
data sets instead.
End of explanation
print(icas_from_other_data)
Explanation: Remember, don't do this at home! Start by reading in a collection of ICA
solutions instead. Something like:
icas = [mne.preprocessing.read_ica(fname) for fname in ica_fnames]
End of explanation
reference_ica = ica
Explanation: We use our original ICA as reference.
End of explanation
reference_ica.plot_components()
Explanation: Investigate our reference ICA:
End of explanation
reference_ica.plot_sources(eog_average, exclude=eog_inds)
Explanation: Which one is the bad EOG component?
Here we rely on our previous detection algorithm. You would need to decide
yourself if no automatic detection was available.
End of explanation
icas = [reference_ica] + icas_from_other_data
template = (0, eog_inds[0])
Explanation: Indeed it looks like an EOG, also in the average time course.
We construct a list where our reference run is the first element. Then we
can detect similar components from the other runs (the other ICA objects)
using :func:mne.preprocessing.corrmap. So our template must be a tuple like
(reference_run_index, component_index):
End of explanation
fig_template, fig_detected = corrmap(icas, template=template, label="blinks",
show=True, threshold=.8, ch_type='mag')
Explanation: Now we can run the CORRMAP algorithm.
End of explanation
eog_component = reference_ica.get_components()[:, eog_inds[0]]
Explanation: Nice, we have found similar ICs from the other (simulated) runs!
In this way, you can detect a type of artifact semi-automatically for example
for all subjects in a study.
The detected template can also be retrieved as an array and stored; this
array can be used as an alternative template to
:func:mne.preprocessing.corrmap.
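For example (a sketch reusing the array retrieved above; the label name is arbitrary):
corrmap(icas, template=eog_component, label="blinks_array", show=False, threshold=.8, ch_type='mag')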
End of explanation |
1,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing time series metabolome profile
by Kozo Nishida (Riken, Japan)
Software Requirements
Please install the following software packages to run this workflow
Step1: Load a KGML pathway data file with KEGGscape REST API
Step2: Load table data file as Pandas DataFrame
Step3: Convert the DataFrame to JSON and send it to Cytoscape
Step4: Set values to the chart column
Step5: Create Visual Style for Custom Mapping | Python Code:
import json
import requests
import pandas as pd
PORT_NUMBER = 1234
BASE_URL = "http://localhost:" + str(PORT_NUMBER) + "/v1/"
HEADERS = {'Content-Type': 'application/json'}
Explanation: Visualizing time series metabolome profile
by Kozo Nishida (Riken, Japan)
Software Requirements
Please install the following software packages to run this workflow:
KEGGscape
enhancedGraphics
Background
This is a sample workflow to automate a complex Cytoscape data integration/visualization process. Please read the following document for more background:
https://github.com/idekerlab/KEGGscape/wiki/How-to-visualize-time-series-metabolome-profile
End of explanation
requests.get("http://localhost:1234/keggscape/v1/ath00020")
res = requests.get("http://localhost:1234/v1/networks/currentNetwork")
result = json.loads(res.content)
pathway_suid = result["data"]["networkSUID"]
print("Pathway SUID = " + str(pathway_suid))
Explanation: Load a KGML pathway data file with KEGGscape REST API
End of explanation
profile_csv = "https://raw.githubusercontent.com/idekerlab/KEGGscape/develop/wiki/data/light-dark-20.csv"
profile_df = pd.read_csv(profile_csv)
profile_df.head()
Explanation: Load table data file as Pandas DataFrame
End of explanation
profile = json.loads(profile_df.to_json(orient="records"))
# print(json.dumps(profile, indent=4))
new_table_data = {
"key": "KEGG_NODE_LABEL",
"dataKey": "KEGG",
"data": profile
}
update_table_url = BASE_URL + "networks/" + str(pathway_suid) + "/tables/defaultnode"
requests.put(update_table_url, data=json.dumps(new_table_data), headers=HEADERS)
Explanation: Convert the DataFrame to JSON and send it to Cytoscape
End of explanation
chart_entry = 'barchart: attributelist="ld20t14,ld20t16,ld20t20,ld20t24,ld20t28,ld20t32,ld20t36,ld20t40,ld20t44,ld20t48,ld20t52,ld20t56,ld20t60,ld20t64,ld20t68,ld20t72" colorlist="up:red,zero:red,down:red" showlabels="false"'
target_row_url = BASE_URL + "networks/" + str(pathway_suid) + "/tables/defaultnode/columns/KEGG"
res2 = requests.get(target_row_url)
matched = json.loads(res2.content)["values"]
df2 = pd.DataFrame(columns=["id", "chart"]);
df2["id"] = matched
df2["chart"] = chart_entry
data = json.loads(df2.to_json(orient="records"))
chart_data = {
"key": "KEGG",
"dataKey": "id",
"data": data
}
requests.put(update_table_url, data=json.dumps(chart_data), headers=HEADERS)
Explanation: Set values to the chart column
End of explanation
custom_graphics_mapping = {
"mappingType" : "passthrough",
"mappingColumn" : "chart",
"mappingColumnType" : "String",
"visualProperty" : "NODE_CUSTOMGRAPHICS_1"
}
style_url = BASE_URL + "styles/KEGG Style/mappings"
requests.post(style_url, data=json.dumps([custom_graphics_mapping]), headers=HEADERS)
Explanation: Create Visual Style for Custom Mapping
End of explanation |
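As an optional last step (my own sketch, assuming the standard cyREST apply endpoint GET /v1/apply/styles/{styleName}/{networkSUID} is available), the style can be applied to the network so the bar charts become visible:
apply_style_url = BASE_URL + "apply/styles/KEGG Style/" + str(pathway_suid)
requests.get(apply_style_url)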
1,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 4
Work on this before the next lecture on 1 May. We will talk about questions, comments, and solutions during the exercise after the third lecture.
Please do form study groups! When you do, make sure you can explain everything in your own words, do not simply copy&paste from others.
The solutions to a lot of these problems can probably be found with Google. Please don't. You will not learn a lot by copy&pasting from the internet.
If you want to get credit/examination on this course please upload your work to your GitHub repository for this course before the next lecture starts and post a link to your repository in this thread. If you worked on things together with others please add their names to the notebook so we can see who formed groups.
These are some useful default imports for plotting and numpy
Step1: Pitfalls of estimating model performance
This question sets up a classification problem to illustrate a common pitfall in
evaluating model performance. To keep things simple, the ys in this classroom problem
are picked at random
Step2: A common task when building a new model is to select only those variables that are "best"
for the problem. This selection procedure can take many different shapes, here we will
compute the correlation of each feature with the target, select the 20 features that
have the highest correlation and use those in our gradient boosted tree ensemble.
We will then use cross validation to evaluate the performance. | Python Code:
%config InlineBackend.figure_format='retina'
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["font.size"] = 14
from sklearn.utils import check_random_state
Explanation: Exercise 4
Work on this before the next lecture on 1 May. We will talk about questions, comments, and solutions during the exercise after the third lecture.
Please do form study groups! When you do, make sure you can explain everything in your own words, do not simply copy&paste from others.
The solutions to a lot of these problems can probably be found with Google. Please don't. You will not learn a lot by copy&pasting from the internet.
If you want to get credit/examination on this course please upload your work to your GitHub repository for this course before the next lecture starts and post a link to your repository in this thread. If you worked on things together with others please add their names to the notebook so we can see who formed groups.
These are some useful default imports for plotting and numpy
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_regression
np.random.seed(6450345)
Explanation: Pitfalls of estimating model performance
This question sets up a classification problem to illustrate a common pitfall in
evaluating model performance. To keep things simple, the ys in this classroom problem
are picked at random: there is no way for a classifier to learn how to model y based
on the features provided. This means we know what the true accuracy is: 0.5.
End of explanation
def make_data(N=1000, n_vars=10,
n_classes=2):
X = np.random.normal(size=(N,n_vars))
y = np.random.choice(n_classes, N)
return X, y
X,y = make_data(N=2000, n_vars=50000)
select = SelectKBest(f_regression, k=20)
X_sel = select.fit_transform(X, y)
clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X_sel, y, cv=5, n_jobs=8)
print("Scores on each subset:")
print(scores)
avg = (100*np.mean(scores), 100*np.std(scores)/np.sqrt(scores.shape[0]))
print("Average score and uncertainty: (%.2f +- %.3f)%%" % avg)
Explanation: A common task when building a new model is to select only those variables that are "best"
for the problem. This selection procedure can take many different shapes, here we will
compute the correlation of each feature with the target, select the 20 features that
have the highest correlation and use those in our gradient boosted tree ensemble.
We will then use cross validation to evaluate the performance.
End of explanation |
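For contrast, here is a minimal sketch (my own addition, not part of the original exercise) of the unbiased protocol: the feature selection is wrapped together with the classifier in a scikit-learn Pipeline, so SelectKBest is re-fitted inside every cross-validation fold and never sees the held-out data. The scores should now scatter around the true accuracy of 0.5.
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(SelectKBest(f_regression, k=20), GradientBoostingClassifier())
honest_scores = cross_val_score(pipe, X, y, cv=5, n_jobs=8)
print("Scores with selection inside the folds:")
print(honest_scores)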
1,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stay Alert! The Ford Challenge
by Scott Josephson
Driving while distracted, fatigued or drowsy may lead to accidents. Activities that divert the driver's attention from the road ahead, such as engaging in a conversation with other passengers in the car, making or receiving phone calls, sending or receiving text messages, eating while driving or events outside the car may cause driver distraction. Fatigue and drowsiness can result from driving long hours or from lack of sleep.
The data for this Kaggle challenge shows the results of a number of "trials", each one representing about 2 minutes of sequential data that are recorded every 100 ms during a driving session on the road or in a driving simulator. The trials are samples from some 100 drivers of both genders, and of different ages and ethnic backgrounds. The files are structured as follows
Step1: Get the Data
Read in the fordtrain.csv file and set it to a data frame called ford_train.
Split the data into training set and testing set using train_test_split
Step2: Check the head of ford_train
Step3: Logistic Regression
Now it's time to do a train test split, and train our model!
Choose columns that you want to train on!
Step4: Train and fit a logistic regression model on the training set.
Step5: Predictions and Evaluations
Now predict values for the testing data.
Step6: Create a classification report for the model. | Python Code:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
Explanation: Stay Alert! The Ford Challenge
by Scott Josephson
Driving while distracted, fatigued or drowsy may lead to accidents. Activities that divert the driver's attention from the road ahead, such as engaging in a conversation with other passengers in the car, making or receiving phone calls, sending or receiving text messages, eating while driving or events outside the car may cause driver distraction. Fatigue and drowsiness can result from driving long hours or from lack of sleep.
The data for this Kaggle challenge shows the results of a number of "trials", each one representing about 2 minutes of sequential data that are recorded every 100 ms during a driving session on the road or in a driving simulator. The trials are samples from some 100 drivers of both genders, and of different ages and ethnic backgrounds. The files are structured as follows:
The first column is the Trial ID - each period of around 2 minutes of sequential data has a unique trial ID. For instance, the first 1210 observations represent sequential observations every 100ms, and therefore all have the same trial ID
The second column is the observation number - this is a sequentially increasing number within one trial ID
The third column has a value X for each row where
X = 1 if the driver is alert
X = 0 if the driver is not alert
The next 8 columns with headers P1, P2 , …….., P8 represent physiological data;
The next 11 columns with headers E1, E2, …….., E11 represent environmental data;
The next 11 columns with headers V1, V2, …….., V11 represent vehicular data;
Import Libraries
End of explanation
ford_train = pd.read_csv('fordtrain.csv')
Explanation: Get the Data
Read in the fordtrain.csv file and set it to a data frame called ford_train.
Split the data into training set and testing set using train_test_split
End of explanation
ford_train.head()
ford_train.info()
Explanation: Check the head of ford_train
End of explanation
X_train, X_test, y_train, y_test = train_test_split(ford_train.drop('IsAlert',axis=1),ford_train['IsAlert'],
test_size=0.30,random_state=101)
Explanation: Logistic Regression
Now it's time to do a train test split, and train our model!
Choose columns that you want to train on!
End of explanation
logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)
Explanation: Train and fit a logistic regression model on the training set.
End of explanation
predictions = logmodel.predict(X_test)
Explanation: Predictions and Evaluations
Now predict values for the testing data.
End of explanation
print(classification_report(y_test,predictions))
Explanation: Create a classification report for the model.
End of explanation |
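As a small addition (not in the original notebook): accuracy_score is imported above but never used, and a confusion matrix summarises the error types at a glance.
from sklearn.metrics import confusion_matrix
print(accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions))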
1,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Show noise levels from empty room data
This shows how to use
Step1: We can plot the absolute noise levels | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
raw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'ernoise_raw.fif'), preload=True)
Explanation: Show noise levels from empty room data
This shows how to use :meth:mne.io.Raw.plot_psd to examine noise levels
of systems. See :footcite:KhanCohen2013 for an example.
End of explanation
raw_erm.plot_psd(tmax=10., average=True, spatial_colors=False,
dB=False, xscale='log')
Explanation: We can plot the absolute noise levels:
End of explanation |
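For comparison (a quick variation on the call above, not part of the original example), the same plot with dB=True shows the noise levels on a logarithmic dB scale:
raw_erm.plot_psd(tmax=10., average=True, spatial_colors=False,
                 dB=True, xscale='log')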
1,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BeautifulSoup 4
Step1: findNext() - Finding e-mail through its label
There is an unlimited number of options for matching an e-mail from a page. This time, we will try to find my e-mail by first finding its label (E-mail) and going forward to the e-mail itself using the findNext() function.
Thus, these are the steps to take
Step2: Similarly, if one uses the findAllNext() function, then s/he will get a list of all the following tags (with their contents), where the very first one will be the e-mail we were looking for.
Step3: findPrevious() and findAllPrevious() functions follow the same intuition.
findNextSibling() - Finding e-mail through its label
The same objective can be achieved using the findNextSibling() function. The difference between findNext() and findNextSibling() is that the former finds the next tag, while the latter tries to find the next tag at the same level, i.e. a sibling rather than a parent or a child. Similarly, findNextSiblings() finds all the siblings that follow the current tag.
Step4: findPreviousSibling() and findPreviousSiblings() functions follow the same intuition.
findParent()
The findParent() function returns the whole parent tag and its content for the very first parent of the current tag. For example, the e-mail is included in a list, which means the direct parent is a <li> tag.
Step5: findParents() function will provide the list of all the parents until the top one (<html>) starting from the direct parent and ending the list with the oldest parent.
Step6: findChild()
The findChild() function follows the same intuition, yet, for our e-mail case it will not return anything, as the e-mail tag does not have any children. | Python Code:
import requests
from BeautifulSoup import *
url = "https://hrantdavtyan.github.io/"
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page)
Explanation: BeautifulSoup 4: Navigation
BeautifulSoup is a powerful package mostly due to the abundance of Navigation methods in the package. Below, the list of the most used Navigation functions is provided:
find / findAll
findNext / findAllNext
findPrevious / findAllPrevious
findNextSibling / findNextSiblings
findParent / findParents
findChild / findChildren
The functions on the left-hand side of the slash (in singular form) find and return the very first matching element, so the outcome is a single tag. The functions on the right-hand side of the slash (in plural form) find and return all matching elements, so the outcome is a list of tags.
End of explanation
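A quick illustration of the singular/plural distinction on the same soup object (my own sketch; the <a> tag is just an arbitrary example):
first_link = soup.find('a')
all_links = soup.findAll('a')
print(first_link)
print(len(all_links))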
# finding all labels
label_tags = soup.findAll('label')
print(label_tags)
# choosing our label of interest
email_label = label_tags[3]
print(email_label)
# navigating one tag forward and getting the text/string of it
email = email_label.findNext().text
print(email)
Explanation: findNext() - Finding e-mail through its label
There is an unlimited number of options for matching an e-mail from a page. This time, we will try to find my e-mail by first finding its label (E-mail) and going forward to the e-mail itself using the findNext() function.
Thus, these are the steps to take:
Find all labels,
Choose the label that we are interested in,
Navigate one step forward to get the e-mail with tags,
Get the text component out of the tag.
End of explanation
print(email_label.findAllNext())
Explanation: Similarly, if one uses the findAllNext() function, then s/he will get a list of all the following tags (with their contents), where the very first one will be the e-mail we were looking for.
End of explanation
email_sibling = email_label.findNextSibling().text
print(email_sibling)
Explanation: findPrevious() and findAllPrevious() functions follow the same intuition.
findNextSibling() - Finding e-mail through its label
The same objective can be achieved using the findNextSibling() function. The difference between findNext() and findNextSibling() is that the former finds the next tag, while the latter tries to find the next tag at the same level, i.e. a sibling rather than a parent or a child. Similarly, findNextSiblings() finds all the siblings that follow the current tag.
End of explanation
email_parent = email_label.findParent()
print(email_parent)
Explanation: findPreviousSibling() and findPreviousSiblings() functions follow the same intuition.
findParent()
The findParent() function returns the whole parent tag and its content for the very first parent of the current tag. For example, the e-mail is included in a list, which means the direct parent is a <li> tag.
End of explanation
email_parents = email_label.findParents()
print(email_parents)
email_parents[-1]
Explanation: findParents() function will provide the list of all the parents until the top one (<html>) starting from the direct parent and ending the list with the oldest parent.
End of explanation
email_child = email_label.findChild()
print(email_child)
Explanation: findChild()
The findChild() function follows the same intuition, yet, for our e-mail case it will not return anything, as the e-mail tag does not have any children.
End of explanation |
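To see the child-related functions return something non-empty, one can call them on a tag that does have children, for example the parent found earlier (a small sketch added for illustration):
print(email_parent.findChild())
print(email_parent.findChildren())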
1,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Hill-Tononi Neuron and Synapse Models
Hans Ekkehard Plesser, NMBU/FZ Jülich/U Oslo, 2016-12-01
Background
This notebook describes the neuron and synapse model proposed by Hill and Tononi in J Neurophysiol 93
Step1: Neuron Model
Passive properties
Test relaxation of neuron and threshold to equilibrium values in absence of intrinsic currents and input. We then have
\begin{align}
\tau_m \dot{V}&= \left[-g_{NaL}(V-E_{Na})-g_{KL}(V-E_K)\right] = -(g_{NaL}+g_{KL})V+(g_{NaL}E_{Na}+g_{KL}E_K)\
\Leftrightarrow\quad \tau_{\text{eff}}\dot{V} &= -V+V_{\infty}\
V_{\infty} &= \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}}\
\tau_{\text{eff}}&=\frac{\tau_m}{g_{NaL}+g_{KL}}
\end{align}
with solution
\begin{equation}
V(t) = V_0 e^{-\frac{t}{\tau_{\text{eff}}}} + V_{\infty}\left(1-e^{-\frac{t}{\tau_{\text{eff}}}} \right)
\end{equation}
and for the threshold
\begin{equation}
\theta(t) = \theta_0 e^{-\frac{t}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t}{\tau_{\theta}}} \right)
\end{equation}
Step2: Agreement is excellent.
Spiking without intrinsic currents or synaptic input
The equations above hold for input current $I(t)$, but with
\begin{equation}
V_{\infty}(I) = \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}} + \frac{I}{g_{NaL}+g_{KL}}
\end{equation}
In NEST, we need to inject input current into the ht_neuron with a dc_generator, whence the current will set on only at a later time and we need to take this into account. For simplicity, we assume that $V$ is initialized to $V_{\infty}(I=0)$ and that current onset is at $t_I$. We then have for $t\geq t_I$
\begin{equation}
V(t) = V_{\infty}(0) e^{-\frac{t-t_I}{\tau_{\text{eff}}}} + V_{\infty}(I)\left(1-e^{-\frac{t-t_I}{\tau_{\text{eff}}}} \right)
\end{equation}
If we also initialize $\theta=\theta_{\text{eq}}$, the threshold is constant and we have the first spike at
\begin{align}
V(t) &= \theta_{\text{eq}}\
\Leftrightarrow\quad t &= t_I -\tau_{\text{eff}} \ln \frac{\theta_{\text{eq}}-V_{\infty}(I)}{V_{\infty}(0)-V_{\infty}(I)}
\end{align}
Step3: Agreement is as good as possible
Step5: ISIs are as predicted
Step6: I_h channel
The $I_h$ current is governed by
\begin{align}
I_h &= g_{\text{peak}, h} m_h(V, t) (V-E_h) \
\frac{\text{d}m_h}{\text{d}t} &= \frac{m_h^{\infty}-m_h}{\tau_{m,h}(V)}\
m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \
\tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.87 + 0.0701V)}
\end{align}
We first inspect $m_h^{\infty}(V)$ and $\tau_{m,h}(V)$ to prepare for testing
Step7: The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs.
Curves are in good agreement with Fig 5 of Huguenard and McCormick, J Neurophysiol 68
Step8: Agreement is very good
Note that currents have units of $mV$ due to choice of dimensionless conductances.
I_T Channel
The corrected equations used for the $I_T$ channel in NEST are
\begin{align}
I_T &= g_{\text{peak}, T} m_T^2(V, t) h_T(V,t) (V-E_T) \
m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\text{mV}}\right)}\
\tau_{m,T}(V) &= 0.13\text{ms}
+ \frac{0.22\text{ms}}{\exp\left(-\frac{V + 132\text{mV}}{16.7\text{mV}}\right) + \exp\left(\frac{V + 16.8\text{mV}}{18.2\text{mV}}\right)} \
h_T^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+83\text{mV}}{4\text{mV}}\right)}\
\tau_{h,T}(V) &= 8.2\text{ms} + \frac{56.6\text{ms} + 0.27\text{ms} \exp\left(\frac{V + 115.2\text{mV}}{5\text{mV}}\right)}{1 + \exp\left(\frac{V + 86\text{mV}}{3.2\text{mV}}\right)}
\end{align}
Step9: Time constants here are much shorter than for I_h
Time constants are about five times shorter than in Fig 1 of Huguenard and McCormick, J Neurophysiol 68
Step10: Also here the results are in good agreement and the error appears acceptable.
I_NaP channel
This channel adapts instantaneously to changes in membrane potential
Step11: Perfect agreement
Step structure is because $V$ changes only every second.
I_KNa channel (aka I_DK)
Equations for this channel are
\begin{align}
I_{DK} &= - g_{\text{peak},DK} m_{DK}^{\infty}(V,t) (V - E_{DK})\
m_{DK}^{\infty} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\
\frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\text{eq}}}{\tau_D} = \frac{D_{\infty}(V)-D}{\tau_D} \
D_{\infty}(V) &= \tau_D D_{\text{influx}}(V) + {D_{\text{eq}}}\
D_{\text{influx}} &= \frac{D_{\text{influx,peak}}}{1+ \exp\left(-\frac{V-D_{\theta}}{\sigma_D}\right)}
\end{align}
with
|$D_{\text{influx,peak}}$|$D_{\text{eq}}$|$\tau_D$|$D_{\theta}$|$\sigma_D$|$d_{1/2}$|
| --
Step12: Properties of I_DK
Step13: Note that current in steady state is
$\approx 0$ for $V < -40$mV
$\sim -(V-E_{DK})$ for $V> -30$mV
Voltage clamp
Step15: Looks very fine.
Note that the current gets appreciable only when $V>-35$ mV
Once that threshold is crossed, the current adjust instantaneously to changes in $V$, since it is in the linear regime.
When returning from $V=0$ to $V=-70$ mV, the current remains large for a long time since $D$ has to drop below 1 before $m_{\infty}$ changes appreciably
Synaptic channels
For synaptic channels, NEST allows recording of conductances, so we test conductances directly. Due to the voltage-dependence of the NMDA channels, we still do this in voltage clamp.
Step16: AMPA, GABA_A, GABA_B channels
Step17: Looks quite good, but the error is maybe a bit larger than one would hope.
But the synaptic rise time is short (0.5 ms) compared to the integration step in NEST (0.1 ms), which may explain the error.
Reducing the time step reduces the error
Step18: Looks good for all
For GABA_B the error is negligible even for dt = 0.1, since the time constants are large.
NMDA Channel
The equations for this channel are
\begin{align}
\bar{g}_{\text{NMDA}}(t) &= m(V, t) g_{\text{NMDA}}(t)\
m(V, t) &= a(V) m_{\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\text{slow}}^*(V, t)\
a(V) &= 0.51 - 0.0028 V \
m^{\infty}(V) &= \frac{1}{ 1 + \exp\left( -S_{\text{act}} ( V - V_{\text{act}} ) \right) } \
m_X^*(V, t) &= \min(m^{\infty}(V), m_X(V, t))\
\frac{\text{d}m_X}{\text{d}t} &= \frac{m^{\infty}(V) - m_X }{ \tau_{\text{Mg}, X}}
\end{align}
where $g_{\text{NMDA}}(t)$ is the beta functions as for the other channels. In case of instantaneous unblocking, $m=m^{\infty}$.
NMDA with instantaneous unblocking
Step19: Looks good
Jumps are due to blocking/unblocking of Mg channels with changes in $V$
NMDA with unblocking over time
Step20: Looks fine, too.
Synapse Model
We test the synapse model by placing it between two parrot neurons, sending spikes with differing intervals and compare to expected weights.
Step21: Perfect agreement, synapse model looks fine.
Integration test | Python Code:
import sys
import math
import numpy as np
import pandas as pd
import scipy.optimize as so
import scipy.integrate as si
import matplotlib.pyplot as plt
import nest
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 3)
Explanation: The Hill-Tononi Neuron and Synapse Models
Hans Ekkehard Plesser, NMBU/FZ Jülich/U Oslo, 2016-12-01
Background
This notebook describes the neuron and synapse model proposed by Hill and Tononi in J Neurophysiol 93:1671-1698, 2005 (doi:10.1152/jn.00915.2004) and their implementation in NEST. The notebook also contains some tests.
This description is based on the original publication and publications cited therein, an analysis of the source code of the original Synthesis implementation kindly provided by Sean Hill, and plausibility arguments.
In what follows, I will refer to the original paper as [HT05].
This notebook was run successfully with NEST Branch HT_NMDA at Commit bec1c52 (15 Dec 2016).
The Neuron Model
Integration
The original Synthesis implementation of the model uses Runge-Kutta integration with fixed 0.25 ms step size, and integrates channels dynamics first, followed by integration of membrane potential and threshold.
NEST, in contrast, integrates the complete 16-dimensional state using a single adaptive-stepsize Runge-Kutta-Fehlberg-4(5) solver from the GNU Science Library (gsl_odeiv_step_rkf45).
Membrane potential
Membrane potential evolution is governed by [HT05, p 1677]
\begin{equation}
\frac{\text{d}V}{\text{d}t} = \frac{-g_{\text{NaL}}(V-E_{\text{Na}})
-g_{\text{KL}}(V-E_{\text{K}})+I_{\text{syn}}+I_{\text{int}}}{\tau_{\text{m}}}
-\frac{g_{\text{spike}}(V-E_{\text{K}})}{\tau_{\text{spike}}}
\end{equation}
The equation does not contain membrane capacitance. As a side-effect, all conductances are dimensionless.
Na and K leak conductances $g_{\text{NaL}}$ and $g_{\text{KL}}$ are constant, although $g_{\text{KL}}$ may be adjusted on slow time scales to mimic neuromodulatory effects.
Reversal potentials $E_{\text{Na}}$ and $E_{\text{K}}$ are assumed constant.
Synaptic currents $I_{\text{syn}}$ and intrinsic currents $I_{\text{int}}$ are discussed below. In contrast to the paper, they are shown with positive sign here (just change in notation).
The last term is a re-polarizing current only active during the refractory period, see below. Note that it has a different (faster) time constant than the other currents. It might have been more natural to use the same time constant for all currents and instead adjust $g_{\text{spike}}$. We follow the original approach here.
Threshold, Spike generation and refractory effects
The threshold evolves according to [HT05, p 1677]
\begin{equation}
\frac{\text{d}\theta}{\text{d}t} = -\frac{\theta-\theta_{\text{eq}}}{\tau_{\theta}}
\end{equation}
The neuron emits a single spike if
- it is not refractory
- membrane potential crosses the threshold, $V\geq\theta$
Upon spike emission,
- $V \leftarrow E_{\text{Na}}$
- $\theta \leftarrow E_{\text{Na}}$
- the neuron becomes refractory for time $t_{\text{spike}}$ (t_ref in NEST)
The repolarizing current is active during, and only during the refractory period:
\begin{equation}
g_{\text{spike}} = \begin{cases} 1 & \text{neuron is refractory}\
0 & \text{else} \end{cases}
\end{equation}
During the refractory period, the neuron cannot fire new spikes, but all state variables evolve freely, nothing is clamped.
The model of spiking and refractoriness is based on Synthesis model PulseIntegrateAndFire.
Intrinsic currents
Note that not all intrinsic currents are active in all populations of the network model presented in [HT05, p1678f].
Intrinsic currents are based on the Hodgkin-Huxley description, i.e.,
\begin{align}
I_X &= g_{\text{peak}, X} m_X(V, t)^N_X h_X(V, t)(V-E_X) \
\frac{\text{d}m_X}{\text{d}t} &= \frac{m_X^{\infty}-m_X}{\tau_{m,X}(V)}\
\frac{\text{d}h_X}{\text{d}t} &= \frac{h_X^{\infty}-h_X}{\tau_{h,X}(V)}
\end{align}
where $I_X$ is the current through channel $X$ and $m_X$ and $h_X$ the activation and inactivation variables for channel $X$.
Pacemaker current $I_h$
Synthesis: IhChannel
\begin{align}
N_h & = 1 \
m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \
\tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.87 + 0.0701V)} \
h_h(V, t) &\equiv 1
\end{align}
Note that subscript $h$ in some cases above marks the $I_h$ channel.
Low-threshold calcium current $I_T$
Synthesis: ItChannel
Equations given in paper
\begin{align}
N_T & \quad \text{not given} \
m_T^{\infty}(V) &= 1/{1 + \exp[ -(V + 59.0)/6.2]} \
\tau_{m,T}(V) &= {0.22/\exp[ -(V + 132.0)/ 16.7]} + \exp[(V + 16.8)/18.2] + 0.13\
h_T^{\infty}(V) &= 1/{1 + \exp[(V + 83.0)/4.0]} \
\tau_{h,T}(V) &= \langle 8.2 + {56.6 + 0.27 \exp[(V + 115.2)/5.0]}\rangle / {1.0 + \exp[(V + 86.0)/3.2]}
\end{align}
Note the following:
- The channel model is based on Destexhe et al, J Neurophysiol 76:2049 (1996).
- In the equation for $\tau_{m,T}$, the second exponential term must be added to the first (in the denominator) to make dimensional sense; 0.13 and 0.22 have unit ms.
- In the equation for $\tau_{h,T}$, the $\langle \rangle$ brackets should be dropped, so that $8.2$ is not divided by the $1+\exp$ term. Otherwise, it could have been combined with the $56.6$.
- This analysis is confirmed by code analysis and comparison with Destexhe et al, J Neurophysiol 76:2049 (1996), Eq 5.
- From Destexhe et al we also find $N_T=2$.
Corrected equations
This leads to the following equations, which are implemented in Synthesis and NEST.
\begin{align}
N_T &= 2 \
m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\text{mV}}\right)}\
\tau_{m,T}(V) &= 0.13\text{ms}
+ \frac{0.22\text{ms}}{\exp\left(-\frac{V + 132\text{mV}}{16.7\text{mV}}\right) + \exp\left(\frac{V + 16.8\text{mV}}{18.2\text{mV}}\right)} \
h_T^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+83\text{mV}}{4\text{mV}}\right)}\
\tau_{h,T}(V) &= 8.2\text{ms} + \frac{56.6\text{ms} + 0.27\text{ms} \exp\left(\frac{V + 115.2\text{mV}}{5\text{mV}}\right)}{1 + \exp\left(\frac{V + 86\text{mV}}{3.2\text{mV}}\right)}
\end{align}
Persistent Sodium Current $I_{NaP}$
Synthesis: INaPChannel
This model has only activation ($m$) and uses the steady-state value, so the only relevant equation is that for $m$. In the paper, it is given as
\begin{equation}
m_{NaP}^{\infty}(V) = 1/[1+\exp(-V+55.7)/7.7]
\end{equation}
Dimensional analysis indicates that the division by $7.7$ should be in the argument of the exponential, and the minus sign needs to be moved so that the current activates as the neuron depolarizes leading to the corrected equation
\begin{equation}
m_{NaP}^{\infty}(V) = \frac{1}{1+\exp\left(-\frac{V+55.7\text{mV}}{7.7\text{mV}}\right)}
\end{equation}
This equation is implemented in NEST and Synthesis and is the one found in Compte et al (2003), cited by [HT05, p 1679].
Corrected exponent
According to Compte et al (2003), $N_{NaP}=3$, i.e.,
\begin{equation}
I_{NaP} = g_{\text{peak,NaP}}(m_{NaP}^{\infty}(V))^3(V-E_{NaP})
\end{equation}
This equation is also given in a comment in Synthesis, but is missing from the implementation.
Note: NEST implements the equation according to Compte et al (2003) with $N_{NaP}=3$, while Synthesis uses $N_{NaP}=1$.
Depolarization-activated Potassium Current $I_{DK}$
Synthesis: IKNaChannel
This model also only has a single activation variable $m$, following more complicated dynamics expressed by $D$.
Equations in paper
\begin{align}
dD/dt &= D_{\text{influx}} - D(1-D_{\text{eq}})/\tau_D \
D_{\text{influx}} &= 1/{1+ \exp[-(V-D_{\theta})/\sigma_D]} \
m_{DK}^{\infty} &= 1/1 + (d_{1/2}D)^{3.5}
\end{align}
There are several problems with these equations.
In the steady state the first equation becomes
\begin{equation}
0 = - D(1-D_{\text{eq}})/\tau_D
\end{equation}
with solution
\begin{equation}
D = 0
\end{equation}
This contradicts both the statement [HT05, p. 1679] that $D\to D_{\text{eq}}$ in this case, and the requirement that $D>0$ to avoid a singluarity in the equation for $m_{DK}^{\infty}$. The most plausible correction is
\begin{equation}
dD/dt = D_{\text{influx}} - (D-D_{\text{eq}})/\tau_D
\end{equation}
The third equation appears incorrect and logic as well as Wang et al, J Neurophysiol 89:3279–3293, 2003, Eq 9, cited in [HT05, p 1679], indicate that the correct equation is
\begin{equation}
m_{DK}^{\infty} = 1/(1 + (d_{1/2} / D)^{3.5})
\end{equation}
Corrected equations
The equations for this channel implemented in NEST are thus
\begin{align}
I_{DK} &= - g_{\text{peak},DK} m_{DK}^{\infty}(V,t) (V - E_{DK})\
m_{DK}^{\infty} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\
\frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\text{eq}}}{\tau_D} = \frac{D_{\infty}(V)-D}{\tau_D} \
D_{\infty}(V) &= \tau_D D_{\text{influx}}(V) + {D_{\text{eq}}}\
D_{\text{influx}} &= \frac{D_{\text{influx,peak}}}{1+ \exp\left(-\frac{V-D_{\theta}}{\sigma_D}\right)}
\end{align}
with
|$D_{\text{influx,peak}}$|$D_{\text{eq}}$|$\tau_D$|$D_{\theta}$|$\sigma_D$|$d_{1/2}$|
| --: | --: | --: | --: | --: | --: |
|$0.025\text{ms}^{-1}$ |$0.001$|$1250\text{ms}$|$-10\text{mV}$|$5\text{mV}$|$0.25$|
Note the following:
- $D_{eq}$ is the equilibrium value only for $D_{\text{influx}}(V)=0$, i.e., in the limit $V\to -\infty$ and $t\to\infty$.
- The actual steady-state value is $D_{\infty}$.
- $d_{1/2}$, $D$, $D_{\infty}$, and $D_{\text{eq}}$ have identical, but arbitrary units, so we can assume them dimensionless ($D$ is a "factor" that in an abstract way represents concentrations).
- $D_{\text{influx}}$ and $D_{\text{influx,peak}}$ are rates of change of $D_{\infty}$ and thus have units of inverse time.
- $m_{DK}^{\infty}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$.
- To the left of this window, $I_{DK}\approx 0$.
- To the right of this window, $I_{DK}\sim -(V-E_{DK})$.
Note: The differential equation for $dD/dt$ differs from the one implemented in Synthesis.
Synaptic channels
These are described in [HT05, p 1678]. Synaptic channels are conductance based with double-exponential time course (beta functions) and normalized for peak conductance. NMDA channels are additionally voltage gated, as described below.
Let ${t_{(j, X)}}$ be the set of all spike arrival times, where $X$ indicates the synapse model and $j$ enumerates spikes. Then the total synaptic input is given by
\begin{equation}
I_{\text{syn}}(t) = - \sum_{{t_{(j, X)}}} \bar{g}_X(t-t_{(j, X)}) (V-E_X)
\end{equation}
Standard Channels
Synthesis: SynChannel
The conductance change due to a single input spike at time $t=0$ through a channel of type $X$ is given by (see below for exceptions)
\begin{align}
\bar{g}_X(t) &= g_X(t)\
g_X(t) &= g_{\text{peak}, X}\frac{\exp(-t/\tau_1) - \exp(-t/\tau_2)}{
\exp(-t_{\text{peak}}/\tau_1) - \exp(-t_{\text{peak}}/\tau_2)} \Theta(t)\
t_{\text{peak}} &= \frac{\tau_2 \tau_1}{\tau_2 - \tau_1} \ln\frac{ \tau_2}{\tau_1}
\end{align}
where $t_{\text{peak}}$ is the time of the conductance maximum and $\tau_1$ and $\tau_2$ are synaptic rise- and decay-time, respectively; $\Theta(t)$ is the Heaviside step function. The equation is integrated using exact integration in Synthesis; in NEST, it is included in the ODE-system integrated using the Runge-Kutta-Fehlberg 4(5) solver from GSL.
The "indirection" from $g$ to $\bar{g}$ is required for consistent notation for NMDA channels below.
These channels are used for AMPA, GABA_A and GABA_B channels.
NMDA Channels
Synthesis: SynNMDAChannel
For the NMDA channel we have
\begin{equation}
\bar{g}_{\text{NMDA}}(t) = m(V, t) g_{\text{NMDA}}(t)
\end{equation}
with $g_{\text{NMDA}}(t)$ from above.
The voltage-dependent gating $m(V, t)$ is defined as follows (based on textual description, Vargas-Caballero and Robinson J Neurophysiol 89:2778–2783, 2003, doi:10.1152/jn.01038.2002, and code inspection):
\begin{align}
m(V, t) &= a(V) m_{\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\text{slow}}^*(V, t)\
a(V) &= 0.51 - 0.0028 V \
m^{\infty}(V) &= \frac{1}{ 1 + \exp\left( -S_{\text{act}} ( V - V_{\text{act}} ) \right) } \
m_X^*(V, t) &= \min(m^{\infty}(V), m_X(V, t))\
\frac{\text{d}m_X}{\text{d}t} &= \frac{m^{\infty}(V) - m_X }{ \tau_{\text{Mg}, X}}
\end{align}
where $X$ is "slow" or "fast". $a(V)$ expresses voltage-dependent weighting between slow and fast unblocking, $m^{\infty}(V)$ the steady-state value of the proportion of unblocked NMDA-channels, the minimum condition in $m_X^*(V,t)$ the instantaneous blocking and the differential equation for $m_X(V,t)$ the unblocking dynamics.
Synthesis uses tabluated values for $m^{\infty}$. NEST uses the best fit of $V_{\text{act}}$ and $S_{\text{act}}$ to the tabulated data for conductance table fNMDA.
Note: NEST also supports instantaneous NMDA dynamics using a boolean switch. In that case $m(V, t)=m^{\infty}(V)$.
No synaptic "minis"
Synaptic "minis" due to spontaneous release of neurotransmitter quanta [HT05, p 1679] are not included in the NEST implementation of the Hill-Tononi model, because the total mini input rate for a cell was just 2 Hz and they cause PSP changes by $0.5 \pm 0.25$mV only and thus should have minimal effect.
The Synapse Depression Model
The synapse depression model is implemented in NEST as ht_synapse, in Synthesis in SynChannel and VesiclePool.
$P\in[0, 1]$ describes the state of the presynaptic vesicle pool. Spikes are transmitted with an effective weight
\begin{equation}
w_{\text{eff}} = P w
\end{equation}
where $w$ is the nominal weight of the synapse.
Evolution of $P$ in paper and Synthesis implementation
According to [HT05, p 1678], the pool $P$ evolves according to
\begin{equation}
\frac{\text{d}P}{\text{d}t} = -\:\text{spike}\:\delta_P P+\frac{P_{\text{peak}}-P}{\tau_P}
\end{equation}
where
- $\text{spike}=1$ while the neuron is in spiking state, 0 otherwise
- $P_{\text{peak}}=1$
- $\delta_P = 0.5$ by default
- $\tau_P = 500\text{ms}$ by default
Since neurons are in spiking state for one integration time step $\Delta t$, this suggest that the effect of a spike on the vesicle pool is approximately
\begin{equation}
P \leftarrow ( 1 - \Delta t \delta_P ) P
\end{equation}
For default parameters $\Delta t=0.25\text{ms}$ and $\delta_P=0.5$, this means that a single spike reduces the pool by 1/8 of its current size.
Evolution of $P$ in the NEST implementation
In NEST, we modify the equations above to obtain a definite jump in pool size on transmission of a spike, without any dependence on the integration time step (fixing explicitly $P_{\text{peak}}$):
\begin{align}
\frac{\text{d}P}{\text{d}t} &= \frac{1-P}{\tau_P} \
P &\leftarrow ( 1 - \delta_P^*) P
\end{align}
$P$ is only updated when a spike passes the synapse, in the following way (where $\Delta$ is the time since the last spike through the same synapse):
Recuperation: $P\leftarrow 1 - ( 1 - P ) \exp( -\Delta / \tau_P )$
Spike transmission with $w_{\text{eff}} = P w$
Depletion: $P \leftarrow ( 1 - \delta_P^*) P$
To achieve approximately the same depletion as in Synthesis, use $\delta_P^*=\Delta t\delta_p$.
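A minimal pure-Python sketch of this update rule (only the arithmetic described above, not the actual NEST implementation; the default depletion value 0.125 corresponds to the suggested $\delta_P^*=\Delta t\delta_P$ with $\Delta t=0.25$ ms and $\delta_P=0.5$):
def ht_synapse_weight(delta, P, w, delta_P_star=0.125, tau_P=500.0):
    # recuperation since the last spike through this synapse (delta = time since last spike)
    P = 1.0 - (1.0 - P) * math.exp(-delta / tau_P)
    # spike transmission with the current pool size
    w_eff = P * w
    # depletion after transmission
    P = (1.0 - delta_P_star) * P
    return w_eff, P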
Tests of the Models
End of explanation
def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):
tau_eff = taum/(gNaL + gKL)
Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff))
def theta(t, th0, theq, tauth):
return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth))
nest.ResetKernel()
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.,
'tau_theta': 10.})
hp = nest.GetDefaults('ht_neuron')
V_th_0 = [(-100., -65.), (-70., -51.), (-55., -10.)]
T_sim = 20.
nrns = nest.Create('ht_neuron', n=len(V_th_0), params=[{'V_m': V, 'theta': th}
for V, th in V_th_0])
nest.Simulate(T_sim)
V_th_sim = nest.GetStatus(nrns, ['V_m', 'theta'])
for (V0, th0), (Vsim, thsim) in zip(V_th_0, V_th_sim):
Vex = Vpass(T_sim, V0, hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'])
thex = theta(T_sim, th0, hp['theta_eq'], hp['tau_theta'])
print('Vex = {:.3f}, Vsim = {:.3f}, Vex-Vsim = {:.3e}'.format(Vex, Vsim, Vex-Vsim))
print('thex = {:.3f}, thsim = {:.3f}, thex-thsim = {:.3e}'.format(thex, thsim, thex-thsim))
Explanation: Neuron Model
Passive properties
Test relaxation of neuron and threshold to equilibrium values in absence of intrinsic currents and input. We then have
\begin{align}
\tau_m \dot{V}&= \left[-g_{NaL}(V-E_{Na})-g_{KL}(V-E_K)\right] = -(g_{NaL}+g_{KL})V+(g_{NaL}E_{Na}+g_{KL}E_K)\
\Leftrightarrow\quad \tau_{\text{eff}}\dot{V} &= -V+V_{\infty}\
V_{\infty} &= \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}}\
\tau_{\text{eff}}&=\frac{\tau_m}{g_{NaL}+g_{KL}}
\end{align}
with solution
\begin{equation}
V(t) = V_0 e^{-\frac{t}{\tau_{\text{eff}}}} + V_{\infty}\left(1-e^{-\frac{t}{\tau_{\text{eff}}}} \right)
\end{equation}
and for the threshold
\begin{equation}
\theta(t) = \theta_0 e^{-\frac{t}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t}{\tau_{\theta}}} \right)
\end{equation}
End of explanation
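As a quick numeric cross-check of the closed-form relaxation above (a small addition, reusing the hp parameter dictionary from the test code):
tau_eff = hp['tau_m'] / (hp['g_NaL'] + hp['g_KL'])
V_inf = (hp['g_NaL'] * hp['E_Na'] + hp['g_KL'] * hp['E_K']) / (hp['g_NaL'] + hp['g_KL'])
print('tau_eff = {:.2f} ms, V_inf = {:.2f} mV'.format(tau_eff, V_inf))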
def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
tau_eff = taum/(gNaL + gKL)
Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)
VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))
nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 40.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params=[{'amplitude': dc,
'start': tI} for dc in I])
sdets = nest.Create('spike_detector', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, sdets, 'one_to_one')
nest.Simulate(T_sim)
t_first_sim = [ev['events']['times'][0] for ev in nest.GetStatus(sdets)]
for dc, tf_sim in zip(I, t_first_sim):
tf_ex = t_first_spike(hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['theta_eq'], tI+delay, dc)
print('tex = {:.4f}, tsim = {:.4f}, tex-tsim = {:.4f}'.format(tf_ex,
tf_sim,
tf_ex-tf_sim))
Explanation: Agreement is excellent.
Spiking without intrinsic currents or synaptic input
The equations above hold for input current $I(t)$, but with
\begin{equation}
V_{\infty}(I) = \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}} + \frac{I}{g_{NaL}+g_{KL}}
\end{equation}
In NEST, we need to inject input current into the ht_neuron with a dc_generator, whence the current will set on only at a later time and we need to take this into account. For simplicity, we assume that $V$ is initialized to $V_{\infty}(I=0)$ and that current onset is at $t_I$. We then have for $t\geq t_I$
\begin{equation}
V(t) = V_{\infty}(0) e^{-\frac{t-t_I}{\tau_{\text{eff}}}} + V_{\infty}(I)\left(1-e^{-\frac{t-t_I}{\tau_{\text{eff}}}} \right)
\end{equation}
If we also initialize $\theta=\theta_{\text{eq}}$, the threshold is constant and we have the first spike at
\begin{align}
V(t) &= \theta_{\text{eq}}\
\Leftrightarrow\quad t &= t_I -\tau_{\text{eff}} \ln \frac{\theta_{\text{eq}}-V_{\infty}(I)}{V_{\infty}(0)-V_{\infty}(I)}
\end{align}
End of explanation
def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
tau_eff = taum/(gNaL + gKL + taum/tauspk)
Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk)
return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff))
def thetaspike(tspk, ENa, theq, tauth):
return ENa*np.exp(-tspk/tauth) + theq*(1-np.exp(-tspk/tauth))
def Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
Vsp = Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I)
return Vpass(t-tspk, Vsp, gNaL, ENa, gKL, EK, taum, I)
def thetapost(t, tspk, ENa, theq, tauth):
thsp = thetaspike(tspk, ENa, theq, tauth)
return theta(t-tspk, thsp, theq, tauth)
def threshold(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I, theq, tauth):
return Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) - thetapost(t, tspk, ENa, theq, tauth)
nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 1000.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params=[{'amplitude': dc,
'start': tI} for dc in I])
sdets = nest.Create('spike_detector', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, sdets, 'one_to_one')
nest.Simulate(T_sim)
isi_sim = []
for ev in nest.GetStatus(sdets):
t_spk = ev['events']['times']
isi = np.diff(t_spk)
isi_sim.append((np.min(isi), np.mean(isi), np.max(isi)))
for dc, (isi_min, isi_mean, isi_max) in zip(I, isi_sim):
isi_ex = so.bisect(threshold, hp['t_ref'], 50,
args=(hp['t_ref'], hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['tau_spike'], dc, hp['theta_eq'], hp['tau_theta']))
print('isi_ex = {:.4f}, isi_sim (min, mean, max) = ({:.4f}, {:.4f}, {:.4f})'.format(
isi_ex, isi_min, isi_mean, isi_max))
Explanation: Agreement is as good as possible: All spikes occur in NEST at the end of the time step containing the expected spike time.
Inter-spike interval
After each spike, $V_m = \theta = E_{Na}$, i.e., all memory is erased. We can thus treat ISIs independently. $\theta$ relaxes according to the equation above. For $V_m$, we have during $t_{\text{spike}}$ after a spike
\begin{align}
\tau_m\dot{V} &= {-g_{\text{NaL}}(V-E_{\text{Na}})
-g_{\text{KL}}(V-E_{\text{K}})+I}
-\frac{\tau_m}{\tau_{\text{spike}}}({V-E_{\text{K}}})\
&= -(g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}})V+(g_{NaL}E_{Na}+g_{KL}E_K+\frac{\tau_m}{\tau_{\text{spike}}}E_K)
\end{align}
thus recovering the same form for the solution, but with
\begin{align}
\tau^{\text{eff}} &= \frac{\tau_m}{g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}}}\
V^{\infty} &= \frac{g_{NaL}E_{Na}+g_{KL}E_K+I+\frac{\tau_m}{\tau_{\text{spike}}}E_K}{g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}}}
\end{align}
Assuming that the ISI is longer than the refractory period $t_{\text{spike}}$, and we had a spike at time $t_s$, then we have at $t_s+t_{\text{spike}}$
\begin{align}
V^* &= V(t_s+t_{\text{spike}}) = E_{Na} e^{-\frac{t_{\text{spike}}}{\tau^{\text{eff}}}} + V^{\infty}(I)\left(1-e^{-\frac{t_{\text{spike}}}{\tau^{\text{eff}}}} \right)\
\theta^* &= \theta(t_s+t_{\text{spike}}) = E_{Na} e^{-\frac{t_{\text{spike}}}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t_{\text{spike}}}{\tau_{\theta}}} \right)\
t^* &= t_s+t_{\text{spike}}
\end{align}
For $t>t^*$, the normal equations apply again, i.e.,
\begin{align}
V(t) &= V^* e^{-\frac{t-t^*}{\tau_{\text{eff}}}} + V_{\infty}(I)\left(1-e^{-\frac{t-t^*}{\tau_{\text{eff}}}} \right)\
\theta(t) &= \theta^* e^{-\frac{t-t^*}{\tau_{\theta}}} + \theta_{\infty}\left(1-e^{-\frac{t-t^*}{\tau_{\theta}}}\right)
\end{align}
The time of the next spike is then given by
\begin{equation}
V(\hat{t}) = \theta(\hat{t})
\end{equation}
which can only be solved numerically. The ISI is then obtained as $\hat{t}-t_s$.
End of explanation
nest.ResetKernel()
class Channel:
    """Base class for channel models in Python."""
def tau_m(self, V):
raise NotImplementedError()
def tau_h(self, V):
raise NotImplementedError()
def m_inf(self, V):
raise NotImplementedError()
def h_inf(self, V):
raise NotImplementedError()
def D_inf(self, V):
raise NotImplementedError()
def dh(self, h, t, V):
return (self.h_inf(V)-h)/self.tau_h(V)
def dm(self, m, t, V):
return (self.m_inf(V)-m)/self.tau_m(V)
def voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT."
# NEST part
nest_g_0 = {'g_peak_h': 0., 'g_peak_T': 0., 'g_peak_NaP': 0., 'g_peak_KNa': 0.}
nest_g_0[channel.nest_g] = 1.
nest.ResetKernel()
nest.SetKernelStatus({'resolution': nest_dt})
nrn = nest.Create('ht_neuron', params=nest_g_0)
mm = nest.Create('multimeter', params={'record_from': ['V_m', 'theta', channel.nest_I],
'interval': nest_dt})
nest.Connect(mm, nrn)
# ensure we start from equilibrated state
nest.SetStatus(nrn, {'V_m': DT_V_seq[0][1], 'equilibrate': True,
'voltage_clamp': True})
for DT, V in DT_V_seq:
nest.SetStatus(nrn, {'V_m': V, 'voltage_clamp': True})
nest.Simulate(DT)
t_end = nest.GetKernelStatus()['time']
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.GetKernelStatus()['min_delay'])
tmp = pd.DataFrame(nest.GetStatus(mm)[0]['events'])
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
try:
m_old = channel.m_inf(DT_V_seq[0][1])
except NotImplementedError:
m_old = None
try:
h_old = channel.h_inf(DT_V_seq[0][1])
except NotImplementedError:
h_old = None
try:
D_old = channel.D_inf(DT_V_seq[0][1])
except NotImplementedError:
D_old = None
t_all, I_all = [], []
if D_old is not None:
D_all = []
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
I_loc = channel.compute_I(t_loc, V, m_old, h_old, D_old)
t_all.extend(t_old + t_loc[1:])
I_all.extend(I_loc[1:])
if D_old is not None:
D_all.extend(channel.D[1:])
m_old = channel.m[-1] if m_old is not None else None
h_old = channel.h[-1] if h_old is not None else None
D_old = channel.D[-1] if D_old is not None else None
t_old = t_all[-1]
if D_old is None:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all})
else:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all, 'D': D_all})
return nest_res, ctrl_res
Explanation: ISIs are as predicted: measured ISI is predicted rounded up to next time step
ISIs are perfectly regular as expected
Intrinsic Currents
Preparations
End of explanation
nest.ResetKernel()
class Ih(Channel):
nest_g = 'g_peak_h'
nest_I = 'I_h'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V))
def m_inf(self, V):
return 1/(1+np.exp((V+75)/5.5))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
return - self.hp['g_peak_h'] * self.m * (V - self.hp['E_rev_h'])
ih = Ih(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, ih.tau_m(V));
ax = plt.gca();
ax.set_xlabel('Voltage V [mV]');
ax.set_ylabel('Time constant tau_m [ms]', color='b');
ax2 = ax.twinx()
ax2.plot(V, ih.m_inf(V), 'g');
ax2.set_ylabel('Steady-state m_h^inf', color='g');
Explanation: I_h channel
The $I_h$ current is governed by
\begin{align}
I_h &= g_{\text{peak}, h} m_h(V, t) (V-E_h) \
\frac{\text{d}m_h}{\text{d}t} &= \frac{m_h^{\infty}-m_h}{\tau_{m,h}(V)}\
m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \
\tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.87 + 0.0701V)}
\end{align}
We first inspect $m_h^{\infty}(V)$ and $\tau_{m,h}(V)$ to prepare for testing
End of explanation
ih = Ih(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)])
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_h, label='NEST');
plt.plot(cr.times, cr.I_h, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_h [mV]');
plt.title('I_h current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_h-cr.I_h)/np.abs(cr.I_h));
plt.title('Relative I_h error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
Explanation: The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs.
Curves are in good agreement with Fig 5 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05]. I_h data there was from guinea pig slices at 35.5 C and needed no temperature adjustment.
We now run a voltage clamp experiment starting from the equilibrium value.
End of explanation
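A one-line check of the "up to 1 s" claim above, reusing the ih object defined earlier (added for illustration):
V_scan = np.linspace(-110., 30., 1000)
tau_scan = ih.tau_m(V_scan)
print('max tau_m,h = {:.0f} ms at V = {:.1f} mV'.format(tau_scan.max(), V_scan[tau_scan.argmax()]))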
nest.ResetKernel()
class IT(Channel):
nest_g = 'g_peak_T'
nest_I = 'I_T'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2))
def tau_h(self, V):
return 8.2 + (56.6 + 0.27 * np.exp((V+115.2)/5.0)) /(1 + np.exp((V+86.0)/3.2))
def m_inf(self, V):
return 1/(1+np.exp(-(V+59.0)/6.2))
def h_inf(self, V):
return 1/(1+np.exp((V+83.0)/4.0))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
self.h = si.odeint(self.dh, h0, t, args=(V,))
return - self.hp['g_peak_T'] * self.m**2 * self.h * (V - self.hp['E_rev_T'])
iT = IT(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, 10 * iT.tau_m(V), 'b-', label='10 * tau_m');
plt.plot(V, iT.tau_h(V), 'b--', label='tau_h');
ax1 = plt.gca();
ax1.set_xlabel('Voltage V [mV]');
ax1.set_ylabel('Time constants [ms]', color='b');
ax2 = ax1.twinx()
ax2.plot(V, iT.m_inf(V), 'g-', label='m_inf');
ax2.plot(V, iT.h_inf(V), 'g--', label='h_inf');
ax2.set_ylabel('Steady-state', color='g');
ln1, lb1 = ax1.get_legend_handles_labels()
ln2, lb2 = ax2.get_legend_handles_labels()
plt.legend(ln1+ln2, lb1+lb2, loc='upper right');
Explanation: Agreement is very good
Note that currents have units of $mV$ due to choice of dimensionless conductances.
I_T Channel
The corrected equations used for the $I_T$ channel in NEST are
\begin{align}
I_T &= g_{\text{peak}, T} m_T^2(V, t) h_T(V,t) (V-E_T) \
m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\text{mV}}\right)}\
\tau_{m,T}(V) &= 0.13\text{ms}
+ \frac{0.22\text{ms}}{\exp\left(-\frac{V + 132\text{mV}}{16.7\text{mV}}\right) + \exp\left(\frac{V + 16.8\text{mV}}{18.2\text{mV}}\right)} \
h_T^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+83\text{mV}}{4\text{mV}}\right)}\
\tau_{h,T}(V) &= 8.2\text{ms} + \frac{56.6\text{ms} + 0.27\text{ms} \exp\left(\frac{V + 115.2\text{mV}}{5\text{mV}}\right)}{1 + \exp\left(\frac{V + 86\text{mV}}{3.2\text{mV}}\right)}
\end{align}
End of explanation
iT = IT(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.), (200, -70.),
(200, -55.)],
nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_T, label='NEST');
plt.plot(cr.times, cr.I_T, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_T [mV]');
plt.title('I_T current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_T-cr.I_T)/np.abs(cr.I_T));
plt.title('Relative I_T error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
Explanation: Time constants here are much shorter than for I_h
Time constants are about five times shorter than in Fig 1 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05], but that may be due to the fact that the original data was collected at 23-25C and parameters have been adjusted to 36C.
Steady-state activation and inactivation look much like in Huguenard and McCormick.
Note: Most detailed paper on data is Huguenard and Prince, J Neurosci 12:3804-3817, 1992. The parameters given for h_inf here are for VB cells, not nRT cells in that paper (Fig 5B), parameters for m_inf are similar to but not exactly those of Fig 4B for either VB or nRT.
End of explanation
nest.ResetKernel()
class INaP(Channel):
nest_g = 'g_peak_NaP'
nest_I = 'I_NaP'
def __init__(self, ht_params):
self.hp = ht_params
def m_inf(self, V):
return 1/(1+np.exp(-(V+55.7)/7.7))
def compute_I(self, t, V, m0, h0, D0):
return self.I_V_curve(V * np.ones_like(t))
def I_V_curve(self, V):
self.m = self.m_inf(V)
return - self.hp['g_peak_NaP'] * self.m**3 * (V - self.hp['E_rev_NaP'])
iNaP = INaP(nest.GetDefaults('ht_neuron'))
V = np.arange(-110., 30., 1.)
nr, cr = voltage_clamp(iNaP, [(1, v) for v in V], nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_NaP, label='NEST');
plt.plot(cr.times, cr.I_NaP, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_NaP [mV]');
plt.title('I_NaP current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_NaP-cr.I_NaP));
plt.title('I_NaP error')
plt.xlabel('Time [ms]');
plt.ylabel('Error (NEST-Control)');
Explanation: Also here the results are in good agreement and the error appears acceptable.
I_NaP channel
This channel adapts instantaneously to changes in membrane potential:
\begin{align}
I_{NaP} &= - g_{\text{peak}, NaP} (m_{NaP}^{\infty}(V, t))^3 (V-E_{NaP}) \
m_{NaP}^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+55.7\text{mV}}{7.7\text{mV}}\right)}
\end{align}
End of explanation
nest.ResetKernel()
class IDK(Channel):
nest_g = 'g_peak_KNa'
nest_I = 'I_KNa'
def __init__(self, ht_params):
self.hp = ht_params
def m_inf(self, D):
return 1/(1+(0.25/D)**3.5)
def D_inf(self, V):
return 1250. * self.D_influx(V) + 0.001
def D_influx(self, V):
return 0.025 / ( 1 + np.exp(-(V+10)/5.) )
def dD(self, D, t, V):
return (self.D_inf(V) - D)/1250.
def compute_I(self, t, V, m0, h0, D0):
self.D = si.odeint(self.dD, D0, t, args=(V,))
self.m = self.m_inf(self.D)
return - self.hp['g_peak_KNa'] * self.m * (V - self.hp['E_rev_KNa'])
Explanation: Perfect agreement
Step structure is because $V$ changes only every second.
I_KNa channel (aka I_DK)
Equations for this channel are
\begin{align}
I_{DK} &= - g_{\text{peak},DK} m_{DK}^{\infty}(V,t) (V - E_{DK})\
m_{DK}^{\infty} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\
\frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\text{eq}}}{\tau_D} = \frac{D_{\infty}(V)-D}{\tau_D} \
D_{\infty}(V) &= \tau_D D_{\text{influx}}(V) + {D_{\text{eq}}}\
D_{\text{influx}} &= \frac{D_{\text{influx,peak}}}{1+ \exp\left(-\frac{V-D_{\theta}}{\sigma_D}\right)}
\end{align}
with
|$D_{\text{influx,peak}}$|$D_{\text{eq}}$|$\tau_D$|$D_{\theta}$|$\sigma_D$|$d_{1/2}$|
| --: | --: | --: | --: | --: | --: |
|$0.025\text{ms}^{-1}$ |$0.001$|$1250\text{ms}$|$-10\text{mV}$|$5\text{mV}$|$0.25$|
Note the following:
- $D_{eq}$ is the equilibrium value only for $D_{\text{influx}}(V)=0$, i.e., in the limit $V\to -\infty$ and $t\to\infty$.
- The actual steady-state value is $D_{\infty}$.
- $m_{DK}^{\infty}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$.
- To the left of this window, $I_{DK}\approx 0$.
- To the right of this window, $I_{DK}\sim -(V-E_{DK})$.
End of explanation
iDK = IDK(nest.GetDefaults('ht_neuron'))
D=np.linspace(0.01, 1.5,num=200);
V=np.linspace(-110, 30, num=200);
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = ax1.twinx()
ax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(V, -iDK.m_inf(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g');
ax1.set_ylabel('Current I_inf(V)', color='g');
ax2.plot(V, iDK.m_inf(iDK.D_inf(V)), 'b');
ax2.set_ylabel('Activation m_inf(D_inf(V))', color='b');
ax1.set_xlabel('Membrane potential V [mV]');
ax2.set_title('Steady-state activation and current');
ax3.plot(D, iDK.m_inf(D), 'b');
ax3.set_xlabel('D');
ax3.set_ylabel('Activation m_inf(D)', color='b');
ax3.set_title('Activation as function of D');
Explanation: Properties of I_DK
End of explanation
nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.), (5000, -70.)],
nest_dt=1.)
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(nr.times, nr.I_KNa, label='NEST');
ax1.plot(cr.times, cr.I_KNa, label='Control');
ax1.legend(loc='lower right');
ax1.set_xlabel('Time [ms]');
ax1.set_ylabel('I_DK [mV]');
ax1.set_title('I_DK current');
ax2.plot(nr.times, (nr.I_KNa-cr.I_KNa)/np.abs(cr.I_KNa));
ax2.set_title('Relative I_DK error')
ax2.set_xlabel('Time [ms]');
ax2.set_ylabel('Rel. error (NEST-Control)/|Control|');
Explanation: Note that current in steady state is
- $\approx 0$ for $V < -40$ mV
- $\sim -(V-E_{DK})$ for $V > -30$ mV
Voltage clamp
End of explanation
nest.ResetKernel()
class SynChannel:
    """Base class for synapse channel models in Python."""
def t_peak(self):
return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1)
def beta(self, t):
val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ) /
( np.exp(-self.t_peak()/self.tau_1) - np.exp(-self.t_peak()/self.tau_2) ) )
val[t < 0] = 0
return val
def syn_voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT with single spike at time 1"
spike_time = 1.0
delay = 1.0
nest.ResetKernel()
nest.SetKernelStatus({'resolution': nest_dt})
try:
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6,
'instant_unblock_NMDA': channel.instantaneous})
except:
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6})
mm = nest.Create('multimeter',
params={'record_from': ['g_'+channel.receptor],
'interval': nest_dt})
sg = nest.Create('spike_generator', params={'spike_times': [spike_time]})
nest.Connect(mm, nrn)
nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'delay': delay,
'receptor_type': channel.rec_code})
# ensure we start from equilibrated state
nest.SetStatus(nrn, {'V_m': DT_V_seq[0][1], 'equilibrate': True,
'voltage_clamp': True})
for DT, V in DT_V_seq:
nest.SetStatus(nrn, {'V_m': V, 'voltage_clamp': True})
nest.Simulate(DT)
t_end = nest.GetKernelStatus()['time']
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.GetKernelStatus()['min_delay'])
tmp = pd.DataFrame(nest.GetStatus(mm)[0]['events'])
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
t_all, g_all = [], []
m_fast_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
m_slow_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
g_loc = channel.g(t_old+t_loc-(spike_time+delay), V, m_fast_old, m_slow_old)
t_all.extend(t_old + t_loc[1:])
g_all.extend(g_loc[1:])
m_fast_old = channel.m_fast[-1] if m_fast_old is not None else None
m_slow_old = channel.m_slow[-1] if m_slow_old is not None else None
t_old = t_all[-1]
ctrl_res = pd.DataFrame({'times': t_all, 'g_'+channel.receptor: g_all})
return nest_res, ctrl_res
Explanation: Looks very fine.
Note that the current becomes appreciable only when $V > -35$ mV.
Once that threshold is crossed, the current adjusts instantaneously to changes in $V$, since it is in the linear regime.
When returning from $V=0$ to $V=-70$ mV, the current remains large for a long time since $D$ has to drop below 1 before $m_{\infty}$ changes appreciably
Synaptic channels
For synaptic channels, NEST allows recording of conductances, so we test conductances directly. Due to the voltage-dependence of the NMDA channels, we still do this in voltage clamp.
End of explanation
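All synaptic channels share the normalized difference-of-exponentials time course implemented in SynChannel.beta above. As a standalone illustration (an added sketch; the 0.5 ms rise time matches the AMPA value mentioned further down, while the 2.4 ms decay is just an assumed example value, not read from NEST):
import numpy as np
tau_1, tau_2 = 0.5, 2.4   # rise/decay time constants in ms; example values only
t_peak = tau_1 * tau_2 / (tau_2 - tau_1) * np.log(tau_2 / tau_1)
t = np.arange(0.0, 15.0, 0.01)
beta = ((np.exp(-t / tau_1) - np.exp(-t / tau_2)) /
        (np.exp(-t_peak / tau_1) - np.exp(-t_peak / tau_2)))
print('time of peak      : {:.3f} ms'.format(t_peak))
print('maximum over grid : {:.3f} (normalized to 1 at t_peak)'.format(beta.max()))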
nest.ResetKernel()
class PlainChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
def g(self, t, V, mf0, ms0):
return self.g_peak * self.beta(t)
    def I(self, t, V):
        # fixed to match g()'s signature; the m-state arguments are unused for plain channels
        return - self.g(t, V, None, None) * (V - self.E_rev)
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.1)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
Explanation: AMPA, GABA_A, GABA_B channels
End of explanation
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
gaba_a = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_A')
ga_n, ga_c = syn_voltage_clamp(gaba_a, [(50, -70.)])
plt.subplot(1, 2, 1);
plt.plot(ga_n.times, ga_n.g_GABA_A, label='NEST');
plt.plot(ga_c.times, ga_c.g_GABA_A, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_A');
plt.title('GABA_A Channel');
plt.subplot(1, 2, 2);
plt.plot(ga_n.times, (ga_n.g_GABA_A-ga_c.g_GABA_A)/ga_c.g_GABA_A);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_A rel error');
gaba_b = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_B')
gb_n, gb_c = syn_voltage_clamp(gaba_b, [(750, -70.)])
plt.subplot(1, 2, 1);
plt.plot(gb_n.times, gb_n.g_GABA_B, label='NEST');
plt.plot(gb_c.times, gb_c.g_GABA_B, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_B');
plt.title('GABA_B Channel');
plt.subplot(1, 2, 2);
plt.plot(gb_n.times, (gb_n.g_GABA_B-gb_c.g_GABA_B)/gb_c.g_GABA_B);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_B rel error');
Explanation: Looks quite good, but the error is maybe a bit larger than one would hope.
The synaptic rise time (0.5 ms) is, however, only five times the NEST integration step (0.1 ms), which may explain the error.
Reducing the time step reduces the error:
End of explanation
class NMDAInstantChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.instantaneous = True
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)))
def g(self, t, V, mf0, ms0):
return self.g_peak * self.m_inf(V) * self.beta(t)
    def I(self, t, V):
        # fixed to match g()'s signature; the m-state arguments are unused here
        return - self.g(t, V, None, None) * (V - self.E_rev)
nmdai = NMDAInstantChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
ni_n, ni_c = syn_voltage_clamp(nmdai, [(50, -60.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(ni_n.times, ni_n.g_NMDA, label='NEST');
plt.plot(ni_c.times, ni_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel (instant unblock)');
plt.subplot(1, 2, 2);
plt.plot(ni_n.times, (ni_n.g_NMDA-ni_c.g_NMDA)/ni_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA (inst) rel error');
Explanation: Looks good for all
For GABA_B the error is negligible even for dt = 0.1, since the time constants are large.
NMDA Channel
The equations for this channel are
\begin{align}
\bar{g}_{\text{NMDA}}(t) &= m(V, t)\, g_{\text{NMDA}}(t) \\
m(V, t) &= a(V)\, m_{\text{fast}}^*(V, t) + ( 1 - a(V) )\, m_{\text{slow}}^*(V, t) \\
a(V) &= 0.51 - 0.0028 V \\
m^{\infty}(V) &= \frac{1}{ 1 + \exp\left( -S_{\text{act}} ( V - V_{\text{act}} ) \right) } \\
m_X^*(V, t) &= \min(m^{\infty}(V), m_X(V, t)) \\
\frac{\text{d}m_X}{\text{d}t} &= \frac{m^{\infty}(V) - m_X }{ \tau_{\text{Mg}, X}}
\end{align}
where $g_{\text{NMDA}}(t)$ is the same beta function as for the other channels. In case of instantaneous unblocking, $m=m^{\infty}$.
NMDA with instantaneous unblocking
End of explanation
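A tiny standalone check (added sketch) of the voltage-dependent weighting between the fast and slow unblock components, taken directly from the $a(V)$ expression above:
def a_NMDA(V):
    # weight of the fast Mg-unblock component
    return 0.51 - 0.0028 * V
for v in (-90.0, -70.0, -40.0, 0.0):
    print('V = {:6.1f} mV -> fast fraction = {:.3f}, slow fraction = {:.3f}'.format(
        v, a_NMDA(v), 1.0 - a_NMDA(v)))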
class NMDAChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.tau_fast = hp['tau_Mg_fast_NMDA']
self.tau_slow = hp['tau_Mg_slow_NMDA']
self.instantaneous = False
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)) )
def dm(self, m, t, V, tau):
return ( self.m_inf(V) - m ) / tau
def g(self, t, V, mf0, ms0):
self.m_fast = si.odeint(self.dm, mf0, t, args=(V, self.tau_fast))
self.m_slow = si.odeint(self.dm, ms0, t, args=(V, self.tau_slow))
a = 0.51 - 0.0028 * V
m_inf = self.m_inf(V)
mfs = self.m_fast[:]
mfs[mfs > m_inf] = m_inf
mss = self.m_slow[:]
mss[mss > m_inf] = m_inf
m = np.squeeze(a * mfs + ( 1 - a ) * mss)
return self.g_peak * m * self.beta(t)
def I(self, t, V):
raise NotImplementedError()
nmda = NMDAChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
nm_n, nm_c = syn_voltage_clamp(nmda, [(50, -70.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(nm_n.times, nm_n.g_NMDA, label='NEST');
plt.plot(nm_c.times, nm_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel');
plt.subplot(1, 2, 2);
plt.plot(nm_n.times, (nm_n.g_NMDA-nm_c.g_NMDA)/nm_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA rel error');
Explanation: Looks good
Jumps are due to blocking/unblocking of Mg channels with changes in $V$
NMDA with unblocking over time
End of explanation
nest.ResetKernel()
sp = nest.GetDefaults('ht_synapse')
P0 = sp['P']
dP = sp['delta_P']
tP = sp['tau_P']
spike_times = [10., 12., 20., 20.5, 100., 200., 1000.]
expected = [(0., P0, P0)]
for idx, t in enumerate(spike_times):
tlast, Psend, Ppost = expected[idx]
Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP)
expected.append((t, Psend, (1-dP)*Psend))
expected_weights = list(zip(*expected[1:]))[1]
sg = nest.Create('spike_generator', params={'spike_times': spike_times})
n = nest.Create('parrot_neuron', 2)
wr = nest.Create('weight_recorder')
nest.SetDefaults('ht_synapse', {'weight_recorder': wr[0], 'weight': 1.0})
nest.Connect(sg, n[:1])
nest.Connect(n[:1], n[1:], syn_spec='ht_synapse')
nest.Simulate(1200)
rec_weights = nest.GetStatus(wr)[0]['events']['weights']
print('Recorded weights:', rec_weights)
print('Expected weights:', expected_weights)
print('Difference :', np.array(rec_weights) - np.array(expected_weights))
Explanation: Looks fine, too.
Synapse Model
We test the synapse model by placing it between two parrot neurons, sending spikes at differing intervals, and comparing the recorded weights to the expected ones.
End of explanation
nest.ResetKernel()
nrn = nest.Create('ht_neuron')
ppg = nest.Create('pulsepacket_generator', n=4,
params={'pulse_times': [700., 1700., 2700., 3700.],
'activity': 700, 'sdev': 50.})
pr = nest.Create('parrot_neuron', n=4)
mm = nest.Create('multimeter',
params={'interval': 0.1,
'record_from': ['V_m', 'theta',
'g_AMPA', 'g_NMDA',
'g_GABA_A', 'g_GABA_B',
'I_NaP', 'I_KNa', 'I_T', 'I_h']})
weights = {'AMPA': 25., 'NMDA': 20., 'GABA_A': 10., 'GABA_B': 1.}
receptors = nest.GetDefaults('ht_neuron')['receptor_types']
nest.Connect(ppg, pr, 'one_to_one')
for p, (rec_name, rec_wgt) in zip(pr, weights.items()):
nest.Connect([p], nrn, syn_spec={'model': 'ht_synapse',
'receptor_type': receptors[rec_name],
'weight': rec_wgt})
nest.Connect(mm, nrn)
nest.Simulate(5000)
data = nest.GetStatus(mm)[0]['events']
t = data['times']
def texify_name(name):
return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_'))
fig = plt.figure(figsize=(12,10))
Vax = fig.add_subplot(311)
Vax.plot(t, data['V_m'], 'k', lw=1, label=r'$V_m$')
Vax.plot(t, data['theta'], 'r', alpha=0.5, lw=1, label=r'$\Theta$')
Vax.set_ylabel('Potential [mV]')
Vax.legend(fontsize='small')
Vax.set_title('ht_neuron driven by pulse packets through all synapse types')
Iax = fig.add_subplot(312)
for iname, color in (('I_h', 'blue'), ('I_KNa', 'green'),
('I_NaP', 'red'), ('I_T', 'cyan')):
Iax.plot(t, data[iname], color=color, lw=1, label=texify_name(iname))
#Iax.set_ylim(-60, 60)
Iax.legend(fontsize='small')
Iax.set_ylabel('Current [mV]')
Gax = fig.add_subplot(313)
for gname, sgn, color in (('g_AMPA', 1, 'green'), ('g_GABA_A', -1, 'red'),
('g_GABA_B', -1, 'cyan'), ('g_NMDA', 1, 'magenta')):
Gax.plot(t, sgn*data[gname], lw=1, label=texify_name(gname), color=color)
#Gax.set_ylim(-150, 150)
Gax.legend(fontsize='small')
Gax.set_ylabel('Conductance')
Gax.set_xlabel('Time [ms]');
Explanation: Perfect agreement, synapse model looks fine.
Integration test: Neuron driven through all synapses
We drive a Hill-Tononi neuron through pulse packets arriving at 1 second intervals, impinging through all synapse types. Compare this to Fig 5 of [HT05].
End of explanation |
1,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: Next, set display preferences so that plots are inline (meaning any images you output from your code will show up below the cell in the notebook) and turn off plot warnings
Step2: Read in hdf5
f = h5py.File('file.h5','r') reads in an h5 file to the variable f.
Using the help
We will be using a number of built-in and user-defined functions and methods throughout the tutorial. If you are uncertain what a certain function does, or how to call it, you can type help() or type a
? at the end of the function or method and run the cell (either select Cell > Run Cells or Shift Enter with your cursor in the cell you want to run). The ? will pop up a window at the bottom of the notebook displaying the function's docstrings, which includes information about the function and usage. We encourage you to use help and ? throughout the tutorial as you come across functions you are unfamiliar with. Let's try this out with h5py.File
Step3: Now that we have an idea of how to use h5py to read in an h5 file, let's try it out. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.
Step4: Explore NEON AOP HDF5 Reflectance Files
We can look inside the HDF5 dataset with the h5py visititems function. The list_dataset function defined below displays all datasets stored in the hdf5 file and their locations within the hdf5 file
Step5: You can see that there is a lot of information stored inside this reflectance hdf5 file. Most of this information is metadata (data about the reflectance data), for example, this file stores input parameters used in the atmospheric correction. For this introductory lesson, we will only work with two of these datasets, the reflectance data (hyperspectral cube), and the corresponding geospatial information, stored in Metadata/Coordinate_System
Step6: Now that we can see the structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data
Step7: The two members of the HDF5 group /SERC/Reflectance are Metadata and Reflectance_Data. Let's save the reflectance data as the variable serc_reflArray
Step8: We can extract the size of this reflectance array that we extracted using the shape method
Step9: This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called "cubes" to reflect this 3-dimensional shape.
<figure>
<a href="https
Step10: We can then use numpy (imported as np) to see the minimum and maximum wavelength values
Step11: Finally, we can determine the band widths (distance between center bands of two adjacent bands). Let's try this for the first two bands and the last two bands. Remember that Python uses 0-based indexing ([0] represents the first value in an array), and note that you can also use negative numbers to splice values from the end of an array ([-1] represents the last value in an array).
Step12: The center wavelengths recorded in this hyperspectral cube range from 383.66 - 2511.94 nm, and each band covers a range of ~5 nm. Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info
Step13: Understanding the output
Step14: Now we can extract the spatial information we need from the map info values, convert them to the appropriate data type (float) and store it in a way that will enable us to access and apply it later when we want to plot the data
Step15: Now we can define the spatial exten as the tuple (xMin, xMax, yMin, yMax). This is the format required for applying the spatial extent when plotting with matplotlib.pyplot.
Step16: Extract a Single Band from Array
While it is useful to have all the data contained in a hyperspectral cube, it is difficult to visualize all this information at once. We can extract a single band (representing a ~5nm band, approximating a single wavelength) from the cube by using splicing as follows. Note that we have to cast the reflectance data into the type float. Recall that since Python indexing starts at 0 instead of 1, in order to extract band 56, we need to use the index 55.
Step17: Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below.
Scale factor and No Data Value
This array represents the scaled reflectance for band 56. Recall from exploring the HDF5 data in HDFViewer that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often called NaN), and a reflectance Scale_Factor of 10000.0 in order to save disk space (can use lower precision this way).
<figure>
<a href="https
Step18: Plot single reflectance band
Now we can plot this band using the Python package matplotlib.pyplot, which we imported at the beginning of the lesson as plt. Note that the default colormap is jet unless otherwise specified. You can explore using different colormaps on your own; see the <a href="https
Step19: We can see that this image looks pretty washed out. To see why this is, it helps to look at the range and distribution of reflectance values that we are plotting. We can do this by making a histogram.
Plot histogram
We can plot a histogram using the matplotlib.pyplot.hist function. Note that this function won't work if there are any NaN values, so we can ensure we are only plotting the real data values using the call below. You can also specify the # of bins you want to divide the data into.
Step20: We can see that most of the reflectance values are < 0.4. In order to show more contrast in the image, we can adjust the colorlimit (clim) to 0-0.4
Step21: Here you can see that adjusting the colorlimit displays features (eg. roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values.
Extension | Python Code:
import numpy as np
import h5py
import gdal, osr, os
import matplotlib.pyplot as plt
Explanation: syncID: 61ad1fc43ddd45b49cad1bca48656bbe
title: "NEON AOP Hyperspectral Data in HDF5 format with Python - Tiled Data"
description: "Learn how to read NEON AOP hyperspectral flightline data using Python and develop skills to manipulate and visualize spectral data."
dateCreated: 2018-07-04
authors: Bridget Hass
contributors: Donal O'Leary
estimatedTime: 1 hour
packagesLibraries: numpy, h5py, gdal, matplotlib.pyplot
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP3.30006, NEON.DP3.30008
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
tutorialSeries: intro-hsi-py-series
urlTitle: neon-aop-hdf5-tile-py
In this introductory tutorial, we discuss how to read NEON AOP hyperspectral flightline
data using Python. We develop and practice skills and use several tools to manipulate and
visualize the spectral data. By the end of this tutorial, you will become
familiar with the Python syntax.
If you are interested in learning how to do this for flightline NEON AOP hyperspectral data,
please see <a href="/neon-aop-hdf5-py" target="_blank"> NEON AOP Hyperspectral Data in HDF5 format with Python - Flightlines</a>.
Learning Objectives
After completing this tutorial, you will be able to:
Import and use Python packages numpy, pandas, matplotlib, h5py, and gdal.
Use the package h5py and the visititems functionality to read an HDF5 file
and view data attributes.
Read the data ignore value and scaling factor and apply these values to produce
a cleaned reflectance array.
Extract and plot a single band of reflectance data
Plot a histogram of reflectance values to visualize the range and distribution
of values.
Subset an hdf5 reflectance file from the full flightline to a smaller region
of interest (if you complete the optional extension).
Apply a histogram stretch and adaptive equalization to improve the contrast
of an image (if you complete the optional extension) .
Install Python Packages
numpy
pandas
gdal
matplotlib
h5py
Download Data
To complete this tutorial, you will use data available from the NEON 2017 Data
Institute.
This tutorial uses the following files:
<ul>
<li> <a href="https://www.neonscience.org/sites/default/files/neon_aop_spectral_python_functions_tiled_data.zip">neon_aop_spectral_python_functions_tiled_data.zip (10 KB)</a> <- Click to Download</li>
<li><a href="https://ndownloader.figshare.com/files/25752665" target="_blank">NEON_D02_SERC_DP3_368000_4306000_reflectance.h5 (618 MB)</a> <- Click to Download</li>
</ul>
<a href="https://ndownloader.figshare.com/files/25752665" class="link--button link--arrow">
Download Dataset</a>
The LiDAR and imagery data used to create this raster teaching data subset
were collected over the
<a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a>
and processed at NEON headquarters.
The entire dataset can be accessed on the
<a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>.
Hyperspectral remote sensing data is a useful tool for measuring changes to our
environment at the Earth’s surface. In this tutorial we explore how to extract
information from a tile (1000m x 1000m x 426 bands) of NEON AOP orthorectified surface reflectance data, stored in hdf5 format. For more information on this data product, refer to the <a href="http://data.neonscience.org/data-products/DP3.30006.001" target="_blank">NEON Data Product Catalog</a>.
Mapping the Invisible: Introduction to Spectral Remote Sensing
For more information on spectral remote sensing watch this video.
<iframe width="560" height="315" src="https://www.youtube.com/embed/3iaFzafWJQE" frameborder="0" allowfullscreen></iframe>
Set up
First let's import the required packages:
End of explanation
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Next, set display preferences so that plots are inline (meaning any images you output from your code will show up below the cell in the notebook) and turn off plot warnings:
End of explanation
help(h5py)
h5py.File?
Explanation: Read in hdf5
f = h5py.File('file.h5','r') reads in an h5 file to the variable f.
Using the help
We will be using a number of built-in and user-defined functions and methods throughout the tutorial. If you are uncertain what a certain function does, or how to call it, you can type help() or type a
? at the end of the function or method and run the cell (either select Cell > Run Cells or Shift Enter with your cursor in the cell you want to run). The ? will pop up a window at the bottom of the notebook displaying the function's docstrings, which includes information about the function and usage. We encourage you to use help and ? throughout the tutorial as you come across functions you are unfamiliar with. Let's try this out with h5py.File:
End of explanation
# Note that you will need to update this filepath for your local machine
f = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','r')
Explanation: Now that we have an idea of how to use h5py to read in an h5 file, let's try it out. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.
End of explanation
#list_dataset lists the names of datasets in an hdf5 file
def list_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(name)
f.visititems(list_dataset)
Explanation: Explore NEON AOP HDF5 Reflectance Files
We can look inside the HDF5 dataset with the h5py visititems function. The list_dataset function defined below displays all datasets stored in the hdf5 file and their locations within the hdf5 file:
End of explanation
#ls_dataset displays the name, shape, and type of datasets in hdf5 file
def ls_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(node)
#to see what the visititems methods does, type ? at the end:
f.visititems?
f.visititems(ls_dataset)
Explanation: You can see that there is a lot of information stored inside this reflectance hdf5 file. Most of this information is metadata (data about the reflectance data), for example, this file stores input parameters used in the atmospheric correction. For this introductory lesson, we will only work with two of these datasets, the reflectance data (hyperspectral cube), and the corresponding geospatial information, stored in Metadata/Coordinate_System:
SERC/Reflectance/Reflectance_Data
SERC/Reflectance/Metadata/Coordinate_System/
We can also display the name, shape, and type of each of these datasets using the ls_dataset function defined below, which is also called with the visititems method:
End of explanation
serc_refl = f['SERC']['Reflectance']
print(serc_refl)
Explanation: Now that we can see the structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data:
End of explanation
serc_reflArray = serc_refl['Reflectance_Data']
print(serc_reflArray)
Explanation: The two members of the HDF5 group /SERC/Reflectance are Metadata and Reflectance_Data. Let's save the reflectance data as the variable serc_reflArray:
End of explanation
refl_shape = serc_reflArray.shape
print('SERC Reflectance Data Dimensions:',refl_shape)
Explanation: We can extract the size of this reflectance array that we extracted using the shape method:
End of explanation
#define the wavelengths variable
wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']
#View wavelength information and values
print('wavelengths:',wavelengths)
Explanation: This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called "cubes" to reflect this 3-dimensional shape.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png"></a>
<figcaption> A "cube" showing a hyperspectral data set. Source: National Ecological Observatory Network
(NEON)
</figcaption>
</figure>
NEON hyperspectral data contain around 426 spectral bands, and when working with tiled data, the spatial dimensions are 1000 x 1000, where each pixel represents 1 meter. Now let's take a look at the wavelength values. First, we will extract wavelength information from the serc_refl variable that we created:
End of explanation
# Display min & max wavelengths
print('min wavelength:', np.amin(wavelengths),'nm')
print('max wavelength:', np.amax(wavelengths),'nm')
Explanation: We can then use numpy (imported as np) to see the minimum and maximum wavelength values:
End of explanation
#show the band widths between the first 2 bands and last 2 bands
print('band width between first 2 bands =',(wavelengths.value[1]-wavelengths.value[0]),'nm')
print('band width between last 2 bands =',(wavelengths.value[-1]-wavelengths.value[-2]),'nm')
Explanation: Finally, we can determine the band widths (distance between center bands of two adjacent bands). Let's try this for the first two bands and the last two bands. Remember that Python uses 0-based indexing ([0] represents the first value in an array), and note that you can also use negative numbers to splice values from the end of an array ([-1] represents the last value in an array).
End of explanation
serc_mapInfo = serc_refl['Metadata']['Coordinate_System']['Map_Info']
print('SERC Map Info:',serc_mapInfo.value)
Explanation: The center wavelengths recorded in this hyperspectral cube range from 383.66 - 2511.94 nm, and each band covers a range of ~5 nm. Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info:
End of explanation
#First convert mapInfo to a string
mapInfo_string = str(serc_mapInfo.value) #convert to string
#see what the split method does
mapInfo_string.split?
#split the strings using the separator ","
mapInfo_split = mapInfo_string.split(",")
print(mapInfo_split)
Explanation: Understanding the output:
Here we can see the spatial information about the reflectance data. Below is a breakdown of what each of these values means:
UTM - coordinate system (Universal Transverse Mercator)
1.000, 1.000 - image coordinates (column, row) of the reference (tie point) pixel, here the upper-left pixel
368000.000, 4307000.0 - UTM coordinates (meters) of the map origin, which refers to the upper-left corner of the image (xMin, yMax).
1.0000000, 1.0000000 - pixel resolution (meters)
18 - UTM zone
N - UTM hemisphere (North for all NEON sites)
WGS-84 - reference ellipsoid
The letter b that appears before UTM signifies that the variable-length string data is stored in binary format when it is written to the hdf5 file. Don't worry about it for now, as we will convert the numerical data we need into floating point numbers. For more information on hdf5 strings read the <a href="http://docs.h5py.org/en/latest/strings.html" target="_blank">h5py documentation</a>.
Let's extract relevant information from the Map_Info metadata to define the spatial extent of this dataset. To do this, we can use the split method to break up this string into separate values:
End of explanation
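Depending on the h5py version, the stored value may come back as a bytes object (hence the leading b). As an optional sketch, decoding it explicitly gives a clean string without the b'...' wrapper; this is an alternative to the str() call above and uses its own variable names so nothing else changes:
map_info_raw = serc_mapInfo[()]          # read the scalar dataset value
if isinstance(map_info_raw, bytes):       # h5py may return variable-length strings as bytes
    map_info_clean = map_info_raw.decode('utf-8')
else:
    map_info_clean = str(map_info_raw)
print(map_info_clean.split(','))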
#Extract the resolution & convert to floating decimal number
res = float(mapInfo_split[5]),float(mapInfo_split[6])
print('Resolution:',res)
#Extract the upper left-hand corner coordinates from mapInfo
xMin = float(mapInfo_split[3])
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions
xMax = xMin + (refl_shape[1]*res[0]) #xMax = left edge + (# of columns * x pixel resolution)
yMin = yMax - (refl_shape[0]*res[1]) #yMin = top edge - (# of rows * y pixel resolution)
Explanation: Now we can extract the spatial information we need from the map info values, convert them to the appropriate data type (float) and store it in a way that will enable us to access and apply it later when we want to plot the data:
End of explanation
#Define extent as a tuple:
serc_ext = (xMin, xMax, yMin, yMax)
print('serc_ext:',serc_ext)
print('serc_ext type:',type(serc_ext))
Explanation: Now we can define the spatial extent as the tuple (xMin, xMax, yMin, yMax). This is the format required for applying the spatial extent when plotting with matplotlib.pyplot.
End of explanation
b56 = serc_reflArray[:,:,55].astype(float)
print('b56 type:',type(b56))
print('b56 shape:',b56.shape)
print('Band 56 Reflectance:\n',b56)
Explanation: Extract a Single Band from Array
While it is useful to have all the data contained in a hyperspectral cube, it is difficult to visualize all this information at once. We can extract a single band (representing a ~5nm band, approximating a single wavelength) from the cube by using splicing as follows. Note that we have to cast the reflectance data into the type float. Recall that since Python indexing starts at 0 instead of 1, in order to extract band 56, we need to use the index 55.
End of explanation
#View and apply scale factor and data ignore value
scaleFactor = serc_reflArray.attrs['Scale_Factor']
noDataValue = serc_reflArray.attrs['Data_Ignore_Value']
print('Scale Factor:',scaleFactor)
print('Data Ignore Value:',noDataValue)
b56[b56==int(noDataValue)]=np.nan
b56 = b56/scaleFactor
print('Cleaned Band 56 Reflectance:\n',b56)
Explanation: Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below.
Scale factor and No Data Value
This array represents the scaled reflectance for band 56. Recall from exploring the HDF5 data in HDFViewer that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often called NaN), and a reflectance Scale_Factor of 10000.0 in order to save disk space (can use lower precision this way).
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"></a>
<figcaption> Screenshot of the NEON HDF5 file format.
Source: National Ecological Observatory Network
</figcaption>
</figure>
We can extract and apply the Data_Ignore_Value and Scale_Factor as follows:
End of explanation
serc_plot = plt.imshow(b56,extent=serc_ext,cmap='Greys')
Explanation: Plot single reflectance band
Now we can plot this band using the Python package matplotlib.pyplot, which we imported at the beginning of the lesson as plt. Note that the default colormap is jet unless otherwise specified. You can explore using different colormaps on your own; see the <a href="https://matplotlib.org/examples/color/colormaps_reference.html" target="_blank">matplotlib colormaps</a> for other options.
End of explanation
plt.hist(b56[~np.isnan(b56)],50); #50 signifies the # of bins
Explanation: We can see that this image looks pretty washed out. To see why this is, it helps to look at the range and distribution of reflectance values that we are plotting. We can do this by making a histogram.
Plot histogram
We can plot a histogram using the matplotlib.pyplot.hist function. Note that this function won't work if there are any NaN values, so we can ensure we are only plotting the real data values using the call below. You can also specify the # of bins you want to divide the data into.
End of explanation
serc_plot = plt.imshow(b56,extent=serc_ext,cmap='Greys',clim=(0,0.4))
plt.title('SERC Band 56 Reflectance');
Explanation: We can see that most of the reflectance values are < 0.4. In order to show more contrast in the image, we can adjust the colorlimit (clim) to 0-0.4:
End of explanation
from skimage import exposure
from ipywidgets import interact  # ipywidgets replaces the deprecated IPython.html.widgets
def linearStretch(percent):
pLow, pHigh = np.percentile(b56[~np.isnan(b56)], (percent,100-percent))
img_rescale = exposure.rescale_intensity(b56, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=serc_ext,cmap='gist_earth')
#cbar = plt.colorbar(); cbar.set_label('Reflectance')
plt.title('SERC Band 56 \n Linear ' + str(percent) + '% Contrast Stretch');
ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
interact(linearStretch,percent=(0,50,1))
Explanation: Here you can see that adjusting the colorlimit displays features (eg. roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values.
Extension: Basic Image Processing -- Contrast Stretch & Histogram Equalization
We can also try out some basic image processing to better visualize the
reflectance data using the ski-image package.
Histogram equalization is a method in image processing of contrast adjustment
using the image's histogram. Stretching the histogram can improve the contrast
of a displayed image, as we will show how to do below.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"></a>
<figcaption> Histogram equalization is a method in image processing of contrast adjustment
using the image's histogram. Stretching the histogram can improve the contrast
of a displayed image, as we will show how to do below.
Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a>
</figcaption>
</figure>
The following tutorial section is adapted from skikit-image's tutorial
<a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank"> Histogram Equalization</a>.
Below we demonstrate a widget to interactively display different linear contrast stretches:
Explore the contrast stretch feature interactively using IPython widgets:
End of explanation |
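The learning objectives also mention adaptive equalization. Here is a minimal sketch (an addition, using scikit-image's CLAHE implementation); it assumes the NaN pixels are filled and the values are scaled into [0, 1] first:
from skimage import exposure
import numpy as np
import matplotlib.pyplot as plt
# equalize_adapthist expects finite values in [0, 1]; clip to the 0-0.4 range used above
b56_scaled = np.nan_to_num(np.clip(b56, 0, 0.4) / 0.4)
b56_adapteq = exposure.equalize_adapthist(b56_scaled, clip_limit=0.05)
plt.figure()
plt.imshow(b56_adapteq, extent=serc_ext, cmap='gist_earth')
plt.title('SERC Band 56 \n Adaptive Equalization');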
1,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 4 - Understanding and Predicting Property Maintenance Fines
This assignment is based on a data challenge from the Michigan Data Science Team (MDST).
The Michigan Data Science Team (MDST) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. Blight violations are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know
Step1:
Step2: Train, keep, test split
Step3: Train a NeuralNet and see the performance | Python Code:
import pandas as pd
import numpy as np
def blight_model():
# Your code here
return # Your answer here
df_train = pd.read_csv('train.csv', encoding = "ISO-8859-1")
df_test = pd.read_csv('test.csv', encoding = "ISO-8859-1")
df_train.columns
list_to_remove = ['balance_due',
'collection_status',
'compliance_detail',
'payment_amount',
'payment_date',
'payment_status']
list_to_remove_all = ['violator_name', 'zip_code', 'country', 'city',
'inspector_name', 'violation_street_number', 'violation_street_name',
'violation_zip_code', 'violation_description',
'mailing_address_str_number', 'mailing_address_str_name',
'non_us_str_code',
'ticket_issued_date', 'hearing_date']
df_train.drop(list_to_remove, axis=1, inplace=True)
df_train.drop(list_to_remove_all, axis=1, inplace=True)
df_test.drop(list_to_remove_all, axis=1, inplace=True)
df_train.drop('grafitti_status', axis=1, inplace=True)
df_test.drop('grafitti_status', axis=1, inplace=True)
df_train.head()
df_train.violation_code.unique().size
df_train.disposition.unique().size
df_latlons = pd.read_csv('latlons.csv')
df_latlons.head()
df_address = pd.read_csv('addresses.csv')
df_address.head()
df_id_latlons = df_address.set_index('address').join(df_latlons.set_index('address'))
df_id_latlons.head()
df_train = df_train.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))
df_test = df_test.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))
df_train.head()
df_train.agency_name.value_counts()
# df_train.country.value_counts()
# so we remove zip code and country as well
vio_code_freq10 = df_train.violation_code.value_counts().index[0:10]
vio_code_freq10
df_train['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_train.violation_code ]
df_train.head()
df_train.violation_code_freq10.value_counts()
# drop violation code
df_train.drop('violation_code', axis=1, inplace=True)
df_test['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_test.violation_code ]
df_test.drop('violation_code', axis=1, inplace=True)
#df_train.grafitti_status.fillna('None', inplace=True)
#df_test.grafitti_status.fillna('None', inplace=True)
df_train = df_train[df_train.compliance.isnull() == False]
df_train.isnull().sum()
df_test.isnull().sum()
df_train.lat.fillna(method='pad', inplace=True)
df_train.lon.fillna(method='pad', inplace=True)
df_train.state.fillna(method='pad', inplace=True)
df_test.lat.fillna(method='pad', inplace=True)
df_test.lon.fillna(method='pad', inplace=True)
df_test.state.fillna(method='pad', inplace=True)
df_train.isnull().sum().sum()
df_test.isnull().sum().sum()
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 4 - Understanding and Predicting Property Maintenance Fines
This assignment is based on a data challenge from the Michigan Data Science Team (MDST).
The Michigan Data Science Team (MDST) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. Blight violations are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?
The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.
All data for this assignment has been provided to us through the Detroit Open Data Portal. Only the data already included in your Coursera directory can be used for training the model for this assignment. Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:
Building Permits
Trades Permits
Improve Detroit: Submitted Issues
DPD: Citizen Complaints
Parcel Map
We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing data, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.
Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.
<br>
File descriptions (Use only this data for training your model!)
train.csv - the training set (all tickets issued 2004-2011)
test.csv - the test set (all tickets issued 2012-2016)
addresses.csv & latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates.
Note: misspelled addresses may be incorrectly geolocated.
<br>
Data fields
train.csv & test.csv
ticket_id - unique identifier for tickets
agency_name - Agency that issued the ticket
inspector_name - Name of inspector that issued the ticket
violator_name - Name of the person/organization that the ticket was issued to
violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
ticket_issued_date - Date and time the ticket was issued
hearing_date - Date and time the violator's hearing was scheduled
violation_code, violation_description - Type of violation
disposition - Judgment and judgement type
fine_amount - Violation fine amount, excluding fees
admin_fee - $20 fee assigned to responsible judgments
state_fee - $10 fee assigned to responsible judgments
late_fee - 10% fee assigned to responsible judgments
discount_amount - discount applied, if any
clean_up_cost - DPW clean-up or graffiti removal cost
judgment_amount - Sum of all fines and fees
grafitti_status - Flag for graffiti violations
train.csv only
payment_amount - Amount paid, if any
payment_date - Date payment was made, if it was received
payment_status - Current payment status as of Feb 1 2017
balance_due - Fines and fees still owed
collection_status - Flag for payments in collections
compliance [target variable for prediction]
Null = Not responsible
0 = Responsible, non-compliant
1 = Responsible, compliant
compliance_detail - More information on why each ticket was marked compliant or non-compliant
Evaluation
Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.
The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).
Your grade will be based on the AUC score computed for your classifier. A model which with an AUROC of 0.7 passes this assignment, over 0.75 will recieve full points.
For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using train.csv. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from test.csv will be paid, and the index being the ticket_id.
Example:
ticket_id
284932 0.531842
285362 0.401958
285361 0.105928
285338 0.018572
...
376499 0.208567
376500 0.818759
369851 0.018528
Name: compliance, dtype: float32
End of explanation
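As a hedged sketch of the required return value (not part of the original notebook): assuming clf is a fitted probabilistic classifier and X_submit holds the preprocessed test features indexed by ticket_id, the expected Series can be assembled like this:
import pandas as pd
def make_submission(clf, X_submit):
    # probability of the positive class (compliance == 1) for every test ticket
    proba = clf.predict_proba(X_submit)[:, 1]
    return pd.Series(proba, index=X_submit.index, name='compliance')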
df_train.head()
one_hot_encode_columns = ['agency_name', 'state', 'disposition']
[ df_train[c].unique().size for c in one_hot_encode_columns]
# So remove city and states...
one_hot_encode_columns = ['agency_name', 'state', 'disposition']
df_train = pd.get_dummies(df_train, columns=one_hot_encode_columns)
df_test = pd.get_dummies(df_test, columns=one_hot_encode_columns)
df_train.head()
Explanation:
End of explanation
from sklearn.model_selection import train_test_split
train_features = df_train.columns.drop('compliance')
train_features
X_data, X_keep, y_data, y_keep = train_test_split(df_train[train_features],
df_train.compliance,
random_state=0,
test_size=0.05)
print(X_data.shape, X_keep.shape)
X_train, X_test, y_train, y_test = train_test_split(X_data[train_features],
y_data,
random_state=0,
test_size=0.2)
print(X_train.shape, X_test.shape)
Explanation: Train, keep, test split
End of explanation
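Because compliant tickets are a small minority of the data (the class-balance check further down makes this explicit), a stratified split, an optional variation not used in the original code, keeps the class ratio identical in every partition:
from sklearn.model_selection import train_test_split
X_tr_s, X_te_s, y_tr_s, y_te_s = train_test_split(
    df_train[train_features], df_train.compliance,
    random_state=0, test_size=0.2, stratify=df_train.compliance)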
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
clf = MLPClassifier(hidden_layer_sizes = [50], alpha = 5,
random_state = 0,
solver='lbfgs')
clf.fit(X_train_scaled, y_train)
print(clf.loss_)
clf.score(X_train_scaled, y_train)
clf.score(X_test_scaled, y_test)
from sklearn.metrics import recall_score, precision_score, f1_score
train_pred = clf.predict(X_train_scaled)
print(precision_score(y_train, train_pred),
recall_score(y_train, train_pred),
f1_score(y_train, train_pred))
from sklearn.metrics import recall_score, precision_score, f1_score
test_pred = clf.predict(X_test_scaled)
print(precision_score(y_test, test_pred),
recall_score(y_test, test_pred),
f1_score(y_test, test_pred))
test_pro = clf.predict_proba(X_test_scaled)
def draw_roc_curve():
%matplotlib notebook
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
fpr_lr, tpr_lr, _ = roc_curve(y_test, test_pro[:,1])
roc_auc_lr = auc(fpr_lr, tpr_lr)
plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.axes().set_aspect('equal')
plt.show()
draw_roc_curve()
test_pro[0:10]
clf.predict(X_test_scaled[0:10])
y_test[0:10]
1 - y_train.sum()/len(y_train)
from sklearn.metrics import recall_score, precision_score, f1_score
test_pred = clf.predict(X_test_scaled)
print(precision_score(y_test, test_pred),
recall_score(y_test, test_pred),
f1_score(y_test, test_pred))
def draw_pr_curve():
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve, auc
precision, recall, thresholds = precision_recall_curve(y_test, test_pro[:,1])
print(len(thresholds))
idx = min(range(len(thresholds)), key=lambda i: abs(thresholds[i]-0.5))
print(idx)
print(np.argmin(np.abs(thresholds)))
closest_zero = idx # np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
import matplotlib.pyplot as plt
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.show()
return thresholds
thresholds = draw_pr_curve()
import matplotlib.pyplot as plt
%matplotlib notebook
plt.plot(thresholds)
plt.show()
Explanation: Train a neural network and evaluate its performance
End of explanation |
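Since the assignment is graded on AUC, it is worth computing it directly as well; a one-line check (added sketch) using the classifier and test split produced above:
from sklearn.metrics import roc_auc_score
print('test AUC: {:.3f}'.format(roc_auc_score(y_test, clf.predict_proba(X_test_scaled)[:, 1])))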
1,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic Programming and Graph Algorithm Problems
Here are what we went over in the class. In addition here are the links to the MIT Course I mentioned and to my repo that has a lot more implementations of different types of algorithms and strategies.
Dynamic Programming
Dynamic programming is about approaching problems with overlapping substructures. We are taking a careful brute force method where exponential search space is reduced to polynomial search space.
Canonical Example
Step1: There are two approaches to dynamic programming problems.
Guessing + recursion + memoization. Pick out a feature of the problem space that we don't know and brute force an answer. Then we recurse over the problem space until we reach the part that is relevant for our specific instance. Then we memoize. Memoization takes what is often exponential time and makes linear/polynomial.
The second is a bottom-up approach. We build a dynamic programming table until we can solve the original problem.
Problems to try
Step2: 0/1 Knapsack Problem
Given a set of tuples that represent the weight and value of different goods, cakes in our examples, find the maximum value we can get given a knapsack with a certain weight restriction. Find the point where the value is maximum and the sum of their weight is equal to the total weight allowed by the knapsack. 0/1 means you cannot split an item.
Step3: Common dynamic programming problems
Step4: Graph Structures
There are many graph structures that are useful.
Tries- Tries are great for indexing words, alphabets, anything where you are trying to keep track of words tries are useful. The key to tries are that the letters lie along the edges of the graph and the vertices represent the word up to that point. Make sure that at the end of a word you have a special character to denote that you have reached the end of the word even if there are edges that continue towards another word.
DAG - Directed Acyclic Graphs | Python Code:
def fib(n):
if n < 0:
raise Exception("Index was negative. Cannot have a negative index in a series")
if n < 2:
return n
return fib(n-1) + fib(n-2)
fib(25)
def fib(n):
if n < 0:
raise Exception("Index was negative. Cannot have a negative index in a series")
if n < 2:
return n
pred_pred, pred = 0, 1
for _ in range(n-1):
current = pred + pred_pred
pred_pred = pred
pred = current
return current
fib(25)
class Fibber:
def __init__(self):
self.memo = {}
def fib(self, n):
if n < 0:
raise Exception('Index was negative. No such thing as a negative index in a series')
elif n < 2:
return n;
if n in self.memo:
return self.memo[n]
result = self.fib(n-1) + self.fib(n-2)
self.memo[n] = result
return result
fibs = Fibber()
fibs.fib(25)
Explanation: Dynamic Programming and Graph Algorithm Problems
Here is what we went over in class. In addition, here are the links to the MIT course I mentioned and to my repo, which has many more implementations of different types of algorithms and strategies.
Dynamic Programming
Dynamic programming is about approaching problems that have overlapping subproblems and optimal substructure. It is careful brute force: an exponential search space is reduced to a polynomial one by solving each subproblem once and reusing the answer.
Canonical Example: Fibonacci Sequence
End of explanation
# Triple Step
class triple_step():
def __init__(self):
self.memo = {}
def triple_step(self, step):
if step < 0:
return 0
self.memo[0] = 1
if step == 0:
return self.memo[0]
self.memo[1] = 1
if step == 1:
return self.memo[1]
result = self.triple_step(step-1) + self.triple_step(step-2) + self.triple_step(step-3)
self.memo[step] = result
return result
t = triple_step()
t.triple_step(4)
Explanation: There are two approaches to dynamic programming problems.
Guessing + recursion + memoization. Pick out a feature of the problem space that we don't know and brute force an answer. Then we recurse over the problem space until we reach the part that is relevant for our specific instance. Then we memoize. Memoization takes what is often an exponential-time computation and makes it linear or polynomial.
The second is a bottom-up approach. We build a dynamic programming table until we can solve the original problem.
Problems to try:
A child is running up a staircase with n steps and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs.
Imagine a robot sitting on the upper left corner of grid with r rows and c columns. The robot can only move in two directions, right and down, but certain cells are 'off limits' such that the robot cannot step on them. Design an algorithm to find a path for the robot from the top left to the bottom right.
End of explanation
# a cake tuple (3, 90) weighs 3 kilograms and has a value
# of 90 pounds
cake_tuples = [(7, 160), (3, 90), (2, 15)]
capacity = 20
def max_duffel_bag_value(cake_tuples, weight_capacity):
# we make a list to hold the max possible value at every
# duffel bag weight capacity from 0 to weight_capacity
# starting each index with value 0
#initialize an array of zeroes for each capacity limit
max_values_at_capacities = [0]*(weight_capacity + 1)
for current_capacity in range(weight_capacity + 1):
current_max_value = 0
#iterate through our range of weights from 0 to capacity
for cake_weight, cake_value in cake_tuples:
if cake_weight <= current_capacity:
#check the cake would fit at all
#take the value from the current capacity - the cake weight and add to the value of this cake
max_value_using_cake = cake_value + max_values_at_capacities[current_capacity - cake_weight]
#do this for each cake, take the one that gives us the highest value
current_max_value = max(max_value_using_cake, current_max_value)
#set that max value to the current capacity
max_values_at_capacities[current_capacity] = current_max_value
return max_values_at_capacities[weight_capacity]
max_duffel_bag_value(cake_tuples, capacity)
def getPath(maze):
    if len(maze) == 0 or len(maze[0]) == 0:
        return None
    path = []
    failedPoints = set()
    # start the search from the bottom-right cell (row and column indices are 0-based)
    if pathFinder(maze, len(maze) - 1, len(maze[0]) - 1, path, failedPoints):
        return path
    return None
def pathFinder(maze, row, col, path, failedPoints):
    if col < 0 or row < 0 or not maze[row][col]:
        return False
    p = (row, col)
    if p in failedPoints:
        return False
    isAtOrigin = (row == 0) and (col == 0)
    if isAtOrigin or pathFinder(maze, row, col-1, path, failedPoints) or pathFinder(maze, row-1, col, path, failedPoints):
        path.append(p)
        return True
    failedPoints.add(p)
    return False
Explanation: 0/1 Knapsack Problem
Given a set of tuples that represent the weight and value of different goods, cakes in our examples, find the maximum value we can get given a knapsack with a certain weight restriction. Find the point where the value is maximum and the sum of their weight is equal to the total weight allowed by the knapsack. 0/1 means you cannot split an item.
End of explanation
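Note that max_duffel_bag_value above lets the same cake be picked more than once (the unbounded variant). A strict 0/1 sketch, where each item is used at most once, only needs the capacity loop to run downwards (my own addition, not part of the original notebook):
def max_value_01(items, weight_capacity):
    best = [0] * (weight_capacity + 1)
    for item_weight, item_value in items:
        # go from high capacity to low so each item is counted at most once
        for c in range(weight_capacity, item_weight - 1, -1):
            best[c] = max(best[c], item_value + best[c - item_weight])
    return best[weight_capacity]
max_value_01(cake_tuples, capacity)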
class Node:
def __init__(self, value):
self.v = value
self.right = None
self.left= None
import math
def checkBST(node):
    return checkBSThelper(node, -math.inf, math.inf)
def checkBSThelper(node, mini, maxi):
if node is None:
return True
if node.v < mini or node.v >= maxi:
return False
return checkBSThelper(node.left, mini, node.v) and checkBSThelper(node.right, node.v, maxi)
class Node:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
def checkBalanced(root):
if root is None:
return 0
left = checkBalanced(root.left)
right = checkBalanced(root.right)
    # propagate the unbalanced flag (-1) from deeper subtrees as well
    if left == -1 or right == -1 or abs(left - right) > 1:
        return -1
return max(left, right) + 1
Explanation: Common dynamic programming problems:
Fibonacci
Shortest Path
Parenthesization
Knapsack
Towers of Hanoi
Edit Distance
Eight Queens/ N Queens
Coin change
Longest Common Subsequence
Graphs and Trees
Differences between graphs and trees
Trees have a direct child/parent relationship and don't contain cycles. Trees are a DAG (directed acyclic graph) with the restriction that each child has exactly one parent.
Binary Trees
Binary trees are a restricted case of a graph and a great way to get familiar with traversing and interacting with graph structures before jumping to something more abstract.
A binary tree is only a binary search tree if an inorder traversal yields its values in sorted order.
Full - Every node has exactly 2 children except the leaves. The leaves have 0 children.
Complete - Every level, except the last, is completely filled, and all nodes are as far left as possible. A binary tree can be complete with nodes which have a single child if it is the leftmost child.
Balanced - The left and right sides of the tree have a height difference of 1 or less.
Here are some tasks you should know how to do with binary search trees:
Build a binary tree from a sorted array
Inorder, Preorder, Postorder traversal
Depth-first and Breadth-first search
Check if a BST is balanced
Validate tree is a BST (must adhere to BST properties)
Find common ancestor between two nodes
End of explanation
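A quick sketch of two of the tasks listed above (my own addition), reusing the second Node class defined earlier (the one with a value attribute):
def build_bst(sorted_values):
    # pick the middle element as the root so the tree stays balanced
    if not sorted_values:
        return None
    mid = len(sorted_values) // 2
    root = Node(sorted_values[mid])
    root.left = build_bst(sorted_values[:mid])
    root.right = build_bst(sorted_values[mid + 1:])
    return root
def inorder(root, values=None):
    # left, node, right -- for a BST this returns the values in sorted order
    if values is None:
        values = []
    if root is not None:
        inorder(root.left, values)
        values.append(root.value)
        inorder(root.right, values)
    return values
inorder(build_bst([1, 2, 3, 4, 5, 6, 7]))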
class Trie:
def __init__(self):
self.root_node = {}
def check_present_and_add(self, word):
current_node = self.root_node
is_new_word = False
for char in word:
if char not in current_node:
is_new_word = True
current_node[char] = {}
            current_node = current_node[char]
        if "End of Word" not in current_node:
            is_new_word = True
            current_node["End of Word"] = {}
return is_new_word
import collections
words = ["baa", "", "abcd", "abca", "cab", "cad"]
def alienOrder(words):
pre, suc = collections.defaultdict(set), collections.defaultdict(set)
for pair in zip(words, words[1:]):
print(pair)
for a, b in zip(*pair):
if a != b:
suc[a].add(b)
pre[b].add(a)
break
print('succ %s' % suc)
print('pred %s' % pre)
chars = set(''.join(words))
print('chars %s' % chars)
print(set(pre))
free = chars - set(pre)
print('free %s' % free)
order = ''
while free:
a = free.pop()
order += a
for b in suc[a]:
pre[b].discard(a)
if not pre[b]:
free.add(b)
if set(order) == chars:
return order
else:
        return False
# return order * (set(order) == chars)
alienOrder(words)
Explanation: Graph Structures
There are many graph structures that are useful.
Tries - Tries are great for indexing words, alphabets, or anything where you are trying to keep track of words. The key to tries is that the letters lie along the edges of the graph and the vertices represent the word up to that point. Make sure that at the end of a word you have a special character to denote that you have reached the end of the word, even if there are edges that continue towards another word.
DAG - Directed Acyclic Graphs: DAGs are really good structures for representing relationships between items.
End of explanation |
1,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 15
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: So far the systems we have studied have been physical in the sense that they exist in the world, but they have not been physics, in the sense of what physics classes are usually about. In the next few chapters, we'll do some physics, starting with thermal systems, that is, systems where the temperature of objects changes as heat transfers from one to another.
The coffee cooling problem
The coffee cooling problem was discussed by Jearl Walker in
"The Amateur Scientist", Scientific American, Volume 237, Issue 5, November 1977. Since then it has become a standard example of modeling and simulation.
Here is my version of the problem
Step2: The values of T_init, volume, and t_end come from the statement of the problem
Step3: I chose the value of r arbitrarily for now; we will figure out how to estimate it soon.
In addition, make_system sets the temperature of the environment, T_env, and the time step, dt, which we will use to simulate the cooling process.
Strictly speaking, Newton's law is a differential equation, but over a short period of time we can approximate it with a difference equation
Step4: We can test it with the initial temperature of the coffee, like this
Step5: With dt=1 minute, the temperature drops by about 0.7 °C/min, at least for this value of r.
Now here's a version of run_simulation that simulates a series of time steps from t_0 to t_end
Step6: This function is similar to previous versions of run_simulation.
One difference is that it uses linrange to make an array of values
from t_0 to t_end with time step dt.
We can run it like this
Step7: The result is a TimeSeries with one row per time step.
Step8: Here's what the results look like.
Step9: The temperature after 30 minutes is 72.3 °C, which is a little higher than what's stated in the problem, 70 °C.
Step10: By trial and error, we could find the value of r where the final temperature is precisely 70 °C.
But it is more efficient to use a root-finding algorithm.
Finding roots
The SciPy library provides a function called root_scalar that finds the roots of non-linear equations. As a simple example, suppose you want to find the roots of the polynomial
$$f(x) = (x - 1)(x - 2)(x - 3)$$
A root is a value of $x$ that makes $f(x)=0$. Because of the way I wrote the polynomial, we can see that if $x=1$, the first factor is 0; if $x=2$, the second factor is 0; and if $x=3$, the third factor is 0, so those are the roots.
I'll use this example to demonstrate root_scalar. First, we have to
write a function that evaluates $f$
Step11: Now we call root_scalar like this
Step12: The first argument is the function whose roots we want. The second
argument is an interval that contains or "brackets" a root. The result is an object that contains several variables, including root, which stores the root that was found.
Step13: If we provide a different interval, we find a different root.
Step14: If the interval doesn't contain a root, you'll get a ValueError
Step15: This is called an "error function" because it returns the
difference between what we got and what we wanted, that is, the error.
With the right value of r, the error is 0.
We can test error_func like this, using the initial guess r=0.01
Step16: The result is an error of 2.3 °C, because the final temperature with
this value of r is too high.
Step17: With r=0.02, the error is about -11°C, which means that the final temperature is too low. So we know that the correct value must be in between.
So we can call root_scalar like this
Step18: The first argument is the error function.
The second argument is the System object, which root_scalar passes as an argument to error_func.
The third argument is an interval that brackets the root.
Here are the results.
Step19: In this example, r_coffee turns out to be about 0.0115, in units of min$^{-1}$ (inverse minutes).
We can confirm that this value is correct by setting r to the root we found and running the simulation.
Step20: The final temperature is very close to 70 °C.
Exercises
Exercise
Step21: Exercise | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 15
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from modsim import System
def make_system(T_init, volume, r, t_end):
return System(T_init=T_init,
T_final=T_init,
volume=volume,
r=r,
t_end=t_end,
T_env=22,
t_0=0,
dt=1)
Explanation: So far the systems we have studied have been physical in the sense that they exist in the world, but they have not been physics, in the sense of what physics classes are usually about. In the next few chapters, we'll do some physics, starting with thermal systems, that is, systems where the temperature of objects changes as heat transfers from one to another.
The coffee cooling problem
The coffee cooling problem was discussed by Jearl Walker in
"The Amateur Scientist", Scientific American, Volume 237, Issue 5, November 1977. Since then it has become a standard example of modeling and simulation.
Here is my version of the problem:
Suppose I stop on the way to work to pick up a cup of coffee, which I take with milk. Assuming that I want the coffee to be as hot as possible when I arrive at work, should I add the milk at the coffee shop, wait until I get to work, or add the milk at some point in between?
To help answer this question, I made a trial run with the milk and
coffee in separate containers and took some measurements (not really):
When served, the temperature of the coffee is 90 °C. The volume is
300 mL.
The milk is at an initial temperature of 5 °C, and I take about
50 mL.
The ambient temperature in my car is 22 °C.
The coffee is served in a well insulated cup. When I arrive at work after 30 minutes, the temperature of the coffee has fallen to 70 °C.
The milk container is not as well insulated. After 15 minutes, it
warms up to 20 °C, nearly the ambient temperature.
To use this data and answer the question, we have to know something
about temperature and heat, and we have to make some modeling decisions.
Temperature and heat
To understand how coffee cools (and milk warms), we need a model of
temperature and heat. Temperature is a property of an object or a
system; in SI units it is measured in degrees Celsius (°C). Temperature quantifies how hot or cold the object is, which is related to the average velocity of the particles that make up the object.
When particles in a hot object contact particles in a cold object, the
hot object gets cooler and the cold object gets warmer as energy is
transferred from one to the other. The transferred energy is called
heat; in SI units it is measured in joules (J).
Heat is related to temperature by the following equation (see
http://modsimpy.com/thermass):
$$Q = C~\Delta T$$
where $Q$ is the amount of heat transferred to an object, $\Delta T$ is resulting change in temperature, and $C$ is the thermal mass of the object, which quantifies how much energy it takes to heat or cool it. In SI units, thermal mass is measured in joules per degree Celsius (J/°C).
For objects made primarily from one material, thermal mass can be
computed like this:
$$C = m c_p$$
where $m$ is the mass of the object and $c_p$ is the specific heat capacity of the material (see http://modsimpy.com/specheat).
We can use these equations to estimate the thermal mass of a cup of
coffee. The specific heat capacity of coffee is probably close to that
of water, which is 4.2 J/g/°C. Assuming that the density of coffee is
close to that of water, which is 1 g/mL, the mass of 300 mL of coffee is 300 g, and the thermal mass is 1260 J/°C.
So when a cup of coffee cools from 90 °C to 70 °C, the change in
temperature, $\Delta T$ is 20 °C, which means that 25 200 J of heat
energy was transferred from the coffee to the surrounding environment
(the cup holder and air in my car).
To give you a sense of how much energy that is, if you were able to
harness all of that heat to do work (which you cannot), you could
use it to lift a cup of coffee from sea level to 8571 m, just shy of the height of Mount Everest, 8848 m.
Assuming that the cup has less mass than the coffee, and is made from a material with lower specific heat, we can ignore the thermal mass of the cup. For a cup with substantial thermal mass, like a ceramic mug, we might consider a model that computes the temperature of coffee and cup separately.
Heat transfer
In a situation like the coffee cooling problem, there are three ways
heat transfers from one object to another (see http://modsimpy.com/transfer):
Conduction: When objects at different temperatures come into
contact, the faster-moving particles of the higher-temperature
object transfer kinetic energy to the slower-moving particles of the lower-temperature object.
Convection: When particles in a gas or liquid flow from place to
place, they carry heat energy with them. Fluid flows can be caused
by external action, like stirring, or by internal differences in
temperature. For example, you might have heard that hot air rises,
which is a form of "natural convection\".
Radiation: As the particles in an object move due to thermal energy,
they emit electromagnetic radiation. The energy carried by this
radiation depends on the object's temperature and surface properties
(see http://modsimpy.com/thermrad).
For objects like coffee in a car, the effect of radiation is much
smaller than the effects of conduction and convection, so we will ignore it.
Convection can be a complex topic, since it often depends on details of fluid flow in three dimensions. But for this problem we will be able to get away with a simple model called "Newton's law of cooling".
Newton's law of cooling
Newton's law of cooling asserts that the temperature rate of change for an object is proportional to the difference in temperature between the
object and the surrounding environment:
$$\frac{dT}{dt} = -r (T - T_{env})$$
where $T$, the temperature of the object, is a function of time, $t$; $T_{env}$ is the temperature of the environment, and $r$ is a constant that characterizes how quickly heat is transferred between the system and the environment.
Newton's so-called "law " is really a model: it is a good approximation in some conditions and less good in others.
For example, if the primary mechanism of heat transfer is conduction,
Newton's law is "true", which is to say that $r$ is constant over a
wide range of temperatures. And sometimes we can estimate $r$ based on
the material properties and shape of the object.
When convection contributes a non-negligible fraction of heat transfer, $r$ depends on temperature, but Newton's law is often accurate enough, at least over a narrow range of temperatures. In this case $r$ usually has to be estimated experimentally, since it depends on details of surface shape, air flow, evaporation, etc.
When radiation makes up a substantial part of heat transfer, Newton's
law is not a good model at all. This is the case for objects in space or in a vacuum, and for objects at high temperatures (more than a few
hundred degrees Celsius, say).
However, for a situation like the coffee cooling problem, we expect
Newton's model to be quite good.
Implementation
To get started, let's forget about the milk temporarily and focus on the coffee.
Here's a function that takes the parameters of the system and makes a System object:
End of explanation
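As a quick check of the arithmetic above (my own sketch, not part of the book's code), the thermal mass and heat transfer work out like this:
c_p_water = 4.2          # J/g/C, specific heat capacity, assumed close to coffee
mass_coffee = 300        # g, assuming a density of about 1 g/mL for 300 mL
C_coffee = mass_coffee * c_p_water    # thermal mass, about 1260 J/C
Q = C_coffee * 20        # heat transferred cooling from 90 C to 70 C, about 25200 J
C_coffee, Q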
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
Explanation: The values of T_init, volume, and t_end come from the statement of the problem:
End of explanation
def change_func(T, t, system):
r, T_env, dt = system.r, system.T_env, system.dt
return -r * (T - T_env) * dt
Explanation: I chose the value of r arbitrarily for now; we will figure out how to estimate it soon.
In addition, make_system sets the temperature of the environment, T_env, and the time step, dt, which we will use to simulate the cooling process.
Strictly speaking, Newton's law is a differential equation, but over a short period of time we can approximate it with a difference equation:
$$\Delta T = -r (T - T_{env}) dt$$
where $dt$ is the time step and $\Delta T$ is the change in temperature during that time step.
Note: I use $\Delta T$ to denote a change in temperature over time, but in the context of heat transfer, you might also see $\Delta T$ used to denote the difference in temperature between an object and its
environment, $T - T_{env}$. To minimize confusion, I avoid this second
use.
The following function takes the current temperature, T, the current time t, and a System object, and computes the change in temperature during a time step:
End of explanation
change_func(coffee.T_init, 0, coffee)
Explanation: We can test it with the initial temperature of the coffee, like this:
End of explanation
from modsim import linrange
from modsim import TimeSeries
def run_simulation(system, change_func):
t_array = linrange(system.t_0, system.t_end, system.dt)
n = len(t_array)
series = TimeSeries(index=t_array)
series.iloc[0] = system.T_init
for i in range(n-1):
t = t_array[i]
T = series.iloc[i]
series.iloc[i+1] = T + change_func(T, t, system)
system.t_end = t_array[-1]
system.T_final = series.iloc[-1]
return series
Explanation: With dt=1 minute, the temperature drops by about 0.7 °C/min, at least for this value of r.
Now here's a version of run_simulation that simulates a series of time steps from t_0 to t_end:
End of explanation
results = run_simulation(coffee, change_func)
Explanation: This function is similar to previous versions of run_simulation.
One difference is that it uses linrange to make an array of values
from t_0 to t_end with time step dt.
We can run it like this:
End of explanation
results.head()
Explanation: The result is a TimeSeries with one row per time step.
End of explanation
from modsim import decorate
results.plot(label='coffee')
decorate(xlabel='Time (minute)',
ylabel='Temperature (C)',
title='Coffee Cooling')
Explanation: Here's what the results look like.
End of explanation
coffee.T_final
Explanation: The temperature after 30 minutes is 72.3 °C, which is a little higher than what's stated in the problem, 70 °C.
End of explanation
def func(x):
return (x-1) * (x-2) * (x-3)
Explanation: By trial and error, we could find the value of r where the final temperature is precisely 70 °C.
But it is more efficient to use a root-finding algorithm.
Finding roots
The SciPy library provides a function called root_scalar that finds the roots of non-linear equations. As a simple example, suppose you want to find the roots of the polynomial
$$f(x) = (x - 1)(x - 2)(x - 3)$$
A root is a value of $x$ that makes $f(x)=0$. Because of the way I wrote the polynomial, we can see that if $x=1$, the first factor is 0; if $x=2$, the second factor is 0; and if $x=3$, the third factor is 0, so those are the roots.
I'll use this example to demonstrate root_scalar. First, we have to
write a function that evaluates $f$:
End of explanation
from scipy.optimize import root_scalar
res = root_scalar(func, bracket=[1.5, 2.5])
res
Explanation: Now we call root_scalar like this:
End of explanation
res.root
Explanation: The first argument is the function whose roots we want. The second
argument is an interval that contains or "brackets" a root. The result is an object that contains several variables, including root, which stores the root that was found.
End of explanation
res = root_scalar(func, bracket=[2.5, 3.5])
res.root
Explanation: If we provide a different interval, we find a different root.
End of explanation
def error_func(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 70
Explanation: If the interval doesn't contain a root, you'll get a ValueError:
res = root_scalar(func, bracket=[4, 5])
Now we can use root_scalar to estimate r.
Estimating r
What we want is the value of r that yields a final temperature of
70 °C. To use root_scalar, we need a function that takes r as a parameter and returns the difference between the final temperature and the goal:
End of explanation
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
error_func(0.01, coffee)
Explanation: This is called an "error function" because it returns the
difference between what we got and what we wanted, that is, the error.
With the right value of r, the error is 0.
We can test error_func like this, using the initial guess r=0.01:
End of explanation
error_func(0.02, coffee)
Explanation: The result is an error of 2.3 °C, because the final temperature with
this value of r is too high.
End of explanation
res = root_scalar(error_func, coffee, bracket=[0.01, 0.02])
Explanation: With r=0.02, the error is about -11°C, which means that the final temperature is too low. So we know that the correct value must be in between.
So we can call root_scalar like this:
End of explanation
res
r_coffee = res.root
r_coffee
Explanation: The first argument is the error function.
The second argument is the System object, which root_scalar passes as an argument to error_func.
The third argument is an interval that brackets the root.
Here are the results.
End of explanation
coffee.r = res.root
run_simulation(coffee, change_func)
coffee.T_final
Explanation: In this example, r_coffee turns out to be about 0.0115, in units of min$^{-1}$ (inverse minutes).
We can confirm that this value is correct by setting r to the root we found and running the simulation.
End of explanation
# Solution
milk = make_system(T_init=5, t_end=15, r=0.1, volume=50)
results_milk = run_simulation(milk, change_func)
milk.T_final
# Solution
results_milk.plot(color='C1', label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
Explanation: The final temperature is very close to 70 °C.
Exercises
Exercise: Simulate the temperature of 50 mL of milk with a starting temperature of 5 °C, in a vessel with r=0.1, for 15 minutes, and plot the results.
By trial and error, find a value for r that makes the final temperature close to 20 °C.
End of explanation
# Solution
def error_func2(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 20
# Solution
root_scalar(error_func2, milk, bracket=[0.1, 0.2])
# Solution
run_simulation(milk, change_func)
milk.T_final
Explanation: Exercise: Write an error function that simulates the temperature of the milk and returns the difference between the final temperature and 20 °C. Use it to estimate the value of r for the milk.
End of explanation |
1,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Date
Step1: Only train on data when the rat is moving
Step2: Encoding Model
Train model
Step3: Get predicted conditional intensities
Step4: Plot model fits for each neuron to check fit quality
Neurons HPa8245 and HPa8248 appear to have poor fits. What should we do about poor fits?
Step5: Compare to raw spikes
Inbound
Step6: Outbound
Step7: State Transition Matrix
Fit transition matrix based on movement data
Estimate separately based on inbound and outbound movements
Step8: Plot state transition matrix to check quality
Step9: Make sure state transition columns sum to 1
Step10: Initial conditions
Where the replay trajectory starts.
The outbound initial condition is a Gaussian with probability mass at the center arm, reflecting that the replay trajectory is likely to start at the center arm
The inbound initial condition is a Gaussian with probability mass everywhere but the center arm, reflecting that the replay trajectory is likely to start everywhere except at the center arm
Step11: Plot initial conditions
Step12: Organize into discrete states
Outbound reverse is the same as inbound forward. Double check conditional intensity
Step13: Ripple decoding
Get ripples
Step14: Decode Ripples
Step15: Display ripple category probabilities | Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%qtconsole
import sys
import collections
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from tqdm import tqdm_notebook as tqdm
import patsy
import statsmodels.api as sm
import statsmodels.formula.api as smf
import scipy.stats
sys.path.append('../src/')
import data_processing
import ripple_decoding
import ripple_detection
Animal = collections.namedtuple('Animal', {'directory', 'short_name'})
num_days = 8
days = range(1, num_days + 1)
animals = {'HPa': Animal(directory='HPa_direct', short_name='HPa')}
epoch_info = data_processing.make_epochs_dataframe(animals, days)
tetrode_info = data_processing.make_tetrode_dataframe(animals)
epoch_index = (epoch_info
.loc[(['HPa'], [8]), :]
.loc[epoch_info.environment == 'wtr1'].index)
cur_tetrode_info = tetrode_info[epoch_index[0]]
cur_tetrode_info
neuron_info = data_processing.make_neuron_dataframe(animals)[epoch_index[0]].dropna()
tetrode_info = data_processing.make_tetrode_dataframe(animals)[epoch_index[0]]
neuron_info = pd.merge(tetrode_info, neuron_info,
on=['animal', 'day', 'epoch_ind', 'tetrode_number', 'area'],
how='right', right_index=True).set_index(neuron_info.index)
neuron_info = neuron_info[neuron_info.area.isin(['CA1', 'iCA1']) &
(neuron_info.numspikes > 0) &
~neuron_info.descrip.str.endswith('Ref').fillna(False)]
trial_info = data_processing.get_interpolated_position_dataframe(epoch_index[0], animals)
trial_info
spikes_data = [data_processing.get_spike_indicator_dataframe(neuron_index, animals)
for neuron_index in neuron_info.index]
spikes_data[0]
# Make sure there are spikes in the training data times. Otherwise exclude that neuron
MEAN_RATE_THRESHOLD = 0.10 # in Hz
sampling_frequency = 1500
spikes_data = [spikes_datum for spikes_datum in spikes_data]
Explanation: Date: 2016-10-21
End of explanation
train_trial_info = trial_info.query('speed > 4')
train_spikes_data = [spikes_datum[trial_info.speed > 4]
for spikes_datum in spikes_data]
Explanation: Only train on data when the rat is moving
End of explanation
formula = '1 + trajectory_direction * bs(linear_distance, df=10, degree=3)';
design_matrix = patsy.dmatrix(formula, train_trial_info, return_type='dataframe')
fit = [sm.GLM(spikes, design_matrix, family=sm.families.Poisson()).fit(maxiter=30)
for spikes in tqdm.tqdm_notebook(train_spikes_data, desc='fit')]
Explanation: Encoding Model
Train model
End of explanation
n_place_bins = 61
linear_distance_grid = np.linspace(
np.floor(position_info.linear_distance.min()),
np.ceil(position_info.linear_distance.max()),
    n_place_bins + 1)
linear_distance_grid_centers = linear_distance_grid[:-1] + np.diff(linear_distance_grid) / 2
def predictors_by_trajectory_direction(trajectory_direction, linear_distance_grid_centers, design_matrix):
predictors = {'linear_distance': linear_distance_grid_centers,
'trajectory_direction': [trajectory_direction] * len(linear_distance_grid_centers)}
return patsy.build_design_matrices([design_matrix.design_info], predictors)[0]
inbound_predict_design_matrix = predictors_by_trajectory_direction('Inbound',
linear_distance_grid_centers,
design_matrix)
outbound_predict_design_matrix = predictors_by_trajectory_direction('Outbound',
linear_distance_grid_centers,
design_matrix)
def get_conditional_intensity(fit, predict_design_matrix):
return np.vstack([fitted_model.predict(predict_design_matrix)
for fitted_model in fit]).T
inbound_conditional_intensity = get_conditional_intensity(fit, inbound_predict_design_matrix)
outbound_conditional_intensity = get_conditional_intensity(fit, outbound_predict_design_matrix)
Explanation: Get predicted conditional intensities
End of explanation
num_neurons = len(fit)
col_wrap = 5
num_plot_rows = int(np.ceil(num_neurons / col_wrap))
fig, axes = plt.subplots(nrows=num_plot_rows, ncols=col_wrap, figsize=(12, 9), sharex=True)
sampling_frequency = 1500
for neuron_ind, ax in enumerate(axes.flatten()[:num_neurons]):
ax.plot(linear_distance_grid_centers,
fit[neuron_ind].predict(inbound_predict_design_matrix) * sampling_frequency,
label='Inbound')
ax.plot(linear_distance_grid_centers,
fit[neuron_ind].predict(outbound_predict_design_matrix) * sampling_frequency,
label='Outbound')
ax.legend()
ax.set_xlim((linear_distance_grid.min(), linear_distance_grid.max()))
ax.set_title(neuron_info.neuron_id[neuron_ind])
middle_column = int(np.ceil(col_wrap / 2) - 1)
middle_row = int(np.ceil(num_plot_rows / 2) - 1)
axes[-1, middle_column].set_xlabel('Linear Distance')
axes[middle_row, 0].set_ylabel('Firing Rate (spikes/s)')
plt.tight_layout()
Explanation: Plot model fits for each neuron to check fit quality
Neurons HPa8245 and HPa8248 appear to have poor fits. What should we do about poor fits?
End of explanation
from mpl_toolkits.axes_grid1 import make_axes_locatable
def occupancy_normalized_histogram(stat_at_spike, stat, bins, ax=None, bar_plot_kwargs={}):
if ax is None:
ax = plt.gca()
occupancy, _ = np.histogram(stat, bins=bins)
binned_stat_at_spike, _ = np.histogram(stat_at_spike, bins=bins)
ax.bar(bins[:-1], binned_stat_at_spike / occupancy, **bar_plot_kwargs)
neuron_ind = 3
distance_at_spike = (pd.concat([train_trial_info, train_spikes_data[neuron_ind]], axis=1)
.query('(is_spike > 0) & (trajectory_direction == "Inbound")').linear_distance)
fig, ax = plt.subplots(1)
ax.plot(-linear_distance_grid_centers,
fit[neuron_ind].predict(inbound_predict_design_matrix),
label='Inbound')
ax.plot(linear_distance_grid_centers,
fit[neuron_ind].predict(inbound_predict_design_matrix),
label='Inbound')
occupancy_normalized_histogram(
-distance_at_spike,
-train_trial_info.query('trajectory_direction == "Inbound"').linear_distance,
-linear_distance_grid[::-1], ax=ax)
occupancy_normalized_histogram(
distance_at_spike,
train_trial_info.query('trajectory_direction == "Inbound"').linear_distance,
linear_distance_grid, ax=ax)
num_neurons = len(fit)
col_wrap = 5
num_plot_rows = int(np.ceil(num_neurons / col_wrap))
fig, axes = plt.subplots(nrows=num_plot_rows, ncols=col_wrap, figsize=(12, 9), sharex=True)
extent = (np.fix(train_trial_info.x_position.min()),
np.fix(train_trial_info.x_position.max()),
np.fix(train_trial_info.y_position.min()),
np.fix(train_trial_info.y_position.max()))
for neuron_ind, ax in enumerate(axes.flatten()[:num_neurons]):
df = (pd.concat([train_trial_info, train_spikes_data[neuron_ind]], axis=1)
.query('(is_spike > 0) & (trajectory_direction == "Inbound")'))
ax.plot(train_trial_info.x_position, train_trial_info.y_position, zorder=1)
ax.hexbin(df.x_position, df.y_position, zorder=2, alpha=0.75,
gridsize=20, extent=extent, cmap='Purples')
ax.set_title(neuron_info.neuron_id[neuron_ind])
plt.tight_layout()
Explanation: Compare to raw spikes
Inbound
End of explanation
num_neurons = len(fit)
col_wrap = 5
num_plot_rows = int(np.ceil(num_neurons / col_wrap))
fig, axes = plt.subplots(nrows=num_plot_rows, ncols=col_wrap, figsize=(12, 9), sharex=True)
extent = (np.fix(train_trial_info.x_position.min()),
np.fix(train_trial_info.x_position.max()),
np.fix(train_trial_info.y_position.min()),
np.fix(train_trial_info.y_position.max()))
for neuron_ind, ax in enumerate(axes.flatten()[:num_neurons]):
df = (pd.concat([train_trial_info, train_spikes_data[neuron_ind]], axis=1)
.query('(is_spike > 0) & (trajectory_direction == "Outbound")'))
ax.plot(train_trial_info.x_position, train_trial_info.y_position, zorder=1)
ax.hexbin(df.x_position, df.y_position, zorder=2, alpha=0.75,
gridsize=20, extent=extent, cmap='Purples')
ax.set_title(neuron_info.neuron_id[neuron_ind])
plt.tight_layout()
Explanation: Outbound
End of explanation
state_transition = ripple_decoding.get_state_transition_matrix(
train_position_info, linear_distance_grid)
outbound_state_transitions = state_transition[:n_place_bins, :n_place_bins]
inbound_state_transitions = state_transition[n_place_bins+1:2*(n_place_bins)+1,
n_place_bins+1:2*(n_place_bins)+1]
Explanation: State Transition Matrix
Fit transition matrix based on movement data
Estimate separately based on inbound and outbound movements
End of explanation
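For reference, a generic sketch (my own, not the ripple_decoding implementation) of how such a transition matrix can be estimated by counting movements between position bins and normalizing each column:
def empirical_transition_matrix(bin_index, n_bins):
    counts = np.zeros((n_bins, n_bins))
    for previous_bin, current_bin in zip(bin_index[:-1], bin_index[1:]):
        counts[current_bin, previous_bin] += 1
    column_sums = counts.sum(axis=0, keepdims=True)
    column_sums[column_sums == 0] = 1   # avoid dividing by zero for unvisited bins
    return counts / column_sums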
def plot_state_transition(state_transition, grid, ax=None,
vmin=0, vmax=1, cmap='viridis'):
if ax is None:
ax = plt.gca()
x_grid, y_grid = np.meshgrid(grid, grid)
mesh = ax.pcolormesh(x_grid, y_grid, state_transition,
cmap=cmap, vmin=vmin, vmax=vmax)
grid_extent = (grid.min(), grid.max())
ax.set_xlim(grid_extent)
ax.set_ylim(grid_extent)
ax.set_aspect('equal')
return mesh
fig, ax = plt.subplots(1, 3, figsize=(12,6))
plot_state_transition(inbound_state_transitions, linear_distance_grid, ax=ax[0])
ax[0].set_xlabel('Linear Distance at time t-1')
ax[0].set_ylabel('Linear Distance at time t')
ax[0].set_title('Inbound')
mesh1 = plot_state_transition(outbound_state_transitions, linear_distance_grid, ax=ax[1])
ax[1].set_title('Outbound')
ax[1].set_xlabel('Linear Distance at time t-1')
state_transition_difference = inbound_state_transitions - outbound_state_transitions
mesh2 = plot_state_transition(state_transition_difference, linear_distance_grid, ax=ax[2],
vmin=-0.01, vmax=0.01, cmap='PiYG')
ax[2].set_title('Inbound - Outbound')
ax[2].set_xlabel('Linear Distance at time t-1')
fig.colorbar(mesh1, ax=ax.ravel().tolist()[:2], label='Probability', orientation='horizontal')
cbar = fig.colorbar(mesh2, ax=ax[2], label='Difference', orientation='horizontal', ticks=[-0.01, 0, 0.01])
cbar.ax.set_xticklabels(['Outbound', '0', 'Inbound']);
Explanation: Plot state transition matrix to check quality
End of explanation
error_tolerance = 1E-13
check_error = lambda x: np.all(np.abs(x - 1) < error_tolerance)
print(check_error(np.sum(inbound_state_transitions, axis=0)))
print(check_error(np.sum(outbound_state_transitions, axis=0)))
Explanation: Make sure state transition columns sum to 1
End of explanation
linear_distance_grid_bin_size = linear_distance_grid[1] - linear_distance_grid[0]
outbound_initial_conditions = ripple_decoding.normalize_to_probability(
scipy.stats.norm.pdf(linear_distance_grid_centers, 0, linear_distance_grid_bin_size * 2))
inbound_initial_conditions = ripple_decoding.normalize_to_probability(
(np.max(outbound_initial_conditions) * np.ones(linear_distance_grid_centers.shape)) -
outbound_initial_conditions)
Explanation: Initial conditions
Where the replay trajectory starts.
The outbound initial condition is a Gaussian with probability mass at the center arm, reflecting that the replay trajectory is likely to start at the center arm
The inbound initial condition is a Gaussian with probability mass everywhere but the center arm, reflecting that the replay trajectory is likely to start everywhere except at the center arm
End of explanation
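A quick sanity check (my own addition): if normalize_to_probability does what its name suggests, each initial condition should sum to one over the position bins.
print(inbound_initial_conditions.sum(), outbound_initial_conditions.sum())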
fig, ax = plt.subplots(1, 2, figsize=(7, 3), sharex=True, sharey=True)
ax[0].plot(linear_distance_grid_centers, inbound_initial_conditions)
ax[0].set_ylabel('Probability')
ax[0].set_xlabel('Linear Distance')
ax[0].set_title('Inbound')
ax[0].set_xlim((linear_distance_grid.min(), linear_distance_grid.max()))
ax[1].plot(linear_distance_grid_centers, outbound_initial_conditions)
ax[1].set_xlabel('Linear Distance')
ax[1].set_title('Outbound')
ax[1].set_xlim((linear_distance_grid.min(), linear_distance_grid.max()));
Explanation: Plot initial conditions
End of explanation
import scipy.linalg
discrete_state_names = ['outbound_forward', 'outbound_reverse', 'inbound_forward', 'inbound_reverse']
# Initial Conditions
num_decision_states = len(discrete_state_names)
prior_probability_of_state = 1 / num_decision_states
initial_conditions = np.hstack([outbound_initial_conditions,
inbound_initial_conditions,
inbound_initial_conditions,
outbound_initial_conditions]) * prior_probability_of_state
# State Transition
state_transition = scipy.linalg.block_diag(outbound_state_transitions,
inbound_state_transitions,
inbound_state_transitions,
outbound_state_transitions)
# Encoding Model
conditional_intensity = np.vstack([outbound_conditional_intensity,
outbound_conditional_intensity,
inbound_conditional_intensity,
inbound_conditional_intensity]).T
combined_likelihood_params = dict(likelihood_function=ripple_decoding.poisson_likelihood,
likelihood_kwargs=dict(conditional_intensity=conditional_intensity))
Explanation: Organize into discrete states
Outbound reverse is the same as inbound forward. Double check conditional intensity
End of explanation
ripple_times = ripple_detection.get_epoch_ripples(
epoch_index[0], animals, sampling_frequency=1500,
ripple_detection_function=ripple_detection.Kay_method)
spike_ripples_df = [data_processing.reshape_to_segments(
spikes_datum, ripple_times, concat_axis=1, sampling_frequency=sampling_frequency)
for spikes_datum in spikes_data]
num_ripples = len(ripple_times)
test_spikes = [np.vstack([df.iloc[:, ripple_ind].dropna().values
for df in spike_ripples_df]).T
for ripple_ind in np.arange(len(ripple_times))]
Explanation: Ripple decoding
Get ripples
End of explanation
import functools
decode_ripple = functools.partial(ripple_decoding.predict_state,
initial_conditions=initial_conditions,
state_transition=state_transition,
likelihood_function=ripple_decoding.combined_likelihood,
likelihood_kwargs=combined_likelihood_params)
posterior_density = [decode_ripple(ripple_spikes)
for ripple_spikes in test_spikes]
def compute_decision_state_probability(posterior_density, num_decision_states):
num_time = len(posterior_density)
new_shape = (num_time, num_decision_states, -1)
return np.sum(np.reshape(posterior_density, new_shape), axis=2)
decision_state_probability = [compute_decision_state_probability(density, num_decision_states)
for density in posterior_density]
def compute_max_state(probability):
end_time_probability = probability[-1, :]
return (discrete_state_names[np.argmax(end_time_probability)].split('_'),
np.max(end_time_probability))
def num_unique_neurons_spiking(spikes):
return spikes.sum(axis=0).nonzero()[0].shape[0]
def num_total_spikes(spikes):
return int(spikes.sum(axis=(0,1)))
ripple_info = pd.DataFrame([compute_max_state(probability) for probability in decision_state_probability],
columns=['ripple_type', 'ripple_state_probability'],
index=pd.Index(np.arange(num_ripples) + 1, name='ripple_number'))
ripple_info['ripple_start_time'] = np.asarray(ripple_times)[:, 0]
ripple_info['ripple_end_time'] = np.asarray(ripple_times)[:, 1]
ripple_info['number_of_unique_neurons_spiking'] = [num_unique_neurons_spiking(spikes) for spikes in test_spikes]
ripple_info['number_of_spikes'] = [num_total_spikes(spikes) for spikes in test_spikes]
print(ripple_info.ripple_type.value_counts())
print('\n')
print(ripple_info.number_of_unique_neurons_spiking.value_counts())
print('\n')
print(ripple_info.number_of_spikes.value_counts())
Explanation: Decode Ripples
End of explanation
import ipywidgets
def browse_ripple_fits(decision_state_probability, discrete_state_names, sampling_frequency=1500):
def plot_fits(ripple_ind):
time_length = decision_state_probability[ripple_ind].shape[0]
time = np.arange(time_length) / sampling_frequency
lineObjects = plt.plot(time, decision_state_probability[ripple_ind])
# plt.legend(lineObjects, discrete_state_names)
for state_ind, state_names in enumerate(discrete_state_names):
plt.text(time[-1] + (1 / sampling_frequency),
decision_state_probability[ripple_ind][-1, state_ind],
discrete_state_names[state_ind],
color=lineObjects[state_ind].get_color())
plt.ylim((0, 1))
plt.xlabel('Time (seconds)')
plt.ylabel('Probability')
plt.title('Ripple #{ripple_number}'.format(ripple_number=ripple_ind+1))
ipywidgets.interact(plot_fits, ripple_ind=(0, len(decision_state_probability)-1))
browse_ripple_fits(decision_state_probability, discrete_state_names)
import ipywidgets
from mpl_toolkits.axes_grid1 import make_axes_locatable
def browse_ripple_densities(posterior_density, discrete_state_names,
linear_distance_grid_centers,
sampling_frequency=1500):
def plot_fits(ripple_ind):
fig, axes = plt.subplots(2, 2, figsize=(12,9))
time_length = decision_state_probability[ripple_ind].shape[0]
time = np.arange(time_length) / sampling_frequency
num_time = posterior_density[ripple_ind].shape[0]
num_decision_states = len(discrete_state_names)
new_shape = (num_time, num_decision_states, -1)
cur_density = np.reshape(posterior_density[ripple_ind], new_shape)
[time_grid, linear_distance_grid] = np.meshgrid(time, linear_distance_grid_centers)
for state_ind, ax in enumerate(axes.flatten()):
try:
mesh = ax.pcolormesh(time_grid, linear_distance_grid, cur_density[:, state_ind, :].squeeze().T,
cmap='PuRd', vmin=0, vmax=.2)
ax.set_xlim((time.min(), time.max()))
ax.set_ylim((linear_distance_grid_centers.min(), linear_distance_grid_centers.max()))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.05)
plt.colorbar(mesh, label='Density', cax=cax)
ax.set_title(discrete_state_names[state_ind])
except ValueError:
pass
ipywidgets.interact(plot_fits, ripple_ind=(0, len(decision_state_probability)-1))
browse_ripple_densities(posterior_density, discrete_state_names,
linear_distance_grid_centers)
cur_tetrode_info.to_json(orient='records')
cur_neuron_info.drop(['spikewidth', 'propbursts', 'numspikes', 'csi', 'meanrate'], axis=1).to_json(orient='records')
trial_info.to_json(orient='index')
spikes_data[0].tojson(orient='split')
pd.MultiIndex.from_tuples(list(itertools.combinations(cur_tetrode_info.index, 2)), names=['tetrode1', 'tetrode2'])
Explanation: Display ripple category probabilities
End of explanation |
1,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonlinear Astro Features
This notebook examines whether $w_1 - w_2$ and $w_2 - w_3$ are good features. There are indications that these may be correlated with whether galaxies contain AGNs. It also looks at whether the fluxes are more useful than the magnitudes, i.e., should we exponentiate the magnitudes.
Step1: So maybe they're useful features (but not very). What about the fact they're magnitudes?
Step2: Those are promising results, but we need to rerun this a few times with different training and testing sets to get some error bars.
import h5py, numpy, sklearn.linear_model, sklearn.cross_validation, sklearn.metrics
with h5py.File('../data/training.h5') as f:
raw_astro_features = f['features'][:, :4]
dist_features = f['features'][:, 4]
image_features = f['features'][:, 5:]
w1_w2 = raw_astro_features[:, 0] - raw_astro_features[:, 1]
w2_w3 = raw_astro_features[:, 1] - raw_astro_features[:, 2]
features_linear = f['features'][:]
features_nonlinear = numpy.hstack([
raw_astro_features,
dist_features.reshape((-1, 1)),
w1_w2.reshape((-1, 1)),
w2_w3.reshape((-1, 1)),
image_features,
])
features_exp = numpy.hstack([
numpy.power(10, -0.4 * raw_astro_features),
dist_features.reshape((-1, 1)),
image_features,
])
features_nlexp = numpy.hstack([
numpy.power(10, -0.4 * raw_astro_features),
numpy.power(10, -0.4 * w1_w2.reshape((-1, 1))),
numpy.power(10, -0.4 * w2_w3.reshape((-1, 1))),
dist_features.reshape((-1, 1)),
image_features,
])
labels = f['labels'].value
x_train, x_test, t_train, t_test = sklearn.cross_validation.train_test_split(
numpy.arange(raw_astro_features.shape[0]), labels, test_size=0.2)
lr = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')
lr.fit(features_linear[x_train], t_train)
cm = sklearn.metrics.confusion_matrix(t_test, lr.predict(features_linear[x_test]))
tp = cm[1, 1]
n, p = cm.sum(axis=1)
tn = cm[0, 0]
ba = (tp / p + tn / n) / 2
print('Linear features, balanced accuracy: {:.02%}'.format(ba))
print(cm)
lrnl = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')
lrnl.fit(features_nonlinear[x_train], t_train)
cm = sklearn.metrics.confusion_matrix(t_test, lrnl.predict(features_nonlinear[x_test]))
tp = cm[1, 1]
n, p = cm.sum(axis=1)
tn = cm[0, 0]
ba = (tp / p + tn / n) / 2
print('Nonlinear features, balanced accuracy: {:.02%}'.format(ba))
print(cm)
Explanation: Nonlinear Astro Features
This notebook examines whether $w_1 - w_2$ and $w_2 - w_3$ are good features. There are indications that these may be correlated with whether galaxies contain AGNs. It also looks at whether the fluxes are more useful than the magnitudes, i.e., should we exponentiate the magnitudes.
End of explanation
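For reference, the flux features computed below use the standard magnitude-flux relation (up to a zero-point constant, which a linear classifier absorbs into its weights):
$$F \propto 10^{-0.4\,m}$$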
lrexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')
lrexp.fit(features_exp[x_train], t_train)
cm = sklearn.metrics.confusion_matrix(t_test, lrexp.predict(features_exp[x_test]))
tp = cm[1, 1]
n, p = cm.sum(axis=1)
tn = cm[0, 0]
ba = (tp / p + tn / n) / 2
print('Exponentiated features, balanced accuracy: {:.02%}'.format(ba))
print(cm)
lrnlexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')
lrnlexp.fit(features_nlexp[x_train], t_train)
cm = sklearn.metrics.confusion_matrix(t_test, lrnlexp.predict(features_nlexp[x_test]))
tp = cm[1, 1]
n, p = cm.sum(axis=1)
tn = cm[0, 0]
ba = (tp / p + tn / n) / 2
print('Exponentiated features, balanced accuracy: {:.02%}'.format(ba))
print(cm)
Explanation: So maybe they're useful features (but not very). What about the fact they're magnitudes?
End of explanation
def balanced_accuracy(lr, x_test, t_test):
cm = sklearn.metrics.confusion_matrix(t_test, lr.predict(x_test))
tp = cm[1, 1]
n, p = cm.sum(axis=1)
tn = cm[0, 0]
ba = (tp / p + tn / n) / 2
return ba
def test_feature_set(features, x_train, t_train, x_test, t_test):
lr = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')
lr.fit(features[x_train], t_train)
return balanced_accuracy(lr, features[x_test], t_test)
linear_ba = []
nonlinear_ba = []
exp_ba = []
nonlinear_exp_ba = []
n_trials = 10
for trial in range(n_trials):
print('Trial {}/{}'.format(trial + 1, n_trials))
x_train, x_test, t_train, t_test = sklearn.cross_validation.train_test_split(
numpy.arange(raw_astro_features.shape[0]), labels, test_size=0.2)
linear_ba.append(test_feature_set(features_linear, x_train, t_train, x_test, t_test))
nonlinear_ba.append(test_feature_set(features_nonlinear, x_train, t_train, x_test, t_test))
exp_ba.append(test_feature_set(features_exp, x_train, t_train, x_test, t_test))
nonlinear_exp_ba.append(test_feature_set(features_nlexp, x_train, t_train, x_test, t_test))
print('Linear features: ({:.02f} +- {:.02f})%'.format(
numpy.mean(linear_ba) * 100, numpy.std(linear_ba) * 100))
print('Nonlinear features: ({:.02f} +- {:.02f})%'.format(
numpy.mean(nonlinear_ba) * 100, numpy.std(nonlinear_ba) * 100))
print('Exponentiated features: ({:.02f} +- {:.02f})%'.format(
numpy.mean(exp_ba) * 100, numpy.std(exp_ba) * 100))
print('Exponentiated nonlinear features: ({:.02f} +- {:.02f})%'.format(
numpy.mean(nonlinear_exp_ba) * 100, numpy.std(nonlinear_exp_ba) * 100))
Explanation: Those are promising results, but we need to rerun this a few times with different training and testing sets to get some error bars.
End of explanation |
1,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Competition assay analysis and thoughts
Here we will analyze two competition assays conducted as a rough beginning to understand how to best design competition assays to the fluorescent kinase inhibitors (bosutinib, bosutinib isomer, erlotinib, and gefitinib) used in other assays in this repository.
The first (1st) part of this will be looking at data collected trying to compete off bosutinib from Src kinase with imatinib (conducted on March 11, 2015). The second (2nd) part of this will be looking at data collected trying to compete off gefitinib from Src kinase with imatinib (conducted on October 30, 2015). The third (3rd) part will be some simple modeling to see if these experiments follow our expectations and how we can better design the experiments to get better results from the competition assay. Then in a fourth (4th) section we'll work a little on a PYMC model to get affinities from the competition assay.
Step1: Bosutinib Assay
The first attempt at a Bosutinib-Imatinib competition assay was on March 11, 2015. The full description of the assay can be found here.
In short (and very similar to as described in the lab-protocols repository) 100 uL of 0.5 $\mu$M Src with a titration of Bosutinib up to 20 $\mu$M in every other row was prepared. In one of two plates 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Bosutinib.
importing and plotting data the clunky way for transparency, will change to use platereader.py once it is slightly nicer.
Step2: Gefitinib Assay
The first attempt at a Gefitinib-Imatinib competition assay was on October 30, 2015. The full description of the assay can be found here.
In short (and very similar to as described in the lab-protocols repository) 100 uL of 0.5 $\mu$M Src with a titration of Gefitinib up to 20 $\mu$M in every other row was prepared. In one of two plates 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Gefitinib. Note the documentation here could be better, which could be why this data doesn't look particularly great.
Step3: Modeled Data
So now let's look at what our expected data might look like. Here we are looking at inhibitor affinities for Src.
Some initial placeholder data from here again
Step4: From our assay setup we know the Src concentration is 0.5 $\mu$M.
Step6: First let's just plot our two component binding for Bosutinib and Gefitinib.
Step8: Now let's see how we would expect Imatinib to affect this.
From our assay setup we know the Imatinib concentration is
Step9: HMMM.
So I was right to think that Gefitinib should work despite the fact that Bosutinib didn't, but...
Now let's try modeling new experiments with Abl before we actually do them!
Step10: Looks promising!
Let's check out our new data set based on this. | Python Code:
#import needed libraries
import re
import os
from lxml import etree
import pandas as pd
import pymc
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Competition assay analysis and thoughts
Here we will analyze two competition assays conducted as a rough beginning to understand how to best design competition assays to the fluorescent kinase inhibitors (bosutinib, bosutinib isomer, erlotinib, and gefitinib) used in other assays in this repository.
The first (1st) part of this will be looking at data collected trying to compete off bosutinib from Src kinase with imatinib (conducted on March 11, 2015). The second (2nd) part of this will be looking at data collected trying to compete off gefitinib from Src kinase with imatinib (conducted on October 30, 2015). The third (3rd) part will be some simple modeling to see if these experiments follow our expectations and how we can better design the experiments to get better results from the competition assay. Then in a fourth (4th) section we'll work a little on a PYMC model to get affinities from the competition assay.
End of explanation
def get_wells_from_section(path):
reads = path.xpath("*/Well")
wellIDs = [read.attrib['Pos'] for read in reads]
data = [(float(s.text), r.attrib['Pos'])
for r in reads
for s in r]
datalist = {
well : value
for (value, well) in data
}
welllist = [
[
datalist[chr(64 + row) + str(col)]
if chr(64 + row) + str(col) in datalist else None
for row in range(1,9)
]
for col in range(1,13)
]
return welllist
file_BOS= "data/2015-03-11 18-35-16_plate_1.xml"
file_name = os.path.splitext(file_BOS)[0]
root = etree.parse(file_BOS)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_BOS + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
Bos_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
Bos_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
Bos_dataframe
file_BOS_IMA= "data/Ima_WIP_SMH_SrcBos_Extend_013015_mdfx_20150311_18.xml"
file_name = os.path.splitext(file_BOS_IMA)[0]
root = etree.parse(file_BOS_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_BOS_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
BosIma_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
BosIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
plt.plot(BosIma_dataframe[:].values, 'r');
plt.plot(Bos_dataframe[:].values, 'k');
plt.text(8,450,'Bosutinib',fontsize=15)
plt.text(8,420,'Imatinib + Bosutinib',fontsize=15,color='red')
Explanation: Bosutinib Assay
The first attempt at a Bosutinib-Imatinib competition assay was on March 11, 2015. The full description of the assay can be found here.
In short (and very similar to as described in the lab-protocols repository) 100 uL of 0.5 $\mu$M Src with a titration of Bosutinib up to 20 $\mu$M in every other row was prepared. In one of two plates 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Bosutinib.
importing and plotting data the clunky way for transparency, will change to use platereader.py once it is slightly nicer.
End of explanation
file_GEF = "data/Gef_2015-10-30 17-55-48_plate_1.xml"
file_name = os.path.splitext(file_GEF)[0]
root = etree.parse(file_GEF)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_GEF + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
Gef_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
Gef_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
file_GEF_IMA= "data/GefIma_2015-10-30 17-51-13_plate_1.xml"
file_name = os.path.splitext(file_GEF_IMA)[0]
root = etree.parse(file_GEF_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_GEF_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
GefIma_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
GefIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
plt.plot(GefIma_dataframe[:].values, 'r');
plt.plot(Gef_dataframe[:].values, 'k');
plt.text(8,230,'Gefitinib',fontsize=15)
plt.text(8,210,'Imatinib + Gefitinib',fontsize=15,color='red')
Explanation: Gefitinib Assay
The first attempt at a Gefitinib-Imatinib competition assay was on October 30, 2015. The full description of the assay can be found here.
In short (and very similar to as described in the lab-protocols repository) 100 uL of 0.5 $\mu$M Src with a titration of Gefitinib up to 20 $\mu$M in every other row was prepared. In one of two plates 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Gefitinib. Note the documentation here could be better, which could be why this data doesn't look particularly great.
End of explanation
Kd_Bos = 1.0e-9 # M
Kd_Gef = 3800e-9 # M
Kd_Ima = 3000e-9 # M
Explanation: Modeled Data
So now let's look at what our expected data might look like. Here we are looking at inhibitor affinities for Src.
Some initial placeholder data from here again:
http://www.guidetopharmacology.org/GRAC/LigandScreenDisplayForward?ligandId=5710&screenId=2
End of explanation
Ptot = 0.5e-6 # M
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
Explanation: From our assay setup we know the Src concentration is 0.5 $\mu$M.
End of explanation
# Now we can use this to define a function that gives us PL from Kd, Ptot, and Ltot.
def two_component_binding(Kd, Ptot, Ltot):
    """
    Parameters
    ----------
    Kd : float
        Dissociation constant
    Ptot : float
        Total protein concentration
    Ltot : float
        Total ligand concentration

    Returns
    -------
    P : float
        Free protein concentration
    L : float
        Free ligand concentration
    PL : float
        Complex concentration
    """
PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM)
return [P, L, PL]
[Pb, Lb, PLb] = two_component_binding(Kd_Bos, Ptot, Ltot)
[Pg, Lg, PLg] = two_component_binding(Kd_Gef, Ptot, Ltot)
# y will be complex concentration
# x will be total ligand concentration
Bos, = plt.semilogx(Ltot,PLb,'green', label='Bosutinib')
Gef, = plt.semilogx(Ltot,PLg,'violet', label = 'Gefitinib')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: First let's just plot our two component binding for Bosutinib and Gefitinib.
End of explanation
Lima = 10e-6 # M
#Competitive binding function
def three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A):
    """
    Parameters
    ----------
    Ptot : float
        Total protein concentration
    Ltot : float
        Total tracer(fluorescent) ligand concentration
    Kd_L : float
        Dissociation constant
    Atot : float
        Total competitive ligand concentration
    Kd_A : float
        Dissociation constant

    Returns
    -------
    P : float
        Free protein concentration
    L : float
        Free ligand concentration
    A : float
        Free ligand concentration
    PL : float
        Complex concentration
    Kd_L_app : float
        Apparent dissociation constant of L in the presence of A

    Usage
    -----
    [P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
    """
Kd_L_app = Kd_L*(1+Atot/Kd_A)
PL = 0.5 * ((Ptot + Ltot + Kd_L_app) - np.sqrt((Ptot + Ltot + Kd_L_app)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free tracer ligand concentration in sample cell after n injections (uM)
A = Atot - PL; # free competitive ligand concentration in sample cell after n injections (uM)
return [P, L, A, PL, Kd_L_app]
[Pbi, Lbi, Abi, PLbi, Kd_bima] = three_component_competitive_binding(Ptot, Ltot, Kd_Bos, Lima, Kd_Ima)
[Pgi, Lgi, Agi, PLgi, Kd_gima] = three_component_competitive_binding(Ptot, Ltot, Kd_Gef, Lima, Kd_Ima)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Src competition assay')
Bos, = plt.semilogx(Ltot,PLb,'green', label='Bosutinib')
Bos_Ima, = plt.semilogx(Ltot,PLbi,'cyan', label='Bosutinib + Ima')
Gef, = plt.semilogx(Ltot,PLg,'violet', label = 'Gefitinib')
Gef_Ima, = plt.semilogx(Ltot,PLgi,'pink', label = 'Gefitinib + Ima')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: Now let's see how we would expect Imatinib to affect this.
From our assay setup we know the Imatinib concentration is
End of explanation
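A note on the model implemented above: in this approximation the only effect of the competitive ligand A is to rescale the tracer's dissociation constant,
$$K_{d,L}^{app} = K_{d,L}\left(1 + \frac{[A]_{tot}}{K_{d,A}}\right)$$
so the two-component quadratic can be reused unchanged with $K_{d,L}^{app}$ in place of $K_{d,L}$. As a rough worked number using the placeholder affinities above: 10 $\mu$M imatinib with $K_d \approx 3$ $\mu$M against Src only weakens bosutinib by a factor of about 4, and with 0.5 $\mu$M Src the titration is still essentially stoichiometric, which is why the bosutinib curves with and without imatinib barely separate in the plot.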
#Using expected Kd's from same website as above
Kd_Bos_Abl = 0.1e-9 # M
Kd_Gef_Abl = 480e-9 # M
Kd_Ima_Abl = 21.0e-9 # M
[Pb_Abl, Lb_Abl, PLb_Abl] = two_component_binding(Kd_Bos_Abl, Ptot, Ltot)
[Pg_Abl, Lg_Abl, PLg_Abl] = two_component_binding(Kd_Gef_Abl, Ptot, Ltot)
[Pbi_Abl, Lbi_Abl, Abi_Abl, PLbi_Abl, Kd_bima_Abl] = three_component_competitive_binding(Ptot, Ltot, Kd_Bos_Abl, Lima, Kd_Ima_Abl)
[Pgi_Abl, Lgi_Abl, Agi_Abl, PLgi_Abl, Kd_gima_Abl] = three_component_competitive_binding(Ptot, Ltot, Kd_Gef_Abl, Lima, Kd_Ima_Abl)
# y will be complex concentration
# x will be total ligand concentration
Bos, = plt.semilogx(Ltot,PLb_Abl,'green', label='Bosutinib')
Bos_Ima, = plt.semilogx(Ltot,PLbi_Abl,'cyan', label='Bosutinib + Ima')
Gef, = plt.semilogx(Ltot,PLg_Abl,'violet', label = 'Gefitinib')
Gef_Ima, = plt.semilogx(Ltot,PLgi_Abl,'pink', label = 'Gefitinib + Ima')
plt.title('Abl competition assay')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: HMMM.
So I was right to think that Gefitinib should work despite the fact that Bosutinib didn't, but...
Now let's try modeling new experiments with Abl before we actually do them!
End of explanation
def get_wells_from_section(path):
reads = path.xpath("*/Well")
wellIDs = [read.attrib['Pos'] for read in reads]
data = [(s.text, r.attrib['Pos'])
for r in reads
for s in r]
datalist = {
well : value
for (value, well) in data
}
welllist = [
[
datalist[chr(64 + row) + str(col)]
if chr(64 + row) + str(col) in datalist else None
for row in range(1,9)
]
for col in range(1,13)
]
return welllist
file_ABL_GEF= "data/Abl Gef gain 120 bw1020 2016-01-19 15-59-53_plate_1.xml"
file_name = os.path.splitext(file_ABL_GEF)[0]
root = etree.parse(file_ABL_GEF)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_ABL_GEF + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
AblGef_dataframe = pd.DataFrame(welllist, columns = ['A - Abl','B - Buffer','C - Abl','D - Buffer', 'E - Abl','F - Buffer','G - Abl','H - Buffer'])
# 'OVER' readings raise an error when converting to float unless they are replaced first;
# the replacement value is taken from the max value of the Abl Gef Ima data
dataframe_rep = AblGef_dataframe.replace({'OVER':'64060.0'})
AblGef_dataframe
#dataframe_rep[['fluorescence']] = dataframe_rep[['fluorescence']].astype('float')
dataframe_rep = dataframe_rep.astype('float')
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
dataframe_rep.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5);
file_ABL_GEF_IMA= "data/Abl Gef Ima gain 120 bw1020 2016-01-19 16-22-45_plate_1.xml"
file_name = os.path.splitext(file_ABL_GEF_IMA)[0]
root = etree.parse(file_ABL_GEF_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_ABL_GEF_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
AblGefIma_dataframe = pd.DataFrame(welllist, columns = ['A - Abl','B - Buffer','C - Abl','D - Buffer', 'E - Abl','F - Buffer','G - Abl','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
AblGefIma_dataframe = AblGefIma_dataframe.astype('float')
AblGefIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
AblGefIma_dataframe.values.max()
plt.plot(AblGefIma_dataframe[:].values, 'r');
plt.plot(AblGef_dataframe[:].values, 'k');
plt.text(8,60000,'Gefitinib (ABL)',fontsize=15)
plt.text(8,55000,'Imatinib + Gefitinib (ABL)',fontsize=15,color='red')
plt.savefig('Abl_Gef_Ima_Jan2016_repeat.png')
Explanation: Looks promising!
Let's check out our new data set based on this.
End of explanation |
1,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training data
Since we need something to train our neural network on, we will use MNIST, a popular machine-learning dataset. It contains handwritten digits from 0 to 9 stored as small 28x28-pixel images.
Let's download and load the dataset.
Step1: Let's import additional libraries for displaying plots/images, plus numpy, a package for matrix computations
Step2: Let's check how many examples are in the dataset
Step3: Now let's display a few example images from the dataset
Step4: Building the neural network
To keep things simple we will build a three-layer network. First let's set the number of neurons in each layer. Since each image is 28x28 pixels, we need 784 input neurons. The hidden layer can have any number of neurons. Since there are 10 different digits to choose from, we put the same number of neurons in the output layer.
Step5: The key element of a neural network is the weights on the connections between neurons. For now we will simply load weights that have already been trained for this network.
Step6: The computations performed by a neural network can be drawn as a computational graph, where each node represents some operation on its inputs. The network we use is shown in the graph below (@ denotes matrix multiplication)
Step7: Training the network (back-propagation)
We need to prepare the data for training. This mainly means encoding mnist.target with one-hot encoding. That is
Step8: At the start the parameters are simply drawn at random. We will use np.random.rand(dim_1, dim_2, ..., dim_n), which draws numbers from the interval $[0, 1)$ and returns a tensor with the given dimensions.
Note
Step9: Implementing back-propagation
Just as when optimizing a function, we will use backprop to compute the gradients. The computational graph is a bit more complicated. (@ denotes matrix multiplication)
To implement the back_prop(...) function we will also need the derivatives of our functions and a loss function.
Step10: We will also write a function that performs a single optimization step for the given parameters and their gradients with a given step size.
Step11: To better assess the network's learning progress, we will write a function that computes what percentage of the answers given by the network are correct.
Step12: Finally, we can move on to writing the main training loop. | Python Code:
# use scikit-learn's ready-made helper to download this dataset
# (note: fetch_mldata was removed from newer scikit-learn releases; fetch_openml('mnist_784') is the modern equivalent)
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
Explanation: Training data
Since we need something to train our neural network on, we will use MNIST, a popular machine-learning dataset. It contains handwritten digits from 0 to 9 stored as small 28x28-pixel images.
Let's download and load the dataset.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns # this package can be installed separately (it draws nicer plots)
import numpy as np
# enables plotting inside the notebook (instead of opening a separate window)
%matplotlib inline
Explanation: Let's import additional libraries for displaying plots/images, plus numpy, a package for matrix computations
End of explanation
print(mnist.data.shape) # 28x28
print(mnist.target.shape)
Explanation: Let's check how many examples are in the dataset
End of explanation
for i in range(10):
r = np.random.randint(0, len(mnist.data))
plt.subplot(2, 5, i + 1)
plt.axis('off')
plt.title(mnist.target[r])
plt.imshow(mnist.data[r].reshape((28, 28)))
plt.show()
Explanation: Now let's display a few example images from the dataset
End of explanation
input_layer = 784
hidden_layer = ...
output_layer = 10
Explanation: Building the neural network
To keep things simple we will build a three-layer network. First let's set the number of neurons in each layer. Since each image is 28x28 pixels, we need 784 input neurons. The hidden layer can have any number of neurons. Since there are 10 different digits to choose from, we put the same number of neurons in the output layer.
End of explanation
# load the weights (parameters) that have already been trained
import h5py
with h5py.File('weights.h5', 'r') as file:
W1 = file['W1'][:]
W2 = file['W2'][:]
def sigmoid(x):
pass
Explanation: The key element of a neural network is the weights on the connections between neurons. For now we will simply load weights that have already been trained for this network.
End of explanation
def forward_pass(x, w1, w2):
# x - wejście sieci
# w1 - parametry warstwy ukrytej
# w2 - parametry warstwy wyjściowej
pass
# uruchomienie sieci i sprawdzenie jej działania
# użyj funkcji forward_pass dla kilku przykładów i zobacz czy sieć odpowiada poprawnie
Explanation: The computations performed by a neural network can be drawn as a computational graph, where each node represents some operation on its inputs. The network we use is shown in the graph below (@ denotes matrix multiplication):
End of explanation
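For orientation (this is only one possible answer to the exercise above), a minimal sketch of the forward pass, assuming sigmoid is the standard logistic function and that the loaded W1/W2 have shapes (784, hidden) and (hidden, 10):
def sigmoid_sketch(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass_sketch(x, w1, w2):
    hidden = sigmoid_sketch(x @ w1)       # input -> hidden layer (@ is matrix multiplication)
    output = sigmoid_sketch(hidden @ w2)  # hidden -> output layer, one value per digit class
    return output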
x_train = ...
y_train = ...
Explanation: Training the network (back-propagation)
We need to prepare the data for training. This mainly means encoding mnist.target with one-hot encoding, that is: $$y = \left[ \begin{matrix} 0 \\ 1 \\ 2 \\ \vdots \\ 8 \\ 9 \end{matrix} \right] \Longrightarrow \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{matrix} \right]$$
Note: currently all the data are sorted by label, i.e. all the zeros come first, then all the ones, and so on. Such an ordering can make training significantly harder, so the data should be "shuffled" at the start. Remember to keep the inputs matched with the corresponding outputs.
End of explanation
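One possible sketch of that preparation step (it assumes mnist.target holds numeric labels 0-9; the pixel scaling is optional):
shuffle_idx = np.random.permutation(len(mnist.data))   # shuffle inputs and labels with the same permutation
x_train_sketch = mnist.data[shuffle_idx] / 255.0       # scale pixels to [0, 1]
labels = mnist.target[shuffle_idx].astype(int)
y_train_sketch = np.zeros((len(labels), 10))
y_train_sketch[np.arange(len(labels)), labels] = 1     # one-hot: a single 1 per row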
W1 = ...
W2 = ...
Explanation: At the start the parameters are simply drawn at random. We will use np.random.rand(dim_1, dim_2, ..., dim_n), which draws numbers from the interval $[0, 1)$ and returns a tensor with the given dimensions.
Note: even though the function returns numbers from $[0, 1)$, our initial parameters should lie in $(-0.01, 0.01)$.
End of explanation
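A minimal sketch of such an initialization (np.random.rand gives [0, 1), so shift and scale it into (-0.01, 0.01); it assumes hidden_layer has been set to a concrete number above):
W1_sketch = (np.random.rand(input_layer, hidden_layer) - 0.5) * 0.02
W2_sketch = (np.random.rand(hidden_layer, output_layer) - 0.5) * 0.02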
def loss_func(y_true, y_pred):
    # y_true - the correct answer
    # y_pred - the answer computed by the neural network
    pass

def sigmoid_derivative(x):
    # implementation goes here
    pass

def loss_derivative(y_true, y_pred):
    # y_true - the correct answer
    # y_pred - the answer computed by the neural network
    pass

def back_prop(x, y, w1, w2):
    # x - network input
    # y - correct answers
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # replace the lines below with the code from the forward_pass function
# >>>
...
# <<<
...
return loss, dw1, dw2
Explanation: Implementing back-propagation
Just as when optimizing a function, we will use backprop to compute the gradients. The computational graph is a bit more complicated. (@ denotes matrix multiplication)
To implement the back_prop(...) function we will also need the derivatives of our functions and a loss function.
End of explanation
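For orientation, hedged sketches of the two derivatives, assuming a mean-squared-error loss and the logistic sigmoid (other choices are possible):
def sigmoid_derivative_sketch(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)                               # d(sigmoid)/dx written in terms of sigmoid itself

def loss_derivative_sketch(y_true, y_pred):
    return 2.0 * (y_pred - y_true) / y_true.shape[0]   # gradient of mean squared error w.r.t. y_pred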
def apply_gradients(w1, w2, dw1, dw2, learning_rate):
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # dw1 - gradients for the hidden-layer parameters
    # dw2 - gradients for the output-layer parameters
    # learning_rate - optimization step size
...
return w1, w2
Explanation: We will also write a function that performs a single optimization step for the given parameters and their gradients with a given step size.
End of explanation
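A minimal sketch of that step (plain gradient descent, nothing fancier assumed):
def apply_gradients_sketch(w1, w2, dw1, dw2, learning_rate):
    # move every parameter a small step against its gradient
    return w1 - learning_rate * dw1, w2 - learning_rate * dw2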
def accuracy(x, y, w1, w2):
    # x - network input
    # y - correct answers
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # hint: use the forward_pass function and np.argmax
pass
Explanation: To better assess the network's learning progress, we will write a function that computes what percentage of the answers given by the network are correct.
End of explanation
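One way to sketch it, assuming forward_pass returns an (n_examples, 10) array of scores and y is one-hot encoded:
def accuracy_sketch(x, y, w1, w2):
    predictions = np.argmax(forward_pass(x, w1, w2), axis=1)   # most confident digit per example
    targets = np.argmax(y, axis=1)                             # position of the 1 in each one-hot row
    return np.mean(predictions == targets)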
nb_epoch = 5          # how many times we will iterate over the training data
learning_rate = 0.001
batch_size = 16       # how many examples we train on at once
losses = []
for epoch in range(nb_epoch):
    print('\nEpoch %d' % (epoch,))
    for i in range(0, len(x_train), batch_size):
        x_batch = ...
        y_batch = ...
        # run back_prop for a single batch
        ...
        # update the parameters
...
losses.append(loss)
print('\r[%5d/%5d] loss: %8.6f - accuracy: %10.6f' % (i + 1, len(x_train),
loss, accuracy(x_batch, y_batch, W1, W2)), end='')
plt.plot(losses)
plt.show()
print('Accuracy on the full dataset:', accuracy(x_train, y_train, W1, W2))
Explanation: Finally, we can move on to writing the main training loop.
End of explanation |
1,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structure Prediction
In this notebook, we aim to do a test of the structure substitution algorithm implemented in SMACT
Before we can do predictions, we need to create our cation mutator, database and a table, and a list of hypothetical compositions
Procedure
Create a list of hypothetical compositions
Create a database of structures
Create Cation Mutator
Predict structures
Part 1
Step1: Part 2
Step2: Part 3
Step3: Part 4
Step4: Part 5 | Python Code:
comps=pd.read_csv("Li-Garnet_Comps_sus.csv")
comps.head()
Explanation: Structure Prediction
In this notebook, we aim to do a test of the structure substitution algorithm implemented in SMACT
Before we can do predictions, we need to create our cation mutator, database and a table, and a list of hypothetical compositions
Procedure
Create a list of hypothetical compositions
Create a database of structures
Create Cation Mutator
Predict structures
Part 1: Compositions
These compositions were generated in a different notebook and the results have been loaded in
End of explanation
DB=StructureDB("Test.db")
SP=StructurePredictor(CM, DB, "Garnets")
Explanation: Part 2: Database
Let's follow this procedure
1. Create a SMACT database and add a table which contains the result of our query
End of explanation
#Create the CationMutator class
#Here we use the default lambda table
CM=CationMutator.from_json()
Explanation: Part 3: Cation Mutator
In order to set up the cation mutator, we must do the following:
1. Generate a dataframe of lambda values for all species we which to consider
2. Instantiate the CationMutator class with the lambda dataframe
End of explanation
#Corrections
cond_df=CM.complete_cond_probs()
species=list(cond_df.columns)
comps_copy=comps[['A','B','C','D']]
df_copy_bool=comps_copy.isin(species)
x=comps_copy[df_copy_bool].fillna(0)
x=x[x.A != 0]
x=x[x.B != 0]
x=x[x.C != 0]
x=x[x.D != 0]
x=x.reset_index(drop=True)
#x.to_csv("./Garnet_Comps_Corrected_Pym.csv", index=False)
x.head()
inner_merged=pd.merge(x, comps)
inner_merged.to_csv("./Li-Garnet_Comps_Corrected_Pym.csv", index=False)
inner_merged.head()
print(inner_merged.head())
print("")
print(f"We have reduced our search space from {comps.shape[0]} to {inner_merged.shape[0]}")
#x=x[:100]
#Create a list of test species
test_specs_list=[[parse_spec(inner_merged["A"][i]),parse_spec(inner_merged["B"][i]),parse_spec(inner_merged["C"][i]),parse_spec(inner_merged["D"][i]) ] for i in range(inner_merged.shape[0])]
#Set up a for loop to store
from datetime import datetime
start = datetime.now()
from operator import itemgetter
preds=[]
parents_list=[]
probs_list=[]
for test_specs in test_specs_list:
predictions=list(SP.predict_structs(test_specs, thresh=10e-4, include_same=False ))
predictions.sort(key=itemgetter(1), reverse=True)
parents = [x[2].composition() for x in predictions]
probs = [x[1] for x in predictions]
preds.append(predictions)
parents_list.append(parents)
probs_list.append(probs)
print(f"Time taken to predict the crystal structures of our search space of {inner_merged.shape[0]} with a threshold of 0.0001 is {datetime.now()-start} ")
#print(parents_list)
print("")
#print(probs_list)
Explanation: Part 4: Structure Prediction
Prerequisites: Part 1, Part 2 & Part 3
Procedure:
1. Instantiate the StructurePredictor class with the cation mutator (part 1), database (part 2) and table (part 2)
2. Predict Structures
3. Compare predicted structures with Database
End of explanation
#Add predictions to dataframe
import pymatgen as mg
pred_structs=[]
probs=[]
parent_structs=[]
parent_pretty_formula=[]
for i in preds:
if len(i)==0:
pred_structs.append(None)
probs.append(None)
parent_structs.append(None)
parent_pretty_formula.append(None)
else:
pred_structs.append(i[0][0].as_poscar())
probs.append(i[0][1])
parent_structs.append(i[0][2].as_poscar())
parent_pretty_formula.append(mg.Structure.from_str(i[0][2].as_poscar(), fmt="poscar").composition.reduced_formula)
#Add prediction results to dataframe
inner_merged["predicted_structure"]=pred_structs
inner_merged["probability"]=probs
inner_merged["Parent formula"]=parent_pretty_formula
inner_merged["parent_structure"]=parent_structs
inner_merged[35:40]
#output the intermediary results into a dataframe
outdir="./Li-SP_results"
if not os.path.exists(outdir):
os.mkdir(outdir)
fullpath=os.path.join(outdir,"pred_results.csv")
inner_merged.to_csv(fullpath)
#Filter dataframe to remove blank entries from dataframe
results=inner_merged.dropna()
results=results.reset_index(drop=True)
results.head()
#Check if composition exists in our local database
in_db=[]
for i in results["predicted_structure"]:
comp=SmactStructure.from_poscar(i).composition()
if len(DB.get_structs(comp, "Garnets"))!=0:
in_db.append("Yes")
else:
in_db.append("No")
results["In DB?"]=in_db
print(results["In DB?"].value_counts())
#Check the ratio of In DB?:Not in DB
in_db_count=results["In DB?"].value_counts()
from matplotlib import pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.axis('equal')
ax.pie(in_db_count,labels=["No","Yes"], autopct='%1.2f%%')
plt.savefig(f"{outdir}/Li-Garnets_Li-In_DB_SP.png")
plt.show()
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(8,6))
ax1=sns.histplot(data=results, x="probability", hue="In DB?",multiple="stack")
#ax2=sns.histplot(new_series, label="New Garnets")
#plt.savefig("Prediction_Probability_Distribution_pym.png")
g=sns.FacetGrid(results, col="In DB?", height=6, aspect=1)
g.map(sns.histplot,"probability")
g.savefig("./Li-SP_results/Prob_dist.png")
results.head()
#new_structures=
#pym_Li_DB=StructureDB("pym_Li.DB")
#pym_Li_DB.add_table("New")
#pym_Li_DB.add_table("not_new")
#pym_Li_DB.add_structs(new_structures,"New")
#pym_Li_DB.add_structs(exist_structures,"not_new")
#pym_Li_DB.add_table("not_new")
#pym_Li_DB.add_structs(new_structures,"New")
#pym_Li_DB.add_structs(exist_structures,"not_new")
#Periodic table BS
#Get element names
from pymatgen.util.plotting import periodic_table_heatmap
A_els=pd.Series([parse_spec(i)[0] for i in results["A"]])
B_els=pd.Series([parse_spec(i)[0] for i in results["B"]])
C_els=pd.Series([parse_spec(i)[0] for i in results["C"]])
#get dict of counts
A_els_counts=A_els.value_counts().to_dict()
B_els_counts=B_els.value_counts().to_dict()
C_els_counts=C_els.value_counts().to_dict()
ax1=periodic_table_heatmap(elemental_data=A_els_counts, cbar_label="Counts")
ax1.savefig(f"{outdir}/periodic_table_A.png")
ax2=periodic_table_heatmap(elemental_data=B_els_counts, cbar_label="Counts")
ax2.savefig(f"{outdir}/periodic_table_B.png")
ax3=periodic_table_heatmap(elemental_data=C_els_counts, cbar_label="Counts")
ax3.savefig(f"{outdir}/periodic_table_C.png")
Explanation: Part 5: Storing the results
End of explanation |
1,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Association Mining
The goal in this problem set is to design and code an algorithm for generating association rules based on the apriori algorithm. The dataset that you will use to mine for association rules is a synthetic dataset containing over 1500 transactions. The format of the data is as follows
Step1: 2. For a bit of EDA
Step2: 3. Create a function to compute support for a given itemset from a transaction
Step3: 4. Write an apriori function that handles the pruning and generation of frequent itemsets
should return a dict of frequent itemsets as keys, and their support as values
must have a method for setting a minimum support value to determine frequent itemsets
hint
Step4: 5. Create a function that uses the frequent itemsets to generate association rules
must have a min confidence level as a parameter
do not keep rules that fall under this threshold
return dict of rules with string for keys
string key will take a form like
Step5: 6. Create a scatter plot to analyze the association rules
one axis for antecedent and one for consequent
set marker size based on support (use a multiplier to scale these up)
set marker color based on lift
hint | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
with open('itemsets.dat') as f:
transactions = []
for row in f:
transactions.append(row.strip().split(','))
transactions[0:5]
Explanation: Association Mining
The goal in this problem set is to design and code an algorithm for generating association rules based on the apriori algorithm. The dataset that you will use to mine for association rules is a synthetic dataset containing over 1500 transactions. The format of the data is as follows:
Each row in the file represents a single transaction
Each transaction contains one or more letters representing items
The number of each item is immaterial for mining, so each unique letter will appear at most once in a transaction
Note: this means you cannot just read the data into a dataframe, because the data is not columnar
Example:
A
A, C, E
B, D
The above represents 3 transactions, where the third transaction contained at least one of each for items B and D.
Problems
Write a script to read the data file 'itemsets.dat' into a list of lists
one inner list of items for each transaction
the .dat file is essentially just a text file
For a bit of EDA:
create a script that plots the proportions of single items sold
plot the counts for the different itemset sizes
Create a function to compute support for a given itemset from a transaction
Write an apriori function that handles the pruning and generation of frequent itemsets
should return a dict of frequent itemsets as keys, and their support as values
must have a method for setting a minimum support value to determine frequent itemsets
hint: use sets and frozensets <- dict keys cant be mutable objects
Create a function that uses the frequent itemsets to generate association rules
must have a min confidence level as a parameter
do not keep rules that fall under this threshold
return dict of rules with string for keys
string key will take a form like: "{'A', 'B'} -> {'C'}"
values must be a dict containing metric:value pairs where metrics include support, confidence, and lift
return dataframe of rules with the following columns
antecedent, consequent, support, confidence, lift
Note: the support is for the entire set: antecedent union consequent
Create a scatter plot to analyze the association rules
one axis for antecedent and one for consequent
set marker size based on support (use a multiplier to scale these up)
set marker color based on lift
hint: convert axes columns to numeric codes using .astype('category').cat.codes
How should you interpret the lift values that are less than 1?
1. Write a script to read the data file 'itemsets.dat' into a list of lists
one inner list of items for each transaction
the .dat file is essentially just a text file
End of explanation
items_counts = {}
for tran in transactions:
for item in tran:
if item in items_counts:
items_counts[item] += 1
else:
items_counts[item] = 1
plt.bar(items_counts.keys(), items_counts.values());
itemsets_sizes = {}
for tran in transactions:
tran = frozenset(tran)
if tran in itemsets_sizes:
itemsets_sizes[tran] += 1
else:
itemsets_sizes[tran] = 1
fig, ax = plt.subplots(figsize=(16, 8))
ax.bar(itemsets_sizes.keys(), itemsets_sizes.values())
ax.set_xticklabels([list(itemset) for itemset in itemsets_sizes.keys()], rotation='vertical');
Explanation: 2. For a bit of EDA:
create a script that plots the proportions of single items sold
plot the counts for the different itemset sizes
End of explanation
def support_count(trans, itemset):
count = 0
for tran in trans:
if set(itemset).issubset(set(tran)):
count += 1
return count
print(support_count(transactions, ['A']))
print(support_count(transactions, ['A', 'C']))
print(support_count(transactions, ['F']))
def support(trans, itemset):
return support_count(trans, itemset) / len(trans)
print(support(transactions, ['A']))
print(support(transactions, ['A', 'C']))
print(support(transactions, ['F']))
Explanation: 3. Create a function to compute support for a given itemset from a transaction
End of explanation
def generate_supsets(items, sets):
sups = {}
for s in sets:
for item in items:
sup = frozenset(s.union(item))
if sup != s and sup not in sups:
sups[sup] = 1
return list(sups.keys())
def apriori(trans, minsupp):
frequent_itemsets = {}
# get items list
items = list(set([item for tran in trans for item in tran]))
# initialize list of itemsets to check
curr_sets = items
# iterate till max itemset length
for i in range(len(items)):
# print(i, curr_sets)
# initialize candidates itemsets for generation of supsets
next_sets = []
# initialize current iteration unfrequent itemsets list
unfrequent = []
for s in curr_sets:
supp = support(trans, s)
# print(s, supp)
# if we are over minsupp add itemset to frequent list and to supsets generation candidates
if supp >= minsupp:
frequent_itemsets[frozenset(s)] = supp
next_sets.append(frozenset(s))
#else add to unfrequent list
else:
unfrequent.append(frozenset(s))
# if this is the first iteration update items list in order to optimize supsets generation
if i == 0:
items = [item for item in items if item not in unfrequent]
# generate supsets and exclude those that contain an unfrequent itemset
curr_sets = generate_supsets(items, next_sets)
for unfr in unfrequent:
curr_sets = [s for s in curr_sets if not unfr.issubset(s)]
# print(next_sets)
# print(unfrequent)
# print(curr_sets)
if len(curr_sets) == 0:
break
return frequent_itemsets
apriori(transactions, 0.1)
Explanation: 4. Write an apriori function that handles the pruning and generation of frequent itemsets
should return a dict of frequent itemsets as keys, and their support as values
must have a method for setting a minimum support value to determine frequent itemsets
hint: use sets and frozensets <- dict keys cant be mutable objects
End of explanation
def calc_metrics(trans, X, Y):
supp = support(trans, X.union(Y))
supp_X = support(trans, X)
supp_Y = support(trans, Y)
conf = supp / supp_X
lift = supp / (supp_X*supp_Y)
return conf, lift, supp
def generate_subsets(items, sets):
subs = {}
for s in sets:
for item in items:
sup = frozenset(s.difference(item))
if sup != s and sup not in subs:
subs[sup] = 1
return list(subs.keys())
def association_rules(trans, frequent, minconf):
rules = {}
# get items list
items = list(frequent)
# initialize list of antecedents to check
curr_antecedents = generate_subsets(items, [frequent])
# iterate till itemset length - 1
for i in range(len(frequent)-1):
# print(i, curr_rules)
# initialize candidates itemsets for generation of subsets
next_antecedents = []
# initialize current iteration unfrequent itemsets list
unconfident = []
for ant in curr_antecedents:
cons = set(items).difference(ant)
conf, lift, supp = calc_metrics(trans, ant, cons)
# print(ant, conf)
# if we are over minconf add rule to rules list and to subsets generation candidates
if conf >= minconf:
rule_ant = ', '.join('{}'.format(a) for a in (list(ant)))
rule_cons = ', '.join('{}'.format(c) for c in (list(cons)))
rule = '{{{}}}->{{{}}}'.format(rule_ant, rule_cons)
metrics = {}
metrics['confidence'] = conf
metrics['lift'] = lift
metrics['support'] = supp
rules[rule] = metrics
next_antecedents.append(frozenset(ant))
#else add to unconfident list
else:
unconfident.append(frozenset(ant))
# generate subsets and exclude those that are contained in an unconfident rule
curr_antecedents = generate_subsets(items, next_antecedents)
for uncf in unconfident:
curr_antecedents = [ant for ant in curr_antecedents if not uncf.issuperset(ant)]
# print(next_antecedents)
# print(unconfident)
# print(curr_antecedents)
if len(curr_antecedents) == 0:
break
return rules
association_rules(transactions, frozenset({'A', 'B', 'C'}), 0.2)
def get_all_rules(trans, frequent_itemsets, minconf):
rules_dict = {}
for frequent in frequent_itemsets:
rules = association_rules(trans, frequent, minconf)
rules_dict.update(rules)
rules_df = pd.DataFrame(rules_dict).T.reset_index()
rules_df['antecedent'] = rules_df['index'].apply(lambda x: x.split('->')[0])
rules_df['consequent'] = rules_df['index'].apply(lambda x: x.split('->')[1])
rules_df.drop('index', axis=1, inplace=True)
return rules_dict, rules_df
frequent_itemsets = apriori(transactions, 0.1)
rules_dict, rules_df = get_all_rules(transactions, frequent_itemsets, 0.3)
rules_dict
rules_df
Explanation: 5. Create a function that uses the frequent itemsets to generate association rules
must have a min confidence level as a parameter
do not keep rules that fall under this threshold
return dict of rules with string for keys
string key will take a form like: "{'A', 'B'} -> {'C'}"
values must be a dict containing metric:value pairs where metrics include support, confidence, and lift
return dataframe of rules with the following columns
antecedent, consequent, support, confidence, lift
Note: the support is for the entire set: antecedent union consequent
End of explanation
x = rules_df['antecedent'].astype('category')
y = rules_df['consequent'].astype('category')
fig, ax = plt.subplots(figsize=(12, 10))
sct = ax.scatter(x.cat.codes, y.cat.codes, s=rules_df['support']*10000, c=rules_df['lift'])
ax.xaxis.set_major_locator(plt.MaxNLocator(len(x.unique())))
ax.xaxis.set_ticklabels(np.append([''], x.cat.categories))
ax.yaxis.set_major_locator(plt.MaxNLocator(len(y.unique())))
ax.yaxis.set_ticklabels(np.append([''], y.cat.categories))
fig.colorbar(sct);
Explanation: 6. Create a scatter plot to analyze the association rules
one axis for antecedent and one for consequent
set marker size based on support (use a multiplier to scale these up)
set marker color based on lift
hint: convert axes columns to numeric codes using .astype('category').cat.codes
End of explanation |
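On the closing question about lift values below 1: lift is support(X∪Y) / (support(X)·support(Y)), so lift < 1 means the antecedent and consequent appear together less often than independence would predict — a (weak) negative association — while lift ≈ 1 suggests the items are roughly independent. A quick, hedged way to pull those rules out of the dataframe built above:
negative_rules = rules_df[rules_df['lift'] < 1].sort_values('lift')
negative_rules[['antecedent', 'consequent', 'support', 'confidence', 'lift']]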
1,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Python for Everyone!<br/>Oregon Curriculum Network
Descriptors and Properties in Python
<img src="https
Step2: y's value is an ordinary int, equivalently the value of MyClass.__dict__['y'], whereas the x attribute, a descriptor, will police getting and setting through __get__ and __set__ methods, using the name 'x' as a proxy to x.val behind the scenes (think of x.val as "more secret" as in less directly accessible).
Notice our distinct instances of MyClass nevertheless share both x and y as class level names. Changing the value for one changes it for all. Building on this behavior, a Pythonic way to define setters and getters that store data at the instance level becomes possible.
Properties
The code below is likewise from the Python 3.5 version of the docs at Python.org, and shows how the built-in property() type may be modeled as a pure Python class.
Step3: The getter, setter and deleter methods allow swapping out new versions of fset, fget and fdel by keeping whatever is unchanged and making a new instance with a call to type(self) -- remember that types are callables.
The C class now uses the Property class to fully develop a pet class level attribute named 'x', which in turn fully implements the descriptor protocol, as an instance of the Property descriptor.
Step4: Think of x as an object prepared to delegate to the three methods. Every time a new C instance is created, that instance is bound to a deeply internal "secret" __x. The public or class level x is a proxy to a set of instance methods living inside C.__dict__. Each instance of C talks to its respective self.__x i.e. don't confuse the shared nature of x with the private individuality of each self.__x
The code below goes a step further in using two instances of C to demonstrate how properties work. In this case, the same Property class is used as a decorator. Notice how .setter() becomes available once the getter is defined, because the decorator has "abucted" the original method and turned it into an instance of something, of which .setter is a now an attribute.
Step5: The reason for all the __getitem__ syntax i.e. talking to self.__dict__ in "longhand", is to avoid a recursive situation where a setter or getter calls itself. __getitem__ syntax lets us set and get in a way that bypasses __getattribute__ and its internal mechanisms, which are responsible for triggering the descriptor protocol in the first place.
Once we have our instance of C in the guts of a setter or getter, we talk directly to its proxy, an instance of Property, which responds accordingly to setting and getting.
Step6: Fun though that was, there's more indirection going on than necessary.
The methods of Generic are themselves suitable as setters and getters, without needing to delegate to some instance of C with its fancy 'x' property.
What you see below is the more usual Python program, except instead of using the pure Python class above, the equivalent built-in property type (lowercase) is used as a decorator instead.
Reading the pure Python version shows how it works.
Step7: This time, we've cut out the middle man, C.
The Property class is where the descriptor protocol gets implemented.
We turn Generic.y and Generic.z into properties by decorating methods of the same names.
Through decorating, two class level Property objects, similar to C.x, get created, with each one providing a set of instance methods happy to work with a specific self.
These four instance methods, defined within Generic itself, consult self.__y and self.__z much as x worked with self.__x behind the scenes.
Step8: By the way, notice that method( ) has a single argument 'this', showing that 'self' is by convention and, furthermore, the value of 'this' will depend on whether method( ) is called
Step9: So that's a lot of fancy theory, but what might be a practical application of the above. Suppose we want a circle to let us modify its radius at will, and to treat area as an ordinary attribute nonetheless...
Step10: In decorating only the area method, we provide the area property with a getter, i.e. fget has been set to this method. No setter proxy (self.fset) has been defined, hence an assignment to the area property, which triggers its __set__ method, raises an AttributeError (see Property.__set__).
Step12: Might we make both radius and area into properties, such that setting either recalculates the other?
Let's try | Python Code:
class RevealAccess(object):
    """A data descriptor that sets and returns values
    normally and prints a message logging their access.

    Descriptor Example:
    https://docs.python.org/3/howto/descriptor.html
    """
def __init__(self, initval=None, name='var'):
self.val = initval
self.name = name
def __get__(self, obj, objtype):
print('Retrieving', self.name)
return self.val
def __set__(self, obj, val):
print('Updating', self.name)
self.val = val
class MyClass(object):
x = RevealAccess(10, 'var "x"')
y = 5
# Let's test...
m1 = MyClass()
m2 = MyClass()
print("m1.x: ", m1.x)
m1.x = 20
m2.x = 10
print("m1.x: ", m1.x)
print("m2.x: ", m2.x)
print("m1.y: ", m1.y)
print(m1.x is m2.x)
Explanation: Python for Everyone!<br/>Oregon Curriculum Network
Descriptors and Properties in Python
<img src="https://c7.staticflickr.com/6/5456/30249061422_4e80e28d05.jpg" width="320" height="240" alt="Retired Mascot"/>
Descriptors
Lets take a look at the descriptor protocol. When and how binding happens, and later lookup, is intimately controlled by __set__ and __get__ methods respectively. When defined for a class of object, getting and setting become mediated operations, without changes to the outward API (user interface).
For example, here is some code directly from the Python docs:
End of explanation
class Property(object):
"Emulate PyProperty_Type() in Objects/descrobject.c"
def __init__(self, fget=None, fset=None, fdel=None, doc=None):
self.fget = fget
self.fset = fset
self.fdel = fdel
if doc is None and fget is not None:
doc = fget.__doc__
self.__doc__ = doc
def __get__(self, obj, objtype=None):
if obj is None:
return self
if self.fget is None:
raise AttributeError("unreadable attribute")
return self.fget(obj)
def __set__(self, obj, value):
if self.fset is None:
raise AttributeError("can't set attribute")
self.fset(obj, value)
def __delete__(self, obj):
if self.fdel is None:
raise AttributeError("can't delete attribute")
self.fdel(obj)
def getter(self, fget):
return type(self)(fget, self.fset, self.fdel, self.__doc__)
def setter(self, fset):
return type(self)(self.fget, fset, self.fdel, self.__doc__)
def deleter(self, fdel):
return Property(self.fget, self.fset, fdel, self.__doc__)
Explanation: y's value is an ordinary int, equivalently the value of MyClass.__dict__['y'], whereas the x attribute, a descriptor, will police getting and setting through __get__ and __set__ methods, using the name 'x' as a proxy to x.val behind the scenes (think of x.val as "more secret" as in less directly accessible).
Notice our distinct instances of MyClass nevertheless share both x and y as class level names. Changing the value for one changes it for all. Building on this behavior, a Pythonic way to define setters and getters that store data at the instance level becomes possible.
Properties
The code below is likewise from the Python 3.5 version of the docs at Python.org, and shows how the built-in property() type may be modeled as a pure Python class.
End of explanation
class C:
def getx(self):
print("getting...")
return self.__x
def setx(self, value):
print("setting...")
self.__x = value
def delx(self):
print("deleting...")
del self.__x
x = Property(getx, setx, delx, "I'm the 'x' property.")
Explanation: The getter, setter and deleter methods allow swapping out new versions of fset, fget and fdel by keeping whatever is unchanged and making a new instance with a call to type(self) -- remember that types are callables.
The C class now uses the Property class to fully develop a pet class level attribute named 'x', which in turn fully implements the descriptor protocol, as an instance of the Property descriptor.
End of explanation
class Generic:
def __init__(self, a=None, b=None):
self.__dict__['y'] = C()
self.y = a
self.__dict__['z'] = C()
self.z = b
@Property
def y(self):
return self.__dict__['y'].x
@y.setter
def y(self, val):
print("Generic setter for y")
self.__dict__['y'].x = val
@Property
def z(self):
return self.__dict__['z'].x
@z.setter
def z(self, val):
print("Generic setter for z")
self.__dict__['z'].x = val
Explanation: Think of x as an object prepared to delegate to the three methods. Every time a new C instance is created, that instance is bound to a deeply internal "secret" __x. The public or class level x is a proxy to a set of instance methods living inside C.__dict__. Each instance of C talks to its respective self.__x i.e. don't confuse the shared nature of x with the private individuality of each self.__x
The code below goes a step further in using two instances of C to demonstrate how properties work. In this case, the same Property class is used as a decorator. Notice how .setter() becomes available once the getter is defined, because the decorator has "abducted" the original method and turned it into an instance of something, of which .setter is now an attribute.
End of explanation
me = Generic(3, "Hello")
print("me.y:", me.y)
print("me.z:", me.z)
little_me = Generic(4, "World")
print("little_me.y:", little_me.y)
print("little_me.z:", little_me.z)
me.y = 5
me.z = "Ciao"
little_me.y = 6
little_me.z = "Mondo"
print("me.y:", me.y)
print("me.z:", me.z)
print("little_me.y:", little_me.y)
print("little_me.z:", little_me.z)
Explanation: The reason for all the __getitem__ syntax i.e. talking to self.__dict__ in "longhand", is to avoid a recursive situation where a setter or getter calls itself. __getitem__ syntax lets us set and get in a way that bypasses __getattribute__ and its internal mechanisms, which are responsible for triggering the descriptor protocol in the first place.
Once we have our instance of C in the guts of a setter or getter, we talk directly to its proxy, an instance of Property, which responds accordingly to setting and getting.
End of explanation
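To see why that longhand matters, here is a tiny hypothetical class (not part of the example above) showing the pitfall it avoids: a getter or setter that goes back through the managed name re-triggers the descriptor and recurses.
class Broken:
    @property
    def x(self):
        return self.x        # looks the property up again -> RecursionError
    @x.setter
    def x(self, val):
        self.x = val         # same infinite loop on assignment
Routing the real storage through self.__dict__ (or a differently named attribute such as self._x) breaks that loop.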
class Generic:
def __init__(self, a=None, b=None):
self.y = a
self.z = b
@property
def y(self):
return self.__y
@y.setter
def y(self, val):
print("Generic setter for y")
self.__y = val
@property
def z(self):
return self.__z
@z.setter
def z(self, val):
print("Generic setter for z")
self.__z = val
Explanation: Fun though that was, there's more indirection going on than necessary.
The methods of Generic are themselves suitable as setters and getters, without needing to delegate to some instance of C with its fancy 'x' property.
What you see below is the more usual Python program, except instead of using the pure Python class above, the equivalent built-in property type (lowercase) is used as a decorator instead.
Reading the pure Python version shows how it works.
End of explanation
me = Generic(3, "Hello")
print("me.y:", me.y)
print("me.z:", me.z)
little_me = Generic(4, "World")
print("little_me.y:", little_me.y)
print("little_me.z:", little_me.z)
me.y = 5
me.z = "Ciao"
little_me.y = 6
little_me.z = "Mondo"
print("me.y:", me.y)
print("me.z:", me.z)
print("little_me.y:", little_me.y)
print("little_me.z:", little_me.z)
Explanation: This time, we've cut out the middle man, C.
The Property class is where the descriptor protocol gets implemented.
We turn Generic.y and Generic.z into properties by decorating methods of the same names.
Through decorating, two class level Property objects, similar to C.x, get created, with each one providing a set of instance methods happy to work with a specific self.
These four instance methods, defined within Generic itself, consult self.__y and self.__z much as x worked with self.__x behind the scenes.
End of explanation
class Generic2(Generic):
def method(this):
return ("this is: " +
("Instance" if isinstance(this, Generic2)
else "Class"))
class Generic3(Generic):
@classmethod
def method(this):
return ("this is: " +
("Instance" if isinstance(this, Generic2)
else "Class"))
me = Generic2(1,2)
print("On an instance: ", me.method())
print("On the class: ", Generic2.method(Generic2))
me = Generic3(1,2)
print("With @classmethod decorator: ", me.method())
Explanation: By the way, notice that method( ) has a single argument 'this', showing that 'self' is by convention and, furthermore, the value of 'this' will depend on whether method( ) is called: on an instance or on a class.
Calling me.method() sets 'this' to the instance object i.e. what 'self' is used for.
However Generic.method(Generic) is likewise legal Python, and in this case you must pass the class explicitly if that's what's needed.
The @classmethod decorator, applied to a method, will pass in the class automatically.
End of explanation
import math
class Circle:
def __init__(self, r):
self.radius = r
@property
def area(self):
return self.radius ** 2 * math.pi
def __repr__(self):
return "Circle({})".format(self.radius)
the_circle = Circle(1)
print(the_circle) # triggers __repr__ in the absence of __str__
print("Area of the circle: {:f}".format(the_circle.area))
the_circle.radius = 2
print("Area of the circle: {:f}".format(the_circle.area))
Explanation: So that's a lot of fancy theory, but what might be a practical application of the above. Suppose we want a circle to let us modify its radius at will, and to treat area as an ordinary attribute nonetheless...
End of explanation
try:
the_circle.area = 90
except AttributeError:
print("Can't set the area directly")
Explanation: In decorating only the area method, we provide the area property with a getter, i.e. fget has been set to this method. No setter proxy (self.fset) has been defined, hence an assignment to the area property, which triggers its __set__ method, raises an AttributeError (see Property.__set__).
End of explanation
import unittest
class Circle:
    """setting either the radius or area attribute sets the other
    as a dependent value. Initialized with radius only, unit
    circle by default.
    """
def __init__(self, radius = 1):
self.radius = radius
@property
def area(self):
return self._area
@property
def radius(self):
return self._radius
@area.setter
def area(self, value):
self._area = value
self._radius = math.sqrt(self._area / math.pi)
@radius.setter
def radius(self, value):
self._radius = value
self._area = math.pi * (self._radius ** 2)
def __repr__(self):
return "Circle(radius = {})".format(self.radius)
class TestCircle(unittest.TestCase):
def testArea(self):
the_circle = Circle(1)
self.assertEqual(the_circle.area, math.pi, "Uh oh")
def testRadius(self):
the_circle = Circle(1)
the_circle.area = math.pi * 4 # power rule
self.assertEqual(the_circle.radius, 2, "Uh oh")
a = TestCircle() # the test suite
suite = unittest.TestLoader().loadTestsFromModule(a)
unittest.TextTestRunner().run(suite) # run the test suite
Explanation: Might we make both radius and area into properties, such that setting either recalculates the other?
Let's try:
End of explanation |
1,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Matplotlib Concepts Lecture
In this lecture we cover some more advanced topics which you won't usually use as often. You can always reference the documentation for more resources!
Logarithmic scale
It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales is set separately using the set_xscale and set_yscale methods, which accept one parameter (with the value "log" in this case)
Step1: Placement of ticks and custom tick labels
We can explicitly determine where we want the axis ticks with set_xticks and set_yticks, which both take a list of values for where on the axis the ticks are to be placed. We can also use the set_xticklabels and set_yticklabels methods to provide a list of custom text labels for each tick location
Step2: There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http
Step3: Axis number and axis label spacing
Step4: Axis position adjustments
Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using subplots_adjust
Step5: Axis grid
With the grid method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the plot function
Step6: Axis spines
We can also change the properties of axis spines
Step7: Twin axes
Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the twinx and twiny functions
Step8: Axes where x and y is zero
Step9: Other 2D plot styles
In addition to the regular plot method, there are a number of other functions for generating different kind of plots. See the matplotlib plot gallery for a complete list of available plot types
Step10: Text annotation
Annotating text in matplotlib figures can be done using the text function. It supports LaTeX formatting just like axis label texts and titles
Step11: Figures with multiple subplots and insets
Axes can be added to a matplotlib Figure canvas manually using fig.add_axes or using a sub-figure layout manager such as subplots, subplot2grid, or gridspec
Step12: subplot2grid
Step13: gridspec
Step14: add_axes
Manually adding axes with add_axes is useful for adding insets to figures
Step15: Colormap and contour figures
Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see
Step16: pcolor
Step17: imshow
Step18: contour
Step19: 3D figures
To use 3D graphics in matplotlib, we first need to create an instance of the Axes3D class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a projection='3d' keyword argument to the add_axes or add_subplot methods.
Step20: Surface plots
Step21: Wire-frame plot
Step22: Coutour plots with projections | Python Code:
import matplotlib               # needed later for matplotlib.rcParams
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.linspace(0, 5, 11)
y = x**2
fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
Explanation: Advanced Matplotlib Concepts Lecture
In this lecture we cover some more advanced topics which you won't usually use as often. You can always reference the documentation for more resources!
Logarithmic scale
It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales is set separately using the set_xscale and set_yscale methods, which accept one parameter (with the value "log" in this case):
End of explanation
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
Explanation: Placement of ticks and custom tick labels
We can explicitly determine where we want the axis ticks with set_xticks and set_yticks, which both take a list of values for where on the axis the ticks are to be placed. We can also use the set_xticklabels and set_yticklabels methods to provide a list of custom text labels for each tick location:
End of explanation
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_title("scientific notation")
ax.set_yticks([0, 50, 100, 150])
from matplotlib import ticker
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax.yaxis.set_major_formatter(formatter)
Explanation: There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.
Scientific notation
With large numbers on axes, it is often better to use scientific notation:
End of explanation
# distance between x and y axis and the numbers on the axes
matplotlib.rcParams['xtick.major.pad'] = 5
matplotlib.rcParams['ytick.major.pad'] = 5
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("label and axis spacing")
# padding between axis label and axis numbers
ax.xaxis.labelpad = 5
ax.yaxis.labelpad = 5
ax.set_xlabel("x")
ax.set_ylabel("y");
# restore defaults
matplotlib.rcParams['xtick.major.pad'] = 3
matplotlib.rcParams['ytick.major.pad'] = 3
Explanation: Axis number and axis label spacing
End of explanation
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("title")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
Explanation: Axis position adjustments
Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using subplots_adjust:
End of explanation
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
Explanation: Axis grid
With the grid method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the plot function:
End of explanation
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
Explanation: Axis spines
We can also change the properties of axis spines:
End of explanation
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
label.set_color("red")
Explanation: Twin axes
Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the twinx and twiny functions:
End of explanation
fig, ax = plt.subplots()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
Explanation: Axes where x and y are zero
End of explanation
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
Explanation: Other 2D plot styles
In addition to the regular plot method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
End of explanation
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
Explanation: Text annotation
Annotating text in matplotlib figures can be done using the text function. It supports LaTeX formatting just like axis label texts and titles:
End of explanation
fig, ax = plt.subplots(2, 3)
fig.tight_layout()
Explanation: Figures with multiple subplots and insets
Axes can be added to a matplotlib Figure canvas manually using fig.add_axes or using a sub-figure layout manager such as subplots, subplot2grid, or gridspec:
subplots
End of explanation
fig = plt.figure()
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
fig.tight_layout()
Explanation: subplot2grid
End of explanation
import matplotlib.gridspec as gridspec
fig = plt.figure()
gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])
for g in gs:
ax = fig.add_subplot(g)
fig.tight_layout()
Explanation: gridspec
End of explanation
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
fig.tight_layout()
# inset
inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height
inset_ax.plot(xx, xx**2, xx, xx**3)
inset_ax.set_title('zoom near origin')
# set axis range
inset_ax.set_xlim(-.2, .2)
inset_ax.set_ylim(-.005, .01)
# set axis tick locations
inset_ax.set_yticks([0, 0.005, 0.01])
inset_ax.set_xticks([-0.1,0,.1]);
Explanation: add_axes
Manually adding axes with add_axes is useful for adding insets to figures:
End of explanation
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
Explanation: Colormap and contour figures
Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
End of explanation
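As a small added sketch (not from the original lecture), a custom colormap can be built from a short list of colors with LinearSegmentedColormap.from_list and passed to any of the plotting functions below; the X, Y, Z arrays are the ones computed above.
from matplotlib.colors import LinearSegmentedColormap
my_cmap = LinearSegmentedColormap.from_list("my_cmap", ["navy", "white", "firebrick"])
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=my_cmap)  # same data, custom colormap
cb = fig.colorbar(p, ax=ax)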
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
Explanation: pcolor
End of explanation
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
Explanation: imshow
End of explanation
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
Explanation: contour
End of explanation
from mpl_toolkits.mplot3d.axes3d import Axes3D
Explanation: 3D figures
To use 3D graphics in matplotlib, we first need to create an instance of the Axes3D class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a projection='3d' keyword argument to the add_axes or add_subplot methods.
End of explanation
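As a minimal added illustration of the projection keyword described above, this creates an empty 3D axes; the surface and wire-frame plots below follow the same pattern.
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')  # a 3D-aware axes instance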
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
Explanation: Surface plots
End of explanation
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
Explanation: Wire-frame plot
End of explanation
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
Explanation: Contour plots with projections
End of explanation |
1,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF Lattice premade models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Importing required packages
Step3: Downloading the UCI Statlog (Heart) dataset
Step4: Extract the features and labels and convert them into tensors.
Step5: Setting the default values used for training in this guide
Step6: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
You must fully specify the feature configuration for every feature that the model should recognize; otherwise, the model has no way of knowing that such a feature exists.
Computing quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to define the input keypoints manually. To do so, we first define our own helper function for computing quantiles.
Step7: Defining our feature configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
Step8: Next, we need to properly set the monotonicities for features for which we used a custom vocabulary (such as 'thal' above).
Step9: Calibrated linear model
To construct a TFL premade model, first build a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a linear combination and an optional output piecewise-linear calibration. When output calibration is used or when output bounds are specified, the linear layer applies a weighted average to the calibrated inputs.
This example builds a calibrated linear model on the first 5 features.
Step10: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
Step11: After training our model, we can evaluate it on the test set.
Step12: Calibrated lattice model
A calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a lattice model and an optional output piecewise-linear calibration.
This example builds a calibrated lattice model on the first 5 features.
Step13: As before, we compile, fit, and evaluate our model.
Step14: Calibrated lattice ensemble model
When the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their outputs instead of creating a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig. A calibrated lattice ensemble model applies piecewise-linear and categorical calibration to the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
Explicit lattice ensemble initialization
If you already know which subsets of features you want to feed into your lattices, you can explicitly set the lattices using feature names. This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
Step15: As before, we compile, fit, and evaluate our model.
Step16: Random lattice ensemble
If you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
Step17: As before, we compile, fit, and evaluate our model.
Step18: RTL layer random lattice ensemble
When using a random lattice ensemble, you can specify that the model use a single tfl.layers.RTL layer. tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not allow per-feature regularization. Note that using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.
This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
Step19: As before, we compile, fit, and evaluate our model.
Step20: Crystals lattice ensemble
Premade also provides a heuristic feature-arrangement algorithm called Crystals. To use the Crystals algorithm, we first train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble so that features with more non-linear interactions end up in the same lattices.
The premade library provides helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.
This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
Step21: As before, we compile, fit, and evaluate our model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice pydot
Explanation: TF Lattice premade models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/premade_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Premade models are quick and easy ways to build TFL tf.keras.model instances for typical use cases. This guide outlines the steps needed to construct a TFL premade model and train/test it.
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Importing required packages
End of explanation
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
Explanation: Downloading the UCI Statlog (Heart) dataset
End of explanation
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
Explanation: Extract the features and labels and convert them into tensors.
End of explanation
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
Explanation: Setting the default values used for training in this guide
End of explanation
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
Explanation: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
You must fully specify the feature configuration for every feature that the model should recognize; otherwise, the model has no way of knowing that such a feature exists.
Computing quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to define the input keypoints manually. To do so, we first define our own helper function for computing quantiles.
End of explanation
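As a quick added illustration (not part of the original guide), the helper defined above can be called directly on one feature column; here we compute 5 keypoints for the 'age' feature, clipping at 100 as the feature config below does.
age_keypoints = compute_quantiles(
    train_xs[feature_name_indices['age']], num_keypoints=5, clip_max=100)
print(age_keypoints)  # Five quantile-based input keypoints for 'age'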
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
Explanation: Defining our feature configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
End of explanation
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
Explanation: Next, we need to properly set the monotonicities for features for which we used a custom vocabulary (such as 'thal' above).
End of explanation
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
Explanation: Calibrated linear model
To construct a TFL premade model, first build a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a linear combination and an optional output piecewise-linear calibration. When output calibration is used or when output bounds are specified, the linear layer applies a weighted average to the calibrated inputs.
This example builds a calibrated linear model on the first 5 features.
End of explanation
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
Explanation: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
End of explanation
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs[:5], test_ys))
Explanation: After training our model, we can evaluate it on the test set.
End of explanation
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
Explanation: Calibrated lattice model
A calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a lattice model and an optional output piecewise-linear calibration.
This example builds a calibrated lattice model on the first 5 features.
End of explanation
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs[:5], test_ys))
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
Explanation: Calibrated lattice ensemble model
When the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their outputs instead of creating a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig. A calibrated lattice ensemble model applies piecewise-linear and categorical calibration to the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
Explicit lattice ensemble initialization
If you already know which subsets of features you want to feed into your lattices, you can explicitly set the lattices using feature names. This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
End of explanation
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
Explanation: Random lattice ensemble
If you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
End of explanation
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
Explanation: RTL layer random lattice ensemble
When using a random lattice ensemble, you can specify that the model use a single tfl.layers.RTL layer. tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not allow per-feature regularization. Note that using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.
This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
End of explanation
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combines non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
Explanation: Crystals lattice ensemble
Premade also provides a heuristic feature-arrangement algorithm called Crystals. To use the Crystals algorithm, we first train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble so that features with more non-linear interactions end up in the same lattices.
The premade library provides helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.
This example builds a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
End of explanation
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation |
1,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xlrd demo
This notebook demonstrates the xlrd package, which is designed to read MS Excel files. This is not a built-in package, but rather a 3rd-party package that is installed with ArcGIS Desktop. The PyPI site for xlrd is https
Step1: Open the file as a workbook object, and show some of its properties...
Step2: Get a worksheet from the workbook, and show some of its properties...
Step3: Extract some cell values from the 3-TS worksheet | Python Code:
#Import the os and the xlrd modules
import xlrd
#Set a variable to the path of the xlsx file
xlFilename = './Data/USGSCircular1405-tables1-14.xlsx'
Explanation: xlrd demo
This notebook demonstrates the xlrd package, which is designed to read MS Excel files. This is not a built-in package, but rather a 3rd-party package that is installed with ArcGIS Desktop. The PyPI site for xlrd is https://pypi.python.org/pypi/xlrd, and the package's GitHub site (with documentation) is here: https://github.com/python-excel/xlrd.
We will use the quickstart on the GitHub site to, well, start quickly with this package. We'll demonstrate the package using it to read USGS Water Use for 2010, retrieved from here: https://water.usgs.gov/watuse/data/2010/index.html, and stored in the W:/859_data/Demo folder as USGSCircular1405-tables1-14.xlsx.
This brief exercise is not intended to cover the xlrd package fully, but rather to familiarize yourself with how to dig quickly into what a package can do and how to use it.
End of explanation
#Use the open_workbook function to open the Excel file
book = xlrd.open_workbook(xlFilename)
type(book)
#Reveal some properties of this workbook
print("The number of worksheets is {0}".format(book.nsheets))
print("Worksheet name(s): {0}".format(book.sheet_names()))
Explanation: Open the file as a workbook object, and show some of its properties...
End of explanation
# Get a worksheet by index and print its name, and the number of rows and columns
sh1 = book.sheet_by_index(0)
print(sh1.name, sh1.nrows, sh1.ncols)
# Get a worksheet by name
sh2 = book.sheet_by_name('3-TS')
print(sh2.name, sh2.nrows, sh2.ncols)
Explanation: Get a worksheet from the workbook, and show some of its properties...
End of explanation
#Print the value of the cell in row 10, column 3
print(sh2.cell_value(rowx=10, colx=3))
#Print an entire row
row4 = sh2.row(3)
for item in row4:
print (item.value)
#Print an entire column
col3 = sh2.col(3)
print(col3)
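#A small added sketch (not part of the original demo): row_values pulls a whole row
#out as plain Python values, which is often more convenient than Cell objects.
all_rows = [sh2.row_values(i) for i in range(sh2.nrows)]
print(len(all_rows))    #Should equal sh2.nrows
print(all_rows[3][:5])  #First five values of the fourth row, as plain values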
Explanation: Extract some cell values from the 3-TS worksheet
End of explanation |
1,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 7
Step1: The totient is a multiplicative function, meaning that if $GCD(a,b) = 1$, then $\phi(ab) = \phi(a) \phi(b)$. Therefore, the totient of number can be found quickly from the totient of the prime powers within its decomposition. We can re-implement the totient using our functions for prime decomposition and multiplicative functions from Notebook 4.
Step2: When $p^e$ is a prime power, the numbers among $1, \ldots, p^e$ are coprime to $p^e$ precisely when they are not multiples of $p$. Therefore, the totient of a prime power is pretty easy to compute
Step3: Note that for efficiency, the computation of $p^{e-1}(p-1)$ is probably faster than the computation of $p^e - p^{e-1}$ (which relies on two exponents). But Python is very smart about these sorts of things, and might do some optimization which makes the computation faster. It should be very fast anyways, in the 10-20 nanosecond range!
Step4: This should be much faster than the previous brute-force computation of the totient.
Modular roots
A consequence of Euler's theorem is that -- depending on the modulus -- some exponentiation can be "reversed" by another exponentiation, a form of taking a "root" in modular arithmetic. For example, if we work modulo $100$, and $GCD(a,100) = 1$, then Euler's theorem states that
$$a^{40} \equiv 1 \text{ mod } 100.$$
It follows that $a^{80} \equiv 1$ and $a^{81} \equiv a$, modulo $100$. Expanding this, we find
$$a \equiv a^{81} = a^{3 \cdot 27} = (a^3)^{27} \text{ mod } 100.$$
What all this computation shows is that "raising to the 27th power" is like "taking the cube root", modulo 100 (and with appropriate bases).
Step5: In every line ending with True, the first and third numbers should match. This will happen in some False lines too, but not reliably since Euler's theorem does not apply there.
We found the exponent 27 -- reversing the cubing operation -- by an ad hoc sort of procedure. The relationship between 27 and 3, which made things work, is that
$$3 \cdot 27 \equiv 1 \text{ mod } 40.$$
In other words, $3 \cdot 27 = 1 + 40k$ for some (positive, in fact) integer $k$.
Recalling that $40 = \phi(100)$, this relationship and Euler's theorem imply that
$$a^{3 \cdot 27} = a^{1 + 40k} \equiv a^1 = a \text{ mod } 100.$$
By this argument, we have the following consequence of Euler's theorem.
Theorem
Step6: Let's test this out on some bigger numbers.
Step7: Exercises
What is the largest integer whose totient is 100? Study this by a brute force search, i.e., looping through the integers up to 10000
For which integers $n$ is it true that $\phi(n) = n/2$? Study this by a brute force search in order to make a conjecture. Then prove it if you can.
Compute the totient of the numbers from 1 to 10000, and analyze the results. The totient $\phi(n)$ is always less than $n$ when $n > 1$, but how does the ratio $\phi(n) / n$ behave? Create a graph. What is the average value of the ratio?
If $a^{323} \equiv 802931$, mod $5342481$, and $GCD(a, 5342481) = 1$, and $0 < a < 5342481$, then what is $a$?
Challenge
Step8: Encryption and decryption
Now we explain how the public key $(N,e)$ and the private key $(p,q)$ are used to encrypt and decrypt a (numerical) message $m$. If you wish to encrypt/decrypt a text message, one case use a numerical scheme like ASCII, of course. The message should be significantly shorter than the modulus, $m < N$ (ideally, shorter than the private key primes), but big enough so that $m^e$ is much bigger than $N$ (not usually a problem if $e = 65537$).
The encryption procedure is simple -- it requires just one line of Python code, using the message and public key. The ciphertext $c$ is given by the formula
$$c = m^e \text{ mod } N,$$
where here we mean the "natural representative" of $m^e$ modulo $N$.
Step9: To decrypt the ciphertext, we need to "undo" the operation of raising to the $e$-th power modulo $N$. We must, effectively, take the $e$-th root of the ciphertext, modulo $N$. This is what we studied earlier in this notebook. Namely, if $c \equiv m^e \text{ mod } N$ is the ciphertext, and $ef \equiv 1$ modulo $\phi(N)$, then
$$c^f \equiv m^{ef} \equiv m \text{ mod } N.$$
So we must raise the ciphertext to the $f$ power, where $f$ is the multiplicative inverse of $e$ modulo $\phi(N)$. Given a giant number $N$, it is difficult to compute the totient $\phi(N)$. But, with the private key $p$ and $q$ (primes), the fact that $N = pq$ implies
$$\phi(N) = (p-1) \cdot (q-1) = pq - p - q + 1 = N - p - q + 1.$$
Armed with the private key (and the public key, which everyone has), we can decrypt a message in just a few lines of Python code.
Step10: That's the entire process of encryption and decryption. Encryption requires just the public key $(N,e)$ and decryption requires the private key $(p,q)$ too. Everything else is modular arithmetic, using Euler's theorem and the Euclidean algorithm to find modular multiplicative inverses.
From a practical standpoint, there are many challenges, and we just mention a few here.
Key generation | Python Code:
def GCD(a,b):
while b: # Recall that != means "not equal to".
a, b = b, a % b
return abs(a)
def totient(m):
tot = 0 # The running total.
j = 0
while j < m: # We go up to m, because the totient of 1 is 1 by convention.
j = j + 1 # Last step of while loop: j = m-1, and then j = j+1, so j = m.
if GCD(j,m) == 1:
tot = tot + 1
return tot
totient(17) # The totient of a prime p should be p-1.
totient(1000)
totient(1) # Check a borderline case, to make sure we didn't make an off-by-one error.
17**totient(1000) % 1000 # Let's demonstrate Euler's theorem. Note GCD(17,1000) = 1.
pow(17,totient(1000),1000) # A more efficient version, using the pow command.
%timeit totient(123456)
Explanation: Part 7: The RSA Cryptosystem in Python 3.x
In the previous notebook, we studied a few basic ciphers together with Diffie-Hellman key exchange. The Vigenère cipher we studied uses a secret key for encrypting and decrypting messages. The same key is used for both encryption and decryption, so we say it is a symmetric key cipher. In order for two parties to share the same secret key, we studied the Diffie-Hellman protocol, whose security rests on the difficulty of the discrete logarithm problem.
Although this represents progress towards secure communication, it is particularly vulnerable to problems of authentication. For example, imagine a "man-in-the-middle attack": Alice and Bob wish to communicate securely, and begin the Diffie-Hellman protocol over an insecure line. But Eve has intercepted the line. To Alice, she pretends to be Bob, and to Bob, she pretends to be Alice. She goes through the Diffie-Hellman protocol with each, obtaining two secret keys, and decrypting/encrypting messages as they pass through her computer. In this way, Alice and Bob think they are talking to each other, but Eve is just passing (and understanding) their messages the whole time!
To thwart such an attack, we need some type of authentication. We need something asymmetric -- something one person can do that no other person can do, like a verifiable signature, so that we can be sure we're communicating with the intended person. For such a purpose, we introduce the RSA cryptosystem. Computationally based on modular exponentiation, its security rests on the difficulty of factoring large numbers.
The material in this notebook complements Chapter 7 of An Illustrated Theory of Numbers.
Table of Contents
Euler's theorem and modular roots
The RSA protocol
<a id='euler'></a>
Euler's Theorem and Modular Roots
Recall Fermat's Little Theorem: if $p$ is prime and $GCD(a,p) = 1$, then $a^{p-1} \equiv 1$ mod $p$. This is a special case of Euler's theorem, which holds for any modulus $m$.
Euler's theorem and the totient
Euler's theorem states: if $m$ is a positive integer and $GCD(a,m) = 1$, then $a^{\phi(m)} \equiv 1$ mod $m$. Here $\phi(m)$ denotes the totient of $m$, which is the number of elements of ${ 1,...,m }$ which are coprime to $m$. We give a brute force implementation of the totient first, using our old Euclidean algorithm code for the GCD.
End of explanation
from math import sqrt # We'll want to use the square root.
def smallest_factor(n):
'''
Gives the smallest prime factor of n.
'''
if n < 2:
return None # No prime factors!
test_factor = 2 # The smallest possible prime factor.
max_factor = sqrt(n) # we don't have to search past sqrt(n).
while test_factor <= max_factor:
if n%test_factor == 0:
return test_factor
test_factor = test_factor + 1 # This could be sped up.
return n # If we didn't find a factor up to sqrt(n), n itself is prime!
def decompose(N):
'''
Gives the unique prime decomposition of a positive integer N,
as a dictionary with primes as keys and exponents as values.
'''
current_number = N # We'll divide out factors from current_number until we get 1.
decomp = {} # An empty dictionary to start.
while current_number > 1:
p = smallest_factor(current_number) # The smallest prime factor of the current number.
if p in decomp.keys(): # Is p already in the list of keys?
decomp[p] = decomp[p] + 1 # Increase the exponent (value with key p) by 1.
else: # "else" here means "if p is not in decomp.keys()".
decomp[p] = 1 # Creates a new entry in the dictionary, with key p and value 1.
current_number = current_number // p # Factor out p.
return decomp
def mult_function(f_pp):
'''
When a function f_pp(p,e) of two arguments is input,
this outputs a multiplicative function obtained from f_pp
via prime decomposition.
'''
def f(n):
D = decompose(n)
result = 1
for p in D:
result = result * f_pp(p, D[p])
return result
return f
Explanation: The totient is a multiplicative function, meaning that if $GCD(a,b) = 1$, then $\phi(ab) = \phi(a) \phi(b)$. Therefore, the totient of a number can be found quickly from the totients of the prime powers within its decomposition. We can re-implement the totient using our functions for prime decomposition and multiplicative functions from Notebook 4.
End of explanation
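As a quick added numerical illustration of the multiplicative property stated above: since GCD(8,15) = 1, the totient of 120 equals the product of the totients of 8 and 15.
print(totient(8), totient(15), totient(8*15))  # Expect 4, 8, 32 -- and 4 * 8 == 32.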
def totient_pp(p,e):
return (p**(e-1)) * (p-1)
Explanation: When $p^e$ is a prime power, the numbers among $1, \ldots, p^e$ are coprime to $p^e$ precisely when they are not multiples of $p$. Therefore, the totient of a prime power is pretty easy to compute:
$$\phi(p^e) = p^e - p^{e-1} = p^{e-1} (p-1).$$
We implement this, and use the multiplicative function code to complete the implementation of the totient.
End of explanation
totient = mult_function(totient_pp)
totient(1000)
%timeit totient(123456)
Explanation: Note that for efficiency, the computation of $p^{e-1}(p-1)$ is probably faster than the computation of $p^e - p^{e-1}$ (which relies on two exponents). But Python is very smart about these sorts of things, and might do some optimization which makes the computation faster. It should be very fast anyways, in the 10-20 nanosecond range!
End of explanation
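As an added sanity check (not in the original notebook), we can confirm that the decomposition-based totient agrees with a brute-force count over a range of small inputs.
def totient_bruteforce(m):
    # Count the integers from 1 to m that are coprime to m.
    return sum(1 for j in range(1, m+1) if GCD(j, m) == 1)

print(all(totient(n) == totient_bruteforce(n) for n in range(1, 200)))  # Expect True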
for b in range(20):
b_cubed = pow(b,3,100)
bb = pow(b_cubed,27,100)
print(b, b_cubed, bb, GCD(b,100) == 1)
Explanation: This should be much faster than the previous brute-force computation of the totient.
Modular roots
A consequence of Euler's theorem is that -- depending on the modulus -- some exponentiation can be "reversed" by another exponentiation, a form of taking a "root" in modular arithmetic. For example, if we work modulo $100$, and $GCD(a,100) = 1$, then Euler's theorem states that
$$a^{40} \equiv 1 \text{ mod } 100.$$
It follows that $a^{80} \equiv 1$ and $a^{81} \equiv a$, modulo $100$. Expanding this, we find
$$a \equiv a^{81} = a^{3 \cdot 27} = (a^3)^{27} \text{ mod } 100.$$
What all this computation shows is that "raising to the 27th power" is like "taking the cube root", modulo 100 (and with appropriate bases).
End of explanation
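Here is one more added check of the key relationship derived above: 3 times 27 is congruent to 1 mod 40, so raising a base coprime to 100 to the 81st power returns the base itself, mod 100.
print((3 * 27) % 40)       # Expect 1
print(pow(7, 81, 100), 7)  # Expect 7 7, since GCD(7,100) = 1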
def mult_inverse(a,m):
'''
Finds the multiplicative inverse of a, mod m.
If GCD(a,m) = 1, this is returned via its natural representative.
Otherwise, None is returned.
'''
u = a # We use u instead of dividend.
v = m # We use v instead of divisor.
u_hops, u_skips = 1,0 # u is built from one hop (a) and no skips.
v_hops, v_skips = 0,1 # v is built from no hops and one skip (b).
while v != 0: # We could just write while v:
q = u // v # q stands for quotient.
r = u % v # r stands for remainder. So u = q(v) + r.
r_hops = u_hops - q * v_hops # Tally hops
r_skips = u_skips - q * v_skips # Tally skips
u,v = v,r # The new dividend,divisor is the old divisor,remainder.
u_hops, v_hops = v_hops, r_hops # The new u_hops, v_hops is the old v_hops, r_hops
u_skips, v_skips = v_skips, r_skips # The new u_skips, v_skips is the old v_skips, r_skips
g = u # The variable g now describes the GCD of a and b.
if g == 1:
return u_hops % m
else: # When GCD(a,m) is not 1...
return None
mult_inverse(3,40) # 3 times what is congruent to 1, mod 40?
mult_inverse(5,40) # None should be returned.
Explanation: In every line ending with True, the first and third numbers should match. This will happen in some False lines too, but not reliably since Euler's theorem does not apply there.
We found the exponent 27 -- reversing the cubing operation -- by an ad hoc sort of procedure. The relationship between 27 and 3, which made things work, is that
$$3 \cdot 27 \equiv 1 \text{ mod } 40.$$
In other words, $3 \cdot 27 = 1 + 40k$ for some (positive, in fact) integer $k$.
Recalling that $40 = \phi(100)$, this relationship and Euler's theorem imply that
$$a^{3 \cdot 27} = a^{1 + 40k} \equiv a^1 = a \text{ mod } 100.$$
By this argument, we have the following consequence of Euler's theorem.
Theorem: If $GCD(a,m) = 1$, and $ef \equiv 1$ mod $\phi(m)$, then
$$a^{ef} \equiv a \text{ mod } m.$$
In this way, "raising to the $f$ power" is like "taking the $e$-th root", modulo $m$.
If we are given $e$, then $f$ is a multiplicative inverse of $e$ modulo $\phi(m)$. In particular, such a multiplicative inverse exists if and only if $GCD(e,\phi(m)) = 1$. The following function computes a multiplicative inverse, by adapting the solve_LDE function from Notebook 2. After all, solving $ex \equiv 1$ mod $m$ is equivalent to solving the linear Diophantine equation $ex + my = 1$ (and only caring about the $x$-value).
End of explanation
from random import randint
while True:
m = randint(1000000, 9999999) # a random 7-digit number
e = randint(100,999) # a random 3-digit number
a = randint(10,99) # a random 2-digit number
if GCD(a,m) == 1:
tot = totient(m)
if GCD(e,tot) == 1:
f = mult_inverse(e,tot)
test_number = pow(a, e*f, m)
print("Success!")
print("{} ^ ({} * {}) = {}, mod {}".format(a,e,f,test_number,m))
break # Escapes the loop once an example is found!
Explanation: Let's test this out on some bigger numbers.
End of explanation
from random import SystemRandom, randint
def Miller_Rabin(p, base):
'''
Tests whether p is prime, using the given base.
The result False implies that p is definitely not prime.
The result True implies that p **might** be prime.
It is not a perfect test!
'''
result = 1
exponent = p-1
modulus = p
bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent
for bit in bitstring: # Iterates through the "letters" of the string. Here the letters are '0' or '1'.
sq_result = result*result % modulus # We need to compute this in any case.
if sq_result == 1:
if (result != 1) and (result != exponent): # Note that exponent is congruent to -1, mod p.
return False # a ROO violation occurred, so p is not prime
if bit == '0':
result = sq_result
if bit == '1':
result = (sq_result * base) % modulus
if result != 1:
return False # a FLT violation occurred, so p is not prime.
return True # If we made it this far, no violation occurred and p might be prime.
def is_prime(p, witnesses=50): # witnesses is a parameter with a default value.
'''
Tests whether a positive integer p is prime.
For p < 2^64, the test is deterministic, using known good witnesses.
Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test,
based on research by Pomerance, Selfridge and Wagstaff, Jaeschke, Jiang and Deng.
For larger p, a number (by default, 50) of witnesses are chosen at random.
'''
if (p%2 == 0): # Might as well take care of even numbers at the outset!
if p == 2:
return True
else:
return False
if p > 2**64: # We use the probabilistic test for large p.
trial = 0
while trial < witnesses:
trial = trial + 1
witness = randint(2,p-2) # A good range for possible witnesses
if Miller_Rabin(p,witness) == False:
return False
return True
else: # We use a determinisic test for p <= 2**64.
verdict = Miller_Rabin(p,2)
if p < 2047:
return verdict # The witness 2 suffices.
verdict = verdict and Miller_Rabin(p,3)
if p < 1373653:
return verdict # The witnesses 2 and 3 suffice.
verdict = verdict and Miller_Rabin(p,5)
if p < 25326001:
return verdict # The witnesses 2,3,5 suffice.
verdict = verdict and Miller_Rabin(p,7)
if p < 3215031751:
return verdict # The witnesses 2,3,5,7 suffice.
verdict = verdict and Miller_Rabin(p,11)
if p < 2152302898747:
return verdict # The witnesses 2,3,5,7,11 suffice.
verdict = verdict and Miller_Rabin(p,13)
if p < 3474749660383:
return verdict # The witnesses 2,3,5,7,11,13 suffice.
verdict = verdict and Miller_Rabin(p,17)
if p < 341550071728321:
return verdict # The witnesses 2,3,5,7,11,17 suffice.
verdict = verdict and Miller_Rabin(p,19) and Miller_Rabin(p,23)
if p < 3825123056546413051:
return verdict # The witnesses 2,3,5,7,11,17,19,23 suffice.
verdict = verdict and Miller_Rabin(p,29) and Miller_Rabin(p,31) and Miller_Rabin(p,37)
return verdict # The witnesses 2,3,5,7,11,17,19,23,29,31,37 suffice for testing up to 2^64.
def random_prime(bitlength):
while True:
p = SystemRandom().getrandbits(bitlength) # A cryptographically secure random number.
if is_prime(p):
return p
random_prime(100) # A random 100-bit prime
random_prime(512) # A random 512-bit prime. Should be quick, thanks to Miller-Rabin!
%timeit random_prime(1024) # Even 1024-bit primes should be quick!
def RSA_privatekey(bitlength):
'''
Create private key for RSA, with given bitlength.
Just a pair of big primes!
'''
p = random_prime(bitlength)
q = random_prime(bitlength)
return p,q # Returns both values, as a "tuple"
type(RSA_privatekey(8)) # When a function returns multiple values, the type is "tuple".
p,q = RSA_privatekey(512) # If a function outputs two values, you can assign them to two variables.
print("Private key p = {}".format(p))
print("Private key q = {}".format(q))
def RSA_publickey(p,q, e = 65537):
'''
Makes the RSA public key out of
two prime numbers p,q (the private key),
and an auxiliary exponent e.
By default, e = 65537.
'''
N = p*q
return N,e
N,e = RSA_publickey(p,q) # No value of e is input, so it will default to 65537
print("Public key N = {}".format(N)) # A big number!
print("Public key e = {}".format(e))
Explanation: Exercises
What is the largest integer whose totient is 100? Study this by a brute force search, i.e., looping through the integers up to 10000
For which integers $n$ is it true that $\phi(n) = n/2$? Study this by a brute force search in order to make a conjecture. Then prove it if you can.
Compute the totient of the numbers from 1 to 10000, and analyze the results. The totient $\phi(n)$ is always less than $n$ when $n > 1$, but how does the ratio $\phi(n) / n$ behave? Create a graph. What is the average value of the ratio?
If $a^{323} \equiv 802931$, mod $5342481$, and $GCD(a, 5342481) = 1$, and $0 < a < 5342481$, then what is $a$?
Challenge: Create a function superpow(x,y,z,m) which computes $x^{y^z}$ modulo $m$ efficiently, when $GCD(x,m) = 1$ and $m$ is small enough to factor.
<a id='RSA'></a>
The RSA protocol
Like Diffie-Hellman, the RSA protocol involves a series of computations in modular arithmetic, taking care to keep some numbers private while making others public. RSA was published two years after Diffie-Hellman, in 1978 by Rivest, Shamir, and Adelman (hence its name). The great advance of the RSA protocol was its asymmetry. While Diffie-Hellman is used for symmetric key cryptography (using the same key to encrypt and decrypt), the RSA protocol has two keys: a public key that can be used by anyone for encryption and a private key that can be used by its owner for decryption.
In this way, if Alice publishes her public key online, anyone can send her an encrypted message. But as long as she keeps her private key private, only Alice can decrypt the messages sent to her. Such an asymmetry allows RSA to be used for authentication -- if the owner of a private key has an ability nobody else has, then this ability can be used to prove the owner's identity. In practice, this is one of the most common applications of RSA, guaranteeing that we are communicating with the intended person.
In the RSA protocol, the private key is a pair of large (e.g. 512 bit) prime numbers, called $p$ and $q$. The public key is the pair $(N, e)$, where $N$ is defined to be the product $N = pq$ and $e$ is an auxiliary number called the exponent. The number $e$ is often (for computational efficiency and other reasons) taken to be 65537 -- the same number $e$ can be used over and over by different people. But it is absolutely crucial that the same private keys $p$ and $q$ are not used by different individuals. Individuals must create and safely keep their own private key.
We begin with the creation of a private key $(p,q)$. We use the SystemRandom function (see the previous Python Notebook) to cook up cryptographically secure random numbers, and the Miller-Rabin test to certify primality.
End of explanation
def RSA_encrypt(message, N, e):
'''
Encrypts message, using the public keys N,e.
'''
return pow(message, e, N)
c = RSA_encrypt(17,N,e) # c is the ciphertext.
print("The ciphertext is {}".format(c)) # A very long number!
Explanation: Encryption and decryption
Now we explain how the public key $(N,e)$ and the private key $(p,q)$ are used to encrypt and decrypt a (numerical) message $m$. If you wish to encrypt/decrypt a text message, one can use a numerical scheme like ASCII, of course. The message should be significantly shorter than the modulus, $m < N$ (ideally, shorter than the private key primes), but big enough so that $m^e$ is much bigger than $N$ (not usually a problem if $e = 65537$).
The encryption procedure is simple -- it requires just one line of Python code, using the message and public key. The ciphertext $c$ is given by the formula
$$c = m^e \text{ mod } N,$$
where here we mean the "natural representative" of $m^e$ modulo $N$.
End of explanation
def RSA_decrypt(ciphertext, p,q,N,e):
'''
Decrypts message, using the private key (p,q)
and the public key (N,e). We allow the public key N as
an input parameter, to avoid recomputing it.
'''
tot = N - (p+q) + 1
f = mult_inverse(e,tot) # This uses the Euclidean algorithm... very quick!
return pow(ciphertext,f,N)
RSA_decrypt(c,p,q,N,e) # We decrypt the ciphertext... what is the result?
Explanation: To decrypt the ciphertext, we need to "undo" the operation of raising to the $e$-th power modulo $N$. We must, effectively, take the $e$-th root of the ciphertext, modulo $N$. This is what we studied earlier in this notebook. Namely, if $c \equiv m^e \text{ mod } N$ is the ciphertext, and $ef \equiv 1$ modulo $\phi(N)$, then
$$c^f \equiv m^{ef} \equiv m \text{ mod } N.$$
So we must raise the ciphertext to the $f$ power, where $f$ is the multiplicative inverse of $e$ modulo $\phi(N)$. Given a giant number $N$, it is difficult to compute the totient $\phi(N)$. But, with the private key $p$ and $q$ (primes), the fact that $N = pq$ implies
$$\phi(N) = (p-1) \cdot (q-1) = pq - p - q + 1 = N - p - q + 1.$$
Armed with the private key (and the public key, which everyone has), we can decrypt a message in just a few lines of Python code.
End of explanation
from hashlib import sha512
print(sha512("I like sweet potato hash.".encode('utf-8')).digest()) # A 64-character string of hash.
Explanation: That's the entire process of encryption and decryption. Encryption requires just the public key $(N,e)$ and decryption requires the private key $(p,q)$ too. Everything else is modular arithmetic, using Euler's theorem and the Euclidean algorithm to find modular multiplicative inverses.
From a practical standpoint, there are many challenges, and we just mention a few here.
Key generation: The person who constructs the private key $(p,q)$ needs to be careful. The primes $p$ and $q$ need to be pretty large (512 bits, or 1024 bits perhaps), which is not so difficult. They also need to be constructed randomly. For imagine that Alice comes up with her private key $(p,q)$ and Anne comes up with her private key $(q,r)$, with the same prime $q$ in common. Their public keys will include the numbers $N = pq$ and $M = qr$. If someone like Arjen comes along and starts taking GCDs of all the public keys in a database, that person will stumble upon the fact that $GCD(N,M) = q$, from which the private keys $(p,q)$ and $(q,r)$ can be derived. And this sort of disaster has happened! Poorly generated keys were stored in a database, and discovered by Arjen Lenstra et al. (a small sketch of this GCD check appears after this list of challenges).
Security by difficulty of factoring: The security of RSA is based on the difficulty of obtaining the private key $(p,q)$ from the public key $(N,e)$. Since $N = pq$, this is precisely the difficulty of factoring a large number $N$ into two primes (given the knowledge that it is the product of two primes). Currently it seems very difficult to factor large numbers. The RSA factoring challenges give monetary rewards for factoring such large $N$. The record (2017) is factoring a 768-bit (232 digit) number, RSA-768. For this reason, we may consider a 1024-bit number secure for now (i.e. $p$ and $q$ are 512-bit primes), or use a 2048-bit number if we are paranoid. If quantum computers develop sufficiently, they could make factoring large numbers easy, and RSA will have to be replaced by a quantum-secure protocol.
Web of Trust: Trust needs to begin somewhere. Every time Alice and Bob communicate, Alice should not come up with a new private key, and give Bob the new public key. For if they are far apart, how does Bob know he's receiving Alice's public key and not talking to an eavesdropper Eve? Instead, it is better for Alice to register (in person, perhaps) her public key at some time. She can create her private key $(p,q)$ and register the resulting public key $(N,e)$ with some "key authority" who checks her identity at the time. The key authority then stores everyone's public keys -- effectively they say "if you want to send a secure message to Alice, use the following public key: (..., ...)" Then Bob can consult the key authority when he wishes to communicate securely to Alice, and this private/public key combination can be used for years.
But, as one might guess, this kicks the trust question to another layer. How does Bob know he's communicating with the key authority? The key authorities need to have their own authentication mechanism, etc.. One way to avoid going down a rabbithole of mistrust is to distribute trust across a network of persons. Instead of a centralized "key authority", one can distribute one's public keys across an entire network of communicators (read about openPGP). Then Bob, if he wishes, can double-check Alice's public keys against the records of numerous members of the network -- assuming that hackers haven't gotten to all of them!
In practice, some implementations of RSA use a more centralized authority and others rely on a web of trust. Cryptography requires a clever application of modular arithmetic (in Diffie-Hellman, RSA, and many other systems), but also a meticulous approach to implementation. Often the challenges of implementation introduce new problems in number theory.
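To make the shared-prime danger from the key-generation paragraph concrete, here is a small sketch (the variable names are ours): if two public moduli happen to share a prime factor, a single GCD computation reveals it.
from math import gcd
def shared_factor(N, M):
    '''
    Returns a nontrivial common factor of the two public moduli N and M,
    or 1 if they share no factor. If N = p*q and M = q*r share the prime q,
    this returns q, and then p = N // q and r = M // q recover both private keys.
    '''
    return gcd(N, M)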
Digital signatures
A variant of RSA is used to "digitally sign" a document. Not worrying about keeping a message private for now, suppose that Alice wants to send Bob a message and sign the message in such a way that Bob can be confident it was sent by Alice.
Alice can digitally sign her message by first hashing the message and then encrypting the hash using her private key (previously the public key was used for encryption -- this is different!). Hashing a message is an irreversible process that turns a message of possibly long length into a nonsense-message of typically fixed length, in such a way that the original message cannot be recovered from the nonsense-message.
There is a science to secure hashing, which we don't touch on here. Instead, we import the sha512 hashing function from the Python standard package hashlib. The input to sha512 is a string of arbitrary length, and the output is a sequence of 512 bits. Here in Python 3, we also must specify the Unicode encoding (utf-8) for the string to be hashed. One can convert the 512 bits of hash into 64 bytes (512/8 = 64) which can be viewed as a 64-character string via ASCII. This 64-byte string is called the digest of the hash. It's all very biological. Note that many of the characters won't display very nicely, since they are out of the code range 32-126!
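To make the signing idea concrete, here is a minimal sketch using the notebook's mult_inverse helper and the sha512 hash; real implementations add padding and other safeguards, so treat this as an illustration only.
def RSA_sign(message, p, q, N, e):
    '''
    Signs a message: hash it, interpret the digest as a number,
    and raise it to the private exponent modulo N.
    '''
    h = int.from_bytes(sha512(message.encode('utf-8')).digest(), 'big') % N
    tot = N - (p + q) + 1
    f = mult_inverse(e, tot)  # the same multiplicative inverse used in RSA_decrypt
    return pow(h, f, N)
def RSA_verify(message, signature, N, e):
    '''
    Verifies a signature using only the public key (N, e).
    '''
    h = int.from_bytes(sha512(message.encode('utf-8')).digest(), 'big') % N
    return pow(signature, e, N) == h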
End of explanation |
1,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
File IO
These notebooks are used to check certain examples in the course notes, hence they are mainly for the benefit of the course developers, rather than a teaching resource.
Step1: Python's open function
Step2: we can use readline() method to read individual lines of a file. This method reads a file till the newline, including the newline character
Step3: The next time we run readline, it will start from where we left off
Step4: As opposed to much text, where line breaks signify a geometric limitation, in the world of data, lines often have a critical significance (i.e., an individual record).
The following examples show how we could extract individual lines and store them in a list.
Step5: When we are done with operations to the file, we need to properly close it. Python has a garbage collector to clean up unreferenced objects. But we must not rely on it to close the file. Closing a file will free up the resources that were tied with the file and is done using the close() method.
Step6: If an exception occurs when we are performing some operation with the file, the code exits without closing the file. The best way to guard against this is to use the with statement. This ensures that the file is closed when the block inside with is exited.
Step7: Numpy
The last section showed how files can be read into Python as strings. What about when our data is numeric? In previous lessons we have looked at the case of importing .csv files with purely numeric data. In this case the numpy methods np.loadtxt or np.genfromtxt can easily turn our data into a numeric numpy array. np.genfromtxt is a more robust version of loadtxt that can better handle missing data.
If our csv file contains both characters and numbers, as in the current example, the situation is a little more challenging.
First, let's try the naive approach
Step8: This failed because the header row has 11 fields while the data rows appear to have 12 fields... have a think about why that is.
If we ignore the header we can successfully import the data
Step9: Unfortunately this is still not a great solution. Our strings have become nans and our array has 12 columns because it interpreted the comma inside the name field as a column delimiter. Without resorting to even more complexity, it may be easiest at this stage to tell np.genfromtxt exactly which columns we want to use, thereby creating a purely numeric array
Step10: This is fine, and if you definitely want to use the numpy package, it may be the easiest way to get just the numeric data out of a delimited plain txt file (i.e., a csv).
But if your data consists of both number and characters, there is a more applicable package available in Python...
Pandas
Step11: Okay, that seemed to work pretty well. Pandas was even clever enough to interpret the comma in the names field as a data value, rather than a delimiter. Well done Pandas.
In the course we don't teach Pandas, but given we're here let's have a quick look at some out of the box tools.
Step12: There's a lot of useful info there! You can see immediately we have 418 entries (rows), and for most of the variables we have complete values (418 are non-null). But not for Age, Fare, or Cabin -- those have nulls somewhere.
Step13: This is also very useful | Python Code:
import numpy as np
import pandas as pd
#Go up one directory, as this is the 'current' directory assumed in the course notes
cd ..
!head -3 "data/titanic.csv"
!tail -3 "data/titanic.csv"
Explanation: File IO
These notebooks are used to check certain examples in the course notes, hence they are mainly for the benefit of the course developers, rather than a teaching resource.
End of explanation
#Python has a built-in function open() to open a file.
f = open("data/titanic.csv", mode = 'r')
print(type(f))
data = f.read() # read in all data until the end of file as a single string
print(type(data), len(data))
Explanation: Python's open function
End of explanation
f = open("data/titanic.csv", mode = 'r')
f.readline()
Explanation: we can use readline() method to read individual lines of a file. This method reads a file till the newline, including the newline character
End of explanation
f.readline()
Explanation: The next time we run readline, it will start from where we left off
End of explanation
f = open("data/titanic.csv", mode = 'r')
#We can read a file line-by-line using a for loop.
lines = []
for line in f:
    lines.append(line)
print(len(lines))
#Or use the readlines() method, which returns a list of the remaining lines of the file.
f = open("data/titanic.csv", mode = 'r')
lines = f.readlines()
print(len(lines))
Explanation: As opposed to much text, where line breaks signify a geometric limitation, in the world of data, lines often have a critical significance (i.e., an individual record).
The following examples show how we could extract individual lines and store them in a list.
End of explanation
f.close()
print(type(f))
Explanation: When we are done with operations to the file, we need to properly close it. Python has a garbage collector to clean up unreferenced objects. But we must not rely on it to close the file. Closing a file will free up the resources that were tied with the file and is done using the close() method.
End of explanation
with open("data/titanic.csv",encoding = 'utf-8') as f:
lines = f.readlines()
print(len(lines))
f.read()  # raises ValueError: I/O operation on closed file -- the with block has already closed it
Explanation: If an exception occurs when we are performing some operation with the file, the code exits without closing the file. The best way to guard against this is to use the with statement. This ensures that the file is closed when the block inside with is exited.
End of explanation
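For comparison, here is the same guarantee written out by hand; the with statement above is just a tidier version of this try/finally pattern:
f = open("data/titanic.csv", encoding='utf-8')
try:
    lines = f.readlines()
    print(len(lines))
finally:
    f.close()  # runs even if an exception is raised in the try block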
data = np.genfromtxt("data/titanic.csv", delimiter=",")
Explanation: Numpy
The last section showed how files can be read into Python as strings. What about when our data is numeric? In previous lessons we have looked at the case of importing .csv files with purely numeric data. In this case the numpy methods np.loadtxt or np.genfromtxt can easily turn our data into a numeric numpy array. np.genfromtxt is a more robust version of loadtxt that can better handle missing data.
If our csv file contains both characters and numbers, as in the current example, the situation is a little more challenging.
First, let's try the naive approach:
End of explanation
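As a quick illustration of the purely numeric case mentioned above, np.loadtxt handles a numbers-only input in one line (using a small in-memory example here rather than a real file):
from io import StringIO
toy_csv = StringIO("1.0,2.0,3.0\n4.0,5.0,6.0")
np.loadtxt(toy_csv, delimiter=",")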
data = np.genfromtxt("data/titanic.csv", delimiter=",", skip_header=1)
data.shape
data[0, :]
Explanation: This failed because the header row has 11 fields while the data rows appear to have 12 fields... have a think about why that is.
If we ignore the header we can successfully import the data
End of explanation
!head -2 "data/titanic.csv"
data = np.genfromtxt("data/titanic.csv", delimiter=",", skip_header=1, usecols=tuple((0,1,5,6,7,8,9)))
data[0,:]
Explanation: Unfortunately this is still not a great solution. Our strings have become nans and our array has 12 columns because it interpreted the comma inside the name field as a column delimiter. Without resorting to even more complexity, it may be easiest at this stage to tell np.genfromtxt exactly which columns we want to use, thereby creating a purely numeric array:
End of explanation
# For .read_csv, always use header=0 when you know row 0 is the header row
df = pd.read_csv("data/titanic.csv", header=0)
print(df.shape, type(df))
Explanation: This is fine, and if you definitely want to use the numpy package, it may be the easiest way to get just the numeric data out of a delimited plain txt file (i.e., a csv).
But if your data consists of both number and characters, there is a more applicable package available in Python...
Pandas
End of explanation
df.info()
Explanation: Okay, that seemed to work pretty well. Pandas was even clever enough to interpret the comma in the names field as a data value, rather than a delimiter. Well done Pandas.
In the course we don't teach Pandas, but given we're here let's have a quick look at some out of the box tools.
End of explanation
df.describe()
Explanation: There's a lot of useful info there! You can see immediately we have 418 entries (rows), and for most of the variables we have complete values (418 are non-null). But not for Age, Fare, or Cabin -- those have nulls somewhere.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
df['Age'].dropna().hist(bins=16, range=(0,80), alpha = .5)
plt.show()
Explanation: This is also very useful: pandas has taken all of the numerical columns and quickly calculated the mean, std, minimum and maximum value. Convenient! But also a word of caution: we know there are a lot of missing values in Age, for example. How did pandas deal with that? It must have left out any nulls from the calculation. So if we start quoting the "average age on the Titanic" we need to caveat how we derived that number.
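A quick way to check that caveat for ourselves:
print(df['Age'].isnull().sum(), "missing Age values out of", len(df))
print(df['Age'].mean())  # pandas skips the NaNs when computing the mean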
End of explanation |
1,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BUILDING A RECOMMENDER SYSTEM ON USER-USER COLLABORATIVE FILTERING (MOVIELENS DATASET)
We will load the data sets first.
Step1: We will use the file u.data first as it contains User ID, Movie IDs and Ratings. These three elements are all we need for determining the similarity of the users based on their ratings for a particular movie. I will first sort the DataFrame by User ID and then we are going to split the data-set into a training set and a test set (I just need one user for the training).
Step2: We convert them to a NumPy Array for ease of iteration!
Step3: Create a users_list, which is a list where each entry contains the list of movies rated by that user. Unfortunately, this part is going to add a lot to the program's run time!
Step4: Define a Function by the Name of EucledianScore. The purpose of the EucledianScore is to measure the similarity between two users based on their ratings given to movies that they have both in common. But what if the users have just one movie in common? In my opinion having more movies in common is a great sign of similarity. So if users have less than 4 movies in common then we assign them a high EucledianScore.
Step5: Now we will iterate over users_list and find the similarity of the users to the test_user by means of this function and append the EucledianScore along with the User ID to a separate list score_list. We then convert it first to a DataFrame, sort it by the EucledianScore and finally convert it to a NumPy Array score_matrix for the ease of iteration.
Step6: Now we see that the user with ID 310 has the lowest Eucledian score and hence the highest similarity. So now we need to obtain the list of movies that are not common between the two users. Make two lists: the full list of movies rated by user 310, and the list of movies the two users have in common. Convert these lists into sets and take the difference to get the list of movies to be recommended.
Step7: Now we need to create a compiled list of the movies along with their mean ratings. Merge the item and data files. Then group by movie title, select the columns you need, and then find the mean rating of each movie. Then express the dataframe as a NumPy Array.
Step8: Now we find the movies on item_list by IDs from recommendation. Then append them to a separate list. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
#column headers for the dataset
data_cols = ['user id','movie id','rating','timestamp']
item_cols = ['movie id','movie title','release date','video release date','IMDb URL','unknown','Action',
'Adventure','Animation','Childrens','Comedy','Crime','Documentary','Drama','Fantasy','Film-Noir','Horror',
'Musical','Mystery','Romance ','Sci-Fi','Thriller','War' ,'Western']
user_cols = ['user id','age','gender','occupation','zip code']
#importing the data files onto dataframes
users = pd.read_csv('ml-100k/u.user', sep='|', names=user_cols, encoding='latin-1')
item = pd.read_csv('ml-100k/u.item', sep='|', names=item_cols, encoding='latin-1')
data = pd.read_csv('ml-100k/u.data', sep='\t', names=data_cols, encoding='latin-1')
Explanation: BUILDING A RECOMMENDER SYSTEM ON USER-USER COLLABORATIVE FILTERING (MOVIELENS DATASET)
We will load the data sets first.
End of explanation
utrain = (data.sort_values('user id'))[:99832]
print(utrain.tail())
utest = (data.sort_values('user id'))[99833:]
print(utest.head())
Explanation: We will use the file u.data first as it contains User ID, Movie IDs and Ratings. These three elements are all we need for determining the similarity of the users based on their ratings for a particular movie. I will first sort the DataFrame by User ID and then we are going to split the data-set into a training set and a test set (I just need one user for the training).
End of explanation
utrain = utrain.as_matrix(columns = ['user id', 'movie id', 'rating'])
utest = utest.as_matrix(columns = ['user id', 'movie id', 'rating'])
Explanation: We convert them to a NumPy Array for ease of iteration!
End of explanation
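Note that .as_matrix() was removed in pandas 1.0; on a recent pandas the equivalent is to select the columns and call .to_numpy(), e.g.:
# utrain = utrain[['user id', 'movie id', 'rating']].to_numpy()
# utest = utest[['user id', 'movie id', 'rating']].to_numpy()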
users_list = []
for i in range(1,943):
list = []
for j in range(0,len(utrain)):
if utrain[j][0] == i:
list.append(utrain[j])
else:
break
utrain = utrain[j:]
users_list.append(list)
Explanation: Create a users_list, which is a list where each entry contains the list of movies rated by that user. Unfortunately, this part is going to add a lot to the program's run time!
End of explanation
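If the nested loop above is too slow, a rough alternative (run instead of that loop, since the loop consumes utrain) is to group the rows in a single pass with a dictionary; this assumes the same [user id, movie id, rating] row layout:
from collections import defaultdict
grouped = defaultdict(list)
for row in utrain:
    grouped[int(row[0])].append(row)
users_list_fast = [grouped[i] for i in range(1, 943)]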
def EucledianScore(train_user, test_user):
sum = 0
count = 0
for i in test_user:
score = 0
for j in train_user:
if(int(i[1]) == int(j[1])):
score= ((float(i[2])-float(j[2]))*(float(i[2])-float(j[2])))
count= count + 1
sum = sum + score
if(count<4):
sum = 1000000
return(math.sqrt(sum))
Explanation: Define a Function by the Name of EucledianScore. The purpose of the EucledianScore is to measure the similarity between two users based on their ratings given to movies that they have both in common. But what if the users have just one movie in common? In my opinion having more movies in common is a great sign of similarity. So if users have less than 4 movies in common then we assign them a high EucledianScore.
End of explanation
score_list = []
for i in range(0,942):
score_list.append([i+1,EucledianScore(users_list[i], utest)])
score = pd.DataFrame(score_list, columns = ['user id','Eucledian Score'])
score = score.sort_values(by = 'Eucledian Score')
print(score)
score_matrix = score.as_matrix()
Explanation: Now we will iterate over users_list and find the similarity of the users to the test_user by means of this function and append the EucledianScore along with the User ID to a separate list score_list. We then convert it first to a DataFrame, sort it by the EucledianScore and finally convert it to a NumPy Array score_matrix for the ease of iteration.
End of explanation
user= int(score_matrix[0][0])
common_list = []
full_list = []
for i in utest:
for j in users_list[user-1]:
if(int(i[1])== int(j[1])):
common_list.append(int(j[1]))
full_list.append(j[1])
common_list = set(common_list)
full_list = set(full_list)
recommendation = full_list.difference(common_list)
Explanation: Now we see that the user with ID 310 has the lowest Eucledian score and hence the highest similarity. So now we need to obtain the list of movies that are not common between the two users. Make two lists: the full list of movies rated by user 310, and the list of movies the two users have in common. Convert these lists into sets and take the difference to get the list of movies to be recommended.
End of explanation
item_list = (((pd.merge(item,data).sort_values(by = 'movie id')).groupby('movie title')))['movie id', 'movie title', 'rating']
item_list = item_list.mean()
item_list['movie title'] = item_list.index
item_list = item_list.as_matrix()
Explanation: Now we need to create a compiled list of the movies along with their mean ratings. Merge the item and data files. Then group by movie title, select the columns you need, and then find the mean rating of each movie. Then express the dataframe as a NumPy Array.
End of explanation
recommendation_list = []
for i in recommendation:
recommendation_list.append(item_list[i-1])
recommendation = (pd.DataFrame(recommendation_list,columns = ['movie id','mean rating' ,'movie title'])).sort_values(by = 'mean rating', ascending = False)
print(recommendation[['mean rating','movie title']])
Explanation: Now we find the movies on item_list by IDs from recommendation. Then append them to a separate list.
End of explanation |
1,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Checking standard usage
Step1: NOTE
easyplot crashes if nicknames are not given
Step2: Loading journal
Step3: NOTE
should rename index to cell_name instead of filename
Step4: Note
need to implement a way to export journals to xlsx
maybe we also need to allow for reading meta and session from xlsx journals
figure out or modify easyplot so that mass and other parameters can be given to it
figure out a way to save an easyplot session with session and metadata
Save journal to xlsx
Step5: loading xlsx journal | Python Code:
files = [f1, f2]
names = [f1.name, f2.name]
ezplt = easyplot.EasyPlot(files, names, figtitle="Test1")
ezplt.plot()
Explanation: Checking standard usage
End of explanation
easyplot.EasyPlot(
files,
names,
figtitle="Test2",
galvanostatic_normalize_capacity=True,
all_in_one=True,
dqdv_plot=True,
).plot()
f1.with_name(f"{f1.stem}_tmp.xlsx")
easyplot.help()
Explanation: NOTE
easyplot crashes if nicknames are not given
End of explanation
journal_file = Path("../../dev_data/db/test_journal.xlsx")
journal_file.is_file()
j = LabJournal(db_reader=None)
j.from_file(journal_file, paginate=False)
j.pages
Explanation: Loading journal
End of explanation
rawfiles = j.pages.raw_file_names.to_list()
names = j.pages.label.to_list()
masses = j.pages.mass.to_list()
easyplot.EasyPlot(
rawfiles,
names,
figtitle="Test3",
galvanostatic_normalize_capacity=True,
all_in_one=True,
dqdv_plot=True,
).plot()
outfile = journal_file
j.to_file(outfile, to_project_folder=False)
Explanation: NOTE
should rename index to cell_name instead of filename
End of explanation
# first draft of the structure (overwritten by the cleaner version below)
bad_cycle_numbers = {
    "cell_name": ['20160805_test001_45_cc', '20160805_test001_45_cc', '20160805_test001_45_cc', '20160805_test001_47_cc', '20160805_test001_47_cc'],
    '20160805_test001_45_cc': [4, 337, 338],
    '20160805_test001_47_cc': [7, 8, 9, 33],
}
bad_cycle_numbers = {
"20160805_test001_45_cc": [4, 337, 338],
"20160805_test001_47_cc": [7, 8, 9, 33],
}
bad_cells = ["20160805_test001_45_cc", "another_cell_000_cc"]
notes = ["one comment for the road", "another comment", "a third comment"]
session0 = {
"bad_cycle_numbers": bad_cycle_numbers,
"bad_cells": bad_cells,
"notes": notes,
}
meta = j._prm_packer()
meta
session
import pandas as pd
l_bad_cycle_numbers = []
for k, v in bad_cycle_numbers.items():
l_bad_cycle_numbers.append(pd.DataFrame(data=v, columns=[k]))
df_bad_cycle_numbers = (
pd.concat(l_bad_cycle_numbers, axis=1)
.melt(var_name="cell_name", value_name="cycle_index")
.dropna()
)
df_bad_cycle_numbers
df_bad_cells = pd.DataFrame(bad_cells, columns=["cell_name"])
df_bad_cells
df_notes = pd.DataFrame(notes, columns=["txt"])
df_notes
len(meta)
df_meta = pd.DataFrame(meta, index=[0]).melt(var_name="parameter", value_name="value")
df_meta
session = pd.concat(
[df_bad_cycle_numbers, df_bad_cells, df_notes],
axis=1,
keys=["bad_cycle_number", "bad_cells", "notes"],
)
session
file_name = Path("out.xlsx")
pages = j.pages
# saving journal to xlsx
try:
with pd.ExcelWriter(file_name, mode="w", engine="openpyxl") as writer:
pages.to_excel(writer, sheet_name="pages", engine="openpyxl")
# no index is not supported for multi-index (update to index=False when pandas implements it):
session.to_excel(writer, sheet_name="session", engine="openpyxl")
df_meta.to_excel(writer, sheet_name="meta", engine="openpyxl", index=False)
except PermissionError as e:
    print(f"Could not save journal to xlsx ({e})")
Explanation: Note
need to implement a way to export journals to xlsx
maybe we also need to allow for reading meta and session from xlsx journals
figure out or modify easyplot so that mass and other parameters can be given to it
figure out a way to save an easyplot session with session and metadata
Save journal to xlsx
End of explanation
import tempfile
import shutil
# loading journal from xlsx: pages
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
pages2 = pd.read_excel(temporary_file_name, sheet_name="pages", engine="openpyxl")
except PermissionError as e:
print(f"Could not load journal to xlsx ({e})")
pages2
# loading journal from xlsx: session
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
session2 = pd.read_excel(
temporary_file_name, sheet_name="session", engine="openpyxl", header=[0, 1]
)
except PermissionError as e:
print(f"Could not load journal to xlsx ({e})")
session2
bcn2 = {
l: list(sb["cycle_index"].values)
for l, sb in session2["bad_cycle_number"].groupby("cell_name")
}
bcn2
bc2 = list(session2["bad_cells"].dropna().values.flatten())
bc2
n2 = list(session2["notes"].dropna().values.flatten())
n2
session3 = {"bad_cycle_numbers": bcn2, "bad_cells": bc2, "notes": n2}
session0
session3
session0 == session3
# loading journal from xlsx: meta
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
df_meta2 = pd.read_excel(
temporary_file_name, sheet_name="meta", engine="openpyxl", index_col="parameter"
)
except PermissionError as e:
print(f"Could not load journal to xlsx ({e})")
meta2 = df_meta2.to_dict()["value"]
meta2
meta
pages2
session2
df_meta2
j._prm_packer()
Explanation: loading xlsx journal
End of explanation |
1,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical Modeling
Reference
Step1: We need to give an initial wave which is a function of $x$ (remember, $u(x,0)=u_0(x)$). We can easily choose a step-function for the velocity
Step2: Breakout
Step3: Note
Step4: Breakout
Step5: Breakout
Step6: Shown above, we see that this does not look like our original step-function wave. What happened? We broke stability (the wave travels a distance in one time step, $\Delta t$, that is greater than our spatial step, $dx$). To maintain stability, we need to enforce the condition
Step7: Step 4
Step8: Step 5
Step9: Continued
Step10: This is a "saw-tooth function" to which we have applied our periodic boundary conditions. Steps 1 & 2 continually move the plot off the screen, but with periodic boundary conditions, it will wrap around to the front again. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# spatial grid
nx = 41 # try changing from 41 to 81
dx = 2./(nx-1) #dx = delta x
nt = 20
dt = nt/1000. #dt = delta t
c = 1. # wavespeed
Explanation: Numerical Modeling
Reference: 12 steps to Navier-Stokes.
Given the broad topics of "Modeling" and "Statistics" for a 1-hour lecture, I could think of no better way than for one, recently introduced into the Python language, to build a numerical model from scratch using the core language and utilities. Once derived, this model will present itself to perturbations, extensions, and of course comparisons (i.e., statistics) with other models.
Objective: Create numerical model(s) and briefly compare with expected solution.
Step 1: Linear Convection
$$\frac{\partial{u}}{\partial{t}} + c\frac{\partial{u}}{\partial{x}} = 0\text{,}\hspace{12pt}\text{the 1-D Linear Convection equation.}$$
This equation above represents a wave uniformly propagating (not changing shape in time) with a speed of $c$. We then can describe the initial wave as
Initial Condition (wave):$u(x,0) = u_0(x)$
For the sake of simplicity,
Exact Solution: $u(x,t) = u_0(x-ct)$
But, what if this equation was something worse? How can knowing Python help us? A part of modeling is being able to approximate the solution using finite difference schemes. Going back to the definition of a derivative (and removing the limit):
$$\frac{\partial{u}}{\partial{x}} \approx \frac{u(x+\Delta x)-u(x)}{\Delta x}$$
We then can show that the discrete version of our 1-D convection equation is
$$\frac{u_i^{n+1}-u_i^n}{\Delta t}+c\frac{u_i^n-u_{i-1}^n}{\Delta x} = 0$$
noting that we use a Forward Difference scheme for $t$ and a Backward Difference scheme for $x$. We then rearrange to solve for the next velocity in time (i.e., $u^{n+1}$):
$$u_i^{n+1} = u_i^n - c\frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n)\text{,}\hspace{10pt}\text{n=time, i=space}$$
or, we could write it as the following:
$$u(x,t+\Delta t) = u(x,t) - c\frac{\Delta t}{\Delta x}[u(x,t)-u(x-\Delta x, t)]$$
Time for some Python!
End of explanation
u = np.ones(nx)
u[.5/dx : 1./dx+1] = 2
print u
#visually
plt.plot(np.linspace(0, 2, nx), u);
Explanation: We need to give an initial wave which is a function of $x$ (remember, $u(x,0)=u_0(x)$). We can easily choose a step-function for the velocity:
$$u(x,0) = u_0(x) = \left\{\begin{array}{ll}2 & : 0.5 \le x \le 1 \\ 1 & : \text{elsewhere in }(0,2)\end{array}\right\}$$
Let's see this in Python!
End of explanation
un = np.ones(nx) #temporary n-th velocity array
for n in range(nt):
un = u.copy() # store the previous time-step's values
for i in range(1,nx):
#for i in range(nx): # next breakout question
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
Explanation: Breakout:
Why does this step-function (our initial velocity), have slanted lines? (Hint: What is the numpy linspace producing?)
Now, let's apply our finite-difference schemes to the convection equation given our new initial conditions to solve for our new velocity after some time has elapsed.
End of explanation
plt.plot(np.linspace(0, 2, nx), u);
Explanation: Note: This routine is actually quite inefficient, but we can improve this later.
End of explanation
def linearconv(nx):
dx = 2./(nx-1)
nt = 20
dt = 0.025
c = 1.
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
plt.plot(np.linspace(0,2,nx), u);
linearconv(41) # 41 grid points, same as Step 1
linearconv(71)
Explanation: Breakout:
What happens when we change the second for loop to the other one? Why did we have to start at i=1?
(Extra) Breakout:
What happens to the wave if we repeat the calculation for $u$ and plot? Can you explain why this is so? Try changing the parameters to make the wave go backwards.
(Extra! Extra!) Breakout:
If we change nx to 81, what happens to our $u$ plot after it advances in time?
Step 2: Non-linear Convection
Non-linear convection only changes the constant velocity to a varying velocity (i.e., $c \rightarrow u$).
$$\frac{\partial{u}}{\partial{t}} + u\frac{\partial{u}}{\partial{x}} = 0$$
Using the same discretization schemes as in Step 1, forward diff for time and backward diff for space, we have the following discretization equation:
$$\frac{u_i^{n+1}-u_i^n}{\Delta t}+u_i^n\frac{u_i^n-u_{i-1}^n}{\Delta x} = 0$$
yielding for the $u_i^{n+1}$ term:
$$u_i^{n+1} = u_i^n - u_i^n\frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n)$$
Homework:
The rest of this section is left up to you. Revise our for loop to now include the velocity term from the non-linear convection discretized equation, and produce a plot similar to the one below (a possible solution sketch follows the figure, if you want to check your attempt).
<img src="hw1.png">
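One possible solution sketch, commented out so you can try it yourself first -- the only change from Step 1 is that the constant wavespeed c becomes the local velocity un[i]:
# for n in range(nt):
#     un = u.copy()
#     for i in range(1, nx):
#         u[i] = un[i] - un[i]*(dt/dx)*(un[i] - un[i-1])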
Step 3: CFL Condition
End of explanation
linearconv(85)
Explanation: Breakout:
Start changing the number of grid points using our function and see what happens to our wave. What happens when you choose a large number? What happens when you choose a small number?
End of explanation
def linearconv(nx):
dx = 2./(nx-1)
nt = 20
c = 1.
sigma = .5 # for nx=41, we get sigma=0.5 from dt/dx = 0.025/(2./(nx-1))
dt = sigma*dx
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
plt.plot(np.linspace(0,2,nx), u);
linearconv(85)
linearconv(201) # as we increase nx, our time window shortens due to a smaller dt
Explanation: Shown above, we see that this does not look like our original step-function wave. What happened? We broke stability (the wave travels a distance in one time step, $\Delta t$, that is greater than our spatial step, $dx$). To maintain stability, we need to enforce the condition:
$$\sigma = \frac{u\Delta t}{\Delta x}\le \sigma_\text{max}$$
$\sigma$ is called the Courant number; keeping it below $\sigma_\text{max}$ ensures stability for our wave solution.
End of explanation
nx = 41
dx = 2./(nx-1)
nt = 20
nu = 0.3 #the value of viscosity
sigma = .2 # notice the different sigma value
dt = sigma*dx**2/nu
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx-1):
u[i] = un[i] + nu*dt/dx**2*(un[i+1]-2*un[i]+un[i-1])
plt.plot(np.linspace(0,2,nx), u);
Explanation: Step 4: 1-D Diffusion
$$\frac{\partial{u}}{\partial{t}}=\nu\frac{\partial^2{u}}{\partial{x^2}}\text{,}\hspace{12pt}\text{the 1-D Diffusion equation.}$$
Since we now have a second-order differential, and to save some time, the discretized version of the diffusion equation is as follows:
$$\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\nu\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^2}$$
and then arranged to solve for the only unknown ($u_i^{n+1}$),
$$u_{i}^{n+1}=u_{i}^{n}+\nu\frac{\Delta t}{\Delta x^2}(u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n})$$
Note: We used the Central Difference scheme on the second-order deriviative which is a combination of the Forward Difference and Backward Difference of the first derivative.
End of explanation
import numpy as np
import sympy
from sympy import init_printing
init_printing(use_latex=True) # output to be rendered as LaTeX
x,nu,t = sympy.symbols('x nu t')
phi = sympy.exp(-(x-4*t)**2/(4*nu*(t+1))) + sympy.exp(-(x-4*t-2*np.pi)**2/(4*nu*(t+1)))
phi
phiprime = phi.diff(x)
phiprime
print phiprime # shows Pythonic version
from sympy.utilities.lambdify import lambdify
u = -2*nu*(phiprime/phi)+4
# we are sending variables to a function
ufunc = lambdify((t, x, nu), u)
print ufunc(1,4,3)
Explanation: Step 5: Burgers' Equation
$$\frac{\partial{u}}{\partial{t}}+u\frac{\partial{u}}{\partial{x}}=\nu\frac{\partial^2{u}}{\partial{x^2}}\text{,}\hspace{12pt}\text{Burgers' equation.}$$
which is a combination of non-linear convection and diffusion. Once again, we will draw from previous discretizations to obtain our discretized differential equation:
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}$$
Yet again solving for the unknown $u_i^{n+1}$
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$
Whew! That looks scary, right?
Our initial condition for this problem is going to be:
\begin{eqnarray}
u(x,0) &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\
\phi &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)
\end{eqnarray}
This has an analytical solution, given by:
\begin{eqnarray}
u(x,t) &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\
\phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)
\end{eqnarray}
Our boundary condition will be:
$$u(0) = u(2\pi)$$
This is called a periodic boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
Aside: SymPy
To save some time and frustrations, we are going to use a symbolic math library for Python. Let's explore it some:
End of explanation
###variable declarations
nx = 101
nt = 100
dx = 2*np.pi/(nx-1)
nu = .07
dt = dx*nu
x = np.linspace(0, 2*np.pi, nx)
un = np.empty(nx)
t = 0
u = np.asarray([ufunc(t, x0, nu) for x0 in x])
u
plt.figure(figsize=(11,7), dpi=100)
plt.plot(x, u, marker='o', lw=2)
plt.xlim([0,2*np.pi])
plt.ylim([0,10]);
Explanation: Continued: Burgers' Equation
End of explanation
for n in range(nt):
un = u.copy()
for i in range(nx-1):
u[i] = un[i] - un[i] * dt/dx *(un[i] - un[i-1]) + nu*dt/dx**2*\
(un[i+1]-2*un[i]+un[i-1])
u[-1] = un[-1] - un[-1] * dt/dx * (un[-1] - un[-2]) + nu*dt/dx**2*\
(un[0]-2*un[-1]+un[-2])
u_analytical = np.asarray([ufunc(nt*dt, xi, nu) for xi in x])
plt.figure(figsize=(11,7), dpi=100)
plt.plot(x,u, marker='o', lw=2, label='Computational')
plt.plot(x, u_analytical, label='Analytical')
plt.xlim([0,2*np.pi])
plt.ylim([0,10])
plt.legend();
Explanation: This is a "saw-tooth function" to which we have applied our periodic boundary conditions. Steps 1 & 2 continually move the plot off the screen, but with periodic boundary conditions, it will wrap around to the front again.
End of explanation |
1,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Core data model
Xarray-Beam tries to make it straightforward to write distributed pipelines with Xarray objects, but unlike libraries like Xarray with Dask or Dask/Spark DataFrames, it doesn't hide the distributed magic inside high-level objects.
Xarray-Beam is a lower-level tool. You will be manipulating large datasets piece-by-piece yourself, and you as the developer will be responsible for maintaining Xarray-Beam's internal invariants. This means that to successfully use Xarray-Beam, you will need to understand how it represents distributed datasets.
This responsibility requires a bit more coding and understanding, but offers benefits in performance and flexibility. This brief tutorial will show you how.
We'll start off with some standard imports
Step1: Keys in Xarray-Beam
Xarray-Beam is designed around the model that every stage in your Beam pipeline could be stored in a single xarray.Dataset object, but is instead represented by a distributed beam PCollection of smaller xarray.Dataset objects, distributed in two possible ways
Step2: Or given an existing {py
Step3: {py
Step4: Let's take a look at the entries, which are lazily constructed with the generator
Step5: ```{note}
There are multiple valid ways to represent a chunk of a larger dataset with a `Key`.
Offsets for unchunked dimensions are optional. Because all chunks have the same offset along the y axis, including y in offsets is not required as long as we don't need to create multiple chunks along that dimension.
Indicating variables is optional, if all chunks have the same variables. We could have set vars={'foo', 'bar'} on each of these Key objects instead of vars=None. This would be an equally valid representation of the same records, since all of our datasets have the same variables.
```
We now have the inputs we need to use Xarray-Beam's helper functions and PTransforms. For example, we can fully consolidate chunks & variables to see what single xarray.Dataset these values would correspond to
Step6: To execute with Beam, of course, we need to turn Python lists/generators into Beam PCollections, e.g., with beam.Create()
Step7: Writing pipelines
Transforms in Xarray-Beam typically act on (key, value) pairs of (xbeam.Key, xarray.Dataset). For example, we can dump our dataset on disk in the scalable Zarr format using {py
Step8: Xarray-Beam doesn't try to provide transformations for everything. In particular, it omits most embarrassingly parallel operations that can be performed independently on each chunk of a larger dataset. You can write these yourself using beam.Map.
For example, consider elementwise arithmetic. We can write a lambda function that acts on each key-value pair updating the xarray.Dataset objects appropriately, and put it into an Xarray-Beam pipeline using beam.MapTuple
Step9: For operations that add or remove (unchunked) dimensions, you may need to update Key objects as well to maintain the Xarray-Beam invariants, e.g., if we want to remove the y dimension entirely | Python Code:
import apache_beam as beam
import numpy as np
import xarray_beam as xbeam
import xarray
Explanation: Core data model
Xarray-Beam tries to make it straightforward to write distributed pipelines with Xarray objects, but unlike libraries like Xarray with Dask or Dask/Spark DataFrames, it doesn't hide the distributed magic inside high-level objects.
Xarray-Beam is a lower-level tool. You will be manipulating large datasets piece-by-piece yourself, and you as the developer will be responsible for maintaining Xarray-Beam's internal invariants. This means that to successfully use Xarray-Beam, you will need to understand how it represents distributed datasets.
This responsibility requires a bit more coding and understanding, but offers benefits in performance and flexibility. This brief tutorial will show you how.
We'll start off with some standard imports:
End of explanation
key = xbeam.Key({'x': 0, 'y': 10}, vars=None)
key
Explanation: Keys in Xarray-Beam
Xarray-Beam is designed around the model that every stage in your Beam pipeline could be stored in a single xarray.Dataset object, but is instead represented by a distributed beam PCollection of smaller xarray.Dataset objects, distributed in two possible ways:
Distinct variables in a Dataset may be separated across multiple records.
Individual arrays can also be split into multiple chunks, similar to those used by dask.array.
To keep track of how individual records could be combined into a larger (virtual) dataset, Xarray-Beam defines a {py:class}~xarray_beam.Key object. Key objects consist of:
offsets: integer offsets for chunks from the origin in an immutabledict
vars: The subset of variables included in each chunk, either as a frozenset, or as None to indicate "all variables".
Making a {py:class}~xarray_beam.Key from scratch is simple:
End of explanation
key.replace(vars={'foo', 'bar'})
key.with_offsets(x=None, z=1)
Explanation: Or given an existing {py:class}~xarray_beam.Key, you can easily modify it with replace() or with_offsets():
End of explanation
def create_records():
for offset in [0, 4]:
key = xbeam.Key({'x': offset, 'y': 0})
data = 2 * offset + np.arange(8).reshape(4, 2)
chunk = xarray.Dataset({
'foo': (('x', 'y'), data),
'bar': (('x', 'y'), 100 + data),
})
yield key, chunk
Explanation: {py:class}~xarray_beam.Key objects don't do very much. They are just simple structs with two attributes, along with various special methods required to use them as dict keys or as keys in Beam pipelines. You can find more examples of manipulating keys in its docstring ({py:class}~xarray_beam.Key).
Creating PCollections
The standard inputs & outputs for Xarray-Beam are PCollections of (xbeam.Key, xarray.Dataset) pairs. Xarray-Beam provides a bunch of PCollections for typical tasks, but many pipelines will still involve some manual manipulation of Key and Dataset objects, e.g., with builtin Beam transforms like beam.Map.
To start off, let's write a helper function for creating our first collection from scratch:
End of explanation
inputs = list(create_records())
inputs
Explanation: Let's take a look at the entries, which are lazily constructed with the generator:
End of explanation
xbeam.consolidate_fully(inputs)
Explanation: ```{note}
There are multiple valid ways to represent a chunk of a larger dataset with a `Key`.
Offsets for unchunked dimensions are optional. Because all chunks have the same offset along the y axis, including y in offsets is not required as long as we don't need to create multiple chunks along that dimension.
Indicating variables is optional, if all chunks have the same variables. We could have set vars={'foo', 'bar'} on each of these Key objects instead of vars=None. This would be an equally valid representation of the same records, since all of our datasets have the same variables.
```
We now have the inputs we need to use Xarray-Beam's helper functions and PTransforms. For example, we can fully consolidate chunks & variables to see what single xarray.Dataset these values would correspond to:
End of explanation
with beam.Pipeline() as p:
p | beam.Create(create_records()) | beam.Map(print)
Explanation: To execute with Beam, of course, we need to turn Python lists/generators into Beam PCollections, e.g., with beam.Create():
End of explanation
inputs | xbeam.ChunksToZarr('my-data.zarr')
Explanation: Writing pipelines
Transforms in Xarray-Beam typically act on (key, value) pairs of (xbeam.Key, xarray.Dataset). For example, we can dump our dataset on disk in the scalable Zarr format using {py:class}~xarray_beam.ChunksToZarr:
End of explanation
inputs | beam.MapTuple(lambda k, v: (k, v + 1))
Explanation: Xarray-Beam doesn't try to provide transformations for everything. In particular, it omits most embarrassingly parallel operations that can be performed independently on each chunk of a larger dataset. You can write these yourself using beam.Map.
For example, consider elementwise arithmetic. We can write a lambda function that acts on each key-value pair updating the xarray.Dataset objects appropriately, and put it into an Xarray-Beam pipeline using beam.MapTuple:
End of explanation
inputs | beam.MapTuple(lambda k, v: (k.with_offsets(y=None), v.mean('y')))
Explanation: For operations that add or remove (unchunked) dimensions, you may need to update Key objects as well to maintain the Xarray-Beam invariants, e.g., if we want to remove the y dimension entirely:
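Putting these pieces together, here is a sketch of one complete pipeline built from the same building blocks shown above (create the chunks, apply an elementwise tweak, and write the result to Zarr):
with beam.Pipeline() as p:
    (
        p
        | beam.Create(create_records())
        | beam.MapTuple(lambda k, v: (k, v + 1))
        | xbeam.ChunksToZarr('my-data.zarr')
    )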
End of explanation |
1,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 7
CHE 116
Step1: 2. Chemical Reaction (6 Points)
A set of first-order chemical reactions can be described by the following system of differential equations
Step2: 2.4 Answer
That eigenvector results in all derivatives being $0$, meaning the concentrations do not change
Step3: 3. Python Practice (20 Points)
All your functions must have docstrings for full credit.
[4] Create a button and text input, where the value of the text box is printed out. Make it so that the output area is cleared each time the button is pressed.
[4] Create a button that prints a random integer from 0 to 10 using the random.randint function.
[4] Make a list of strings. Using your button from part 2, now have it print a random string from your list.
[8] Take the following matrix
Step4: 4. Integration (12 Points)
Compute the following integrals using scipy. Report all your answers using display.Latex and only three decimal places.
$$\int_0^1 \sin^2(x)\, dx$$
$$\int_0^\infty x^{-2}\, dx$$
Integrate the normal distribution with $\sigma = 2$, $\mu = -4$ from $-2$ to $2$. Do not use scipy.stats
Repeat part 1 but use the trapezoidal rule instead of scipy
Step5: 5. Numerical Integration/Differentiation (12 Points)
Compute and plot the numerical derivatives of the data given in the next cell. Use a for loop and the central difference rule
Repeat part 1 using numpy arrays.
Compute the integral of the numerical data using the trapezoidal rule. Use a for loop.
Repeat part 3 with numpy | Python Code:
import numpy as np
from numpy import linalg
#note z^2 doesn't affect our answer
a_matrix = [[6, 4,-1],\
[1, -1, 0],\
[2, -2, -1]]
b_matrix = [0, 6, -4]
#convert them to numpy arrays/matrices
np_a_matrix = np.array(a_matrix)
np_b_matrix = np.array(b_matrix).transpose()
#Solve the problem
np_a_inv = linalg.inv(np_a_matrix)
np_x_matrix = np_a_inv.dot(np_b_matrix)
#print the solution, making sure to use z
print(np_x_matrix[0], np_x_matrix[1], np.sqrt(np_x_matrix[2]))
Explanation: Homework 7
CHE 116: Numerical Methods and Statistics
Prof. Andrew White
Version 1.0 (2/22/2016)
1. System of Equations (5 Points)
Solve this system of equations using the linalg package. Answer in Python.
$$\begin{array}{ll}
6 x + 4 y &= z^2\\
x - y &= 6 \\
2x - 2y &= z^2 - 4 \\
\end{array}$$
End of explanation
c_mat = np.array([[-2, 1,1], [2, -4, 0], [0, 3, -1]])
e_l, e_v = linalg.eig(c_mat)
for i in range(3):
print(e_l[i], e_v[:, i])
Explanation: 2. Chemical Reaction (6 Points)
A set of first-order chemical reactions can be described by the following system of differential equations:
$$\begin{array}{lr}
\cfrac{dC_1(t)}{dt} = & -2 C_1(t) + C_2(t) + C_3(t)\\
\cfrac{dC_2(t)}{dt} = & 2 C_1(t) - 4 C_2(t)\\
\cfrac{dC_3(t)}{dt} = & 3 C_2(t) - C_3(t)\\
\end{array}$$
Answer the following questions:
[1] Write down this system of ODEs using a matrix notation
[2] Mass is conserved in these equations. How can you tell from the matrix?
[1] Find the eigenvalues and eigenvectors for the coefficient matrix. It is not Hermitian. Use linalg.
[2] One of the eigenvalues is special. Make an argument, using math and python, for why this is significant with respect to equilibrium.
2.1 Answer
$$\left[\begin{array}{lcr}
-2 & 1 & 1\\
2 & -4 & 0\\
0 & 3 & -1\\
\end{array}\right]
\left[\begin{array}{c}
C_1(t)\\
C_2(t)\\
C_3(t)\\
\end{array}\right] =
\left[\begin{array}{c}
\cfrac{d\,C_1(t)}{dt}\\
\cfrac{d\,C_2(t)}{dt}\\
\cfrac{d\,C_3(t)}{dt}\\
\end{array}\right]$$
2.2 Answer
The amount of $C_1$ leaving has to equal the amount of $C_1$ coming in. This is guaranteed by the columns summing to 0.
2.3 Answer
End of explanation
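# A quick numerical check of the column-sum claim made in the 2.2 answer above:
print(c_mat.sum(axis=0))  # each column sums to 0, so total mass is conserved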
print(c_mat.dot(e_v[:,2]))
Explanation: 2.4 Answer
That eigenvector results in all derivatives being $0$, meaning the concentrations do not change
End of explanation
#3.1 Answer
from ipywidgets import widgets
from IPython import display
button = widgets.Button(description="Print")
text = widgets.Text('')
def clear_and_print(b):
display.clear_output()
print(text.value)
button.on_click(clear_and_print)
display.display(text)
display.display(button)
#3.2 Answer
import random
def randint(b):
display.clear_output()
print(random.randint(0,10))
button = widgets.Button(description='random')
button.on_click(randint)
display.display(button)
#3.3 Answer
my_strings = ['(⊙ᗜ⊙)', '╚═╏ ⇀ ͜ر ↼ ╏═╝', 'ლ(́◉◞౪◟◉‵ლ)', 'ლ(ʘ̆〰ʘ̆)ლ']
def rand_string(b):
display.clear_output()
    print(my_strings[random.randint(0, len(my_strings) - 1)])
button = widgets.Button(description='Raise your donger!')
button.on_click(rand_string)
display.display(button)
#3.4 Answer
mat = np.array([[3, 2, -6], [2, 6, 4], [3, 4, 0]])
e_l, e_v = linalg.eigh(mat)
def print_eig(i):
display.display(display.Latex('''
$$\\left[\\begin{{array}}{{c}}
{0:0.5}\\\\
{1:0.5}\\\\
{2:0.5}\\
\end{{array}}\\right]
$$'''.format(e_v[i, 0], e_v[i, 1], e_v[i, 2])))
widgets.interact(print_eig, i=(0,2,1))
Explanation: 3. Python Practice (20 Points)
All your functions must have docstrings for full credit.
[4] Create a button and text input, where the value of the text box is printed out. Make it so that the output area is cleared each time the button is pressed.
[4] Create a button that prints a random integer from 0 to 10 using the random.randint function.
[4] Make a list of strings. Using your button from part 2, now have it print a random string from your list.
[8] Take the following matrix: [[3, 2, -6], [2, 6, 4], [3, 4, 0]] and use an interaction widget to display its eigenvalues and eigenvectors. Your slider should go from 0 to 2 and each value should result in a latex display showing the eigenvalue and eigenvector. Note that Python eats {} in strings, so you'll have to use {{}}. This is called escaping. Python also eats many things that have a backslash. For example, \b means backspace to python. And \\ means \ in python. So you'll have to write \\ when you want LaTeX to see \ and in general use some trial in error about backslashes. You can never have too many though! For example, write \\begin{{array}} to start your matrix. Use three ''' for example''' to have a string that spans multiple lines. Summary comic. Practice getting the LaTeX correct before putting it all together.
End of explanation
#4.1 Answer
from scipy.integrate import quad
def fxn(x):
return np.sin(x)**2
ans, err = quad(fxn, 0, 1)
display.Latex('$$\int_0^1 \sin^2(x)\, dx = {0:.3}$$'.format(ans))
#4.2 Answer
ans,_ = quad(lambda x: x**-2, 0, np.infty)
display.Latex('$$\int_0^\infty x^{{-2}}\, dx = {:.3}$$'.format(ans))
#4.3 Answer
def pdf(x, mu=-4, sig=2):
return 1 / np.sqrt(sig**2 * 2 * np.pi) * np.exp(- (x - mu)**2 / (2 * sig**2))
ans,_ = quad(pdf, -2, 2)
display.Latex('$$\int_{{-2}}^{{2}} \\frac{{1}}{{\\sigma\\sqrt{{2\\pi}}}} e^{{-\cfrac{{(x - \\mu)^2}}{{2\\sigma^2}}}} = {:.3}$$'.format(ans))
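#4.4 Answer (a possible sketch, since no answer was given for this part: part 1 redone with the trapezoidal rule)
xs = np.linspace(0, 1, 1000)
ans = np.trapz(np.sin(xs)**2, xs)
display.Latex('$$\int_0^1 \sin^2(x)\, dx = {:.3}$$'.format(ans))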
Explanation: 4. Integration (12 Points)
Compute the following integrals using scipy. Report all your answers using display.Latex and only three decimal places.
$$\int_0^1 \sin^2(x)\, dx$$
$$\int_0^\infty x^{-2}\, dx$$
Integrate the normal distribution with $\sigma = 2$, $\mu = -4$ from $-2$ to $2$. Do not use scipy.stats
Repeat part 1 but use the trapezoidal rule instead of scipy
End of explanation
data_5_x = [0.0, 0.2857, 0.5714, 0.8571, 1.1429, 1.4286, 1.7143, 2.0, 2.2857, 2.5714, 2.8571, 3.1429, 3.4286, 3.7143, 4.0, 4.2857, 4.5714, 4.8571, 5.1429, 5.4286, 5.7143, 6.0, 6.2857, 6.5714, 6.8571, 7.1429, 7.4286, 7.7143, 8.0, 8.2857, 8.5714, 8.8571, 9.1429, 9.4286, 9.7143, 10.0, 10.2857, 10.5714, 10.8571, 11.1429, 11.4286, 11.7143, 12.0, 12.2857, 12.5714, 12.8571, 13.1429, 13.4286, 13.7143, 14.0]
data_5_y = [67.9925, 67.5912, 67.4439, 66.7896, 66.4346, 66.3176, 65.7527, 65.1487, 65.7247, 65.1831, 64.5981, 64.5213, 63.6746, 63.9106, 62.6127, 63.3892, 62.6511, 62.601, 61.9718, 60.5553, 61.5862, 61.3173, 60.5913, 59.7061, 59.6535, 58.9301, 59.346, 59.2083, 60.3429, 58.752, 57.6269, 57.5139, 59.0293, 56.7979, 56.2996, 56.4188, 57.1257, 56.1569, 56.3077, 55.893, 55.4356, 56.7985, 55.6536, 55.8353, 54.4404, 54.2872, 53.9584, 53.3222, 53.2458, 53.7111]
#5.1 Answer
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
deriv = []
for i in range(1, len(data_5_x) - 1):
forward = (data_5_y[i + 1] - data_5_y[i]) / (data_5_x[i + 1] - data_5_x[i])
backward = (data_5_y[i] - data_5_y[i - 1]) / (data_5_x[i] - data_5_x[i - 1])
deriv.append((forward + backward) / 2)
plt.plot(data_5_x[1:-1], deriv)
plt.show()
#5.2 Answer
x = np.array(data_5_x)
y = np.array(data_5_y)
forward = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
backward = forward
deriv = (forward[:-1] + backward[1:]) / 2.
plt.plot(x[1:-1], deriv)
plt.show()
#5.3 Answer
area = 0
for i in range(len(data_5_x) - 1):
width = data_5_x[i + 1] - data_5_x[i]
area += 0.5 * width * (data_5_y[i + 1] + data_5_y[i])
print(area)
#5.4 Answer
area = 0.5 * np.sum( (x[1:] - x[:-1]) * (y[1:] + y[:-1]) )
print(area)
Explanation: 5. Numerical Integration/Differentiation (12 Points)
Compute and plot the numerical derivatives of the data given in the next cell. Use a for loop and the central difference rule
Repeat part 1 using numpy arrays.
Compute the integral of the numerical data using the trapezoidal rule. Use a for loop.
Repeat part 3 with numpy
End of explanation |
1,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please set the DB_URI environment variable to point to the database
Step1: 1. Single Day Analysis
Step2: Portfolio Construction
using the EPS factor as the alpha factor;
short selling is forbidden;
the target volatility for the active weight is set at a 2.5% annualized level.
Step8: 2. Portfolio Construction | Python Code:
%matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PyFin.api import *
from alphamind.api import *
from alphamind.strategy.strategy import Strategy, RunningSetting
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
plt.style.use('ggplot')
Explanation: Set DB_URI in your environment variables to point to the database.
End of explanation
ref_date = '2020-01-02'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('hs300')
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EMA5D', codes, 300, industry='sw', risk_model='short')
all_styles = risk_styles + industry_styles + ['COUNTRY']
risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
Explanation: 1. Single Day Analysis
End of explanation
er = factor['EMA5D'].fillna(factor["EMA5D"].median()).values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025
risk_model = dict(cov=None, factor_cov=risk_cov/10000, factor_loading=risk_exposure, idsync=special_risk ** 2 / 10000.)
status, p_er, p_weight = \
target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000
# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
Explanation: Portfolio Construction
using the EMA5D factor as the alpha factor;
short selling is forbidden;
the volatility target for the active weight is set at a 2.5% annualized level.
End of explanation
Back test parameter settings
start_date = '2020-01-01'
end_date = '2020-02-21'
freq = '10b'
neutralized_risk = industry_styles
industry_name = 'sw'
industry_level = 1
risk_model = 'short'
batch = 0
horizon = map_freq(freq)
universe = Universe('hs300')
data_source = os.environ['DB_URI']
benchmark_code = 300
target_vol = 0.05
weights_bandwidth = 0.02
Factor Model
alpha_factors = {'f01': CSRank(LAST('EMA5D'))}
weights = dict(f01=1.)
alpha_model = ConstLinearModel(features=alpha_factors, weights=weights)
data_meta = DataMeta(freq=freq,
universe=universe,
batch=batch,
neutralized_risk=neutralized_risk,
risk_model='short',
pre_process=[winsorize_normal, standardize],
post_process=[standardize],
warm_start=0,
data_source=data_source)
Constraints settings
constraint_risk = ['SIZE', 'SIZENL', 'BETA']
total_risk_names = constraint_risk + ['benchmark', 'total']
b_type = []
l_val = []
u_val = []
previous_pos = pd.DataFrame()
rets = []
turn_overs = []
leverags = []
for name in total_risk_names:
if name == 'benchmark':
b_type.append(BoundaryType.RELATIVE)
l_val.append(0.8)
u_val.append(1.0)
else:
b_type.append(BoundaryType.ABSOLUTE)
l_val.append(0.0)
u_val.append(0.0)
bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)
Running Settings
running_setting = RunningSetting(weights_bandwidth=weights_bandwidth,
rebalance_method='tv',
bounds=bounds,
target_vol=target_vol)
Strategy run
strategy = Strategy(alpha_model,
data_meta,
universe=universe,
start_date=start_date,
end_date=end_date,
freq=freq,
benchmark=benchmark_code)
strategy.prepare_backtest_data()
ret_df, positions = strategy.run(running_setting)
ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),
title='Fixed freq rebalanced with target vol \
at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),
secondary_y='turn_over')
Explanation: 2. Portfolio Construction (backtest over 2020-01 to 2020-02)
End of explanation |
1,479 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
design a simple deep learning architecture for next word prediction
| Python Code::
# Imports below assume TensorFlow's Keras API; vocab_size is assumed to be the
# size of the tokenized vocabulary, defined elsewhere.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=1))   # embed each input word index into a 10-d vector
model.add(LSTM(1000, return_sequences=True))           # first LSTM layer (returns the full sequence)
model.add(LSTM(1000))                                  # second LSTM layer (returns the final state)
model.add(Dense(1000, activation="relu"))              # fully connected hidden layer
model.add(Dense(vocab_size, activation="softmax"))     # probability distribution over the next word
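# Optional usage sketch (not part of the original snippet): compile and train the model.
# X (input word indices) and y (one-hot next-word labels) are assumed to be prepared elsewhere.
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# model.fit(X, y, epochs=50, batch_size=64)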
|
1,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning to Use XGBoost
XGBoost is the leading model for working with standard tabular data (the type of data you store in pandas DataFrames, as opposed to more exotic types of data like images and videos).
XGBoost models do well in many Kaggle competitions.
To reach peak accuracy, XGBoost models require more knowledge and model tuning than techniques like Random Forest. After this tutorial, you'll be able to
Step1: Now we can build and fit a model just as we would in sklearn
Step2: And now on to evaluating the model and making predictions, also like in scikit-learn.
Step3: Model Tuning
XGBoost has a number of parameters that can dramatically affect your model's accuracy and speed.
Some significant parameters are
Step4: When using early_stopping_rounds, you need to set aside some of your data for checking the number of rounds to use.
If you later want to fit a model with all of your data, set n_estimators to whatever value you found to be optimal when run with early stopping.
learning_rate
Here's a subtle but important trick for better XGBoost models | Python Code:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
data = pd.read_csv('input/train.csv')
data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = data.SalePrice
X = data.drop(['SalePrice'], axis=1).select_dtypes(exclude=['object'])
train_X, test_X, train_y, test_y = train_test_split(X.as_matrix(), y.as_matrix(), test_size=0.25)
my_imputer = Imputer()
train_X = my_imputer.fit_transform(train_X)
print("First entry of train_X :\n", train_X[:1])
print()
test_X = my_imputer.transform(test_X)
print("First entry of test_X :\n", test_X[:1])
Explanation: Learning to Use XGBoost
XGBoost is the leading model for working with standard tabular data (the type of data you store in pandas DataFrames, as opposed to more exotic types of data like images and videos).
XGBoost models do well in many Kaggle competitions.
To reach peak accuracy, XGBoost models require more knowledge and model tuning than techniques like Random Forest. After this tutorial, you'll be able to:
Follow the full modeling workflow with XGBoost, and
Fine-tune XGBoost models for optimal performance
XGBoost is an implementation of the Gradient Boosted Decision Trees algorithm (scikit-learn has another version of this algorithm, but XGBoost has some technical advantages.)
What are Gradient Boosted Decision Trees?
New models are generated in cycles, and their results are aggregated into an ensemble model.
We start the cycle by calculating the errors for each observation in the dataset.
We then build a new model to predict those errors.
We add predictions from this error-predicting model to the ensemble of models.
To make a prediction, we include the predictions from all previous models.
We can use these predictions to calculate new errors, build the next model, and add it to the ensemble.
There's one piece outside that cycle.
We need some base prediction to start the cycle.
In practice, the initial predictions can be pretty naive.
Even if the predictions are wildly inaccurate, subsequent additions to the ensemble will address those errors.
This process may sound complicated, but the code to use it is straightforward.
We'll fill in some additional explanatory details in the model tuning section below.
End of explanation
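The cycle described above can be sketched in a few lines. This is a simplified illustration only (plain squared-error boosting with sklearn trees), not how XGBoost is implemented internally; the function name and parameters here are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def simple_boosting(train_X, train_y, n_rounds=10, max_depth=3):
    # start from a naive base prediction (all zeros)
    ensemble, current_pred = [], np.zeros(len(train_y))
    for _ in range(n_rounds):
        residuals = train_y - current_pred                                   # errors of the current ensemble
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(train_X, residuals)
        ensemble.append(tree)                                                # add the error-predicting model
        current_pred += tree.predict(train_X)                                # include its predictions going forward
    return ensemble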
from xgboost import XGBRegressor
my_model = XGBRegressor()
# Add silent=True to avoid printing out updates with each cycle:
# Don't forget to examine the parameters displayed when the model is built.
# Tuning those parameters properly may improve the model's performance.
my_model.fit(train_X, train_y, verbose=False)
Explanation: Now we can build and fit a model just as we would in sklearn:
End of explanation
predictions = my_model.predict(test_X)
predictions[:5]
from sklearn.metrics import mean_absolute_error
print("Mean Absolute Error:\n", str(mean_absolute_error(predictions, test_y)))
Explanation: And now on to evaluating the model and making predictions, also like in scikit-learn.
End of explanation
my_model = XGBRegressor(n_estimators=1000)
my_model.fit(train_X, train_y, early_stopping_rounds=5,
eval_set=[(test_X, test_y)], verbose=False)
Explanation: Model Tuning
XGBoost has a number of parameters that can dramatically affect your model's accuracy and speed.
Some significant parameters are:
n_estimators and early_stopping_rounds:
n_estimators specifies how many times the modeling cycle is repeated.
In the underfitting vs overfitting graph below, n_estimators moves you further to the right.
Too low a value causes underfitting, which will result in inaccurate predictions on both training data and new data. Too large a value causes overfitting, which means accurate predictions on training data, but inaccurate predictions on new data (which is what we care about).
You can experiment with your dataset to find the ideal.
Typical values range from 100-1000, though this depends a lot on the learning rate discussed below.
early_stopping_rounds offers a way to automatically find the maximum value.
Early stopping tells the program to stop iterating when the validation score stops improving.
One effective technique is to set a relatively high value for n_estimators and then use early_stopping_rounds to figure out when to stop.
Since random chance sometimes causes a single round where validation scores don't improve, you need to specify a number for how many rounds of straight deterioration to allow before stopping.
early_stopping_rounds = 5 is a reasonable value to experiment with.
Thus we stop after 5 straight rounds of deteriorating validation scores.
Here is the code to fit with early_stopping:
End of explanation
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05)
my_model.fit(train_X, train_y, early_stopping_rounds=5,
eval_set=[(test_X, test_y)], verbose=False)
Explanation: When using early_stopping_rounds, you need to set aside some of your data for checking the number of rounds to use.
If you later want to fit a model with all of your data, set n_estimators to whatever value you found to be optimal when run with early stopping.
learning_rate
Here's a subtle but important trick for better XGBoost models:
Instead of getting predictions by simply adding up the predictions from each component model, we will multiply the predictions from each model by a small number before adding them in.
This means each tree we add to the ensemble helps us less.
In practice, this reduces the model's propensity to overfit.
So, you can use a higher value of n_estimators without overfitting.
If you use early stopping, the appropriate number of trees will be set automatically.
In general, a small learning rate (and large number of estimators) will yield more accurate XGBoost models, though it will also take the model longer to train since more iterations are needed.
End of explanation |
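A hedged follow-up sketch (not from the original tutorial): once early stopping has identified a good number of rounds, you can refit on all of your data with that many estimators. Recent xgboost versions expose the stopping point as best_iteration; if yours does not, read the value off the early-stopping output instead.
n_best = getattr(my_model, 'best_iteration', 1000)   # fall back to the cap if the attribute is unavailable
final_model = XGBRegressor(n_estimators=n_best, learning_rate=0.05)
final_model.fit(my_imputer.fit_transform(X.as_matrix()), y.as_matrix(), verbose=False)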
1,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter identification example
Here is a simple toy model that we use to demonstrate the working of the inference package
$\emptyset \xrightarrow[]{k_1(I)} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$
$ k_1(I) = \frac{k_1 I^2}{K_R^2 + I^2}$
Step1: Generate experimental data for multiple initial conditions
Simulate bioscrape model
Add Gaussian noise of non-zero mean and non-zero variance to the simulation
Create appropriate Pandas dataframes
Write the data to a CSV file
Step2: CSV looks like
Step3: Run the bioscrape MCMC algorithm to identify parameters from the experimental data
Step4: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis.
OR
You can also plot the results as follows
Step5: Let us now try to fit all three parameters to see if results improve | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
%matplotlib inline
import bioscrape as bs
from bioscrape.types import Model
from bioscrape.simulator import py_simulate_model
import numpy as np
import pylab as plt
import pandas as pd
species = ['I','X']
reactions = [(['X'], [], 'massaction', {'k':'d1'}), ([], ['X'], 'hillpositive', {'s1':'I', 'k':'k1', 'K':'KR', 'n':2})]
k1 = 50.0
d1 = 0.5
params = [('k1', k1), ('d1', d1), ('KR', 20)]
initial_condition = {'X':0, 'I':0}
M = Model(species = species, reactions = reactions, parameters = params,
initial_condition_dict = initial_condition)
Explanation: Parameter identification example
Here is a simple toy model that we use to demonstrate the working of the inference package
$\emptyset \xrightarrow[]{k_1(I)} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$
$ k_1(I) = \frac{k_1 I^2}{K_R^2 + I^2}$
End of explanation
num_trajectories = 4 # each with different initial condition
initial_condition_list = [{'I':5},{'I':10},{'I':15},{'I':20}]
timepoints = np.linspace(0,5,100)
result_list = []
for init_cond in initial_condition_list:
M.set_species(init_cond)
result = py_simulate_model(timepoints, Model = M)['X']
result_list.append(result)
plt.plot(timepoints, result, label = 'I =' + str(list(init_cond.values())[0]))
plt.xlabel('Time')
plt.ylabel('[X]')
plt.legend()
plt.show()
exp_data = pd.DataFrame()
exp_data['timepoints'] = timepoints
for i in range(num_trajectories):
exp_data['X' + str(i)] = result_list[i] + np.random.normal(5, 1, size = np.shape(result))
plt.plot(timepoints, exp_data['X' + str(i)], 'r', alpha = 0.3)
plt.plot(timepoints, result_list[i], 'k', linewidth = 3)
plt.xlabel('Time')
plt.ylabel('[X]')
plt.show()
Explanation: Generate experimental data for multiple initial conditions
Simulate bioscrape model
Add Gaussian noise of non-zero mean and non-zero variance to the simulation
Create appropriate Pandas dataframes
Write the data to a CSV file
End of explanation
exp_data.to_csv('birth_death_data_multiple_conditions.csv')
exp_data
Explanation: CSV looks like:
End of explanation
from bioscrape.inference import py_inference
# Import data from CSV
# Import a CSV file for each experiment run
exp_data = []
for i in range(num_trajectories):
df = pd.read_csv('birth_death_data_multiple_conditions.csv', usecols = ['timepoints', 'X'+str(i)])
df.columns = ['timepoints', 'X']
exp_data.append(df)
prior = {'k1' : ['uniform', 0, 100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 5000, sim_type = 'stochastic',
params_to_estimate = ['k1'], prior = prior, plot_show = False, convergence_check = False)
pid.plot_mcmc_results(sampler, convergence_check = False);
Explanation: Run the bioscrape MCMC algorithm to identify parameters from the experimental data
End of explanation
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.1)
plt.plot(timepoints, result, "k", label="original model")
plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
Explanation: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis.
OR
You can also plot the results as follows
End of explanation
# prior = {'d1' : ['gaussian', 0, 10, 1e-3], 'k1' : ['gaussian', 0, 50, 1e-4]}
prior = {'d1' : ['uniform', 0.1, 10],'k1' : ['uniform',0,100],'KR' : ['uniform',0,100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 10000, sim_type = 'stochastic',
params_to_estimate = ['d1','k1','KR'], prior = prior, plot_show = True, convergence_check = False)
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.2)
plt.plot(timepoints, result_list[i], "k")
# plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
Explanation: Let us now try to fit all three parameters to see if results improve:
End of explanation |
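As an optional sanity check (not part of the original notebook), the posterior samples can be summarized directly with numpy percentiles, reusing the sampler and pid objects from the fit above:
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
for i, name in enumerate(pid.params_to_estimate):
    lo, med, hi = np.percentile(flat_samples[:, i], [16, 50, 84])
    print('{}: {:.3f} (+{:.3f} / -{:.3f})'.format(name, med, hi - med, med - lo))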
1,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overfitting
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$
Step1: Create random values for x in interval [0,1)
Step2: Compute y
Step3: Add random Gaussian noise to y
Step4: Put data into an SFrame to manipulate later
Step5: Create a function to plot the data, since we'll do it many times
Step6: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree
Step7: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data"
Step8: Define function to plot data and predictions made, since we are going to use it many times.
Step9: Create a function that prints the polynomial coefficients in a pretty way
Step10: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above
Step11: Inspect learned parameters
Step12: Form and plot our predictions along a grid of x values
Step13: Fit a degree-4 polynomial
Step14: Fit a degree-16 polynomial
Step15: Woah!!!! Those coefficients are crazy! On the order of 10^6.
Step16: Above
Step17: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
Step18: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
Step19: Let's look at fits for a sequence of increasing lambda values
Step20: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
Step21: Run LOO cross validation for "num" values of lambda, on a log scale
Step22: Plot results of estimating LOO for each value of lambda
Step23: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
Step24: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost on the coefficients, $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree
Step25: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty" | Python Code:
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Overfitting
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
End of explanation
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
Explanation: Create random values for x in interval [0,1)
End of explanation
y = x.apply(lambda x: math.sin(4*x))
Explanation: Compute y
End of explanation
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
Explanation: Add random Gaussian noise to y
End of explanation
data = graphlab.SFrame({'X1':x,'Y':y})
data
Explanation: Put data into an SFrame to manipulate later
End of explanation
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
Explanation: Create a function to plot the data, since we'll do it many times
End of explanation
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
Explanation: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
End of explanation
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
Explanation: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
End of explanation
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
Explanation: Define function to plot data and predictions made, since we are going to use it many times.
End of explanation
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
Explanation: Create a function that prints the polynomial coefficients in a pretty way :)
End of explanation
model = polynomial_regression(data, deg=2)
Explanation: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
End of explanation
print_coefficients(model)
Explanation: Inspect learned parameters
End of explanation
plot_poly_predictions(data,model)
Explanation: Form and plot our predictions along a grid of x values:
End of explanation
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Fit a degree-4 polynomial
End of explanation
model = polynomial_regression(data, deg=16)
print_coefficients(model)
Explanation: Fit a degree-16 polynomial
End of explanation
plot_poly_predictions(data,model)
Explanation: Woah!!!! Those coefficients are crazy! On the order of 10^6.
End of explanation
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
Explanation: Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the (squared) 2-norm of the coefficients, $\|w\|_2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
Define our function to solve the ridge objective for a polynomial regression model of any degree:
End of explanation
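For reference (standard textbook form, not taken from the notebook itself), the ridge objective being solved here is
$$\hat{w}^{\textrm{ridge}} = \arg\min_{w} \sum_{i=1}^{N} \left(y_i - f_w(x_i)\right)^2 + \lambda \|w\|_2^2 ,$$
where $f_w$ is the degree-deg polynomial with coefficients $w$ and $\lambda$ is the L2_penalty above.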
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
End of explanation
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
End of explanation
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
Explanation: Let's look at fits for a sequence of increasing lambda values
End of explanation
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
    data = polynomial_features(data, deg)
    # Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
Explanation: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
End of explanation
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
Explanation: Run LOO cross validation for "num" values of lambda, on a log scale
End of explanation
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel(r'L2 penalty ($\lambda$)')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
Explanation: Plot results of estimating LOO for each value of lambda
End of explanation
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
End of explanation
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
Explanation: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost on the coefficients, $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree:
End of explanation
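For reference (standard textbook form, not taken from the notebook itself), the lasso objective is
$$\hat{w}^{\textrm{lasso}} = \arg\min_{w} \sum_{i=1}^{N} \left(y_i - f_w(x_i)\right)^2 + \lambda \|w\|_1 ,$$
with $\lambda$ playing the role of the l1_penalty above.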
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
Explanation: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
End of explanation |
1,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating Galaxy Observations
Step1: Galaxy Model
The model for the spatial intensity of the galaxy we observe (i.e. the distribution of brightness on the sky) has two basic components
Step2: In addition to the "standard" way of defining functions shown above, Python has an additional method that can be used to define simple functions using lambda. These are convenient if you ever want to define functions "on the fly" for any reason, as discussed here. The same function defined in lambda notation is illustrated below.
Step3: Check that these give the same result using a grid of radii. Feel free to use np.linspace, np.arange, or any other tool to generate the set of radii for testing.
Step4: Experiment with different values for $a$ and $r_e$ and different gridding for radius to see how they change the resulting distribution. Also feel free to play around with the above plot until you have something you're happy with. (You can never spend too much time making good plots!)
Let's turn this radial model into a 2-dimensional image in $\mathbf{x}$ and $\mathbf{y}$. First, let's create a grid of points in x and y to plot. We'll set our galaxy to have an effective radius of 0.5 arcseconds. Check to make sure you've set $a$ back to the default value $a=1.68$ again before you continue.
Step5: The next step is to turn our two 1-D arrays into a 2-D grid in $\mathbf{x}$ and $\mathbf{y}$. More explicitly, we want to compute the intensity of our galaxy at (x_grid[i], y_grid[j]) for all i and j elements in x_grid and y_grid, respectively. This means we need to create a new array of values x and y of length $N_x \times N_y$.
Spend some time thinking about how you could create these arrays, which contain all possible combinations of x_grid and y_grid, so that if you plotted them using plt.plot you would get a grid. Feel free to try your hand at coding something up below before moving onto the "Pythonic" way of solving this problem.
Step6: As with many problems that come up often but are tedious to implement by hand, Python has a function to do this! The solution is shown below.
Step7: We see that our new x and y are the same shape, and contain $N_x \times N_y$ elements. Unfortunately, the number of points on our axes is flipped
Step8: Let's now visually check the results.
Step9: Woah -- what is going on here (assuming your notebook didn't crash making the plot)? Why do we havea bunch of different colored lines?
There's actually a lot to unpack here, so let's take our time getting into exactly what plt is doing here.
First, when plt sees a 2-D array, rather than "flattening" the results and treating it like a 1-D array (i.e. just a string of numbers), it instead plots each column of the array separately. In other words, plt.plot(x, y) actually calls plt.plot(x[
Step10: The second thing you might notice is that although we're using points specified by '.' in plt, what we get looks like a thick line. This is also plt working as intended
Step11: So with that in hand, let's now see what our grid looks like (thinned by 20). To help out with plotting, we're also going to "flatten" our arrays from 2-D to 1-D before plotting them.
Step12: With our 2-D grid of points in (x, y), we are now ready to compute our 2-D galaxy image. First, we can compute the radius using
$$ r^2 = x^2 + y^2 \quad ,$$
after which we can plug our compute radii $\mathbf{r}$ into our function prof_expo for the exponential profile.
Step13: Rather than trying to struggle with getting images to look good in plt.plot (or, alternately, plt.scatter), we're instead going to use plt.imshow (which is designed for this). Some examples are shown here.
Step14: Notice that there's some weird stuff going on with the default imshow plot
Step15: Let's assume a typical seeing of 0.8 arcsec. Compute and plot the PSF profile below. Take the code we used to compute and plot our galaxy profile earlier as a guide. Again, feel free to play around with the parameters to see how they change the PSF and plot; just make sure that $\beta$ is set back its default value of $\beta=4.765$ before moving on.
Step16: Observed Galaxy Model
As mentioned above, our final observed image is a convolution of our galaxy model and our PSF model. Convolutions can be tricky to deal with. Naively, we could imagine doing a convolution as follows
Step17: Compare our "convolved" image with the original galaxy model and the original PSF. This "smeared out" image now represents the galaxy we'd actually observe through the atmosphere from our ground-based telescope!
Slitmask
Let's imagine we're interested in taking a spectrum of this galaxy. Typically, this works by putting a slitmask on the telescope. These are typically made of metal, and are designed to blocks out everything except for the small amount of light passing through slits that have been drilled in the mask. The reason this has to be done is that the light from each slit is going to be spread out on the physical detector as a function of wavelength, so that blue light (i.e. shorter wavelengths) on one end and red light (i.e. longer wavelengths) are on the other. A general picture of this is shown below.
Using the slit dimensions specified below, see if you can compute (and plot) the galaxy image that would be seen by our telescope through the slitmask. I've included a solution below, but see if you can come up with your own version before moving on.
Extra Challenge
Step18: A solution is given below.
Step19: Pixelation
We're almost done simulating our galaxy. The last step is to account for the pixel scale of our observation. Telescopes don't have infinite resolving power, but have a minimum resolution determined by the size of the pixels on the charge-coupled device (CCD). This usually is expressed in units of arcseconds per pixel. Since our simulated observation above likely has a much higher resolution than the typical resolving scale, we need to bin it to lower resolution.
numpy has a few functions to accomplish this. The most relevant for our purposes are histogram as histogram2d, which compute the histogram of input data. There are also some nifty shortcuts for plotting these using the plt.hist and plt.hist2d functions.
As a warm up, let's first bin a set of normally distributed random numbers just to get familiar with the function.
Step20: Take a look at some of the additional arguments that can be passed to plt.hist. See if you can
Step21: Let's bin this using np.histogram to see what a histogram output looks like.
Step22: With all that done, let's now define the bins set by the pixel scale resolution of the Multiple Mirror Telescope (MMT) and Magellan Infrared Spectrograph (MMIRS) instrument. This is a spectrograph operated by a joint venture between the Smithsonian and the University of Arizona that you could one day use if you end up going to either institution!
Note that the code below uses a number of methods that you might not be familiar with. Take some time to see if you understand everything that's going on well enough that you could explain what the code below is doing to a friend.
Step23: Using the bins/bin centers computed above | Python Code:
# only necessary if you're running Python 2.7 or lower
from __future__ import print_function
from __builtin__ import range
import numpy as np
# import plotting utility and define our naming alias
from matplotlib import pyplot as plt
# plot figures within the notebook rather than externally
%matplotlib inline
Explanation: Simulating Galaxy Observations: Spectroscopy
This is the second component of the galaxy modeling unit, which involves simulating a spectroscopic observation (the intensity of light as a function of position and wavelength).
Remember that the overall goal here is to get comfortable hacking at code. How many cells you actually work through is not necessarily a great measure of this -- there are a lot of subtleties in code, so sometimes working slowly but carefully can teach you a lot more than trying to get through all the prepared material.
Preamble
End of explanation
# Galaxy intensity model: Exponential
def prof_expo(r, re):
a = 1.68
return np.exp(-a * r / re)
Explanation: Galaxy Model
The model for the spatial intensity of the galaxy we observe (i.e. the distribution of brightness on the sky) has two basic components:
1. An intrinsic galaxy model, which governs the distribution of brightness away from the center of the galaxy. This needs to depend on the expected angular size of our galaxy, which determines how large an object is on the sky.
2. A model of the observed point-spread function (PSF), which governs how much a "point" on the sky is smeared out into a blob by the atmosphere/instrument. This needs to depend on the expected seeing (i.e. the typical size of a PSF blob).
Let's initialize each of these in turn using Python's built-in ability to store/define functions.
Our galaxy intensity will be a radially-symmetric exponential profile defined as
$$ I_{\textrm{gal}} = I_0 \exp \left(- a \frac{r}{r_e} \right) $$
where $I_0$ is a normalization factor (which we will take to be 1 here), $a = 1.68$ is a constant that governs how quickly light "falls off" towards the outskirts, and $r_e$ is the effective radius of the galaxy.
End of explanation
# defining galaxy intensity using exponential
prof_expo2 = lambda r, re: np.exp(-1.68 * r / re)
Explanation: In addition to the "standard" way of defining functions shown above, Python has an additional method that can be used to define simple functions using lambda. These are convenient if you ever want to define functions "on the fly" for any reason, as discussed here. The same function defined in lambda notation is illustrated below.
End of explanation
re = 1. # effective radius
radius = ... # radii
gal1 = ... # function 1 (def)
gal2 = ... # function 2 (lambda)
# numerical checks
# plot results
Explanation: Check that these give the same result using a grid of radii. Feel free to use np.linspace, np.arange, or any other tool to generate the set of radii for testing.
End of explanation
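One possible way to fill in the exercise cell above (a sketch; the exact grid of radii is up to you):
re = 1.                                   # effective radius
radius = np.linspace(0., 4. * re, 100)    # grid of radii
gal1 = prof_expo(radius, re)              # function 1 (def)
gal2 = prof_expo2(radius, re)             # function 2 (lambda)
print(np.allclose(gal1, gal2))            # numerical check: should print True
plt.plot(radius, gal1, label='def version')
plt.plot(radius, gal2, '--', label='lambda version')
plt.xlabel('radius')
plt.ylabel('intensity')
plt.legend();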
# galaxy effective radius
re = 0.5 # in arcsec
# define 2-D grid
Nx, Ny = 1000 + 1, 1050 + 1 # number of grid points in x and y (1 padded for the edge)
x_grid = np.linspace(-5. * re, 5. * re, Nx) # grid in x direction
y_grid = np.linspace(-5. * re, 5. * re, Ny) # grid in y direction
Explanation: Experiment with different values for $a$ and $r_e$ and different gridding for radius to see how they change the resulting distribution. Also feel free to play around with the above plot until you have something you're happy with. (You can never spend too much time making good plots!)
Let's turn this radial model into a 2-dimensional image in $\mathbf{x}$ and $\mathbf{y}$. First, let's create a grid of points in x and y to plot. We'll set our galaxy to have an effective radius of 0.5 arcseconds. Check to make sure you've set $a$ back to the default value $a=1.68$ again before you continue.
End of explanation
# space for experimenting with computing a 2-D grid from 2 1-D grids
Explanation: The next step is to turn our two 1-D arrays into a 2-D grid in $\mathbf{x}$ and $\mathbf{y}$. More explicitly, we want to compute the intensity of our galaxy at (x_grid[i], y_grid[j]) for all i and j elements in x_grid and y_grid, respectively. This means we need to create a new array of values x and y of length $N_x \times N_y$.
Spend some time thinking about how you could create these arrays, which contain all possible combinations of x_grid and y_grid, so that if you plotted them using plt.plot you would get a grid. Feel free to try your hand at coding something up below before moving onto the "Pythonic" way of solving this problem.
End of explanation
# mesh (x_grid, y_grid) into a new set of 2-D (x, y) arrays
x, y = np.meshgrid(x_grid, y_grid) # x,y for our 2-D grid
print(x, x.shape)
print(y, y.shape)
Explanation: As with many problems that come up often but are tedious to implement by hand, Python has a function to do this! The solution is shown below.
End of explanation
# *properly* mesh (x_grid, y_grid) into a new set of 2-D (x, y) arrays
x, y = np.meshgrid(x_grid, y_grid, indexing='ij') # x,y for our 2-D grid
# print array and array shapes
print(x, x.shape)
print(y, y.shape)
Explanation: We see that our new x and y are the same shape, and contain $N_x \times N_y$ elements. Unfortunately, the number of points on our axes is flipped: we wanted 1001 points in the $x$ direction and 1051 in the $y$ direction. This is because the default option in np.meshgrid uses 'xy' (Cartesian) indexing instead of 'ij' (matrix) indexing, which ends up flipping the order of the axes. Since we're using matrices, we actually need to specify the latter.
This result shows how important it is to understand exactly what the "default" options for a particular package are. If you're not exactly sure how something is looking, it's always in your best interest to sanity-check the outputs!
End of explanation
plt.plot(x, y, '.');
Explanation: Let's now visually check the results.
End of explanation
# select 10 columns of the array
x_temp, y_temp = x[:, 15:20], y[:, 15:20] # example of array slicing
plt.figure()
plt.plot(x_temp, y_temp, '.');
# print array and array shape
print(x_temp, x_temp.shape)
print(y_temp, y_temp.shape)
Explanation: Woah -- what is going on here (assuming your notebook didn't crash making the plot)? Why do we have a bunch of different colored lines?
There's actually a lot to unpack here, so let's take our time getting into exactly what plt is doing here.
First, when plt sees a 2-D array, rather than "flattening" the results and treating it like a 1-D array (i.e. just a string of numbers), it instead plots each column of the array separately. In other words, plt.plot(x, y) actually calls plt.plot(x[:, 0], y[:, 0]), plt.plot(x[:, 1], y[:, 1]), etc., plotting your data $N_y$ times. Since we're using the default setting, each new plot is also automatically assigned a new color based on the default color scheme in matplotlib, which is what leads to the color vomit shown on screen.
Let's check this out by plotting using just a few columns.
End of explanation
# select 10 columns of the array
x_temp, y_temp = x[::20, 15:20:1], y[::20, 15:20:1] # example of array slicing/thinning
plt.figure()
plt.plot(x_temp, y_temp, '.');
# print array shape
print('x:', x_temp.shape)
print('y:', y_temp.shape)
Explanation: The second thing you might notice is that although we're using points specified by '.' in plt, what we get looks like a thick line. This is also plt working as intended: our points are just so dense that they overlap with each other in that particular region. We can better see the actual results by "thinning" our arrays. This works using the format x[start:stop:step].
End of explanation
# thin grid by a factor of 20
x_temp, y_temp = x[::20, ::20].flatten(), y[::20, ::20].flatten() # slicing/thinning/flattening
plt.figure()
plt.plot(x_temp, y_temp, '.', markersize=2)
# print array shape (2-D vs flattened)
print(x[::20, ::20].shape, x_temp.shape)
Explanation: So with that in hand, let's now see what our grid looks like (thinned by 20). To help out with plotting, we're also going to "flatten" our arrays from 2-D to 1-D before plotting them.
End of explanation
r = ... # 2-D grid of radii
model_gal = prof_expo(r, re) # 2-D grid of galaxy intensity
Explanation: With our 2-D grid of points in (x, y), we are now ready to compute our 2-D galaxy image. First, we can compute the radius using
$$ r^2 = x^2 + y^2 \quad ,$$
after which we can plug our computed radii $\mathbf{r}$ into our function prof_expo for the exponential profile.
End of explanation
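Following the formula above, one way to compute the radius grid is (sketch):
r = np.sqrt(x**2 + y**2)        # radius at every (x, y) grid point
model_gal = prof_expo(r, re)    # 2-D grid of galaxy intensity
print(r.shape)                  # should match the (Nx, Ny) shape of the meshed grid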
# plotting our galaxy profile
plt.figure()
# default plot
plt.imshow(model_gal)
# more detailed plot
#plt.imshow(model_gal.T, # take the transpose to flip x and y in plot
# origin='lower', # specify the origin to be at the bottom not the top
# extent=[x_grid[0], x_grid[-1], y_grid[0], y_grid[-1]], # specify [left, right, bottom, top] positions
# cmap='magma', interpolation='none') # additional options
#plt.xlabel('x [arcsec]')
#plt.ylabel('y [arcsec]')
#plt.title('Intrinsic Galaxy Profile')
#plt.colorbar(label='Intensity') # add a colorbar
Explanation: Rather than trying to struggle with getting images to look good in plt.plot (or, alternately, plt.scatter), we're instead going to use plt.imshow (which is designed for this). Some examples are shown here.
End of explanation
# PSF Model: Moffat
def prof_moffat(r, fwhm):
# compute constants
beta = 4.765 # beta is fixed
bnorm = np.sqrt(2.**(1. / beta) - 1) # constant, computed from beta
alpha = fwhm / 2. / bnorm # alpha, computed from beta and FWHM
# compute PSF
norm = (beta - 1.) / (np.pi * alpha**2)
psf = norm * (1 + (r / alpha)**2)**-beta
return psf
Explanation: Notice that there's some weird stuff going on with the default imshow plot:
- the left side starts from the top rather than the bottom,
- the axes again appear to be switched (x has 1051 elements instead of y), and
- the dimensions on each axes are indexes rather than positions.
These should be fixed in the detailed version, which specifies much of these directly. Switch the plot to instead use the detailed version. Read through the documentation and take a look at the different options used in the more detailed version to get a sense of what's changed. Feel free to play around with different options until you have something you like.
Extra Challenge: Try to rotate the label on the colorbar 180 degrees so it now reads vertically in the other direction.
Extra Extra Challenge: See if you can get the color scheme to function logarithmically rather than linearly using plt.imshow's norm argument.
PSF Model
Although the image above looks great, we rarely ever see it in practice from the ground because of the atmospheric point-spread function (PSF). If you imagine our image above as an infinite (rather than finite) collection of points, what the PSF does is turn every one of those points into a small blob. Our final observed image is then, in math terms, a convolution of our galaxy model and our PSF model.
PSFs can be very complicated, but in most cases can be approximated by a Moffat profile quite well, which is like a Normal (i.e. Gaussian, "bell curve") distribution but with heavier ("fatter") tails. This has the form:
$$ I(r \,|\, \alpha, \beta) = \frac{\beta - 1}{\pi \alpha^2} \left[ 1 + \left(\frac{r}{\alpha}\right)^2 \right]^{-\beta} $$
where $I(r \,|\, \alpha, \beta)$ stands for the intensity as a function of radius given $\alpha$ and $\beta$, where $\alpha$ and $\beta$ are constants that describe the overall shape of the profile. In general, $\alpha$ describes the "size" while $\beta$ describes how quickly the PSF "falls off" towards the edges (similar to the effective radius $r_e$ we defined above).
When astronomers talk about typical "seeing" conditions, they often are describing the full width at half maximum (FWHM), which measures the width ("diameter") of the distribution at half its maximum value. This relates to $\alpha$ as
$$ \textrm{FWHM}(\alpha, \beta) = 2 \times \alpha \times \sqrt{2^{1/\beta} - 1} \quad . $$
Let's now define our PSF profile in terms of the FWHM, fixing $\beta = 4.765$.
End of explanation
# define our typical seeing
psf_fwhm = 0.8 # FWHM [arcsec]
# compute our psf
model_psf = ...
# plot our psf
...
Explanation: Let's assume a typical seeing of 0.8 arcsec. Compute and plot the PSF profile below. Take the code we used to compute and plot our galaxy profile earlier as a guide. Again, feel free to play around with the parameters to see how they change the PSF and plot; just make sure that $\beta$ is set back to its default value of $\beta=4.765$ before moving on.
End of explanation
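A possible completion of the PSF cell above (sketch; plotting choices mirror the galaxy plot earlier):
model_psf = prof_moffat(r, psf_fwhm)   # evaluate the Moffat profile on the 2-D radius grid
plt.imshow(model_psf.T, origin='lower',
           extent=[x_grid[0], x_grid[-1], y_grid[0], y_grid[-1]], cmap='magma')
plt.xlabel('x [arcsec]')
plt.ylabel('y [arcsec]')
plt.title('PSF model')
plt.colorbar(label='Intensity');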
# compute convolution of galaxy and PSF
model_obs = fftconvolve()
model_obs /= np.max(model_obs) # normalize result to 1.
# plot our result
...
Explanation: Observed Galaxy Model
As mentioned above, our final observed image is a convolution of our galaxy model and our PSF model. Convolutions can be tricky to deal with. Naively, we could imagine doing a convolution as follows:
1. Compute the standard PSF.
2. Turn every point in our 2-D r(x, y) grid into a PSF, making sure we keep the same overall amplitude $I(r(x,y))$ for each PSF.
3. Add up all the PSFs together.
This represents the type of thinking that is useful when coding: breaking down each part of a larger problem into a bite-size chunk that is easier to code up before combining things together at the end. (Figuring out where loops are needed versus where array operations can be done instead is also incredibly useful.)
In addition to this discrete, "computing"-oriented way of thinking about the problem, convolutions also lend themselves naturally to a more theoretical/math-y approach (i.e. we're smearing out an infinite collection of points from one continuous function using another continuous function). From that perspective, it turns out there are some really neat tricks we can do to actually compute more "exact" convolutions even with a small set of data points.
As with most applications that are relatively common, Python has a bunch of functions/packages useful for convolutions! We'll use a specific version fftconvolve (which uses fast Fourier transforms) from scipy.signal (a set of functions in scipy useful for signal processing) to do our convolution.
Import fftconvolve from scipy.signal, use it to compute the observed galaxy model, and plot the results.
Note: make sure to compute the convolution using mode='same', otherwise you won't be able to plot your results! Check the documentation for additional details.
End of explanation
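A sketch of the convolution step described above (assuming model_psf has been computed as in the previous cell):
from scipy.signal import fftconvolve
model_obs = fftconvolve(model_gal, model_psf, mode='same')  # galaxy model smeared out by the PSF
model_obs /= np.max(model_obs)                              # normalize the peak to 1
plt.imshow(model_obs.T, origin='lower',
           extent=[x_grid[0], x_grid[-1], y_grid[0], y_grid[-1]])
plt.colorbar(label='Intensity');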
# slit dimensions [arcsec]
slit_width = 0.8
slit_height = 7.
# post-slit galaxy model
model_spec =
# plotting the results
...
Explanation: Compare our "convolved" image with the original galaxy model and the original PSF. This "smeared out" image now represents the galaxy we'd actually observe through the atmosphere from our ground-based telescope!
Slitmask
Let's imagine we're interested in taking a spectrum of this galaxy. Typically, this works by putting a slitmask on the telescope. These are usually made of metal and are designed to block out everything except for the small amount of light passing through slits that have been drilled in the mask. This has to be done because the light from each slit is going to be spread out on the physical detector as a function of wavelength, with blue light (i.e. shorter wavelengths) on one end and red light (i.e. longer wavelengths) on the other. A general picture of this is shown below.
Using the slit dimensions specified below, see if you can compute (and plot) the galaxy image that would be seen by our telescope through the slitmask. I've included a solution below, but see if you can come up with your own version before moving on.
Extra Challenge: Try and compute the "slit loss" (the fraction of light that is not blocked by the slit). This tells us the overall "efficiency" of our observation so far.
End of explanation
# one possible solution using boolean algebra (0=False, 1=True)
# slit dimensions [arcsec]
slit_width = 0.8
slit_height = 7.
# slit model
model_slit = (abs(x) <= slit_width / 2.) # set all x's within slit_width / 2. of 0. to 1; otherwise set to 0
model_slit *= (abs(y) <= slit_height / 2.) # *also* set all y's within slit_height / 2. to 1; otherwise set to 0
# post-slit observed galaxy model
model_spec = model_slit * model_obs
# plot results (basic)
plt.figure(figsize=(14, 6)) # create figure object
plt.subplot(1, 2, 1) # split the figure into a grid with 1 row and 2 columns; pick subplot 1
plt.imshow(model_slit.T) # plot slitmask model
plt.colorbar(label='Transmission')
plt.subplot(1, 2, 2) # pick subplot 2
plt.imshow(model_spec.T) # plot combined galaxy+slit model
plt.colorbar(label='Intensity')
plt.tight_layout()
Explanation: A solution is given below.
End of explanation
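For the extra challenge above, one simple way to estimate the slit throughput and slit loss (a sketch):
throughput = model_spec.sum() / model_obs.sum()   # fraction of the observed light passing through the slit
print('slit throughput = {:.3f}, slit loss = {:.3f}'.format(throughput, 1. - throughput))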
rand = np.random.normal(loc=0., scale=1., size=100000) # normally distributed random numbers
plt.hist(rand); # plot a histogram
Explanation: Pixelation
We're almost done simulating our galaxy. The last step is to account for the pixel scale of our observation. Telescopes don't have infinite resolving power, but have a minimum resolution determined by the size of the pixels on the charge-coupled device (CCD). This usually is expressed in units of arcseconds per pixel. Since our simulated observation above likely has a much higher resolution than the typical resolving scale, we need to bin it to lower resolution.
numpy has a few functions to accomplish this. The most relevant for our purposes are histogram and histogram2d, which compute histograms of input data. There are also some nifty shortcuts for plotting these using the plt.hist and plt.hist2d functions.
As a warm up, let's first bin a set of normally distributed random numbers just to get familiar with the function.
End of explanation
plt.plot(radius, gal1) # plot original relation
plt.hist(radius, weights=gal1, normed=True); # plot a histogram, normalized based on our weights
Explanation: Take a look at some of the additional arguments that can be passed to plt.hist. See if you can:
- adjust the number of bins,
- set the locations of the bins by hand,
- change the color of the histogram,
- change the "style" (type) of histogram, and
- plot the histogram horizontally.
Extra Challenge: Ensure that each histogram is properly normalized (i.e. integrates to 1.), cumulative (each bin is the sum of all previous bins), has 100 evenly spaced bins from -5 to 5, and ignores any values outside of those boundaries.
Extra Extra Challenge: Plot 10 realizations of random numbers such that they "stack" on top of each other, each with a different color based on a pre-defined colormap (see the previous notebook for an example of how to define a colormap).
Now let's look at weighted histograms. Our original exponential profile has a series of radii (stored in radius) corresponding to a set of intensities (gal1 and gal2). We want to bin this to lower resolution using plt.hist. To do this, we just bin up the radius to lower resolution, where we weight each point by the intensity.
End of explanation
counts, bin_edges = np.histogram(radius, weights=gal1, normed=True)
print(counts)
print(bin_edges)
Explanation: Let's bin this using np.histogram to see what a histogram output looks like.
End of explanation
# the pixel scale of the MMIRS instrument
pix_scale = 0.2012 # [arcsec/pix]
# define our bins
x_bin = np.arange(x_grid[0], x_grid[-1] + pix_scale, pix_scale) # bin edges in x with width=pix_scale
y_bin = np.arange(y_grid[0], y_grid[-1] + pix_scale, pix_scale) # bin edges in y with width=pix_scale
# define the centers of each bin
x_cent, y_cent = 0.5 * (x_bin[1:] + x_bin[:-1]), 0.5 * (y_bin[1:] + y_bin[:-1])
# the total number of bins
Nx_pix, Ny_pix = len(x_cent), len(y_cent)
print(Nx_pix, Ny_pix)
Explanation: With all that done, let's now define the bins set by the pixel scale resolution of the Multiple Mirror Telescope (MMT) and Magellan Infrared Spectrograph (MMIRS) instrument. This is a spectrograph operated by a joint venture between the Smithsonian and the University of Arizona that you could one day use if you end up going to either institution!
Note that the code below uses a number of methods that you might not be familiar with. Take some time to see if you understand everything that's going on well enough that you could explain what the code below is doing to a friend.
End of explanation
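As a quick check that the binning above does what we expect (added for illustration; it reuses x_bin, y_bin and pix_scale from the cell above), every bin should be exactly one pixel wide:
import numpy as np
print(np.allclose(np.diff(x_bin), pix_scale), np.allclose(np.diff(y_bin), pix_scale))  # both should print True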
# convert from arcsec to pixels
model_pix, x_edges, y_edges = np.histogram2d(Xs, Ys, weights=Model, bins=[x_bin, y_bin]) # bin over pixel scale
# plot results
plt.imshow(model_pix.T)
Explanation: Using the bins/bin centers computed above:
1. Bin our model_spec observation to the MMIRS pixel scale using np.histogram2d.
2. Plot the result using plt.imshow.
Note that the code below will not work as is: not only are the variables placeholders (i.e. they're undefined), histogram2d also requires that all the input data be 1-D, which means we have to flatten the input arrays (if needed).
End of explanation |
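For reference, here is a self-contained sketch (not from the original notebook) of the flattening step described above, using a small toy grid in place of the real Xs, Ys and Model arrays:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-3., 3., 200)
y = np.linspace(-3., 3., 200)
Xs, Ys = np.meshgrid(x, y)                  # 2-D coordinate grids
Model = np.exp(-(Xs**2 + Ys**2))            # toy 2-D "galaxy" model
pix = 0.2                                   # toy pixel scale
x_bin = np.arange(x[0], x[-1] + pix, pix)   # bin edges, one pixel wide
y_bin = np.arange(y[0], y[-1] + pix, pix)
model_pix, xe, ye = np.histogram2d(Xs.ravel(), Ys.ravel(),   # flatten the 2-D grids to 1-D
                                   weights=Model.ravel(),
                                   bins=[x_bin, y_bin])
plt.imshow(model_pix.T)
plt.colorbar(label='Intensity')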
1,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
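# For example (purely illustrative placeholder values, not real document metadata):
# DOC.set_author("Jane Doe", "jane.doe@example.org")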
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
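# A completed cell would simply pass one of the valid choices listed below,
# e.g. DOC.set_value("NPZD") -- illustrative only, not the actual value for this model.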
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
1,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We won't work through this notebook
We won't have time. But I thought I'd include it, in case you want to see exactly how I implement my population-level quality metric.
Step2: Let's put the CSMF Accuracy calculation right at the top
Step3: How can I test this?
Step7: Things we don't have time for
An approach to really do the cross-validation out of sample | Python Code:
import numpy as np, pandas as pd
Explanation: We won't work through this notebook
We won't have time. But I thought I'd include it, in case you want to see exactly how I implement my population-level quality metric.
End of explanation
def measure_prediction_quality(csmf_pred, y_test):
Calculate population-level prediction quality (CSMF Accuracy)
Parameters
----------
csmf_pred : pd.Series, predicted distribution of causes
y_test : array-like, labels for test dataset
Results
-------
csmf_acc : float
csmf_true = pd.Series(y_test).value_counts() / float(len(y_test))
csmf_acc = 1 -
return csmf_acc
Explanation: Let's put the CSMF Accuracy calculation right at the top
End of explanation
csmf_pred = pd.Series({'cause_1': .5, 'cause_2': .5})
y_test = ['cause_1', 'cause_2']
measure_prediction_quality(csmf_pred, y_test)
csmf_pred = pd.Series({'cause_1': 0., 'cause_2': 1.})
y_test = ['cause_1']*1000 + ['cause_2']
measure_prediction_quality(csmf_pred, y_test)
Explanation: How can I test this?
End of explanation
val = {}
module = 'Adult'
val[module] = pd.read_csv('../3-data/phmrc_cleaned.csv')
def get_data(module):
X = np.array(val[module].filter(regex='(^s[0-9]+|age|sex)').fillna(0))
y = np.array(val[module].gs_text34)
site = np.array(val[module].site)
return X, y, site
X, y, site = get_data(module)
X.shape
def my_resample(X, y, N2, csmf_new):
"Randomly resample X and y so that resampled cause distribution follows
csmf_new and there are N2 samples total
Parameters
----------
X : array-like, feature vectors
y : array-like, corresponding labels
N2 : int, number of samples in resampled results
csmf_new : pd.Series, distribution of resampled data
Results
-------
X_new : array-like, resampled feature vectors
y_new : array-like, corresponding resampled labels
N, I = X.shape
assert len(y) == N, 'X and y must have same length'
causes = csmf_new.index
J, = causes.shape # trailing comma for sneaky numpy reasons
# generate count of examples for each cause according to csmf_new
cnt_new = np.random.multinomial(N2, csmf_new)
# replace y_new with original values
y_new = []
for cnt, cause in zip(cnt_new, causes):
for n_j in range(cnt):
y_new.append(cause)
y_new = np.array(y_new)
# resample rows of X appropriately
X_new = np.zeros((len(y_new), I))
for j in causes:
new_rows, = np.where(y_new == j) # trailing comma for sneaky numpy reasons
candidate_rows, = np.where(y == j) # trailing comma for sneaky numpy reasons
assert len(candidate_rows) > 0, 'must have examples of each resampled cause'
old_rows = np.random.choice(candidate_rows, size=len(new_rows), replace=True)
X_new[new_rows,] = X[old_rows,]
return X_new, y_new
def random_allocation(X_train, y_train):
make predictions by random allocation
clf = sklearn.base.BaseEstimator()
def my_predict(X_test):
N = len(X_test)
J = float(len(np.unique(y_train)))
y_pred = np.ones((N, J)) / J
csmf_pred = pd.Series(y_pred.sum(axis=0),
index=np.unique(y_train)) / N
return csmf_pred
clf.my_predict = my_predict
return clf
def my_key(module, clf):
return '{}-{}'.format(module, clf)
import sklearn.model_selection
results = []
def measure_csmf_acc(my_fit_predictor, replicates=10):
my_fit_predictor : function that takes X,y returns clf object with my_predict method
clf.my_predict takes X_test, return csmf_pred
Results
-------
stores calculation in results dict,
returns calc for adults
X, y, site = get_data(module)
acc = []
np.random.seed(12345) # set seed for reproducibility
cv = sklearn.model_selection.StratifiedShuffleSplit(n_iter=replicates, test_size=0.25)
for train_index, test_index in cv.split(X, y):
# make train test split
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
# resample train set for equal class weights
J = len(np.unique(y))
csmf_flat = pd.Series(np.ones(J)/J, index=np.unique(y))
X_train, y_train = my_resample(X_train, y_train, J*100, csmf_flat)
clf = my_fit_predictor(X_train, y_train)
# resample test set to have uninformative cause distribution
csmf_rand = pd.Series(np.random.dirichlet(np.ones(J)), index=np.unique(y))
X_test_resamp, y_test_resamp = my_resample(X_test, y_test, J*100, csmf_rand)
# make predictions
csmf_pred = clf.my_predict(X_test_resamp)
# test predictions
csmf_acc = measure_prediction_quality(csmf_pred, y_test_resamp)
results.append({'csmf_acc':csmf_acc, 'key':my_key(module, clf)})
df = pd.DataFrame(results)
g = df.groupby('key')
return g.csmf_acc.describe().unstack()
baseline_csmf_acc = measure_csmf_acc(random_allocation)
baseline_csmf_acc
import sklearn.naive_bayes
def nb_pr_allocation(X_train, y_train):
clf = sklearn.naive_bayes.BernoulliNB()
clf.fit(X_train, y_train)
def my_predict(X_test):
y_pred = clf.predict_proba(X_test)
csmf_pred = pd.Series(y_pred.sum(axis=0), index=clf.classes_) / float(len(y_pred))
return csmf_pred
clf.my_predict = my_predict
return clf
measure_csmf_acc(nb_pr_allocation)
Explanation: Things we don't have time for
An approach to really do the cross-validation out of sample:
End of explanation |
1,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AdaptiveMD
Example 4 - Custom Task objects
0. Imports
Step1: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
Step2: Open all connections to the MongoDB and Session so we can get started.
Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
Step3: Now restore our old ways to generate tasks by loading the previously used generators.
Step4: A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally build so that it is compatible with radical.pilot and still is. So, if you are familiar with it, all of the following information should sould very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. Just if you are curious type
print hex(my_task.__uuid__)
Then we change directory to this folder write a running.sh bash script and execute it. This script is created from the task definition and also depends on your resource setting (which basically only contain the path to the workers directory, etc)
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
Pre-Exec
Step5: We are linking a lot of files to the worker directory and change the name for the .pdb in the process. Then call the actual python script that runs openmm. And finally move the output.dcd and the restart file back tp the trajectory folder.
There is a way to list lot's of things about tasks and we will use it a lot to see our modifications.
Step6: Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.
Step7: As expected this line was added to the end of the script.
Add staging actions
To set staging is more difficult. The reason is, that you normally have no idea where files are located and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files but also files itself can generate this commands for you.
Let's move one trajectory (directory) around a little more as an example
Step8: This looks like in the script. The default for a copy is to move a file or folder to the worker directory under the same name, but you can give it another name/location if you use that as an argument. Note that since trajectories are a directory you need to give a directory name (which end in a /)
Step9: If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared
Step10: Besides .copy you can also .move or .link files.
Step11: Local files
Let's mention these because they require special treatment. We cannot copy files to the HPC, we need to store them in the DB first.
Step12: Make sure you use file
Step13: Note that now there are 3 / in the filename, two from the
Step14: For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens.
Step15: We now have one more transfer command. But something else has changed. There is one more files listed as required. So, the task can only run, if that file exists, but since we loaded it into the DB, it exists (for us). For example the newly created trajectory 25.dcd does not exist yet. Would that be a requirement the task would fail. But let's check that it exists.
Step16: Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats.
Step17: Note that usually you place these stage commands at the top or your script.
Now we could run this task, as before and see, if it works. (Make sure you still have a worker running)
Step18: And check, that the task is running
Step19: If we did not screw up the task, it should have succeeded and we can look at the STDOUT.
Step20: Well, great, we have the pointless output and the stats of the newly staged file ntl9.pdb
How does a real script look like
Just for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.
Step21: If you really wanted to use the worker you need to initialize it and it will create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that.
Step22: Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.
Step23: Now you see that all file paths have been properly interpreted to work. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RPs way of handling files. (TODO
Step24: And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with
Step25: as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator; then the system will assume that this task was generated by that generator, so don't do it for your custom tasks, unless you generated them in a generator. Setting this allows you to tell a worker only to run tasks of certain types.
The Python RPC Task
The tasks so far are very powerful, but they lack the possibility to call a python function. Since we are using python here, it would be great to really pretend to call a python function from here and not take the detour of writing a python bash executable with arguments, etc. An example of this is the PyEmma generator, which uses this capability.
Let's do an example of this as well. Assume we have a python function in a file (you need to have your code in a file so far so that we can copy the file to the HPC if necessary). Let's create the .py file now.
Step26: Now create a PythonTask instead
Step27: and the call function has changed. Note that also now you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask so we have a .pre and .post phase available.
Step28: We call the function my_func with one argument
Step29: Well, interesting. What this actually does is write the input arguments to the function into a temporary .json file on the worker (in RP, on the local machine, which then transfers it to the remote), rename it to input.json, and read it in _run_.py. This is still a little clumsy, but it needs to be this way to be compatible with RP, which only works with files! Look at the actual script.
You see, that we really copy the .py file that contains the source code to the worker directory. All that is done automatically. A little caution on this. You can either write a function in a single file or use any installed package, but in this case the same package needs to be installed on the remote machine as well!
Let's run it and see what happens.
Step30: And wait until the task is done
Step31: The default settings will automatically save the content from the resulting output.json in the DB and you can access the data that was returned from the task at .output. In our example the result was just the size of the file in bytes
Step32: And you can use this information in an adaptive script to make decisions.
success callback
The last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions.
you can call a function of the related generator class. For this you need to create the task using PythonTask(generator)
the function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and choose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions
Step33: So we will call the default then_func of modeller or the class modeller is of.
Step34: These callbacks are called with the current project, the resulting data (which is in the modeller case a Model object) and array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs) | Python Code:
from adaptivemd import Project, File#, PythonTask, Task
Explanation: AdaptiveMD
Example 4 - Custom Task objects
0. Imports
End of explanation
project = Project('tutorial')
Explanation: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
End of explanation
print project.files
print project.generators
print project.models
Explanation: Open all connections to the MongoDB and Session so we can get started.
Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
task = engine.run(project.new_trajectory(pdb_file, 100))
task.script
Explanation: A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally built so that it is compatible with radical.pilot and still is. So, if you are familiar with it, all of the following information should sound very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. Just if you are curious type
print hex(my_task.__uuid__)
Then we change directory to this folder, write a running.sh bash script, and execute it. This script is created from the task definition and also depends on your resource setting (which basically only contains the path to the workers directory, etc.)
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
Pre-Exec: Things to happen before the main command (optional)
Main: the main commands are executed
Post-Exec: Things to happen after the main command (optional)
Okay, lots of theory, now some real code for running a task that generated a trajectory
End of explanation
print task.description
Explanation: We are linking a lot of files to the worker directory and changing the name of the .pdb in the process. Then we call the actual python script that runs openmm. And finally we move the output.dcd and the restart file back to the trajectory folder.
There is a way to list lots of things about tasks and we will use it a lot to see our modifications.
End of explanation
task.append('echo "This new line is pointless"')
print task.description
Explanation: Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.
End of explanation
traj = project.trajectories.one
transaction = traj.copy()
print transaction
Explanation: As expected this line was added to the end of the script.
Add staging actions
Setting up staging is more difficult. The reason is that you normally have no idea where files are located, and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files, but files themselves can also generate these commands for you.
Let's move one trajectory (directory) around a little more as an example
End of explanation
transaction = traj.copy('new_traj/')
print transaction
Explanation: This looks just like it does in the script. The default for a copy is to put the file or folder into the worker directory under the same name, but you can give it another name/location if you use that as an argument. Note that since a trajectory is a directory you need to give a directory name (which ends in a /)
End of explanation
transaction = traj.copy('staging:///cached_trajs/')
print transaction
Explanation: If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared://, sandbox://, staging:// as explained in the previous examples)
End of explanation
transaction = pdb_file.copy('staging:///delete.pdb')
print transaction
transaction = pdb_file.move('staging:///delete.pdb')
print transaction
transaction = pdb_file.link('staging:///delete.pdb')
print transaction
Explanation: Besides .copy you can also .move or .link files.
End of explanation
new_pdb = File('file://../files/ntl9/ntl9.pdb').load()
Explanation: Local files
Let's mention these because they require special treatment. We cannot copy files to the HPC, we need to store them in the DB first.
End of explanation
print new_pdb.location
Explanation: Make sure you use file:// to indicate that you are using a local file. The above example uses a relative path which will be replaced by an absolute one, otherwise we ran into trouble once we open the project at a different directory.
End of explanation
print new_pdb.get_file()[:300]
Explanation: Note that now there are 3 / in the filename, two from the :// and one from the root directory of your machine
The load() at the end really loads the file and when you save this File now it will contain the content of the file. You can access this content as seen in the previous example.
End of explanation
transaction = new_pdb.transfer()
print transaction
task.append(transaction)
print task.description
Explanation: For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens.
End of explanation
new_pdb.exists
Explanation: We now have one more transfer command. But something else has changed. There is one more file listed as required. So the task can only run if that file exists, but since we loaded it into the DB, it exists (for us). For example, the newly created trajectory 25.dcd does not exist yet. Were that a requirement, the task would fail. But let's check that it exists.
End of explanation
task.append('stat ntl9.pdb')
Explanation: Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats.
End of explanation
project.queue(task)
Explanation: Note that usually you place these stage commands at the top of your script.
Now we could run this task, as before, and see if it works. (Make sure you still have a worker running.)
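If you prefer the notebook to block until the worker has finished the task, a simple polling loop like this works (an illustrative sketch only; it assumes task.state ends up in the 'success' or 'fail' values used in the earlier examples):
import time
while task.state not in ['success', 'fail']:   # assumed terminal states
    time.sleep(5)                              # check every few seconds
print task.state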
End of explanation
task.state
Explanation: And check that the task is running
End of explanation
print task.stdout
Explanation: If we did not screw up the task, it should have succeeded and we can look at the STDOUT.
End of explanation
from adaptivemd import WorkerScheduler
sc = WorkerScheduler(project.resource)
Explanation: Well, great: we have the pointless output and the stats of the newly staged file ntl9.pdb.
What does a real script look like?
Just for fun, let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.
End of explanation
sc.project = project
Explanation: If you really wanted to use the worker, you would need to initialize it, and it would create directories and stage files for the generators, etc. For that you call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that.
End of explanation
print '\n'.join(sc.task_to_script(task))
Explanation: Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.
End of explanation
task = Task()
task.append('touch staging:///my_file.txt')
print '\n'.join(sc.task_to_script(task))
Explanation: Now you see that all file paths have been properly interpreted to work. Notice that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RP's way of handling files. (TODO: We might change this to just write to the target file. Need to check if that is still consistent.)
A note on file locations
One problem with bash scripts is that when you create the tasks you have no concept of where the files are actually located. To get around this, the created bash script is scanned for paths that contain prefixes like the ones we are used to, and these are interpreted in the context of the worker / scheduler. The worker is the only instance that knows everything necessary, so this is the place to fix that problem.
Let's see that in a little example, where we create an empty file in the staging area.
End of explanation
task = Task()
Explanation: And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with
End of explanation
%%file my_rpc_function.py
def my_func(f):
import os
print f
return os.path.getsize(f)
Explanation: as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator; the system will then assume that this task was generated by that generator, so don't do it for your custom tasks unless you generated them in a generator. Setting this allows you to tell a worker to only run tasks of certain types.
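A minimal sketch of what that could look like (the attribute name used here is an assumption for illustration, not taken verbatim from the adaptivemd docs):
tagged = Task()
tagged.generator = engine   # assumption: mark the task as belonging to the `engine` generator
tagged.append('echo "produced for the engine generator"')
project.queue(tagged)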
The Python RPC Task
The tasks so far are very powerful, but they lack the possibility to call a Python function. Since we are using Python here, it would be great to really pretend to call a Python function from here and not take the detour of writing a Python bash executable with arguments, etc. An example of this is the PyEmma generator, which uses this capability.
Let's do an example of this as well. Assume we have a Python function in a file (so far you need to have your code in a file, so that we can copy the file to the HPC if necessary). Let's create the .py file now.
End of explanation
task = PythonTask()
Explanation: Now create a PythonTask instead
End of explanation
from my_rpc_function import my_func
Explanation: and the call function has changed. Note that you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask, so we have a .pre and .post phase available.
End of explanation
task.call(my_func, f=project.trajectories.one)
print task.description
Explanation: We call the function my_func with one argument
End of explanation
project.queue(task)
Explanation: Well, interesting. What this actually does is write the input arguments to the function into a temporary .json file on the worker (in RP, on the local machine, which then transfers it to the remote machine), rename it to input.json, and read it in _run_.py. This is still a little clumsy, but it needs to be this way to be compatible with RP, which only works with files! Look at the actual script.
You see that we really copy the .py file that contains the source code to the worker directory. All of that is done automatically. A little caution on this: you can either write a function in a single file or use any installed package, but in the latter case the same package needs to be installed on the remote machine as well!
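For instance, a call into an installed package could look like this sketch (the package and function names are hypothetical, and the keyword arguments must be JSON-serializable):
from my_analysis_package.stats import file_summary   # hypothetical package, installed on BOTH machines
pkg_task = PythonTask()
pkg_task.call(file_summary, f=project.trajectories.one)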
Let's run it and see what happens.
End of explanation
project.wait_until(task.is_done)
Explanation: And wait until the task is done
End of explanation
task.output
Explanation: The default settings will automatically save the content from the resulting output.json in the DB, and you can access the data that was returned from the task at .output. In our example the result was just the size of the file in bytes.
End of explanation
task = modeller.execute(project.trajectories)
task.then_func_name
Explanation: And you can use this information in an adaptive script to make decisions.
success callback
The last thing we have not talked about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker, we (so far) only support function calls with the following restrictions.
you can call a function of the related generator class. For this you need to create the task using PythonTask(generator)
the function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and choose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions
End of explanation
help(modeller.then_func)
Explanation: So we will call the default then_func of modeller or the class modeller is of.
End of explanation
project.close()
Explanation: These callbacks are called with the current project, the resulting data (which in the modeller case is a Model object) and an array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs):
# add the input arguments for later reference
model.data['input']['trajectories'] = inputs['kwargs']['files']
model.data['input']['pdb'] = inputs['kwargs']['topfile']
project.models.add(model)
All it does is to add some of the input parameters to the model for later reference and then store the model in the project. You are free to define all sorts of actions here, even queue new tasks.
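As a sketch of that freedom (using only calls already shown in this example), a callback could also queue follow-up work after storing the model:
py
@staticmethod
def then_func_with_followup(project, task, model, inputs):
    project.models.add(model)
    follow_up = Task()                         # hypothetical extra work
    follow_up.append('echo "new model stored"')
    project.queue(follow_up)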
Next, we will talk about the factories for Task objects, called generators. There we will actually write a new class that does some stuff with the results.
End of explanation |
1,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # Prework | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import tensorflow as tf
c = tf.constant('Hello, world!')
with tf.Session() as sess:
print(sess.run(c))
Explanation: # Prework: Hello World
Learning objective: run a TensorFlow program in the browser.
The following is a 'Hello World' TensorFlow program.
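If you are on TensorFlow 2.x instead, sessions are gone and an equivalent sketch is simply:
import tensorflow as tf
c = tf.constant('Hello, world!')
print(c.numpy())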
End of explanation |
1,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mixtures of Gaussian processes with GPclust
This notebook accompanies the paper
Nonparameteric Clustering of Structured Time Series
James Hensman, Magnus Rattray and Neil D. Lawrence
IEEE TPAMI 2014
The code is available at https
Step1: A simple sinusoid dataset
Here's a simulated dataset that contains the simple features that we expect to have in real data sets
Step2: In the plot below, we show the underlying function for each cluster as a smooth red function, and the data associated with the cluster as thinly connected blue crosses.
Step3: Constructing and optimizing a model
Now that we have generated a data set, it's straightforward to build and optimize a clustering model. First, we need to build two GPy kernels (covariance functions), which will be used to model the underlying function and the replication noise, respectively. We'll take a wild stab at the parameters of these covariances, and let the model optimize them for us later.
The two kernels model the underlying function of the cluster, and the deviations of each gene from that underlying function. If we believe that the only corruption of the data from the cluster mean is i.i.d. noise, we can specify a GPy.kern.White covariance. In practice, it's helpful to allow correlated noise. The model of any cluster of genes then has a hierarchical structure, with the unknown cluster-specific mean drawn from a GP, and then each gene in that cluster being drawn from a GP with said unknown mean function.
To optimize the model with the default optimization settings, we call m.optimize(). To invoke the recommended merge-split procedure, call m.systematic_splits(). Note that during the splitting procedure, many calls are made to the optimize function.
Step4: Plotting and examining the posterior
The model has quite extensive plotting built in, with various options for colour, display of the data as points or connected lines, etc. Here we find that the model manages to separate all but two of the true clusters. The number of 'genes' found in each cluster is labeled in the corner of each plot.
Step5: Structure is important
Why do we have to specify two kernels in GPclust? The first kernel describes the properties of the functions which underlie each cluster. The second describes the properties of the functions which describe how each time-course (gene) deviates from the cluster.
This structure is important | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'png'#'svg' would be better, but eats memory for these big plots.
from matplotlib import pyplot as plt
import numpy as np
import GPy
import sys
sys.path.append('/home/james/work/gpclust/')
import GPclust
Explanation: Mixtures of Gaussian processes with GPclust
This notebook accompanies the paper
Nonparameteric Clustering of Structured Time Series
James Hensman, Magnus Rattray and Neil D. Lawrence
IEEE TPAMI 2014
The code is available at https://github.com/jameshensman/gpclust . The GPclust module depends on GPy.
The hierarchical Gaussian process model was fleshed out in
Hierarchical Bayesian modelling of gene expression time series
across irregularly sampled replicates and clusters
James Hensman, Neil D. Lawrence and Magnus Rattray
http://www.biomedcentral.com/1471-2105/14/252
A simple implementation of hierarchical GPs is available as part of GPy. You may also be interested in the related notebook on hierarchical GPs.
End of explanation
#generate a data set. Here's the sinusoid demo from the manuscript.
Nclust = 10
Nx = 12
Nobs = [np.random.randint(20,31) for i in range(Nclust)] #a random number of realisations in each cluster
X = np.random.rand(Nx,1)
X.sort(0)
#random frequency and phase for each cluster
base_freqs = 2*np.pi + 0.3*(np.random.rand(Nclust)-.5)
base_phases = 2*np.pi*np.random.rand(Nclust)
means = np.vstack([np.tile(np.sin(f*X+p).T,(Ni,1)) for f,p,Ni in zip(base_freqs,base_phases,Nobs)])
#add a lower frequency sinusoid for the noise
freqs = .4*np.pi + 0.01*(np.random.rand(means.shape[0])-.5)
phases = 2*np.pi*np.random.rand(means.shape[0])
offsets = 0.3*np.vstack([np.sin(f*X+p).T for f,p in zip(freqs,phases)])
Y = means + offsets + np.random.randn(*means.shape)*0.05
Explanation: A simple sinusoid dataset
Here's a simulated dataset that contains the simple features that we expect to have in real data sets: smooth processes (here, sinusoids) corrupted by further smooth processes (here, more sinusoids) as well as noise.
End of explanation
#plotting.
x_plot, xmin, xmax = GPy.plotting.matplot_dep.base_plots.x_frame1D(X)
plt.figure(figsize=(18,6))
index_starts = np.hstack([0, np.cumsum(Nobs[:-1])])
index_stops = np.cumsum(Nobs)
for n in range(Nclust):
plt.subplot(2,Nclust/2, n+1)
plt.plot(X, Y[index_starts[n]:index_stops[n]].T, 'b', marker='x',ms=4, mew=1, linewidth=0.2)
plt.plot(x_plot, np.sin(base_freqs[n]*x_plot+base_phases[n]), 'r', linewidth=2)
GPy.plotting.matplot_dep.base_plots.align_subplots(2, Nclust/2, xlim=(xmin, xmax))
Explanation: In the plot below, we show the underlying function for each cluster as a smooth red function, and the data associated with the cluster as thinly connected blue crosses.
End of explanation
k_underlying = GPy.kern.RBF(input_dim=1, variance=0.1, lengthscale=0.1)
k_corruption = GPy.kern.RBF(input_dim=1, variance=0.01, lengthscale=0.1) + GPy.kern.White(1, variance=0.001)
m = GPclust.MOHGP(X, k_underlying, k_corruption, Y, K=10, prior_Z='DP', alpha=1.0)
m.optimize()
m.systematic_splits(verbose=False)
Explanation: Constructing and optimizing a model
Now that we have generated a data set, it's straightforward to build and optimize a clustering model. First, we need to build two GPy kernels (covariance functions), which will be used to model the underlying function and the replication noise, respectively. We'll take a wild stab at the parameters of these covariances, and let the model optimize them for us later.
The two kernels model the underlying function of the cluster, and the deviations of each gene from that underlying function. If we believe that the only corruption of the data from the cluster mean is i.i.d. noise, we can specify a GPy.kern.White covariance. In practice, it's helpful to allow correlated noise. The model of any cluster of genes then has a hierarchical structure, with the unknown cluster-specific mean drawn from a GP, and then each gene in that cluster being drawn from a GP with said unknown mean function.
To optimize the model with the default optimization settings, we call m.optimize(). To invoke the recommended merge-split procedure, call m.systematic_splits(). Note that during the splitting procedure, many calls are made to the optimize function.
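If you also want hard cluster assignments after fitting, they can be read off the posterior responsibilities. A hedged sketch (this assumes the responsibilities are exposed as m.phi, which may differ between GPclust versions):
hard_labels = np.argmax(m.phi, axis=1)   # assumed attribute: one row of responsibilities per time-course
print(hard_labels[:10])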
End of explanation
plt.figure(figsize=(14,9))
m.plot(on_subplots=True, colour=True, newfig=False)
Explanation: Plotting and examining the posterior
The model has quite extensive plotting built in, with various options for colour, display of the data as points or connected lines, etc. Here we find that the model manages to separate all but two of the true clusters. The number of 'genes' found in each cluster is labeled in the corner of each plot.
End of explanation
#exactly as above, but with a white-noise kernel for the structure.
k_underlying = GPy.kern.RBF(input_dim=1, variance=0.1, lengthscale=0.1)
k_corruption = GPy.kern.White(1, variance=0.1)
m = GPclust.MOHGP(X, k_underlying, k_corruption, Y, K=10, prior_Z='DP', alpha=1.0)
m.optimize()
m.systematic_splits(verbose=False)
plt.figure(figsize=(14,9))
m.plot(on_subplots=True, colour=True, newfig=False)
Explanation: Structure is important
Why do we have to specify two kernels in GPclust? The first kernel describes the properties of the functions which underlie each cluster. The second describes the properties of the functions which describe how each time-course (gene) deviates from the cluster.
This structure is important: if we model the deviation of each time-course from the cluster as simply noise, it's more difficult to infer the correct clusters. Such a model can be constructed in GPclust by using a white (noise) kernel for the structure, as follows.
End of explanation |
1,489 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a file with arrays of different shapes. I want to zero-pad all the arrays to match the largest shape. The largest shape is (93,13). | Problem:
import numpy as np
a = np.ones((41, 12))
shape = (93, 13)
result = np.pad(a, ((0, shape[0]-a.shape[0]), (0, shape[1]-a.shape[1])), 'constant') |
1,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Explanation for Classification Models
This document describes the usage of a classification model to provide an explanation for a given prediction.
Model explanation provides the ability to interpret the effect of the predictors on the composition of an individual score. These predictors can then be ranked according to their contribution in the final score (leading to a positive or negative decision).
Model explanation has always been used in credit risk applications in the presence of regulatory requirements. The credit company is expected to give the customer the main (top n) reasons why the credit application was rejected (also known as reason codes).
Model explanation was also recently introduced by the European Union’s new General Data Protection Regulation (GDPR, https
Step1: For the classification task, we will build a random forest classifier and train it on part of the full dataset (ridge regression is used later only for the local per-bin models)
Step2: Model Explanation
The goal here is to be able, for a given individual, to quantify the impact of each predictor on the final score.
For our model, we will do this by analyzing cross statistics between (binned) predictors and the (binned) final score.
For each score bin, we fit a linear model locally and use it to explain the score. This is a generalization of the linear case, based on the fact that any model can be approximated well enough locally by a linear function (inside each score_bin). The more score bins we use, the more data we have and the better the approximation is.
For a random forest, the score can be seen as the probability of the positive class.
Step3: For simplicity, to describe our method, we use 5 score bins and 5 predictor bins.
We fit our local models on the training dataset; each model is fit on the values inside its score bin.
Step4: From the table above, we see that lower score values (score_bin_0) are all around zero probability and are not impacted by the predictor values, higher score values (score_bin_5) are all around 1 and are also not impacted. This is what one expects from a good classification model.
In score bin 3, the score values increase significantly with mean area_bin and decrease with mean radius_bin values.
Predictor Effects
Predictor effects describe the impact of specific predictor values on the final score. For example, some values of a predictor can increase or decrease the score locally by 0.10 or more points and change the negative decision to a positive one.
The predictor effect reflects how a specific predictor increases the score (above or below the mean local contribution of this variable).
Step5: The previous sample shows that the first individual lost 0.000000 score points due to the feature $X_1$, gained 0.003994 with the feature $X_2$, etc.
Reason Codes
The reason codes are a user-oriented representation of the decision making process. These are the predictors ranked by their effects. | Python Code:
from sklearn import datasets
import pandas as pd
%matplotlib inline
ds = datasets.load_breast_cancer();
NC = 4
lFeatures = ds.feature_names[0:NC]
df_orig = pd.DataFrame(ds.data[:,0:NC] , columns=lFeatures)
df_orig['TGT'] = ds.target
df_orig.sample(6, random_state=1960)
Explanation: Model Explanation for Classification Models
This document describes the usage of a classification model to provide an explanation for a given prediction.
Model explanation provides the ability to interpret the effect of the predictors on the composition of an individual score. These predictors can then be ranked according to their contribution in the final score (leading to a positive or negative decision).
Model explanation has always been used in credit risk applications in the presence of regulatory requirements. The credit company is expected to give the customer the main (top n) reasons why the credit application was rejected (also known as reason codes).
Model explanation was also recently introduced by the European Union’s new General Data Protection Regulation (GDPR, https://arxiv.org/pdf/1606.08813.pdf) to add the possibility to control the increasing use of machine learning algorithms in routine decision-making processes.
The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.
The process we will use here is similar to LIME. The main difference is that LIME samples data around a score value locally, while here we perform a full cross-statistics computation between the predictors and the score and use a local piece-wise linear approximation.
Sample scikit-learn Classification Model
Here, we will use a scikit-learn classification model on a standard dataset (breast cancer detection model).
The dataset used contains 30 predictor variables (numerical features) and one binary target (dependent variable). For practical reasons, we will restrict our study to the first 4 predictors in this document.
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=120, random_state = 1960)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_orig[lFeatures].values,
df_orig['TGT'].values,
test_size=0.2,
random_state=1960)
df_train = pd.DataFrame(X_train , columns=lFeatures)
df_train['TGT'] = y_train
df_test = pd.DataFrame(X_test , columns=lFeatures)
df_test['TGT'] = y_test
clf.fit(X_train , y_train)
# clf.predict_proba(df[lFeatures])[:,1]
Explanation: For the classification task, we will build a random forest classifier and train it on part of the full dataset (ridge regression is used later only for the local per-bin models)
End of explanation
from sklearn.linear_model import *
def create_score_stats(df, feature_bins = 4 , score_bins=30):
df_binned = df.copy()
df_binned['Score'] = clf.predict_proba(df[lFeatures].values)[:,0]
df_binned['Score_bin'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=False, duplicates='drop')
df_binned['Score_bin_labels'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=None, duplicates='drop')
for col in lFeatures:
df_binned[col + '_bin'] = pd.qcut(df[col] , feature_bins, labels=False, duplicates='drop')
binned_features = [col + '_bin' for col in lFeatures]
lInterpolated_Score= pd.Series(index=df_binned.index)
bin_classifiers = {}
coefficients = {}
intercepts = {}
for b in range(score_bins):
bin_clf = Ridge(random_state = 1960)
bin_indices = (df_binned['Score_bin'] == b)
# print("PER_BIN_INDICES" , b , bin_indexes)
bin_data = df_binned[bin_indices]
bin_X = bin_data[binned_features]
bin_y = bin_data['Score']
if(bin_y.shape[0] > 0):
bin_clf.fit(bin_X , bin_y)
bin_classifiers[b] = bin_clf
bin_coefficients = dict(zip(lFeatures, [bin_clf.coef_.ravel()[i] for i in range(len(lFeatures))]))
# print("PER_BIN_COEFFICIENTS" , b , bin_coefficients)
coefficients[b] = bin_coefficients
intercepts[b] = bin_clf.intercept_
predicted = bin_clf.predict(bin_X)
lInterpolated_Score[bin_indices] = predicted
df_binned['Score_interp'] = lInterpolated_Score
return (df_binned , bin_classifiers , coefficients, intercepts)
Explanation: Model Explanation
The goal here is to be able, for a given individual, to quantify the impact of each predictor on the final score.
For our model, we will do this by analyzing cross statistics between (binned) predictors and the (binned) final score.
For each score bin, we fit a linear model locally and use it to explain the score. This is a generalization of the linear case, based on the fact that any model can be approximated well enough locally by a linear function (inside each score_bin). The more score bins we use, the more data we have and the better the approximation is.
For a random forest, the score can be seen as the probability of the positive class.
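As a concrete reminder of the scikit-learn convention used here (the columns of predict_proba follow clf.classes_):
proba = clf.predict_proba(df_train[lFeatures].values)   # shape (n_samples, 2)
p_class0 = proba[:, 0]   # the column used as 'Score' inside create_score_stats above
p_class1 = proba[:, 1]   # probability of the other class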
End of explanation
(df_cross_stats , per_bin_classifiers , per_bin_coefficients, per_bin_intercepts) = create_score_stats(df_train , feature_bins=5 , score_bins=10)
def debrief_score_bin_classifiers(bin_classifiers):
binned_features = [col + '_bin' for col in lFeatures]
score_classifiers_df = pd.DataFrame(index=(['intercept'] + list(binned_features)))
for (b, bin_clf) in per_bin_classifiers.items():
bin
score_classifiers_df['score_bin_' + str(b) + "_model"] = [bin_clf.intercept_] + list(bin_clf.coef_.ravel())
return score_classifiers_df
df = debrief_score_bin_classifiers(per_bin_classifiers)
df.head(10)
Explanation: For simplicity, to describe our method, we use 5 score bins and 5 predictor bins.
We fit our local models on the training dataset; each model is fit on the values inside its score bin.
End of explanation
for col in lFeatures:
lcoef = df_cross_stats['Score_bin'].apply(lambda x : per_bin_coefficients.get(x).get(col))
lintercept = df_cross_stats['Score_bin'].apply(lambda x : per_bin_intercepts.get(x))
lContrib = lcoef * df_cross_stats[col + '_bin'] + lintercept/len(lFeatures)
df1 = pd.DataFrame();
df1['contrib'] = lContrib
df1['Score_bin'] = df_cross_stats['Score_bin']
lContribMeanDict = df1.groupby(['Score_bin'])['contrib'].mean().to_dict()
lContribMean = df1['Score_bin'].apply(lambda x : lContribMeanDict.get(x))
# print("CONTRIB_MEAN" , col, lContribMean)
df_cross_stats[col + '_Effect'] = lContrib - lContribMean
df_cross_stats.sample(6, random_state=1960)
Explanation: From the table above, we see that lower score values (score_bin_0) are all around zero probability and are not impacted by the predictor values, higher score values (score_bin_5) are all around 1 and are also not impacted. This is what one expects from a good classification model.
In score bin 3, the score values increase significantly with mean area_bin and decrease with mean radius_bin values.
Predictor Effects
Predictor effects describe the impact of specific predictor values on the final score. For example, some values of a predictor can increase or decrease the score locally by 0.10 or more points and change the negative decision to a positive one.
The predictor effect reflects how a specific predictor increases the score (above or below the mean local contribution of this variable).
End of explanation
import numpy as np
reason_codes = np.argsort(df_cross_stats[[col + '_Effect' for col in lFeatures]].values, axis=1)
df_rc = pd.DataFrame(reason_codes, columns=['reason_idx_' + str(NC-c) for c in range(NC)])
df_rc = df_rc[list(reversed(df_rc.columns))]
df_rc = pd.concat([df_cross_stats , df_rc] , axis=1)
for c in range(NC):
reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x])
df_rc['reason_' + str(c+1)] = reason
# detailed_reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x] + "_bin")
# df_rc['detailed_reason_' + str(c+1)] = df_rc[['reason_' + str(c+1) , ]]
df_rc.sample(6, random_state=1960)
df_rc[['reason_' + str(NC-c) for c in range(NC)]].describe()
Explanation: The previous sample shows that the first individual lost 0.000000 score points due to the feature $X_1$, gained 0.003994 with the feature $X_2$, etc.
Reason Codes
The reason codes are a user-oriented representation of the decision making process. These are the predictors ranked by their effects.
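For a single application, the top reasons can then be read straight off the new columns, e.g.:
first_row = df_rc.iloc[0]
print(first_row['reason_1'], first_row['reason_2'])   # the two most influential predictors for that individual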
End of explanation |
1,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook illustrates the use of tables in conveying the combination of inferential and computational thinking through studying concepts of probability theory. It uses the Birthday Surprise as a running example, based on a lecture by Ani Adhikari in http
Step1: Inferential thinking. I sit down next to a random person in class, what is the chance that I have the same birthday as that person?
Step2: Wow, nearly 3%?
Computational thinking? For a group of N people, what is the probability that at least two of them have the same birthday? An algorithmic question: given K people with different birthdays, what is the probability that the next random person will have a different birthday than any of them? If we can answer this single step, we have a way to answer the general question. Yes, it is like induction. But it is constructive too. That's an algorithm.
Let's create a little table, kind of like what you might think of as a spreadsheet, to collect all of our computational and inferential thoughts about solving this problem.
Step3: Cool, we have a column of the days, but it's pretty long and we can't see it all on the page. A graph is a great way to summarize things, so let's look at what we have.
Step4: Yep, as easy as pi. The days in a year. But we really need those diminishing fractions of the days left in the year. OK, that's easy, lets build a new column of the days that are left after we've seen some. And let's look at what we've got
Step5: After we have k people with different birthdays, we have 365-k possible days left. But what we really want is the fraction of days in the year left. Obviously, divide by the number of days in the year. Let's do that and see where we are.
Step6: OK, that looks like its going from 1 to 0 just as we'd expect. And we can see how things are working together, just as we might in a spread sheet. We can focus on the data that comes out of the computation. In a spreadsheet this would be all spread around in the cells. Here the computation is clearly laid out and we can see how it progresses from one step to the next by building up the table.
We might want to select just 'fraction left' to look at.
Step7: Ah, but remember the inferential part. Given k people, what is the probability they all have different birthdays. That's the product of these diminishing fractions.
Step8: Phew that's a lot of numbers... tables gives us a little peek, but we can always look into it for more. Sure beats scrolling through 365 rows in excel!
So we need something to take the running product of a bunch of numbers. These things you'll learn to just build. But lots of folks have built useful ones already. That's a beautiful thing about computing - you can naturally build on the work of others. Here we'll use the 'cumulative product' tool from the 'numpy' library. Don't worry, you'll see that later. The important thing is that it does what we did for 2, 3 or 4 neighbors - but for all of them.
Step9: Now that we understand this by building it up step by step, could we put it all into one place that we might call a program for answering this question? Sure. | Python Code:
# HIDDEN
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import numpy as np
# datascience version number of last run of this notebook
version.__version__
Explanation: This notebook illustrates the use of tables in conveying the combination of inferential and computational thinking through studying concepts of probability theory. It uses the Birthday Surprise as a running example, based on a lecture by Ani Adhikari in http://data8.org/
End of explanation
# what is the chance that we have different birthdays?
364/365
# what is the chance that we have the same birthday?
1 - 364/365
# what is the chance that my birthday is different from both my neighbors?
(364/365) * (363/365)
# the same as one of them?
1 - (364/365) * (363/365)
# the same as one of my four neighbors?
1 - (364/365) * (363/365) * (362/365) * (361/365)
Explanation: Inferential thinking. I sit down next to a random person in class, what is the chance that I have the same birthday as that person?
End of explanation
bday = Table()
# lets start with numbering the days in the year (we'll start with zero). We don't care if they are
# in calendar order, backwards, sideways or the ways that we encounter them in meeting random people
bday["day"] = range(365)
bday
Explanation: Wow, nearly 3%?
Computational thinking? For a group of N people, what is the probability that at least two of them have the same birthday? An algorithmic question: given K people with different birthdays, what is the probability that the next random person will have a different birthday than any of them? If we can answer this single step, we have a way to answer the general question. Yes, it is like induction. But it is constructive too. That's an algorithm.
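Before building the table, here is a plain-Python sanity check of that single-step idea (just a loop, assuming 365 equally likely birthdays):
p_all_different = 1.0
for k in range(25):                      # meet 25 people one at a time
    p_all_different *= (365 - k) / 365   # chance the next person misses every birthday seen so far
print(1 - p_all_different)               # chance of at least one shared birthday among the 25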
Let's create a little table, kind of like what you might think of as a spreadsheet, to collect all of our computational and inferential thoughts about solving this problem.
End of explanation
bday.plot()
Explanation: Cool, we have a column of the days, but it's pretty long and we can't see it all on the page. A graph is a great way to summarize things, so let's look at what we have.
End of explanation
bday['left']= 364-bday['day']
bday
#of course, we could plot these
bday.plot()
# or on top of each other
bday.plot(overlay=True)
Explanation: Yep, as easy as pi. The days in a year. But we really need those diminishing fractions of the days left in the year. OK, that's easy: let's build a new column of the days that are left after we've seen some. And let's look at what we've got.
End of explanation
bday['frac']=bday['left']/365
bday
Explanation: After we have k people with different birthdays, we have 365-k possible days left. But what we really want is the fraction of days in the year left. Obviously, divide by the number of days in the year. Let's do that and see where we are.
End of explanation
bday.select('frac').plot()
Explanation: OK, that looks like it's going from 1 to 0 just as we'd expect. And we can see how things are working together, just as we might in a spreadsheet. We can focus on the data that comes out of the computation. In a spreadsheet this would be all spread around in the cells. Here the computation is clearly laid out and we can see how it progresses from one step to the next by building up the table.
We might want to select just the 'frac' (fraction left) column to look at.
End of explanation
# each column in our table is really a sequence of values
bday["frac"]
Explanation: Ah, but remember the inferential part. Given k people, what is the probability they all have different birthdays? That's the product of these diminishing fractions.
End of explanation
bday["different"] = np.cumprod(bday["frac"])
bday
# finally the probability that at least two people have the same birthday
bday['some same bday'] = 1-bday['different']
bday
# so 14% with just ten people. How about the whole story
# Table.select produces a table containing only the selected columns
bday.select('some same bday').plot()
# wow, let's look at the start of that
# Table.take produces a table containing the rows taken from a Table.
bday.take(range(50))
# Table methods generally produce new tables so they compose naturally.
# Here to convey the essence of the Birthday Surprise
bday.take(range(50)).select('some same bday').plot()
# Since indexing a column by its name gives an array, it can be indexed.
bday['some same bday'][20]
Explanation: Phew, that's a lot of numbers... the table gives us a little peek, but we can always look into it for more. Sure beats scrolling through 365 rows in Excel!
So we need something to take the running product of a bunch of numbers. These things you'll learn to just build. But lots of folks have built useful ones already. That's a beautiful thing about computing - you can naturally build on the work of others. Here we'll use the 'cumulative product' tool from the 'numpy' library. Don't worry, you'll see that later. The important thing is that it does what we did for 2, 3 or 4 neighbors - but for all of them.
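A tiny illustration of what the cumulative product does:
np.cumprod([0.5, 0.5, 0.5])   # -> array([0.5  , 0.25 , 0.125])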
End of explanation
# altogether now like a real program
bd = Table()
bd["day"] = range(365)
bd['left']= 364-bd['day']
bd['frac']=bd['left']/365
bd["different"] = np.cumprod(bd["frac"])
bd['some same bday'] = 1-bd['different']
bd.select('some same bday').take(range(50)).plot()
bd
Explanation: Now that we understand this by building it up step by step, could we put it all into one place that we might call a program for answering this question? Sure.
End of explanation |
1,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step22: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model
Step25: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
Step26: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of
Step27: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of
Step28: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step29: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step30: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step31: Prepare the request content
You are going to send the CIFAR10 image as compressed JPG image, instead of the raw uncompressed bytes
Step32: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step33: Understanding the explanations response
Preview the images and their predicted classes without the explanations. Why did the model predict these classes?
Step34: Visualize the images with AI Explanations
The images returned show the explanations for only the top class predicted by the model. This means that if one of the model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted "airplane" when it should have predicted "cat", you can see explanations for why the model classified this image as an airplane.
If you deployed an Integrated Gradients model, you can visualize its feature attributions. Currently, the highlighted pixels returned from AI Explanations show the top 60% of pixels that contributed to the model's prediction. The pixels you see after running the cell below show the pixels that most signaled the model's prediction.
Step35: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step36: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training image classification model for online prediction with explainability
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom image classification model for online prediction with explanation.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explanation.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.environ["IS_TESTING"]:
! pip3 install --upgrade tensorflow $USER_FLAG
if os.environ["IS_TESTING"]:
! apt-get update && apt-get install -y python3-opencv-headless
! apt-get install -y libgl1-mesa-dev
! pip3 install --upgrade opencv-python-headless $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
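For example, a display name could be built like this (the naming pattern here is just for illustration):
JOB_NAME = "custom_job_" + TIMESTAMP   # e.g. custom_job_20240101123456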
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
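Before launching the cloud job, you can optionally smoke-test the script locally for one epoch and a handful of steps. This is only a sketch and assumes TensorFlow and tensorflow_datasets are installed in your local environment; the flags are the ones defined in task.py above.
! python custom/trainer/task.py --model-dir=/tmp/cifar10_smoke_test --epochs=1 --steps=5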
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="cifar10_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
Explanation: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why it is loaded as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:<br/>
2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model,
model_path_to_deploy,
signatures={
"serving_default": serving_fn,
# Required for XAI
"xai_preprocess": preprocess_fn,
"xai_model": m_call,
},
)
Explanation: Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).
- image.convert_image_dtype - Changes integer pixel values to float 32.
- image.resize - Resizes the image to match the input shape for the model.
- resized / 255.0 - Rescales (normalization) the pixel data between 0 and 1.
At this point, the data can be passed to the model (m_call).
XAI Signatures
When the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default.
For XAI image models, you need to save two additional signatures from the serving function:
xai_preprocess: The preprocessing function in the serving function.
xai_model: The concrete function for calling the model.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algorithm is applied on your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact approximation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, this would be 3072 (32*32*3).
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
In the next code cell, set the variable XAI to which explainabilty algorithm you will use on your custom model.
End of explanation
random_baseline = np.random.rand(32, 32, 3)
input_baselines = [{"number_value": x} for x in random_baseline]
INPUT_METADATA = {"input_tensor_name": CONCRETE_INPUT, "modality": "image"}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"image": input_metadata}, outputs={"class": output_metadata}
)
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.
y = f(x)
Consider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for outputs is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
inputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain how the contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of) for the features, and not b since it does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for inputs is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are images, you can specify the following additional fields as reporting/visualization aids:
<blockquote>
- "modality": "image": Indicates the field values are image data.
</blockquote>
End of explanation
model = aip.Model.upload(
display_name="cifar10_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
DEPLOYED_NAME = "cifar10-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The minimum number of compute instances to provision (this is also the number initially deployed).
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
End of explanation
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
img_bytes = tf.io.read_file("tmp.jpg")  # avoid shadowing the built-in bytes type
b64str = base64.b64encode(img_bytes.numpy()).decode("utf-8")
Explanation: Prepare the request content
You are going to send the CIFAR10 image as compressed JPG image, instead of the raw uncompressed bytes:
cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.
Denormalize the image data from [0,1) range back to [0,255).
Convert the 32-bit floating point values to 8-bit unsigned integers.
tf.io.read_file: Read the compressed JPG images back into memory as raw bytes.
base64.b64encode: Encode the raw bytes into a base 64 encoded string.
End of explanation
instances_list = [{serving_input: {"b64": b64str}}]
response = endpoint.explain(instances_list)
print(response)
Explanation: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[{serving_input: {'b64': bytes}}]
Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the explain() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The prediction per instance.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: The feature attributions
End of explanation
from io import BytesIO
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
CLASSES = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]
# Note: the predictions and explanations below correspond to whichever XAI method was deployed above (Integrated Gradients in this example)
for prediction in response.predictions:
label_index = np.argmax(prediction)
class_name = CLASSES[label_index]
confidence_score = prediction[label_index]
print(
"Predicted class: "
+ class_name
+ "\n"
+ "Confidence score: "
+ str(confidence_score)
)
image = base64.b64decode(b64str)
image = BytesIO(image)
img = mpimg.imread(image, format="JPG")
plt.imshow(img, interpolation="nearest")
plt.show()
Explanation: Understanding the explanations response
Preview the images and their predicted classes without the explanations. Why did the model predict these classes?
End of explanation
import io
for explanation in response.explanations:
attributions = dict(explanation.attributions[0].feature_attributions)
label_index = explanation.attributions[0].output_index[0]
class_name = CLASSES[label_index]
b64str = attributions["image"]["b64_jpeg"]
image = base64.b64decode(b64str)
image = io.BytesIO(image)
img = mpimg.imread(image, format="JPG")
plt.imshow(img, interpolation="nearest")
plt.show()
Explanation: Visualize the images with AI Explanations
The images returned show the explanations for only the top class predicted by the model. This means that if one of the model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted "airplane" when it should have predicted "cat", you can see explanations for why the model classified this image as an airplane.
If you deployed an Integrated Gradients model, you can visualize its feature attributions. Currently, the highlighted pixels returned from AI Explanations show the top 60% of pixels that contributed to the model's prediction. The pixels you see after running the cell below show the pixels that most signaled the model's prediction.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
1,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Centrality
This evaluates the Eigenvector Centrality and PageRank implemented in Python against C++-native EVZ and PageRank. The Python implementation uses SciPy (and thus ARPACK) to compute the eigenvectors, while the C++ method implements a power iteration method itself.
Step1: First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
Step2: We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality.
Step3: Compute the EVZ using the C++ backend and also display the 10 most important vertices, just as above. This should hopefully look similar...
Please note
Step4: Now, let's take a look at the PageRank. First, compute the PageRank using the C++ backend and display the 10 most important vertices. The second argument to the algorithm is the dampening factor, i.e. the probability that a random walk just stops at a vertex and instead teleports to some other vertex.
Step5: Same in Python...
Step6: If everything went well, these should look similar, too.
Finally, we take a look at the relative differences between the computed centralities for the vertices | Python Code:
cd ../../
import networkit
G = networkit.graphio.readGraph("input/celegans_metabolic.graph", networkit.Format.METIS)
Explanation: Centrality
This evaluates the Eigenvector Centrality and PageRank implemented in Python against C++-native EVZ and PageRank. The Python implementation uses SciPy (and thus ARPACK) to compute the eigenvectors, while the C++ method implements a power iteration method itself.
End of explanation
evzSciPy = networkit.centrality.SciPyEVZ(G, normalized=True)
evzSciPy.run()
evzSciPy.scores()[:10]
Explanation: First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
End of explanation
evzSciPy.ranking()[:10]
Explanation: We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality.
End of explanation
evz = networkit.centrality.EigenvectorCentrality(G, True)
evz.run()
evz.ranking()[:10]
Explanation: Compute the EVZ using the C++ backend and also display the 10 most important vertices, just as above. This should hopefully look similar...
Please note: The normalization argument may not be passed as a named argument to the C++-backed centrality measures. This is due to some limitation in the C++ wrapping code.
End of explanation
pageRank = networkit.centrality.PageRank(G, 0.95, True)
pageRank.run()
pageRank.ranking()[:10]
Explanation: Now, let's take a look at the PageRank. First, compute the PageRank using the C++ backend and display the 10 most important vertices. The second argument to the algorithm is the dampening factor, i.e. the probability that a random walk just stops at a vertex and instead teleports to some other vertex.
End of explanation
SciPyPageRank = networkit.centrality.SciPyPageRank(G, 0.95, normalized=True)
SciPyPageRank.run()
SciPyPageRank.ranking()[:10]
Explanation: Same in Python...
End of explanation
differences = [(max(x[0], x[1]) / min(x[0], x[1])) - 1 for x in zip(evz.scores(), evzSciPy.scores())]
print("Average relative difference: {}".format(sum(differences) / len(differences)))
print("Maximum relative difference: {}".format(max(differences)))
Explanation: If everything went well, these should look similar, too.
Finally, we take a look at the relative differences between the computed centralities for the vertices:
End of explanation |
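As an additional sanity check that is not part of the original notebook, the agreement between the two EVZ implementations can also be summarised with a rank correlation; this sketch assumes SciPy is available.
from scipy.stats import spearmanr
rho, _ = spearmanr(evz.scores(), evzSciPy.scores())
print("Spearman rank correlation, C++ vs SciPy EVZ: {}".format(rho))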
1,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Exploratory Data Analysis (EDA) for Propensity Modeling
This notebook helps to
Step1: Notebook custom settings
Step2: Configuration
Edit config.yaml to update GCP configuration that is used across the notebook.
Set parameters
Step3: First, we initialize Analysis with config parameters.
Step4: 1. Define the business and ML problem
Before proceeding into EDA for Propensity Modeling, define the business problem and questions that need to be addressed by the Propensity Model. Following are some high-level questions to answer before doing EDA
Step5: 3. Understand Dataset Structure
This section helps to answer the following questions
Step6: Check daily tables
Step7: Inspect sizes of the tables
Step8: Check if there are missing tables | Python Code:
# Uncomment to install required python modules
# !sh ../utils/setup.sh
# Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
import pandas as pd
from gps_building_blocks.cloud.utils import bigquery as bigquery_utils
from utils import eda_ga
from utils import helpers
Explanation: 1. Exploratory Data Analysis (EDA) for Propensity Modeling
This notebook helps to:
check the feasibility of building a propensity model;
inspect dataset fields in order to identify relevant information for features and targets (labels);
perform an initial exploratory data analysis to identify insights that help with building the propensity model.
The Google Merchandise Store GA360 dataset is used as an example.
Requirements
Google Analytics dataset stored in BigQuery.
Install and import required modules
End of explanation
# Prints all the outputs from cell (instead of using display each time)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
Explanation: Notebook custom settings
End of explanation
configs = helpers.get_configs('config.yaml')
source_configs, dest_configs = configs.source, configs.destination
# GCP project ID where queries and other computation will be run.
PROJECT_ID = dest_configs.project_id
# BigQuery dataset name to store query results (if needed).
DATASET_NAME = dest_configs.dataset_name
# To specify how many rows to display when examining dataframes
N_ROWS = 5
params = {
'project': PROJECT_ID,
'dataset_path': f'{source_configs.project_id}.{source_configs.dataset_name}',
'verbose': True
}
Explanation: Configuration
Edit config.yaml to update GCP configuration that is used across the notebook.
Set parameters
End of explanation
bq_utils = bigquery_utils.BigQueryUtils(project_id=PROJECT_ID)
eda = eda_ga.Analysis(bq_utils=bq_utils, params=params)
Explanation: First, we initialize Analysis with config parameters.
End of explanation
schema_html = 'https://support.google.com/analytics/answer/3437719?hl=en#'
df_schema = pd.read_html(schema_html)[0]
df_schema
Explanation: 1. Define the business and ML problem
Before proceeding into EDA for Propensity Modeling, define the business problem and questions that need to be addressed by the Propensity Model. Following are some high-level questions to answer before doing EDA:
* What is the business problem you are trying to solve?
* What are the success criteria of the project?
* What target do you want to predict?
* What are the essential fields to consider as the potential features?
2. Extract dataset schema and field descriptions
Following is an example of the GA360 dataset schema and field descriptions (see the linked support page for more details), read into a Pandas DataFrame for reference:
End of explanation
table_options, description = eda.get_ds_description()
Explanation: 3. Understand Dataset Structure
This section helps to answer the following questions:
Is the dataset description available, and what does it say?
How long does the dataset stretch for, i.e., what is the entire period, and how many daily tables does it have?
How big are the daily tables?
Are there any missing days?
If the data is stored in BigQuery, then its schema can be extracted via INFORMATION_SCHEMA.
End of explanation
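The helper methods above wrap this logic; as a rough, hedged sketch of the equivalent plain query (using the standard BigQuery client rather than the project's own utilities), the dataset's table list can be pulled directly from INFORMATION_SCHEMA:
from google.cloud import bigquery
client = bigquery.Client(project=PROJECT_ID)
sql = (
    "SELECT table_name, table_type "
    f"FROM `{source_configs.project_id}.{source_configs.dataset_name}.INFORMATION_SCHEMA.TABLES` "
    "ORDER BY table_name"
)
client.query(sql).to_dataframe().head()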
tables = eda.get_tables_stats()
Explanation: Check daily tables
End of explanation
# First set of tables.
tables[:N_ROWS]
# Last set of tables.
tables[-N_ROWS:]
Explanation: Inspect sizes of the tables
End of explanation
# Filter tables to analyse permanent `daily sessions` only
mask_not_intraday = (~tables['is_intraday'])
mask_sessions = (tables['table_id'].str.startswith('ga_sessions_'))
tables_permanent = tables[mask_sessions & mask_not_intraday].sort_values(
'table_id', ascending=True)
helpers.generate_date_range_stats(tables_permanent['last_suffix'])
Explanation: Check if there are missing tables
End of explanation |
1,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MSM of the alanine dipeptide
Here we run through most of the things that can be done with this package using a simple two-state model. There are more sophisticated examples that allow for further possibilities.
The first thing one must do is download the data from the following link. Once this is done, we will import a number of libraries we will need as we run this example.
Step1: Discretizing the trajectory
We start loading the simulation data using the trajectory module. For this we use the external library MDtraj, which contains all sorts of methods for parsing and calculating interesting properties of our time-series data.
Step2: So does what we have calculated look somewhat like a Ramachandran map?
Step3: Next we proceed to discretize the trajectory based on the Ramachandran angles.
Step4: For plotting we convert helical configurations to 1 and beta to 0.
Step5: In the plot we see how the time series of continuous torsion angles converts into a time series of discrete states. We can obtain a list of states in the following way.
Step6: Building the master equation model
After having loaded our trajectory using the functionalities from the trajectory module we start building the master equation model. For this, we make use of the msm module. There are two steps corresponding to the two main classes within that module. First we create an instance of the SuperMSM, which can be used to direct the whole process of constructing and validating the MSM.
Step7: Then, using the do_msm method, we produce instances of the MSM class at a desired lag time, $\Delta t$. Each of these contains an MSM built at a specific lag time. These are stored as a dictionary in the msms attribute of the SuperMSM class.
Step8: The resulting model has a number of things we may be interested in, like its eigenvalue spectrum (in this case limited to a single relaxation time, corresponding to the exchange of helix and coil) or the equilibrium probabilities of the microstates.
Step9: Validation
However, from simply calculating these quantities we do not know how informative they really are. In order to understand whether the values we calculate are really reflective of the properties of the underlying system we resort to validation of the MSM. The two-level structure that we have described, consisting of the SuperMSM and MSM classes, allows the user to test some global convergence properties first (at the level of the SuperMSM).
Convergence tests
For validating the model we first see at which point the relaxation times are sufficiently well converged.
Step10: Here we see that from the very beginning the relaxation times are independent of the lag time ($\Delta$t) used in the construction of the model. This convergence is a good indicator of the Markovianity of the model and is a result of the use of transition based assignment. The shaded area corresponds to the range of lag times where the information we obtain is largely unreliable, because the lag time itself is longer than the relaxation time.
Chapman-Kolmogorov test
Another important validation step is to carry out the so-called Chapman-Kolmogorov test. In this case, the predictions from the MSM are validated against the simulation data used for its construction.
Step11: These plots show the decay of the population from a given initial condition. In this case, the left and right plots corresponds to starting in the E and A basins respectively. In both cases we compare the calculation from the simulation data (as circles) and the propagation from MSMs calculated at different lag times (lines). The agreement between the simulation data and the model predictions confirm the result from the convergence analysis.
Autocorrelation functions
The MSM can also be validated against the autocorrelation function (ACF) of the eigenmodes. If the simulation data is projected in the eigenmodes, then the ACF for mode $n$ should decay with a timescale equal to $-1/\lambda_n$. In this case there is only one mode to reproduce.
Step12: Calculation of the rate matrix
From the transition matrix we can calculate the rate matrix. One possibility is to use an approximate method based simply on a Taylor expansion (De Sancho, Mittal and Best, JCTC, 2013). We can check whether our approximate method gives a good result. We use short times since we have checked that short times are sufficient in this case for obtaining converged relaxation times. | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
Explanation: MSM of the alanine dipeptide
Here we run through most of the things that can be done with this package using a simple two-state model. There are more sophisticated examples that allow for further possibilities.
The first thing one must do is download the data from the following link. Once this is done, we will import a number of libraries we will need as we run this example.
End of explanation
import mdtraj as md
from mastermsm.trajectory import traj
tr = traj.TimeSeries(top='data/alaTB.gro', traj=['data/protein_only.xtc'])
print (tr.mdt)
Explanation: Discretizing the trajectory
We start loading the simulation data using the trajectory module. For this we use the external library MDtraj, which contains all sorts of methods for parsing and calculating interesting properties of our time-series data.
End of explanation
phi = md.compute_phi(tr.mdt)
psi = md.compute_psi(tr.mdt)
res = [x for x in tr.mdt.topology.residues]
fig,ax = plt.subplots(figsize=(4,4))
ax.plot(180./math.pi*phi[1],180./math.pi*psi[1],'o', markersize=1)
ax.set_xlim(-180,180)
ax.set_ylim(-180,180)
ax.xaxis.set_ticks(range(-180,181,90))
ax.yaxis.set_ticks(range(-180,181,90))
ax.set_xlabel(r'$\phi$', fontsize=18)
ax.set_ylabel(r'$\psi$', fontsize=18)
Explanation: So does what we have calculated look somewhat like a Ramachandran map?
End of explanation
tr.discretize(states=['A', 'E'])
Explanation: Next we proceed to discretize the trajectory based on the Ramachandran angles.
End of explanation
y = [0 if x == 'A' else 1 for x in tr.distraj]
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True)
ax1.plot(psi[1]*180/math.pi,'o', markersize=1)
ax2.plot(y)
ax1.set_ylabel(r'$\psi$', fontsize=14)
ax1.set_xlim(0,2000)
ax1.set_ylim(-180,180)
ax1.yaxis.set_ticks(range(-180,181,90))
ax2.set_ylabel('State')
ax2.set_xlim(0,2000)
ax2.set_ylim(-0.2,1.2)
ax2.yaxis.set_ticks([0,1])
labels = [item.get_text() for item in ax2.get_xticklabels()]
labels[0] = 'c'
labels[1] = 'h'
ax2.set_yticklabels(labels)
ax2.set_xlabel('Time [ps]')
Explanation: For plotting we convert helical configurations to 1 and beta to 0.
End of explanation
tr.find_keys()
tr.keys
tr.file_name
Explanation: In the plot we see how the time series of continuous torsion angles converts into a time series of discrete states. We can obtain a list of states in the following way.
End of explanation
from mastermsm.msm import msm
msm_alaTB = msm.SuperMSM([tr])
Explanation: Building the master equation model
After having loaded our trajectory using the functionalities from the trajectory module we start building the master equation model. For this, we make use of the msm module. There are two steps corresponding to the two main classes within that module. First we create an instance of the SuperMSM, which can be used to direct the whole process of constructing and validating the MSM.
End of explanation
lagt = 1
msm_alaTB.do_msm(lagt)
msm_alaTB.msms[lagt].do_trans()
msm_alaTB.msms[lagt].boots()
Explanation: Then, using the do_msm method, we produce instances of the MSM class at a desired lag time, $\Delta t$. Each of these contains an MSM built at a specific lag time. These are stored as a dictionary in the msms attribute of the SuperMSM class.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(5,2.5))
ax[0].errorbar([1], msm_alaTB.msms[lagt].tau_ave, msm_alaTB.msms[lagt].tau_std ,fmt='o-', markersize=10)
ax[1].errorbar([1,2], msm_alaTB.msms[lagt].peq_ave, msm_alaTB.msms[lagt].peq_std ,fmt='o-', markersize=10)
ax[0].set_ylabel(r'$\tau$ [ps]', fontsize=18)
ax[0].set_xlabel(r'$\lambda_1$', fontsize=18)
ax[1].set_ylabel(r'$P_{eq}$', fontsize=18)
ax[0].set_xticks([])
ax[1].set_xticks([1,2])
ax[1].set_xticklabels(labels[:2])
ax[1].set_xlim(0.5,2.5)
ax[0].set_ylim(0,50)
ax[1].set_ylim(0,1)
plt.tight_layout(w_pad=1)
Explanation: The resulting model has a number of things we may be interested in, like its eigenvalue spectrum (in this case limited to a single relaxation time, corresponding to the exchange of helix and coil) or the equilibrium probabilities of the microstates.
End of explanation
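If you prefer raw numbers over the plots, the same quantities can simply be printed; the attribute names below are the ones already used in the plotting cell above.
print("relaxation time (ps):", msm_alaTB.msms[lagt].tau_ave, "+/-", msm_alaTB.msms[lagt].tau_std)
print("equilibrium populations:", msm_alaTB.msms[lagt].peq_ave)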
msm_alaTB.convergence_test(time=[1, 2, 5, 7, 10, 20, 50, 100], error=True)
tau_vs_lagt = np.array([[x,msm_alaTB.msms[x].tauT[0],msm_alaTB.msms[x].tau_std[0]] \
for x in sorted(msm_alaTB.msms.keys())])
fig, ax = plt.subplots()
ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10)
#ax.plot(tau_vs_lagt[:,0],tau_vs_lagt[:,0])
ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray', alpha=0.5)
ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16)
ax.set_ylabel(r'$\tau$ [ps]', fontsize=16)
ax.set_xlim(0.8,200)
ax.set_ylim(0,70)
_ = ax.set_xscale('log')
#ax.set_yscale('log')
Explanation: Validation
However, from simply calculating these quantities we do not know how informative they really are. In order to understand whether the values we calculate are really reflective of the properties of the underlying system we resort to validation of the MSM. The two-level structure that we have described, consisting of the SuperMSM and MSM classes, allows the user to test some global convergence properties first (at the level of the SuperMSM).
Convergence tests
For validating the model we first see at which point the relaxation times are sufficiently well converged.
End of explanation
pMSM_E, pMD_E, epMD_E = msm_alaTB.ck_test(time=[1, 2, 5, 7, 10, 20, 50], init=['E'])
pMSM_A, pMD_A, epMD_A = msm_alaTB.ck_test(time=[1, 2, 5, 7, 10, 20, 50], init=['A'])
fig, ax = plt.subplots(1,2, figsize=(8,3.5), sharex=True, sharey=True)
ax[0].errorbar(pMD_E[:,0], pMD_E[:,1], epMD_E, fmt='o')
for p in pMSM_E:
ax[0].plot(p[0], p[1], label="$\Delta t$=%g"%p[0][0])
ax[0].legend(fontsize=10)
ax[1].errorbar(pMD_A[:,0], pMD_A[:,1], epMD_A, fmt='o')
for p in pMSM_A:
ax[1].plot(p[0], p[1])
ax[0].set_xscale('log')
ax[0].set_ylabel('P(t)')
ax[0].set_xlabel('Time (ps)')
ax[1].set_xlabel('Time (ps)')
plt.tight_layout()
Explanation: Here we see that from the very beginning the relaxation times are independent of the lag time ($\Delta$t) used in the construction of the model. This convergence is a good indicator of the Markovianity of the model and is a result of the use of transition based assignment. The shaded area corresponds to the range of lag times where the information we obtain is largely unreliable, because the lag time itself is longer than the relaxation time.
Chapman-Kolmogorov test
Another important validation step is to carry out the so-called Chapman-Kolmogorov test. In this case, the predictions from the MSM are validated against the simulation data used for its construction.
End of explanation
msm_alaTB.msms[2].do_trans(evecs=True)
acf = msm_alaTB.msms[2].acf_mode()
time = np.arange(len(acf[1]))*msm_alaTB.data[0].dt
fig, ax = plt.subplots()
ax.plot(time, acf[1], 'o')
ax.plot(time,np.exp(-time*1./msm_alaTB.msms[2].tauT[0]))
ax.set_xlim(0,200)
ax.set_ylim(0,1)
ax.set_xlabel('Time [ps]')
ax.set_ylabel('C$_{11}$(t)')
Explanation: These plots show the decay of the population from a given initial condition. In this case, the left and right plots corresponds to starting in the E and A basins respectively. In both cases we compare the calculation from the simulation data (as circles) and the propagation from MSMs calculated at different lag times (lines). The agreement between the simulation data and the model predictions confirm the result from the convergence analysis.
Autocorrelation functions
The MSM can also be validated against the autocorrelation function (ACF) of the eigenmodes. If the simulation data is projected in the eigenmodes, then the ACF for mode $n$ should decay with a timescale equal to $-1/\lambda_n$. In this case there is only one mode to reproduce.
End of explanation
fig, ax = plt.subplots(1,2, figsize=(7.5,3.5))
for i in [1, 2, 5, 7, 10, 20]:
msm_alaTB.msms[i].do_rate()
ax[0].errorbar(msm_alaTB.msms[i].tauT, msm_alaTB.msms[i].tauK, fmt='o', xerr=msm_alaTB.msms[i].tau_std, markersize=10, label=str(i))
ax[1].errorbar(msm_alaTB.msms[i].peqT, msm_alaTB.msms[i].peqK, fmt='o', xerr=msm_alaTB.msms[i].peq_std, markersize=10, label=str(i))
ax[0].plot([0,100],[0,100],'--', color='lightgray')
ax[0].set_xlabel(r'$\tau_T$ [ps]', fontsize=20)
ax[0].set_ylabel(r'$\tau_K$ [ps]', fontsize=20)
ax[0].set_xlim(0,60)
ax[0].set_ylim(0,60)
ax[1].plot([0.1,1],[0.1,1],'--', color='lightgray')
ax[1].set_xlabel(r'$p_T$', fontsize=20)
ax[1].set_ylabel(r'$p_K$', fontsize=20)
ax[1].set_xlim(0.2,0.8)
ax[1].set_ylim(0.2,0.8)
ax[0].legend(fontsize=9, bbox_to_anchor=(1.0, 0.65))
plt.tight_layout(pad=0.4, w_pad=3)
Explanation: Calculation of the rate matrix
From the transition matrix we can calculate the rate matrix. One possibility is to use an approximate method based simply on a Taylor expansion (De Sancho, Mittal and Best, JCTC, 2013). We can check whether this approximate method gives a good result. We use short lag times, since we have checked that they are sufficient in this case for obtaining converged relaxation times.
End of explanation |
1,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ODEs
We will use this notebook to define the differential equations that will be solved.
Each function representing a differential equation or set of differential equations should take a state vector and time array as its first two arguments; all other arguments relating to the system should follow after.
The function below describes the equation $\frac{dy}{dt}=y$ which has the analytical solution $y(t)=y_0e^t$
Step1: The function below describes the equation $\dfrac{dy}{dt}=\dfrac{-y}{\tau}$ which has the analytical solution $y(t)=y_0e^{-\dfrac{t}{\tau}}$
Step2: The function below describes the equation $\dfrac{dy}{dt}=t-y^2$
Step3: The function below describes the equation $\dfrac{dy}{dt}=-y^3+\sin{t}$
Step4: The function below represents the motion of a simple damped pendulum which is described by the following equation.
$$ \ddot\theta(t) + \alpha \dot\theta(t) + \frac{g}{L} \sin(\theta(t))=0$$
This is a second-order system and must be rewritten as a system of first-order equations to calculate a numerical solution.
This splitting works as follows
Step5: The function below describes a simple pendulum which has had a small angle approximation in normalised units
$$\dfrac{d^2x}{dt^2} + x = 0$$
This can be split as follows and solved numerically
Step6: The cell below defines the differential form of the Logistic equation
Step7: The cell below defines the Lotka-Volterra Model which describes population dynamics
$$
\begin{align}
\dfrac{dx}{dt}&=ax-bxy\
\dfrac{dy}{dt}&=cxy-dy
\end{align}
$$ | Python Code:
import numpy as np  # NumPy is used by several of the model functions defined below

def Exponential(y, t, args=None):
    dydt=y
    return(dydt)
Explanation: ODEs
We will use this notebook to define the differential equations that will be solved.
Each function representing a differential equation or set of differential equations should take a state vector and time array as its first two arguments; all other arguments relating to the system should follow after.
The function below describes the equation $\frac{dy}{dt}=y$ which has the analytical solution $y(t)=y_0e^t$
End of explanation
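None of the cells in this notebook integrate the equations themselves; as a quick, hedged illustration of how a function with this (y, t, ...) signature is typically consumed (assuming SciPy and NumPy are available):
import numpy as np
from scipy.integrate import odeint
t = np.linspace(0, 5, 101)            # time grid
y = odeint(Exponential, 1.0, t)       # y0 = 1, so the solution should follow exp(t)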
def RadioactiveDecay(y, t, tau):
dydt=-1.0*y/tau
return(dydt)
Explanation: The function below describes the equation $\dfrac{dy}{dt}=\dfrac{-y}{\tau}$ which has the analytical solution $y(t)=y_0e^{-\dfrac{t}{\tau}}$
End of explanation
def Nonlinear1(y,t,args=None):
dydt=t-np.square(y)
return(dydt)
Explanation: The function below describes the equation $\dfrac{dy}{dt}=t-y^2$
End of explanation
def Nonlinear2(y,t,args=None):
dydt=-y**3 + np.sin(t)
return(dydt)
Explanation: The function below describes the equation $\dfrac{dy}{dt}=-y^3+\sin{t}$
End of explanation
def Pendulum(y,t,alpha, beta):
theta, omega = y
dydt = np.array([omega, -alpha*omega - beta*np.sin(theta)])
return dydt
Explanation: The function below represents the motion of a simple damped pendulum which is described by the following equation.
$$ \ddot\theta(t) + \alpha \dot\theta(t) + \frac{g}{L} \sin(\theta(t))=0$$
This is a second order system and must be split into a pair of coupled first-order equations to calculate a numerical solution.
This splitting works as follows:
$$
\begin{align}
\dot \theta(t) &= \omega(t) \\
\dot \omega(t) &= -\alpha \omega(t) -\beta \sin(\theta(t))
\end{align}
$$
Where $\beta=\dfrac{g}{L}$
In this case the state vector $y$ consists of the angular position and angular velocity of the pendulum bob.
$$y = [\theta, \omega]$$
End of explanation
def SimplePendulum(y,t,b=0,omega=1):
x,v = y
dydt = np.array([v,-b*v-(omega**2)*x])
return dydt
Explanation: The function below describes a simple pendulum under the small-angle approximation, in normalised units. With its default arguments (b=0, omega=1) it reduces to the undamped equation
$$\dfrac{d^2x}{dt^2} + x = 0$$
This can be split as follows and solved numerically:
$$
\begin{align}
\dfrac{dx}{dt}&=y\\
\dfrac{dy}{dt}&=-x
\end{align}
$$
End of explanation
def LogisticEquation(y,t,r,k=1):
dydt=r*y*(1-y/k)
return(dydt)
Explanation: The cell below defines the differential form of the Logistic equation:
$\dfrac{dx}{dt}=rx(1-x)$
End of explanation
def LotkaVoltera(f,t,a,b,c,d):
x,y=f
dxdt=a*x-b*x*y
dydt=c*x*y-d*y
dfdt=np.array([dxdt,dydt])
return(dfdt)
def Nonlinear3(f,t):
x,y=f
dxdt=y
dydt=-x+(1-x**2)*y
dfdt=np.array([dxdt,dydt])
return(dfdt)
def NonlinearSin(x,t):
return(np.sin(x))
def Budworm(x,t,r,k):
dxdt=r*x*(1-x/k)-(x**2/(1+x**2))
return(dxdt)
def Nonlinear4(f,t):
x,y=f
dxdt=x-y
dydt=(x*x)-4
dfdt=np.array([dxdt,dydt])
return(dfdt)
def Nonlinear5(f,t):
x,y=f
dxdt=-x-y
dydt=(x*x)-4
dfdt=np.array([dxdt,dydt])
return(dfdt)
Explanation: The cell below defines the Lotka-Volterra Model which describes population dynamics
$$
\begin{align}
\dfrac{dx}{dt}&=ax-bxy\\
\dfrac{dy}{dt}&=cxy-dy
\end{align}
$$
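A hedged usage sketch (not part of the original notebook) showing how the model parameters travel through odeint's args tuple; the coefficients and initial populations below are purely illustrative.
import numpy as np
from scipy.integrate import odeint
t = np.linspace(0.0, 50.0, 2000)
f0 = np.array([10.0, 5.0])           # assumed initial prey and predator populations
a, b, c, d = 1.0, 0.1, 0.075, 1.5    # assumed model coefficients
solution = odeint(LotkaVoltera, f0, t, args=(a, b, c, d))
prey, predators = solution.T
print(prey[:3], predators[:3])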
End of explanation |
1,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/regular_expressions/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Step1: Introduction to Regular Expressions
Regular Expressions are a powerful feature of the Python programming language. You can access Python's regular expression support through the re module.
Matching Literals
A regular expression is simply a string of text. The most basic regular expression is just a string containing only alphanumeric characters.
We use the re.compile(...) method to convert the regular expression string into a Pattern object.
Step2: Now that we have a compiled regular expression, we can see if the pattern matches another string.
Step3: In the case above we found a match because 'Hello' is part of 'Hello World'.
What happens if 'Hello' is not at the start of a string?
Step4: So the match only works if the pattern matches the start of the other string. What if the case is different?
Step5: Doesn't work. By default, the match is case sensitive.
What if it is only a partial match?
Step6: From what we have seen so far, matching with a string literal is pretty much functionally equivalent to the Python startswith(...) method that already comes as part of the String class.
Step7: Well, that isn't too exciting. But it does provide us with an opportunity for a valuable lesson: Regular expressions are often not the best solution for a problem.
Step8: Zero or More
Sometimes we find ourselves in a situation where we're actually okay with "zero or more" instances of a character. For this we use the '*' sign.
In the example below we create an expression that looks for zero or more 'b' characters. In this case all of the matches are successful.
Step9: One or None
We've now seen cases where we will allow one-and-only-one of a character (exact match), one-or-more of a character, and zero-or-more of a character. The next case is the "one or none" case. For that we use the '?' sign.
Step10: M
What if you want to match a very specific number of a specific character, but you don't want to type all of those characters in? The {m} expression is great for that. The 'm' value specifies exactly how many repetitions you want.
Step11: M or More
You can also ask for m-or-more of a character. Leaving a dangling comma in the {m,} does the trick.
Step12: M through N
You can also request a specific range of repetition using {m,n}. Notice that 'n' is inclusive. This is one of the rare times that you'll find ranges in Python that are inclusive at the end. Any ideas why?
Step13: N or Fewer
Sometimes you want a specific number of repetitions or fewer. For this, you can use a comma before the 'n' parameter like {,n}. Notice that "fewer" includes zero instances of the character.
Step14: Though we have illustrated these repetition operations on single characters, they actually apply to more complex combinations of characters, as we'll see soon.
Character Sets
Matching a single character with repetition can be very useful, but often we want to work with more than one character. For that, the regular expressions need to have the concept of character sets. Character sets are contained within square brackets
Step15: Character sets can be bound to any of the repetition symbols that we have already seen. For example, if we wanted to match words that start with at least two vowels we could use the character set below.
Step16: Character sets can also be negated. Simply put a ^ symbol at the start of the character set.
Step17: Character Classes
Some groupings of characters are so common that they have a shorthand "character class" assigned to them. Common character classes are represented by a backslash and a letter designating the class. For instance \d is the class for digits.
Step18: These classes can have repetitions after them, just like character sets.
Step19: There are many common character classes.
\d matches digits
\s matches spaces, tabs, etc.
\w matches 'word' characters which include the letters of most languages, digits, and the underscore character
Step20: You can mix these classes with repetitions.
Step21: But what if you want to find everything that isn't a digit? Or everything that isn't a space?
To do that, simply put the character class in upper-case.
Step22: Placement
We've moved into some pretty powerful stuff, but up until now all of our regular expressions have started matching from the first letter of a string. That is useful, but sometimes you'd like to match from anywhere in the string, or specifically at the end of the string. Let's explore some options for moving past the first character.
The Dot
So far we have always had to have some character to match, but what if we don't care what character we encounter? The dot (.) is a placeholder for any character.
Step23: Though it might seem rather bland at first, the dot can be really useful when combined with repetition symbols.
Step24: As you can see, using the dot allows us to move past the start of the string we want to match and instead search deeper inside the target string.
Starting Anchor
Now we can search anywhere in a string. However, we might still want to add a starting anchor to the beginning of a string for part of our match. The ^ anchors our match to the start of the string.
Step25: Ending Anchor
We can anchor to the end of a string with the $ symbol.
Step26: Grouping
We have searched for exact patterns in our data, but sometimes we want either one thing or another. We can group searches with parentheses and match only one item in a group.
Step27: Grouping can also be done on a single item.
Step28: But why would you ever group a single item? It turns out that grouping is 'capture grouping' by default and allows you to extract items from a string.
Step29: In the case above, the entire string is considered group 0 because it matched the expression, but then the string 'dog' is group 1 because it was 'captured' by the parenthesis.
You can have more than one capture group
Step30: And capture groups can contain multiple values
Step31: Grouping can get even richer. For example
Step32: So far, we have compiled all of our regular expressions before using them. It turns out that many of the regular expression methods can accept a string and will compile that string for you.
You might see something like the code below in practice
Step33: sub is compiling the string "(cat|mouse)" into a pattern and then applying it to the input string.
Raw Strings
While working with Python code that uses regular expressions, you might occasionally encounter a string that looks like r'my string' instead of the 'my string' that you are accustomed to seeing.
The r designation means that the string is a raw string. Let's look at some examples to see what this means.
Step34: You'll notice that the regular string containing \t printed a tab character. The raw string printed a literal \t. Likewise the regular string printed \ while the raw string printed \\.
When processing a string, Python looks for escape sequences like \t (tab), \n (newline), \\ (backslash) and others to make your printed output more visually appealing.
Raw strings turn off that translation. This is useful for regular expressions because the backslash is a common character in regular expressions. Translating backslashes to other characters would break the expression.
Should you always use a raw string when creating a regular expression? Probably. Even if it isn't necessary now, the expression might grow over time, and it is helpful to have it in place as a safeguard.
Exercises
Exercise 1
Step35: Exercise 2
Step36: Exercise 3 | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/regular_expressions/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
import re
pattern = re.compile('Hello')
type(pattern)
Explanation: Introduction to Regular Expressions
Regular Expressions are a powerful feature of the Python programming language. You can access Python's regular expression support through the re module.
Matching Literals
A regular expression is simply a string of text. The most basic regular expression is just a string containing only alphanumeric characters.
We use the re.compile(...) method to convert the regular expression string into a Pattern object.
End of explanation
if pattern.match('Hello World'):
print("We found a match")
else:
print("No match found")
Explanation: Now that we have a compiled regular expression, we can see if the pattern matches another string.
End of explanation
if pattern.match('I said Hello World'):
print("We found a match")
else:
print("No match found")
Explanation: In the case above we found a match because 'Hello' is part of 'Hello World'.
What happens if 'Hello' is not at the start of a string?
End of explanation
if pattern.match('HELLO'):
print("We found a match")
else:
print("No match found")
Explanation: So the match only works if the pattern matches the start of the other string. What if the case is different?
End of explanation
if pattern.match('He'):
print("We found a match")
else:
print("No match found")
Explanation: Doesn't work. By default, the match is case sensitive.
What if it is only a partial match?
End of explanation
if "Hello World".startswith("Hello"):
print("We found a match")
else:
print("No match found")
Explanation: From what we have seen so far, matching with a string literal is pretty much functionally equivalent to the Python startswith(...) method that already comes as part of the String class.
End of explanation
pattern = re.compile("ab+c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Well, that isn't too exciting. But it does provide us with an opportunity for a valuable lesson: Regular expressions are often not the best solution for a problem.
As we continue on in this colab, we'll see how powerful and expressive regular expressions can be. It is tempting to whip out a regular expression for many cases where they may not be the best solution. The regular expression engine can be slow for many types of expressions. Sometimes using other built-in tools or coding a solution in standard Python is better; sometimes it isn't.
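To make that last point concrete, here is a rough, machine-dependent comparison (this snippet is an addition to the lesson, and the exact numbers will vary).
import re
import timeit
pattern = re.compile('Hello')
text = 'Hello World'
# For a simple prefix test, the built-in string method is usually at least
# as fast as the compiled regular expression.
print(timeit.timeit(lambda: text.startswith('Hello'), number=100_000))
print(timeit.timeit(lambda: pattern.match(text), number=100_000))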
Repetition
Matching exact characters one-by-one is kind of boring and doesn't allow regular expressions to showcase their true power. Let's move on to some more dynamic parts of the regular expression language. We will begin with repetition.
One or More
There are many cases where you'll need "one or more" of some character. To accomplish this, you simply add the + sign after the character that you want one or more of.
In the example below, we create an expression that looks for one or more 'b' characters. Notice how 'abc' and 'abbbbbbbc' are fine, but if we take all of the 'b' characters out, we don't get a match.
End of explanation
pattern = re.compile("ab*c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Zero or More
Sometimes we find ourselves in a situation where we're actually okay with "zero or more" instances of a character. For this we use the '*' sign.
In the example below we create an expression that looks for zero or more 'b' characters. In this case all of the matches are successful.
End of explanation
pattern = re.compile("ab?c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: One or None
We've now seen cases where we will allow one-and-only-one of a character (exact match), one-or-more of a character, and zero-or-more of a character. The next case is the "one or none" case. For that we use the '?' sign.
End of explanation
pattern = re.compile("ab{7}c")
for string in (
'abc',
'abbbbbbc',
'abbbbbbbc',
'abbbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: M
What if you want to match a very specific number of a specific character, but you don't want to type all of those characters in? The {m} expression is great for that. The 'm' value specifies exactly how many repetitions you want.
End of explanation
pattern = re.compile("ab{2,}c")
for string in (
'abc',
'abbc',
'abbbbbbbbbbbbbbbbbbbbbbbbbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: M or More
You can also ask for m-or-more of a character. Leaving a dangling comma in the {m,} does the trick.
End of explanation
pattern = re.compile("ab{4,6}c")
for string in (
'abbbc',
'abbbbc',
'abbbbbc',
'abbbbbbc',
'abbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: M through N
You can also request a specific range of repetition using {m,n}. Notice that 'n' is inclusive. This is one of the rare times that you'll find ranges in Python that are inclusive at the end. Any ideas why?
End of explanation
pattern = re.compile("ab{,4}c")
for string in (
'abbbbbc',
'abbbbc',
'abbbc',
'abbc',
'abc',
'ac',
'a',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: N or Fewer
Sometimes you want a specific number of repetitions or fewer. For this, you can use a comma before the 'n' parameter like {,n}. Notice that "fewer" includes zero instances of the character.
End of explanation
pattern = re.compile('[aeiou]')
for string in (
'a',
'e',
'i',
'o',
'u',
'x',
'ax',
'ex',
'ix',
'ox',
'ux',
'xa',
'xe',
'xi',
'xo',
'xu',
'xx',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Though we have illustrated these repetition operations on single characters, they actually apply to more complex combinations of characters, as we'll see soon.
Character Sets
Matching a single character with repetition can be very useful, but often we want to work with more than one character. For that, the regular expressions need to have the concept of character sets. Character sets are contained within square brackets: []
The character set below specifies that we'll match any string that starts with a vowel.
End of explanation
pattern = re.compile('[aeiou]{2,}')
for string in (
'aardvark',
'earth',
'eat',
'oar',
'aioli',
'ute',
'absolutely',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Character sets can be bound to any of the repetition symbols that we have already seen. For example, if we wanted to match words that start with at least two vowels we could use the character set below.
End of explanation
pattern = re.compile('[^aeiou]')
for string in (
'aardvark',
'earth',
'ice',
'oar',
'ukulele',
'bathtub',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Character sets can also be negated. Simply put a ^ symbol at the start of the character set.
End of explanation
pattern = re.compile('\d')
for string in (
'abc',
'123',
'1a2b',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Character Classes
Some groupings of characters are so common that they have a shorthand "character class" assigned to them. Common character classes are represented by a backslash and a letter designating the class. For instance \d is the class for digits.
End of explanation
pattern = re.compile('\d{4,}')
for string in (
'a',
'123',
'1234',
'12345',
'1234a',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: These classes can have repetitions after them, just like character sets.
End of explanation
pattern = re.compile('\w\s\d')
for string in (
'a',
'1 3',
'_ 4',
'w 5',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: There are many common character classes.
\d matches digits
\s matches spaces, tabs, etc.
\w matches 'word' characters which include the letters of most languages, digits, and the underscore character
End of explanation
pattern = re.compile('\d+\s\w+')
for string in (
'a',
'16 Candles',
'47 Hats',
'Number 5',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: You can mix these classes with repetitions.
End of explanation
print("Not a digit")
pattern = re.compile('\D')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
print("\n")
print("Not a space")
pattern = re.compile('\S')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
print("\n")
print("Not a word")
pattern = re.compile('\W')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: But what if you want to find everything that isn't a digit? Or everything that isn't a space?
To do that, simply put the character class in upper-case.
End of explanation
pattern = re.compile('.')
for string in (
'a',
' ',
'4',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Placement
We've moved into some pretty powerful stuff, but up until now all of our regular expressions have started matching from the first letter of a string. That is useful, but sometimes you'd like to match from anywhere in the string, or specifically at the end of the string. Let's explore some options for moving past the first character.
The Dot
So far we have always had to have some character to match, but what if we don't care what character we encounter? The dot (.) is a placeholder for any character.
End of explanation
pattern = re.compile('.*s')
for string in (
'as',
' oh no bees',
'does this match',
'maybe',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Though it might seem rather bland at first, the dot can be really useful when combined with repetition symbols.
End of explanation
pattern = re.compile('^a.*s')
for string in (
'as',
'not as',
'a string that matches',
'a fancy string that matches',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: As you can see, using the dot allows us to move past the start of the string we want to match and instead search deeper inside the target string.
Starting Anchor
Now we can search anywhere in a string. However, we might still want to add a starting anchor to the beginning of a string for part of our match. The ^ anchors our match to the start of the string.
End of explanation
pattern = re.compile('.*s$')
for string in (
'as',
'beees',
'sa',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Ending Anchor
We can anchor to the end of a string with the $ symbol.
End of explanation
pattern = re.compile('.*(cat|dog)')
for string in (
'cat',
'dog',
'fat cat',
'lazy dog',
'hog',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Grouping
We have searched for exact patterns in our data, but sometimes we want either one thing or another. We can group searches with parentheses and match only one item in a group.
End of explanation
pattern = re.compile('.*(dog)')
for string in (
'cat',
'dog',
'fat cat',
'lazy dog',
'hog',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
Explanation: Grouping can also be done on a single item.
End of explanation
pattern = re.compile('.*(dog)')
match = pattern.match("hot diggity dog")
if match:
print(match.group(0))
print(match.group(1))
Explanation: But why would you ever group a single item? It turns out that grouping is 'capture grouping' by default and allows you to extract items from a string.
End of explanation
pattern = re.compile('.*(dog).*(cat)')
match = pattern.match("hot diggity dog barked at a scared cat")
if match:
print(match.group(0))
print(match.group(1))
print(match.group(2))
Explanation: In the case above, the entire string is considered group 0 because it matched the expression, but then the string 'dog' is group 1 because it was 'captured' by the parenthesis.
You can have more than one capture group:
End of explanation
pattern = re.compile('.*(dog).*(mouse|cat)')
match = pattern.match("hot diggity dog barked at a scared cat")
if match:
print(match.group(0))
print(match.group(1))
print(match.group(2))
Explanation: And capture groups can contain multiple values:
End of explanation
pattern = re.compile('(cat|mouse)')
re.sub(pattern, 'whale', 'The dog is afraid of the mouse')
Explanation: Grouping can get even richer. For example:
What happens when you have a group within another group?
Can a group be repeated?
These are more intermediate-to-advanced applications of regular expressions that you might want to explore on your own.
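As a quick illustration of both questions (this snippet is an addition to the original lesson):
import re
nested = re.match(r'((\d{3})-(\d{4}))', '555-1234')
print(nested.group(1), nested.group(2), nested.group(3))  # '555-1234' '555' '1234'
repeated = re.match(r'(ab)+c', 'ababababc')
print(repeated.group(0), repeated.group(1))  # whole match, but only the last 'ab'
Note that a repeated capture group keeps only its final repetition; if you need every repetition, re.findall (or wrapping the whole repetition in a larger group) is the usual workaround.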
Substitution
So far we have been concerned with finding patterns in a string. Locating things is great, but sometimes you want to take action. A common action is substitution.
Say that I want to replace every instance of 'cat' or 'mouse' in a string with 'whale'. To do that I can compile a pattern that looks for 'cat' or 'mouse' and use that pattern in the re.sub method.
End of explanation
re.sub('(cat|mouse)', 'whale', 'The dog is afraid of the mouse')
Explanation: So far, we have compiled all of our regular expressions before using them. It turns out that many of the regular expression methods can accept a string and will compile that string for you.
You might see something like the code below in practice:
End of explanation
print('\tHello')
print(r'\tHello')
print('\\')
print(r'\\')
Explanation: sub is compiling the string "(cat|mouse)" into a pattern and then applying it to the input string.
Raw Strings
While working with Python code that uses regular expressions, you might occasionally encounter a string that looks like r'my string' instead of the 'my string' that you are accustomed to seeing.
The r designation means that the string is a raw string. Let's look at some examples to see what this means.
End of explanation
test_data = [
'apple',
'banana',
'grapefruit',
'apricot',
'orange'
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
Explanation: You'll notice that the regular string containing \t printed a tab character. The raw string printed a literal \t. Likewise the regular string printed \ while the raw string printed \\.
When processing a string, Python looks for escape sequences like \t (tab), \n (newline), \\ (backslash) and others to make your printed output more visually appealing.
Raw strings turn off that translation. This is useful for regular expressions because the backslash is a common character in regular expressions. Translating backslashes to other characters would break the expression.
Should you always use a raw string when creating a regular expression? Probably. Even if it isn't necessary now, the expression might grow over time, and it is helpful to have it in place as a safeguard.
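As a small illustration of why this matters for regular expressions (an addition to this lesson): to match one literal backslash, the regex engine must receive the two-character sequence \\, which takes four backslashes in a regular Python string but only two in a raw string.
import re
plain = re.compile('\\\\')   # four backslashes in a regular string
raw = re.compile(r'\\')      # two backslashes in a raw string
target = '\\usr\\local'      # a string that starts with a literal backslash
print(plain.match(target))
print(raw.match(target))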
Exercises
Exercise 1: Starts With 'a'
Create a regular expression pattern object that matches strings starting with the lower-case letter 'a'. Apply it to the test data provided. Loop over each string of test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation
test_data = [
'zoo',
'ZOO',
'bazooka',
'ZOOLANDER',
'kaZoo',
'ZooTopia',
'ZOOT Suit',
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
Explanation: Exercise 2: Contains 'zoo' or 'ZOO'
Create a regular expression pattern object that matches strings containing 'zoo' or 'ZOO'. Apply it to the test data provided. Loop over each string of the test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation
test_data = [
'sing',
'talking',
'SCREAMING',
'NeVeReNdInG',
'ingeron',
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
Explanation: Exercise 3: Endings
Create a regular expression pattern object that finds words that end with 'ing', independent of case. Apply it to the test data provided. Loop over each string of the test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation |
1,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make figures more publication ready
In this example, we show several use cases to take MNE plots and
customize them for a more publication-ready look.
Step1: Imports
We are importing everything we need for this example
Step2: Evoked plot with brain activation
Suppose we want a figure with an evoked plot on top, and the brain activation
below, with the brain subplot slightly bigger than the evoked plot. Let's
start by loading some example data <sample-dataset>.
Step3: During interactive plotting, we might see figures like this
Step4: To make a publication-ready figure, first we'll re-plot the brain on a white
background, take a screenshot of it, and then crop out the white margins.
While we're at it, let's change the colormap, set custom colormap limits and
remove the default colorbar (so we can add a smaller, vertical one later)
Step5: Now let's crop out the white margins and the white gap between hemispheres.
The screenshot has dimensions (h, w, 3), with the last axis being R, G, B
values for each pixel, encoded as integers between 0 and 255. (255,
255, 255) encodes a white pixel, so we'll detect any pixels that differ
from that
Step6: A lot of figure settings can be adjusted after the figure is created, but
many can also be adjusted in advance by updating the matplotlib rcParams dictionary.
Step7: Now let's create our custom figure. There are lots of ways to do this step.
Here we'll create the figure and the subplot axes in one step, specifying
overall figure size, number and arrangement of subplots, and the ratio of
subplot heights for each row using GridSpec keywords.
Step8: Custom timecourse with montage inset
Suppose we want a figure with some mean timecourse extracted from a number of
sensors, and we want a smaller panel within the figure to show a head outline
with the positions of those sensors clearly marked.
If you are familiar with MNE, you know that this is something that mne.viz.plot_compare_evokeds does.
Step9: Let's make a plot.
Step10: So far so good. Now let's add the smaller figure within the figure to show
exactly, which sensors we used to make the timecourse.
For that, we use an "inset_axes" that we plot into our existing axes.
The head outline with the sensor positions can be plotted using the
~mne.io.Raw object that is the source of our data.
Specifically, that object already contains all the sensor positions,
and we can plot them using the plot_sensors method.
Step11: That looks nice. But the sensor dots are way too big for our taste. Luckily,
all MNE-Python plots use Matplotlib under the hood and we can customize
each and every facet of them.
To make the sensor dots smaller, we need to first get a handle on them to
then apply a *.set_* method on them.
Step12: That's quite a lot of objects, but we know that we want to change the
sensor dots, and those are most certainly a "PathCollection" object.
So let's have a look at how many "collections" we have in the axes.
Step13: There is only one! Those must be the sensor dots we were looking for.
We finally found exactly what we needed. Sometimes this can take a bit of
experimentation. | Python Code:
# Authors: Eric Larson <[email protected]>
# Daniel McCloy <[email protected]>
# Stefan Appelhoff <[email protected]>
#
# License: BSD-3-Clause
Explanation: Make figures more publication ready
In this example, we show several use cases to take MNE plots and
customize them for a more publication-ready look.
End of explanation
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import (make_axes_locatable, ImageGrid,
inset_locator)
import mne
Explanation: Imports
We are importing everything we need for this example:
End of explanation
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_stc = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-eeg-lh.stc')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname_evoked, 'Left Auditory')
evoked.pick_types(meg='grad').apply_baseline((None, 0.))
max_t = evoked.get_peak()[1]
stc = mne.read_source_estimate(fname_stc)
Explanation: Evoked plot with brain activation
Suppose we want a figure with an evoked plot on top, and the brain activation
below, with the brain subplot slightly bigger than the evoked plot. Let's
start by loading some example data <sample-dataset>.
End of explanation
evoked.plot()
stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t,
time_viewer=False, show_traces=False)
Explanation: During interactive plotting, we might see figures like this:
End of explanation
colormap = 'viridis'
clim = dict(kind='value', lims=[4, 8, 12])
# Plot the STC, get the brain image, crop it:
brain = stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t, background='w',
colorbar=False, clim=clim, colormap=colormap,
time_viewer=False, show_traces=False)
screenshot = brain.screenshot()
brain.close()
Explanation: To make a publication-ready figure, first we'll re-plot the brain on a white
background, take a screenshot of it, and then crop out the white margins.
While we're at it, let's change the colormap, set custom colormap limits and
remove the default colorbar (so we can add a smaller, vertical one later):
End of explanation
nonwhite_pix = (screenshot != 255).any(-1)
nonwhite_row = nonwhite_pix.any(1)
nonwhite_col = nonwhite_pix.any(0)
cropped_screenshot = screenshot[nonwhite_row][:, nonwhite_col]
# before/after results
fig = plt.figure(figsize=(4, 4))
axes = ImageGrid(fig, 111, nrows_ncols=(2, 1), axes_pad=0.5)
for ax, image, title in zip(axes, [screenshot, cropped_screenshot],
['Before', 'After']):
ax.imshow(image)
ax.set_title('{} cropping'.format(title))
Explanation: Now let's crop out the white margins and the white gap between hemispheres.
The screenshot has dimensions (h, w, 3), with the last axis being R, G, B
values for each pixel, encoded as integers between 0 and 255. (255,
255, 255) encodes a white pixel, so we'll detect any pixels that differ
from that:
End of explanation
# Tweak the figure style
plt.rcParams.update({
'ytick.labelsize': 'small',
'xtick.labelsize': 'small',
'axes.labelsize': 'small',
'axes.titlesize': 'medium',
'grid.color': '0.75',
'grid.linestyle': ':',
})
Explanation: A lot of figure settings can be adjusted after the figure is created, but
many can also be adjusted in advance by updating the
:data:~matplotlib.rcParams dictionary. This is especially useful when your
script generates several figures that you want to all have the same style:
End of explanation
# figsize unit is inches
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4.5, 3.),
gridspec_kw=dict(height_ratios=[3, 4]))
# alternate way #1: using subplot2grid
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [plt.subplot2grid((7, 1), (0, 0), rowspan=3),
# plt.subplot2grid((7, 1), (3, 0), rowspan=4)]
# alternate way #2: using figure-relative coordinates
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [fig.add_axes([0.125, 0.58, 0.775, 0.3]), # left, bot., width, height
# fig.add_axes([0.125, 0.11, 0.775, 0.4])]
# we'll put the evoked plot in the upper axes, and the brain below
evoked_idx = 0
brain_idx = 1
# plot the evoked in the desired subplot, and add a line at peak activation
evoked.plot(axes=axes[evoked_idx])
peak_line = axes[evoked_idx].axvline(max_t, color='#66CCEE', ls='--')
# custom legend
axes[evoked_idx].legend(
[axes[evoked_idx].lines[0], peak_line], ['MEG data', 'Peak time'],
frameon=True, columnspacing=0.1, labelspacing=0.1,
fontsize=8, fancybox=True, handlelength=1.8)
# remove the "N_ave" annotation
for text in list(axes[evoked_idx].texts):
text.remove()
# Remove spines and add grid
axes[evoked_idx].grid(True)
axes[evoked_idx].set_axisbelow(True)
for key in ('top', 'right'):
axes[evoked_idx].spines[key].set(visible=False)
# Tweak the ticks and limits
axes[evoked_idx].set(
yticks=np.arange(-200, 201, 100), xticks=np.arange(-0.2, 0.51, 0.1))
axes[evoked_idx].set(
ylim=[-225, 225], xlim=[-0.2, 0.5])
# now add the brain to the lower axes
axes[brain_idx].imshow(cropped_screenshot)
axes[brain_idx].axis('off')
# add a vertical colorbar with the same properties as the 3D one
divider = make_axes_locatable(axes[brain_idx])
cax = divider.append_axes('right', size='5%', pad=0.2)
cbar = mne.viz.plot_brain_colorbar(cax, clim, colormap, label='Activation (F)')
# tweak margins and spacing
fig.subplots_adjust(
left=0.15, right=0.9, bottom=0.01, top=0.9, wspace=0.1, hspace=0.5)
# add subplot labels
for ax, label in zip(axes, 'AB'):
ax.text(0.03, ax.get_position().ymax, label, transform=fig.transFigure,
fontsize=12, fontweight='bold', va='top', ha='left')
Explanation: Now let's create our custom figure. There are lots of ways to do this step.
Here we'll create the figure and the subplot axes in one step, specifying
overall figure size, number and arrangement of subplots, and the ratio of
subplot heights for each row using :mod:GridSpec keywords
<matplotlib.gridspec>. Other approaches (using
:func:~matplotlib.pyplot.subplot2grid, or adding each axes manually) are
shown commented out, for reference.
End of explanation
data_path = mne.datasets.sample.data_path()
fname_raw = op.join(data_path, "MEG", "sample", "sample_audvis_raw.fif")
raw = mne.io.read_raw_fif(fname_raw)
# For the sake of the example, we focus on EEG data
raw.pick_types(meg=False, eeg=True)
Explanation: Custom timecourse with montage inset
Suppose we want a figure with some mean timecourse extracted from a number of
sensors, and we want a smaller panel within the figure to show a head outline
with the positions of those sensors clearly marked.
If you are familiar with MNE, you know that this is something that
:func:mne.viz.plot_compare_evokeds does, see an example output in
ex-hf-sef-data at the bottom.
In this part of the example, we will show you how to achieve this result on
your own figure, without having to use :func:mne.viz.plot_compare_evokeds!
Let's start by loading some example data <sample-dataset>.
End of explanation
# channels to plot:
to_plot = [f"EEG {i:03}" for i in range(1, 5)]
# get the data for plotting in a short time interval from 10 to 20 seconds
start = int(raw.info['sfreq'] * 10)
stop = int(raw.info['sfreq'] * 20)
data, times = raw.get_data(picks=to_plot,
start=start, stop=stop, return_times=True)
# Scale the data from the MNE internal unit V to µV
data *= 1e6
# Take the mean of the channels
mean = np.mean(data, axis=0)
# make a figure
fig, ax = plt.subplots(figsize=(4.5, 3))
# plot some EEG data
ax.plot(times, mean)
Explanation: Let's make a plot.
End of explanation
# recreate the figure (only necessary for our documentation server)
fig, ax = plt.subplots(figsize=(4.5, 3))
ax.plot(times, mean)
axins = inset_locator.inset_axes(ax, width="30%", height="30%", loc=2)
# pick_channels() edits the raw object in place, so we'll make a copy here
# so that our raw object stays intact for potential later analysis
raw.copy().pick_channels(to_plot).plot_sensors(title="", axes=axins)
Explanation: So far so good. Now let's add the smaller figure within the figure to show
exactly, which sensors we used to make the timecourse.
For that, we use an "inset_axes" that we plot into our existing axes.
The head outline with the sensor positions can be plotted using the
~mne.io.Raw object that is the source of our data.
Specifically, that object already contains all the sensor positions,
and we can plot them using the plot_sensors method.
End of explanation
# If we inspect our axes we find the objects contained in our plot:
print(axins.get_children())
Explanation: That looks nice. But the sensor dots are way too big for our taste. Luckily,
all MNE-Python plots use Matplotlib under the hood and we can customize
each and every facet of them.
To make the sensor dots smaller, we need to first get a handle on them to
then apply a *.set_* method on them.
End of explanation
print(axins.collections)
Explanation: That's quite a lot of objects, but we know that we want to change the
sensor dots, and those are most certainly a "PathCollection" object.
So let's have a look at how many "collections" we have in the axes.
End of explanation
sensor_dots = axins.collections[0]
# Recreate the figure once more; shrink the sensor dots; add axis labels
fig, ax = plt.subplots(figsize=(4.5, 3))
ax.plot(times, mean)
axins = inset_locator.inset_axes(ax, width="30%", height="30%", loc=2)
raw.copy().pick_channels(to_plot).plot_sensors(title="", axes=axins)
sensor_dots = axins.collections[0]
sensor_dots.set_sizes([1])
# add axis labels, and adjust bottom figure margin to make room for them
ax.set(xlabel="Time (s)", ylabel="Amplitude (µV)")
fig.subplots_adjust(bottom=0.2)
Explanation: There is only one! Those must be the sensor dots we were looking for.
We finally found exactly what we needed. Sometimes this can take a bit of
experimentation.
End of explanation |
1,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Solution Notebook
Problem
Step1: Pythonic-Code
This question has an artificial constraint that prevented the use of the slice operator and the reversed method. For completeness, the solutions for these are provided below. Note these solutions are not in-place.
Step2: Unit Test
Step3: C Algorithm
This is a classic problem in C/C++
We'll want to keep two pointers | Python Code:
from __future__ import division
def list_of_chars(chars):
if chars is None:
return None
size = len(chars)
for i in range(size//2):
chars[i], chars[size-1-i] = \
chars[size-1-i], chars[i]
return chars
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Solution Notebook
Problem: Implement a function to reverse a string (a list of characters), in-place.
Constraints
Test Cases
Algorithm
Code
Pythonic-Code
Unit Test
Bonus C Algorithm
Bonus C Code
Constraints
Can I assume the string is ASCII?
Yes
Note: Unicode strings could require special handling depending on your language
Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function?
Correct
Since Python strings are immutable, can I use a list of characters instead?
Yes
Test Cases
None -> None
[''] -> ['']
['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']
Algorithm
Since Python strings are immutable, we'll use a list of chars instead to exercise in-place string manipulation as you would get with a C string.
Iterate len(string)/2 times, starting with i = 0:
Swap i and len(string) - 1 - i
Increment i
Complexity:
* Time: O(n)
* Space: O(1)
Note:
* You could use a byte array instead of a list to do in-place string operations
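A brief sketch of that idea (an addition to this notebook): the same two-pointer swap works on a bytearray, which is mutable, so the reversal is genuinely in-place for ASCII data.
def reverse_bytearray(chars):
    if chars is None:
        return None
    i, j = 0, len(chars) - 1
    while i < j:
        chars[i], chars[j] = chars[j], chars[i]  # swap in place
        i += 1
        j -= 1
    return chars
buf = bytearray(b'foo bar')
print(reverse_bytearray(buf))  # bytearray(b'rab oof')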
Code
End of explanation
def reverse_string_alt(string):
if string is None:
return None
return string[::-1]
def reverse_string_alt2(string):
if string is None:
return None
return ''.join(reversed(string))
Explanation: Pythonic-Code
This question has an artificial constraint that prevented the use of the slice operator and the reversed method. For completeness, the solutions for these are provided below. Note these solutions are not in-place.
End of explanation
%%writefile test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
%run -i test_reverse_string.py
Explanation: Unit Test
End of explanation
# %load reverse_string.cpp
#include <stdio.h>
void Reverse(char* str) {
if (str) {
char* i = str; // first letter
char* j = str; // last letter
// find the end of the string
while (*j) {
j++;
}
// don't point to the null terminator
j--;
char tmp;
// swap chars to reverse the string
while (i < j) {
tmp = *i;
*i++ = *j;
*j-- = tmp;
}
}
}
int main() {
char test0[] = "";
char test1[] = "foo";
Reverse(NULL);
Reverse(test0);
Reverse(test1);
printf("%s \n", test0);
printf("%s \n", test1);
return 0;
}
Explanation: C Algorithm
This is a classic problem in C/C++
We'll want to keep two pointers:
* i is a pointer to the first char
* j is a pointer to the last char
To get a pointer to the last char, we need to loop through all of the characters, taking note of the null terminator.
while i < j
swap i and j
Complexity:
* Time: O(n)
* Space: In-place
Note:
* Instead of using i, you can use str instead, although this might not be as intuitive.
C Code
End of explanation |