Unnamed: 0 | text_prompt | code_prompt
---|---|---|
6,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1 - Introduction to Machine Learning
This chapter introduces some common concepts about learning (such as supervised and unsupervised learning) and some simple applications.
Supervised Learning
Classification (labels)
Regression (real)
We learn from a dataset of points together with their true response variables. If we use a probabilistic approach to this kind of inference, we want to find the probability distribution of the response $y$ given the training dataset $\mathcal{D}$ and a new point $x$ outside of it.
$$p(y\ |\ x, \mathcal{D})$$
A good guess $\hat{y}$ for $y$ is the Maximum a Posteriori estimator
Step1: Logistic regression
Step2: Non-parametric models
These models don't have a fixed, finite number of parameters. For example, the number of parameters increases with the amount of training data, as in KNN | Python Code:
%run ../src/LinearRegression.py
%run ../src/PolynomialFeatures.py
# LINEAR REGRESSION
# Generate random data
X = np.linspace(0,20,10)[:,np.newaxis]
y = 0.1*(X**2) + np.random.normal(0,2,10)[:,np.newaxis] + 20
# Fit model to data
lr = LinearRegression()
lr.fit(X,y)
# Predict new data
x_test = np.array([0,20])[:,np.newaxis]
y_predict = lr.predict(x_test)
# POLYNOMIAL REGRESSION
# Fit model to data
poly = PolynomialFeatures(2)
lr = LinearRegression()
lr.fit(poly.fit_transform(X),y)
# Predict new data
x_pol = np.linspace(0, 20, 100)[:, np.newaxis]
y_pol = lr.predict(poly.fit_transform(x_pol))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot linear regression
ax1 = fig.add_subplot(1, 2, 1)
plt.scatter(X,y)
plt.plot(x_test, y_predict, "r")
plt.xlim(0, 20)
plt.ylim(0, 50)
# Plot polynomial regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X,y)
plt.plot(x_pol, y_pol, "r")
plt.xlim(0, 20)
plt.ylim(0, 50);
Explanation: Chapter 1 - Introduction to Machine Learning
This chapter introduces some common concepts about learning (such as supervised and unsupervised learning) and some simple applications.
Supervised Learning
Classification (labels)
Regression (real)
We learn from a dataset of points together with their true response variables. If we use a probabilistic approach to this kind of inference, we want to find the probability distribution of the response $y$ given the training dataset $\mathcal{D}$ and a new point $x$ outside of it.
$$p(y\ |\ x, \mathcal{D})$$
A good guess $\hat{y}$ for $y$ is the Maximum a Posteriori estimator:
$$\hat{y} = \underset{c}{\mathrm{argmax}}\ p(y = c\ |\ x, \mathcal{D})$$
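For illustration (this snippet is not from the original notebook), the MAP estimate over a discrete set of classes is just an argmax over the posterior class probabilities; the labels and probabilities below are made up:

```python
import numpy as np

# Hypothetical posterior p(y = c | x, D) over three classes for a single test point x
classes = np.array(["cat", "dog", "bird"])
posterior = np.array([0.2, 0.7, 0.1])

# MAP estimate: the class with the highest posterior probability
y_hat = classes[np.argmax(posterior)]
print(y_hat)  # dog
```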
Unsupervised Learning
Clustering
Dimensionality Reduction / Latent variables
Discovering graph structure
Matrix completions
Parametric models
These models have a finite (and fixed) number of parameters.
Examples:
* Linear regression:
$$y(\mathbf{x}) = \mathbf{w}^\intercal\mathbf{x} + \epsilon$$
Which can be written as
$$p(y\ |\ x, \theta) = \mathcal{N}(y\ |\ \mu(x), \sigma^2) = \mathcal{N}(y\ |\ w^\intercal x, \sigma^2)$$
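The LinearRegression and PolynomialFeatures classes used in the code above are loaded with %run from ../src and are not included here. A minimal sketch of a compatible LinearRegression (ordinary least squares via a least-squares solve), offered as an illustration rather than the notebook's actual source, could look like this; a matching PolynomialFeatures would simply build the matrix of powers $[1, x, x^2, \dots]$:

```python
import numpy as np

class LinearRegression(object):
    """Ordinary least squares fit (illustrative sketch, not the notebook's implementation)."""
    def fit(self, X, y):
        # Prepend a column of ones so the intercept is learned as part of w
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        self.w = np.linalg.lstsq(Xb, y, rcond=None)[0]
        return self

    def predict(self, X):
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        return Xb.dot(self.w)
```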
End of explanation
%run ../src/LogisticRegression.py
X = np.hstack((np.random.normal(90, 2, 100), np.random.normal(110, 2, 100)))[:, np.newaxis]
y = np.array([0]*100 + [1]*100)[:, np.newaxis]
logr = LogisticRegression(learnrate=0.002, eps = 0.001)
logr.fit(X, y)
x_test = np.array([-logr.w[0]/logr.w[1]]).reshape(1,1) #np.linspace(-10, 10, 30)[:, np.newaxis]
y_probs = logr.predict_proba(x_test)[:, 0:1]
print("Probability:" + str(y_probs))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot sigmoid function
ax1 = fig.add_subplot(1, 2, 1)
t = np.linspace(-15,15,100)
plt.plot(t, logr._sigmoid(t))
# Plot logistic regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X, y)
plt.scatter(x_test, y_probs, c='r')
Explanation: Logistic regression:
Despite the name, this is a classification model
$$p(y\ |\ x, w) = \mathrm{Ber}(y\ |\ \mu(x)) = \mathrm{Ber}(y\ |\ \mathrm{sigm}(w^\intercal x))$$
where
$$\displaystyle \mathrm{sigm}(x) = \frac{e^x}{1+e^x}$$
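The LogisticRegression class is also loaded with %run and not shown. A rough sketch of a gradient-ascent implementation consistent with how it is called above (the constructor arguments and the column order of predict_proba are assumptions) might be:

```python
import numpy as np

class LogisticRegression(object):
    """Batch gradient ascent on the Bernoulli log-likelihood (illustrative sketch)."""
    def __init__(self, learnrate=0.01, eps=1e-3, max_iter=10000):
        self.learnrate, self.eps, self.max_iter = learnrate, eps, max_iter

    def _sigmoid(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X, y):
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # bias column -> w[0]
        self.w = np.zeros((Xb.shape[1], 1))
        for _ in range(self.max_iter):
            grad = Xb.T.dot(y - self._sigmoid(Xb.dot(self.w)))
            self.w += self.learnrate * grad
            if np.linalg.norm(grad) < self.eps:
                break
        return self

    def predict_proba(self, X):
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        p = self._sigmoid(Xb.dot(self.w))               # p(y = 1 | x)
        return np.hstack([p, 1.0 - p])
```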
End of explanation
%run ../src/KNearestNeighbors.py
# Generate data from 3 gaussians
gaussian_1 = np.random.multivariate_normal(np.array([1, 0.0]), np.eye(2)*0.01, size=100)
gaussian_2 = np.random.multivariate_normal(np.array([0.0, 1.0]), np.eye(2)*0.01, size=100)
gaussian_3 = np.random.multivariate_normal(np.array([0.1, 0.1]), np.eye(2)*0.001, size=100)
X = np.vstack((gaussian_1, gaussian_2, gaussian_3))
y = np.array([1]*100 + [2]*100 + [3]*100)
# Fit the model
knn = KNearestNeighbors(5)
knn.fit(X, y)
# Predict various points in space
XX, YY = np.mgrid[-5:5:.2, -5:5:.2]
X_test = np.hstack((XX.ravel()[:, np.newaxis], YY.ravel()[:, np.newaxis]))
y_test = knn.predict(X_test)
fig = plt.figure(figsize=(14, 6))
# Plot original data
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(X[y == 1,0], X[y == 1,1], 'bo')
ax1.plot(X[y == 2,0], X[y == 2,1], 'go')
ax1.plot(X[y == 3,0], X[y == 3,1], 'ro')
# Plot predicted data
ax2 = fig.add_subplot(1, 2, 2)
ax2.contourf(XX, YY, y_test.reshape(50,50));
Explanation: Non-parametric models
These models don't have a fixed, finite number of parameters. For example, the number of parameters increases with the amount of training data, as in KNN:
$$p(y=c\ |\ x, \mathcal{D}, K) = \frac{1}{K} \sum_{i \in N_K(x, \mathcal{D})} \mathbb{I}(y_i = c)$$
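The KNearestNeighbors class is likewise loaded with %run from ../src. A brute-force sketch consistent with how it is used above (an assumption, not the notebook's source) could be:

```python
import numpy as np
from collections import Counter

class KNearestNeighbors(object):
    """Brute-force K-nearest-neighbours classifier (illustrative sketch)."""
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, X_test):
        preds = []
        for x in X_test:
            dists = np.sum((self.X - x) ** 2, axis=1)   # squared Euclidean distances
            nearest = np.argsort(dists)[:self.k]        # indices of the K closest training points
            preds.append(Counter(self.y[nearest]).most_common(1)[0][0])
        return np.array(preds)
```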
End of explanation |
6,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mock Spectra
Attempting to make synthetic spectra look real.
Step2: We will try to use urllib to pull synthetic spectra from the Phoenix model atmosphere server. This way, we can avoid storing large raw spectrum files and keep only the processed spectra. To make life easier, we should define a function to spit out a Phoenix spectrum file URL given a set of input parameters.
Step3: Testing the Phoenix URL and file name resolver to ensure proper URL request in urllib.
Step4: Now try requesting the file from the Phoenix server (note
Step5: Great. So, now we have properly pulled the data to the local directory structure. However, there are several complications that need to get figured out.
1. Data must be saved to a temporary file.
2. Data must be unzipped (unxz).
3. All instances of Fortran doubles must be converted from D exponentials to E.
Now, we must trim the file as there is a significant amount of data that we don't need
Step6: Let's take a look at part of the raw spectrum, say the optical.
Step7: Of course, this is too high a resolution to be passable as a real spectrum. Two things need to happen
Step8: Finally, we convolve the Gaussian kernel with the original spectrum, being careful to preserve the shape of the original spectrum.
Step9: For comparison, we can load an SDSS template of an M3 star, which is presumably warmer than the spectrum created here. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import urllib
Explanation: Mock Spectra
Attempting to make synthetic spectra look real.
End of explanation
def phoenixFileURL(Teff, logg, FeH=0.0, aFe=0.0, brand='BT-Settl', solar_abund='CIFIST2011_2015'):
"""Create file name for a Phoenix synthetic spectrum."""
if Teff % 100.0 != 0.0:
raise ValueError('Invalid temperature request for Phoenix server.')
if logg not in np.arange(-0.5, 5.6, 0.5):
raise ValueError('Invalid log(g) request for Phoenix server.')
url = 'https://phoenix.ens-lyon.fr/Grids/{:s}/{:s}/SPECTRA'.format(brand, solar_abund)
filename = 'lte{:05.1f}{:+4.1f}-{:3.1f}a{:+3.1f}.{:s}.spec.7.xz'.format(Teff/100.0, -1.0*logg, FeH, aFe, brand)
return url, filename
Explanation: We will try to use urllib to pull synthetic spectra from the Phoenix model atmosphere server. This way, we can avoid storing large raw spectrum files and keep only the processed spectra. To make life easier, we should define a function to spit out a Phoenix spectrum file URL given a set of input parameters.
End of explanation
phoenixFileURL(3000.0, 5.0)
Explanation: Testing the Phoenix URL and file name resolver to ensure proper URL request in urllib.
End of explanation
addr, filename = phoenixFileURL(3000.0, 5.0)
urllib.urlretrieve('{0}/{1}'.format(addr, filename), filename)
spectrum = np.genfromtxt('spectra/{0}'.format(filename[:-3]), usecols=(0, 1))
Explanation: Now try requesting the file from the Phoenix server (note: need internet access)
End of explanation
spectrum = np.array([line for line in spectrum if 3000.0 <= line[0] <= 40000.0])
spectrum[:, 1] = 10.0**(spectrum[:, 1] - 8.0)
Explanation: Great. So, now we have properly pulled the data to the local directory structure. However, there are several complications that need to get figured out.
1. Data must be saved to a temporary file.
2. Data must be unzipped (unxz).
3. All instances of Fortran doubles must be converted from D exponentials to E.
Now, we must trim the file as there is a significant amount of data that we don't need: everything below 3000 Å and above 4.0 microns.
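The download, decompression, and Fortran exponent cleanup described above happen outside the cells shown here. One way those steps might be implemented (a Python 3 sketch using urllib.request, whereas the notebook itself calls Python 2's urllib, and assuming a local spectra/ directory already exists) is:

```python
import lzma
import re
import tempfile
import urllib.request

def fetch_phoenix(url, filename, outdir="spectra"):
    """Download an .xz spectrum to a temp file, decompress it, and convert D exponents to E."""
    with tempfile.NamedTemporaryFile(suffix=".xz") as tmp:
        urllib.request.urlretrieve("{0}/{1}".format(url, filename), tmp.name)
        raw = lzma.open(tmp.name).read().decode("ascii", errors="ignore")
    # Fortran writes doubles as 1.234D+05; numpy expects 1.234E+05
    cleaned = re.sub(r"(\d)D([+-]\d)", r"\1E\2", raw)
    outpath = "{0}/{1}".format(outdir, filename[:-3])   # drop the trailing .xz
    with open(outpath, "w") as out:
        out.write(cleaned)
    return outpath
```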
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.set_xlabel('Wavelength ($\\AA$)', fontsize=20.0)
ax.set_ylabel('Flux', fontsize=20.0)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.set_xlim(5000., 7000.)
ax.plot(spectrum[:,0], spectrum[:,1], '-', color='#800000')
Explanation: Let's take a look at part of the raw spectrum, say the optical.
End of explanation
fwhm = 2.5 # R ~ 1000 at 5000 Å
domain = np.arange(-5.0*fwhm, 5.0*fwhm, 0.02) # note: must have same spacing as spectrum
window = np.exp(-0.5*(domain/fwhm)**2)/np.sqrt(2.0*np.pi*fwhm**2)
# visualize the window function (Kernel)
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.set_xlabel('$\\Delta\\lambda$ ($\\AA$)', fontsize=20.0)
ax.set_ylabel('Window', fontsize=20.0)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.plot(domain, window, '-', lw=2, color='#1e90ff')
Explanation: Of course, this is too high a resolution to be passable as a real spectrum. Two things need to happen: we need to degrade the resolution and add noise. To degrade the resolution, we'll convolve the spectrum with a Gaussian kernel whose FWHM is equal to the desired spectral resolution.
End of explanation
degraded = np.convolve(spectrum[:, 1], window, mode='same')
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.set_xlabel('Wavelength ($\\AA$)', fontsize=20.0)
ax.set_ylabel('Flux', fontsize=20.0)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.set_xlim(5000., 7000.)
ax.set_ylim(0.0, 0.5)
ax.plot(spectrum[:,0], degraded/1.e7, '-', lw=2, color='#800000')
Explanation: Finally, we convolve the Gaussian kernel with the original spectrum, being careful to preserve the shape of the original spectrum.
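The explanation above also calls for adding noise, which is not shown in this excerpt. A simple way to impose a (here arbitrarily chosen) target signal-to-noise ratio with Gaussian noise would be:

```python
import numpy as np

def add_noise(flux, snr=50.0, seed=None):
    """Add Gaussian noise so that each pixel has roughly the requested signal-to-noise ratio."""
    rng = np.random.RandomState(seed)
    sigma = flux / snr                                   # per-pixel noise level for a constant SNR
    return flux + rng.normal(0.0, 1.0, size=flux.shape) * sigma

noisy = add_noise(degraded, snr=50.0)
```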
End of explanation
sdss_template = np.genfromtxt('../../../Projects/BlindSpot/spectra/tmp/SDSS_DR2_M3.template', usecols=(0, 1))
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.set_xlabel('Wavelength ($\\AA$)', fontsize=20.0)
ax.set_ylabel('Flux', fontsize=20.0)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.set_xlim(5000., 7000.)
ax.set_ylim(0., 10.)
ax.plot(sdss_template[:, 0], sdss_template[:, 1]/10. + 3.0, '-', lw=2, color='#444444')
ax.plot(spectrum[:,0], degraded/1.e6, '-', lw=2, color='#800000')
Explanation: For comparison, we can load an SDSS template of an M3 star, which is presumably warmer than the spectrum created here.
End of explanation |
6,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Query for pile-up alignments at region "x"
We can query the database to obtain a pile-up of the reads from a given readgroup.
Initialize the client
As seen in the "1kg.ipynb" example, we take the following steps to create the client object that will be used to obtain the information we desire and query the serever
Step1: Make reference to the data from the server
We query the server for the dataset, which is the 1k-genomes dataset.
We then access the reference bases. To do so, we first list the reference sets.
Step2: Reference chromosome & read group set read groups
We define our contiguous sequence with a chromosome reference, and then make a reference array for our read group sets of read groups.
Step3: Function to obtain the complement of a negative strand read
This function takes the original sequence if the read is on the negative strand and then returns the complement of the input sequence
Step4: Pile up function
This function calculates the pile-ups for a given region, that is, the position being observed. It takes as input the chromosome reference and the read groups to obtain the needed sequence reads.
Step5: Function to calculate occurrence frequency
The frequency is obtained from the occurrence of alleles at the observed position. Our function returns an array of occurrences for a given instance as well as the overall frequency. | Python Code:
import ga4gh.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
Explanation: Query for pile-up alignments at region "x"
We can query the database to obtain a pile-up of the reads from a given readgroup.
Initialize the client
As seen in the "1kg.ipynb" example, we take the following steps to create the client object that will be used to obtain the information we desire and query the serever
End of explanation
dataset = c.searchDatasets().next()
referenceSet = c.searchReferenceSets().next()
references = [r for r in c.searchReferences(referenceSetId = referenceSet.id)]
Explanation: Make reference to the data from the server
We query the server for the dataset, which is the 1k-genomes dataset.
We then access the reference bases. To do so, we first list the reference sets.
End of explanation
contig = references[0].id
rgIdsArr = []
for r in c.searchReadGroupSets(datasetId=dataset.id):
rgIdsArr.append([e for e in r.readGroups])
Explanation: Reference chromosome & read group set read groups
We define our contiguous sequence with a chromosome reference, and then make a reference array for our read group sets of read groups.
End of explanation
def Revers_Compl(Sequence):
CompSeq = list(Sequence[:])
for i in range(len(Sequence)):
if Sequence[i]=="A":
CompSeq[i] = "T"
elif Sequence[i]=="C":
CompSeq[i] = "G"
elif Sequence[i] == "G":
CompSeq[i] = "C"
elif Sequence[i] == "T":
CompSeq[i] = "A"
else:
CompSeq[i] = "N"
return "".join(CompSeq)
Explanation: Function to obtain the complement of a negative strand read
This function takes the original sequence if the read is on the negative strand and then returns the complement of the input sequence
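Note that, despite its name, the function above complements the sequence without reversing it. For reference, the same per-base mapping can be written more compactly with a lookup table (a sketch, not the notebook's code):

```python
COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def complement(sequence):
    """Complement each base, mapping anything unexpected to N (no reversal, matching the function above)."""
    return "".join(COMPLEMENT.get(base, "N") for base in sequence)
```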
End of explanation
def pileUp(contig, position, rgIdsArr):
alleles = []
for i in rgIdsArr[0]:
for sequence in c.searchReads(readGroupIds=[i.id],start = position, end = position+1, referenceId=contig):
if sequence.alignment != None:
start = sequence.alignment.position.position
observe = position-start
if sequence.alignment.position.strand == "NEG_STRAND":
Rev_Comp_Seq = Revers_Compl(sequence.alignedSequence)
allele = Rev_Comp_Seq[-(observe+1)]
alleles.append({"allele":allele, "readGroupId":i.id})
else:
allele = sequence.alignedSequence[observe]
alleles.append({"allele": allele, "readGroupId": i.id })
return alleles
Explanation: Pile up function
This function calculates the pile-ups for a given region, that is, the position being observed. It takes as input the chromosome reference and the read groups to obtain the needed sequence reads.
End of explanation
def Calc_Freq(Position):
Test = pileUp(references[0].id, Position, rgIdsArr)
tot = len(Test)
A = [{"All": "A","Frq": float(0),"Occ": 0},{"All": "C","Frq": float(0), "Occ": 0},{"All": "G","Frq": float(0), "Occ": 0},{"All": "T","Frq": float(0), "Occ": 0}]
for i in range(tot):
if Test[i]["allele"] == "A":
A[0]["Occ"] += 1
elif Test[i]["allele"]=="C":
A[1]["Occ"] += 1
elif Test[i]["allele"] == "G":
A[2]["Occ"] += 1
elif Test[i]["allele"] == "T":
A[3]["Occ"] += 1
else:
tot -= 1
A[0]["Frq"] = float(A[0]["Occ"])/float(tot)
A[1]["Frq"] = float(A[1]["Occ"])/float(tot)
A[2]["Frq"] = float(A[2]["Occ"])/float(tot)
A[3]["Frq"] = float(A[3]["Occ"])/float(tot)
return A
X = Calc_Freq(10000)
Exampl = max(X, key=lambda allele: allele["Occ"])  # pick the allele entry with the most occurrences
print "The most frequent allele is : {}, with {} occurances and overall frequency of : {}".format(Exampl["All"], Exampl["Occ"], Exampl["Frq"])
Explanation: Function to calculate occurrence frequency
The frequency is obtained from the occurrence of alleles at the observed position. Our function returns an array of occurrences for a given instance as well as the overall frequency.
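An equivalent way to tally the alleles (not part of the original notebook) is to feed the output of pileUp into collections.Counter:

```python
from collections import Counter

def allele_frequencies(position):
    """Count the alleles returned by pileUp and convert the counts to frequencies."""
    counts = Counter(read["allele"]
                     for read in pileUp(references[0].id, position, rgIdsArr)
                     if read["allele"] in "ACGT")
    total = float(sum(counts.values()))
    return {allele: n / total for allele, n in counts.items()}
```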
End of explanation |
6,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representation of data submission workflow components based on W3C-PROV
Step1: generate empty Prov document and load submission workflow representation
Step2: The Provenance Model used is based on the PROV standard
described in https
Step3: Example name spaces
(from DOI
Step4: assign information to provenance graph nodes and edges
Step5: Transform submission object to a provenance graph | Python Code:
%load_ext autoreload
%autoreload 2
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')
from dkrz_forms import form_handler
from dkrz_forms import checks
from dkrz_forms.config import test_config
from dkrz_forms.config import workflow_steps
#print test_config.cordex_directory
project_dir = test_config.cordex_directory
test1 = {'a':'b'}
test2 = {'c':'d'}
test3 = {'d':'e'}
test1.update(test2)  # dict.update takes a single mapping, so apply the updates one at a time
test1.update(test3)
print test1
Explanation: Representation of data submission workflow components based on W3C-PROV
End of explanation
from prov.model import ProvDocument
d1 = ProvDocument()
my_last_name = "ki"
my_keyword = "sk1"
form_info_json_file = project_dir + "/" + my_last_name+"_"+my_keyword+".json"
workflow_form = form_handler.load_workflow_form(form_info_json_file)
Explanation: generate empty Prov document and load submission workflow representation
End of explanation
from IPython.display import display, Image
Image(filename='key-concepts.png')
name_spaces={'sub':'http://enes.org/entities/ingest-workflow#sub',
'ing':'http://enes.org/entities/ingest-workflow#ing',
'qua':'http://enes.org/entities/ingest-workflow#qua',
'pub':'http://enes.org/entities/ingest-workflow#pub',
'wf':'http://enes.org/entities/ingest-workflow#wf',
'dm':'http://enes.org/entities/ingest-workflow#dm',
'dp':'http://enes.org/entities/ingest-workflow#dp',
'node':'http://enes.org/entities/ingest-workflow#node',
}
for key,value in name_spaces.iteritems():
d1.add_namespace(key,value)
d1.add_namespace('foaf','http://xmlns.com/foaf/0.1/')
Explanation: The Provenance Model used is based on the PROV standard
described in https://www.w3.org/TR/prov-primer/
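As a minimal, self-contained illustration of the PROV core concepts used below (not part of this workflow), the prov package can express "an agent ran an activity that generated an entity" like this:

```python
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')

doc.agent('ex:data_manager')          # who was involved
doc.activity('ex:ingest')             # what was done
doc.entity('ex:ingest_report')        # what was produced

doc.wasAssociatedWith('ex:ingest', 'ex:data_manager')
doc.wasGeneratedBy('ex:ingest_report', 'ex:ingest')
print(doc.get_provn())
```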
End of explanation
# later: organize things in bundles
data_manager_peter = {'foaf:givenName':'Peter','foaf:mbox':'[email protected]'}
data_manager_stephan = {'foaf:givenName':'Stephan','foaf:mbox':'[email protected]'}
data_manager_katharina = {'foaf:givenName':'Katharina','foaf:mbox':'[email protected]'}
data_manager_hdh = {'foaf:givenName':'hdh','foaf:mbox':'[email protected]'}
d1.entity('node:form_template')
def add_stage(agent,activity,in_state,out_state):
# in_stage exists, out_stage is generated
#d1.agent(agent, data_manager_ats)
d1.agent(agent)
d1.activity(activity)
d1.entity(out_state)
d1.wasGeneratedBy(out_state,activity)
d1.used(activity,in_state)
d1.wasAssociatedWith(activity,agent)
d1.wasDerivedFrom(out_state,in_state)
sf = workflow_form
data_provider = 'dp:'+'data_provider'
submission_agent = 'dm:'+'submission_manager'
ingest_agent = 'dm:'+'ingest_manager'
qua_agent = 'dm:'+'qua_manager'
publication_agent = 'dm:'+'publication_manager'
archival_agent = 'dm:'+'archival_manager'
add_stage(agent=data_provider,activity='wf:submission',in_state="node:form_template",out_state='node:form_filled')
add_stage(agent=submission_agent,activity='wf:review',in_state="node:form_filled",out_state='node:review_report')
add_stage(agent=ingest_agent,activity='wf:ingest',in_state="node:form_filled",out_state='node:ingest_report')
add_stage(agent=qua_agent,activity='wf:qua',in_state="node:ingest_report",out_state='node:qua_report')
add_stage(agent=publication_agent,activity='wf:publication',in_state="node:ingest_report",out_state='node:pub_report')
add_stage(agent=archival_agent,activity='wf:archival',in_state="node:pub_report",out_state='node:arch_report')
Explanation: Example name spaces
(from DOI: 10.3390/ijgi5030038, more at https://github.com/tsunagun/vocab/blob/master/all_20130125.csv)
owl Web Ontology Language http://www.w3.org/2002/07/owl#
dctype DCMI Type Vocabulary http://purl.org/dc/dcmitype/
dco DCO Ontology http://info.deepcarbon.net/schema#
prov PROV Ontology http://www.w3.org/ns/prov#
skos Simple Knowledge Organization System http://www.w3.org/2004/02/skos/core#
foaf FOAF Ontology http://xmlns.com/foaf/0.1/
vivo VIVO Ontology http://vivoweb.org/ontology/core#
bibo Bibliographic Ontology http://purl.org/ontology/bibo/
xsd XML Schema Datatype http://www.w3.org/2001/XMLSchema#
rdf Resource Description Framework http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs Resource Description Framework Schema http://www.w3.org/2000/01/rdf-schema#
End of explanation
%matplotlib inline
d1.plot()
#d1.wasAttributedTo(data_submission,'????')
Explanation: assign information to provenance graph nodes and edges
End of explanation
print d1.get_record('node:'+'form_template')[0]
wf_submission = d1.get_record('wf:submission')[0]
wf_submission.add_attributes({'sub:t':'trallala'})
print wf_submission
def get_prov_node(node_name):
return d1.get_record(node_name)[0]
print get_prov_node('node:form_template')
#d1.get_records()
form_template = get_prov_node('node:form_template')
form_filled = get_prov_node('node:form_filled')
review_report = get_prov_node('node:review_report')
ingest_report =get_prov_node('node:ingest_report')
qua_report = get_prov_node('node:qua_report')
pub_report = get_prov_node('node:pub_report')
arch_report = get_prov_node('node:arch_report')
submission_activity = get_prov_node('wf:submission')
review_activity = get_prov_node('wf:review')
ingest_activity = get_prov_node('wf:ingest')
qua_activity = get_prov_node('wf:qua')
pub_activity = get_prov_node('wf:publication')
review_agent = get_prov_node('dm:submission_manager')
ingest_agent = get_prov_node('dm:ingest_manager')
qua_agent = get_prov_node('dm:qua_manager')
pub_agent = get_prov_node('dm:publication_manager')
archival_agent = get_prov_node('dm:archival_manager')
#review = d1.get_record('node:out1_rev')[0]
#ingest = d1.get_record('node:out1_ing')[0]
#check = d1.get_record('node:out1_qua')[0]
#publication = d1.get_record('node:out1_pub')[0]
#lta = d1.get_record('node:out1_arch')[0]
# todo: generalize mapping to activity and state attributes, e.g. using naming convention for attributes
# or lists defined in workflow_steps or different namespace prefix ..
def get_atts_dict(atts_dict,form_object,namespace):
'''
get attributs from submission form object,
return attributes dictionary with keys prefixed by namespace
'''
res_dict = {}
for elem in atts_dict.keys():
res_dict[elem] = form_object.__dict__[elem]
pr_atts_dict = form_handler.prefix_dict(res_dict,namespace,atts_dict.keys())
return pr_atts_dict
#form_template_atts_list = ['source_path','form_version']
#form_filled_atts_list = ['first_name','last_name','email','timestamp','checks_done']
#form_reviewed_atts_list = ['package_path','form_path','repo','status']
#submission_atts_list = ['comment']
#review_atts_list = ['review_comment']
#ingest_atts_list = ['comment','ticket_id']
#qua_atts_list = ['comment','ticket_id','qa_tool_version']
#publish_atts_list = ['comment','ticket_id']
#submission_manager_atts_list = ['responsible_person']
#ingest_manager_atts_list = ['responsible_person']
#qua_manager_atts_list = ['responsible_person']
#publication_manager_atts_list = ['responsible_person']
#archival_manager_atts_list = ['responsible_person']
#data_ingested_atts_list = ['target_directory','drsdir_file_pattern','status']
#data_checked_atts_list = ['target_directory','follow_up_ticket','status']
#data_published_atts_list = ['pid_collections','search_string','publish_date','status']
#data_archived_atts_list = ['']
#print sf.sub.__dict__
submission_agent_atts = get_atts_dict(workflow_steps.submission_agent,sf.sub,'sub')
print submission_agent_atts
submission_activity_atts = get_atts_dict(workflow_steps.submission_activity,sf.sub,'sub')
submission_form_template_atts = get_atts_dict(workflow_steps.submission_form_template,sf.sub,'sub')
submission_form_filled_atts = get_atts_dict(workflow_steps.submission_form_filled,sf.sub,'sub')
review_agent_atts = get_atts_dict(workflow_steps.review_agent,sf.sub,'sub')
review_report_atts = get_atts_dict(workflow_steps.review_report,sf.sub,'sub')
review_activity_atts = get_atts_dict(workflow_steps.review_activity,sf.sub,'sub')
ingest_agent_atts = get_atts_dict(workflow_steps.ingest_agent,sf.ing,'ing')
ingest_activity_atts = get_atts_dict(workflow_steps.ingest_activity,sf.ing,'ing')
ingest_report_atts = get_atts_dict(workflow_steps.ingest_report,sf.ing,'ing')
qua_agent_atts = get_atts_dict(workflow_steps.qua_agent,sf.qua,'qua')
qua_activity_atts = get_atts_dict(workflow_steps.qua,sf.qua,'qua')
qua_report_atts = get_atts_dict(workflow_steps.qua_report,sf.qua,'qua')
pub_agent_atts = get_atts_dict(workflow_steps.pub_agent,sf.pub,'pub')
pub_activity_atts = get_atts_dict(workflow_steps.pub_activity,sf.pub,'pub')
pub_report_atts = get_atts_dict(workflow_steps.pub_report,sf.pub,'pub')
#archival_manager_atts_list = get_atts_dict(archival_manager_atts_list,sf.arch)
#data_archived_atts = ....
print pub_report
#ing = form_handler.prefix_dict(sf.ing.__dict__,'ing',sf.ing.__dict__.keys())
#qua = form_handler.prefix_dict(sf.qua.__dict__,'qua',sf.qua.__dict__.keys())
#pub = form_handler.prefix_dict(sf.pub.__dict__,'pub',sf.pub.__dict__.keys())
# data submit agent to be added ...
submission_agent.add_attributes(submission_agent_atts)
submission_activity.add_attributes(submission_activity_atts)
form_template.add_attributes(submission_form_template_atts)
form_filled.add_attributes(submission_form_filled_atts)
review_agent.add_attributes(review_agent_atts)
review_activity.add_attributes(review_activity_atts)
review_report.add_attributes(review_report_atts)
ingest_agent.add_attributes(ingest_agent_atts)
ingest_activity.add_attributes(ingest_activity_atts)
ingest_report.add_attributes(ingest_report_atts)
qua_agent.add_attributes(qua_agent_atts)
qua_activity.add_attributes(qua_activity_atts)
qua_report.add_attributes(qua_report_atts)
pub_agent.add_attributes(pub_agent_atts)
pub_activity.add_attributes(pub_activity_atts)
pub_report.add_attributes(pub_report_atts)
#data_archived.add_attributes(data_archived_atts)
#check.add_attributes(qua)
#publication.add_attributes(pub)
che_act = d1.get_record('subm:check')
tst = che_act[0]
test_dict = {'subm:test':'test'}
tst.add_attributes(test_dict)
print tst
tst.FORMAL_ATTRIBUTES
tst.
che_act = d1.get_record('subm:check')
#tst.formal_attributes
#tst.FORMAL_ATTRIBUTES
tst.add_attributes({'foaf:name':'tst'})
print tst.attributes
#for i in tst:
# print i
#tst.insert([('subm:givenName','sk')])
import sys
sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')
from dkrz_forms import form_handler
sf,repo = form_handler.init_form("CORDEX")
init_dict = sf.__dict__
sub_form = form_handler.prefix(sf,'subm',sf.__dict__.keys())
sub_dict = sub_form.__dict__
#init_state = d1.get_record('subm:empty')[0]
#init_state.add_attributes(init_dict)
sub_state = d1.get_record('subm:out1_sub')[0]
init_state.add_attributes(sub_dict)
tst_dict = {'test1':'val1','test2':'val2'}
tst = form_handler.submission_form(tst_dict)
print tst.__dict__
print result.__dict__
dict_from_class(sf)
Explanation: Transform submission object to a provenance graph
End of explanation |
6,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Loading sample data
We begin by loading the data that we would like to summarize into a Pandas DataFrame.
- Variables are in columns
- Encounters/observations are in rows.
Step2: Example 1
Step3: Summary of the table
Step4: Exploring the warning raised by Tukey's rule
Tukey's rule has found far outliers in Height, so we'll look at this in a boxplot
Step5: In both cases it seems that there are values that may need to be taken into account when calculating the summary statistics. For SysABP, a clearly bimodal distribution, the researcher will need to decide how to handle the peak at ~0, perhaps by cleaning the data and/or describing the issue in the summary table. For Height, the researcher may choose to report median, rather than mean.
Example 2
Step6: Summary of the table
Step9: Summary of the table
Step10: Saving the table in custom formats (LaTeX, CSV, Markdown, etc) <a name="export"></a>
Tables can be exported to file in various formats, including
Step11: Exporting your table using the to_<format>() method
Alternatively, the table can be saved to file using the Pandas to_format() method. | Python Code:
# Import numerical libraries
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
# Import tableone
try:
from tableone import TableOne, load_dataset
except (ModuleNotFoundError, ImportError):
# install on Colab
!pip install tableone
from tableone import TableOne, load_dataset
Explanation: <a href="https://colab.research.google.com/github/tompollard/tableone/blob/master/tableone.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Demonstrating the tableone package
In research papers, it is common for the first table ("Table 1") to display summary statistics of the study data. The tableone package is used to create this table. For an introduction to basic statistical reporting in biomedical journals, we recommend reading the SAMPL Guidelines. For more reading on accurate reporting in health research, visit the EQUATOR Network.
Contents
Set up:
Suggested citation
Installation
Example usage:
Creating a simple Table 1
Creating a stratified Table 1
Adding p-values and standardized mean differences
Using a custom hypothesis test to calculate P-Values
Exporting the table:
Exporting to LaTex, Markdown, HTML etc
A note for users of tableone
While we have tried to use best practices in creating this package, automation of even basic statistical tasks can be unsound if done without supervision. We encourage use of tableone alongside other methods of descriptive statistics and, in particular, visualization to ensure appropriate data handling.
It is beyond the scope of our documentation to provide detailed guidance on summary statistics, but as a primer we provide some considerations for choosing parameters when creating a summary table at: http://tableone.readthedocs.io/en/latest/bestpractice.html.
Guidance should be sought from a statistician when using tableone for a research study, especially prior to submitting the study for publication.
Suggested citation <a name="citation"></a>
If you use tableone in your study, please cite the following paper:
Tom J Pollard, Alistair E W Johnson, Jesse D Raffa, Roger G Mark; tableone: An open source Python package for producing summary statistics for research papers, JAMIA Open, Volume 1, Issue 1, 1 July 2018, Pages 26–31, https://doi.org/10.1093/jamiaopen/ooy012
Download the BibTex file from: https://academic.oup.com/jamiaopen/downloadcitation/5001910?format=bibtex
Installation <a name="installation"></a>
To install the package with pip, run the following command in your terminal: pip install tableone. To install the package with Conda, run: conda install -c conda-forge tableone. For more detailed installation instructions, refer to the documentation.
Importing libraries
Before using the tableone package, we need to import it. We will also import pandas for loading our sample dataset and matplotlib for creating plots.
End of explanation
# Load PhysioNet 2012 sample data
data = load_dataset('pn2012')
data.head()
Explanation: Loading sample data
We begin by loading the data that we would like to summarize into a Pandas DataFrame.
- Variables are in columns
- Encounters/observations are in rows.
End of explanation
# View the tableone docstring
TableOne??
# Create a simple Table 1 with no grouping variable
# Test for normality, multimodality (Hartigan's Dip Test), and far outliers (Tukey's test)
# for versions >= 0.7.9
table1 = TableOne(data, dip_test=True, normal_test=True, tukey_test=True)
# for versions < 0.7.9
table1 = TableOne(data)
# View Table 1 (note the remarks below the table)
table1
# The pd.DataFrame object can be accessed using the `tableone` attribute
type(table1.tableone)
Explanation: Example 1: Simple summary of data with Table 1 <a name="simple-example"></a>
In this example we provide summary statistics across all of the data.
End of explanation
data[['Age','SysABP','Height']].dropna().plot.kde(figsize=[12,8])
plt.legend(['Age (years)', 'SysABP (mmHg)', 'Height (cm)'])
plt.xlim([-30,250])
Explanation: Summary of the table:
- the first row ('n') displays a count of the encounters/observations in the input data.
- the 'Missing' column displays a count of the null values for the particular variable.
- if categorical variables are not defined in the arguments, they are detected automatically.
- continuous variables (e.g. 'age') are summarized by 'mean (std)'.
- categorical variables (e.g. 'ascites') are summarized by 'n (% of non-null values)'.
- if label_suffix=True, "mean (SD); n (%);" etc are appended to the row label.
Exploring the warning raised by Hartigan's Dip Test
Hartigan's Dip Test is a test for multimodality. The test has suggested that the Age, SysABP, and Height distributions may be multimodal. We'll plot the distributions here.
End of explanation
data[['Age','Height','SysABP']].boxplot(whis=3)
plt.show()
Explanation: Exploring the warning raised by Tukey's rule
Tukey's rule has found far outliers in Height, so we'll look at this in a boxplot
End of explanation
# columns to summarize
columns = ['Age', 'SysABP', 'Height', 'Weight', 'ICU', 'death']
# columns containing categorical variables
categorical = ['ICU']
# non-normal variables
nonnormal = ['Age']
# limit the binary variable "death" to a single row
limit = {"death": 1}
# set the order of the categorical variables
order = {"ICU": ["MICU", "SICU", "CSRU", "CCU"]}
# alternative labels
labels={'death': 'Mortality'}
# set decimal places for age to 0
decimals = {"Age": 0}
# optionally, a categorical variable for stratification
groupby = ['death']
# rename the death column
labels={'death': 'Mortality'}
# display minimum and maximum for listed variables
min_max = ['Height']
table2 = TableOne(data, columns=columns, categorical=categorical, groupby=groupby,
nonnormal=nonnormal, rename=labels, label_suffix=True,
decimals=decimals, limit=limit, min_max=min_max)
table2
Explanation: In both cases it seems that there are values that may need to be taken into account when calculating the summary statistics. For SysABP, a clearly bimodal distribution, the researcher will need to decide how to handle the peak at ~0, perhaps by cleaning the data and/or describing the issue in the summary table. For Height, the researcher may choose to report median, rather than mean.
Example 2: Table 1 with stratification <a name="stratified-example"></a>
In this example we provide summary statistics across all of the data, specifying columns, categorical variables, and non-normal variables.
End of explanation
# create grouped_table with p values
table3 = TableOne(data, columns, categorical, groupby, nonnormal, pval = True, smd=True,
htest_name=True)
# view first 10 rows of tableone
table3
Explanation: Summary of the table:
variables are explicitly defined in the input arguments.
the variables are displayed in the same order as the columns argument.
the limit argument specifies that only a 1 value should be shown for death.
the order of categorical values is defined in the optional order argument.
nonnormal continuous variables are summarized by 'median [Q1,Q3]' instead of mean (SD).
'death' is shown as 'Mortality', as specified in the rename argument.
data is summarized across the groups specified in the groupby argument.
min_max displays [minimum, maximum] for the variable, instead of standard deviation or upper/lower quartiles.
Adding p-values and standardized mean differences <a name="pval-smd"></a>
We can run a test to compute p values by setting the pval argument to True.
Pairwise standardized mean differences can be added with the smd argument.
End of explanation
# load PhysioNet 2012 sample data
data = load_dataset('pn2012')
# define the custom tests
# `*` allows the function to take an unknown number of arguments
def my_custom_test(group1, group2):
"""Hypothesis test for test_self_defined_statistical_tests."""
my_custom_test.__name__ = "Custom test 1"
_, pval= stats.ks_2samp(group1, group2)
return pval
# If the number of groups is unknown, use *args
def my_custom_test2(*args):
"""Hypothesis test for test_self_defined_statistical_tests."""
# uncomment the following chunk to view the first 10 values in each group
for n, a in enumerate(args):
print("Group {} (total {} values.): {} ...".format(n, len(a), a[:10]))
my_custom_test2.__name__ = "Custom test 2"
_, pval= stats.ks_2samp(*args)
return pval
custom_tests = {'Age': my_custom_test, 'SysABP': my_custom_test2}
# create the table
table4 = TableOne(data, groupby="death", pval=True, htest_name=True, htest=custom_tests)
table4
Explanation: Summary of the table:
- the htest_name argument can be used to display the name of the hypothesis tests used.
- the 'p-value' column displays the p value generated to 3 decimal places.
Using a custom hypothesis test to compute P-Values <a name="custom-htest"></a>
Custom hypothesis tests can be defined using the htest argument, which takes a dictionary of variable: function pairs (i.e. htest = {var: custom_func}, where var is the variable and custom_func is a function that takes lists of values in each group. The custom function must return a single pval value.
End of explanation
# load PhysioNet 2012 sample data
data = load_dataset('pn2012')
# create the table
table5 = TableOne(data, groupby="death")
print(table5.tabulate(tablefmt = "latex"))
print(table5.tabulate(tablefmt = "github"))
Explanation: Saving the table in custom formats (LaTeX, CSV, Markdown, etc) <a name="export"></a>
Tables can be exported to file in various formats, including:
LaTeX
CSV
HTML
There are two options for exporting content:
Print and copy the table using the tabulate method
Call the relevant to_<format>() method on the DataFrame.
Printing your table using tabulate
The tableone object includes a tabulate method that makes use of the tabulate package to display the table in custom output formats. Supported table formats include: "github", "grid", "fancy_grid", "rst", "html", "latex", and "latex_raw". See the tabulate package for more formats.
To export your table in LaTex (for example, to add to your document on Overleaf.com), it's simple with the tabulate method. Just copy and paste the output below.
End of explanation
# Save to Excel
fn1 = 'tableone.xlsx'
table5.to_excel(fn1)
# Save table to LaTeX
fn2 = 'tableone.tex'
table5.to_latex(fn2)
# Save table to HTML
fn3 = 'tableone.html'
table5.to_html(fn3)
Explanation: Exporting your table using the to_<format>() method
Alternatively, the table can be saved to file using the Pandas to_format() method.
End of explanation |
6,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example builds HMMs and MSMs on the alanine_dipeptide dataset using varying lag times
and numbers of states, and compares the relaxation timescales
Step1: First
Step2: Now sequences is our featurized data. | Python Code:
from __future__ import print_function
import os
%matplotlib inline
from matplotlib.pyplot import *
from msmbuilder.featurizer import SuperposeFeaturizer
from msmbuilder.example_datasets import AlanineDipeptide
from msmbuilder.hmm import GaussianHMM
from msmbuilder.cluster import KCenters
from msmbuilder.msm import MarkovStateModel
Explanation: This example builds HMMs and MSMs on the alanine_dipeptide dataset using varying lag times
and numbers of states, and compares the relaxation timescales
End of explanation
print(AlanineDipeptide.description())
dataset = AlanineDipeptide().get()
trajectories = dataset.trajectories
topology = trajectories[0].topology
indices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]
featurizer = SuperposeFeaturizer(indices, trajectories[0][0])
sequences = featurizer.transform(trajectories)
Explanation: First: load and "featurize"
Featurization refers to the process of converting the conformational
snapshots from your MD trajectories into vectors in some space $\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent
mixture of multivariate Gaussians.
In general, the featurization is somewhat of an art. For this example, we're using MSMBuilder's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example) and then measures the distance from each
atom to its position in the reference conformation as the 'feature'
End of explanation
lag_times = [1, 10, 20, 30, 40]
hmm_ts0 = {}
hmm_ts1 = {}
n_states = [3, 5]
for n in n_states:
hmm_ts0[n] = []
hmm_ts1[n] = []
for lag_time in lag_times:
strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]
hmm = GaussianHMM(n_states=n, n_init=1).fit(strided_data)
timescales = hmm.timescales_ * lag_time
hmm_ts0[n].append(timescales[0])
hmm_ts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, hmm_ts0[n])
plot(lag_times, hmm_ts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show()
msmts0, msmts1 = {}, {}
lag_times = [1, 10, 20, 30, 40]
n_states = [4, 8, 16, 32, 64]
for n in n_states:
msmts0[n] = []
msmts1[n] = []
for lag_time in lag_times:
assignments = KCenters(n_clusters=n).fit_predict(sequences)
msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)
timescales = msm.timescales_
msmts0[n].append(timescales[0])
msmts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales[0:2]))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, msmts0[n])
plot(lag_times, msmts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show()
Explanation: Now sequences is our featurized data.
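For reference (not from the original notebook), the relaxation timescales plotted above are implied timescales derived from the eigenvalues of the estimated transition matrix: for a model estimated at lag time $\tau$, $t_i = -\tau / \ln \lambda_i$. A small helper that computes them directly would be:

```python
import numpy as np

def implied_timescales(transition_matrix, lag_time):
    """t_i = -lag / ln(lambda_i) for the non-stationary eigenvalues, sorted slowest first."""
    eigvals = np.sort(np.linalg.eigvals(transition_matrix).real)[::-1]
    return -lag_time / np.log(eigvals[1:])   # skip the stationary eigenvalue lambda_0 = 1
```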
End of explanation |
6,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive 3-D Visualization
Step1: Loading an example geomodel
Step2: Basic plotting API
Data plot
Step3: Geomodel plot
Step4: Interactive plot
Passing the notebook=False keyword argument will run the pyvista visualization in an external window, allowing for interactivity
Step5: Granular 3-D Visualization
Plotting surfaces
Step6: Plotting individual surfaces
Step7: Plotting input data
Step8: Plot structured grids
Step9: Interactive Block with cross sections
Step10: Interactive Plotting
Step11: Now if you move the data the model updates!
To go back to static models
Step12: Interactive Plotting | Python Code:
# Importing GemPy
import gempy as gp
# Embedding matplotlib figures in the notebooks
%matplotlib inline
# Importing auxiliary libraries
import numpy as np
import matplotlib.pyplot as plt
Explanation: Interactive 3-D Visualization
End of explanation
data_path = 'https://raw.githubusercontent.com/cgre-aachen/gempy_data/master/'
geo_model = gp.create_data('viz_3d',
[0, 2000, 0, 2000, 0, 1600],
[50, 50, 50],
path_o=data_path + "data/input_data/lisa_models/foliations" + str(7) + ".csv",
path_i=data_path + "data/input_data/lisa_models/interfaces" + str(7) + ".csv"
)
gp.map_stack_to_surfaces(
geo_model,
{"Fault_1": 'Fault_1', "Fault_2": 'Fault_2',
"Strat_Series": ('Sandstone', 'Siltstone', 'Shale', 'Sandstone_2', 'Schist', 'Gneiss')}
)
geo_model.set_is_fault(['Fault_1', 'Fault_2'])
geo_model.set_topography()
gp.set_interpolator(geo_model)
gp.compute_model(geo_model, compute_mesh=True)
Explanation: Loading an example geomodel
End of explanation
gp.plot_3d(geo_model, show_surfaces=False, notebook=True)
Explanation: Basic plotting API
Data plot
End of explanation
gp.plot_3d(geo_model, notebook=True)
Explanation: Geomodel plot
End of explanation
gp.plot_3d(geo_model, notebook=False)
Explanation: Interactive plot
Passing the notebook=False keyword argument will run the pyvista visualization in an external window, allowing for interactivity:
End of explanation
geo_model.surfaces
gpv = gp.plot_3d(geo_model, show_data=False, show_results=False, plotter_type='background')
# Plotting all surfaces...
gpv.plot_surfaces()
# ... masked by topography
gpv.plot_topography()
# Just few surfaces
gpv.plot_surfaces(['Siltstone', 'Gneiss'])
Explanation: Granular 3-D Visualization
Plotting surfaces
End of explanation
gpv.plot_surfaces(["Fault_1"])
gpv.plot_surfaces(["Shale"], clear=False)
Explanation: Plotting individual surfaces
End of explanation
gpv.plot_surface_points()
gpv.plot_orientations()
mesh = gpv.surface_points_mesh
mesh
mesh.points[:, -1]
mesh.n_arrays
Explanation: Plotting input data
End of explanation
gpv.plot_structured_grid("scalar", series = 'Strat_Series')
Explanation: Plot structured grids
End of explanation
gp.plot.plot_interactive_3d(geo_model, show_topography=True)
Explanation: Interactive Block with cross sections
End of explanation
gpv = gp.plot_3d(geo_model, show_data=False, show_results=False, plotter_type='background')
gpv.plot_surface_points()
gpv.plot_orientations()
gpv.plot_surfaces()
gpv.toggle_live_updating()
Explanation: Interactive Plotting: Drag and drop
GemPy supports interactive plotting, meaning that you can drag & drop the input data and GemPy will update the geomodel live. This does not work in the static notebook plotter, but instead you have to pass the notebook=False argument to open an interactive plotting window. When running the next cell you can freely move the surface points (spheres) and orientations (arrows) of the Shale horizon and see how it updates the model.
Note: Every time you move a data point, GemPy will recompute the geomodel. This works best when running GemPy on a dedicated graphics card (GPU).
End of explanation
gpv.toggle_live_updating()
Explanation: Now if you move the data the model updates!
To go back to static models:
End of explanation
gpv.live_updating = True
gpv.plot_surface_points()
gpv.plot_orientations()
geo_model.modify_surface_points(0, X=-100, plot_object=gpv)
geo_model.add_surface_points(-200, 1500, 600, 'Schist', plot_object=gpv)
geo_model.delete_surface_points(22, plot_object=gpv)
Explanation: Interactive Plotting: Programmatically
If the model is in live_updating mode, it is also possible to change the model by passing the plotting object to the typical methods:
End of explanation |
6,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering on Google Analytics data
This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering.
Step2: Create raw dataset
<p>
For collaborative filtering, we don't need to know anything about either the users or the content. Essentially, all we need to know is userId, itemId, and rating that the particular user gave the particular item.
<p>
In this case, we are working with newspaper articles. The company doesn't ask their users to rate the articles. However, we can use the time-spent on the page as a proxy for rating.
<p>
Normally, we would also add a time filter to this ("latest 7 days"), but our dataset is itself limited to a few days.
Step3: Create dataset for WALS
<p>
The raw dataset (above) won't work for WALS
Step4: Creating rows and columns datasets
Step5: To summarize, we created the following data files from collab_raw.csv
Step6: This code is helpful in developing the input function. You don't need it in production.
Step7: Run as a Python module
Let's run it as Python module for just a few steps.
Step8: Run on Cloud
Step9: This took <b>10 minutes</b> for me.
Get row and column factors
Once you have a trained WALS model, you can get row and column factors (user and item embeddings) from the checkpoint file. We'll look at how to use these in the section on building a recommendation system using deep neural networks.
Step10: You can visualize the embedding vectors using dimensional reduction techniques such as PCA. | Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.13"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
import tensorflow as tf
print(tf.__version__)
Explanation: Collaborative filtering on Google Analytics data
This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering.
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
sql =
#standardSQL
WITH CTE_visitor_page_content AS (
SELECT
fullVisitorID,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
(LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
GROUP BY
fullVisitorId,
latestContentId,
hits.time )
-- Aggregate web stats
SELECT
fullVisitorID as visitorId,
latestContentId as contentId,
SUM(session_duration) AS session_duration
FROM
CTE_visitor_page_content
WHERE
latestContentId IS NOT NULL
GROUP BY
fullVisitorID,
latestContentId
HAVING
session_duration > 0
ORDER BY
latestContentId
df = bq.query(sql).to_dataframe()
df.head()
stats = df.describe()
stats
df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
# The rating is the session_duration scaled to be in the range 0-1. This will help with training.
median = stats.loc["50%", "session_duration"]
df["rating"] = 0.3 * df["session_duration"] / median
df.loc[df["rating"] > 1, "rating"] = 1
df[["rating"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
del df["session_duration"]
%%bash
rm -rf data
mkdir data
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
!head data/collab_raw.csv
Explanation: Create raw dataset
<p>
For collaborative filtering, we don't need to know anything about either the users or the content. Essentially, all we need to know is userId, itemId, and rating that the particular user gave the particular item.
<p>
In this case, we are working with newspaper articles. The company doesn't ask their users to rate the articles. However, we can use the time-spent on the page as a proxy for rating.
<p>
Normally, we would also add a time filter to this ("latest 7 days"), but our dataset is itself limited to a few days.
End of explanation
import pandas as pd
import numpy as np
def create_mapping(values, filename):
with open(filename, 'w') as ofp:
value_to_id = {value:idx for idx, value in enumerate(values.unique())}
for value, idx in value_to_id.items():
ofp.write("{},{}\n".format(value, idx))
return value_to_id
df = pd.read_csv(filepath_or_buffer = "data/collab_raw.csv",
header = None,
names = ["visitorId", "contentId", "rating"],
dtype = {"visitorId": str, "contentId": str, "rating": np.float})
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
user_mapping = create_mapping(df["visitorId"], "data/users.csv")
item_mapping = create_mapping(df["contentId"], "data/items.csv")
!head -3 data/*.csv
df["userId"] = df["visitorId"].map(user_mapping.get)
df["itemId"] = df["contentId"].map(item_mapping.get)
mapped_df = df[["userId", "itemId", "rating"]]
mapped_df.to_csv(path_or_buf = "data/collab_mapped.csv", index = False, header = False)
mapped_df.head()
Explanation: Create dataset for WALS
<p>
The raw dataset (above) won't work for WALS:
<ol>
<li> The userId and itemId have to be 0,1,2 ... so we need to create a mapping from visitorId (in the raw data) to userId and contentId (in the raw data) to itemId.
<li> We will need to save the above mapping to a file because at prediction time, we'll need to know how to map the contentId in the table above to the itemId.
<li> We'll need two files: a "rows" dataset where all the items for a particular user are listed; and a "columns" dataset where all the users for a particular item are listed.
</ol>
<p>
### Mapping
End of explanation
import pandas as pd
import numpy as np
mapped_df = pd.read_csv(filepath_or_buffer = "data/collab_mapped.csv", header = None, names = ["userId", "itemId", "rating"])
mapped_df.head()
NITEMS = np.max(mapped_df["itemId"]) + 1
NUSERS = np.max(mapped_df["userId"]) + 1
mapped_df["rating"] = np.round(mapped_df["rating"].values, 2)
print("{} items, {} users, {} interactions".format( NITEMS, NUSERS, len(mapped_df) ))
grouped_by_items = mapped_df.groupby("itemId")
iter = 0
for item, grouped in grouped_by_items:
print(item, grouped["userId"].values, grouped["rating"].values)
iter = iter + 1
if iter > 5:
break
import tensorflow as tf
grouped_by_items = mapped_df.groupby("itemId")
with tf.python_io.TFRecordWriter("data/users_for_item") as ofp:
for item, grouped in grouped_by_items:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [item])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["userId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
grouped_by_users = mapped_df.groupby("userId")
with tf.python_io.TFRecordWriter("data/items_for_user") as ofp:
for user, grouped in grouped_by_users:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [user])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["itemId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
!ls -lrt data
Explanation: Creating rows and columns datasets
End of explanation
import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.contrib.factorization import WALSMatrixFactorization
def read_dataset(mode, args):
def decode_example(protos, vocab_size):
features = {
"key": tf.FixedLenFeature(shape = [1], dtype = tf.int64),
"indices": tf.VarLenFeature(dtype = tf.int64),
"values": tf.VarLenFeature(dtype = tf.float32)}
parsed_features = tf.parse_single_example(serialized = protos, features = features)
values = tf.sparse_merge(sp_ids = parsed_features["indices"], sp_values = parsed_features["values"], vocab_size = vocab_size)
# Save key to remap after batching
# This is a temporary workaround to assign correct row numbers in each batch.
# You can ignore details of this part and remap_keys().
key = parsed_features["key"]
decoded_sparse_tensor = tf.SparseTensor(indices = tf.concat(values = [values.indices, [key]], axis = 0),
values = tf.concat(values = [values.values, [0.0]], axis = 0),
dense_shape = values.dense_shape)
return decoded_sparse_tensor
def remap_keys(sparse_tensor):
# Current indices of our SparseTensor that we need to fix
bad_indices = sparse_tensor.indices # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Current values of our SparseTensor that we need to fix
bad_values = sparse_tensor.values # shape = (current_batch_size * (number_of_items/users[i] + 1),)
# Since batch is ordered, the last value for a batch index is the user
# Find where the batch index chages to extract the user rows
# 1 where user, else 0
user_mask = tf.concat(values = [bad_indices[1:,0] - bad_indices[:-1,0], tf.constant(value = [1], dtype = tf.int64)], axis = 0) # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Mask out the user rows from the values
good_values = tf.boolean_mask(tensor = bad_values, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
item_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
user_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 1))[:, 1] # shape = (current_batch_size,)
good_user_indices = tf.gather(params = user_indices, indices = item_indices[:,0]) # shape = (current_batch_size * number_of_items/users[i],)
# User and item indices are rank 1, need to make them rank 2 to concat
good_user_indices_expanded = tf.expand_dims(input = good_user_indices, axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_item_indices_expanded = tf.expand_dims(input = item_indices[:, 1], axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_indices = tf.concat(values = [good_user_indices_expanded, good_item_indices_expanded], axis = 1) # shape = (current_batch_size * number_of_items/users[i], 2)
remapped_sparse_tensor = tf.SparseTensor(indices = good_indices, values = good_values, dense_shape = sparse_tensor.dense_shape)
return remapped_sparse_tensor
def parse_tfrecords(filename, vocab_size):
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
else:
num_epochs = 1 # end-of-input after this
files = tf.gfile.Glob(filename = os.path.join(args["input_path"], filename))
# Create dataset from file list
dataset = tf.data.TFRecordDataset(files)
dataset = dataset.map(map_func = lambda x: decode_example(x, vocab_size))
dataset = dataset.repeat(count = num_epochs)
dataset = dataset.batch(batch_size = args["batch_size"])
dataset = dataset.map(map_func = lambda x: remap_keys(x))
return dataset.make_one_shot_iterator().get_next()
def _input_fn():
features = {
WALSMatrixFactorization.INPUT_ROWS: parse_tfrecords("items_for_user", args["nitems"]),
WALSMatrixFactorization.INPUT_COLS: parse_tfrecords("users_for_item", args["nusers"]),
WALSMatrixFactorization.PROJECT_ROW: tf.constant(True)
}
return features, None
return _input_fn
Explanation: To summarize, we created the following data files from collab_raw.csv:
<ol>
<li> ```collab_mapped.csv``` is essentially the same data as in ```collab_raw.csv``` except that ```visitorId``` and ```contentId``` which are business-specific have been mapped to ```userId``` and ```itemId``` which are enumerated in 0,1,2,.... The mappings themselves are stored in ```items.csv``` and ```users.csv``` so that they can be used during inference.
<li> ```users_for_item``` contains all the users/ratings for each item in TFExample format
<li> ```items_for_user``` contains all the items/ratings for each user in TFExample format
</ol>
Train with WALS
Once you have the dataset, do matrix factorization with WALS using the WALSMatrixFactorization in the contrib directory.
This is an estimator model, so it should be relatively familiar.
<p>
As usual, we write an input_fn to provide the data to the model, and then create the Estimator to do train_and_evaluate.
Because it is in contrib and hasn't moved over to tf.estimator yet, we use tf.contrib.learn.Experiment to handle the training loop.
End of explanation
def try_out():
with tf.Session() as sess:
fn = read_dataset(
mode = tf.estimator.ModeKeys.EVAL,
args = {"input_path": "data", "batch_size": 4, "nitems": NITEMS, "nusers": NUSERS})
feats, _ = fn()
print(feats["input_rows"].eval())
print(feats["input_rows"].eval())
try_out()
def find_top_k(user, item_factors, k):
all_items = tf.matmul(a = tf.expand_dims(input = user, axis = 0), b = tf.transpose(a = item_factors))
topk = tf.nn.top_k(input = all_items, k = k)
return tf.cast(x = topk.indices, dtype = tf.int64)
def batch_predict(args):
import numpy as np
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
# This is how you would get the row factors for out-of-vocab user data
# row_factors = list(estimator.get_projections(input_fn=read_dataset(tf.estimator.ModeKeys.EVAL, args)))
# user_factors = tf.convert_to_tensor(np.array(row_factors))
# But for in-vocab data, the row factors are already in the checkpoint
user_factors = tf.convert_to_tensor(value = estimator.get_row_factors()[0]) # (nusers, nembeds)
# In either case, we have to assume the catalog doesn't change, so col_factors are read in
item_factors = tf.convert_to_tensor(value = estimator.get_col_factors()[0])# (nitems, nembeds)
# For each user, find the top K items
topk = tf.squeeze(input = tf.map_fn(fn = lambda user: find_top_k(user, item_factors, args["topk"]), elems = user_factors, dtype = tf.int64))
with file_io.FileIO(os.path.join(args["output_dir"], "batch_pred.txt"), mode = 'w') as f:
for best_items_for_user in topk.eval():
f.write(",".join(str(x) for x in best_items_for_user) + '\n')
def train_and_evaluate(args):
train_steps = int(0.5 + (1.0 * args["num_epochs"] * args["nusers"]) / args["batch_size"])
steps_in_epoch = int(0.5 + args["nusers"] / args["batch_size"])
print("Will train for {} steps, evaluating once every {} steps".format(train_steps, steps_in_epoch))
def experiment_fn(output_dir):
return tf.contrib.learn.Experiment(
tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"]),
train_input_fn = read_dataset(tf.estimator.ModeKeys.TRAIN, args),
eval_input_fn = read_dataset(tf.estimator.ModeKeys.EVAL, args),
train_steps = train_steps,
eval_steps = 1,
min_eval_frequency = steps_in_epoch
)
from tensorflow.contrib.learn.python.learn import learn_runner
learn_runner.run(experiment_fn = experiment_fn, output_dir = args["output_dir"])
batch_predict(args)
import shutil
shutil.rmtree(path = "wals_trained", ignore_errors=True)
train_and_evaluate({
"output_dir": "wals_trained",
"input_path": "data/",
"num_epochs": 0.05,
"nitems": NITEMS,
"nusers": NUSERS,
"batch_size": 512,
"n_embeds": 10,
"topk": 3
})
!ls wals_trained
!head wals_trained/batch_pred.txt
Explanation: This code is helpful in developing the input function. You don't need it in production.
End of explanation
os.environ["NITEMS"] = str(NITEMS)
os.environ["NUSERS"] = str(NUSERS)
%%bash
rm -rf wals.tar.gz wals_trained
gcloud ai-platform local train \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
-- \
--output_dir=${PWD}/wals_trained \
--input_path=${PWD}/data \
--num_epochs=0.01 --nitems=${NITEMS} --nusers=${NUSERS} \
--job-dir=./tmp
Explanation: Run as a Python module
Let's run it as a Python module for just a few steps.
End of explanation
%%bash
gsutil -m cp data/* gs://${BUCKET}/wals/data
%%bash
OUTDIR=gs://${BUCKET}/wals/model_trained
JOBNAME=wals_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--input_path=gs://${BUCKET}/wals/data \
--num_epochs=10 --nitems=${NITEMS} --nusers=${NUSERS}
Explanation: Run on Cloud
End of explanation
def get_factors(args):
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
row_factors = estimator.get_row_factors()[0]
col_factors = estimator.get_col_factors()[0]
return row_factors, col_factors
args = {
"output_dir": "gs://{}/wals/model_trained".format(BUCKET),
"nitems": NITEMS,
"nusers": NUSERS,
"n_embeds": 10
}
user_embeddings, item_embeddings = get_factors(args)
print(user_embeddings[:3])
print(item_embeddings[:3])
Explanation: This took <b>10 minutes</b> for me.
Get row and column factors
Once you have a trained WALS model, you can get row and column factors (user and item embeddings) from the checkpoint file. We'll look at how to use these in the section on building a recommendation system using deep neural networks.
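In the meantime, a minimal sketch of how the factors could be used directly (illustrative only; it mirrors the find_top_k/batch_predict logic above in plain numpy, and the function name and k are arbitrary choices): the predicted affinity of a user for an item is just the dot product of their embedding vectors.
import numpy as np
def top_k_items_for_user(user_id, user_embeddings, item_embeddings, k=3):
    # scores[i] approximates the user's rating for item i under the WALS factorization
    scores = np.dot(item_embeddings, user_embeddings[user_id])
    # the returned itemIds still need to be mapped back to contentIds via items.csv
    return np.argsort(-scores)[:k]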
End of explanation
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
pca = PCA(n_components = 3)
pca.fit(user_embeddings)
user_embeddings_pca = pca.transform(user_embeddings)
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111, projection = "3d")
xs, ys, zs = user_embeddings_pca[::150].T
ax.scatter(xs, ys, zs)
Explanation: You can visualize the embedding vectors using dimensional reduction techniques such as PCA.
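As an optional check on the same fitted pca object (a small sketch), the explained variance ratio tells you how faithful the 3-D picture is to the 10-dimensional embeddings.
print(pca.explained_variance_ratio_)
print("total variance captured:", pca.explained_variance_ratio_.sum())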
End of explanation |
6,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习纳米学位
监督学习
项目2
Step1: 练习:数据探索
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:
总的记录数量,'n_records'
年收入大于50,000美元的人数,'n_greater_50k'.
年收入最多为50,000美元的人数 'n_at_most_50k'.
年收入大于50,000美元的人所占的比例, 'greater_percent'.
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。
Step2: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
Step3: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https
Step4: 规一化数字特征
除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。
运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。
Step5: 练习:数据预处理
从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C.
| | 一些特征 | | 特征_A | 特征_B | 特征_C |
|
Step6: 混洗和切分数据
现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。
运行下面的代码单元来完成切分。
Step7: 评价模型性能
在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。这里面的三个将是你选择的监督学习器,而第四种算法被称为一个朴素的预测器。
评价方法和朴素的预测器
CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
尤其是,当$\beta = 0.5$的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。
通过查看不同类别的数据分布(那些最多赚\$50,000和那些能够赚更多的),我们能发现:很明显的是很多的被调查者年收入没有超过\$50,000。这点会显著地影响准确率,因为我们可以简单地预测说“这个人的收入没有超过\$50,000”,这样我们甚至不用看数据就能做到我们的预测在一般情况下是正确的!做这样一个预测被称作是朴素的,因为我们没有任何信息去证实这种说法。通常考虑对你的数据使用一个朴素的预测器是十分重要的,这样能够帮助我们建立一个模型的表现是否好的基准。那有人说,使用这样一个预测是没有意义的:如果我们预测所有人的收入都低于\$50,000,那么CharityML就不会有人捐款了。
问题 1 - 朴素预测器的性能
如果我们选择一个无论什么情况都预测被调查者年收入大于\$50,000的模型,那么这个模型在这个数据集上的准确率和F-score是多少?
注意: 你必须使用下面的代码单元将你的计算结果赋值给'accuracy' 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。
注意:朴素预测器由于不是训练出来的,所以我们可以用全部数据来进行评估(也有人认为保证条件一致仅用测试数据来评估)。
Step8: 监督学习模型
下面的监督学习模型是现在在 scikit-learn 中你能够选择的模型
- 高斯朴素贝叶斯 (GaussianNB)
- 决策树
- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K近邻 (KNeighbors)
- 随机梯度下降分类器 (SGDC)
- 支撑向量机 (SVM)
- Logistic回归
问题 2 - 模型应用
列出从上面的监督学习模型中选择的三个适合我们这个问题的模型,你将在人口普查数据上测试这每个算法。对于你选择的每一个算法:
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
这个模型的优势是什么?他什么情况下表现最好?
这个模型的缺点是什么?什么条件下它表现很差?
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答: 本项目特点:输入多个数值特征,输出2分类,数据比较丰富
- 决策树
- 应用场景:垃圾邮件过滤
- 优点:专家系统
- 计算复杂度不高。
- 可以处理不相关特征数据。
- 易于理解和理解。树可以形象化。
- 对中间值的缺失不敏感。
- 需要很少的数据准备。其他技术通常需要数据标准化,需要创建虚拟变量,并删除空白值。注意,这个模块不支持丢失的值。
- 使用树的成本(即。预测数据)是用于对树进行训练的数据点的对数。
- 能够处理数值和分类数据。其他技术通常是专门分析只有一种变量的数据集。
- 能够处理多输出问题。
- 使用白盒模型。如果一个给定的情况在模型中可以观察到,那么这个条件的解释很容易用布尔逻辑来解释。相比之下,在黑盒模型中(例如
Step9: 练习:初始模型的评估
在下面的代码单元中,您将需要实现以下功能:
- 导入你在前面讨论的三个监督学习模型。
- 初始化三个模型并存储在'clf_A','clf_B'和'clf_C'中。
- 如果可能对每一个模型都设置一个random_state。
- 注意:这里先使用每一个模型的默认参数,在接下来的部分中你将需要对某一个模型的参数进行调整。
- 计算记录的数目等于1%,10%,和100%的训练数据,并将这些值存储在'samples'中
注意:取决于你选择的算法,下面实现的代码可能需要一些时间来运行!
random_state的作用主要有两个
Step10: 提高效果
在这最后一节中,您将从三个有监督的学习模型中选择最好的模型来使用学生数据。你将在整个训练集(X_train和y_train)上通过使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的F-score。
问题 3 - 选择最佳的模型
基于你前面做的评价,用一到两段向CharityML解释这三个模型中哪一个对于判断被调查者的年收入大于\$50,000是最合适的。
提示:你的答案应该包括关于评价指标,预测/训练时间,以及该算法是否适合这里的数据的讨论。
回答:Gradient Boosting最合适。
决策树的准确率和f-score在训练数据和测试数据之间差异明显,说明,它的泛化能力较差。K-近邻算法和Gradient Boosting则比较好。
K-近邻预测时间太长,增长太快,从0.59(%1),5.209(%10)到34.008(%100)。实际应用中,预测大规模的数据,执行时间太久,不能使用。
Gradient Boosting虽然,模型训练较慢,但是预测速度快。随着,预测数据的增加,预测执行时间基本没变化,维持在0.04秒左右。
问题 4 - 用通俗的话解释模型
用一到两段话,向CharityML用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。
回答:
Booting(提升)是将几个弱分类器提升为强分类器的思想。方法可以是,将这几个弱分类器直接相加或加权相加。
训练是从一棵参数很随机的决策树开始,它的预测结果仅比随机拆测要好一点。然后,把预测结果与真实结果比较,看与真实结果的差距,即损失函数的大小。使用损失函数的负梯度方向更新决策树的组合参数,使损失函数逐渐变小到满意的程度。
练习:模型调优
调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需给出并尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:
导入sklearn.model_selection.GridSearchCV和sklearn.metrics.make_scorer.
初始化你选择的分类器,并将其存储在clf中。
如果能够设置的话,设置random_state。
创建一个对于这个模型你希望调整参数的字典。
例如
Step11: 问题 5 - 最终模型评估
你的最优模型在测试数据上的准确率和F-score是多少?这些分数比没有优化的模型好还是差?你优化的结果相比于你在问题 1中得到的朴素预测器怎么样?
注意:请在下面的表格中填写你的结果,然后在答案框中提供讨论。
结果
Step12: 问题 7 - 提取特征重要性
观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。
这五个特征和你在问题 6中讨论的特征比较怎么样?如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?如果你的选择不相近,那么为什么你觉得这些特征更加相关?
回答:这个结果与我之前的预测差异明显。
1. capital-loss(资本损失),也许有较大的资本损失,才是高收入的人群的最显著的特性,他们回去投资,贫穷一点的人根本就没那么多资产。
2. capital-gain(资本收益),与Capital-loss类似,高收入人群会去投资,资本收益也会越大。
3. marital-status_Married-civ-spouse(与正常的配偶结婚),这个没想到会有那么高的权重。说明,良好的婚姻,能够促进财富增长。
4. ge(年龄),这个与之前的预测比较符合,财富,职位的增长需要时间。
5. education-num(教育年限),这个比预想的重要性小了很多,但是,高收入的人群确实有更多的教育。
特征选择
如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中所有特征中超过一半的重要性。这提示我们可以尝试去减小特征空间,并简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并只使用五个最重要的特征在相同的训练集上训练模型。 | Python Code:
# 为这个项目导入需要的库
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # 允许为DataFrame使用display()
# 导入附加的可视化代码visuals.py
import visuals as vs
# 为notebook提供更加漂亮的可视化
%matplotlib inline
# 导入人口普查数据
data = pd.read_csv("census.csv")
# 成功 - 显示第一条记录
display(data.head())
Explanation: 机器学习纳米学位
监督学习
项目2: 为CharityML寻找捐献者
欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过Shift + Enter快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。
这个项目的数据集来自UCI机器学习知识库。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的在线版本中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征'fnlwgt' 以及一些遗失的或者是格式不正确的记录。
探索数据
运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列'income'将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。
End of explanation
# TODO:总的记录数
n_records = data.count().income
# TODO:被调查者的收入大于$50,000的人数
n_greater_50k = data[data.income == '>50K'].shape[0]
# TODO:被调查者的收入最多为$50,000的人数
n_at_most_50k = data[data.income == '<=50K'].shape[0]
# TODO:被调查者收入大于$50,000所占的比例
greater_percent = 100.0*n_greater_50k/n_records
# 打印结果
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
Explanation: 练习:数据探索
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:
总的记录数量,'n_records'
年收入大于50,000美元的人数,'n_greater_50k'.
年收入最多为50,000美元的人数 'n_at_most_50k'.
年收入大于50,000美元的人所占的比例, 'greater_percent'.
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。
End of explanation
# 将数据切分成特征和对应的标签
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# 可视化原来数据的倾斜的连续特征
vs.distribution(data)
Explanation: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
End of explanation
# 对于倾斜的数据使用Log转换
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# 可视化经过log之后的数据分布
vs.distribution(features_raw, transformed = True)
Explanation: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">对数转换</a>,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。
运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。
End of explanation
# 导入sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# 初始化一个 scaler,并将它施加到特征上
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# 显示一个经过缩放的样例记录
display(data.head())
display(features_raw.head())
Explanation: 规一化数字特征
除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。
运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。
End of explanation
print 'Origin features:'
display(features_raw.head())
print 'Origin income:'
display(income_raw.head())
# TODO:使用pandas.get_dummies()对'features_raw'数据进行独热编码
features = pd.get_dummies(features_raw)
print type(income_raw)
# TODO:将'income_raw'编码成数字值
income = income_raw.replace({'<=50K':0, '>50K':1})
# 打印经过独热编码之后的特征数量
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# 移除下面一行的注释以观察编码的特征名字
print encoded
Explanation: 练习:数据预处理
从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C.
| | 一些特征 | | 特征_A | 特征_B | 特征_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> 独热编码 ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
此外,对于非数字的特征,我们需要将非数字的标签'income'转换成数值以保证学习算法能够正常工作。因为这个标签只有两种可能的类别("<=50K"和">50K"),我们不必要使用独热编码,可以直接将他们编码分别成两个类0和1,在下面的代码单元中你将实现以下功能:
- 使用pandas.get_dummies()对'features_raw'数据来施加一个独热编码。
- 将目标标签'income_raw'转换成数字项。
- 将"<=50K"转换成0;将">50K"转换成1。
End of explanation
# 导入 train_test_split
from sklearn.model_selection import train_test_split
# 将'features'和'income'数据切分成训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# 显示切分的结果
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
Explanation: 混洗和切分数据
现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。
运行下面的代码单元来完成切分。
End of explanation
# TODO: 计算准确率
tp = income[income==1].shape[0]
fp = income[income==0].shape[0]
tn = 0
fn = 0
accuracy = 1.0*tp/income.shape[0]
precision = 1.0*tp/(tp+fp)
recall = 1.0*tp/(tp+fn)
# TODO: 使用上面的公式,并设置beta=0.5计算F-score
beta = 0.5
fscore = 1.0*(1+pow(beta,2))*precision*recall / ((pow(beta,2)*precision)+recall)
# 打印结果
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
Explanation: 评价模型性能
在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。这里面的三个将是你选择的监督学习器,而第四种算法被称为一个朴素的预测器。
评价方法和朴素的预测器
CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
尤其是,当$\beta = 0.5$的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。
通过查看不同类别的数据分布(那些最多赚\$50,000和那些能够赚更多的),我们能发现:很明显的是很多的被调查者年收入没有超过\$50,000。这点会显著地影响准确率,因为我们可以简单地预测说“这个人的收入没有超过\$50,000”,这样我们甚至不用看数据就能做到我们的预测在一般情况下是正确的!做这样一个预测被称作是朴素的,因为我们没有任何信息去证实这种说法。通常考虑对你的数据使用一个朴素的预测器是十分重要的,这样能够帮助我们建立一个模型的表现是否好的基准。那有人说,使用这样一个预测是没有意义的:如果我们预测所有人的收入都低于\$50,000,那么CharityML就不会有人捐款了。
问题 1 - 朴素预测器的性能
如果我们选择一个无论什么情况都预测被调查者年收入大于\$50,000的模型,那么这个模型在这个数据集上的准确率和F-score是多少?
注意: 你必须使用下面的代码单元将你的计算结果赋值给'accuracy' 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。
注意:朴素预测器由于不是训练出来的,所以我们可以用全部数据来进行评估(也有人认为保证条件一致仅用测试数据来评估)。
End of explanation
# TODO:从sklearn中导入两个评价指标 - fbeta_score和accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO:使用sample_size大小的训练数据来拟合学习器
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # 获得程序开始时间
learner = learner.fit(X_train[0:sample_size], y_train[0:sample_size])
end = time() # 获得程序结束时间
# TODO:计算训练时间
results['train_time'] = end - start
# TODO: 得到在测试集上的预测值
# 然后得到对前300个训练数据的预测结果
start = time() # 获得程序开始时间
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[0:300])
end = time() # 获得程序结束时间
# TODO:计算预测用时
results['pred_time'] = end - start
# TODO:计算在最前面的300个训练数据的准确率
results['acc_train'] = accuracy_score(y_train[0:300], predictions_train)
# TODO:计算在测试集上的准确率
results['acc_test'] = accuracy_score(y_test, predictions_test)
# TODO:计算在最前面300个训练数据上的F-score
results['f_train'] = fbeta_score(y_train[0:300], predictions_train, 0.5)
# TODO:计算测试集上的F-score
results['f_test'] = fbeta_score(y_test, predictions_test, 0.5)
# 成功
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# 返回结果
return results
Explanation: 监督学习模型
下面的监督学习模型是现在在 scikit-learn 中你能够选择的模型
- 高斯朴素贝叶斯 (GaussianNB)
- 决策树
- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K近邻 (KNeighbors)
- 随机梯度下降分类器 (SGDC)
- 支撑向量机 (SVM)
- Logistic回归
问题 2 - 模型应用
列出从上面的监督学习模型中选择的三个适合我们这个问题的模型,你将在人口普查数据上测试这每个算法。对于你选择的每一个算法:
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
这个模型的优势是什么?他什么情况下表现最好?
这个模型的缺点是什么?什么条件下它表现很差?
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答: 本项目特点:输入多个数值特征,输出2分类,数据比较丰富
- 决策树
- 应用场景:垃圾邮件过滤
- 优点:专家系统
- 计算复杂度不高。
- 可以处理不相关特征数据。
- 易于理解和理解。树可以形象化。
- 对中间值的缺失不敏感。
- 需要很少的数据准备。其他技术通常需要数据标准化,需要创建虚拟变量,并删除空白值。注意,这个模块不支持丢失的值。
- 使用树的成本(即。预测数据)是用于对树进行训练的数据点的对数。
- 能够处理数值和分类数据。其他技术通常是专门分析只有一种变量的数据集。
- 能够处理多输出问题。
- 使用白盒模型。如果一个给定的情况在模型中可以观察到,那么这个条件的解释很容易用布尔逻辑来解释。相比之下,在黑盒模型中(例如:在人工神经网络中,结果可能更难解释。
- 可以使用统计测试验证模型。这样就可以解释模型的可靠性。
- 即使它的假设在某种程度上违反了生成数据的真实模型,也会表现得很好。
- 缺点:
- 决策树学习者可以创建那些不能很好地推广数据的过于复杂的树。这就是所谓的过度拟合。修剪(目前不支持)的机制,设置叶片节点所需的最小样本数目或设置树的最大深度是避免此问题的必要条件。
- 决策树可能不稳定,因为数据中的小变化可能导致生成完全不同的树。这个问题通过在一个集合中使用决策树来减轻。
- 我们知道,学习一种最优决策树的问题在最优性甚至是简单概念的几个方面都是np完备性的。因此,实际的决策树学习算法是基于启发式算法的,例如在每个节点上进行局部最优决策的贪婪算法。这种算法不能保证返回全局最优决策树。通过在集合学习者中培训多个树,可以减少这种情况,在这里,特征和样本是随机抽取的。
- 有些概念很难学,因为决策树无法很容易地表达它们,例如XOR、奇偶性或多路复用问题。
- 决策树学习者创建有偏见的树,如果某些类占主导地位。因此,建议在匹配决策树之前平衡数据集。
- 在本项目中,输出可以是比较直观形象的结果,易于理解
- Gradient Boosting
- 应用场景:搜索网站网页排名
- 优点:泛化错误率低,易编码,可以应用在大部分分类器上,无参数调整
- 缺点:对离群点敏感
- 本项目中,是典型的二分类应用,应该效果会比较好
- K近邻 (KNeighbors)
- 应用场景:人脸识别
- 优点:
- 精确度高,对异常值不敏感,无数据输入假定。
- 缺点:
- 计算复杂度高,空间复杂度高。
- 在本项目中,高精确度的区分人群,是个不错的选择
- 支撑向量机 (SVM)(CodeReview20170717弃用)
- 应用场景:手写字体识别
- 优点:
- 在高维空间中有效。
- 在决策函数中,使用一个训练点的子集(称为支持向量),因此它可以有效的存储。
- 缺点:
- 如果特性的数量远远大于样本的数量,则SVM方法可能会表现欠佳。
- SVM方法不直接提供概率性估计。
- 在本项目中,SVM可以处理多维度的问题,并且,通过设置软间隔,可以处理线性不可分的情况。
References:
1. [http://scikit-learn.org/stable/modules/tree.html#classification]
2. [http://scikit-learn.org/stable/modules/svm.html#implementation-details]
3. [https://en.wikipedia.org/wiki/Support_vector_machine]
4. Machine learning in action. Peter Harrington
5. [https://en.wikipedia.org/wiki/Gradient_boosting]
练习 - 创建一个训练和预测的流水线
为了正确评估你选择的每一个模型的性能,创建一个能够帮助你快速有效地使用不同大小的训练集并在测试集上做预测的训练和测试的流水线是十分重要的。
你在这里实现的功能将会在接下来的部分中被用到。在下面的代码单元中,你将实现以下功能:
从sklearn.metrics中导入fbeta_score和accuracy_score。
用样例训练集拟合学习器,并记录训练时间。
用学习器来对训练集进行预测并记录预测时间。
在最前面的300个训练数据上做预测。
计算训练数据和测试数据的准确率。
计算训练数据和测试数据的F-score。
End of explanation
%%time
%pdb on
# TODO:从sklearn中导入三个监督学习模型
from sklearn import tree, svm, neighbors, ensemble
# TODO:初始化三个模型
clf_A = tree.DecisionTreeClassifier(random_state=20)
clf_B = neighbors.KNeighborsClassifier()
# clf_C = svm.SVC(random_state=20)
clf_C = ensemble.GradientBoostingClassifier(random_state=20)
# TODO:计算1%, 10%, 100%的训练数据分别对应多少点
samples_1 = int(X_train.shape[0]*0.01)
samples_10 = int(X_train.shape[0]*0.1)
samples_100 = int(X_train.shape[0]*1.0)
# 收集学习器的结果
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
for k in results.keys():
result_df = pd.DataFrame.from_dict(results[k]).T
result_df.index = ['1%', '10%', '100%']
print k
display(result_df)
# 对选择的三个模型得到的评价结果进行可视化
vs.evaluate(results, accuracy, fscore)
Explanation: 练习:初始模型的评估
在下面的代码单元中,您将需要实现以下功能:
- 导入你在前面讨论的三个监督学习模型。
- 初始化三个模型并存储在'clf_A','clf_B'和'clf_C'中。
- 如果可能对每一个模型都设置一个random_state。
- 注意:这里先使用每一个模型的默认参数,在接下来的部分中你将需要对某一个模型的参数进行调整。
- 计算记录的数目等于1%,10%,和100%的训练数据,并将这些值存储在'samples'中
注意:取决于你选择的算法,下面实现的代码可能需要一些时间来运行!
random_state的作用主要有两个:
让别人能够复现你的结果 (如reviewer)
你可以确定调参带来的优化是参数调整带来的而不是random_state引来的波动.
可以参考这个帖子:
http://discussions.youdaxue.com/t/svr-random-state/30506
另外,模型初始化时,请使用默认参数,因此除了可能需要设置的random_state, 不需要设置其他参数.
End of explanation
%%time
%pdb on
# TODO:导入'GridSearchCV', 'make_scorer'和其他一些需要的库
from sklearn.metrics import fbeta_score, make_scorer, accuracy_score
from sklearn.model_selection import GridSearchCV
from sklearn import ensemble
# TODO:初始化分类器
clf = ensemble.GradientBoostingClassifier(random_state=20)
# TODO:创建你希望调节的参数列表
#parameters = {'n_neighbors':range(5,10,5), 'algorithm':['ball_tree', 'brute']}
parameters = {'max_depth':range(2,10,1)}
# TODO:创建一个fbeta_score打分对象
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO:在分类器上使用网格搜索,使用'scorer'作为评价函数
grid_obj = GridSearchCV(clf, parameters, scorer)
# TODO:用训练数据拟合网格搜索对象并找到最佳参数
print "Start to GridSearchCV"
grid_obj.fit(X_train, y_train)
print "Start to fit origin model"
clf.fit(X_train, y_train)
# 得到estimator
best_clf = grid_obj.best_estimator_
# 使用没有调优的模型做预测
print "Start to predict"
predictions = clf.predict(X_test)
best_predictions = best_clf.predict(X_test)
# 汇报调参前和调参后的分数
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "Best parameter:"
print grid_obj.best_params_
Explanation: 提高效果
在这最后一节中,您将从三个有监督的学习模型中选择最好的模型来使用学生数据。你将在整个训练集(X_train和y_train)上通过使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的F-score。
问题 3 - 选择最佳的模型
基于你前面做的评价,用一到两段向CharityML解释这三个模型中哪一个对于判断被调查者的年收入大于\$50,000是最合适的。
提示:你的答案应该包括关于评价指标,预测/训练时间,以及该算法是否适合这里的数据的讨论。
回答:Gradient Boosting最合适。
决策树的准确率和f-score在训练数据和测试数据之间差异明显,说明,它的泛化能力较差。K-近邻算法和Gradient Boosting则比较好。
K-近邻预测时间太长,增长太快,从0.59(%1),5.209(%10)到34.008(%100)。实际应用中,预测大规模的数据,执行时间太久,不能使用。
Gradient Boosting虽然,模型训练较慢,但是预测速度快。随着,预测数据的增加,预测执行时间基本没变化,维持在0.04秒左右。
问题 4 - 用通俗的话解释模型
用一到两段话,向CharityML用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。
回答:
Booting(提升)是将几个弱分类器提升为强分类器的思想。方法可以是,将这几个弱分类器直接相加或加权相加。
训练是从一棵参数很随机的决策树开始,它的预测结果仅比随机拆测要好一点。然后,把预测结果与真实结果比较,看与真实结果的差距,即损失函数的大小。使用损失函数的负梯度方向更新决策树的组合参数,使损失函数逐渐变小到满意的程度。
练习:模型调优
调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需给出并尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:
导入sklearn.model_selection.GridSearchCV和sklearn.metrics.make_scorer.
初始化你选择的分类器,并将其存储在clf中。
如果能够设置的话,设置random_state。
创建一个对于这个模型你希望调整参数的字典。
例如: parameters = {'parameter' : [list of values]}。
注意: 如果你的学习器(learner)有 max_features 参数,请不要调节它!
使用make_scorer来创建一个fbeta_score评分对象(设置$\beta = 0.5$)。
在分类器clf上用'scorer'作为评价函数运行网格搜索,并将结果存储在grid_obj中。
用训练集(X_train, y_train)训练grid search object,并将结果存储在grid_fit中。
注意: 取决于你选择的参数列表,下面实现的代码可能需要花一些时间运行!
End of explanation
%%time
# TODO:导入一个有'feature_importances_'的监督学习模型
from sklearn.ensemble import GradientBoostingClassifier
# TODO:在训练集上训练一个监督学习模型
model = GradientBoostingClassifier()
model.fit(X_train, y_train)
# TODO: 提取特征重要性
importances = model.feature_importances_
# 绘图
vs.feature_plot(importances, X_train, y_train)
Explanation: 问题 5 - 最终模型评估
你的最优模型在测试数据上的准确率和F-score是多少?这些分数比没有优化的模型好还是差?你优化的结果相比于你在问题 1中得到的朴素预测器怎么样?
注意:请在下面的表格中填写你的结果,然后在答案框中提供讨论。
结果:
| 评价指标 | 基准预测器 | 未优化的模型 | 优化的模型 |
| :------------: | :-----------------: | :---------------: | :-------------: |
| 准确率 | 0.2478 | 0.8630 | 0.8697 |
| F-score | 0.2917 | 0.7395 | 0.7504 |
回答:最优模型在测试数据上的准确率是0.8697,F-score是0.7504。这个结果比没有优化的模型有明显的提升,比问题1中的朴素预测期好太多。
特征的重要性
在数据上(比如我们这里使用的人口普查的数据)使用监督学习算法的一个重要的任务是决定哪些特征能够提供最强的预测能力。通过专注于一些少量的有效特征和标签之间的关系,我们能够更加简单地理解这些现象,这在很多情况下都是十分有用的。在这个项目的情境下这表示我们希望选择一小部分特征,这些特征能够在预测被调查者是否年收入大于\$50,000这个问题上有很强的预测能力。
选择一个有feature_importance_属性(这是一个根据这个选择的分类器来对特征的重要性进行排序的函数)的scikit学习分类器(例如,AdaBoost,随机森林)。在下一个Python代码单元中用这个分类器拟合训练集数据并使用这个属性来决定这个人口普查数据中最重要的5个特征。
问题 6 - 观察特征相关性
当探索数据的时候,它显示在这个人口普查数据集中每一条记录我们有十三个可用的特征。
在这十三个记录中,你认为哪五个特征对于预测是最重要的,你会怎样对他们排序?理由是什么?
回答:最重要的5个特征依次是:年龄,教育年限(教育等级一般于这个有一定的正相关),种族,职业和资本收益
1. 年龄是影响最大的,因为个人能力和职位的上升,财富的积累都需要时间
2. 教育水平更高,相对的收入增长会更快更大,这也许就是大家投资教育的动力吧
3. 白人一般教育水平,社交圈的水平会更高
4. 有一个好的职业,当然能赚到更多的钱
5. 资本收益高,说明是有钱人呀
练习 - 提取特征重要性
选择一个scikit-learn中有feature_importance_属性的监督学习分类器,这个属性是一个在做预测的时候根据所选择的算法来对特征重要性进行排序的功能。
在下面的代码单元中,你将要实现以下功能:
- 如果这个模型和你前面使用的三个模型不一样的话从sklearn中导入一个监督学习模型。
- 在整个训练集上训练一个监督学习模型。
- 使用模型中的'.feature_importances_'提取特征的重要性。
End of explanation
%%time
# 导入克隆模型的功能
from sklearn.base import clone
# 减小特征空间
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# 在前面的网格搜索的基础上训练一个“最好的”模型
# 这里使用前面变量model里面AdaBoostClassifier()
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# 做一个新的预测
best_predictions = model.predict(X_test)
reduced_predictions = clf.predict(X_test_reduced)
# 对于每一个版本的数据汇报最终模型的分数
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
Explanation: 问题 7 - 提取特征重要性
观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。
这五个特征和你在问题 6中讨论的特征比较怎么样?如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?如果你的选择不相近,那么为什么你觉得这些特征更加相关?
回答:这个结果与我之前的预测差异明显。
1. capital-loss(资本损失),也许有较大的资本损失,才是高收入的人群的最显著的特性,他们回去投资,贫穷一点的人根本就没那么多资产。
2. capital-gain(资本收益),与Capital-loss类似,高收入人群会去投资,资本收益也会越大。
3. marital-status_Married-civ-spouse(与正常的配偶结婚),这个没想到会有那么高的权重。说明,良好的婚姻,能够促进财富增长。
4. ge(年龄),这个与之前的预测比较符合,财富,职位的增长需要时间。
5. education-num(教育年限),这个比预想的重要性小了很多,但是,高收入的人群确实有更多的教育。
特征选择
如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中所有特征中超过一半的重要性。这提示我们可以尝试去减小特征空间,并简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并只使用五个最重要的特征在相同的训练集上训练模型。
End of explanation |
6,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
nb2py tutorial
All the examples are generated from this notebook and located in the folder tutorial_files.
Step1: Exporting marked cells
The dump function automatically exports cells starting with #~, but this behaviour could be changed with the parameter marker.
The markers are removed from the output file.
Step2: Exporting cells by indices
You can also export cells using a list of indices.
Consider that the cells will be written regardless of their type. | Python Code:
import nb2py
Explanation: nb2py tutorial
All the examples are generated from this notebook and located in the folder tutorial_files.
End of explanation
#~
#This is a cell example with the standard marker
a=2
b=3
print(a+b)
nb2py.dump('tutorial.ipynb','tutorial_files/standard.py')
#please export this cell
#This is a cell with a custom comment as marker
x=10
y=11
print(x+y)
nb2py.dump('tutorial.ipynb','tutorial_files/custom.py',marker='please export this cell')
Explanation: Exporting marked cells
The dump function automatically exports cells starting with #~, but this behaviour could be changed with the parameter marker.
The markers are removed from the output file.
End of explanation
nb2py.dump_indices('tutorial.ipynb',
'tutorial_files/indices.py',
indices=[3,5])
nb2py.dump_indices('tutorial.ipynb',
'tutorial_files/markdown.md',
indices=[0,2,7])
Explanation: Exporting cells by indices
You can also export cells using a list of indices.
Note that the cells will be written regardless of their type.
End of explanation |
6,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
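As a tiny illustration of the one-hot encoding described above (not needed here, since load_data(one_hot=True) already returns one-hot labels), this is what the conversion looks like in plain numpy:
import numpy as np
label = 4
one_hot = np.zeros(10)
one_hot[label] = 1
print(one_hot)  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]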
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display a sample training image (here, index 1) along with its label
show_digit(1)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Input Layer
net = tflearn.input_data([None, 784])
# Hidden Layer(s)
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output Layer
net = tflearn.fully_connected(net, 10, activation='softmax')
# Train network
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
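For reference, here is one possible deeper variant assembled from the same TFLearn calls introduced above — the hidden-layer sizes (128 and 32) are arbitrary illustrative choices, not the notebook's intended solution:
def build_model_two_hidden():
    tf.reset_default_graph()
    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, 128, activation='ReLU')
    net = tflearn.fully_connected(net, 32, activation='ReLU')
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
    return tflearn.DNN(net)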
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
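Side note (an assumption about this TFLearn version's API rather than something the original notebook does): a trained tflearn.DNN can be checkpointed so a long run doesn't have to be repeated.
model.save('mnist_model.tfl')  # writes checkpoint files next to the notebook
# Later, after rebuilding the same graph with build_model():
# model.load('mnist_model.tfl')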
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
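As a quick follow-up sketch using only arrays already defined above (predictions, actual, testX), you can look at a few of the test digits the network got wrong:
wrong = np.where(predictions != actual)[0]
print("Misclassified test digits:", len(wrong))
for index in wrong[:3]:
    plt.title('Predicted: %d, Actual: %d' % (predictions[index], actual[index]))
    plt.imshow(testX[index].reshape([28, 28]), cmap='gray_r')
    plt.show()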
End of explanation |
6,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recovers true coefficients on artificial censored regression data
Step1: Note that the truncation values do not have to be the same for e.g. all left-censored observations, or all right-censored observations, as in this example. However, the model does assume that the errors will be normally-distributed.
Comparison to R censReg package result on AER data
Commands in R for Tobit analysis of Affairs data | Python Code:
# Imports assumed by this example (added for completeness): numpy, pandas, matplotlib,
# sklearn's make_regression, and the TobitModel class from the accompanying tobit module.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from tobit import TobitModel  # assumption: TobitModel is defined in a local tobit.py
rs = np.random.RandomState(seed=10)
ns = 100
nf = 10
x, y_orig, coef = make_regression(n_samples=ns, n_features=nf, coef=True, noise=0.0, random_state=rs)
x = pd.DataFrame(x)
y = pd.Series(y_orig)
n_quantiles = 3 # two-thirds of the data is truncated
quantile = 100/float(n_quantiles)
lower = np.percentile(y, quantile)
upper = np.percentile(y, (n_quantiles - 1) * quantile)
left = y < lower
right = y > upper
cens = pd.Series(np.zeros((ns,)))
cens[left] = -1
cens[right] = 1
y = y.clip(upper=upper, lower=lower)
hist = plt.hist(y)
tr = TobitModel()
result = tr.fit(x, y, cens, verbose=False)
fig, ax = plt.subplots()
ind = np.arange(len(coef))
width = 0.25
rects1 = ax.bar(ind, coef, width, color='g', label='True')
rects2 = ax.bar(ind + width, tr.coef_, width, color='r', label='Tobit')
rects3 = ax.bar(ind + (2 * width), tr.ols_coef_, width, color='b', label='OLS')
plt.ylabel("Coefficient")
plt.xlabel("Index of regressor")
plt.title("Tobit vs. OLS on censored data")
leg = plt.legend(loc=(0.22, 0.65))
Explanation: Recovers true coefficients on artificial censored regression data
End of explanation
data_file = 'tobit_data.txt'
df = pd.read_table(data_file, sep=' ')
df.loc[df.gender=='male', 'gender'] = 1
df.loc[df.gender=='female', 'gender'] = 0
df.loc[df.children=='yes', 'children'] = 1
df.loc[df.children=='no', 'children'] = 0
df = df.astype(float)
df.head()
y = df.affairs
x = df.drop(['affairs', 'gender', 'education', 'children'], axis=1)
cens = pd.Series(np.zeros((len(y),)))
cens[y==0] = -1
cens.value_counts()
tr = TobitModel()
tr = tr.fit(x, y, cens, verbose=False)
tr.coef_
Explanation: Note that the truncation values do not have to be the same for e.g. all left-censored observations, or all right-censored observations, as in this example. However, the model does assume that the errors will be normally-distributed.
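A small sketch of the first point (illustrative only — the per-observation limits below are made up): each observation can be clipped at its own censoring bound, with cens flagging which rows were censored, just as in the artificial example above.
lower_bounds = pd.Series(np.random.normal(-50, 5, size=len(y_orig)))  # hypothetical per-row limits
left = y_orig < lower_bounds
cens_example = pd.Series(np.zeros(len(y_orig)))
cens_example[left] = -1
y_example = pd.Series(np.maximum(y_orig, lower_bounds))  # each y_i clipped at its own bound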
Comparison to R censReg package result on AER data
Commands in R for Tobit analysis of Affairs data:
install.packages('censReg')
library(censReg)
install.packages('AER')
data('Affairs', package='AER')
write.table(Affairs, 'tobit_data.txt', quote=FALSE, row.names=FALSE)
estResult <- censReg( affairs ~ age + yearsmarried + religiousness +occupation + rating, data = Affairs)
summary(estResult)
Python analysis of same data
End of explanation |
6,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whitening versus standardizing
Step1: First, just read in data, and take a peek. The data can be found on GitHub.
Step2: We're told that for gender, 1 is male, and 2 is female. Part (a) says to extract the height/weight data corresponding to the males. Then, we fit a 2d Gaussian to the male data, using the empirical mean and covariance. Then, we'll plot this data.
Let's extract the males first.
Step3: Next, we'll calculate the empirical mean and covariance.
Step4: Let's plot the data now.
Step6: Let $\mathbf{x} \sim \mathcal{N}\left(\boldsymbol\mu, \Sigma\right)$, where $\mathbf{x} \in \mathbb{R}^p$. We can write $\Sigma = SDS^\intercal$ by the spectral theorem, where the columns of $S$ are orthonormal eigenvectors, and $D$ is a diagonal matrix of eigenvalues, $\lambda_1, \lambda_2,\ldots,\lambda_p$.
$\mathbf{x}$ has probability density function,
\begin{equation}
f(\mathbf{x}) = \frac{1}{(2\pi)^{p/2}\sqrt{\det{\Sigma}}}\exp\left(-\frac{1}{2}\left(\mathbf{x}-\boldsymbol\mu\right)^\intercal\Sigma^{-1}\left(\mathbf{x}-\boldsymbol\mu\right)\right).
\end{equation}
Note that $S^\intercal S = I$, and $S^{-1} = S^\intercal$. This implies that
\begin{equation}
\Sigma^{-1} = \left(SDS^\intercal\right)^{-1} = \left(S^\intercal\right)^{-1}D^{-1}S^{-1} = SD^{-1}S^\intercal.
\end{equation}
Let $\mathbf{y} = S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right)$.
Then, we have that
\begin{align}
\left(\mathbf{x}-\boldsymbol\mu\right)^\intercal\Sigma^{-1}\left(\mathbf{x}-\boldsymbol\mu\right)
&= \left(\mathbf{x}-\boldsymbol\mu\right)^\intercal S D^{-1} S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right) \
&= \left(S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right)\right)^\intercal D^{-1} S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right) \
&= \mathbf{y}^\intercal D^{-1} \mathbf{y}.
\end{align}
Moreover, $\mathbf{x} = S\mathbf{y} + \boldsymbol\mu$, so $D\mathbf{x}(\mathbf{y}) = S$, and so $\det D\mathbf{x}(\mathbf{y}) = 1$. Changing variables, the probability density function for $\mathbf{y}$ is
\begin{equation}
f(\mathbf{y}) = \frac{1}{(2\pi)^{p/2}\sqrt{\det D}}\exp\left(-\frac{1}{2}\mathbf{y}^\intercal D^{-1} \mathbf{y}\right)
= \frac{1}{(2\pi)^{p/2}\sqrt{\prod_{j=1}^p\lambda_j}}\exp\left(-\frac{1}{2}\sum_{j=1}^p \frac{y_j^2}{\lambda_j}\right) = \prod_{j=1}^p\frac{1}{\sqrt{2\pi\lambda_j}}\exp\left(-\frac{1}{2}\frac{y_j^2}{\lambda_j}\right).
\end{equation}
Thus, $\mathbf{y} \sim \mathcal{N}\left(\mathbf{0}, D\right)$, and the coordinates of $\mathbf{y}$ are independent, with $y_j \sim \mathcal{N}\left(0, \lambda_j\right)$. Now, the level curves of $f$ correspond to the hyperellipsoids
\begin{equation}
\sum_{j=1}^p \frac{y_j^2}{\lambda_j} = \sum_{j=1}^p \left(\frac{y_j}{\sqrt{\lambda_j}}\right)^2 = c.
\end{equation}
$\frac{y_j}{\sqrt{\lambda_j}} \sim \mathcal{N}(0, 1)$, so $\sum_{j=1}^p \frac{y_j^2}{\lambda_j} \sim \chi^2_p$, and so a $95\%$ confidence region would be
\begin{equation}
\sum_{j=1}^p \frac{y_j^2}{\lambda_j} \leq F_{\chi^2_p}^{-1}(0.95),
\end{equation}
where $F$ is the cumulative distribution function. For our $p = 2$ case, $F_{\chi^2_p}^{-1}(0.95) \approx 5.991.$ Now $\mathbf{y}$ gives our coordinates with respect to an orthonormal eigenbasis. Since $\mathbf{x} = S\mathbf{y} + \boldsymbol\mu$, the confidence region is a rotated hyperellipsoid centered at $\boldsymbol\mu$ with semi-axes along the eigenvectors of $\Sigma$. Let's plot this hyperellipsoid along with the indexed data.
Step8: Part (b) says to do the same thing with standardized data.
Step9: Part (c) deals with whitening or sphering the data. This involves transforming the data so that the dimensions are uncorrelated and have equal variances along the axes. Recall that
\begin{equation}
\mathbf{y} = S^\intercal\left(\mathbf{x} - \boldsymbol\mu\right) \sim \mathcal{N}\left(\mathbf{0}, D\right),
\end{equation}
so this transformation accomplishes the task of making the dimensions uncorrelated. Now, to make the variances equal, simply multiply by $\sqrt{D^{-1}}$, which is easy to compute since $D$ is diagonal and positive definite, so our transformation is
\begin{equation}
\mathbf{y}^\prime = \sqrt{D^{-1}}\mathbf{y} = \sqrt{D^{-1}}S^\intercal\left(\mathbf{x} - \boldsymbol\mu\right)
\sim \mathcal{N}\left(\mathbf{0}, I\right).
\end{equation}
Let's plot this.
Step10: Now, we can plot all three figures together just like in the textbook. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
Explanation: Whitening versus standardizing
End of explanation
raw_data = pd.read_csv("heightWeightData.txt", header=None, names=["gender", "height", "weight"])
raw_data.info()
raw_data.head()
Explanation: First, just read in data, and take a peek. The data can be found on GitHub.
End of explanation
male_data = raw_data[raw_data.gender == 1]
male_data.head()
Explanation: We're told that for gender, 1 is male, and 2 is female. Part (a) says to extract the height/weight data corresponding to the males. Then, we fit a 2d Gaussian to the male data, using the empirical mean and covariance. Then, we'll plot this data.
Let's extract the males first.
End of explanation
mu_male = male_data.mean(axis=0)[1:].as_matrix() # remove gender
male_mean_diff = male_data.iloc[:,1:].as_matrix() - mu_male
covariance_male = np.dot(male_mean_diff.T, male_mean_diff)/len(male_data)
print(mu_male)
print(covariance_male)
Explanation: Next, we'll calculate the empirical mean and covariance.
End of explanation
plt.figure(figsize=(6,6))
plt.plot(male_data.height, male_data.weight, 'ko')
plt.axis([60,80,120,285])
plt.title('raw')
plt.xlabel('height')
plt.ylabel('weight')
plt.axes().set_aspect(0.2)
plt.grid(True)
plt.show()
Explanation: Let's plot the data now.
End of explanation
def calculate_2d_gaussian_confidence_region(mu, Sigma, p = 0.95, points = 200):
"""Returns a points x 2 numpy.ndarray of the confidence region.

Keyword arguments:
mu -- mean
Sigma -- covariance matrix
p -- percent confidence
points -- number of points to interpolate
"""
assert(len(mu) == len(Sigma))
assert(np.all(Sigma == Sigma.T))
eigenvalues, S = np.linalg.eig(Sigma)
S = S[:,eigenvalues.argsort()[::-1]]
eigenvalues = eigenvalues[eigenvalues.argsort()[::-1]]
theta = np.linspace(0, 2*np.pi, num = points)
x = np.sqrt(eigenvalues[0]*stats.chi2.ppf(p, df=2))*np.cos(theta)
y = np.sqrt(eigenvalues[1]*stats.chi2.ppf(p, df=2))*np.sin(theta)
return np.dot(S, np.array([x,y])).T + mu
def plot_raw_males(ax=None):
if ax == None:
ax = plt.gca()
gaussian_fit_male = calculate_2d_gaussian_confidence_region(mu_male, covariance_male, p = 0.95, points = 100)
ax.axis([60,80,90,285])
ax.set_title('raw')
ax.set_xlabel('height')
ax.set_ylabel('weight')
for row in male_data.itertuples():
ax.text(row.height, row.weight, row.Index, horizontalalignment='center', verticalalignment='center')
ax.set_aspect(0.2)
ax.plot(gaussian_fit_male[:,0], gaussian_fit_male[:,1], linewidth=3, color='red')
ax.plot(mu_male[0], mu_male[1], 'rx', markersize=10, markeredgewidth=3)
ax.grid(True)
plt.figure(figsize=(8,8))
plot_raw_males(plt.gca())
plt.show()
Explanation: Let $\mathbf{x} \sim \mathcal{N}\left(\boldsymbol\mu, \Sigma\right)$, where $\mathbf{x} \in \mathbb{R}^p$. We can write $\Sigma = SDS^\intercal$ by the spectral theorem, where the columns of $S$ are orthonormal eigenvectors, and $D$ is a diagonal matrix of eigenvalues, $\lambda_1, \lambda_2,\ldots,\lambda_p$.
$\mathbf{x}$ has probability density function,
\begin{equation}
f(\mathbf{x}) = \frac{1}{(2\pi)^{p/2}\sqrt{\det{\Sigma}}}\exp\left(-\frac{1}{2}\left(\mathbf{x}-\boldsymbol\mu\right)^\intercal\Sigma^{-1}\left(\mathbf{x}-\boldsymbol\mu\right)\right).
\end{equation}
Note that $S^\intercal S = I$, and $S^{-1} = S^\intercal$. This implies that
\begin{equation}
\Sigma^{-1} = \left(SDS^\intercal\right)^{-1} = \left(S^\intercal\right)^{-1}D^{-1}S^{-1} = SD^{-1}S^\intercal.
\end{equation}
Let $\mathbf{y} = S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right)$.
Then, we have that
\begin{align}
\left(\mathbf{x}-\boldsymbol\mu\right)^\intercal\Sigma^{-1}\left(\mathbf{x}-\boldsymbol\mu\right)
&= \left(\mathbf{x}-\boldsymbol\mu\right)^\intercal S D^{-1} S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right) \
&= \left(S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right)\right)^\intercal D^{-1} S^\intercal\left(\mathbf{x}-\boldsymbol\mu\right) \
&= \mathbf{y}^\intercal D^{-1} \mathbf{y}.
\end{align}
Moreover, $\mathbf{x} = S\mathbf{y} + \boldsymbol\mu$, so $D\mathbf{x}(\mathbf{y}) = S$, and so $\det D\mathbf{x}(\mathbf{y}) = 1$. Changing variables, the probability density function for $\mathbf{y}$ is
\begin{equation}
f(\mathbf{y}) = \frac{1}{(2\pi)^{p/2}\sqrt{\det D}}\exp\left(-\frac{1}{2}\mathbf{y}^\intercal D^{-1} \mathbf{y}\right)
= \frac{1}{(2\pi)^{p/2}\sqrt{\prod_{j=1}^p\lambda_j}}\exp\left(-\frac{1}{2}\sum_{j=1}^p \frac{y_j^2}{\lambda_j}\right) = \prod_{j=1}^p\frac{1}{\sqrt{2\pi\lambda_j}}\exp\left(-\frac{1}{2}\frac{y_j^2}{\lambda_j}\right).
\end{equation}
Thus, $\mathbf{y} \sim \mathcal{N}\left(\mathbf{0}, D\right)$, and the coordinates of $\mathbf{y}$ are independent, with $y_j \sim \mathcal{N}\left(0, \lambda_j\right)$. Now, the level curves of $f$ correspond to the hyperellipsoids
\begin{equation}
\sum_{j=1}^p \frac{y_j^2}{\lambda_j} = \sum_{j=1}^p \left(\frac{y_j}{\sqrt{\lambda_j}}\right)^2 = c.
\end{equation}
$\frac{y_j}{\sqrt{\lambda_j}} \sim \mathcal{N}(0, 1)$, so $\sum_{j=1}^p \frac{y_j^2}{\lambda_j} \sim \chi^2_p$, and so a $95\%$ confidence region would be
\begin{equation}
\sum_{j=1}^p \frac{y_j^2}{\lambda_j} \leq F_{\chi^2_p}^{-1}(0.95),
\end{equation}
where $F$ is the cumulative distribution function. For our $p = 2$ case, $F_{\chi^2_p}^{-1}(0.95) \approx 5.991.$ Now $\mathbf{y}$ gives our coordinates with respect to an orthonormal eigenbasis. Since $\mathbf{x} = S\mathbf{y} + \boldsymbol\mu$, the confidence region is a rotated hyperellipsoid centered at $\boldsymbol\mu$ with semi-axes along the eigenvectors of $\Sigma$. Let's plot this hyperellipsoid along with the indexed data.
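Before doing so, a quick numerical check of the algebra above, using quantities already defined in this notebook: the eigendecomposition reconstructs the covariance, and the chi-square quantile matches the 5.991 threshold quoted.
eigenvalues, S = np.linalg.eig(covariance_male)
print(np.allclose(np.dot(S * eigenvalues, S.T), covariance_male))  # Sigma = S D S^T
print(stats.chi2.ppf(0.95, df=2))  # approximately 5.991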
End of explanation
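The helper calculate_2d_gaussian_confidence_region used in these plotting functions is defined earlier in the notebook. For reference, here is a minimal sketch of how such a helper could be built directly from the math above; the function name, signature, and use of scipy are my own assumptions, not necessarily the notebook's actual implementation.
from scipy.stats import chi2
def confidence_region_sketch(mu, Sigma, p=0.95, points=100):
    # Hypothetical sketch, not the notebook's definition: boundary of the p-level region
    c = chi2.ppf(p, df=2)                                        # F_{chi^2_2}^{-1}(p), ~5.991 for p = 0.95
    eigenvalues, S = np.linalg.eigh(Sigma)                       # Sigma = S D S^T
    theta = np.linspace(0, 2 * np.pi, points)
    y = np.sqrt(c) * np.vstack((np.cos(theta), np.sin(theta)))   # circle of radius sqrt(c) in y-coordinates
    x = S.dot(np.sqrt(eigenvalues)[:, None] * y) + np.asarray(mu).reshape(2, 1)
    return x.T                                                   # (points, 2) array of boundary coordinates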
def standardize(x, mean, sd):
    """Standardizes assuming x is normally distributed."""
return (x - mean)/sd
def plot_standardized_males(ax=None):
if ax == None:
ax = plt.gca()
gaussian_fit_male = calculate_2d_gaussian_confidence_region(mu_male, covariance_male, p = 0.95, points = 100)
ax.set_title('standardized')
ax.set_xlabel('height')
ax.set_ylabel('weight')
ax.plot(standardize(male_data.height, mu_male[0], np.sqrt(covariance_male[0,0])),
standardize(male_data.weight, mu_male[1], np.sqrt(covariance_male[1,1])),
" ")
for row in male_data.itertuples():
ax.text(standardize(row.height, mu_male[0], np.sqrt(covariance_male[0,0])),
standardize(row.weight, mu_male[1], np.sqrt(covariance_male[1,1])),
row.Index, horizontalalignment='center', verticalalignment='center')
ax.set_aspect('equal')
ax.plot(standardize(gaussian_fit_male[:,0], mu_male[0], np.sqrt(covariance_male[0,0])),
standardize(gaussian_fit_male[:,1], mu_male[1], np.sqrt(covariance_male[1,1])),
linewidth=3, color='red')
ax.plot(0, 0, 'rx', markersize=10, markeredgewidth=3)
ax.grid(True)
plt.figure(figsize=(8,8))
plot_standardized_males()
plt.show()
Explanation: Part (b) asks us to do the same thing with the standardized data.
End of explanation
def whiten(X, mu, Sigma):
assert(len(mu) == len(Sigma))
assert(np.all(Sigma == Sigma.T))
eigenvalues, S = np.linalg.eig(Sigma)
S = S[:,eigenvalues.argsort()[::-1]]
eigenvalues = eigenvalues[eigenvalues.argsort()[::-1]]
inverse_precision = np.diag(1/np.sqrt(eigenvalues))
return np.dot(np.dot(X - mu, S), inverse_precision)
def plot_whitened_males(ax=None):
if ax == None:
ax = plt.gca()
gaussian_fit_male = calculate_2d_gaussian_confidence_region(mu_male, covariance_male, p = 0.95, points = 100)
whitened_gaussian_fit_male = whiten(gaussian_fit_male, mu_male, covariance_male)
ax.set_title('whitened')
ax.set_xlabel('height')
ax.set_ylabel('weight')
whitened_male_data = whiten(np.array([male_data.height, male_data.weight]).T, mu_male, covariance_male)
ax.plot(whitened_male_data[:,0], whitened_male_data[:,1], " ")
for i in range(len(whitened_male_data)):
ax.text(whitened_male_data[i, 0], whitened_male_data[i, 1],
male_data.index[i], horizontalalignment='center', verticalalignment='center')
ax.set_aspect('equal')
ax.plot(whitened_gaussian_fit_male[:,0], whitened_gaussian_fit_male[:,1],
linewidth=3, color='red')
ax.plot(0, 0, 'rx', markersize=10, markeredgewidth=3)
ax.grid(True)
plt.figure(figsize=(8,8))
plot_whitened_males()
plt.show()
Explanation: Part (c) deals with whitening or sphering the data. This involves transforming the data so that the dimensions are uncorrelated and have equal variances along the axes. Recall that
\begin{equation}
\mathbf{y} = S^\intercal\left(\mathbf{x} - \boldsymbol\mu\right) \sim \mathcal{N}\left(\mathbf{0}, D\right),
\end{equation}
so this transformation accomplishes the task of making the dimensions uncorrelated. Now, to make the variances equal, simply multiply by $\sqrt{D^{-1}}$, which is easy to compute since $D$ is diagonal and positive definite, so our transformation is
\begin{equation}
\mathbf{y}^\prime = \sqrt{D^{-1}}\mathbf{y} = \sqrt{D^{-1}}S^\intercal\left(\mathbf{x} - \boldsymbol\mu\right)
\sim \mathcal{N}\left(\mathbf{0}, I\right).
\end{equation}
Let's plot this.
End of explanation
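As a quick sanity check (my addition, not part of the original notebook), the sample covariance of the whitened data should come out close to the identity matrix:
# Sanity check: covariance of the whitened data should be approximately the 2x2 identity
whitened_check = whiten(np.array([male_data.height, male_data.weight]).T, mu_male, covariance_male)
print(np.cov(whitened_check, rowvar=False))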
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(1,3,1)
ax2 = fig.add_subplot(1,3,2)
ax3 = fig.add_subplot(1,3,3)
plot_raw_males(ax1)
plot_standardized_males(ax2)
plot_whitened_males(ax3)
Explanation: Now, we can plot all three figures together just like in the textbook.
End of explanation |
6,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
self.activation_function = lambda x : 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
error = y - final_outputs
# TODO: Backpropagated output error term
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# TODO: Backpropagated hidden error term
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# TODO: Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# TODO: Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
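Before running the unit tests below, an optional smoke test (my addition, with arbitrary numbers) can confirm that the matrix shapes line up:
# Optional smoke test: one forward pass through an untrained network on made-up data
demo_net = NeuralNetwork(3, 2, 1, 0.1)
print(demo_net.run(np.array([[0.5, -0.2, 0.1]])).shape)  # expect (1, 1)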
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 3000
learning_rate = 0.6
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
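If you want to compare settings more systematically, a small sweep like the sketch below can help narrow down the learning rate and hidden layer size before a long run. This is my addition, reusing the variables defined above with deliberately short training runs:
# Optional sketch: rough comparison of a few hyperparameter settings
for lr in (0.1, 0.5, 1.0):
    for n_hidden in (10, 20, 30):
        net = NeuralNetwork(N_i, n_hidden, 1, lr)
        for _ in range(500):
            batch = np.random.choice(train_features.index, size=128)
            net.train(train_features.iloc[batch].values, train_targets.iloc[batch]['cnt'])
        v_loss = MSE(net.run(val_features).T, val_targets['cnt'].values)
        print('lr={}, hidden={}, validation loss={:.3f}'.format(lr, n_hidden, v_loss))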
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc replaces the deprecated .ix indexer
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
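To put a single number on the fit rather than just eyeballing the plot, you can reuse the MSE helper defined earlier (a small addition of mine):
# Quantify the fit on the test set with the MSE helper defined above
test_loss = MSE(network.run(test_features).T, test_targets['cnt'].values)
print('Test loss: {:.3f}'.format(test_loss))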
6,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mock community dataset generation
Run in a qiime 2.0.6 conda environment.
This notebook describes how mock community datasets were retrieved and files were generated for tax-credit comparisons. Only the feature tables, metadata maps, representative sequences, and expected taxonomies are included in tax-credit, but this notebook can regenerate intermediate files, generate these files for new mock communities, or be tweaked to benchmark, e.g., quality control or OTU picking methods.
All mock communities are hosted on mockrobiota, though raw reads are deposited elsewhere. To use these mock communities, clone the mockrobiota repository into the repo_dir that contains the tax-credit repository.
Step1: Set source/destination filepaths
Step2: First we will define which mock communities we plan to use, and necessary parameters
Step3: Now we will generate data directories in tax-credit for each community and begin populating these with files from mockrobiota. This may take some time, as this involves downloading raw data fastq files.
Step4: Process data in QIIME2
Finally, we can get to processing our data. We begin by importing our data, demultiplexing, and viewing a few fastq quality summaries to decide how to trim our raw reads prior to processing.
Each dataset may require different parameters. For example, some mock communities used here require different barcode orientations, while others may already be demultiplexed. These parameters may be read in as a dictionary of tuples.
Step5: To view the demux_summary.qzv (demultiplexed sequences per sample counts) and demux_plot_qual.qzv (fastq quality profiles) summaries that you just created, drag and drop the files into q2view
Use the fastq quality data above to decide how to proceed. As each dataset will have different quality profiles and read lengths, we will enter trimming parameters as a dictionary. We can use this dict to pass other parameters to denoise_to_phylogeny(), including whether we want to build a phylogeny for each community.
Step6: Now we will quality filter with dada2, and use the representative sequences to generate a phylogeny.
Step7: To view the feature_table_summary.qzv summaries you just created, drag and drop the files into q2view
Extract results and move to repo | Python Code:
from tax_credit.process_mocks import (extract_mockrobiota_dataset_metadata,
extract_mockrobiota_data,
batch_demux,
denoise_to_phylogeny,
transport_to_repo
)
from os.path import expandvars, join
Explanation: Mock community dataset generation
Run in a qiime 2.0.6 conda environment.
This notebook describes how mock community datasets were retrieved and files were generated for tax-credit comparisons. Only the feature tables, metadata maps, representative sequences, and expected taxonomies are included in tax-credit, but this notebook can regenerate intermediate files, generate these files for new mock communities, or be tweaked to benchmark, e.g., quality control or OTU picking methods.
All mock communities are hosted on mockrobiota, though raw reads are deposited elsewhere. To use these mock communities, clone the mockrobiota repository into the repo_dir that contains the tax-credit repository.
End of explanation
# base directory containing tax-credit and mockrobiota repositories
project_dir = expandvars("$HOME/Desktop/projects/")
# tax-credit directory
repo_dir = join(project_dir, "short-read-tax-assignment")
# mockrobiota directory
mockrobiota_dir = join(project_dir, "mockrobiota")
# temp destination for mock community files
mock_data_dir = join(project_dir, "mock-community")
# destination for expected taxonomy assignments
expected_data_dir = join(repo_dir, "data", "precomputed-results", "mock-community")
Explanation: Set source/destination filepaths
End of explanation
# We will just use a sequential set of mockrobiota datasets, otherwise list community names manually
communities = ['mock-{0}'.format(n) for n in range(2,27) if n != 11 and n != 17]
#communities = ['mock-{0}'.format(n) for n in range(16,27) if n != 17]
# Create dictionary of mock community dataset metadata
community_metadata = extract_mockrobiota_dataset_metadata(mockrobiota_dir, communities)
# Map marker-gene to reference database names in tax-credit and in mockrobiota
# marker-gene tax-credit-dir mockrobiota-dir version
reference_dbs = {'16S' : ('gg_13_8_otus', 'greengenes', '13-8', '99-otus'),
'ITS' : ('unite_20.11.2016', 'unite', '7-1', '99-otus')
}
Explanation: First we will define which mock communities we plan to use, and necessary parameters
End of explanation
extract_mockrobiota_data(communities, community_metadata, reference_dbs,
mockrobiota_dir, mock_data_dir,
expected_data_dir)
Explanation: Now we will generate data directories in tax-credit for each community and begin populating these with files from mockrobiota. This may take some time, as this involves downloading raw data fastq files.
End of explanation
# {community : (demultiplex, rev_comp_barcodes, rev_comp_mapping_barcodes)}
demux_params = {'mock-1' : (True, False, True),
'mock-2' : (True, False, True),
'mock-3' : (True, False, False),
'mock-4' : (True, False, True),
'mock-5' : (True, False, True),
'mock-6' : (True, False, True),
'mock-7' : (True, False, True),
'mock-8' : (True, False, True),
'mock-9' : (True, False, True),
'mock-10' : (True, False, True),
'mock-12' : (False, False, False),
'mock-13' : (False, False, False),
'mock-14' : (False, False, False),
'mock-15' : (False, False, False),
'mock-16' : (False, False, False),
'mock-18' : (False, False, False),
'mock-19' : (False, False, False),
'mock-20' : (False, False, False),
'mock-21' : (False, False, False),
'mock-22' : (False, False, False),
'mock-23' : (False, False, False),
'mock-24' : (False, False, False),
'mock-25' : (False, False, False),
'mock-26' : (True, False, False), # Note we only use samples 1-40 in mock-26
}
batch_demux(communities, mock_data_dir, demux_params)
Explanation: Process data in QIIME2
Finally, we can get to processing our data. We begin by importing our data, demultiplexing, and viewing a few fastq quality summaries to decide how to trim our raw reads prior to processing.
Each dataset may require different parameters. For example, some mock communities used here require different barcode orientations, while others may already be demultiplexed. These parameters may be read in as a dictionary of tuples.
End of explanation
# {community : (trim_left, trunc_len, build_phylogeny)}
trim_params = {'mock-1' : (0, 100, True),
'mock-2' : (0, 130, True),
'mock-3' : (0, 150, True),
'mock-4' : (0, 150, True),
'mock-5' : (0, 200, True),
'mock-6' : (0, 50, True),
'mock-7' : (0, 90, True),
'mock-8' : (0, 100, True),
'mock-9' : (0, 100, False),
'mock-10' : (0, 100, False),
'mock-12' : (0, 230, True),
'mock-13' : (0, 250, True),
'mock-14' : (0, 250, True),
'mock-15' : (0, 250, True),
'mock-16' : (19, 231, False),
'mock-18' : (19, 231, False),
'mock-19' : (19, 231, False),
'mock-20' : (0, 250, False),
'mock-21' : (0, 250, False),
'mock-22' : (19, 250, False),
'mock-23' : (19, 250, False),
'mock-24' : (0, 150, False),
'mock-25' : (0, 165, False),
'mock-26' : (0, 290, False),
}
Explanation: To view the demux_summary.qzv (demultiplexed sequences per sample counts) and demux_plot_qual.qzv (fastq quality profiles) summaries that you just created, drag and drop the files into q2view
Use the fastq quality data above to decide how to proceed. As each dataset will have different quality profiles and read lengths, we will enter trimming parameters as a dictionary. We can use this dict to pass other parameters to denoise_to_phylogeny(), including whether we want to build a phylogeny for each community.
End of explanation
denoise_to_phylogeny(communities, mock_data_dir, trim_params)
Explanation: Now we will quality filter with dada2, and use the representative sequences to generate a phylogeny.
End of explanation
transport_to_repo(communities, mock_data_dir, repo_dir)
Explanation: To view the feature_table_summary.qzv summaries you just created, drag and drop the files into q2view
Extract results and move to repo
End of explanation |
6,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LSESU Applicable Maths Python Lesson 3
1/11/16
Today we will be learning about
* Strings and string indexing
* Reading files
* Lists
* List comprehensions
Recap from week 2
Functions
This is what the my_function you made could look like
def my_function(a,b)
Step1: Indexing strings
We can access only specific elements of a string using square brackets [ ]. This way you can split a string up into more manageable parts. You might use this if you are given lots of news reports and you only want the first sentence to analyse.
Important
Step2: This is very close to what we did last week when we looked at for statements
Step3: Question - How would you get the last element of the list?
Step4: Formatting strings
Often when you are creating strings, you might want some part to be flexible or change in some way. You can do that with the .format() function.
Step5: Question - Write some code which asks a user for their birthday day, month and year separately and print the result formatted together
Step6: Escape characters
String formatting includes special characters to make more complicated strings easy. These are always denoted with the \ character and are referred to as escape characters.
Common escape characters you might use will be newline \n or tab space \t. You also use the backslash to put quotes inside your string, like this \". If you are interested more in all the escape chars for Python look here
Step7: String methods
Some maths operations work on strings, and strings have some of their own methods which are specifically for working with variables made of characters.
Step8: Triple quotes and multi line strings
Step9: A note on encoding
Step10: Reading files
There are two main ways to read the contents of a text file, either all at once or line by line
Step11: Note
Step12: Writing files
The .write() function does the opposite of .read(). If you give .write() a string to write, this will get put inside the file
Step13: Closing files
When you are done using your files, you should close them so that the Python programme no longer has access as so
Step14: Context managers
There is a way around using the close() statement called a Context Manager. These use the with statement to create a code block, and the opened file is only used inside the code block. When you exit the contexted managed code block, the file is automatically closed.
Step15: We've breezed over File IO for now but if you want to look more in detail the TutorialsPoint page in this subject can give you more guidance if you are lost.
Lists
You have seen lists a few times before, but I haven't explained them until now. A list is a type of "Data Structure" i.e. a specific frame for organising items of data.
A list is declared as so
Step16: Indexing a list (it's just like indexing a string!)
Step17: Iterating over a list
Step18: Modifying a list
Elements of a list can be changed after declaration
Step19: You can also delete elements and add to the end
Step20: Problem - Print the list in reverse order using a for loop
Here's a tip
Step21: List comprehensions
Why use a comprehension?
These are one of my favourite features of the Python language. A list comprehension is a very readable way to run some operation on a list in one line of code. If you haven't quite understood today's lesson so far, get comfortable with that part first and revisit this later. You certainly won't ever NEED to use a list comprehension, but it will make for shorter code that shows you know what you are doing
Converting a list of temperatures
Imagine you have a list of temperatures in Celsius and you need to convert all the temperatures to Fahrenheit. The obvious way to do this is using a for loop
Step22: This is a small example where the for loop code is only two lines so the advantage is small. But you can actually do all of this in 1 line!
Step23: This is the basic version of the comprehension. You can also add conditions, like to only keep Celsius temperatures above 37
Step24: If we wanted to do this with loops we would have a for loop and an if statement. Now we have neither!
Multiplying two lists
Step25: Challenge - Write a comprehension to create a list of the first 10 square numbers?
Hint | Python Code:
# Everyone should know how to create (or "declare") a string by now
var = 'This is a string'
alphabet = 'abcdefghijklmnopqrstuvwxyz'
Explanation: LSESU Applicable Maths Python Lesson 3
1/11/16
Today we will be learning about
* Strings and string indexing
* Reading files
* Lists
* List comprehensions
Recap from week 2
Functions
This is what the my_function you made could look like
def my_function(a,b):
if a==b:
return 'Arguments are the same'
elif a>b:
return True
else:
return False
* import statements and packages
Do you understand every part of this line, and where it would be in your own code?
import matplotlib.pyplot as plt
This is what the random_string you made could look like
def random_string():
var = rand.randint(1,5)
if var==1:
return 'Random integer was 1'
elif var==2:
return 'Random integer was 2'
elif var==3:
return 'Random integer was 3'
elif var==4:
return 'Random integer was 4'
elif var==5:
return 'Random integer was 5'
else:
return 'Something went wrong :('
Strings
You've seen strings before in both the first and second lesson and so far, we've skipped over what they are and what you can do with them. Now we are going to introduce the fundamentals of how to use this data type.
name = "Python O'PythonFace"
A string is a variable of any number of characters (letters, numbers and other symbols) enclosed inside quotation marks. In Python there is no difference between single (' ') or double (" ") quotes. Above, name is the variable identifier name we use in the code, it has type string and the value of the variable is the string of characters on the right side of =.
Declaring strings
End of explanation
# We can get only the first element of the alphabet
# Note that a is the 0th character in the string
first_letter = alphabet[0]
print(first_letter)
# To get the second letter we would do this
second_letter = alphabet[1]
print(second_letter)
# We can also get a range of characters in the string
first_five_letters = alphabet[0:5]
print(first_five_letters)
Explanation: Indexing strings
We can access only specific elements of a string using square brackets [ ]. This way you can split a string up into more manageable parts. You might use this if you are given lots of news reports and you only want the first sentence to analyse.
Important: Python uses "zero based indexing" which is quite common in many languages. When you count up from the first element you always start on 0!
End of explanation
# You can see that when we run the for loop, Python looks at the indexes of the letters
# in the string to iterate over
for letter in alphabet:
print(alphabet.index(letter))
print(letter)
print()
Explanation: This is very close to what we did last week when we looked at for statements
End of explanation
# How can we get the z from the alphabet variable?
print(alphabet[-1])
Explanation: Question - How would you get the last element of the list?
End of explanation
# This is how you read input from a user
name = input('Input name: ')
# Now use the {} to leave some parts of the string blank and fill them in later
var = 'Hello! My name is {}'.format(name)
print(var)
Explanation: Formatting strings
Often when you are creating strings, you might want some part to be flexible or change in some way. You can do that with the .format() function.
End of explanation
# TO DO - Make 3 input requests to the user and format the result together as 1 string
# END TO DO
Explanation: Question - Write some code which asks a user for their birthday day, month and year separately and print the result formatted together
End of explanation
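One possible answer to the birthday question above (many variations are equally valid):
# Example solution to the birthday question
day = input('Birthday day: ')
month = input('Birthday month: ')
year = input('Birthday year: ')
print('Your birthday is {}/{}/{}'.format(day, month, year))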
n_string = 'This\nis\na\nmulti\nline\nstring\n'
print(n_string)
t_string = 'This\tstring\thas\ttab\tspaces\n'
print(t_string)
q_string = '\"This string is inside quotes\"'
print(q_string)
Explanation: Escape characters
String formatting includes special characters to make more complicated strings easy. These are always denoted with the \ character and are referred to as escape characters.
Common escape characters you might use will be newline \n or tab space \t. You also use the backslash to put quotes inside your string, like this \". If you are interested more in all the escape chars for Python look here
End of explanation
string1 = 'Hello'
string2 = 'Python'
# Add two strings together
print(string1 + string2)
# Another way to do this is using .join
print(' '.join([string1,string2]))
# Repetition using the * multiplication operator
print(string1*3)
# Remember the membership `in` keyword
print('o' in string1)
# You can make all characters uppercase
print(string1.upper())
# Or all lowercase
print(string2.lower())
# You can also search for patterns and replace characters
string3 = 'I\'m really more of a cat person'
print(string3.replace('cat','dog'))
# You can check if a string of a number is a number or not
print('12345'.isdigit())
# Or check if a string is only made of alphabet characters
print('alphabet'.isalpha())
# Get the length of a string
print(len(alphabet))
Explanation: String methods
Some maths operations work on strings, and strings have some of their own methods which are specifically for working with variables made of characters.
End of explanation
# To make big strings you use three quote marks
big_string = '''Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say
that they were perfectly normal, thank you very much. They were the last
people you'd expect to be involved in anything strange or mysterious,
because they just didn't hold with such nonsense.'''
# And Python will join strings declared over multiple lines together
multi_line_string = 'Hello my name is Tom ' \
'my favourite colour is Blue ' \
'and I live in Earl\'s Court'
print(multi_line_string)
Explanation: Triple quotes and multi line strings
End of explanation
# NOTE: This code won't work unless you make a file called `test_data.txt`
# in the same folder as this lesson
file = open('test_data.txt','r')
# The file is opened and has a "handle" which is the file variable for interaction
type(file)
Explanation: A note on encoding: there are many different kinds of character encodings people have created. The standard for Python 3 is 'UTF-8' which isn't worth knowing until you try and read data which isn't in this encoding and then it gets messy. More info on encoding can be read here
File I/O
Now that we are more comfortable with strings, I'm going to introduce the basics of reading and writing to files. Since a string can include (almost) any character they are always the default data type that the contents of a file are turned into when you import the data. If its numeric data like stock price changes, it's up to you to convert it to integer or float values!
Opening a file for reading or writing
When you run any Python script and you want to read or write to a file, the starting point is the open() statement. This looks for the file in the location you specify and takes a second argument depending on if you want to read of write to the file:
* open(FILEPATH,'r') - Reading from the file
* open(FILEPATH,'r+') - Reading from the file and writing
* open(FILEPATH,'w') - Writing to the file, will overwrite old data
* open(FILEPATH,'a') - Appending to the file, will add new data to the end of old data
If you add a b to any of these args (i.e. rb) it will open the file in binary mode, this isn't used often but is safer sometimes if you don't know what type of data is in the file.
End of explanation
file.read()
file.readline()
Explanation: Reading files
There are two main ways to read the contents of a text file, either all at once or line by line
End of explanation
file = open('test_data.txt','r') # Reopen to set the file handle to the beginning of the text
# This loop calls .readline() automatically
for line in file:
print(line)
Explanation: Note: When you read from a file in Python, the program remembers where you ended last time you read the file, so if you try and readline() after read(), you have read the whole file so you don't get anything back
Reading line by line
End of explanation
# You use open() to create a new file even if it did not already exist
file2 = open('write_data.txt','w+')
file2.write('this is a text file\n')
file2.write('test file for a python class\n')
# The number of chars written to the file is the return value
# Since we used w+ we can read the file to check what we created
file2.seek(0)  # move back to the start of the file, otherwise there is nothing left to read
for line in file2:
    print(line)
Explanation: Writing files
The .write() function does the opposite of .read(). If you give .write() a string to write, this will get put inside the file
End of explanation
# This is important for file security so nothing is corrupted
file.close()
file2.close()
Explanation: Closing files
When you are done using your files, you should close them so that the Python program no longer has access, like so:
End of explanation
# We could have read the file using a context manager like this
with open('test_data.txt','r') as file:
# In this indented code block the file is open
for line in file:
print(line)
# No need to close the file! It's handled by the Python program now
Explanation: Context managers
There is a way around using the close() statement called a Context Manager. These use the with statement to create a code block, and the opened file is only used inside the code block. When you exit the contexted managed code block, the file is automatically closed.
End of explanation
# Declaring a list
list1 = [1,2,5,5,6]
# A list can have different types inside each element
list2 = list(('hello',4e-6,45))
print(list1)
print()
print(list2)
Explanation: We've breezed over File IO for now but if you want to look more in detail the TutorialsPoint page in this subject can give you more guidance if you are lost.
Lists
You have seen lists a few times before, but I haven't explained them until now. A list is a type of "Data Structure" i.e. a specific frame for organising items of data.
A list is declared as so:
list1 = [a,b,c,d,e,f]
Each item in the list is separated by a comma, and the data in each item can be any type, including another list. A list is an iterable object, and in fact you will come to see you can treat a string and a list very similarly! Just think of a string as a list of characters
str = 'hello'
lis = ['h','e','l','l','o']
The other main Python data structures we will look at next week are tuples, dicts and sets.
Declaring a list
End of explanation
print(list1[0])
print(list2[1:3])
Explanation: Indexing a list (it's just like indexing a string!)
End of explanation
# Exactly the same as reading a file or printing all the chars in a string
for elem in list2:
print(elem)
Explanation: Iterating over a list
End of explanation
print(list1)
list1[-1] = 5000 # Use the -1 indexing the same as strings to get the end element
print(list1)
Explanation: Modifying a list
Elements of a list can be changed after declaration
End of explanation
# Remove the end item
del list1[-1]
print(list1)
# Add a new element to the end
list1.append(6)
print(list1)
# You can also use len() to get the size of a list
print(len(list1))
Explanation: You can also delete elements and add to the end
End of explanation
# TO DO - Print this list in reverse order
# END TO DO
Explanation: Problem - Print the list in reverse order using a for loop
Here's a tip
End of explanation
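One example answer to the reverse-order problem (yours may look different):
# Example solution: loop over the indices from the last element back to the first
for i in range(len(list1) - 1, -1, -1):
    print(list1[i])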
celsius = [39.2, 36.5, 37.3, 37.8]
print(celsius)
fahrenheit = [] #Declaring an empty list
for temp in celsius:
fahrenheit.append(float(9)/5*temp + 32)
print(fahrenheit)
Explanation: List comprehensions
Why use a comprehension?
These are one of my favourite features of the Python language. A list comprehension is a very readable way to run some operation on a list in one line of code. If you haven't quite understood today's lesson so far, get comfortable with that part first and revisit this later. You certainly won't ever NEED to use a list comprehension, but it will make for shorter code that shows you know what you are doing
Converting a list of temperatures
Imagine you have a list of temperatures in Celsius and you need to convert all the temperatures to Fahrenheit. The obvious way to do this is using a for loop
End of explanation
# The version using a list comprehension
fahrenheit2 = [float(9)/5*temp + 32 for temp in celsius]
print(fahrenheit2)
Explanation: This is a small example where the for loop code is only two lines so the advantage is small. But you can actually do all of this in 1 line!
End of explanation
fahrenheit3 = [float(9)/5*temp + 32 for temp in celsius if temp > 37]
print(fahrenheit3)
Explanation: This is the basic version of the comprehension. You can also add conditions, like to only keep Celsius temperatures above 37
End of explanation
ones = [1,2,3,4,5]
tens = [10,20,30,40,50]
# The zip function puts each indexed element in pairs, like the teeth of a zip being locked together.
mult = [i*j for i,j in zip(ones,tens)]
print(mult)
Explanation: If we wanted to do this with loops we would have a for loop and an if statement. Now we have neither!
Multiplying two lists
End of explanation
# TODO
# END TODO
Explanation: Challenge - Write a comprehension to create a list of the first 10 square numbers?
Hint: Use the range function we looked at when we looked at for loops.
End of explanation |
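For reference, one comprehension that answers the challenge (your version may differ):
# Example answer: the first 10 square numbers with a list comprehension
squares = [i**2 for i in range(1, 11)]
print(squares)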
6,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Naive Bayes - Assignment
Question 1
Implement a Naive Bayes classifier for the problem of predicting the quality of a car. To this end, we will use a car-quality dataset available from the UCI repository. This car dataset has the following features and class
Step1: Question 2
Step2: Question 3 | Python Code:
# Libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
# Read the data file
df = pd.read_csv('carData.csv', header=None)
df.columns = ['buying', 'maint','doors','persons','lug_boot','safety','class']
df.head()
# Split into training (70%) and test (30%) sets
df_train, df_test = train_test_split(df,test_size=0.3)
# Make a copy of the same data to use in question 2
df_train_sci = df_train.copy()
df_test_sci = df_test.copy()
# Split the test set into features and class, and compute the total training size
df = df_train
totalSize = len(df)
df_test_X = df_test.iloc[:,0:-1]
df_test_y = df_test['class']
# Split the data by class
for class_value in df['class'].unique():
class_set = df[df['class'] == class_value]
# Compute the frequency of each attribute value of each feature in each class
#frequency = [[[[k,len(df[df['class'] == i].index & df[df[j] == k].index)] for k in df[j].unique()] for j in df.columns[:-1]] for i in df['class'].unique()]
frequency = [[[len(df[df['class'] == i].index & df[df[j] == k].index) for k in df[j].unique()] for j in df.columns[:-1]] for i in df['class'].unique()]
frequency
frequency_test = [[[k for k in df[j].unique()] for j in df.columns[:-1]] for i in df['class'].unique()]
# Build an index table enumerating each attribute value
columns = ['buying', 'maint','doors','persons','lug_boot','safety']
matrixA={}
p = 0;
for j in columns:
matrixA[j] = frequency_test[0][p]
p += 1
matrixA['class'] = list(df['class'].unique())
frequency_label = pd.DataFrame.from_dict(matrixA, orient='index')
frequency_label
#(frequency_label.loc['buying'] == 'vhigh').argmax()
#sum(frequency[3][0][:] + frequency[2][0][:] + frequency[1][0][:] + frequency[0][0][:])
#frequency[3][0][:]
# Compute the probability of each feature value
likelihood_feature = [[(frequency[0][index][((frequency_label.loc[j] == k).argmax())] + frequency[1][index][((frequency_label.loc[j] == k).argmax())] + frequency[2][index][((frequency_label.loc[j] == k).argmax())] + frequency[3][index][((frequency_label.loc[j] == k).argmax())])/totalSize for k in df[j].unique()] for index,j in enumerate(df.columns[:-1])]
likelihood_feature
# Compute the probability of each class
likelihood_class = [(sum(df['class'] == h))/totalSize for h in df['class'].unique()]
likelihood_class
# Compute the probability of each attribute value of each feature in each class
frequency_prob = [[[len(df[df['class'] == i].index & df[df[j] == k].index)/sum(df['class'] == i) for k in df[j].unique()] for j in df.columns[:-1]] for i in df['class'].unique()]
frequency_prob
#data = df_test.iloc[0]
#data = data[:-1] # excludes the class
#df_test.iloc[0]
#feature_index = [(frequency_label.loc[i] == data.loc[i]).argmax() for i in data.index]
#feature_index
# Predict on the test set
df_test = df_test_X
predicts = ["" for x in range(len(df_test))]
k = 0;
for row in range(len(df_test)):
data = df_test.iloc[row]
feature_index = [(frequency_label.loc[h] == data.loc[h]).argmax() for h in data.index]
class_chance = np.zeros(len(likelihood_class))
for i,prob_class in enumerate(likelihood_class):
prob = 1;
for j,feature in enumerate(feature_index):
prob *= frequency_prob[i][j][feature]
class_chance[i] = (prob * prob_class) #/ likelihood_feature
predicts[k] = [frequency_label.loc['class'][class_chance.argmax()]]
k += 1
result_y = np.squeeze(np.asarray(predicts))
correct_values = np.sum(result_y == df_test_y.values)
correct_pct = correct_values / len(df_test_y)
print('Percentage correct = {0}'.format(correct_pct))
print("\nClassification Report:")
print(classification_report(y_true=df_test_y, y_pred=result_y, target_names=["unacc", "acc", "good", "vgood"]))
Explanation: Naive Bayes - Assignment
Question 1
Implement a Naive Bayes classifier for the problem of predicting the quality of a car. To this end, we will use a car-quality dataset available from the UCI repository. This car dataset has the following features and class:
Attributes
1. buying: vhigh, high, med, low
2. maint: vhigh, high, med, low
3. doors: 2, 3, 4, 5, more
4. persons: 2, 4, more
5. lug_boot: small, med, big
6. safety: low, med, high
Classes
1. unacc, acc, good, vgood
Question 2
Create a version of your implementation using the functions available in the SciKitLearn library for Naive Bayes (see here)
Question 3
Analyze the accuracy of the two algorithms and discuss your solution.
Question 1
End of explanation
from sklearn.naive_bayes import MultinomialNB, GaussianNB
from sklearn.preprocessing import LabelEncoder
# Convert the string values to numeric ones
# In my version all values were handled without needing to convert, but
# the scikit function is not suited to string values
for i in range(0, df_train_sci.shape[1]):
df_train_sci.iloc[:,i] = LabelEncoder().fit_transform(df_train_sci.iloc[:,i])
for i in range(0, df_test_sci.shape[1]):
df_test_sci.iloc[:,i] = LabelEncoder().fit_transform(df_test_sci.iloc[:,i])
# Split into features and classes
train_X_sci = df_train_sci.iloc[:,:-1]
test_X_sci = df_test_sci.iloc[:,:-1]
train_y_sci = df_train_sci.iloc[:,-1]
test_y_sci = df_test_sci.iloc[:,-1]
# Fit and predict with the scikit implementation
nb = GaussianNB()
nb.fit(train_X_sci, train_y_sci)
result_y_sci = nb.predict(test_X_sci)
correct_pct_sci = accuracy_score(test_y_sci, result_y_sci)
print('Percentage correct = {0}'.format(correct_pct_sci))
print("\nClassification Report:")
print(classification_report(y_true=test_y_sci, y_pred=result_y_sci, target_names=["unacc", "acc", "good", "vgood"]))
Explanation: Question 2
End of explanation
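Since the features here are label-encoded categories rather than truly continuous values, GaussianNB is a debatable choice; MultinomialNB, which is already imported above, is one alternative worth trying (my suggestion, not part of the original assignment):
# Optional comparison: MultinomialNB on the same label-encoded data
nb_m = MultinomialNB()
nb_m.fit(train_X_sci, train_y_sci)
result_m_sci = nb_m.predict(test_X_sci)
print('MultinomialNB percentage correct = {0}'.format(accuracy_score(test_y_sci, result_m_sci)))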
print('Result with my algorithm: {0}'.format(correct_pct))
print('Result with the sklearn algorithm: {0}'.format(correct_pct_sci))
Explanation: Question 3
End of explanation |
6,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov Madness
Ok let's get down to business! So the overall goal is to build a mathematical model that predicts with good accuracy who is likely to make it to the sweet 16 in the NCAA tournament. This project is going to have two parts
Step1: Excellent, now we have datasets. The first thing to do is to rank teams based on their performance in each year's NCAA march-madness tournament. This part of the calculation is rather subjective- I'm going to individually rank teams by how well they did in the tournament. I need to do this because, if you think about it, there were two teams that lost in the final four, four that lost in the elite 8, etc. How do we rank these teams? We could put them at relatively the same ranking, which I will. But I'm also going to differentiate between a bad loss and a close game. So this isn't an exact science but that's ok because the results will show how good my ranking system was.
In the following cells I select only the 64 teams in the NCAA tournament from the above list of every single team in Division 1 College basketball, and I assign each team a ranking (my assigned ranking is in column 34).
Step2: Now, for easier manipulation, we're going to convert the dataframe into a numpy array. Then we'll divide each value in the array by the total number of games that team played, ensuring we have 'per game' statistics.
Step3: Next we're going to begin the regression. First, we define a matrix y for our regression such that
$$ \textbf{Y} =\textbf{X} * \textbf{b}$$
where Y is our ratings, X is a matrix of our data points (each row represents the statistics for a single team), and b is our coefficients. I'm going to assume a linear relationship for now- I can play around with non-linear regressions later, but we really want to just get values for now and later we can figure out whether our regression is good.
Step4: Since we only want to use the statistics that are correlated with the ratings, we run a spearman correlation test on every statistic and select only the ones below our alpha level of $0.05$. These statistics then form our $\textbf{X}$ matrix. Next we use the "linalg.lstsq" regression function to perform a least squares regression of our data. Finally, I'll compute our predicted rankings by multiplying the $\textbf{X}$ and $b$ matrices.
Step5: Notice above that I had to make a cheeky and dubious adjustment: some of the predicted rankings came out negative, so to ensure that all rankings are positive (we'll need them positive to create our Markov chain), I change all negative rankings to a rank of 1. A higher ranking means a better team.
Alright, we now have an equation with 15 coefficients that predicts the ranking of a team based on its regular season stats. Now we are going to create a Markov Chain using these data!
Part 2
Step6: Inexplicably the rows don't add to one unless we use the normalization factor $\frac{1}{64 * 0.9921875}$. No biggie.
This is a special type of Markov chain- because none of the values in the transition matrix are 1 or 0, it's possible to go from any state in the matrix to any other state. We call this a regular Markov chain. In fact, this Markov Chain is regular, aperiodic, and irreducible. The special property of such a Markov chain is that there's a limiting probability distribution. This means that if we evolve the Markov process over infinite iterations (i.e. you randomly go from state 0 to state 1 to state 7 to state 32 etc. etc. infinite times) there is a set probability that the particle will be in any given state at time infinity. The limiting distribution follows this equation
Step7: Now the interesting part!!! We get to apply this to new data sets. Because I'm still salty about how poorly my bracket did this year (my beloved MSU Spartans fell in the first round...) let's take a look and see whether this rating scheme is good for the 2016 March Madness bracket. First, we need to call a new data set.
Step8: Luckily for me I don't have to create rankings for this set; I can just plug in the regular season stats of the 64 teams in the bracket and see what the program predicts. So let's do that!!! | Python Code:
import pandas as pd
import numexpr
import bottleneck
import numpy as np
import numpy.linalg as linalg
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as ss
reg_14_15 = pd.read_csv('2014_2015 Regular Season Stats.csv')
#Testing out our system
reg_14_15
Explanation: Markov Madness
Ok let's get down to business! So the overall goal is to build a mathematical model that predicts with good accuracy who is likely to make it to the sweet 16 in the NCAA tournament. This project is going to have two parts:
Part 1:
-Performing an optimization/regression in order to write equations that predict a team's performance in the NCAA tournament based on their regular season statistics.
Part 2:
-Putting together a Markov Chain that will use these probabilities to predict the teams most likely to advance in the tourament. I'll explain more about a Markov Chain when we get there.
Let's start with Part 1!
Part 1: The Regression
In order to perform this regression we need to set up a system to pull data from CSV files.
End of explanation
reg_14_15 = reg_14_15.rename(columns={'Unnamed: 0': 'Number'})
#renaming the columns with integers so they can be more easily manipulated
d=[]
for i in range(0,34,1):
d.append(i)
d
reg_14_15.columns=[d]
#creating a new dataframe with only the teams in the tournamment
bracket_14_15=reg_14_15.iloc[[7,8,12,14,22,23,35,36,51,55,66,67,75,82,99,100,102,104,108,110,126,129,130,135,139,141,149,153,162,173,177,198,203,206,211,214,218,222,225,226,227,230,242,243,250,263,283,288,290,299,303,316,319,321,325,328,329,330,337,342,345,346,348,349],:]
newCol = [27,56,6,24,33,58,48,19,25,61,44,22,1,54,25,42,23,8,62,51,43,27,33,20,3,64,38,7,27,4,46,59,10,13,57,55,27,5,26,11,40,21,37,39,63,27,35,41,49,45,60,53,15,9,52,17,18,35,16,14,2,47,50,12]
newName = '34'
values = np.insert(bracket_14_15.values,bracket_14_15.shape[1],newCol,axis=1)
header = bracket_14_15.columns.values.tolist()
header.append(newName)
df = pd.DataFrame(values,columns=header)
df
Explanation: Excellent, now we have datasets. The first thing to do is to rank teams based on their performance in each year's NCAA march-madness tournament. This part of the calculation is rather subjective- I'm going to individually rank teams by how well they did in the tournament. I need to do this because, if you think about it, there were two teams that lost in the final four, four that lost in the elite 8, etc. How do we rank these teams? We could put them at relatively the same ranking, which I will. But I'm also going to differentiate between a bad loss and a close game. So this isn't an exact science but that's ok because the results will show how good my ranking system was.
In the following cells I select only the 64 teams in the NCAA tournament from the above list of every single team in Division 1 College basketball, and I assign each team a ranking (my assigned ranking is in column 34).
End of explanation
mat = np.zeros((64,32))
for j in range (0,64,1):
for i in range(3,34,1):
val = float(df.iat[j,i])/float(df.iat[j,2])
mat[j,i-3]=val
Explanation: Now, for easier manipulation, we're going to convert the dataframe into a numpy array. Then we'll divide each value in the array by the total number of games that team played, ensuring we have 'per game' statistics.
End of explanation
#creating our y matrix
ratings = np.zeros((64,1))
for j in range(0,64,1):
val = 64 - float(df.iat[j,34])
ratings[j] = val
Explanation: Next we're going to begin the regression. First, we define a matrix y for our regression such that
$$ \textbf{Y} =\textbf{X} * \textbf{b}$$
where Y is our ratings, X is a matrix of our data points (each row represents the statistics for a single team), and b is our coefficients. I'm going to assume a linear relationship for now- I can play around with non-linear regressions later, but we really want to just get values for now and later we can figure out whether our regression is good.
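For reference, when $\textbf{X}^T\textbf{X}$ is invertible the least-squares solution we compute below has the closed form
$$\hat{\textbf{b}} = (\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{Y}$$
which is what linalg.lstsq returns (it falls back to a pseudo-inverse when it is not).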
End of explanation
coeffs = []
for i in range(0,32,1):
results = ss.spearmanr(mat[:,i],ratings)
if results[1] < .05:
coeffs.append(i)
xmat = []
for i in coeffs:
xmat.append(mat[:,i])
result = linalg.lstsq(np.transpose(xmat),ratings)
x_mat = np.asarray(xmat)
x_matT = np.transpose(np.asarray(xmat))
rating = np.transpose(np.asarray(ratings))
npresult = np.asarray(result[0])
dot = np.dot(np.transpose(npresult),x_mat)
dot
dotadjusted = np.zeros((1,64))
for i in range(0,64,1):
if dot[0,i] < 0:
dotadjusted[0,i] = 1
else:
dotadjusted[0,i] = dot[0,i]
Explanation: Since we only want to use the statistics that are correlated with the ratings, we run a spearman correlation test on every statistic and select only the ones below our alpha level of $0.05$. These statistics then form our $\textbf{X}$ matrix. Next we use the "linalg.lstsq" regression function to perform a least squares regression of our data. Finally, I'll compute our predicted rankings by multiplying the $\textbf{X}$ and $b$ matrices.
End of explanation
brac2015 = np.zeros((64,64))
def brac(i):
a=0
for j in range(0,64,1):
a = a + dotadjusted[0,i]/(dotadjusted[0,i]+dotadjusted[0,j])
return 1/(64*.9921875)*a
for i in range(0,64,1):
for j in range(0,64,1):
if i != j:
brac2015[i,j] = 1/(64*.9921875) * dotadjusted[0,j]/(dotadjusted[0,i] + dotadjusted[0,j])
if i == j:
brac2015[i,i] = brac(i)
brac2015transpose = np.transpose(brac2015)
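# quick check (sketch): with the normalization above, every row of the transition matrix should sum to ~1
print('row sums range: {:.6f} to {:.6f}'.format(brac2015.sum(axis=1).min(), brac2015.sum(axis=1).max()))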
Explanation: Notice above that I had to make a cheeky and dubious adjustment: some of the predicted rankings came out negative, so to ensure that all rankings are positive (we'll need them positive to create our Markov chain), I change all negative rankings to a rank of 1. A higher ranking means a better team.
Alright, we now have an equation with 15 coefficients that predicts the ranking of a team based on its regular season stats. Now we are going to create a Markov Chain using these data!
Part 2: The Markov Chain
Let's play a game called the jumping particle.
Consider a particle that can jump between multiple different states. On each turn of the game, the particle has a probability of jumping to another state or remaining in the current state. This group of states represents a Markov chain. The probability that a particle jumps to any particular state is written in the form of a "transition probability matrix." For example, consider a 2-state Markov Chain with states 0 and 1:
$$P =
\left[
\begin{array}{cc}
0.4 & 0.6\\
0.7 & 0.3
\end{array}\right]
$$
In this case, the probability that a particle in state 0 on turn 1 jumps to state 1 on turn 2 is 0.6, and the probability it stays in state 0 is 0.4. Likewise, the probability that a particle in state 1 on turn 1 jumps to state 0 on turn 2 is 0.7 while the probability that it stays in state 1 is 0.3. Notice that each row sums to 1. This makes intuitive sense; the probability that the particle either jumps or stays must add to 1. It turns out that Markov Chains have lots of nice properties that we can exploit. First, however, we have to construct our transition probability matrix for our bracket.
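As a quick illustration (a toy aside, separate from the bracket model), we can push a starting distribution through this little 2-state chain with numpy and watch it settle:
```
import numpy as np
P = np.array([[0.4, 0.6],
              [0.7, 0.3]])
dist = np.array([1.0, 0.0])          # start the particle in state 0
for turn in range(50):               # evolve the chain for many turns
    dist = dist.dot(P)
print(dist)                          # settles near [0.538, 0.462]
```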
Let's use our ranking system. Adopting a method suggested in Kvam et. al, we can define
$$p_{i,j}= \frac{r_j}{r_i+r_j}$$
and
$$p_{i,i} = \sum_{j = 1, j \neq i}^{64}\frac{r_i}{r_i+r_j}$$ where $r_i$ represents the ranking of team i and $r_j$ the ranking of team j.
Notice, however, that there is an issue; this does not necessarily sum to 1 for all the values in a row. In fact,
$$ p_{i,1} + p_{i,2} + ... + p_{i,i-1} + p_{i,i+1} + ... + p_{i,64} + p_{i,i} = $$
$$ \frac{r_1}{r_i+r_1} + \frac{r_2}{r_i+r_2} + ... + \frac{r_{i-1}}{r_i+r_{i-1}} + \frac{r_{i+1}}{r_i+r_{i+1}} + ... + \frac{r_{64}}{r_i+r_{64}} + (\frac{r_i}{r_i+r_1} + ... + \frac{r_i}{r_i+r_{i-1}} + \frac{r_i}{r_i+r_{i+1}} + ... + \frac{r_i}{r_i+r_{64}}) = $$
$$ \frac{r_i + r_1}{r_i+r_1} + ...\frac{r_i + r_{i-1}}{r_i+r_{i-1}} + \frac{r_i+r_{i+1}}{r_i+r_{i+1}} + ... + \frac{r_i+r_{64}}{r_i+r_{64}} = 63(1) = 63 $$
So if we normalize by $\frac{1}{63}$ we should get rows that sum to 1. Now let's write the matrix.
End of explanation
#replace last equation of P with the second boundary condition.
brac2015eq = np.copy(brac2015transpose)   # start from P^T, then adjust the diagonal and the last row below
for i in range(0,63,1):
for j in range(0,63,1):
if i == j:
brac2015eq[i,j] = brac2015transpose[i,i] - 1
if i != j:
brac2015eq[i,j] = brac2015transpose[i,j]
for i in range(0,64,1):
brac2015eq[63,i] = 1
b = np.zeros((64,1))
b[63,0] = 1
a = np.zeros((64,1))
c = []
d = []
for i in range(0,64,1):
cat = np.linalg.solve(brac2015eq,b)[i,0]
c.append(cat)
d.append(df.iat[i,1])
e = pd.Series(d)
f = pd.Series(c)
predictions = pd.DataFrame({ 'Team Name' : e,
'Steady State Probability' : f})
finalpredictions = predictions.sort_values(by = 'Steady State Probability')
print(finalpredictions.tail())
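# quick sanity check (sketch): the solved vector should sum to one and be (approximately) stationary
pi = np.asarray(c)
print('sum of probabilities: {:.6f}'.format(pi.sum()))
print('max |pi*P - pi|: {:.2e}'.format(np.abs(pi.dot(brac2015) - pi).max()))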
Explanation: The rows only add to one if we use the normalization factor $\frac{1}{64 * 0.9921875}$ rather than the $\frac{1}{63}$ derived above: the diagonal term brac(i) sums over every $j$, including $j = i$, which adds an extra $\frac{1}{2}$ to each row, so each row actually totals $63.5 = 64 \times 0.9921875$. No biggie.
This is a special type of Markov chain- because none of the values in the transition matrix are 1 or 0, it's possible to go from any state in the matrix to any other state. We call this a regular Markov chain. In fact, this Markov Chain is regular, aperiodic, and irreducible. The special property of such a Markov chain is that there's a limiting probability distribution. This means that if we evolve the Markov process over infinite iterations (i.e. you randomly go from state 0 to state 1 to state 7 to state 32 etc. etc. infinite times) there is a set probability that the particle will be in any given state at time infinity. The limiting distribution follows this equation:
$$ \pi* \textbf{P} = \pi$$
where $\pi$ is the limiting distribution and $\textbf{P}$ is the transition probability matrix we constructed. Notice that $\pi$ is a 64-dimensional vector in our case.
We can use these limiting distributions! If we rank teams by their limiting distribution probabilities, we should be able to see which teams will be the most likely to win the tournament.
The other equation of importance is
$$ \pi_1 + ... + \pi_{64} = 1$$
where $\pi = <\pi_1,\pi_2,...,\pi_{64}>$
which makes sense, since the particle must be in $\textit{some}$ state at time infinity (Note: $\pi_i$ is the probability that the particle will be in state i at time infinity).
So now we have 64 equations to solve 64 variables (the $\pi_i$).
End of explanation
reg_15_16 = pd.read_csv('2015_2016 Regular Season Stats.csv')
reg_15_16.head()
Explanation: Now the interesting part!!! We get to apply this to new data sets. Because I'm still salty about how poorly my bracket did this year (my beloved MSU Spartans fell in the first round...) let's take a look and see whether this rating scheme is good for the 2016 March Madness bracket. First, we need to call a new data set.
End of explanation
#teams = the 64 teams in the bracket that year. bracket = the associated data.
def predictor(regseasonstats,teams,vars,coefficients):
'''This function takes in multiple different constraints and outputs the teams most likely to win the NCAA tournament and
their probabilities of winning. Inputs:
regseasonstats = uploaded CSV file containing statistics for all teams as a Pandas Dataframe
teams = a list of the numerical indices associated with the 64 teams in the NCAA bracket that year
vars = the numerical values of the column headers of the variables desired to use in the regression
coefficients = the associated coefficients for each variable.'''
d=[]
for i in range(0,34,1):
d.append(i)
regseasonstats.columns=[d]
bracket = regseasonstats.iloc[teams,:]
mat = np.zeros((64,32))
for j in range (0,64,1):
for i in range(3,34,1):
val = float(bracket.iat[j,i])/float(bracket.iat[j,2])
mat[j,i-3]=val
xmat = []
for i in vars:
xmat.append(mat[:,i])
x_mat = np.asarray(xmat)
    npresult = np.asarray(coefficients)
dot = np.dot(np.transpose(npresult),x_mat)
dotadjusted = np.zeros((1,64))
for i in range(0,64,1):
if dot[0,i] < 0:
dotadjusted[0,i] = 1
else:
dotadjusted[0,i] = dot[0,i]
#Making the Markov transition matrix
brac2015 = np.zeros((64,64))
def brac(i):
a=0
for j in range(0,64,1):
a = a + dotadjusted[0,i]/(dotadjusted[0,i]+dotadjusted[0,j])
return 1/(64*.9921875)*a
for i in range(0,64,1):
for j in range(0,64,1):
if i != j:
brac2015[i,j] = 1/(64*.9921875) * dotadjusted[0,j]/(dotadjusted[0,i] + dotadjusted[0,j])
if i == j:
brac2015[i,i] = brac(i)
brac2015transpose = np.transpose(brac2015)
    brac2015eq = np.copy(brac2015transpose)   # (re)build the system matrix inside the function
    for i in range(0,63,1):
for j in range(0,63,1):
if i == j:
brac2015eq[i,j] = brac2015transpose[i,i] - 1
if i != j:
brac2015eq[i,j] = brac2015transpose[i,j]
for i in range(0,64,1):
brac2015eq[63,i] = 1
b = np.zeros((64,1))
b[63,0] = 1
a = np.zeros((64,1))
mat1 = []
mat2 = []
for i in range(0,64,1):
cat = np.linalg.solve(brac2015eq,b)[i,0]
mat1.append(cat)
mat2.append(bracket.iat[i,1])
teamname = pd.Series(mat2)
probability = pd.Series(mat1)
predictions = pd.DataFrame({ 'Team Name' : teamname,
'Steady State Probability' : probability})
finalpredictions = predictions.sort_values(by = 'Steady State Probability')
return(finalpredictions[48:64])
# Here we define the row indices of the 64 teams in the 2016 bracket
teams2016 = [12,16,20,22,35,36,38,49,51,58,61,67,75,90,94,104,107,108,111,114,126,128,129,130,135,139,162,170,172
,173,174,203,207,209,218,222,226,230,231,236,242,243,256,269,276,281,290,292,293,294,299,300,305,320,
321,328,329,330,336,337,342,345,349,350]
# and now run the prediction for the 2016 bracket
predictor(reg_15_16,teams2016,coeffs,result[0])
Explanation: Luckily for me I don't have to create rankings for this set; I can just plug in the regular season stats of the 64 teams in the bracket and see what the program predicts. So let's do that!!!
End of explanation |
6,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An RNN model for temperature data
This time we will be working with real data
Step1: Download Data
Step2: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values: SEQLEN=128, RESAMPLE_BY=5.***
Step3: Temperature data
This is what our temperature datasets looks like
Step4: <a name="resampling"></a>
Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temparatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
Step5: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
Step6: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
***Assignment #2*** Temperatures are noisy. If we ask the model to predict the next data point, noise might drown the trend and the model will not train. The trend should be clearer if we ask the model to look further ahead. You can use the [hyperparameter](#hyperparameters) N_FORWARD to shift the target sequences by more than 1. Try values between 4 and 16 and see how [training sequences](#trainseq) look.<br/>
<br/>
If the model predicts N_FORWARD in advance, you will also need it to output N_FORWARD predicted values instead of 1. Please check that the output of your model is indeed `Yout = Yr[:,-N_FORWARD:,:]`.
Step7: Instantiate the model
Step8: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
Step9: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
Step10: <a name="inference"></a>
Inference
This is a generative model
Step11: <a name="valid"></a>
Validation | Python Code:
import math
import sys
import time
import numpy as np
sys.path.insert(0, '../temperatures/utils/') #so python can find the utils_ modules
import utils_batching
import utils_args
import tensorflow as tf
from tensorflow.python.lib.io import file_io as gfile
print("Tensorflow version: " + tf.__version__)
from matplotlib import pyplot as plt
import utils_prettystyle
import utils_display
Explanation: An RNN model for temperature data
This time we will be working with real data: daily (Tmin, Tmax) temperature series from 36 weather stations spanning 50 years. It is to be noted that a pretty good predictor model already exists for temperatures: the average of temperatures on the same day of the year in N previous years. It is not clear if RNNs can do better but we will see how far they can go.
<div class="alert alert-block alert-info">
Things to do:<br/>
<ol start="0">
<li>Run the notebook as it is. Look at the data visualisations. Then look at the predictions at the end. Not very good...
<li>First play with the data to find good values for RESAMPLE_BY and SEQLEN in hyperparameters ([Assignment #1](#assignment1)).
<li>Now implement the RNN model in the model function ([Assignment #2](#assignment2)).
<li>Temperatures are noisy, let's try something new: predicting N data points ahead instead of only 1 ahead ([Assignment #3](#assignment3)).
<li>Now we will adjust more traditional hyperparameters and add regularisations. ([Assignment #4](#assignment4))
<li>
Look at the save-restore code. The model is saved at the end of the [training loop](#train) and restored when running [validation](#valid). Also see how the restored model is used for [inference](#inference).
<br/><br/>
You are ready to run in the cloud on all 1666 weather stations. Use [this bash notebook](../run-on-cloud-ml-engine.ipynb) to convert your code to a regular Python file and invoke the Google Cloud ML Engine command line.
When the training is finished on ML Engine, change one line in [validation](#valid) to load the SAVEDMODEL from its cloud bucket and display.
</div>
End of explanation
%%bash
DOWNLOAD_DIR=../temperatures/data
mkdir $DOWNLOAD_DIR
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/temperatures/* $DOWNLOAD_DIR
Explanation: Download Data
End of explanation
NB_EPOCHS = 5 # number of times the model sees all the data during training
N_FORWARD = 1 # train the network to predict N in advance (traditionnally 1)
RESAMPLE_BY = 1 # averaging period in days (training on daily data is too much)
RNN_CELLSIZE = 80 # size of the RNN cells
N_LAYERS = 1 # number of stacked RNN cells (needed for tensor shapes but code must be changed manually)
SEQLEN = 32 # unrolled sequence length
BATCHSIZE = 64 # mini-batch size
DROPOUT_PKEEP = 0.7 # probability of neurons not being dropped (should be between 0.5 and 1)
ACTIVATION = tf.nn.tanh # Activation function for GRU cells (tf.nn.relu or tf.nn.tanh)
JOB_DIR = "temperature_checkpoints"
DATA_DIR = "../temperatures/data"
# potentially override some settings from command-line arguments
if __name__ == '__main__':
JOB_DIR, DATA_DIR = utils_args.read_args1(JOB_DIR, DATA_DIR)
ALL_FILEPATTERN = DATA_DIR + "/*.csv" # pattern matches all 1666 files
EVAL_FILEPATTERN = DATA_DIR + "/USC000*2.csv" # pattern matches 8 files
# pattern USW*.csv -> 298 files, pattern USW*0.csv -> 28 files
print('Reading data from "{}".\nWrinting checkpoints to "{}".'.format(DATA_DIR, JOB_DIR))
Explanation: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values: SEQLEN=128, RESAMPLE_BY=5.***
</div>
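One quick sanity check on those numbers: each training sequence then covers $128 \times 5 = 640$ days, roughly $1.75$ years, so the network sees more than one full yearly cycle per sequence.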
End of explanation
all_filenames = gfile.get_matching_files(ALL_FILEPATTERN)
eval_filenames = gfile.get_matching_files(EVAL_FILEPATTERN)
train_filenames = list(set(all_filenames) - set(eval_filenames))
# By default, this utility function loads all the files and places data
# from them as-is in an array, one file per line. Later, we will use it
# to shape the dataset as needed for training.
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames)
evtemps, _, evdates, _, _ = next(ite) # gets everything
print('Pattern "{}" matches {} files'.format(ALL_FILEPATTERN, len(all_filenames)))
print('Pattern "{}" matches {} files'.format(EVAL_FILEPATTERN, len(eval_filenames)))
print("Evaluation files: {}".format(len(eval_filenames)))
print("Training files: {}".format(len(train_filenames)))
print("Initial shape of the evaluation dataset: " + str(evtemps.shape))
print("{} files, {} data points per file, {} values per data point"
" (Tmin, Tmax, is_interpolated) ".format(evtemps.shape[0], evtemps.shape[1],evtemps.shape[2]))
# You can adjust the visualisation range and dataset here.
# Interpolated regions of the dataset are marked in red.
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 18262 # 18262 = Dec 31st 2009
visu_temperatures = evtemps[WEATHER_STATION,START_DATE:END_DATE]
visu_dates = evdates[START_DATE:END_DATE]
utils_display.picture_this_4(visu_temperatures, visu_dates)
Explanation: Temperature data
This is what our temperature datasets looks like: sequences of daily (Tmin, Tmax) from 1960 to 2010. They have been cleaned up and eventual missing values have been filled by interpolation. Interpolated regions of the dataset are marked in red on the graph.
End of explanation
# This time we ask the utility function to average temperatures over 5-day periods (RESAMPLE_BY=5)
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames, RESAMPLE_BY, tminmax=True)
evaltemps, _, evaldates, _, _ = next(ite)
# display five years worth of data
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 365*5//RESAMPLE_BY # 5 years
visu_temperatures = evaltemps[WEATHER_STATION, START_DATE:END_DATE]
visu_dates = evaldates[START_DATE:END_DATE]
plt.fill_between(visu_dates, visu_temperatures[:,0], visu_temperatures[:,1])
plt.show()
Explanation: <a name="resampling"></a>
Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temparatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
End of explanation
# The function rnn_multistation_sampling_temperature_sequencer puts one weather station per line in
# a batch and continues with data from the same station in corresponding lines in the next batch.
# Features and labels are returned with shapes [BATCHSIZE, SEQLEN, 2]. The last dimension of size 2
# contains (Tmin, Tmax).
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
nb_epochs=1,
tminmax=True)
# load 6 training sequences (each one contains data for all weather stations)
visu_data = [next(ite) for _ in range(6)]
# Check that consecutive training sequences from the same weather station are indeed consecutive
WEATHER_STATION = 4
utils_display.picture_this_5(visu_data, WEATHER_STATION)
Explanation: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
End of explanation
def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):
print('features: {}'.format(features))
X = features # shape [BATCHSIZE, SEQLEN, 2], 2 for (Tmin, Tmax)
batchsize = tf.shape(X)[0] # allow for variable batch size
seqlen = tf.shape(X)[1] # allow for variable sequence length
cell = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
Hr, H = tf.nn.dynamic_rnn(cell,X,initial_state=Hin)
Yn = tf.reshape(Hr, [batchsize*seqlen, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 2) # Yr [BATCHSIZE*SEQLEN, 2] predicting vectors of 2 element
Yr = tf.reshape(Yr, [batchsize, seqlen, 2]) # Yr [BATCHSIZE, SEQLEN, 2]
Yout = Yr[:,-N_FORWARD:,:] # Last N_FORWARD outputs. Yout [BATCHSIZE, N_FORWARD, 2]
loss = tf.losses.mean_squared_error(Yr, labels) # labels[BATCHSIZE, SEQLEN, 2]
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
return Yout, H, loss, train_op, Yr
Explanation: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
***Assignment #2*** Temperatures are noisy. If we ask the model to predict the next data point, noise might drown the trend and the model will not train. The trend should be clearer if we ask the model to look further ahead. You can use the [hyperparameter](#hyperparameters) N_FORWARD to shift the target sequences by more than 1 (a toy illustration of the shift follows this box). Try values between 4 and 16 and see how [training sequences](#trainseq) look.<br/>
<br/>
If the model predicts N_FORWARD in advance, you will also need it to output N_FORWARD predicted values instead of 1. Please check that the output of your model is indeed `Yout = Yr[:,-N_FORWARD:,:]`. The inference part has already been adjusted to generate the sequence by blocks of N_FORWARD points. You can have a [look at it](#inference).<br/>
<br/>
Train and evaluate to see if you are getting better results. ***In the end, use this value: N_FORWARD=8***
</div>
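A toy illustration of what shifting the targets by N_FORWARD means (the provided sequencer utility does this for you when you pass N_FORWARD):
```
import numpy as np
data = np.arange(20)                          # stand-in for one resampled temperature series
n_forward, seqlen = 8, 10                     # toy values, not the notebook hyperparameters
features = data[0:seqlen]
labels = data[n_forward:seqlen + n_forward]   # same window, shifted n_forward steps ahead
print(features)
print(labels)
```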
<a name="assignment3"></a>
<div class="alert alert-block alert-info">
***Assignment #3*** Try adjusting the following parameters:<ol><ol>
<li> Use a stacked RNN cell with 2 layers with in the model:<br/>
```
cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE) for _ in range(N_LAYERS)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
```
<br/>Do not forget to set N_LAYERS=2 in [hyperparameters](#hyperparameters)
</li>
<li>Regularisation: add dropout between cell layers.<br/>
```
cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob = dropout_pkeep) for cell in cells]
```
<br/>
Check that you have a good value for DROPOUT_PKEEP in [hyperparameters](#hyperparameters). 0.7 should do. Also check that dropout is deactivated i.e. dropout_pkeep=1.0 during [inference](#inference).
</li>
<li>Increase RNN_CELLSIZE -> 128 to allow the cells to model more complex behaviors.</li>
</ol></ol>
Play with these options until you get a good fit for at least 1.5 years.
</div>
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 2]<br/>
Y shape [BATCHSIZE, SEQLEN, 2]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
When executed, this function instantiates the Tensorflow graph for our model.
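For Assignment #3, one possible wiring of the stacked cells with dropout (a sketch assembled from the calls quoted above; remember to also set N_LAYERS=2 in the hyperparameters) is:
```
def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):
    X = features                                   # [BATCHSIZE, SEQLEN, 2]
    batchsize = tf.shape(X)[0]
    seqlen = tf.shape(X)[1]
    cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE) for _ in range(N_LAYERS)]
    cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=dropout_pkeep) for cell in cells]
    cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
    Hr, H = tf.nn.dynamic_rnn(cell, X, initial_state=Hin)   # Hin: [BATCHSIZE, RNN_CELLSIZE*N_LAYERS]
    Yn = tf.reshape(Hr, [batchsize * seqlen, RNN_CELLSIZE])
    Yr = tf.layers.dense(Yn, 2)
    Yr = tf.reshape(Yr, [batchsize, seqlen, 2])
    Yout = Yr[:, -N_FORWARD:, :]                   # last N_FORWARD predictions
    loss = tf.losses.mean_squared_error(Yr, labels)
    train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
    return Yout, H, loss, train_op, Yr
```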
End of explanation
tf.reset_default_graph() # restart model graph from scratch
# placeholder for inputs
Hin = tf.placeholder(tf.float32, [None, RNN_CELLSIZE * N_LAYERS])
features = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
labels = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
step = tf.placeholder(tf.int32)
dropout_pkeep = tf.placeholder(tf.float32)
# instantiate the model
Yout, H, loss, train_op, Yr = model_rnn_fn(features, Hin, labels, step, dropout_pkeep)
Explanation: Instantiate the model
End of explanation
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run([init])
saver = tf.train.Saver(max_to_keep=1)
Explanation: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
End of explanation
losses = []
indices = []
last_epoch = 99999
last_fileid = 99999
for i, (next_features, next_labels, dates, epoch, fileid) in enumerate(
utils_batching.rnn_multistation_sampling_temperature_sequencer(train_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
NB_EPOCHS, tminmax=True)):
    # reinitialize state between epochs or when starting on data from a new weather station
if epoch != last_epoch or fileid != last_fileid:
batchsize = next_features.shape[0]
H_ = np.zeros([batchsize, RNN_CELLSIZE * N_LAYERS])
print("State reset")
#train
feed = {Hin: H_, features: next_features, labels: next_labels, step: i, dropout_pkeep: DROPOUT_PKEEP}
Yout_, H_, loss_, _, Yr_ = sess.run([Yout, H, loss, train_op, Yr], feed_dict=feed)
# print progress
if i%20 == 0:
print("{}: epoch {} loss = {} ({} weather stations this epoch)".format(i, epoch, np.mean(loss_), fileid+1))
sys.stdout.flush()
if i%10 == 0:
losses.append(np.mean(loss_))
indices.append(i)
# This visualisation can be helpful to see how the model "locks" on the shape of the curve
# if i%100 == 0:
# plt.figure(figsize=(10,2))
# plt.fill_between(dates, next_features[0,:,0], next_features[0,:,1]).set_alpha(0.2)
# plt.fill_between(dates, next_labels[0,:,0], next_labels[0,:,1])
# plt.fill_between(dates, Yr_[0,:,0], Yr_[0,:,1]).set_alpha(0.8)
# plt.show()
last_epoch = epoch
last_fileid = fileid
# save the trained model
SAVEDMODEL = JOB_DIR + "/ckpt" + str(int(time.time()))
tf.saved_model.simple_save(sess, SAVEDMODEL,
inputs={"features":features, "Hin":Hin, "dropout_pkeep":dropout_pkeep},
outputs={"Yout":Yout, "H":H})
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value for scaling
plt.plot(indices, losses)
plt.show()
Explanation: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
End of explanation
def prediction_run(predict_fn, prime_data, run_length):
H = np.zeros([1, RNN_CELLSIZE * N_LAYERS]) # zero state initially
Yout = np.zeros([1, N_FORWARD, 2])
data_len = prime_data.shape[0]-N_FORWARD
# prime the state from data
if data_len > 0:
Yin = np.array(prime_data[:-N_FORWARD])
Yin = np.reshape(Yin, [1, data_len, 2]) # reshape as one sequence of pairs (Tmin, Tmax)
r = predict_fn({'features': Yin, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
    # initially, put real data on the inputs, not predictions
Yout = np.expand_dims(prime_data[-N_FORWARD:], axis=0)
# Yout shape [1, N_FORWARD, 2]: batch of a single sequence of length N_FORWARD of (Tmin, Tmax) data pointa
# run prediction
# To generate a sequence, run a trained cell in a loop passing as input and input state
# respectively the output and output state from the previous iteration.
results = []
for i in range(run_length//N_FORWARD+1):
r = predict_fn({'features': Yout, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
results.append(Yout[0]) # shape [N_FORWARD, 2]
return np.concatenate(results, axis=0)[:run_length]
Explanation: <a name="inference"></a>
Inference
This is a generative model: run a trained RNN cell in a loop. This time, with N_FORWARD>1, we generate the sequence by blocks of N_FORWARD data points instead of point by point. The RNN is unrolled across N_FORWARD steps, takes in the last N_FORWARD data points and predicts the next N_FORWARD data points, and so on in a loop. State must be passed around correctly.
End of explanation
QYEAR = 365//(RESAMPLE_BY*4)
YEAR = 365//(RESAMPLE_BY)
# Try starting predictions from January / March / July (resp. OFFSET = YEAR or YEAR+QYEAR or YEAR+2*QYEAR)
# Some start dates are more challenging for the model than others.
OFFSET = 4*YEAR+1*QYEAR
PRIMELEN=5*YEAR
RUNLEN=3*YEAR
RMSELEN=3*365//(RESAMPLE_BY*2) # accuracy of predictions 1.5 years in advance
# Restore the model from the last checkpoint saved previously.
# Alternative checkpoints:
# Once you have trained on all 1666 weather stations on Google Cloud ML Engine, you can load the checkpoint from there.
# SAVEDMODEL = "gs://{BUCKET}/sinejobs/sines_XXXXXX_XXXXXX/ckptXXXXXXXX"
# A sample checkpoint is provided with the lab. You can try loading it for comparison.
# You will have to use the following parameters and re-run the entire notebook:
# N_FORWARD = 8, RESAMPLE_BY = 5, RNN_CELLSIZE = 128, N_LAYERS = 2
# SAVEDMODEL = "temperatures_best_checkpoint"
predict_fn = tf.contrib.predictor.from_saved_model(SAVEDMODEL)
for evaldata in evaltemps:
prime_data = evaldata[OFFSET:OFFSET+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
utils_display.picture_this_6(evaldata, evaldates, prime_data, results, PRIMELEN, RUNLEN, OFFSET, RMSELEN)
rmses = []
bad_ones = 0
for offset in [YEAR, YEAR+QYEAR, YEAR+2*QYEAR]:
for evaldata in evaltemps:
prime_data = evaldata[offset:offset+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
rmse = math.sqrt(np.mean((evaldata[offset+PRIMELEN:offset+PRIMELEN+RMSELEN] - results[:RMSELEN])**2))
rmses.append(rmse)
if rmse>7: bad_ones += 1
print("RMSE on {} predictions (shaded area): {}".format(RMSELEN, rmse))
print("Average RMSE on {} weather stations: {} ({} really bad ones, i.e. >7.0)".format(len(evaltemps), np.mean(rmses), bad_ones))
sys.stdout.flush()
Explanation: <a name="valid"></a>
Validation
End of explanation |
6,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test CacheOutput() class-based function decorator
Step1: Make an expensive function and time it
Step2: Cache the function results
If the exact same inputs are used again before the time limit expires,
then we should see the same outputs without re-calculating anything.
Step3: Does the cache expire like it should?
Step4: Show help() | Python Code:
import datetime as dt
import time
import fridge
Explanation: Test CacheOutput() class-based function decorator
End of explanation
# This decorator just displays how long a function takes to run
def showtime(func):
def wrapper(*args,**kwargs):
start = dt.datetime.now()
result = func(*args,**kwargs)
stop = dt.datetime.now()
print( "Elapsed time:", stop - start )
return result
return wrapper
# Make a potentially expensive function.
# Decorate it with @showtime so it will time itself.
@showtime
def power_tower(x,N):
for n in range(N):
x *= x
return x
# How long does it take to run?
big_number_1 = power_tower(2,25)
big_number_2 = power_tower(2,25)
big_number_1 == big_number_2
Explanation: Make an expensive function and time it
End of explanation
# Now make a cached version. Is it faster?
@showtime
@fridge.CacheOutput(seconds=5)
def cached_power_tower(x,N):
for n in range(N):
x *= x
return x
big_number_1 = cached_power_tower(2,25)
big_number_2 = cached_power_tower(2,25)
big_number_1 == big_number_2
Explanation: Cache the function results
If the exact same inputs are used again before the time limit expires,
then we should see the same outputs without re-calculating anything.
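The real implementation lives in the author's fridge module and is not shown here, but a minimal sketch of the idea behind such a class-based, time-limited cache could look like:
```
import time
import functools

class SimpleCacheOutput:
    """A minimal stand-in for the idea: remember (inputs -> output) for a few seconds."""
    def __init__(self, seconds=5):
        self.seconds = seconds
        self._cache = {}

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            if key in self._cache:
                stamp, value = self._cache[key]
                if time.time() - stamp < self.seconds:
                    return value          # still fresh: skip the recomputation
            value = func(*args, **kwargs)
            self._cache[key] = (time.time(), value)
            return value
        return wrapper
```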
End of explanation
# After the cache expires, this function will need time to re-calculate
time.sleep(5)
big_number_3 = cached_power_tower(2,25)
Explanation: Does the cache expire like it should?
End of explanation
help(fridge.CacheOutput)
Explanation: Show help()
End of explanation |
6,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google BigQuery
Motivated by the structure in the OKCupid usernames, I looked at the reddit data on Google BigQuery.
Try to answer the questions
Step1: Had to impose a limit so that I did not have to spin up a dataset or something with google.
Decided to spin up a dataset and see how well this really works. Google -- thanks for the free trial!
There are a few cohort tables that are a bit more detailed and would be good to use. They have been built for the following months
Step2: Yearly Data
Step3: Two Digit Numbers
Having a few years of comments, let's look at changes in the distribution.
Step4: Two Digit Excess
Let's define the excess population relative to the mean as the change we would like to measure
Step5: Four Digit Numbers
Step6: Excess Four Digit Numbers
Step7: Cohort Analysis
Step8: Wikipedia
Wikimedia provides an XML dump of the complete revision history for all Wikipedia articles.
Step9: Hacker News
DB from 2015?
Step10: Compare Reddit / Wikipedia / Hackernews | Python Code:
%matplotlib inline
import os
import glob
import pylab
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 5)
import seaborn as sns
sns.set_style('whitegrid')
from matplotlib.dates import date2num
from datetime import datetime
from pysurvey.plot import setup_sns as setup
from pysurvey.plot import minmax, icolorbar, density, legend, text, dateticks
Explanation: Google BigQuery
Motivated by the structure in the OKCupid usernames, I looked at the reddit data on Google BigQuery.
Try to answer the questions:
Can we estimate the age distribution of users on reddit from just their usernames?
Can we find changes in the age of reddit users since 2007?
End of explanation
# From: https://www.reddit.com/r/autowikibot/wiki/redditbots
bots = open('/Users/ajmendez/data/reddit/botlist.txt', 'r').read().splitlines()
titles = []
dfs = []
for filename in glob.glob('/Users/ajmendez/data/reddit/*.csv'):
# make sure we are just reading the year csvs
if '20' not in filename:
continue
title = int(os.path.splitext(os.path.basename(filename))[0])
titles.append(title)
df = pd.read_csv(filename)
tmp = df['author'].str.extract(u'(\d+)')
df['year'] = title
df['number'] = tmp.apply(lambda x: int(x) if isinstance(x, (str, unicode)) else np.nan)
df['nlength'] = tmp.apply(lambda x: len(x) if isinstance(x, (str,unicode)) else 0)
# df['botname'] = df['author'].str.extract(u'({})'.format('|'.join(bots))) # slow
df['botname'] = df['author'].str.extract(u'(.*[Bb]ot$)')
df['isbot'] = df['botname'].notnull()
dfs.append(df)
print('Found Years: {}'.format(', '.join(map(str, titles))))
reddit = pd.concat(dfs)
# Save it to disk.
reddit.to_csv('/Users/ajmendez/data/reddit/all.csv')
Explanation: Had to impose a limit so that I did not have to spin up a dataset or something with google.
Decided to spin up a dataset and see how well this really works. Google -- thanks for the free trial!
There are a few cohort tables that are a bit more detailed and would be good to use. They have been built for the following months:
cohorts_201505
cohorts_201508
cohorts_201510
cohorts_201512
I have some decent compute nodes at my disposal, so let's do the clean-up and visualization here.
End of explanation
bins = np.arange(0, 101,1)
setup(figsize=(12, 18))
for i, (title,df) in enumerate(zip(titles,dfs)):
ax = setup(subplt=(len(titles),1,i+1),
ylabel=title,
xlabel='Two Digit Number', xticks=(i==len(titles)-1))
isdigits = (df['nlength'] == 2)
x = df.loc[isdigits, 'number'].as_matrix()
pylab.axvline(titles[i]-2000 + 0.5, lw=2, alpha=0.8, zorder=-2, color='k')
pylab.axvline(titles[i]-22-1900 + 0.5, lw=2, alpha=0.8, zorder=-2, color='r')
pylab.hist(x, bins=bins, alpha=0.6, lw=0)
v,l = np.histogram(x, bins)
ii = np.argsort(-v)[:10]
print title, l[ii], v[ii]
pylab.tight_layout()
bins = np.arange(1960, 2020, 1)
setup(figsize=(12, 18))
for i, (title, df) in enumerate(zip(titles, dfs)):
ax = setup(subplt=(len(titles),1,i+1),
ylabel=titles[i], #ylog=True, yr=[1, 1e4],
xr=minmax(bins), xlabel='Four Digit Number', xticks=i==len(titles)-1)
isdigits = (df['nlength'] == 4)
pylab.axvline(title + 0.5, zorder=-2, lw=2, color='k')
pylab.hist(df.loc[isdigits, 'number'].as_matrix(), bins=bins, alpha=0.6, lw=0)
pylab.tight_layout()
def make_bins(xr, yr, ndigit=2):
x,y = np.arange(xr[0],xr[1]+2), np.arange(yr[0], yr[1]+2),
X,Y = np.meshgrid(x*1.0,y*1.0)
Z = np.zeros( (len(x)-1, len(y)-1) )
for i,df in enumerate(dfs):
isdigits = (df['nlength'] == ndigit)
tmp = df.loc[isdigits, 'number'].as_matrix()
Z[:, i] = np.histogram(tmp, bins=x, density=True)[0]
return X, Y, np.ma.MaskedArray(Z, Z==0)
xr = [0,101]
yr = [2007, 2014]
X,Y,Z = make_bins(xr, yr)
Explanation: Yearly Data
End of explanation
years = np.arange(2007, 2015)
setup(figsize=(18,5),
xr=[0,100], xlabel='Two Digit Number',
xtickv=np.arange(0,101, 10) + 0.5, xticknames=np.arange(0,101, 10),
ylabel='Comment Year', ytickv=years+0.5, yticknames=titles,)
pcm = pylab.pcolormesh(X,Y,Z.T, label='Fraction of Users / bin',
vmin=0, vmax=0.04,
zorder=-2, alpha=0.9, cmap=pylab.cm.Spectral_r)
icolorbar(pcm, loc=1, borderpad=-4, tickfmt='{:.2f}', width='10%')
years = np.arange(2007, 2016)
pylab.plot(years-2000, years, lw=2, alpha=0.2)
numbers = [42, 69, 77, 33, 22, 88, 55]
for number in numbers:
pylab.text(number+0.5, 2015, number, ha='center', va='bottom', alpha=0.5)
Explanation: Two Digit Numbers
Having a few years of comments, let's look at changes in the distribution.
End of explanation
NZ = np.ma.MaskedArray(Z.T/np.mean(Z, axis=1), Z.T==0)
setup(figsize=(18,5),
xr=[0,100], xlabel='Two Digit Number',
xtickv=np.arange(0,101, 10) + 0.5, xticknames=np.arange(0,101, 10),
ylabel='Comment Year', ytickv=years+0.5, yticknames=titles,)
pcm = pylab.pcolormesh(X,Y,NZ, label='Excess Users / bin',
vmin=0.3, vmax=1.8,
zorder=-2, alpha=0.9, cmap=pylab.cm.Spectral_r)
icolorbar(pcm, loc=1, borderpad=-4, tickfmt='{:.1f}', width='10%')
numbers = [42, 69, 77, 33, 22, 88, 55]
for number in numbers:
pylab.text(number+0.5, 2015, number, ha='center', va='bottom', alpha=0.5)
Explanation: Two Digit Excess
Let's define the excess population relative to the mean as the change we would like to measure.
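Concretely, for each two-digit number $n$ and comment year $y$ we plot
$$\mathrm{excess}(n, y) = \frac{f(n, y)}{\frac{1}{N_{\mathrm{years}}}\sum_{y'} f(n, y')}$$
where $f(n, y)$ is the fraction of year-$y$ commenters whose username contains $n$; this is the Z.T/np.mean(Z, axis=1) expression in the accompanying cell.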
End of explanation
four_xr = [1960,2021]
four_X,four_Y,four_Z = make_bins(four_xr, yr, ndigit=4)
years = np.arange(2007, 2015)
setup(figsize=(18,5),
xr=four_xr, xlabel='Four Digit Number',
xtickv=np.arange(1960,2021, 10) + 0.5, xticknames=np.arange(1960,2021, 10),
yr=yr, ylabel='Comment Year', ytickv=years+0.5, yticknames=titles,)
pcm = pylab.pcolormesh(four_X, four_Y, four_Z.T, label='Fractional Users / bin',
vmin=0, vmax=0.04,
zorder=-2, alpha=0.9, cmap=pylab.cm.Spectral_r)
icolorbar(pcm, loc=1, borderpad=-4, tickfmt='{:.2f}')
years = np.arange(2007, 2016)
pylab.plot(years, years, lw=2, alpha=0.5)
# numbers = [42, 69, 77, 33, 23, 88, 55]
# for number in numbers:
# pylab.text(number+0.5, 2015, number, ha='center', va='bottom')
Explanation: Four Digit Numbers
End of explanation
years = np.arange(2007, 2015)
four_NZ = np.ma.MaskedArray(four_Z.T/np.mean(four_Z, axis=1), four_Z.T==0)
setup(figsize=(18,5),
xr=four_xr, xlabel='Four Digit Number',
xtickv=np.arange(1960,2021, 10) + 0.5, xticknames=np.arange(1960,2021, 10),
yr=yr, ylabel='Comment Year', ytickv=years+0.5, yticknames=titles,)
pcm = pylab.pcolormesh(four_X, four_Y, four_NZ, label='Excess Users / bin',
vmin=0.2, vmax=2.8,
zorder=-2, alpha=0.9, cmap=pylab.cm.Spectral_r)
icolorbar(pcm, loc=1, borderpad=-4, tickfmt='{:.2f}')
pylab.plot(years, years, lw=4, alpha=0.9)
pylab.plot(years-25, years, lw=4, alpha=0.9, color='k')
df = dfs[-1]
isgood = ( (df['nlength'] == 4) & (dfs[-1]['number'] == 1960) )
df.loc[isgood, 'author']
Explanation: Excess Four Digit Numbers
End of explanation
narr = np.arange(0,101)
numbers = [42, 69, 92]
setup(figsize=(12,4), subplt=(1,3,1), title='"Interesting" numbers',
xr=yr, xlabel='Year', xtickv=np.arange(2007,2015,2), xticknames=np.arange(2007,2015,2),
yr=[0,0.04], ytickv=np.arange(0,0.041,0.01), ylabel='Fraction of users', )
for number in numbers:
i = np.where(number == narr)[0]
pylab.plot(Y[:-1,i], Z.T[:,i], lw=3, alpha=0.8, label=number)
legend(loc=1, alpha=0.8)
numbers = [1980, 1982, 1986, 1988, 1990, 1992]
# people use 99 in excess of years, so disable for plot
Z99 = Z.T.view()
Z99[:,-3:] = np.nan
setup(subplt=(1,3,2), yticks=False, title='Two Digit Cohort (age)',
xr=yr, xlabel='Year', xtickv=np.arange(2007,2015,2), xticknames=np.arange(2007,2015,2),
yr=[0,0.04], ytickv=np.arange(0,0.041,0.01), ylabel='Fraction of users', )
for k,number in enumerate(numbers):
i = np.arange(8)
j = np.where(number-1900 == narr)[0]
pylab.plot(Y[:-1,j], Z99[i,j+i], lw=3, alpha=0.8,
label=2007-number, color=pylab.cm.Blues_r(k*1.0/len(numbers)))
legend(loc=2, alpha=0.8)
narr = np.arange(1960,2020)
setup(subplt=(1,3,3), yticks=False, title='Four Digit Cohort (age)',
xr=yr, xlabel='Year', xtickv=np.arange(2007,2015,2), xticknames=np.arange(2007,2015,2),
yr=[0,0.04], ytickv=np.arange(0,0.041,0.01), ylabel='Fraction of users', )
for k,number in enumerate(numbers):
i = np.arange(8)
j = np.where(number == narr)[0]
pylab.plot(Y[:-1,j], four_Z.T[i,j+i], lw=3, alpha=0.8,
label=2007-number, color=pylab.cm.Blues_r(k*1.0/len(numbers)))
# legend(loc=2, alpha=0.8)
pylab.tight_layout()
2015-1992, 2015-1980
isgood = (reddit['nlength'] == 4)  # use the concatenated frame built earlier
setup(figsize=(16,5), ytickv=np.arange(2007,2015)+0.5, yticknames=np.arange(2007,2015))
den = density(reddit.loc[isgood, 'number'].astype(int),
              reddit.loc[isgood, 'year'].astype(int),
weights=np.ones(isgood.sum())*1e2, vmin=0, vmax=4,
bins=[np.arange(1950, 2021), np.arange(2007,2015)],
ynorm=True, xnorm=False, cmap=pylab.cm.Spectral_r, colorbar=False)
icolorbar(den)
Explanation: Cohort Analysis
End of explanation
wiki = pd.read_csv('/Users/ajmendez/data/usernames/wikipedia.csv')
tmp = wiki['author'].str.extract(u'(\d+)')  # assumes the wikipedia csv also has an 'author' column
wiki['number'] = tmp.apply(lambda x: int(x) if isinstance(x, (str, unicode)) else np.nan)
wiki['nlength'] = tmp.apply(lambda x: len(x) if isinstance(x, (str,unicode)) else 0)
print len(wiki)
def add_line(year, label, y=11500, color='k',offset=0, **kwargs):
pylab.axvline(year+0.5, color=color, zorder=-1, alpha=0.2, **kwargs)
text(year+0.6+offset, y, label, color=color, fontsize=12,
ha='center', va='top', multialignment='right', rotation=90,
**kwargs)
isdigit = (wiki['nlength'] == 2)
setup(figsize=(12,5), title='Wikipedia Revisions',
ylabel='Number / bin',
xlabel='Two digit number')
_ = pylab.hist(wiki.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(0,101), alpha=0.7, lw=0)
add_line(13, '2013?', color='g', offset=1.5)
add_line(23, '23 years old')
add_line(69, '69', color='DarkBlue')
add_line(42, '42', color='DarkBlue')
add_line(88, '26 years old', color='r')
add_line(92, '22 years old')
Explanation: Wikipedia
Wikimedia provides an XML dump of the complete revision history for all Wikipedia articles.
End of explanation
hn = pd.read_csv('/Users/ajmendez/data/usernames/hackernews.csv')
tmp = hn['author'].str.extract(u'(\d+)')  # assumes the hackernews csv also has an 'author' column
hn['number'] = tmp.apply(lambda x: int(x) if isinstance(x, (str, unicode)) else np.nan)
hn['nlength'] = tmp.apply(lambda x: len(x) if isinstance(x, (str,unicode)) else 0)
print len(hn)
isdigit = (hn['nlength'] == 2)
setup(figsize=(12,5), title='Hacker News Comments',
ylabel='Number / bin',
xlabel='Two digit number')
_ = pylab.hist(hn.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(0,101), alpha=0.7, lw=0)
add_line(13, '2013?', y=650, color='g', offset=1.5)
add_line(23, '23 years old', y=650, offset=1.5)
add_line(69, '69', color='DarkBlue', y=650)
add_line(42, '42', color='DarkBlue', y=650)
add_line(88, '27 years old', color='r', y=650)
add_line(92, '23 years old', y=650)
Explanation: Hacker News
DB from 2015?
End of explanation
isdigit = (wiki['nlength'] == 4)
setup(figsize=(12,5), title='Wikipedia Revisions',
ylabel='Number / bin',
xlabel='Two digit number')
_ = pylab.hist(wiki.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(1960,2021), alpha=0.7, lw=0)
add_line(1991, '23 years old', y=1900)
add_line(2014, '2014', y=1900)
isdigit = (hn['nlength'] == 4)
setup(figsize=(12,5), title='Hacker News Comments',
ylabel='Number / bin',
xlabel='Four digit number')
_ = pylab.hist(hn.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(1960,2021), alpha=0.7, lw=0)
add_line(1991, '24 years old', y=130)
add_line(1984, '31 years old', y=130)
add_line(2014, '2014', y=130)
y=9000
isdigit = (reddit['nlength'] == 4)
setup(figsize=(12,5), title='Hacker News Comments',
ylabel='Number / bin',
xlabel='Four digit number')
_ = pylab.hist(reddit.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(1960,2021), alpha=0.7, lw=0)
add_line(1990, '24 years old', y=y)
add_line(2014, '2014', y=y)
def make_hist(df, label):
isdigit = (df['nlength'] == 4)
color = next(pylab.gca()._get_lines.color_cycle)
pylab.hist(df.loc[isdigit, 'number'].as_matrix(), label=label, histtype='step',
bins=np.arange(1960,2021), alpha=0.7, lw=2, color=color, normed=True)
pylab.hist(df.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(1960,2021), alpha=0.1, lw=0, color=color, normed=True)
setup(figsize=(12,5),
ylabel='Fraction',
xlabel='Two digit number')
make_hist(wiki, 'Wikipedia')
make_hist(reddit, 'Reddit')
make_hist(hn, 'Hacker News')
legend(loc=2)
def make_hist(df, label):
isdigit = (df['nlength'] == 2)
color = next(pylab.gca()._get_lines.color_cycle)
pylab.hist(df.loc[isdigit, 'number'].as_matrix(), label=label, histtype='step',
bins=np.arange(0,101), alpha=0.7, lw=2, color=color, normed=True)
pylab.hist(df.loc[isdigit, 'number'].as_matrix(),
bins=np.arange(0,101), alpha=0.1, lw=0, color=color, normed=True)
setup(figsize=(12,5),
ylabel='Fraction',
xlabel='Two digit number')
make_hist(wiki, 'Wikipedia')
make_hist(reddit, 'Reddit')
make_hist(hn, 'Hacker News')
legend(loc=2)
Explanation: Compare Reddit / Wikipedia / Hackernews
End of explanation |
6,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Evaluating-Imbalanced-Datasets" data-toc-modified-id="Evaluating-Imbalanced-Datasets-1"><span class="toc-item-num">1 </span>Evaluating Imbalanced Datasets</a></span><ul class="toc-item"><li><span><a href="#Dataset" data-toc-modified-id="Dataset-1.1"><span class="toc-item-num">1.1 </span>Dataset</a></span></li><li><span><a href="#Class-Weighting" data-toc-modified-id="Class-Weighting-1.2"><span class="toc-item-num">1.2 </span>Class Weighting</a></span></li><li><span><a href="#F1-Score" data-toc-modified-id="F1-Score-1.3"><span class="toc-item-num">1.3 </span>F1 Score</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.4"><span class="toc-item-num">1.4 </span>Conclusion</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Evaluating Imbalanced Datasets
This documentation illustrates the trade off between True Positive Rate and False Positive Rate using ROC and Precision/Recall (PR) curves. In the end, we will take a look at why, for binary classification problem, apart from solely using the popular evaluation metric ROC curve we should also look at other evaluation metric such as precision and recall especially when working with highly imbalanced dataset.
Dataset
The dataset we'll be using today can be downloaded from the Kaggle website.
Step2: A brief description of the dataset based on the data overview section from the download source.
The dataset contains transactions made by credit cards in September 2013 by European cardholders. This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions.
Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA, the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.
The only feature-engineering that we'll be doing for now is to convert the feature "Time" (seconds from which the very first data observation took place) to hours of a day. While we're at it, let's take a look at a breakdown of our legit vs fraud transactions via a pivot table and a plot of fraud transactions' count over time for a quick exploratory data analysis.
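To make the 'Time' conversion concrete (a toy sketch; the feature-engineering cell below may be written differently):
```
import numpy as np
seconds = np.array([0, 3600, 90000])     # example values of the raw 'Time' feature
hours_of_day = (seconds // 3600) % 24    # -> [0, 1, 1]; 90000 s falls in hour 1 of day two
print(hours_of_day)
```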
Step3: Class Weighting
With scikit-learn, we can give higher weights to the minority class (the model will be penalized more when misclassifying a minority class) by modifying the class_weight argument during model initialization. Let's see what effect this will have on our model. The following code chunk manually selects a range of weights to boost the minority class and tracks various metrics to see the model's performance across different class weighting values.
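For example (a sketch; the actual weight grid used below may differ):
```
from sklearn.linear_model import LogisticRegression

# penalize mistakes on the fraud class (label 1) ten times as heavily as on the legit class
logistic = LogisticRegression(class_weight={0: 1, 1: 10})
```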
Note that the following section assumes knowledge of model performance metrics such as precision, recall and AUC. The following link contains resources on those concepts if needed. Notebook
Step5: A good classifier would have a PR (Precision/Recall) curve closer to the upper-right corner and a ROC curve to the upper-left corner. Based on the plot above, we can see that while both curves uses the same underlying data, i.e. the real class labels and the predicted probability, the two charts can tell different stories, with some weights seem to perform better based on the precision/recall curve's chart.
To be explicit, different settings of the class_weight argument all seem to perform pretty well for the ROC curve, but some perform poorly for the PR curve. This is because, for the ROC curve, one of the axes shows the false positive rate (number of false positives / total number of negatives), and this ratio won't change much when the total number of negatives is extremely large. For the PR curve, on the other hand, one of the axes is precision (number of true positives / total number of predicted positives), which is far less affected by this.
Another way to visualize the model's performance metric is to use a bar-plot to visualize the precision/recall/f1 score at different class weighting values.
Step6: Judging from the plot above, we can see that when the weight's value is set at 10, we seem to have struck a good balance between precision and recall (this setting has the highest f1 score; we'll have a deeper discussion on the f1 score in the next section), where our model can detect 80% of the fraudulent transactions while not annoying a bunch of customers with false positives. Another observation is that if we were to set the class weighting value to 10,000 we would be able to increase our recall score at the expense of more mis-classified legit cases (as depicted by the low precision score).
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, roc_curve
from sklearn.metrics import precision_score, recall_score, f1_score
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Evaluating-Imbalanced-Datasets" data-toc-modified-id="Evaluating-Imbalanced-Datasets-1"><span class="toc-item-num">1 </span>Evaluating Imbalanced Datasets</a></span><ul class="toc-item"><li><span><a href="#Dataset" data-toc-modified-id="Dataset-1.1"><span class="toc-item-num">1.1 </span>Dataset</a></span></li><li><span><a href="#Class-Weighting" data-toc-modified-id="Class-Weighting-1.2"><span class="toc-item-num">1.2 </span>Class Weighting</a></span></li><li><span><a href="#F1-Score" data-toc-modified-id="F1-Score-1.3"><span class="toc-item-num">1.3 </span>F1 Score</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.4"><span class="toc-item-num">1.4 </span>Conclusion</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
filepath = os.path.join('data', 'creditcard.csv')
df = pd.read_csv(filepath)
print('dimension: ', df.shape)
df.head()
Explanation: Evaluating Imbalanced Datasets
This documentation illustrates the trade off between True Positive Rate and False Positive Rate using ROC and Precision/Recall (PR) curves. In the end, we will take a look at why, for a binary classification problem, we should look beyond the popular ROC curve and also consider evaluation metrics such as precision and recall, especially when working with a highly imbalanced dataset.
Dataset
The dataset we'll be using today can be downloaded from the Kaggle website.
End of explanation
df['hour'] = np.ceil(df['Time'].values / 3600) % 24
fraud_over_hour = df.pivot_table(values='Amount', index='hour', columns='Class', aggfunc='count')
fraud_over_hour
plt.rcParams['font.size'] = 12
plt.rcParams['figure.figsize'] = 8, 6
plt.plot(fraud_over_hour[1])
plt.title('Fraudulent Transaction over Hour')
plt.ylabel('Fraudulent Count')
plt.xlabel('Hour')
plt.show()
# prepare the dataset for modeling;
# extract the features and labels, perform a quick train/test split
label = df['Class']
pca_cols = [col for col in df.columns if col.startswith('V')]
input_cols = ['hour', 'Amount'] + pca_cols
df = df[input_cols]
df_train, df_test, y_train, y_test = train_test_split(
df, label, stratify=label, test_size=0.35, random_state=1)
print('training data dimension:', df_train.shape)
df_train.head()
# we'll be using linear models later, hence
# we standardize our features to ensure they are
# all at the same scale
standardize = StandardScaler()
X_train = standardize.fit_transform(df_train)
X_test = standardize.transform(df_test)
label_distribution = np.bincount(label) / label.size
print('labels distribution:', label_distribution)
print('Fraud is {}% of our data'.format(label_distribution[1] * 100))
Explanation: A brief description of the dataset based on the data overview section from the download source.
The datasets contains transactions made by credit cards in September 2013 by european cardholders. This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions.
Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA, the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.
The only feature-engineering that we'll be doing for now is to convert the feature "Time" (seconds from which the very first data observation took place) to hours of a day. While we're at it, let's take a look at a breakdown of our legit vs fraud transactions via a pivot table and a plot of fraud transactions' count over time for a quick exploratory data analysis.
End of explanation
fig = plt.figure(figsize=(15, 8))
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_xlim([-0.05, 1.05])
ax1.set_ylim([-0.05, 1.05])
ax1.set_xlabel('Recall')
ax1.set_ylabel('Precision')
ax1.set_title('PR Curve')
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_xlim([-0.05, 1.05])
ax2.set_ylim([-0.05, 1.05])
ax2.set_xlabel('False Positive Rate')
ax2.set_ylabel('True Positive Rate')
ax2.set_title('ROC Curve')
f1_scores = []
recall_scores = []
precision_scores = []
pos_weights = [1, 10, 25, 50, 100, 10000]
for pos_weight in pos_weights:
lr_model = LogisticRegression(class_weight={0: 1, 1: pos_weight})
lr_model.fit(X_train, y_train)
# plot the precision-recall curve and AUC curve
pred_prob = lr_model.predict_proba(X_test)[:, 1]
precision, recall, _ = precision_recall_curve(y_test, pred_prob)
    fpr, tpr, _ = roc_curve(y_te, pred_prob)  # roc_curve returns (fpr, tpr, thresholds)
ax1.plot(recall, precision, label=pos_weight)
    ax2.plot(fpr, tpr, label=pos_weight)
# track the precision, recall and f1 score
pred = lr_model.predict(X_test)
f1_test = f1_score(y_test, pred)
recall_test = recall_score(y_test, pred)
precision_test = precision_score(y_test, pred)
f1_scores.append(f1_test)
recall_scores.append(recall_test)
precision_scores.append(precision_test)
ax1.legend(loc='lower left')
ax2.legend(loc='lower right')
plt.show()
Explanation: Class Weighting
With scikit-learn, we can give higher weights to the minority class (the model will be penalized more when misclassifying a minority class) by modifying the class_weight argument during model initialization. Let's see what effect this will have on our model. The following code chunk manually selects a range of weights to boost the minority class and tracks various metrics to see the model's performance across different class weighting values.
Note that the following section assumes knowledge of model performance metrics such as precision, recall and AUC. The following link contains resources on those concepts if needed. Notebook: AUC (Area under the ROC curve and precision/recall curve) from scratch
End of explanation
def score_barplot(precision_scores, recall_scores, f1_scores, pos_weights, figsize=(8, 6)):
    """Visualize precision/recall/f1 score at different class weighting values."""
width = 0.3
ind = np.arange(len(precision_scores))
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
b1 = ax.bar(ind, precision_scores, width, color='lightskyblue')
b2 = ax.bar(ind + width, recall_scores, width, color='lightcoral')
b3 = ax.bar(ind + (2 * width), f1_scores, width, color='gold')
ax.set_xticks(ind + width)
ax.set_xticklabels(pos_weights)
ax.set_ylabel('score')
ax.set_xlabel('positive weights')
ax.set_ylim(0, 1.3)
ax.legend(handles=[b1, b2, b3], labels=['precision', 'recall', 'f1'])
plt.tight_layout()
plt.show()
score_barplot(precision_scores, recall_scores, f1_scores, pos_weights)
Explanation: A good classifier would have a PR (Precision/Recall) curve closer to the upper-right corner and a ROC curve closer to the upper-left corner. Based on the plot above, we can see that while both curves use the same underlying data, i.e. the real class labels and the predicted probability, the two charts can tell different stories, with some weights seeming to perform better based on the precision/recall chart.
To be explicit, different settings of the class_weight argument all seem to perform pretty well for the ROC curve, but some perform poorly for the PR curve. This is because, for the ROC curve, one of the axes shows the false positive rate (number of false positives / total number of negatives), and this ratio won't change much when the total number of negatives is extremely large. For the PR curve, on the other hand, one of the axes is precision (number of true positives / total number of predicted positives), which is far less affected by this.
Another way to visualize the model's performance metric is to use a bar-plot to visualize the precision/recall/f1 score at different class weighting values.
End of explanation
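To make this concrete, here is a small illustrative calculation with made-up counts (not taken from this dataset): with a huge negative class, even a large number of false positives barely moves the false positive rate, while precision drops sharply.
# Illustrative counts only, to show why FPR hides what precision reveals
tp, fn = 80, 20            # 100 actual positives
fp, tn = 1000, 99000       # 100,000 actual negatives
print('FPR      :', fp / (fp + tn))   # 0.01, looks tiny on a ROC axis
print('precision:', tp / (tp + fp))   # ~0.074, clearly poor on a PR axis
print('recall   :', tp / (tp + fn))   # 0.8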
# this code chunk shows the same idea applies when using tree-based models
f1_scores = []
recall_scores = []
precision_scores = []
pos_weights = [1, 10, 100, 10000]
for pos_weight in pos_weights:
rf_model = RandomForestClassifier(n_estimators=50, max_depth=6, n_jobs=-1,
class_weight={0: 1, 1: pos_weight})
rf_model.fit(df_train, y_train)
# track the precision, recall and f1 score
pred = rf_model.predict(df_test)
f1_test = f1_score(y_test, pred)
recall_test = recall_score(y_test, pred)
precision_test = precision_score(y_test, pred)
f1_scores.append(f1_test)
recall_scores.append(recall_test)
precision_scores.append(precision_test)
score_barplot(precision_scores, recall_scores, f1_scores, pos_weights)
Explanation: Judging from the plot above, we can see that when the weight's value is set at 10, we seem to have struck a good balance between precision and recall (this setting has the highest f1 score; we'll have a deeper discussion on the f1 score in the next section), where our model can detect 80% of the fraudulent transactions while not annoying a bunch of customers with false positives. Another observation is that if we were to set the class weighting value to 10,000 we would be able to increase our recall score at the expense of more mis-classified legit cases (as depicted by the low precision score).
End of explanation |
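For reference, the f1 score mentioned above is simply the harmonic mean of precision and recall. A quick sanity check against scikit-learn, reusing the pred and y_te arrays from the block above:
# f1 is the harmonic mean of precision and recall
p = precision_score(y_te, pred)
r = recall_score(y_te, pred)
print('harmonic mean:', 2 * p * r / (p + r))
print('f1_score     :', f1_score(y_te, pred))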
6,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The situation
Type C thermocouples are not NIST calibrated to below 273.15 K. For my research specific scenario, I need to cool my sample (Molybdenum) to cryogenic temperatures and also anneal to very high ~2000 K. There is no thermocouple with these properties.
The solution
We know that Type K thermocouples are accurate down to cryogenic temperatures. So what I've done here is to read the Type K temperature and record the corresponding Type C mV to create a calibration table. Both thermocouples were spot welded to a large mass very close to one another to ensure the temperature readings will be accurate.
Then I will use a polynomial fit to get the low T calibration for the Type C thermocouple.
Step1: The 10th degree polynomial appears to give the best fit overall.
The lower-order polynomials don't fit the curve especially well below 100 K.
Also, the polynomial tracks the heating curve (the slightly higher mV points from 80-150 K) a little more closely than the cooling curve (295 to 80 K). Heating occurred much more slowly than cooling, so I expect it to be more accurate anyway.
Step2: It's also a good idea to check that the polynomial does not have any inflection points, at least in the area we are interested in using the polynomial (77 K - 273.15 K). We can use the second derivative test to see if this will be important for our case.
Step3: Well this is not optimal-- there exists a local minimum at 83.86 K in our polynomial fit. We can attempt to fit an exponential curve to this very low temperature data and append this to the polynomial function.
Step4: This appears to be a better fit than the polynomial in this regime. Now lets concatenate these two functions and interpolate near the points around 100 K to smooth things out if necessary. Recall that the two functions are fit_poly and expfunc
Step5: The two fitted plots almost match near 103 K, but there is a little 'cusp'-like shape near the point of intersection. Let's smooth it out. Also, notice that the expfunc fit is a little better than the polyfit.
Step6: Now I will take the polynomial and take the values from 77 K to 273 K for calibration and append them to the NIST values
Step7: But wait! Suppose we also want to fix that discontinuity at 273.15 K? We can apply the same procedure as before.
1. Apply a tanh(x) function
Step8: The prior value at 273.15 K was -0.00867, when the actual value is 0. After the smoothing, the new value is -0.004336, about half of the prior value. Some of the values a little after 273.15 do not match exactly with the NIST table, but it is much better than the jump that we had before. | Python Code:
# import a few packages
%matplotlib notebook
from thermocouples_reference import thermocouples
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sympy as sp
from scipy import optimize, interpolate, signal
typeC=thermocouples['C']
# make sure you are in the same dir as the file
# read in the file and drop Na cols
df = pd.read_excel('Type C Table 4-2-18.xlsx')
df.dropna(axis=1, inplace=True)
df.head()
# NIST has values calibrated for T > 273.15 K, lets find the Tref based on these points
# I am using Kelvin for all T. The CJC is quoted in deg C.
tempdf = df.query('T>273.15')
tempdf.head()
# Let's find the T_ref by using this function to take the TypeC mV and the T to find the Tref
def find_Tref(mV, T):
x = np.arange(290, 301, 0.01)
x = x[::-1] # lets reverse x
i = 1
while typeC.inverse_KmV(mV, Tref=x[i]) - T >= 0:
i += 1
# print(x[i])
return x[i]
# This isn't the fastest way to do things, but since its just a short amount of rows, lets iterate over the mV and T
# to find Tref
Treflist=[]
for idx in tempdf.index:
# print(idx)
Treflist.append(find_Tref(mV=tempdf['TypeCmV'][idx], T=tempdf['T'][idx]))
print( ['%0.2f'% x for x in Treflist])
# now average the Trefs:
avg_Tref = np.mean(Treflist)
print(avg_Tref)
# I will use this Tref for further calcs
Tref_emf = typeC.emf_mVK(avg_Tref)
print(Tref_emf)
# The Tref_emf value is very close to the value in the table at 273.15 K, so we'll use this value to correct the new values
# The value taken at 273.15 K was during the cooling process and is likely to be less accurate than the room temperature value
# across these multiple observations
# The emf correction for 273.15 K is then: calibrated_emf = raw_emf + Tref_emf
# Let's add this to the df we initially imported...
df['TypeC_calib_mV'] = df['TypeCmV'] + Tref_emf
df.head()
# Compared to the NIST table, we appear to be off at most a little less than 1 deg K
# Had we used the CJC temperature as a proxy for room temp, we would've been even more off.
# compare the TypeCmV using Tref = CJC vs using Tref = 294.67:
print(typeC.emf_mVK(291.22, Tref =(25.26+273.15)))
print(typeC.emf_mVK(291.22, Tref =avg_Tref))
# Let's visualize these results
plt.plot(df['T'], df['TypeC_calib_mV'], 'o', ms=0.5 )
plt.xlabel('Temperature (K)')
plt.ylabel('Type C calibrated emf (mV)')
# Interesting. I cooled first to LN2 temperatures and then allowed the sample to heat up slowly by evaporating LN2
# The data agrees fairly well (within ~3 K) between the heating and cooling curves. I didn't heat all the way back up.
# Now lets fit the data to a polynowmial using least squares
fit_coeffs = np.polyfit(df['T'],df['TypeC_calib_mV'], deg = 10 , full=True)
# print(fit_coeffs)
fit_poly = np.poly1d(fit_coeffs[0])
print(fit_poly)
fig, ax = plt.subplots()
ax.plot(df['T'], df['TypeC_calib_mV'],'o',ms='0.5')
ax.plot(df['T'], fit_poly(df['T']) , 'o', ms='0.5')
Explanation: The situation
Type C thermocouples are not NIST calibrated to below 273.15 K. For my research specific scenario, I need to cool my sample (Molybdenum) to cryogenic temperatures and also anneal to very high ~2000 K. There is no thermocouple with these properties.
The solution
We know that Type K thermocouples are accurate down to cryogenic temperatures. So what I've done here is to read the Type K temperature and record the corresponding Type C mV to create a calibration table. Both thermocouples were spot welded to a large mass very close to one another to ensure the temperature readings will be accurate.
Then I will use a polynomial fit to get the low T calibration for the Type C thermocouple.
End of explanation
# These mV values are also close ~0.5 degrees K of one another
print(fit_poly(273.15)) # fit
print(typeC.emf_mVK(273.15)) # NIST value
Explanation: The 10th degree polynomial appears to give the best fit overall.
The lower-order polynomials don't fit the curve especially well below 100 K.
Also, the polynomial tracks the heating curve (the slightly higher mV points from 80-150 K) a little more closely than the cooling curve (295 to 80 K). Heating occurred much more slowly than cooling, so I expect it to be more accurate anyway.
End of explanation
x = sp.symbols('x')
polynom = sp.Poly(fit_coeffs[0],x)
# print(fit_coeffs[0])
# find the second derivative of the polynomial
second_derivative = polynom.diff(x,x)
print(second_derivative)
sp.solve(second_derivative,x, domain= sp.S.Reals)
print(second_derivative.evalf(subs={x:77}))
print(second_derivative.evalf(subs={x:80}))
print('\n')
print(second_derivative.evalf(subs={x:120}))
print(second_derivative.evalf(subs={x:125}))
print('\n')
print(second_derivative.evalf(subs={x:135}))
print(second_derivative.evalf(subs={x:145}))
print('\n')
print(second_derivative.evalf(subs={x:283}))
print(second_derivative.evalf(subs={x:291}))
first_deriv = polynom.diff(x)
print(first_deriv)
sp.solve(first_deriv,x, domain= sp.S.Reals)
print(first_deriv.evalf(subs={x:80}))
print(first_deriv.evalf(subs={x:84}))
Explanation: It's also a good idea to check that the polynomial does not have any inflection points, at least in the area we are interested in using the polynomial (77 K - 273.15 K). We can use the second derivative test to see if this will be important for our case.
End of explanation
lowT_df = df.query('T<103')
# Now lets fit the data to an exponential
# print(np.min(lowT_df['TypeC_calib_mV']))
def func(x, a, b, c, d):
return a * np.exp(b * x - c) + d
fit_coeffs = optimize.curve_fit(func, lowT_df['T'],lowT_df['TypeC_calib_mV'], p0=(1, 1, 90, -3))
print(fit_coeffs)
a = fit_coeffs[0][0]
b = fit_coeffs[0][1]
c = fit_coeffs[0][2]
d = fit_coeffs[0][3]
expfunc = func(lowT_df['T'],a,b,c,d)
fig3, ax3 = plt.subplots()
# ax3.plot(lowT_df['T'], a*np.exp(b*lowT_df['TypeC_calib_mV']), 'o',ms='0.5')
ax3.plot(lowT_df['T'], lowT_df['TypeC_calib_mV'], 'o',ms='0.5')
ax3.plot(lowT_df['T'], expfunc, 'o',ms='0.5',color='r')
Explanation: Well this is not optimal-- there exists a local minimum at 83.86 K in our polynomial fit. We can attempt to fit an exponential curve to this very low temperature data and append this to the polynomial function.
End of explanation
# select data from 103 to 120 K just so we can see the point of intersection a little better
checkT_df = df.query('77<=T<=120')
fig4, ax4 = plt.subplots()
ax4.plot(checkT_df['T'], fit_poly(checkT_df['T']), 'o', ms=0.5, label='polyfit', color='g')
ax4.plot(lowT_df['T'], expfunc, 'o', ms=0.5, label='expfunc', color='r')
ax4.plot(df['T'], df['TypeC_calib_mV'],'o',ms='0.5', label='Data', color='b')
ax4.set_xlim([80,110])
ax4.set_ylim([-1.88,-1.75])
ax4.legend()
Explanation: This appears to be a better fit than the polynomial in this regime. Now let's concatenate these two functions and interpolate near the points around 100 K to smooth things out if necessary. Recall that the two functions are fit_poly and expfunc
End of explanation
def switch_fcn(x, switchpoint, smooth):
s = 0.5 + 0.5*np.tanh((x - switchpoint)/smooth)
return s
sw = switch_fcn(df['T'], 103, 0.2)
expfunc2 = func(df['T'],a,b,c,d)
len(expfunc2)
fig, ax = plt.subplots()
ax.plot(df['T'], sw,'o', ms=0.5)
def combined(switch, low_f1, high_f2):
comb = (1-switch)*low_f1 + switch*high_f2
return comb
comb_fcn = combined(sw, expfunc2,fit_poly(df['T']))
fig, ax = plt.subplots()
ax.plot(df['T'], comb_fcn, 'o', ms=0.5)
fig5, ax5 = plt.subplots()
ax5.plot(df['T'],comb_fcn, 'o', ms=2, label='combined')
ax5.plot(checkT_df['T'], fit_poly(checkT_df['T']), 'o', ms=0.5, label='polyfit', color='g')
ax5.plot(lowT_df['T'], expfunc, 'o', ms=0.5, label='expfunc2', color='r')
ax5.set_xlim([80,110])
ax5.set_ylim([-1.88,-1.75])
ax5.legend()
Explanation: The two fitted plots almost match near 103 K, but there is a little 'cusp'-like shape near the point of intersection. Let's smooth it out. Also, notice that the expfunc fit is a little better than the polyfit.
End of explanation
# low temperature array
low_temp = np.arange(77.15,273.15, 0.1)
# low_temp_calib = fit_poly(low_temp)
low_temp_calib = combined(switch_fcn(low_temp, 103, 3), func(low_temp,a,b,c,d), fit_poly(low_temp))
# high temperature array
high_temp = np.arange(273.15,2588.15, 0.1)
high_temp_nist = typeC.emf_mVK(high_temp)
# concatentate and put into a dataframe and output to excel
Temperature = np.concatenate([low_temp, high_temp])
TypeC_mV = np.concatenate([low_temp_calib, high_temp_nist])
typeC_calibration = pd.DataFrame(data=TypeC_mV, index=Temperature, dtype='float32', columns = ['Type C (mV)'])
typeC_calibration.index.name = 'Temperature (Kelvin)'
print(typeC_calibration.head())
print(typeC_calibration.tail())
# Uncomment these lines and run the cell to output a calibration table
# write to excel
# xlwrite = pd.ExcelWriter('Type C calibration_low_res.xlsx')
# typeC_calibration.to_excel(xlwrite)
# xlwrite.save()
Explanation: Now I will take the polynomial and take the values from 77 K to 273 K for calibration and append them to the NIST values
End of explanation
low_calib = combined(switch_fcn(Temperature, 103, 3), func(Temperature,a,b,c,d), fit_poly(Temperature))
high_calib = pd.DataFrame(index=high_temp, data=high_temp_nist,columns=['mV'])
dummy_df = pd.DataFrame(index=low_temp, data=np.zeros(len(low_temp)),columns=['mV'])
concat_high_calib = dummy_df.append(high_calib)
print(concat_high_calib.loc[272.9:273.5])
freezept_calib = combined(switch_fcn(Temperature, 273.15, 0.45), low_calib, concat_high_calib['mV'] )
freezept_calib.index.name = 'T'
freezept_calib.loc[272.9:273.5]
Explanation: But wait! Suppose we also want to fix that discontinuity at 273.15 K? We can apply the same procedure as before.
1. Apply a tanh(x) function: $switch = 0.5 + 0.5*np.tanh((x - switchpoint)/smooth)$
2. Combine both functions: $comb = (1 - switch)*f1 + switch*f2$
End of explanation
fig, ax = plt.subplots()
freezept_calib.plot(ax=ax, label ='combined')
ax.plot(Temperature,low_calib, label = 'low calib')
ax.plot(Temperature,concat_high_calib, label= 'high_calib')
ax.set_ylim([-.04,0.04])
ax.set_xlim([268,277])
ax.legend()
print(signal.argrelmin(freezept_calib.values))
# print(signal.argrelextrema(freezept_calib.values,np.less))
# print(signal.argrelextrema(freezept_calib.values,np.greater))
# No local maxima!
# Uncomment these lines and run the cell to output a calibration table
# write to excel
xlwrite = pd.ExcelWriter('Type C calibration_corrected.xlsx')
freezept_calib.to_excel(xlwrite)
xlwrite.save()
Explanation: The prior value at 273.15 K was -0.00867, when the actual value is 0. After the smoothing, the new value is -0.004336, about half of the prior value. Some of the values a little after 273.15 do not match exactly with the NIST table, but it is much better than the jump that we had before.
End of explanation |
6,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Classification 1
Step3: Evaluating a classifier
Most classifiers are "soft" because they can output a score, higher means more likely to be $Y=1$
- Logistic regression
Step4: Confusion matrix and classification metrics
<table style='font-family
Step5: Comments
"Good" ROC should be in top left
"Good" PR should be large for all recall values
PR is better for large class imbalance
ROC treats each type of error equally
Exercise 6.2
Apply LDA and QDA to the above dataset and compare the PR curves to the previous two methods. To calculate the "score" you can use the predict_log_proba method. | Python Code:
import pandas as pd
import numpy as np
import matplotlib as mpl
import plotnine as p9
import matplotlib.pyplot as plt
import itertools
import warnings
warnings.simplefilter("ignore")
from sklearn import neighbors, preprocessing, impute, metrics, model_selection, linear_model, svm, feature_selection
from matplotlib.pyplot import rcParams
rcParams['figure.figsize'] = 6,6
def train_bank_to_xy(bank):
    """standardize and impute training"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
scaler = preprocessing.StandardScaler().fit(X)
imputer = impute.SimpleImputer(fill_value=0).fit(X)
trans_prep = lambda Z: imputer.transform(scaler.transform(Z))
X = trans_prep(X)
y = (y == 'yes')*1
return (X, y), trans_prep
def test_bank_to_xy(bank, trans_prep):
    """standardize and impute test"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
X = trans_prep(X)
y = (y == 'yes')*1
return (X, y)
bank = pd.read_csv('../../data/bank.csv',sep=';',na_values=['unknown',999,'nonexistent'])
bank.info()
bank_tr, bank_te = model_selection.train_test_split(bank,test_size=.33)
p9.ggplot(bank_tr, p9.aes(x = 'age',fill = 'y')) + p9.geom_density(alpha=.2)
(X_tr, y_tr), trans_prep = train_bank_to_xy(bank_tr)
X_te, y_te = test_bank_to_xy(bank_te, trans_prep)
def plot_conf_score(y_te,score,tau):
y_classes = (1,0)
cf_inds = ["Pred {}".format(c) for c in y_classes]
cf_cols = ["True {}".format(c) for c in y_classes]
    y_pred = score > tau  # threshold the score argument passed to the function
return pd.DataFrame(metrics.confusion_matrix(y_pred,y_te,labels=y_classes),index=cf_inds,columns=cf_cols)
Explanation: Classification 1: Generative methods
StatML: Lecture 6
Prof. James Sharpnack
Some content and images are from "The Elements of Statistical Learning" by Hastie, Tibshirani, Friedman
Reading ESL Chapter 4
Bayes rule in classification
Recall from homework that Bayes rule is
$$
g(x) = \left{ \begin{array}{ll} 1, &\mathbb P {Y = 1 | X = x } > \mathbb P {Y = 0 | X = x } \
0, &{\rm otherwise}\end{array}\right.
$$
Another way to write this event is (for $f_X(x) > 0$)
$$
f_{Y,X}(1, x) = \mathbb P {Y = 1 | X = x } f_X(x) > \mathbb P {Y = 0 | X = x } f_X(x) = f_{Y,X} (0, x)
$$
Let $\pi = \mathbb P { Y = 1}$ then this is also
$$
\pi f_{X|Y}(x | 1) > (1 - \pi) f_{X|Y} (x|0)
$$
which is
$$
\frac{f_{X|Y}(x | 1)}{f_{X|Y} (x|0)} > \tau = \frac{1-\pi}{\pi}
$$
Bayes rule in classification
$$
\frac{f_{X|Y}(x | 1)}{f_{X|Y} (x|0)} > \tau = \frac{1-\pi}{\pi}
$$
the Bayes rule is performing a likelihood ratio test
Generative methods
A generative method does the following
1. treats $Y=1$ and $Y=0$ as different datasets and tries to estimate the densities $\hat f_{X | Y}$.
2. then plug these in to the formula for the Bayes rule
Naive Bayes methods assume that each component of $X$ is independent of one another, but does non-parametric density estimation for the densities $\hat f_{X_j|Y}$
Parametric methods fit a parametric density to $X|Y$
Density estimation
Parametric maximum likelihood estimation
Nonparametric: Kernel density estimation (KDE), nearest neighbor methods,
Reasonable heuristic for estimating a density $\hat f_X$, based on data $x_1,\ldots,x_n$ is
1. Let $N(x,\epsilon)$ be the number of data points within $\epsilon$ of $x$
2. $\hat f(x) = N(x,\epsilon) / n$Vol$(B(\epsilon))$ divide by the volume of the ball of radius $\epsilon$
$$\mathbb E \left( \frac{N(x,\epsilon)}{n} \right)= \mathbb P{X \in B(x,\epsilon) } \approx f_x(x) \textrm{Vol}(B(\epsilon))$$
Kernel density estimation
Let the Boxcar kernel function be
$$
k(\|x_0-x_1\|) = \frac{1{ \| x_0 - x_1 \| \le 1 }}{{\rm Vol}(B(1))}
$$
then the number of pts within $\epsilon$ is
$$
N(x,\epsilon) = {\rm Vol}(B(1)) \sum_i k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
and the density estimate is
$$
\hat f(x) = \frac 1n \sum_i \frac{{\rm Vol}(B(1))}{{\rm Vol}(B(\epsilon))} \cdot k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
this is equal to
$$
\hat f(x) = \frac 1n \sum_i \frac{1}{\epsilon^p} \cdot k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
Kernel density estimation
General kernel density estimate is based on a kernel such that
$$
\int k(\|x-x_0\|) dx = 1.
$$
Then KDE is
$$
\hat f(x') = \frac 1n \sum_i \frac{1}{\epsilon^p} \cdot k\left( \frac{\| x' - x_i \|}{\epsilon} \right)
$$
where $p$ is the dimensionality of the X space.
$\epsilon$ is a bandwidth parameter.
from wikipedia
Naive Bayes
For each $y = 0,1$ let $x_1,\ldots,x_{n_y}$ be the predictor data with $Y = y$
- For each dimension j
- Let $\hat f_{y,j}$ be the KDE of $x_{1,j},\ldots,x_{n_y,j}$
- Let $\hat f_y = \prod_j \hat f_{y,j}$
Let $\pi$ be the proportion of $Y = 1$ then let $\tau = (1 - \pi) / \pi$.
Predict $\hat y = 1$ for a new $x'$ if
$$
\frac{\hat f_{1}(x')}{\hat f_{0} (x')} > \tau
$$
and $\hat y=0$ otherwise.
from mathworks.org
Exercise 6.1
Let $x_0,x_1 \in \mathbb R^p$ and
$$k(\|x_0 - x_1\|) = \frac{1}{(2\pi)^{k/2}} \exp \left(- \frac 12 \|x_0 - x_1\|_2^2 \right).$$
How do we know that this is a valid kernel for multivariate density estimation?
Suppose that you used this kernel to obtain a multivariate density estimate, $\hat f: \mathbb R^p \rightarrow \mathbb R$, and also used the subroutine in Naive Bayes to estimate $\hat f_N(x') = \prod_j \hat f_j(x_j')$. Will these return the same results? Think about the boxcar kernel with bandwidth of 1, what are the main differences between these methods?
STOP
Answer to 6.1
This is a Gaussian pdf with mean $x_1$ and variance $I$ so it integrates to 1.
They are not the same because
$$
\frac 1n \sum_i \exp\left(-\frac 12 \sum_j (x_{ij} - x_j')^2\right) \ne \prod_j \left( \frac 1n \sum_i \exp(-\frac 12 (x_{ij} - x_j')^2)\right)
$$
For the boxcar kernel in p dimensions, $k(\| x' - x_i\|) \ne 0$ if $\| x' - x_i \| \le 1$ while $k(|x_j' - x_{ij}|) \ne 0$ if $|x_j' - x_{ij}| \le 1$. So $\hat f_N(x') \ne 0$ if $|x_j' - x_{ij}| \le 1$ for all j.
Gaussian Generative Models
Fit parametric model for each class using likelihood based approach.
Assume a Gaussian distribution
$$
X | Y = k \sim \mathcal N(\mu_k, \Sigma_k)
$$
for mean and variance parameters $\mu_k, \Sigma_k$.
End of explanation
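As a rough illustration of the KDE-based Naive Bayes recipe above (not part of the original lecture code), one could estimate a per-class, per-dimension density with sklearn's KernelDensity and sum the log ratios; the bandwidth below is an arbitrary placeholder.
# Sketch: KDE Naive Bayes scores; predict 1 when log_ratio > log((1 - pi) / pi)
def naive_bayes_kde_scores(X_tr, y_tr, X_te, bandwidth=0.5):
    log_ratio = np.zeros(len(X_te))
    for j in range(X_tr.shape[1]):
        kde1 = neighbors.KernelDensity(bandwidth=bandwidth).fit(X_tr[y_tr == 1, j:j + 1])
        kde0 = neighbors.KernelDensity(bandwidth=bandwidth).fit(X_tr[y_tr == 0, j:j + 1])
        log_ratio += kde1.score_samples(X_te[:, j:j + 1]) - kde0.score_samples(X_te[:, j:j + 1])
    return log_ratio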
score_dur = X_te[:,2]
p9.ggplot(bank_tr[['duration','y']].dropna(axis=0)) + p9.aes(x = 'duration',fill = 'y')\
+ p9.geom_density(alpha=.5)
y_te
plot_conf_score(y_te,score_dur,1.)
plot_conf_score(y_te,score_dur,2.)
## Fit and find NNs
nn = neighbors.NearestNeighbors(n_neighbors=10,metric="l2")
nn.fit(X_tr)
dists, NNs = nn.kneighbors(X_te)
NNs[1], y_tr[NNs[1]].mean(), y_te[1]
score_nn = np.array([(y_tr[knns] == 1).mean() for knns in NNs])
plot_conf_score(y_te,score_nn,.2)
nn = neighbors.KNeighborsClassifier(n_neighbors=10)
nn.fit(X_tr, y_tr)
score_nn = nn.predict_proba(X_te)[:,1]
plot_conf_score(y_te,score_nn,.2)
def print_top_k(score_dur,y_te,k_top):
ordering = np.argsort(score_dur)[::-1]
print("k: score, y")
for k, (yv,s) in enumerate(zip(y_te[ordering],score_dur[ordering])):
print("{}: {}, {}".format(k,s,yv))
if k >= k_top - 1:
break
print_top_k(score_dur,y_te,10)
Explanation: Evaluating a classifier
Most classifiers are "soft" because they can output a score, higher means more likely to be $Y=1$
- Logistic regression: output probability
- SVM: distance from margin
- kNN: percent of neighbors with $Y=1$
- LDA/QDA/Naive bayes: estimated likelihood ratio
If we order from largest to smallest then this gives us the points to predict as 1 first.
Choose a cut-off: everything scoring above this value is predicted 1 and everything below is predicted 0, and different cut-offs expose different types of errors.
Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>True 1</td><td>True 0</td></tr>
<tr><td>Pred 1</td><td>True Pos</td><td>False Pos</td></tr>
<tr><td>Pred 0</td><td>False Neg</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
plt.style.use('ggplot')
fpr_dur, tpr_dur, threshs = metrics.roc_curve(y_te,score_dur)
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title("ROC for 'duration'")
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur,label='duration')
plt.plot(fpr_nn,tpr_nn,label='knn')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC")
fpr_nn, tpr_nn, threshs = metrics.roc_curve(y_te,score_nn)
plot_temp()
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(rec_dur,prec_dur,label='duration')
plt.plot(rec_nn,prec_nn,label='knn')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
prec_dur, rec_dur, threshs = metrics.precision_recall_curve(y_te,score_dur)
prec_nn, rec_nn, threshs = metrics.precision_recall_curve(y_te,score_nn)
plot_temp()
Explanation: Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>True 1</td><td>True 0</td></tr>
<tr><td>Pred 1</td><td>True Pos</td><td>False Pos</td></tr>
<tr><td>Pred 0</td><td>False Neg</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
from sklearn import discriminant_analysis
## Init previous predictors list
preds = [("Duration",score_dur), ("NN", score_nn)]
## Fit and predict with LDA
lda = discriminant_analysis.LinearDiscriminantAnalysis()
lda.fit(X_tr,y_tr)
score_pred = lda.predict_log_proba(X_te)[:,1]
preds += [("LDA",score_pred)]
## Fit and predict with QDA
qda = discriminant_analysis.QuadraticDiscriminantAnalysis()
qda.fit(X_tr,y_tr)
score_pred = qda.predict_log_proba(X_te)[:,1]
preds += [("QDA",score_pred)]
def plot_pr_models(X_te, y_te, preds):
plt.figure(figsize=(6,6))
for name, score_preds in preds:
prec, rec, threshs = metrics.precision_recall_curve(y_te,score_preds)
plt.plot(rec,prec,label=name)
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
plot_pr_models(X_te, y_te, preds)
Explanation: Comments
"Good" ROC should be in top left
"Good" PR should be large for all recall values
PR is better for large class imbalance
ROC treats each type of error equally
Exercise 6.2
Apply LDA and QDA to the above dataset and compare the PR curves to the previous two methods. To calculate the "score" you can use the predict_log_proba method.
End of explanation |
6,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
6,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating Keras DNN model
Learning Objectives
Create input layers for raw features
Create feature columns for inputs
Create DNN dense hidden layers and output layer
Build DNN model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
Step1: Note
Step2: Set environment variables so that we can use them throughout the notebook.
Step3: Create ML datasets by sampling using BigQuery
We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
Step4: We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash within the module. Feel free to play around with these values to get the perfect combination.
Step6: We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
Step8: For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
Step10: Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise the CASE WHEN would become COALESCE(wday, day, 0) AS date. You can read more about it here.
Next query will combine our hash columns and will leave us just with our label, features, and our hash values.
Step12: The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.
Step14: The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
Step16: The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query.
Step18: We'll now select the range of buckets to be used in training.
Step20: We'll do the same by selecting the range of buckets to be used evaluation.
Step22: Lastly, we'll select the hash buckets to be used for the test split.
Step24: In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after.
Step26: Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to that we were hoping to get.
Step28: Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train, eval, test sets do not overlap and takes a subsample of our global splits.
Step29: Preprocess data using Pandas
We'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
Step30: Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
Step32: It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
Step33: Let's process the train, eval, test set and see a small sample of the training data after our preprocessing
Step34: Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.
Step35: Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
Step36: Create Keras model
Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS is going to be our header name of our column. Make sure that they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
Step39: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourselves from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
Step41: Create input layers for raw features.
We'll need to get the data to read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining
Step44: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
Step46: Create DNN dense hidden layers and output layer.
So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and ending with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
Step48: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
Step50: Build DNN model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
Step51: We can visualize the DNN using the Keras plot_model utility.
Step52: Run and evaluate model
Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data.
Step53: Visualize loss curve
Step54: Save the model | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Creating Keras DNN model
Learning Objectives
Create input layers for raw features
Create feature columns for inputs
Create DNN dense hidden layers and output layer
Build DNN model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
End of explanation
from google.cloud import bigquery
import pandas as pd
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Import necessary libraries.
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
Explanation: Set environment variables so that we can use them throughout the notebook.
End of explanation
bq = bigquery.Client(project = PROJECT)
Explanation: Create ML datasets by sampling using BigQuery
We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
End of explanation
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
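# The remaining buckets (modulo_divisor - train_buckets - eval_buckets) are left for the test split.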
Explanation: We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash within the module. Feel free to play around with these values to get the perfect combination.
End of explanation
def display_dataframe_head_from_query(query, count=10):
    """Displays count rows from dataframe head from query.

    Args:
        query: str, query to be run on BigQuery, results stored in dataframe.
        count: int, number of results from head of dataframe to display.
    Returns:
        Dataframe head with count number of results.
    """
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
Explanation: We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
End of explanation
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
    AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
Explanation: For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
End of explanation
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
Explanation: Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise, the CASE WHEN above would become COALESCE(day, wday, 0) AS date. You can read more about it here.
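A minimal sketch of that simpler form (illustrative only; the queries in this notebook keep the CASE WHEN version):
# Hypothetical drop-in replacement for the nested CASE WHEN in the SELECT list above:
# take `day` when it is not NULL, otherwise `wday`, otherwise 0.
date_expr = "COALESCE(day, wday, 0) AS date"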
Next query will combine our hash columns and will leave us just with our label, features, and our hash values.
End of explanation
# Get the counts of each of the unique hash of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
Explanation: The next query is going to find the count of each of the 657484 unique hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.
End of explanation
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
Explanation: The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
End of explanation
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
Explanation: The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query.
End of explanation
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
Explanation: We'll now select the range of buckets to be used in training.
End of explanation
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
Explanation: We'll do the same by selecting the range of buckets to be used for evaluation.
End of explanation
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
Explanation: Lastly, we'll select the hash buckets to be used for the test split.
End of explanation
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
Explanation: In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after.
End of explanation
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
Explanation: Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to what we were hoping to get.
End of explanation
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Creates a dataframe with a sample of a data split.

    Args:
        query_string: str, query to run to generate splits.
        splitting_string: str, modulo string to split by.
        lo: float, lower bound for bucket filtering for split.
        up: float, upper bound for bucket filtering for split.
    Returns:
        Dataframe containing data split sample.
    """
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
Explanation: Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train, eval, test sets do not overlap and takes a subsample of our global splits.
End of explanation
train_df.head()
Explanation: Preprocess data using Pandas
We'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is, we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than one child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
End of explanation
train_df.describe()
Explanation: Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
End of explanation
def preprocess(df):
    """Preprocess pandas dataframe for augmented babyweight data.

    Args:
        df: Dataframe containing raw babyweight data.
    Returns:
        Pandas dataframe containing preprocessed raw babyweight data as well
        as simulated no ultrasound data masking some of the original data.
    """
# Clean up raw data
# Filter out what we don"t want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
Explanation: It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
End of explanation
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
Explanation: Let's process the train, eval, test set and see a small sample of the training data after our preprocessing:
End of explanation
train_df.describe()
Explanation: Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.
End of explanation
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
%%bash
ls *.csv
%%bash
head -5 *.csv
Explanation: Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
End of explanation
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
Explanation: Create Keras model
Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS is going to be our header name of our column. Make sure that they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
End of explanation
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode='eval'):
    """Loads dataset using the tf.data API from CSV files.

    Args:
        pattern: str, file pattern to glob into list of files.
        batch_size: int, the number of examples per batch.
        mode: 'train' | 'eval' to determine if training or evaluating.
    Returns:
        `Dataset` object.
    """
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
ignore_errors=True)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # Prefetch so input loading overlaps with training (buffer_size=1 here; tf.data.experimental.AUTOTUNE is another option)
dataset = dataset.prefetch(buffer_size=1)
return dataset
Explanation: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourselves from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
End of explanation
# TODO 1
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.Keras.layers.Input` layers for each feature.
    """
inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]})
return inputs
Explanation: Create input layers for raw features.
We'll need to get the data to read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
End of explanation
# TODO 2
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.

    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Indicator column of categorical feature.
    """
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
return tf.feature_column.indicator_column(categorical_column=cat_column)
def create_feature_columns():
    """Creates dictionary of feature columns from inputs.

    Returns:
        Dictionary of feature columns.
    """
feature_columns = {
colname : tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
feature_columns["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
feature_columns["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
return feature_columns
Explanation: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
End of explanation
# TODO 3
def get_model_outputs(inputs):
    """Creates model architecture and returns outputs.

    Args:
        inputs: Dense tensor used as inputs to model.
    Returns:
        Dense tensor output from the model.
    """
    # Create two hidden layers of [64, 32], just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs)
h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1)
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(h2)
return output
Explanation: Create DNN dense hidden layers and output layer.
So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and ending with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
End of explanation
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.

    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
Explanation: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
End of explanation
# TODO 4
def build_dnn_model():
    """Builds simple DNN using Keras Functional API.

    Returns:
        `tf.keras.models.Model` object.
    """
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
Explanation: Build DNN model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
End of explanation
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
# TODO 5
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
trainds = load_dataset(
pattern="train*",
batch_size=TRAIN_BATCH_SIZE,
mode='train')
evalds = load_dataset(
pattern="eval*",
batch_size=1000,
mode='eval').take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
Explanation: Run and evaluate model
Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data.
End of explanation
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
Explanation: Visualize loss curve
End of explanation
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
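# Optional sanity check (editor's sketch, not part of the original notebook):
# reload the exported SavedModel and list its serving signatures.
loaded_model = tf.saved_model.load(EXPORT_PATH)
print("Serving signatures:", list(loaded_model.signatures.keys()))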
Explanation: Save the model
End of explanation |
6,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1T_Crawling lesson (Naver Joonggonara, mobile site)
Step1: Fetching the images
Step2: https | Python Code:
#네이버 중고나라(모바일)
import requests
from bs4 import BeautifulSoup
# "맥북" 키워드로 검색
url = "http://m.cafe.naver.com/ArticleSearchList.nhn?search.query=맥북&search.menuid=&search.searchBy=0&search.sortBy=sim&search.clubid=10050146"
response = requests.get(url)
response.status_code, response
dom = BeautifulSoup(response.text, "html.parser")
post_elements = dom.select("ul.list_tit li")
len(post_elements)
post_element = post_elements[0]
post_element
# Celery - Task Runner
# http://www.celeryproject.org/
# 파이썬의 일처리를 자동으로 해주는 테스크 언어. 조금 더 모듈화해서
title = post_element.select_one("h3").text
url = post_element.select_one("a").get("href") #href는 속성
nickname = post_element.select_one("span.name").text
created_at = post_element.select_one("span.time").text
count = post_element.select_one("span.no").text.split(" ")[-1]
print((title, url, nickname, created_at, count))
post_element.contents
#어떤 값들이 들어있는지 보여준다.
for post_element in post_elements:
title = post_element.select_one("h3").text
url = post_element.select_one("a").get("href")
nickname = post_element.select_one("span.name").text
created_at = post_element.select_one("span.time").text
count = post_element.select_one("span.no").text.split(" ")[-1]
print((title, url, nickname, created_at, count))
url
# "".join(url.split("?")).split("&") => 따로 빼서 사용할 수 있다.
url = "http://m.cafe.naver.com/ArticleRead.nhn?clubid=10050146&menuid=334&articleid=332341835&query=맥북"
# 상세 페이지 크롤링
response = requests.get(url)
dom = BeautifulSoup(response.text, "html.parser")
# 1. 핸드폰 번호 파싱 ( 정규표현식 이용 )
# 2. 첫 이미지의 url 가져오기
# 일단은 content 의 html 코드를 그대로 뽑아 와야 한다.
content_element = dom.select_one("#postContent")
# str(content_element) => 이걸로 html을 뽑아낼 수 있다.
# content_element.text => 이걸로 tag가 제외된 글자만을 뽑아낼 수 있다. (핸드폰 번호)
content_element.text
import re
# pattern = re.compile("[0-9영공일이삼사오육칠팔구O]{3}[-. ]+[0-9영공일이삼사오육칠팔구O]")
pattern = re.compile("".join([
"[0-9공영일이삼사오육칠팔구O]{3}", # 앞 숫자 3자리 ( 010 )
"[-. ]+", # 앞 3자리와 중간 4자리를 연결하는 애
"[0-9공영일이삼사오육칠팔구O]{4}", # 중간 숫자 4자리
"[-. ]+",
"[0-9공영일이삼사오육칠팔구O]{4}",
]))
pattern.findall(content_element.text)
def preprocess(phonenumber):
return phonenumber
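# A minimal sketch of what a real preprocessing step might do (editor's assumption, not from the original lesson):
# map Korean digit words and 'O' to digits, then normalize the separators matched by the pattern above.
def normalize_phonenumber(phonenumber):
    digit_map = {'영': '0', '공': '0', 'O': '0', '일': '1', '이': '2', '삼': '3',
                 '사': '4', '오': '5', '육': '6', '칠': '7', '팔': '8', '구': '9'}
    cleaned = ''.join(digit_map.get(ch, ch) for ch in phonenumber)
    return re.sub(r'[-. ]+', '-', cleaned)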
Explanation: 1T_Crawling lesson (Naver Joonggonara, mobile site)
End of explanation
from IPython.display import Image
content_element = dom.select_one("#postContent")
image_elements = content_element.select("img")
len(image_elements)
image_elements[1].get('src')
image_url = image_elements[3].get('src') #인덱스 바꾸면서 url 계속 입력하면서 확인
image_url
# image_url = "".join(image_url.split("?")[:-1])
print(image_url)
#iPython에서 이렇게 제거하면 할 수 있는 방법은 없다.
image_url
from IPython.display import IFrame
IFrame(image_url, width=300, height=200)
Explanation: Fetching the images
End of explanation
#이번에는 각자 주피터 노트북으로. 왜냐하면 selenium으로 할 것임
# coding: utf-8
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://cafe.naver.com/joonggonara")
# "맥북" 이라는 키워드로 검색 ( * ) => find_ele.., send_keys, click
# 검색 결과로 나오는 게시글 크롤링
# 1. 검색창 element
# 2. send_keys("맥북")
# 3. 검색 버튼 element
# 4. click()
search_form_element = driver.find_element_by_css_selector("#topLayerQueryInput")
search_form_element.send_keys("맥북")
# 할 수 있는 훨씬 쉬운 방법들이 있습니다. 이따 javascript 통해서 할 예정입니다.
search_button_element = driver.find_element_by_css_selector(".btn-search-green")
search_button_element.click()
post_elements = driver.find_elements_by_css_selector("td.board-list")
len(post_elements)
#The posts are actually there and visible on screen, yet this returns 0.
#The reason is that the page behaves unusually: it has a nested, two-level frame (iframe) structure.
#Naver uses quite a few iframes.
iframe_element = driver.find_element_by_css_selector("#cafe_main")
driver.switch_to_frame(iframe_element) # 내부적으로는 focus가 옮겨진거.
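# Note (editor): newer Selenium releases write this as driver.switch_to.frame(iframe_element).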
post_elements = driver.find_elements_by_css_selector("td.board-list")
len(post_elements)
#관공서 사이트가 대부분 이런 형태로 되어 있음
post_element = post_elements[0]
post_element
title = post_element.find_element_by_css_selector("a").text
url = post_element.find_element_by_css_selector("a").get_attribute("href")
print(title)
print(url)
for post_element in post_elements:
title = post_element.find_element_by_css_selector("a").text
url = post_element.find_element_by_css_selector("a").get_attribute("href")
print(title)
print(url)
# 정적인 ( HTML )
# 동적인 ( API; json, xml )
# 동적인 ( Selenium )
# + 한국형 웹사이트
# - selenium iFrame
# - selenium javascript
#http://kcia.or.kr/cid/Document/020.Ingredient_shis/INGREDIENT_SHIS_10L.asp
#화장품 성분명 사이트 실습.
driver = webdriver.Firefox()
driver.get("http://kcia.or.kr/cid/Document/020.Ingredient_shis/INGREDIENT_SHIS_10L.asp")
driver.execute_script('fGoPage(1)')
for page in range(1, 100+1):
driver.execute_script("fGoPage({page})".format(page=page))
driver = webdriver.Firefox()
driver.get("http://fastcampus.co.kr")
driver.execute_script("alert('무료 수강권 이벤트에 당첨되었습니다.')")
Explanation: See https://opentutorials.org/course/1375/6843 for reference.
You can inspect it that way too, but our site is on https (a secure site), so click the security icon, change the setting, and then take a look.
Here we look at API-style ajax, but there are many kinds of ajax.
ajax stands for asynchronous JavaScript and XML: a way of sending a request with JavaScript, without the page changing, and then receiving the response as an XML document.
These days, however, the web uses JSON far more than XML, so in practice we are receiving JSON; the response can also carry HTML or plain text.
That is ajax: the whole sequence in which, without the website itself changing, a click triggers work behind the site and the result changes.
2T_Types of data crawling (4) - Korean-style websites ( iFrame, javascript )
End of explanation |
6,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Start by defining the parent catalog URL from NCI's THREDDS Data Server
Note
Step2: <a id='part1'></a>
Using Siphon
Siphon is a collection of Python utilities for downloading data from Unidata data technologies. More information on installing and using Unidata's Siphon can be found
Step3: The possible data service endpoints through NCI's THREDDS include
Step4: We can create a small function that uses Siphon's Netcdf Subset Service (NCSS) to extract a spatial request (defined by a lat/lon box)
Step5: Query a single file and view result
Step6: Loop and query over the collection
Step7: We can make an animation of the temporal evolution (this example was made by converting the series of *.png files above into a GIF)
<img src="./images/animated.gif">
Can also use Siphon to extract a single point
Step8: Time series example | Python Code:
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from siphon import catalog, ncss
import datetime
%matplotlib inline
Explanation: <img src="http://nci.org.au/wp-content/themes/nci/img/img-logo-large.png", width=400>
Programmatically accessing data through THREDDS and the VDI
...using Python 3
In this notebook:
<a href='#part1'>Using the Siphon Python package to programmatically access THREDDS data service endpoints</a>
<a href='#part2'>Programmatically accessing files from the VDI</a>
The following material uses CSIRO IMOS TERN-AusCover MODIS Data Collection. For more information on the collection and licensing, please click here.
Prerequisites:
A python 3 virtual environment with the following Python modules loaded:
matplotlib
netcdf4
siphon
shapely
requests
Some knowledge of navigating the NCI data catalogues to find a dataset. Screenshots at the start of this notebook are a useful example, although using a different dataset.
Setup instructions for python 3 virtual environments can be found here.
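A minimal install sketch based on the package list above (package names only; pinning versions and creating the virtual environment are left to the linked instructions):
!pip install --user matplotlib netCDF4 siphon shapely requests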
<br>
Import python packages
End of explanation
url = 'http://dapds00.nci.org.au/thredds/catalog/u39/public/data/modis/fractionalcover-clw/v2.2/netcdf/catalog.xml'
Explanation: Start by defining the parent catalog URL from NCI's THREDDS Data Server
Note: Switch the '.html' ending on the URL to '.xml'
End of explanation
tds = catalog.TDSCatalog(url)
datasets = list(tds.datasets)
endpts = list(tds.datasets.values())
list(tds.datasets.keys())
Explanation: <a id='part1'></a>
Using Siphon
Siphon is a collection of Python utilities for downloading data from Unidata data technologies. More information on installing and using Unidata's Siphon can be found:
https://github.com/Unidata/siphon
Once selecting a parent dataset directory, Siphon can be used to search and use the data access methods and services provided by THREDDS. For example, Siphon will return a list of data endpoints for the OPeNDAP data URL, NetCDF Subset Service (NCSS), Web Map Service (WMS), Web Coverage Service (WCS), and the HTTP link for direct download.
In this Notebook, we'll be demonstrating the Netcdf Subset Service (NCSS).
End of explanation
for key, value in endpts[0].access_urls.items():
print('{}, {}'.format(key, value))
Explanation: The possible data service endpoints through NCI's THREDDS include: OPeNDAP, Netcdf Subset Service (NCSS), HTTP download, Web Map Service (WMS), Web Coverage Service (WCS), NetCDF Markup Language (NcML), and a few metadata services (ISO, UDDC).
End of explanation
def get_data(dataset, bbox):
nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])
query = nc.query()
query.lonlat_box(north=bbox[3],south=bbox[2],east=bbox[1],west=bbox[0])
query.variables('bs')
data = nc.get_data(query)
lon = data['longitude'][:]
lat = data['latitude'][:]
bs = data['bs'][0,:,:]
t = data['time'][:]
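    # Editor's note: the offset below assumes the file's time variable is encoded as days since 1800-01-01.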
time_base = datetime.date(year=1800, month=1, day=1)
time = time_base + datetime.timedelta(t[0])
return lon, lat, bs, time
Explanation: We can create a small function that uses Siphon's Netcdf Subset Service (NCSS) to extract a spatial request (defined by a lat/lon box)
End of explanation
bbox = (135, 140, -31, -27)
lon, lat, bs, t = get_data(endpts[0], bbox)
plt.figure(figsize=(10,10))
plt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')
plt.xlabel('longitude (degrees)', fontsize=14)
plt.ylabel('latitude (degrees)', fontsize=14)
print("Date: {}".format(t))
Explanation: Query a single file and view result
End of explanation
bbox = (135, 140, -31, -27)
plt.figure(figsize=(10,10))
for endpt in endpts[:15]:
try:
lon, lat, bs, t = get_data(endpt, bbox)
plt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')
plt.clim(vmin=-2, vmax=100)
plt.tick_params(labelsize=14)
plt.xlabel('longitude (degrees)', fontsize=14)
plt.ylabel('latitude (degrees)', fontsize=14)
plt.title("Date: "+str(t), fontsize=16, weight='bold')
plt.savefig("./images/"+endpt.name+".png")
plt.cla()
except:
pass
plt.close()
Explanation: Loop and query over the collection
End of explanation
def get_point(dataset, lat, lon):
nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])
query = nc.query()
query.lonlat_point(lon, lat)
query.variables('bs')
data = nc.get_data(query)
bs = data['bs'][0]
date = data['date'][0]
return bs, date
bs, date = get_point(endpts[4], -27.75, 137)
print("{}, {}".format(bs, date))
Explanation: We can make an animation of the temporal evolution (this example was made by converting the series of *.png files above into a GIF)
<img src="./images/animated.gif">
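One possible way to stitch those saved frames into the GIF (a sketch; it assumes the imageio package is available, which the original notebook does not import):
import glob
import imageio
frames = [imageio.imread(fname) for fname in sorted(glob.glob('./images/*.png'))]
imageio.mimsave('./images/animated.gif', frames, duration=0.5)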
Can also use Siphon to extract a single point
End of explanation
data = []
for endpt in endpts[::20]:
bs, date = get_point(endpt, -27.75, 137)
data.append([date, bs])
import numpy as np
BS = np.array(data)[:,1]
Date = np.array(data)[:,0]
plt.figure(figsize=(12,6))
plt.plot(Date, BS, '-o', linewidth=2, markersize=8)
plt.tick_params(labelsize=14)
plt.xlabel('date', fontsize=14)
plt.ylabel('fractional cover of bare soil (%)', fontsize=14)
plt.title('Lat, Lon: -27.75, 137', fontsize=16)
Explanation: Time series example
End of explanation |
6,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Equalização de Histograma
A transformação de contraste que procura distribuir a ocorrência dos níveis de cinza igualmente
na faixa de tons de cinza é denominada equalização de histograma.
O objetivo do exemplo a seguir é o de fazer a equalização de histograma de uma imagem. Ele é
dado pela aplicação de uma transformação de contraste T(r) que é calculado pelo histograma
acumulado normalizado da imagem.
Equação da Transformação de Intensidade para Equalizar Histograma
A equação da transformação de intensidade T(r) a partir do histograma h(i) da imagem que equaliza
a imagem, é dada por
Step1: Plotamos seu histograma e calculamos a transformação de contraste que equaliza o histograma baseado
na equação vista anteriormente. O somatório da equação é eficientemente calculado com a função
np.cumsum que calcula a soma acumulada de um vetor. Visualizamos a transformação T[r] pelo gráfico
Step2: A aplicação da transformação T em f
Step3: Finalmente, plotamos o histograma da imagem equalizada. Note o efeito mencionado acima em que o
histograma equalizado fica espalhado. Quando se calcula o histograma acumulado, nota-se daí que
o histograma de fato está normalizado.
Step4: Um problema da formulação simplificada acima é que no caso da imagem original não ter nenhum
pixel igual a zero, a equalização da imagem usando esta formulação não irá fazer com que o
menor pixel seja zero. Veja o exemplo a seguir, onde o menor valor do pixel na imagem original é 75
Step5: Observe que a Transformação que equaliza a imagem, o seu primeiro valor
não zero é 8 (pois T[75] = 8). Isto faz com que o menor valor da imagem equalizada resultante seja 8 e não zero como
desejado
Step6: Para fazer com que o valor do menor pixel da imagem equalizada seja zero, temos duas opções básicas
Step7: Verificando a equação da wikipedia
Step8: Comparando-se o resultado (gn) com o valor da Wikipedia, percebemos que existe uma
pequena diferença nos valores de alguns pixels. Esta diferença é devido ao fato que
na equação da Wikipedia, é usado um arredondamento (round) enquanto que na
função ia898 | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
nb = ia.nbshow(2)
f = mpimg.imread('../data/cameraman.tif')
nb.nbshow(f,'Imagem original')
fsort = np.sort(f.ravel()).reshape(f.shape)
nb.nbshow(fsort, 'Imagem pixels ordenados')
nb.nbshow()
Explanation: Equalização de Histograma
A transformação de contraste que procura distribuir a ocorrência dos níveis de cinza igualmente
na faixa de tons de cinza é denominada equalização de histograma.
O objetivo do exemplo a seguir é o de fazer a equalização de histograma de uma imagem. Ele é
dado pela aplicação de uma transformação de contraste T(r) que é calculado pelo histograma
acumulado normalizado da imagem.
Equação da Transformação de Intensidade para Equalizar Histograma
A equação da transformação de intensidade T(r) a partir do histograma h(i) da imagem que equaliza
a imagem, é dada por:
$$ T(r) = \frac{(L-1)}{n} \sum_{i = 0}^{r} h(i), \quad r = 0, 1,..., L-1 $$
Os valores de intensidade dos pixels da imagem (indices i e r) variam de 0 a L-1;
n é o número total de pixels na imagem;
$h(i)$ é o histograma da imagem, i. e., é o número de vezes que o nível de cinza i aparece na imagem;
$T(r)$ é a função de mapeamento que equaliza a imagem dada pelo histograma $h$.
$L$ é o número de níveis na imagem final, usualmente 256 para imagens uint8.
Existem dois pontos que devem ser considerados no uso da equalização de histograma:
A formulação acima é exata no caso contínuo. Para o caso discreto, o histograma equalizado
não fica totalmente planar pois não é possível no modelo acima distribuir um nível de cinza
com muita ocorrência em mais de um valor de cinza. Normalmente mostra-se o histograma acumulado
que é normalmente uma reta de 45 graus, como será visto no exemplo abaixo. Veremos num outro
tutorial uma forma de contornar este problema.
Ilustração da equalização de histograma
A imagem original é mostrada a seguir, e para fins ilustrativos, a imagem com seus pixels ordenados também é mostrada ao
lado mostrando por exemplo que a imagem tem uma predominância de níveis de cinza escuros, bem
destacados.
End of explanation
h = ia.histogram(f)
plt.plot(h),plt.title('Histograma de f');
n = f.size
T = 255./n * np.cumsum(h)
T = T.astype('uint8')
plt.plot(T),plt.title('Transformação de intensidade para equalizar');
Explanation: Plotamos seu histograma e calculamos a transformação de contraste que equaliza o histograma baseado
na equação vista anteriormente. O somatório da equação é eficientemente calculado com a função
np.cumsum que calcula a soma acumulada de um vetor. Visualizamos a transformação T[r] pelo gráfico:
End of explanation
nb = ia.nbshow(3)
nb.nbshow(f,'imagem original, média=%d' % (f.mean()))
g = T[f]
nb.nbshow(g, 'imagem equalizada, média=%d' % (g.mean()))
gsort = np.sort(g.ravel()).reshape(g.shape)
nb.nbshow(gsort, 'imagem equalizada ordenada')
nb.nbshow()
Explanation: A aplicação da transformação T em f: fazendo-se g = T[f], resulta na imagem g equalizada.
Para fins ilustrativos, colocamos ao lado a imagem equalizada com seus pixels ordenados.
Observa-se que a distribuição dos níveis de cinza ficou uniforme:
End of explanation
plt.figure(0)
hg = ia.histogram(g)
plt.plot(hg),plt.title('Histograma da imagem equalizada')
plt.figure(1)
hgc = np.cumsum(hg)
plt.plot(hgc),plt.title('Histograma acumulado da imagem equalizada');
Explanation: Finalmente, plotamos o histograma da imagem equalizada. Note o efeito mencionado acima em que o
histograma equalizado fica espalhado. Quando se calcula o histograma acumulado, nota-se daí que
o histograma de fato está normalizado.
End of explanation
f = mpimg.imread('../data/angiogr.tif')
f = np.clip(f,75,255)
ia.adshow(f)
h = ia.histogram(f)
plt.plot(h);
#print('info:',ia.iaimginfo(f))
Explanation: Um problema da formulação simplificada acima é que no caso da imagem original não ter nenhum
pixel igual a zero, a equalização da imagem usando esta formulação não irá fazer com que o
menor pixel seja zero. Veja o exemplo a seguir, onde o menor valor do pixel na imagem original é 75:
End of explanation
n = f.size
T = 255./n * np.cumsum(h)
T = T.astype('uint8')
print('T:',T)
plt.plot(T),plt.title('Transformação de intensidade para equalizar')
g = T[f]
#print('info:', ia.iaimginfo(g))
ia.adshow(g, 'imagem equalizada')
Explanation: Observe que a Transformação que equaliza a imagem, o seu primeiro valor
não zero é 8 (pois T[75] = 8). Isto faz com que o menor valor da imagem equalizada resultante seja 8 e não zero como
desejado:
End of explanation
gn = ia.normalize(g)
#print 'info:',ia.iaimginfo(gn)
ia.adshow(gn, 'imagem equalizada e normalizada')
hgn = ia.histogram(gn)
plt.plot(hgn),plt.title('histograma');
Explanation: Para fazer com que o valor do menor pixel da imagem equalizada seja zero, temos duas opções básicas:
Após a equalização, normalizar a imagem entre 0 e 255. Isto pode ser feito com ia898:normalize.
Outra solução é alterar a equação do início desta página já incorporando esta normalização final. Esta
equação é a mesma que aparece na descrição da Wikipedia, no link no final da página:
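For reference, a sketch of that adjusted mapping as given on Wikipedia (here $cdf$ is the cumulative histogram, $cdf_{min}$ its smallest non-zero value, and $n$ the number of pixels):
$$ T(r) = \mathrm{round}\left(\frac{cdf(r) - cdf_{min}}{n - cdf_{min}}\,(L-1)\right) $$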
End of explanation
wiki=np.array([[52,55,61,66,70,61,64,73],
[63,59,55,90,109,85,69,72],
[62,59,68,113,144,104,66,73],
[63,58,71,122,154,106,70,69],
[67,61,68,104,126,88,68,70],
[79,65,60,70,77,68,58,75],
[85,71,64,59,55,61,65,83],
[87,79,69,68,65,76,78,94]])
print('wiki=\n',wiki)
h = ia.histogram(wiki)
n = wiki.size
T = 255./n * np.cumsum(h)
T = np.floor(T).astype('uint8')
g = T[wiki]
print('g=\n',g)
gn = ia.normalize(g)
print('gn=\n',gn)
Explanation: Verificando a equação da wikipedia
End of explanation
faux = g.ravel().astype('float')
minimum = min(faux)
maximum = max(faux)
lower = 0
upper = 255
gnn = np.round((faux - minimum) * (upper - lower) / (maximum - minimum) + lower,0)
gnn = gnn.reshape(g.shape).astype(np.int)
print('gnn=\n',gnn)
Explanation: Comparando-se o resultado (gn) com o valor da Wikipedia, percebemos que existe uma
pequena diferença nos valores de alguns pixels. Esta diferença é devido ao fato que
na equação da Wikipedia, é usado um arredondamento (round) enquanto que na
função ia898:normalize, é usado um truncamento. A seguir foi feita uma outra
função similar à ianormalize, porém utilizando a função (round). Note que
neste caso o resultado confere com a Wikipedia.
End of explanation |
6,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
数据结构
这一节介绍pandas中的数据结构。首先,导入numpy和pandas
Step1: 我们先对数据结构进行简短的介绍, 然后再详细说明各个数据结构内置的方法。
Series
Series是一个一维带label的数组,元素可以是任何数据类型(整数、字符串、浮点数,Python对象等等)。和Python列表一样,Series元素的数据类型可以不同。Series是值可变的数据结构,你可以对它的值进行修改,但是Series的大小是不可以改变的。
Series的label就是index。
创建一个Series对象:
s = pd.Series(data, index=index)
这里,data可以是:
Python字典
ndarray
标量数值(比如5)
传入的index是label组成的列表。 因此,依据data的不同,我们可以分成以下几种情形:
ndarray
如果data是一个ndarray,index的长度必须和data一样。若没有指明index,则默认创建一个 [0, ..., len(data) - 1]的index。
Step2: 注意: 从0.8.0版本开始,index的值可以重复。
Step3: dict字典
如果dict是字典,index的长度可以和data长度不同,也可以不提供index:
Step4: 注意: NaN(not a number)是pandas中缺失值标记。
标量数值
如果data是一个标量值,必须提供index。为了匹配index的长度,该数值将被重复。
Step5: Series vs ndarray
大多数情况下,你都可以把Series看做一个ndarray来处理。甚至,大多数的NumPy函数都支持Series作参数。
Step6: Series vs dict
你可以把Series看做一个长度固定的字典,你可以通过index来获取和设置数值:
Step7: 如果label值不存在,将抛出异常:
Step8: 使用get方法,遇到缺失的label将返回None或指定值:
Step9: 向量操作和label对齐
进行数据分析时,你可以把Series看做向量进行整体操作。大多数接受ndarray作参数的NumPy方法也支持传入Series对象。
Step10: Series和ndarray之间的主要区别是,Series之间的操作会基于label对数据自动对齐。
所以,你不需要考虑两个参与计算的Series是否有完全相同的label。
Step11: 如果两个label不一样的Series参与计算,存储结果的Series的index/label是它们俩的union(并集)。正是自带label对齐,才使得pandas区别于大多数数据分析库,优越性一览无余。
name属性
Series构造函数还有另一个参数:name,
Step12: 通过pandas.Series.rename()方法,可以给Series重命名。
Step13: 注意s和s2是两个不同对象。
DataFrame
DataFrame是一个带label的二维数据结构。你可以把它看做一张SQL表,或是由Series构成的字典。它基本上是最常用的pandas数据结构。
和Series一样,DataFrame构造函数支持许多不同类型的输入:
字典,可以是普通字典,或者一维ndarray字典、列表字典、Series字典
二维numpy.ndarray
结构化或记录化的ndarray
字典列表
一个Series
DataFrame
DataFrame构造函数也支持传入index(行label)和columns(列label)参数。
Series字典或字典
结果的index是各个Series的index的union。如果存在任何嵌套类型的字典,将首先转换成Series。如果不传入columns,则默认把字典的键构成的有序列表看做columns。
Step14: 行label和列label可分别通过访问index和columns获取:
Step15: ndarray字典 列表字典
ndarrays必须具有相同的长度。如果传入index参数,它的长度也必须和ndarray长度相同。如果不传入index参数,DataFrame会默认创建index=range(n),其中n是ndarray的长度。
Step16: 结构化或记录ndarray
Step17: 字典列表
Step18: 元组字典
Step19: Series
结果是创建一个列数为1的DataFrame,它的index和输入的Series相同,column名是Series的name。
其他的构造函数
DataFrame.from_dict
DataFrame.from_dict接受字典的字典或数组序列的字典,并返回一个DataFrame。
DataFrame.from_records
DataFrame.from_records接受一个元组列表或结构化类型的ndarray。例如:
Step20: DataFrame.from_items
Step21: 列选择,添加,删除
你可以把DataFrame看做Series构成的字典,使用和字典同样的操作来增加列、删除列、对列重新赋值:
Step22: 列可以删除或像字典一样被弹出
Step23: 当插入一个标量值,它自然会被填充到列中
Step24: 当插入不具有相同的index的Series时,将服从该DataFrame的索引:
Step25: 你可以插入ndarray,但其长度必须与DataFrame的index长度相匹配。
缺省情况下,列被插在末端。insert()函数可以允许插入到指定的列位置:
Step26: 用方法链来分配新列
DataFrame有一个assign()方法,利用现有列来创建新列。
iris = pd.read_csv('data/iris.data')
iris.head()
(iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength'])
.head())
iris.assign(sepal_ratio = lambda x
Step27: 数据对齐和运算
DataFrame对象之间会自动对index和column进行数据对齐。运算结果的index和columns分别是参与运算的DataFrame的index和columns的并集(union)。
Step28: DataFrame与Series进行运算时, 默认行为是DataFrame的column和Series的index对齐,也就是对Series进行行广播。例如
Step29: 在特殊情况下, 处理时间序列数据时, DataFrame的index包含日期,广播会变成列广播
Step30: 布尔运算
Step31: 转置
转置, T属性(转置函数)
Step32: DataFrame与NumPy函数的互用性
Step33: dot方法能够实现DataFrame矩阵乘法
Step34: 类似地,dot方法也能运用在Series上
Step35: 控制台显示
在控制台,非常大的DataFrames将被截断显示它们。你也可以使用info()查看详情。
baseball = pd.read_csv('data/baseball.csv')
print(baseball)
baseball.info()
然而,使用to_string,DataFrame将返回一个字符串表示的表格形式,虽然它并不总是适合控制台宽度 | Python Code:
import numpy as np
import pandas as pd
Explanation: 数据结构
这一节介绍pandas中的数据结构。首先,导入numpy和pandas:
End of explanation
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
s
s.index
pd.Series(np.random.randn(5))
Explanation: 我们先对数据结构进行简短的介绍, 然后再详细说明各个数据结构内置的方法。
Series
Series是一个一维带label的数组,元素可以是任何数据类型(整数、字符串、浮点数,Python对象等等)。和Python列表一样,Series元素的数据类型可以不同。Series是值可变的数据结构,你可以对它的值进行修改,但是Series的大小是不可以改变的。
Series的label就是index。
创建一个Series对象:
s = pd.Series(data, index=index)
这里,data可以是:
Python字典
ndarray
标量数值(比如5)
传入的index是label组成的列表。 因此,依据data的不同,我们可以分成以下几种情形:
ndarray
如果data是一个ndarray,index的长度必须和data一样。若没有指明index,则默认创建一个 [0, ..., len(data) - 1]的index。
End of explanation
pd.Series(np.random.randn(3), index=['a', 'b', 'a'])
Explanation: 注意: 从0.8.0版本开始,index的值可以重复。
End of explanation
d = {'a' : 0., 'b' : 1., 'c' : 2.}
pd.Series(d)
pd.Series(d, index=['b', 'c', 'd', 'a'])
Explanation: dict字典
如果dict是字典,index的长度可以和data长度不同,也可以不提供index:
End of explanation
pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
Explanation: 注意: NaN(not a number)是pandas中缺失值标记。
标量数值
如果data是一个标量值,必须提供index。为了匹配index的长度,该数值将被重复。
End of explanation
s[0]
s[:3]
s[s > s.median()]
s[[4,3,1]]
np.exp(s)
Explanation: Series vs ndarray
大多数情况下,你都可以把Series看做一个ndarray来处理。甚至,大多数的NumPy函数都支持Series作参数。
End of explanation
s['a']
s['e'] = 12.
s
'e' in s
'f' in s
Explanation: Series vs dict
你可以把Series看做一个长度固定的字典,你可以通过index来获取和设置数值:
End of explanation
s['f']
Explanation: 如果label值不存在,将抛出异常:
End of explanation
s.get('f')
s.get('f', np.nan)
Explanation: 使用get方法,遇到缺失的label将返回None或指定值:
End of explanation
s + s
s * 2
np.exp(s)
Explanation: 向量操作和label对齐
进行数据分析时,你可以把Series看做向量进行整体操作。大多数接受ndarray作参数的NumPy方法也支持传入Series对象。
End of explanation
s[1:] + s[:-1]
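# Labels present in only one operand (the first and last positions here) come out as NaN in the result.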
Explanation: Series和ndarray之间的主要区别是,Series之间的操作会基于label对数据自动对齐。
所以,你不需要考虑两个参与计算的Series是否有完全相同的label。
End of explanation
s = pd.Series(np.random.randn(5), name='something')
s
s.name
Explanation: 如果两个label不一样的Series参与计算,存储结果的Series的index/label是它们俩的union(并集)。正是自带label对齐,才使得pandas区别于大多数数据分析库,优越性一览无余。
name属性
Series构造函数还有另一个参数:name,
End of explanation
s2 = s.rename("different")
s2.name
s.name
Explanation: 通过pandas.Series.rename()方法,可以给Series重命名。
End of explanation
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
df
pd.DataFrame(d, index=['d','b','a'])
pd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])
Explanation: 注意s和s2是两个不同对象。
DataFrame
DataFrame是一个带label的二维数据结构。你可以把它看做一张SQL表,或是由Series构成的字典。它基本上是最常用的pandas数据结构。
和Series一样,DataFrame构造函数支持许多不同类型的输入:
字典,可以是普通字典,或者一维ndarray字典、列表字典、Series字典
二维numpy.ndarray
结构化或记录化的ndarray
字典列表
一个Series
DataFrame
DataFrame构造函数也支持传入index(行label)和columns(列label)参数。
Series字典或字典
结果的index是各个Series的index的union。如果存在任何嵌套类型的字典,将首先转换成Series。如果不传入columns,则默认把字典的键构成的有序列表看做columns。
End of explanation
df.index
df.columns
Explanation: 行label和列label可分别通过访问index和columns获取:
End of explanation
d = {'one' : [1., 2., 3., 4.],
'two' : [4., 3., 2., 1.]}
pd.DataFrame(d)
pd.DataFrame(d, index=['a', 'b', 'c', 'd'])
Explanation: ndarray字典 列表字典
ndarrays必须具有相同的长度。如果传入index参数,它的长度也必须和ndarray长度相同。如果不传入index参数,DataFrame会默认创建index=range(n),其中n是ndarray的长度。
End of explanation
data = np.zeros((2,), dtype=[('A', 'i4'),('B', 'f4'),('C', 'a10')])
data[:] = [(1,2.,'Hello'), (2,3.,"World")]
pd.DataFrame(data)
pd.DataFrame(data, index=['first','second'])
pd.DataFrame(data, columns=['C','A','B'])
Explanation: 结构化或记录ndarray
End of explanation
data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
pd.DataFrame(data2)
pd.DataFrame(data2, index=['first','second'])
pd.DataFrame(data2, columns=['a','b'])
Explanation: 字典列表
End of explanation
pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},
('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
Explanation: 元组字典
End of explanation
data
pd.DataFrame.from_records(data, index='C')
Explanation: Series
结果是创建一个列数为1的DataFrame,它的index和输入的Series相同,column名是Series的name。
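For instance (a minimal sketch, not taken from the original tutorial):
pd.DataFrame(pd.Series([1., 2., 3.], name='x'))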
其他的构造函数
DataFrame.from_dict
DataFrame.from_dict接受字典的字典或数组序列的字典,并返回一个DataFrame。
DataFrame.from_records
DataFrame.from_records接受一个元组列表或结构化类型的ndarray。例如:
End of explanation
pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])])
pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
orient='index', columns=['one', 'two', 'three'])
Explanation: DataFrame.from_items
End of explanation
df['one']
df['three'] = df['one'] * df['two']
df['flag'] = df['one'] > 2
df
Explanation: Column selection, addition, deletion
You can treat a DataFrame like a dict of Series: adding, deleting, and reassigning columns works with the same syntax as the analogous dict operations:
End of explanation
del df['two']
three = df.pop('three')
df
Explanation: Columns can be deleted or popped like with a dict
End of explanation
df['foo'] = 'bar'
df
Explanation: When inserting a scalar value, it will naturally be propagated to fill the column
End of explanation
df['one_trunc'] = df['one'][:2]
df
Explanation: When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame's index:
End of explanation
df.insert(1,'bar',df['one'])
df
Explanation: You can insert raw ndarrays, but their length must match the length of the DataFrame's index.
By default, columns get inserted at the end. The insert() function allows inserting at a particular position in the columns:
End of explanation
df
df.loc['b']
df.iloc[2]
Explanation: Assigning new columns in method chains
A DataFrame has an assign() method that creates new columns, potentially derived from existing columns.
iris = pd.read_csv('data/iris.data')
iris.head()
(iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength'])
.head())
iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] /
x['SepalLength'])).head()
(iris.query('SepalLength > 5')
.assign(SepalRatio = lambda x: x.SepalWidth / x.SepalLength,
PetalRatio = lambda x: x.PetalWidth / x.PetalLength)
.plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
Indexing / selection
The basics of indexing are as follows:
Operation | Syntax | Result
-----|------|----
Select column | df[col] | Series
Select row by label | df.loc[label] | Series
Select row by integer location | df.iloc[loc] | Series
Slice rows | df[5:10] | DataFrame
Select rows by boolean vector | df[bool_vec] | DataFrame
Row selection, for example, returns a Series whose index is the columns of the DataFrame:
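As a small extra sketch (not from the original tutorial), the slicing and boolean-selection rows of the table can be exercised on a hypothetical df_demo:
df_demo = pd.DataFrame({'A': [1, -2, 3, 4, -5, 6], 'B': range(6)})
df_demo[2:4]
df_demo[df_demo['A'] > 0]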
End of explanation
df = pd.DataFrame(np.random.randn(10,4),columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7,3),columns=['A', 'B', 'C'])
df + df2
Explanation: Data alignment and arithmetic
Data alignment between DataFrame objects automatically aligns on both the index and the columns. The index and columns of the result are the union of the indexes and columns of the DataFrames involved.
End of explanation
df - df.iloc[0]
Explanation: When doing an operation between a DataFrame and a Series, the default behavior is to align the Series index on the DataFrame columns, broadcasting the Series row-wise. For example:
End of explanation
index = pd.date_range('1/1/2000',periods=8)
df = pd.DataFrame(np.random.randn(8,3), index=index, columns=list('ABC'))
df
type(df['A'])
df - df['A']
df.sub(df['A'], axis=0)  # subtract the column explicitly, matching on the index
df * 5 + 2
1 / df
df ** 4
Explanation: In the special case of working with time series data, where the DataFrame index contains dates, the broadcasting is column-wise instead:
End of explanation
df1 = pd.DataFrame({'a' : [1, 0, 1], 'b' : [0, 1, 1] }, dtype=bool)
df2 = pd.DataFrame({'a' : [0, 1, 1], 'b' : [1, 1, 0] }, dtype=bool)
df1 & df2
df1 | df2
df1 ^ df2
-df1
Explanation: Boolean operators
End of explanation
df[:5].T
Explanation: Transposing
To transpose, access the T attribute:
End of explanation
df
np.exp(df)
np.asarray(df)
Explanation: DataFrame interoperability with NumPy functions
End of explanation
df.T.dot(df)
Explanation: The dot method implements matrix multiplication for DataFrames:
End of explanation
s1 = pd.Series(np.arange(5,10))
s1.dot(s1)
Explanation: Similarly, the dot method can also be used on Series:
End of explanation
df = pd.DataFrame({'foo1':np.random.randn(5),'foo2':np.random.randn(5)})
df
df.foo1
Explanation: Console display
In the console, very large DataFrames are shown truncated. You can also use info() for a concise summary.
baseball = pd.read_csv('data/baseball.csv')
print(baseball)
baseball.info()
However, using to_string, a DataFrame returns a tabular string representation of the whole frame, although it will not always fit the console width:
print(baseball.iloc[-20:, :12].to_string())
pd.DataFrame(np.random.randn(3, 12))
You can set the display.width option to change how much is printed on a single line.
pd.set_option('display.width', 40)
pd.DataFrame(np.random.randn(3, 12))
DataFrame column attribute access
If a DataFrame column label is a valid Python variable name, the column can be accessed like an attribute:
End of explanation |
6,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom generators
Step1: Custom generator without __init__ method
Step2: Explicitly setting the name of generated items
Let's repeat the previous example, but explicitly set the name of generated items by setting the __tohu_items_name__ attribute inside the custom generator.
Step3: The generated sequence is the same as above, but the name of the items has changed from Quux to Foobar.
Step4: Custom generator with __init__ method
Step5: Custom generator containing derived generators
Step6: Example
Step7: Example
Step8: Example | Python Code:
import tohu
from tohu.v4.primitive_generators import *
from tohu.v4.derived_generators import *
from tohu.v4.dispatch_generators import *
from tohu.v4.custom_generator import *
from tohu.v4.utils import print_generated_sequence, make_dummy_tuples
print(f'Tohu version: {tohu.__version__}')
Explanation: Custom generators
End of explanation
class QuuxGenerator(CustomGenerator):
aa = Integer(100, 200)
bb = HashDigest(length=6)
cc = FakerGenerator(method='name')
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
Explanation: Custom generator without __init__ method
End of explanation
class SomeGeneratorWithExplicitItemsName(CustomGenerator):
__tohu_items_name__ = 'Foobar'
aa = Integer(100, 200)
bb = HashDigest(length=6)
cc = FakerGenerator(method='name')
g = SomeGeneratorWithExplicitItemsName()
Explanation: Explicitly setting the name of generated items
Let's repeat the previous example, but explicitly set the name of generated items by setting the __tohu_items_name__ attribute inside the custom generator.
End of explanation
print_generated_sequence(g, num=10, sep='\n', seed=12345)
Explanation: The generated sequence is the same as above, but the name of the items has changed from Quux to Foobar.
End of explanation
class QuuxGenerator(CustomGenerator):
aa = Integer(100, 200)
def __init__(self, faker_method):
self.bb = FakerGenerator(method=faker_method)
# Note: the call to super().__init__() needs to be at the end,
# and it needs to be passed the same arguments as the __init__()
# method from which it is called (here: `faker_method`).
super().__init__(faker_method)
g1 = QuuxGenerator(faker_method='first_name')
g2 = QuuxGenerator(faker_method='city')
print_generated_sequence(g1, num=10, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=10, sep='\n', seed=12345)
Explanation: Custom generator with __init__ method
End of explanation
some_tuples = make_dummy_tuples('abcdefghijklmnopqrstuvwxyz')
#some_tuples[:5]
Explanation: Custom generator containing derived generators
End of explanation
class QuuxGenerator(CustomGenerator):
aa = SelectOne(some_tuples)
bb = GetAttribute(aa, 'x')
cc = GetAttribute(aa, 'y')
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
Explanation: Example: extracting attributes
End of explanation
def square(x):
return x * x
def add(x, y):
return x + y
class QuuxGenerator(CustomGenerator):
aa = Integer(0, 20)
bb = Integer(0, 20)
cc = Apply(add, aa, Apply(square, bb))
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
df = g.generate(num=100, seed=12345).to_df()
print(list(df['aa'][:20]))
print(list(df['bb'][:20]))
print(list(df['cc'][:20]))
all(df['aa'] + df['bb']**2 == df['cc'])
Explanation: Example: arithmetic
End of explanation
class QuuxGenerator(CustomGenerator):
name = FakerGenerator(method="name")
tag = SelectOne(['a', 'bb', 'ccc'])
g = QuuxGenerator()
quux_items = g.generate(num=100, seed=12345)
quux_items.to_df().head(5)
tag_lookup = {
'a': [1, 2, 3, 4, 5],
'bb': [10, 20, 30, 40, 50],
'ccc': [100, 200, 300, 400, 500],
}
class FoobarGenerator(CustomGenerator):
some_quux = SelectOne(quux_items)
number = SelectOneDerived(Lookup(GetAttribute(some_quux, 'tag'), tag_lookup))
h = FoobarGenerator()
h_items = h.generate(10000, seed=12345)
df = h_items.to_df(fields={'name': 'some_quux.name', 'tag': 'some_quux.tag', 'number': 'number'})
df.head()
print(df.query('tag == "a"')['number'].isin([1, 2, 3, 4, 5]).all())
print(df.query('tag == "bb"')['number'].isin([10, 20, 30, 40, 50]).all())
print(df.query('tag == "ccc"')['number'].isin([100, 200, 300, 400, 500]).all())
df.query('tag == "a"').head(5)
df.query('tag == "bb"').head(5)
df.query('tag == "ccc"').head(5)
Explanation: Example: multi-stage dependencies
End of explanation |
6,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 2016-10-07
Step2: 1. L1-Regularized Logistic Regression
Let us start with default parameters.
Step3: Question Compute the cross-validated predictions of the l1-regularized logistic regression with default parameters on our data.
Question Plot the corresponding ROC curve, and compare it to that obtained for non-regularized logistic regression.
Setting the C parameter
What does the C parameter correspond to? See the documentation at http
Step4: Question What criterion is used to chose the optimal C? See the documentation at http
Step5: Question Plot the corresponding ROC curve, and compare to that obtained for
* non-regularized logistic regression.
* l1-regularized logistic regression with default C parameter.
Regression weights
Remember the goal of l1-regularization is to build sparse models.
Step6: Question Compare the regression weights obtained with and without l1-regularization, in two side-by-side plots.
Step7: 2. L2-regularized logistic regression
Question What is the role of l2 regularization? | Python Code:
import numpy as np
%pylab inline
# Load the data as usual (here the code for Python 2.7)
X = np.loadtxt('data/small_Endometrium_Uterus.csv', delimiter=',', skiprows=1, usecols=range(1, 3001))
y = np.loadtxt('data/small_Endometrium_Uterus.csv', delimiter=',', skiprows=1, usecols=[3001],
converters={3001: lambda s: 0 if s=='Endometrium' else 1}, dtype='int')
# Set up a stratified 10-fold cross-validation
from sklearn import cross_validation
folds = cross_validation.StratifiedKFold(y, 10, shuffle=True)
# Create a function that does cross-validation and scales the features on each training set.
from sklearn import preprocessing
def cross_validate_with_scaling(design_matrix, labels, classifier, cv_folds):
    """Perform a cross-validation and return the predictions.
    Use a scaler to scale the features to mean 0, standard deviation 1.
    Parameters:
    -----------
    design_matrix: (n_samples, n_features) np.array
        Design matrix for the experiment.
    labels: (n_samples, ) np.array
        Vector of labels.
    classifier: sklearn classifier object
        Classifier instance; must have the following methods:
        - fit(X, y) to train the classifier on the data X, y
        - predict_proba(X) to apply the trained classifier to the data X and return probability estimates
    cv_folds: sklearn cross-validation object
        Cross-validation iterator.
    Return:
    -------
    pred: (n_samples, ) np.array
        Vector of predictions (same order as labels).
    """
pred = np.zeros(labels.shape) # vector of 0 in which to store the predictions
for tr, te in cv_folds:
# Restrict data to train/test folds
Xtr = design_matrix[tr, :]
ytr = labels[tr]
Xte = design_matrix[te, :]
#print Xtr.shape, ytr.shape, Xte.shape
# Scale data
scaler = preprocessing.StandardScaler() # create scaler
Xtr = scaler.fit_transform(Xtr) # fit the scaler to the training data and transform training data
Xte = scaler.transform(Xte) # transform test data
# Fit classifier
classifier.fit(Xtr, ytr)
# Predict probabilities (of belonging to +1 class) on test data
yte_pred = classifier.predict_proba(Xte) # two-dimensional array
# Identify the index, in yte_pred, of the positive class (y=1)
# index_of_class_1 = np.nonzero(classifier.classes_ == 1)[0][0]
index_of_class_1 = 1 - ytr[0] # 0 if the first sample is positive, 1 otherwise
pred[te] = yte_pred[:, index_of_class_1]
return pred
Explanation: 2016-10-07: Regularized Logistic Regression
In this lab, we will apply logistic regression to the Endometrium vs. Uterus cancer data.
Let us start by setting up our environment, loading the data, and setting up our cross-validation.
End of explanation
from sklearn import linear_model
clf = linear_model.LogisticRegression(penalty='l1')
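# One possible sketch for computing cross-validated predictions and an ROC curve
# with this classifier (not part of the original lab handout); it reuses the
# cross_validate_with_scaling helper and the `folds` object defined above.
from sklearn import metrics
ypred_l1_default = cross_validate_with_scaling(X, y, clf, folds)
fpr, tpr, thresholds = metrics.roc_curve(y, ypred_l1_default)
print "AUC (l1-regularized, default C): %.3f" % metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, label='l1 (default C)')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')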
Explanation: 1. L1-Regularized Logistic Regression
Let us start with default parameters.
End of explanation
from sklearn import grid_search
param_grid = {'C':[1e-3, 1e-2, 1e-1, 1., 1e2, 1e3]}
clf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l1'), param_grid)
Explanation: Question Compute the cross-validated predictions of the l1-regularized logistic regression with default parameters on our data.
Question Plot the corresponding ROC curve, and compare it to that obtained for non-regularized logistic regression.
Setting the C parameter
What does the C parameter correspond to? See the documentation at http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression for help.
Scikit-learn makes it really easy to use a nested cross-validation to choose a good value for C among a grid of several choices.
End of explanation
print clf.best_estimator_
Explanation: Question What criterion is used to choose the optimal C? See the documentation at http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html#sklearn.grid_search.GridSearchCV. Try changing this criterion http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter
Question Compute the cross-validated predictions of the l1-regularized logistic regression with optimized C parameter on our data.
GridSearchCV also uses the optimal parameter(s) it detected to fit a model to its entire training data again, generating a "best model" that is accessible via the best_estimator_ attribute.
In our case, because we called GridSearchCV from inside a cross-validation loop, clf.best_estimator_ is the "best model" on the last training fold.
End of explanation
# This code plots the regression weights of the classifier 'clf'
plt.plot(range(len(clf.best_estimator_.coef_[0])), clf.best_estimator_.coef_[0],
color='blue', marker='+', linestyle='')
plt.xlabel('Genes', fontsize=16)
plt.ylabel('Weights', fontsize=16)
plt.title('Logistic regression weights', fontsize=16)
plt.xlim([0, X.shape[1]])
Explanation: Question Plot the corresponding ROC curve, and compare to that obtained for
* non-regularized logistic regression.
* l1-regularized logistic regression with default C parameter.
Regression weights
Remember the goal of l1-regularization is to build sparse models.
End of explanation
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121) # use a 1x2 subplot grid; ax will refer to the 1st subplot
number_of_weights = #TODO
logreg_weights = #TODO
ax.plot(range(number_of_weights), logreg_weights,
color='blue', marker='+', linestyle='')
ax.set_xlabel('Genes', fontsize=16)
ax.set_ylabel('Weights', fontsize=16)
ax.set_title('Logistic regression weights', fontsize=16)
ax.set_xlim([0, X.shape[1]])
ax = fig.add_subplot(122) # use a 1x2 subplot grid; ax will refer to the 2nd subplot
l1_logreg_weights = #TODO
ax.plot(range(number_of_weights), l1_logreg_weights,
color='blue', marker='+', linestyle='')
ax.set_xlabel('Genes', fontsize=16)
ax.set_ylabel('Weights', fontsize=16)
ax.set_title('Regularized Logistic regression weights', fontsize=16)
ax.set_xlim([0, X.shape[1]])
plt.tight_layout()
Explanation: Question Compare the regression weights obtained with and without l1-regularization, in two side-by-side plots.
End of explanation
clf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l2'), param_grid)
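# A possible follow-up sketch (not part of the original lab): cross-validated
# predictions for the l2-regularized model, reusing the same helper and folds.
from sklearn import metrics
ypred_l2 = cross_validate_with_scaling(X, y, clf, folds)
fpr_l2, tpr_l2, thresholds_l2 = metrics.roc_curve(y, ypred_l2)
print "AUC (l2-regularized, optimized C): %.3f" % metrics.auc(fpr_l2, tpr_l2)
# Unlike the l1 penalty, the l2 penalty shrinks weights toward zero without
# setting them exactly to zero, so clf.best_estimator_.coef_ typically has no
# exact zeros; it controls model complexity rather than performing selection.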
Explanation: 2. L2-regularized logistic regression
Question What is the role of l2 regularization?
End of explanation |
6,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Carnegie Mellon Data Science Club
Step1: Introduction
Step2: Let us take a look at the dimension of this data frame. This is held in the shape attribute of the dataframe.
Step3: We see that there are 59946 profile observations in this dataset, which is a sizable amount of profiles to consider. We also see that each profile contains 31 features, many of which were transcribed by the original data collectors. As discussed in the metadata analysis, the language-oriented features are found in the essay variables. For now, let us consider the self summary variable of the profiles contained in the essay0 variable.
Step4: Let us first check to see if there are any missing values in this column. This will be important for when we want to use these summaries for predictive purposes.
Step5: We see that we have 5485 profiles without self-summaries. For the sake of considering only completed profiles up to the summary, we will filter out observations with NaN entries for essay0.
Step6: Searching and analyzing a single profile
The basis of natural language processing comes simply from analyzing a string. In this extent, it is natural to start out analysis by analyzing a single document, which is one observation in a language dataset. In this case, our document would be a single self-summary.
Step7: Since this is a string, we can read it by a simple print statement.
Step8: Figure 1
Step9: We see that the speaker refers to himself in terms of "i" 8 times in this self-summary. This is actually more reasonable than most people when referring to themselves, but let's try to extend this regular expression to other self-centered terms. We will now search for
[ \.,
Step10: We see that when we extend our search to include "me" as a possible pattern to recognize, we see that the number of self-referrals increases to 13. We can extend this to other aspects of the self-summary, and potentially more interesting patterns we want to find in the language.
Regular Expressions can also be used to substitute particular components of the summary for data cleaning purposes. For instance, let us alter the mistake of "simularities" as "similarities" in the above summary.
Step11: Figure 2
Step12: Figure 3
Step13: We see that 234 non-unique words occur in this document. These are referred to as tokens in NLP. Of those non-unique words, we see that 144 unique words occur in this document. These distinct words are referred to as types in NLP. The relationship between tokens and types form the shallow basis of complexity in sentences.
In this context, since the ratio of types to tokens is around .61, there is a surprising amount of diversity in vocabulary used in this document. Let us see what are the most common words used in this document. We will key words by count in the dictionary and then pick out the words found in the highest keys to get our most common words.
Step14: Figure 4
Step15: Now that we have a string of the entire ordered corpus, we will now filter to only include legal words in the English dictionary and remove any potential stopwords. Stopwords are words that do not indicate significance in understanding the contents of a sentence. These are typically words like "the", "to", "on." For an understanding of how language practitioners choose these stopwords, please see the reference materials.
Step16: The filteredWordCounter object is what we would call an ordered dictionary. A dictionary in python is a set of key-value pairs where given a particular key as an index, the dictionary will return its respective value. The counter object is a slight refinement of a dictionary
Step17: We see that we have 2479674 tokens that occur in our document corpus.We also have 23107 types that occur in our corpus. It is important to note that the number of types is much smaller than the number of tokens in this context, which suggests that our vocabulary is not extremely rich in this context.
Let us study the full word distribution. We will place the relative frequency of words (i.e. the density of the distribution), on the rank of the words, where the rank of a word is $i$ if that word is the $i$th most frequent word in the corpus.
We will use the pyplot package in the matplotlib library for visualizing this distribution. We use a decorator feature to ensure that our plots stay within the Jupyter environment. We will also use the numpy package to gain some mathematical functions, in particular the log method.
Step18: Figure 5
Step19: Table 1
Step20: According to our language model, these are dating profiles that can be generated. However, do these represent realistic forms of communication? Do these look like realistic self-summaries for dating profiles? It is apparent that the answer is no. This is an extremely unrealistic model for how our langauge is generated, most notably because each word is generated independently of each other in the document. Given the many phrasal dependencies that occur in real documents, this is not a realistic assumption to make. That being said, we will see how using this model of language actually performs reasonably well for the task of prediction.
Given that we use a unigram model of language, the only information I need to know to inform a documents likelihood of generation, i.e. $P(W),$ is simply the frequencies of the words that appear in the document. Thus, This leads to a bag of words encoding that maps each document $W$ to an encoding $D_{W}.$ The encoding $D_{W}$ is a vector of length $|V|$, where $V$ is the vocabulary of our corpus. If we define an ordering of words over our vocabulary, we can say that the $i$th component of $D_{W}$ is
$$D_{W,i} = \text{number of times word }i\text{ appears in document }W.$$
This defines the way we will encode the documents in our corpus.
Prediction with Language
One of the essential reasons why NLP is significant in data science is that language has often been shown to be an effective predictor of structural and environmental components in the world. In our case, we might be interested in seeing how language of a writer can help us predict the age of the writer. Let us first take a look at the distribution of ages available in this dataset.
Step21: Figure 6
Step22: The typical supervised learning task is initialized as such
Step23: Now that we have fit our model, let us see how well we are currently fitting our dataset. We will first test for accuracy of our model on the decision rule that if our predicted probability is above $.5$, we will predict that an individual is a millenial.
Step24: We see that our model is predicting accurately around $78.99\% \approx 80\%$ of the time on our current dataset. This means that on the dataset it is using to train, it makes a predictive mistake on average about $1$ for every $5$ predictions. Depending on the context, this might not be an ideal fit for the data. That being said, given that this is an accuracy rate built on a relatively naïve feature set (see Language Modeling), we are performing surprisingly well despite rather simple methods that we are using.
Let us now look at the confusion matrix of our predictions. A confusion matrix is a matrix that compares our predicted outcomes on our actual labels. In particular, row $i$ indicates instances where our model predicts label $i$, and column $j$ indicates instances where our model predicts label $j$. When put together, Cell $i,j$ of the confusion matrix contains the number of observations where we predict label $i$ on outcomes that are actually labeled $j$.
Step25: We see that we have about $7526$ observations that we predicted as not millenials (label $0$), but were actually millenials (label $1$). This is referred to as false negatives in a binary classification problem. We also see that we have about $3912$ observations that we predicted as millenials (label $1$), but were actually non-millenials (label $0$). This is referred to as false positives in a binary classification problem. In this context, we see that our false negative rate is slightly larger than our false positive rate in magnitude, but it is very important to note that this is a large portion of the millenials we are predicting incorrectly upon (about $\frac{7526}{7526 + 12201} \cdot 100 \approx 38\%$). To some degree, it's important to note that we have way fewer millenials in this dataset by our labeling hypothesis than non-millenials. This leads to what we call an imbalanced classes problem in binary classification. We will discuss more about the potential impact of this issue in our Next Questions section.
Let us take a look at the coefficients fit for our model.
Step26: When we compare this to the fact there are $\approx 22000$ words in our corpus, we see that approximately $76\%$ of the words we considered in this initial model have no predictive effect in our current fitted model. Thus, this is an extremely sparse model in terms of our coefficients, which shows the strength of the $L_1$ penalty. It is also important to tie this back to the original sparsity in our word distribution. Since the $400$ most frequent words take up most of our word distribution, we have many words that occur so rarely that they do not have any predictive effect.
Let us look at our non-zero coefficients. For the time we have, we will interpret the coefficients with the largest magnitude.
Step27: To interpret the meaning of these coefficients, let us look back at the mathematical representation of the model we fit
Step29: We see that our model predicts this person is a millenial, and when we look at the contents of the summary, it is not extremely surprising. We see that this individual enjoys video games (a trope of millenials), and discusses a lot about their nerd-oriented hobbies.
However, let us see how our model performs in a different context. Let's take a look at my old Tinder bio.
Step31: Interestingly, this summary makes an innaccurate prediction, since I am $21$ and yet it suggests that I am not a Millenial. That being said, given the many emoticons and proper nouns featured in my profile, it is likely that there are many words that were not picked up in the features since they were not in the original vocabulary.
Let us take a look at another individual's Tinder profile. | Python Code:
import warnings
warnings.filterwarnings("ignore")
Explanation: Carnegie Mellon Data Science Club : Practical Natural Language Processing
By Michael Rosenberg.
Description: This notebook contains an introduction to document analysis with OkCupid data. It is designed to be used at a workshop for introducing individuals to natural language processing.
Some Initial Setup
(Note: If you are using this notebook locally, you can likely skip this step. This step is primarily intended for individuals who want to skip installing all features and want to immediately work on this notebook in the cloud.)
Before we get started, we have a couple things to set up in case you would like to follow along without many installations.
I am using the IBM Data Science Experience, a platform used for collaborating on data analyses across an organization. We can visit the data science experience here: http://datascience.ibm.com
Set up an account by clicking "Sign Up" and moving through the workflow.
Once you are logged in, click the Object Storage section to provide available containers for loading in data assets.
Go to My Projects and click on the create project section to initialize a new project. Give your project a name and a description. We will be using python 2 for this workshop. The spark version does not really matter, since we will not be using spark actively. Give your data asset container a name.
You will now be within your new project dashboard. You can start a new notebook by clicking the add notebook button. We will be using python 2 for this workshop, as discussed before.
Also, to clear up potential warnings that we aren't too concerned with in this context, run the following lines of code:
End of explanation
import pandas as pd
okCupidFrame = pd.read_csv("data/JSE_OkCupid/profiles.csv")
Explanation: Introduction: What Is Natural Language Processing?
"Natural Language Processing", or NLP for the sake of having an acronym, is the combination of two subphrases.
"Natural Language" refers to a system of communication that has been crafted over the centuries by cultures, faiths, nations, and empires. This term is meant to differentiate it from a language that has been artificially crafted by one community (for example, python).
"Language Processing" refers to the methods and technologies developed to encode language, edit language, and interpret language. In artificial languages such as Java, processing the language can be translating a script into bytecode for compilation. In natural languages, this could be a translation method such as the Rosetta Stone.
Together, "Natural Language Processing" refers to the methods and technologies used to infer properties and components of languages in the spoken and written word. Some of these methods include:
Translating a sentence from English to French.
Having a personal assistant respond to your spoken phrases.
Having an artificial intelligence find the logical implications of a written statement found to be true.
Today, we will be working on a simpler method: using the language of a writer to predict the age of that writer.
A note for 15-112 Students
In Fundamentals of Programming and Computer Science (15-112), you are taught a way of writing python (and code in general) for the purpose of writing software. In industry, there are many reasons for writing code, and you may come across instances where you are not writing code specifically meant for the next iPhone app.
In this notebook, I will show you the ways of structuring code in a way that is designed for data analysis and model selection. Thus, there will be some coding practices that are somewhat different - and potentially discourage - in the 15-112 curriculum. In particular, the use of global variables is considered rather standard in the practice of data mining.
I would like you to be aware that some of these practices in this notebook may need adaptation before leveraged in your 15-112 projects. Most of all, remember to stay within style rules of a course when practicing data science.
<a id="metadataAnalysis" />
Metadata Analysis
Before beginning to explore a dataset for prediction and analysis, it is essential that you study how the data is generated in order to inform your exploration.
Our dataset is a set of dating profiles from OkCupid, a dating website that targets young adults in a way that parallels dating websites such as Match.com and eHarmony. The data was transcribed from a web scrape and presented as a dataset designed for introductory statistics and data science courses.
This dataset has many features, but according to the documentation, the essay variables contain all language data inputted by users of OkCupid and the age variable contains the age of the user. For the sake of simplification in our analysis, we will limit our analysis the self-summary variable (essay0) and the age variable (age).
Things that we should note:
This dataset comes from web scrape, and while transcribed to some degree into a data frame by the data collectors, it is likely to be filled with many assets from the web.
This dataset is primarily user-inputted. This may mean that we will see spelling mistakes related to human error, and that we will see a vocabulary created by the users rather than by the platform owner (OkCupid).
Scanning a document
Let us start by loading in the dataset of profiles. This is a .csv file, which stands for Comma-Separated Values. If we take a look at the text representation of the dataset, we see that there is a set of column keys in the first row of the .csv file, and each row below it refers to a filled-in observation of the dataset. In this context, a "filled-in observation" is a transcribed OkCupid profile.
Typically, we can load in a .csv file using the csv package available in base Python. However, for the sake of having a more elegant coding process, I generally use the pandas package to manipulate large dataframes. You can refer to the reference materials for instructions on how to install pandas.
End of explanation
numRows = okCupidFrame.shape[0]
numCols = okCupidFrame.shape[1]
print "Number of Rows:", numRows
print "Number of Columns:", numCols
Explanation: Let us take a look at the dimension of this data frame. This is held in the shape attribute of the dataframe.
End of explanation
selfSummaries = okCupidFrame["essay0"]
Explanation: We see that there are 59946 profile observations in this dataset, which is a sizable amount of profiles to consider. We also see that each profile contains 31 features, many of which were transcribed by the original data collectors. As discussed in the metadata analysis, the language-oriented features are found in the essay variables. For now, let us consider the self summary variable of the profiles contained in the essay0 variable.
End of explanation
#make conditional on which summaries are empty
emptySections = selfSummaries[selfSummaries.isnull()]
numNullEntries = emptySections.shape[0]
print "The number of entries with null self-summaries is", numNullEntries
Explanation: Let us first check to see if there are any missing values in this column. This will be important for when we want to use these summaries for predictive purposes.
End of explanation
#get observations with non-null summaries
filteredOkCupidFrame = okCupidFrame[okCupidFrame["essay0"].notnull()]
#then reobtain self summaries
selfSummaries = filteredOkCupidFrame["essay0"]
Explanation: We see that we have 5485 profiles without self-summaries. For the sake of considering only completed profiles up to the summary, we will filter out observations with NaN entries for essay0.
End of explanation
consideredSummary = selfSummaries[0]
Explanation: Searching and analyzing a single profile
The basis of natural language processing comes simply from analyzing a string. In this extent, it is natural to start out analysis by analyzing a single document, which is one observation in a language dataset. In this case, our document would be a single self-summary.
End of explanation
print consideredSummary
Explanation: Since this is a string, we can read it by a simple print statement.
End of explanation
import re #regular expression library in base Python
#let us compile this for search
iRe = re.compile("[ \.,:;?!\n]i[ \.,:?!\n]")
#then find all the times it occurs in the summary
iObservanceList = iRe.findall(consideredSummary)
print iObservanceList
numIs = len(iObservanceList)
Explanation: Figure 1: A self-summary of an individual in our dataset.
We can see a couple of things just from looking at this profile:
This man sounds extremely pretentious.
There are some misspellings due to the user-inputted aspects in this self-summary; most notably, the word "simularities" should probably be "similarities."
There are several br tags within the document that do not add information to our understanding of the document. These tags are primarily for OkCupid to display the self-summary properly on their website.
Thus, before we analyze this dataset, we need to do some data cleansing.
Cleaning and searching with Regular Expression (regex)
Regular Expression is defined as a sequence of characters that defines a search pattern. This search pattern is used to "find" and "find and replace" certain information in strings through string search algorithms. To give an example, say that I am interested in quantifying the narcissism found in the self-summary above. Perhaps I am interested in the number of times that "i" shows up in the summary. We represent this with the simple regular expression search query that accounts for the letter $i$ and then accounts for all potential punctuation that usually follows a lone $i$:
[ \.,:;?!\n]i[ \.,:;?!\n$]
This expression looks for some measure of punctuation, then an $i$, and then looks for a potential followup punctuation to indicate that it is a lone $i$. These measures of punctuation can be a space, period, comma, colon, semi-colon, question mark, exclamation point, or an end-of-line marker ($).
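As a toy illustration on a made-up string (not from the dataset): re.findall("[ \.,:;?!\n]i[ \.,:?!\n]", "why i try, but i digress") returns two matches, one for each stand-alone "i", while the "i" inside "digress" is ignored because it is not surrounded by spaces or punctuation.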
End of explanation
selfCenteredRe = re.compile("[ \.,:;?!\n](i|me)[ \.,:?!\n]")
#find all observations of this regular expression
selfObsList = selfCenteredRe.findall(consideredSummary)
print selfObsList
numNarcissisticWords = len(selfObsList)
Explanation: We see that the speaker refers to himself in terms of "i" 8 times in this self-summary. This is actually more reasonable than most people when referring to themselves, but let's try to extend this regular expression to other self-centered terms. We will now search for
[ \.,:;?!\n](i|me)[ \.,:?!\n]
The | symbol represents an or operator for in a section. In this context, this regular expression is looking for either "i" or "me" followed by some punctuation in order to identify lone observations of i and me instead of appendages on other words (for example, i in intellectual and me in meandering).
End of explanation
#make the re
simRe = re.compile("simularities")
#then perform a sub
filteredSummary = simRe.sub("similarities",consideredSummary)
print filteredSummary
Explanation: We see that when we extend our search to include "me" as a possible pattern to recognize, we see that the number of self-referrals increases to 13. We can extend this to other aspects of the self-summary, and potentially more interesting patterns we want to find in the language.
Regular Expressions can also be used to substitute particular components of the summary for data cleaning purposes. For instance, let us alter the mistake of "simularities" as "similarities" in the above summary.
End of explanation
tagRe = re.compile("<.*>")
filteredSummary = tagRe.sub("",filteredSummary)
print filteredSummary
Explanation: Figure 2: The filtered summary after changing the stated spelling issue.
As we can see, "simularities" was changed to "similarities" without us having to find the exact beginning and ending indices for the "simularities" mistake. We can continue this cleaning by altering an even larger interpretation issue: the br tags. These tags are primarily used for OkCupid to understand how to display the text, but they generally are not informative to the summary itself.
We will remove these by building the regular expression
<.*>
The . is meant to represent any character available in the ASCII encoding framework. the * is meant to represent "0 or more observations of the prior character or expression." In this case, this regular expression is asking to find strings that start with "<" and end with ">" and feature any number of characters in between "<" and ">."
End of explanation
#make our split
sumWordList = re.split(" |\(|\)|\.|\n|,|:|;",filteredSummary)
print sumWordList
#filter to have non-degenerate words
filSumWordList = []
for word in sumWordList:
if (len(word) > 0): filSumWordList.append(word)
print filSumWordList
#this gave us number of words, let us get number of unique words
numWords = len(filSumWordList)
numUniqueWords = len(set(filSumWordList))
print "The length of the document is", numWords
print "The number of unique words is", numUniqueWords
Explanation: Figure 3: Our filtered summary after all br tags have been removed.
As we can see, we have cleaned the summary to a point where there are no tags whatsoever in the text. We can then use this edited summary within the main dataset. This process is essentially a form of data cleansing with text.
If you would like to learn more about regex, see the links in the reference materials.
Word Analysis on Document
Typically, we are interested in the terms and expressions used by a person in a document. The atom of a document to some degree is a word, and so it seems appropriate to start analyzing the types of words used in this document.
We will first do a "tokenization" of this document. Tokenization is when you simplify a document to just the sequence of words that make up the document. This requires us to split our document into a list based on certain punctuation marks (such as a ., a new-line character, or a ,) and spaces, and then filtering our list into non-degenerate words (i.e. not the null string "").
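As a toy illustration of why the second filtering step is needed (a made-up string, not from the dataset): re.split(" |\.", "hi. there") returns ['hi', '', 'there']; the empty string appears because the period is immediately followed by a space, which is why we drop zero-length tokens after splitting.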
End of explanation
#creating a map from i : list of words with frequency i
wordCountDict = {}
for word in set(filSumWordList): #look through unique words
numOccurences = 0
for token in filSumWordList:
if (word == token): #that means we have seen an occurences
numOccurences += 1
#after running through our word list, let us add it to the dictionary keyed
#by count
if (numOccurences in wordCountDict): #already have the list started
wordCountDict[numOccurences].append(word)
else: #start the list
wordCountDict[numOccurences] = [word]
#print this out by key
for count in wordCountDict:
print count, ":", wordCountDict[count]
Explanation: We see that 234 non-unique words occur in this document. These are referred to as tokens in NLP. Of those non-unique words, we see that 144 unique words occur in this document. These distinct words are referred to as types in NLP. The relationship between tokens and types form the shallow basis of complexity in sentences.
In this context, since the ratio of types to tokens is around .61, there is a surprising amount of diversity in vocabulary used in this document. Let us see what are the most common words used in this document. We will key words by count in the dictionary and then pick out the words found in the highest keys to get our most common words.
End of explanation
#imports discussed
import nltk #for relevant corpora
import collections as co #for ordered dictionary
import StringIO #for string manipulation
#create a writeable string object
stringWriteTerm = StringIO.StringIO()
#write all summaries to the string write term
for essay in filteredOkCupidFrame["essay0"]:
stringWriteTerm.write(essay)
#add some space for the next essay
stringWriteTerm.write("\n")
#get the full string from the writer
summaryString = stringWriteTerm.getvalue()
stringWriteTerm.close()
#lower the lettering
summaryString = summaryString.lower()
numCharsToView = 2000
print summaryString[0:numCharsToView]
#then split into a word list
summaryWordList = re.split("\.| |,|;|:|-|\n|<.*>",summaryString)
Explanation: Figure 4: Our words keyed by their frequency in the document.
We see that personal references to "you" and "i" seem to show up frequently, and words such as "love" and "me" also have a sizable number of occurrences. However, we do notice that there are a lot of words that show up generally infrequently, and only a small number of words that repeat themselves at the very least. We will see this behavior in our word distribution occur in our main corpus in the next section.
<a id="summaryStatistics" />
Summary Statistics on a corpus
We can then extend our analysis from the single document level to a multi-document level. In NLP, we refer to a collection of documents as a corpus. If you study this practice further, you will see that a collection of collections of documents is referred to as corpora.
We will start our macro-analysis by simply counting the words that occur in all self-summaries and studying the commonality of certain words in these OkCupid profiles.
We will import several packages that may be unfamiliar. nltk is the Natural Language Toolkit, which is perhaps the most important package in modern programming for language analysis and document processing. collections allows us to use data structures that not only provides us with a dictionary-like object, but also keeps track of some measure of order in our dictionary; in our case, it allows us to keep track of the most frequent words in our corpus. StringIO gives us a flexible mechanism to write the entire corpus (all self-summaries) out into one readable string.
End of explanation
#for downloading datasets
nltk.download()
#d
#words
#d
#stopwords
#q
#get legal words and get stop words
legalWordSet = set(nltk.corpus.words.words())
stopWordSet = set(nltk.corpus.stopwords.words())
#then make our filtration
filteredSumWordList = [word for word in summaryWordList
if word in legalWordSet and
word not in stopWordSet]
numWordsToView = numCharsToView
#print filteredSumWordList[0:numWordsToView]
#then count the frequency of words in a collection
filteredWordCounter = co.Counter(filteredSumWordList)
Explanation: Now that we have a string of the entire ordered corpus, we will now filter to only include legal words in the English dictionary and remove any potential stopwords. Stopwords are words that do not indicate significance in understanding the contents of a sentence. These are typically words like "the", "to", "on." For an understanding of how language practitioners choose these stopwords, please see the reference materials.
End of explanation
numNonDistinctWords = sum(filteredWordCounter.values())
numDistinctWords = len(filteredWordCounter.keys())
print "Number of non-distinct words is", numNonDistinctWords
print "Number of distinct words is ", numDistinctWords
Explanation: The filteredWordCounter object is what we would call an ordered dictionary. A dictionary in python is a set of key-value pairs where given a particular key as an index, the dictionary will return its respective value. The counter object is a slight refinement of a dictionary: it defines an ordering over the keys based on their integer values. In our case, given the list of strings it sees, it orders said strings by their frequency in the list. This allows us to both capture the number of words in the list (namely, the sum of the values) and the number of distinct words in the list (namely, the number of keys).
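As a toy illustration (made-up words): co.Counter(["love", "love", "food"]).most_common() returns [('love', 2), ('food', 1)]; summing the .values() gives the 3 total tokens, and the number of keys gives the 2 distinct types.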
End of explanation
import matplotlib.pyplot as plt #for plotting
#for inline plotting with Jupyter notebook
%matplotlib inline
import numpy as np #for some mathematical features
#make series of word frequency ordered by most common words
#print filteredWordCounter.most_common()
#make series of word frequency ordered by most common words
wordFrequencyFrame = pd.DataFrame(filteredWordCounter.most_common(),
columns = ["Word","Frequency"])
wordFrequencyFrame["Density"] = (wordFrequencyFrame["Frequency"] /
sum(wordFrequencyFrame["Frequency"]))
print wordFrequencyFrame
#then plot rank-density plot
#for the sake of easier visuals, we will log the rank
desiredLineWidth = 3
plt.plot(np.log(wordFrequencyFrame.index+1),wordFrequencyFrame["Density"],
lw = desiredLineWidth)
plt.xlabel("Log-Rank")
plt.ylabel("Density")
plt.title("Log(Rank)-Density Plot\nFor Words in our Summary Corpus")
Explanation: We see that we have 2479674 tokens that occur in our document corpus. We also have 23107 types that occur in our corpus. It is important to note that the number of types is much smaller than the number of tokens in this context, which suggests that our vocabulary is not extremely rich in this context.
Let us study the full word distribution. We will place the relative frequency of words (i.e. the density of the distribution), on the rank of the words, where the rank of a word is $i$ if that word is the $i$th most frequent word in the corpus.
We will use the pyplot package in the matplotlib library for visualizing this distribution. We use a decorator feature to ensure that our plots stay within the Jupyter environment. We will also use the numpy package to gain some mathematical functions, in particular the log method.
End of explanation
#grab top ten words
topLev = 10
topTenWordFrame = wordFrequencyFrame.iloc[0:topLev,:].loc[:,
["Word","Frequency"]]
#then display
from IPython.display import display, HTML
display(HTML(topTenWordFrame.to_html(index = False)))
Explanation: Figure 5: Distribution of our words on the $\log(Rank)$ of the words.
We see that while we have around $e^{10} \approx 22000$ words in our distribution, the density peters out after the $e^6 \approx 400$ most frequent words. This suggests that we have many words that occur extremely rarely in our dataset and only a few words that occur relatively often. This is referred to as a distribution that behaves as a Zipfian distribution, which is a type of distribution where you have a few events that occur very often and many events that occur rarely. This distribution is fundamental to understanding the language-generating process within NLP.
Let us see what the top $10$ most frequent words are. We will import some visualization features to embed a table within our Jupyter notebook.
End of explanation
#build bag of words distribution
bow = []
for word in filteredWordCounter:
#make observation list by multiplying by count
wordObsList = [word] * filteredWordCounter[word]
#then extend bow as such
bow.extend(wordObsList)
#print bow
#build sampling function
import random #for sampling
def generateLanguage(numWords,bow):
#helper that generates our language using out bag of words
newPhraseList = []
for i in xrange(numWords):
#sample a word from our distribution
newWordList = random.sample(bow,1)
newPhraseList.extend(newWordList)
newPhrase = " ".join(newPhraseList)
return newPhrase
#some examples
sentenceLenList = [2,4,8,16]
for sentenceLen in sentenceLenList:
print generateLanguage(sentenceLen,bow)
Explanation: Table 1: Our top ten most frequent words by their count frequency.
We see that words that display some level of affection are emphasized, such as "like" and "love." Words with positive connotations also seem to occur often, as shown by the frequency of "good" and "enjoy." There is also a sense of discovery of new individuals that pervades our vocabulary, as shown by the occurrence of words such as "looking", "new", and "people."
<a id="languageModels" />
Language Models
Before we begin trying to predict the age of a writer, we must first introduce the way we use language in the prediction process. Features are the components of your data you use to predict your outcome variable. In this context, our features are the language, and our outcome is age. However, how do I go about encoding language as a feature? This is the fundamental question of feature engineering.
In NLP, feature engineering often boils down to a way of representing each document in a corpus as some numerical vector. This numerical vector is referred to as the encoding of a document. Some would argue that encoding is purely for practical purposes, and would argue that an encoding of a document is arbitrary. However, NLP practitioners have shown how encodings place important assumptions on how a document was generated. The way we represent a document-generating process is referred to as language modeling.
Language modeling could be a lecture unto itself (which may be a good potential followup). However, for the time being, we will describe one of the most simple models of language generation: the unigram.
Let us describe a document of length $n$ as a sequence of words
$$W = (w_1,w_2,...,w_n).$$
A simple way to describe how this document is generated is through some form of distribution over the words that exist in the document. The unigram model assumes that every word in the sequence is generated independently of the words before it and the words after it. For example, consider the second word that occurs in the sequence $w_2$. What we are stating is that the probability of seeing $w_2$ occur in the document is independent of whatever other words I see in this document. Mathematically,
$$P(w_2 | w_1,w_3,...,w_n) = P(w_2)$$
This essentially leads to the implication that the probability of seeing a document is equal to the product of the likelihood of each word occurring in this document:
$$P(W) = P(w_1,w_2,...,w_n) = P(w_1)P(w_2)...P(w_n) = \prod_{i = 1}^n P(w_i).$$
In this extent, the sufficient information we need to know to explain how a document is generated is just a probability distribution over all words in our vocabulary.
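As a toy illustration with made-up probabilities: if $P(\text{love}) = 0.5$, $P(\text{good}) = 0.3$, and $P(\text{food}) = 0.2$, then the two-word document $W = (\text{food}, \text{love})$ has likelihood $P(W) = P(\text{food})P(\text{love}) = 0.2 \cdot 0.5 = 0.1$, and any reordering of those two words has the same likelihood.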
Let us simulate how this model generates language. We will take our current word distribution (see Summary Statistics) and simulate a process of pulling our language from a hat containing that distribution. This is why this language model is referred to commonly as a bag of words.
End of explanation
#make a histogram
newPlot = plt.hist(filteredOkCupidFrame["age"])
plt.xlabel("Age of User")
plt.ylabel("Frequency")
plt.title("Distribution of User Ages")
Explanation: According to our language model, these are dating profiles that can be generated. However, do these represent realistic forms of communication? Do these look like realistic self-summaries for dating profiles? It is apparent that the answer is no. This is an extremely unrealistic model for how our language is generated, most notably because each word is generated independently of each other in the document. Given the many phrasal dependencies that occur in real documents, this is not a realistic assumption to make. That being said, we will see how using this model of language actually performs reasonably well for the task of prediction.
Given that we use a unigram model of language, the only information I need to know to inform a document's likelihood of generation, i.e. $P(W),$ is simply the frequencies of the words that appear in the document. Thus, this leads to a bag of words encoding that maps each document $W$ to an encoding $D_{W}.$ The encoding $D_{W}$ is a vector of length $|V|$, where $V$ is the vocabulary of our corpus. If we define an ordering of words over our vocabulary, we can say that the $i$th component of $D_{W}$ is
$$D_{W,i} = \text{number of times word }i\text{ appears in document }W.$$
This defines the way we will encode the documents in our corpus.
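As a toy illustration: under the vocabulary ordering $V = (\text{love}, \text{good}, \text{food})$, the made-up document "love food love" is encoded as $D_W = (2, 0, 1)$; word order is discarded and only the counts remain.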
Prediction with Language
One of the essential reasons why NLP is significant in data science is that language has often been shown to be an effective predictor of structural and environmental components in the world. In our case, we might be interested in seeing how language of a writer can help us predict the age of the writer. Let us first take a look at the distribution of ages available in this dataset.
End of explanation
#generate the variable from our stated cutoff
filteredOkCupidFrame["isMillenial"] = 0
millenialCutoff = 28
filteredOkCupidFrame.loc[filteredOkCupidFrame["age"] < millenialCutoff,
"isMillenial"] = 1
Explanation: Figure 6: Distribution of the ages of users in this dataset.
We see that there is a large concentration of users in the $[20,30]$ age range and a slight tail of users extending into older ages. This is expected due to the fact that OkCupid specifically targets young adults for its main user base.
We are interested in seeing if we can use the language content in the summary of a user to predict some measurement of age. For the sake of simplification, let us try to predict whether someone is within the millenial age range. For a beginner's prediction problem, it is much easier to deal with predicting a binary outcome than a continuous scale of outcomes. For the sake of further simplification, we will assume that a millenial is anyone under the age of $28$.
End of explanation
#import our count vectorizer
from sklearn.feature_extraction.text import CountVectorizer
#make a vocab dictionary
counterList = filteredWordCounter.most_common()
vocabDict = {}
for i in xrange(len(counterList)):
rankWord = counterList[i][0]
vocabDict[rankWord] = i
#initialize vectorizer
vectorizer = CountVectorizer(min_df=1,stop_words=stopWordSet,
vocabulary = vocabDict)
#then fit and transform our summaries
bagOfWordsMatrix = vectorizer.fit_transform(filteredOkCupidFrame["essay0"])
#print bagOfWordsMatrix
#get language frame
langFrame = pd.DataFrame(bagOfWordsMatrix.toarray(),
columns = vectorizer.get_feature_names())
#display(langFrame)
#import linear model
import sklearn.linear_model as lm
#build model
initialLinearMod = lm.LogisticRegression(penalty = "l1")
initialLinearMod.fit(langFrame,filteredOkCupidFrame["isMillenial"])
Explanation: The typical supervised learning task is initialized as such:
Given a target variable $Y$ and a feature set $X$, we are looking for a function $f$ such that
$$f : X \rightarrow Y.$$
In the task of regression, $Y$ is some continuous set of variables (typically $\mathbb{R}$). In the task of classification, $Y$ is some discrete set of variables. In our case, $Y = {0,1},$ where $1$ represents the instance where someone is a millenial and $0$ represents the instance where someone is not a millenial. This would be considered a binary classification task.
For this task, we will introduce a model (or, to some degree, a family of functions) referred to as logistic regression. Say that we are interested in estimating the probability that one is a millenial given our feature set, i.e.
$$P(isMillenial|X).$$
Logistic regression assumes that this function takes on a sigmoidal form:
$$P(isMillenial|X) = \frac{1}{1 + e^{-r(X)}},$$
where $r : X \rightarrow \mathbb{R}$ is some regression function. This function $r$ is the main reason why logistic regression is referred to as regression despite being used for classification purposes.
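As a quick check of the formula: $r(X) = 0$ gives $P(isMillenial|X) = \frac{1}{1 + e^{0}} = 0.5$, large positive values of $r(X)$ push the probability toward $1$, and large negative values push it toward $0$.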
Our model takes an additional assumption: that $r$ is a function that is linear in our feature set. Take $X_i$ to be the frequency of word $i$ in our document, $i \in V.$ We see that
$$r(X) = \sum_{i = 1}^{|V|} \beta_i X_i,$$
where $\beta_i$ is some weight, $i \in V.$ The main question that we want to know is, what are the optimal $\beta_i$'s to fit this model?
We won't have enough time to discuss the training algorithm used to find our weights for $r$, but for the time being, we will say that we will train this logistic regression with an objective function featuring an $L_1$ penalty. An objective function is some function that we either minimize or maximize in a way to find our weights. In that sense, the objective function is some measurement as to how well our model is performing relative to other models. We would prefer this objective function to be one that measures how well our model is fitting the data. The $L_1$ penalty is a component of the objective function that penalizes models that place to large of weights. To some degree, this allows our objective function to prefer models that fit the data well, but penalize models that are overly complex and assign too many weights to too many features. The $L_1$ penalty is useful when our feature set is very large.
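Schematically (one common way of writing it, not scikit-learn's exact parameterization), the fit chooses the weights that minimize a penalized loss of the form
$$\sum_{j} \ell\big(y_j, r(X_j)\big) + \lambda \sum_{i = 1}^{|V|} |\beta_i|,$$
where $\ell$ is the logistic loss on each training document and larger values of $\lambda$ force more of the $\beta_i$ to be exactly $0$.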
We will use Scikit-learn (sklearn) to build the bag-of-words representation of our summaries discussed in the previous section. We will then fit a logistic regression with an $L_1$ penalty as our predictive pipeline.
End of explanation
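# A quick illustration of the sigmoid formula above, assuming the fitted
# initialLinearMod and langFrame from the previous cells are still in scope.
# Note that sklearn also fits an intercept term, which the formula above
# omits for simplicity.
linearScore = langFrame.values.dot(initialLinearMod.coef_[0]) + initialLinearMod.intercept_[0]
manualProbs = 1.0 / (1.0 + np.exp(-linearScore))
# these should match the probabilities sklearn reports for the millenial class
print(manualProbs[:5])
print(initialLinearMod.predict_proba(langFrame)[:5, 1])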
#make predictions
predictionVec = initialLinearMod.predict(langFrame)
filteredOkCupidFrame["predictedLabel"] = list(predictionVec)
#print filteredOkCupidFrame["predictedLabel"]
#then test for accuracy
accurateFrame = filteredOkCupidFrame[filteredOkCupidFrame["isMillenial"] ==
filteredOkCupidFrame["predictedLabel"]]
accuracy = float(accurateFrame.shape[0]) / filteredOkCupidFrame.shape[0]
print accuracy
Explanation: Now that we have fit our model, let us see how well we are currently fitting our dataset. We will first test the accuracy of our model under the decision rule that if the predicted probability is above $.5$, we predict that an individual is a millenial.
End of explanation
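# As a sanity check on the manual accuracy computation above, the same number
# can be obtained with sklearn's helper (assuming the frames from the previous
# cells are still in scope).
from sklearn.metrics import accuracy_score
print(accuracy_score(filteredOkCupidFrame["isMillenial"],
                     filteredOkCupidFrame["predictedLabel"]))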
#confusion matrix
confusionMat = pd.DataFrame({0:[0,0],1:[0,0]})
#get our indices
rowIndices = list(confusionMat.index)
colIndices = list(confusionMat.columns)
#then run through theses
for row in rowIndices:
for col in colIndices:
#grab observations with row predictedLabel and col actual label
givenObs = filteredOkCupidFrame[
(filteredOkCupidFrame["predictedLabel"] == row) &
(filteredOkCupidFrame["isMillenial"] == col)]
#then just get the number of observations in this situation
numObs = givenObs.shape[0]
#then store is
confusionMat.loc[row,col] = numObs
#then display our confusion matrix
display(confusionMat)
Explanation: We see that our model is predicting accurately around $78.99\% \approx 80\%$ of the time on our current dataset. This means that on the dataset it was trained on, it makes a predictive mistake on average about $1$ in every $5$ predictions. Depending on the context, this might not be an ideal fit for the data. That being said, given that this accuracy rate is built on a relatively naïve feature set (see Language Modeling), we are performing surprisingly well for the rather simple methods we are using.
Let us now look at the confusion matrix of our predictions. A confusion matrix is a matrix that compares our predicted outcomes against the actual labels. In particular, row $i$ indicates instances where our model predicts label $i$, and column $j$ indicates instances whose actual label is $j$. Put together, cell $i,j$ of the confusion matrix contains the number of observations where we predict label $i$ on outcomes that are actually labeled $j$.
End of explanation
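# The same table can be cross-checked with sklearn's helper. Note that sklearn
# uses rows = actual label and columns = predicted label, i.e. the transpose of
# the layout described above (this assumes the frames from the previous cells).
from sklearn.metrics import confusion_matrix
print(confusion_matrix(filteredOkCupidFrame["isMillenial"],
                       filteredOkCupidFrame["predictedLabel"]))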
#look at coefficients
coefVec = initialLinearMod.coef_[0] #because logistic regression syntax
coefFrame = pd.DataFrame({"Feature Name": vectorizer.get_feature_names(),
"Coefficient":coefVec})
#get number of 0 coefficients
zeroCoefFrame = coefFrame[coefFrame["Coefficient"] == 0]
numZeroCoeffs = zeroCoefFrame.shape[0]
print "The number of betas that equal 0 are", numZeroCoeffs
Explanation: We see that we have about $7526$ observations that we predicted as not millenials (label $0$), but were actually millenials (label $1$). These are referred to as false negatives in a binary classification problem. We also see that we have about $3912$ observations that we predicted as millenials (label $1$), but were actually non-millenials (label $0$). These are referred to as false positives in a binary classification problem. In this context, our false negative count is somewhat larger than our false positive count, and it is very important to note that we are predicting incorrectly on a large portion of the millenials (about $\frac{7526}{7526 + 12201} \cdot 100 \approx 38\%$). It is also important to note that, by our labeling hypothesis, we have far fewer millenials in this dataset than non-millenials. This leads to what we call a class imbalance problem in binary classification. We will discuss more about the potential impact of this issue in our Next Questions section.
Let us take a look at the coefficients fit for our model.
End of explanation
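# Rough false negative / false positive rates implied by the confusion matrix
# built above (rows = predicted label, columns = actual label).
falseNegRate = float(confusionMat.loc[0, 1]) / (confusionMat.loc[0, 1] + confusionMat.loc[1, 1])
falsePosRate = float(confusionMat.loc[1, 0]) / (confusionMat.loc[1, 0] + confusionMat.loc[0, 0])
print("False negative rate (missed millenials): {:.3f}".format(falseNegRate))
print("False positive rate (misflagged non-millenials): {:.3f}".format(falsePosRate))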
#consider nonzero coefficients
nonZeroCoeffFrame = coefFrame[coefFrame["Coefficient"] != 0]
#get absolute magnitude of coefficients
nonZeroCoeffFrame["absCoeff"] = np.abs(nonZeroCoeffFrame["Coefficient"])
#order by absolute value
nonZeroCoeffFrame = nonZeroCoeffFrame.sort_values("absCoeff",
ascending = False)
#then display values
#display(HTML(nonZeroCoeffFrame.to_html(index = False)))
Explanation: When we compare this to the fact that there are $\approx 22000$ words in our corpus, we see that approximately $76\%$ of the words we considered in this initial model have no predictive effect in our current fitted model. Thus, this is an extremely sparse model in terms of our coefficients, which shows the strength of the $L_1$ penalty. It is also important to tie this back to the original sparsity in our word distribution. Since the $400$ most frequent words take up most of our word distribution, we have many words that occur so rarely that they do not have any predictive effect.
Let us look at our non-zero coefficients. For the time we have, we will interpret the coefficients with the largest magnitude.
End of explanation
millenialProfileInd = 17 #checked this earlier
print "The age of this profile is :", filteredOkCupidFrame.loc[millenialProfileInd,"age"]
print "Their Self-Summary:"
print
givenEssay = filteredOkCupidFrame.loc[millenialProfileInd,"essay0"]
print givenEssay
#make our predictions
newBOW = vectorizer.transform([givenEssay])
newPrediction = initialLinearMod.predict(newBOW)
print
if (newPrediction[0] == 1):
print "Our Model Predicts this person is a millenial"
else:
print "Our Model Predicts this person is not a millenial"
Explanation: To interpret the meaning of these coefficients, let us look back at the mathematical representation of the model we fit:
$$P(isMillenial | X) = \frac{1}{1 + e^{-\sum_{i = 1}^{|V|} \beta_i X_i}}.$$
We see that if our coefficient for word $i$ ($\beta_i$) is positive, this makes our denominator smaller and pushes our prediction closer to the $isMillenial$ direction. Similarly, if the coefficient is negative, this will make our denominator bigger and push our predictions away from the $isMillenial$ direction.
We see that "pregnancy" seems to be a word that has the highest magnitude in its coefficient, which suggests that pregnancy seems to be an important indicator for whether the profile writer is a millenial. This is a bit peculiar, given that pregnancy seems to be something that is uninteresting to younger millenials who tend to not be interested in "settling down" any time soon. That being said, we see that "unyielding" and "unsatisfied" seem to also be strong indicators of whether a writer is a millenial or not. To some degree, this paints an interesting narrative: that millenials can often look at relationships as a new experience, something where they can have "unyielding" discover of themselves compared to a lack of satisfaciton in previous relationships. This would then beg the question as to whether these are also focuses for non-millenials.
That being said, it is often useful to look at a set of example predictions to see how flexible our model might be in other dating contexts. Let's take a look at one profile we accurately predict is a millenial.
End of explanation
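# One way to read these coefficients: each additional occurrence of word i
# multiplies the odds of being labeled a millenial by exp(beta_i). A small
# sketch, assuming nonZeroCoeffFrame from the earlier cell is still in scope.
nonZeroCoeffFrame["oddsMultiplier"] = np.exp(nonZeroCoeffFrame["Coefficient"])
display(nonZeroCoeffFrame.head(10))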
#try my Tinder profile
michaelTinderSummary = """Senior at Carnegie Mellon, starting work in Boston
next summer as a data scientist at place X :D
I enjoy books, singing, coffee, and polar bears :D"""
#lower it
michaelTinderSummary = michaelTinderSummary.lower()
#then predict
personalBOW = vectorizer.transform([michaelTinderSummary])
personalPrediction = initialLinearMod.predict(personalBOW)
print personalPrediction
Explanation: We see that our model predicts this person is a millenial, and when we look at the contents of the summary, it is not terribly surprising. We see that this individual enjoys video games (a millenial trope) and spends much of the summary discussing their nerd-oriented hobbies.
However, let us see how our model performs in a different context. Let's take a look at my old Tinder bio.
End of explanation
#try a friend's Tinder profile
friendTinderSummary = """Insert a friend's profile here"""
#lower it
friendTinderSummary = friendTinderSummary.lower()
#then predict
friendBOW = vectorizer.transform([friendTinderSummary])
friendPrediction = initialLinearMod.predict(friendBOW)
print friendPrediction
Explanation: Interestingly, the model makes an inaccurate prediction on this summary: I am $21$, and yet it suggests that I am not a millenial. That being said, given the many emoticons and proper nouns featured in my profile, it is likely that many words were not picked up as features since they were not in the original vocabulary.
Let us take a look at another individual's Tinder profile.
End of explanation |
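# Many misclassifications on outside text come from words that are simply not
# in the training vocabulary. A small sketch to inspect the out-of-vocabulary
# tokens of a new summary, using the vocabDict and vectorizer built earlier.
analyzer = vectorizer.build_analyzer()
oovTokens = [tok for tok in analyzer(friendTinderSummary) if tok not in vocabDict]
print(oovTokens)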
6,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizzazione Facebook
Step1: 1. Confronto Salvini, Renzi e M5S
Step2: 2. Dettaglio Salvini
Step3: 2.1. Distribuzione Totale dei Likes ai Post di Salvini
Step4: 2. Focus su Anni 2014, 2015, 2016
Step5: Principali post tra Ottobre e Novembre
(90.000 likes) Uno STUPRATORE tunisino di 28 anni, già in galera per violenza sessuale, è evaso dal carcere di Pordenone e ha violentato una ragazza di 28 anni. È stato arrestato. Fossi ministro, applicherei (come già sperimentato in numerosi Paesi europei) la CASTRAZIONE CHIMICA e poi lo rimanderei in Tunisia. Che dite?",
Step6: (135.000 likes) Ragazzi, da non credere! Ascoltate e divulgate.Sabato e domenica tutti in piazza, vieni a firmare. (nel video vengono intervistate due minorenni rom che si vantavano di rubare, video scoperto poi essere un falso)
"Una mamma di 41 anni, separata e con due figli, si è impiccata vicino a Bologna. Le avevano staccato il gas, e per luglio rischiava lo sfratto.Una preghiera per questa mamma, un abbraccio ai suoi due cuccioli di 10 e 11 anni che non lasceremo soli, e tanta rabbia. Stato italiano, dove sei?",
Da FARE SUBITO.Sostegno militare alla Russia per annientare l’ISIS, controllo delle frontiere, blocco degli sbarchi ed espulsione dei clandestini, verifica a tappeto di tutte le occupazioni abusive nei nostri quartieri popolari, da Milano a Palermo. Ci hanno dichiarato GUERRA. E alla guerra non si risponde con le chiacchiere di Renzi e dell’inutile Alfano!", | Python Code:
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
matplotlib.style.use('ggplot')
# Directory di Staging
dir_df = os.path.join(os.path.abspath(''),'stg')
dir_out = os.path.join(os.path.abspath(''),'out')
# Dataset Salvini
df_filename = r'df_posts_likes_salvini.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_salvini = pd.read_pickle(df_fullpath)
# Statistiche
# Numero Posts
df_posts_salvini['ID'].count()
# Numero Likes
df_posts_salvini['Likes'].sum()
# Dataset Renzi
df_filename = r'df_posts_likes_renzi.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_renzi = pd.read_pickle(df_fullpath)
# Statistiche
# Numero Posts
df_posts_renzi['ID'].count()
# Numero Likes
df_posts_renzi['Likes'].sum()
# Dataset M5S
df_filename = r'df_posts_likes_m5s.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_m5s = pd.read_pickle(df_fullpath)
# Statistiche
# Numero Posts
df_posts_m5s['ID'].count()
# Numero Likes
df_posts_m5s['Likes'].sum()
Explanation: Facebook Visualization
End of explanation
# Time dimension -> YEAR
df_posts_m5s['Post_Date'] = df_posts_m5s['Post_Date'].str[:4]
df_posts_salvini['Post_Date'] = df_posts_salvini['Post_Date'].str[:4]
df_posts_renzi['Post_Date'] = df_posts_renzi['Post_Date'].str[:4]
df_posts_m5s = df_posts_m5s.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_salvini = df_posts_salvini.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_renzi = df_posts_renzi.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_m5s.rename(columns={'ID': 'Posts_M5S', 'Likes': 'Likes_M5S'}, inplace=True)
df_posts_m5s = df_posts_m5s.set_index(['Post_Date'])
df_posts_m5s.head(2)
df_posts_renzi.rename(columns={'ID': 'Posts_Renzi', 'Likes': 'Likes_Renzi'}, inplace=True)
df_posts_renzi = df_posts_renzi.set_index(['Post_Date'])
df_posts_renzi.head(2)
df_posts_salvini.rename(columns={'ID': 'Posts_Salvini', 'Likes': 'Likes_Salvini'}, inplace=True)
df_posts_salvini = df_posts_salvini.set_index(['Post_Date'])
df_posts_salvini.head(2)
# Numero Posts
result_post = pd.concat([df_posts_renzi, df_posts_salvini, df_posts_m5s], axis=1)
del result_post['Likes_Renzi']
del result_post['Likes_Salvini']
del result_post['Likes_M5S']
result_post.rename(columns={'Posts_Renzi': 'Renzi', 'Posts_Salvini': 'Salvini', 'Posts_M5S': 'M5S'}, inplace=True)
result_post.plot(
kind='bar'
)
result_post.to_csv(os.path.join(dir_out,r'Distr_Posts.csv'),header=True, index=True)
# Numero Likes
result_likes = pd.concat([df_posts_renzi, df_posts_salvini, df_posts_m5s], axis=1)
del result_likes['Posts_Renzi']
del result_likes['Posts_Salvini']
del result_likes['Posts_M5S']
result_likes.rename(columns={'Likes_Renzi': 'Renzi', 'Likes_Salvini': 'Salvini', 'Likes_M5S': 'M5S'}, inplace=True)
result_likes.plot(
kind='bar'
)
result_likes.to_csv(os.path.join(dir_out,r'Distr_Likes.csv'),header=True, index=True)
Explanation: 1. Comparison of Salvini, Renzi and M5S
End of explanation
# Dataset Salvini
df_filename = r'df_posts_likes_salvini.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts = pd.read_pickle(df_fullpath)
# Estraggo la Data da Str
df_posts['Post_Date'] = df_posts['Post_Date'].str[:10]
# Converto in Date
df_posts['Post_Date'] = pd.to_datetime(df_posts['Post_Date'])
# Ordino per Data
df_posts = df_posts.sort_values(by='Post_Date')
# Mi tengo DS totale per Data per analisi successive
df_posts_dett = df_posts
df_posts_dett = df_posts_dett.set_index(['Post_Date'])
# Raggruppo per Data
df_posts = df_posts.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
# Elimino i le date per cui non ho post con likes (privacy ?)
df_posts = df_posts[np.isfinite(df_posts['Likes'])]
# Setto indice la Data
df_posts = df_posts.set_index(['Post_Date'])
# Lavoro con TimeSeries, raggruppo tutto per Anno/Mese (la data era per giorno)
df_posts = df_posts.groupby(pd.TimeGrouper("M")).sum()
# Elimino Numero di Posts
del df_posts['ID']
# Ok, i numeri tornano dopo le elaborazioni
df_posts['Likes'].sum()
Explanation: 2. Salvini in Detail
End of explanation
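# To find the spikes programmatically instead of eyeballing the plot, the
# months with the most likes can be listed directly from the monthly series
# built above (a quick check before annotating the chart).
print(df_posts['Likes'].nlargest(5))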
df_posts.sort_values(by='Likes').head(5)
# Costruisco il Grafico, l'obiettivo è analizzare i picchi e capire a quale evento è collegato
tp = df_posts.plot(
marker='o',
markersize=7,
# x-axis da 0 a 84
markevery=[58,60,63,65,70])
tp.set_xlabel("Data del Post")
vals = tp.get_yticks()
tp.set_yticklabels(['{:,.0f}'.format(x) for x in vals])
fig_posts = tp.get_figure()
fig_posts.tight_layout()
fig_posts.savefig(os.path.join(dir_out,'Distr_Posts_Salvini.png'), format='png', dpi=300)
Explanation: 2.1. Overall Distribution of Likes on Salvini's Posts
End of explanation
df_post_14 = df_posts['20140101':'20141231']
tp_14 = df_post_14.plot()
fig_posts_14 = tp_14.get_figure()
fig_posts_14.tight_layout()
fig_posts_14.savefig(os.path.join(dir_out,'posts_2014.png'), format='png', dpi=300)
# Dettaglio 2014
df_posts_dett['20140101':'20141231'].sort_values(by=['Likes'],ascending=False).head(1)
# Analizzo gli ID direttamente dall API Graph Tool di Facebook
Explanation: 2. Focus on the Years 2014, 2015, 2016
End of explanation
df_post_15 = df_posts['20150101':'20151231']
tp_15 = df_post_15.plot()
fig_posts_15 = tp_15.get_figure()
fig_posts_15.tight_layout()
fig_posts_15.savefig(os.path.join(dir_out,'posts_2015.png'), format='png', dpi=300)
# Controllo 2015
df_posts_dett['20150101':'20151231'].sort_values(by=['Likes'],ascending=False).head(3)
# Analizzo gli ID direttamente dall API Graph Tool di Facebook
Explanation: Main posts between October and November
(90,000 likes) "A 28-year-old Tunisian RAPIST, already in jail for sexual assault, escaped from the Pordenone prison and raped a 28-year-old woman. He has been arrested. If I were minister, I would apply CHEMICAL CASTRATION (as already tried in several European countries) and then send him back to Tunisia. What do you say?"
End of explanation
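# A minimal sketch of how a single post could be looked up by ID through the
# Facebook Graph API, as mentioned in the comments above. The access token,
# API version and post ID below are placeholders, not real values.
import requests
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
post_id = "PAGE_ID_POST_ID"
url = "https://graph.facebook.com/v2.8/{}".format(post_id)
params = {"fields": "message,created_time,likes.summary(true)",
          "access_token": ACCESS_TOKEN}
resp = requests.get(url, params=params)
print(resp.json().get("message"))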
df_post_16 = df_posts['20160101':'20161231']
tp_16 = df_post_16.plot()
fig_posts_16 = tp_16.get_figure()
fig_posts_16.tight_layout()
fig_posts_16.savefig(os.path.join(dir_out,'posts_2016.png'), format='png', dpi=300)
Explanation: (135,000 likes) "Guys, unbelievable! Listen and share. Saturday and Sunday everyone in the square, come and sign." (the video interviews two underage Roma girls bragging about stealing; the video was later found to be fake)
"A 41-year-old mother, separated and with two children, hanged herself near Bologna. Her gas had been cut off, and she was facing eviction in July. A prayer for this mother, a hug for her two little ones aged 10 and 11, whom we will not leave alone, and a lot of anger. Italian State, where are you?"
"TO BE DONE IMMEDIATELY. Military support to Russia to wipe out ISIS, border controls, a stop to the landings and expulsion of illegal immigrants, a blanket check of all illegal occupations of our public housing, from Milano to Palermo. They have declared WAR on us. And you do not answer a war with the chatter of Renzi and the useless Alfano!"
End of explanation |
6,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An RNN model for temperature data
This time we will be working with real data
Step1: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values
Step2: Temperature data
This is what our temperature datasets looks like
Step3: <a name="resampling"></a>
Resampling
Our RNN would need ot be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temparatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
Step4: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
Step5: <a name="assignment2"></a><a name="assignment3"></a>
The model definition
<div class="alert alert-block alert-info">
***Assignement #2*** Implement the RNN model. You can copy-paste it from the previous exercise but you will have to make one modification
Step6: Instantiate the model
Step7: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
Step8: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
Step9: <a name="inference"></a>
Inference
This is a generative model
Step10: <a name="valid"></a>
Validation | Python Code:
import math
import sys
import time
import numpy as np
import utils_batching
import utils_args
import tensorflow as tf
from tensorflow.python.lib.io import file_io as gfile
print("Tensorflow version: " + tf.__version__)
from matplotlib import pyplot as plt
import utils_prettystyle
import utils_display
Explanation: An RNN model for temperature data
This time we will be working with real data: daily (Tmin, Tmax) temperature series from 1666 weather stations spanning 50 years. It is to be noted that a pretty good predictor model already exists for temperatures: the average of temperatures on the same day of the year in N previous years. It is not clear if RNNs can do better but we will se how far they can go.
<div class="alert alert-block alert-info">
Things to do:<br/>
<ol start="0">
<li>Run the notebook as it is. Look at the data visualisations. Then look at the predictions at the end. Not very good...
<li>Fist play with the data to find good values for RESAMPLE_BY and SEQLEN in hyperparameters ([Assignment #1](#assignment1)).
<li>Now implement the RNN model in the model function ([Assignment #2](#assignment2)).
<li>Temperatures are noisy, let's try something new: predicting N data points ahead instead of only 1 ahead ([Assignment #3](#assignment3)).
<li>Now we will adjust more traditional hyperparameters and add regularisations. ([Assignment #4](#assignment4))
<li>
Look at the save-restore code. The model is saved at the end of the [training loop](#train) and restored when running [validation](#valid). Also see how the restored model is used for [inference](#inference).
<br/><br/>
You are ready to run in the cloud on all 1666 weather stations. Use [this bash notebook](../run-on-cloud-ml-engine.ipynb) to convert your code to a regular Python file and invoke the Google Cloud ML Engine command line.
When the training is finished on ML Engine, change one line in [validation](#valid) to load the SAVEDMODEL from its cloud bucket and display.
</div>
End of explanation
NB_EPOCHS = 5 # number of times the model sees all the data during training
N_FORWARD = 1 # train the network to predict N in advance (traditionnally 1)
RESAMPLE_BY = 1 # averaging period in days (training on daily data is too much)
RNN_CELLSIZE = 80 # size of the RNN cells
N_LAYERS = 1 # number of stacked RNN cells (needed for tensor shapes but code must be changed manually)
SEQLEN = 32 # unrolled sequence length
BATCHSIZE = 64 # mini-batch size
DROPOUT_PKEEP = 1.0 # dropout: probability of neurons being kept (NOT dropped). Should be between 0.5 and 1.
ACTIVATION = tf.nn.tanh # Activation function for GRU cells (tf.nn.relu or tf.nn.tanh)
JOB_DIR = "checkpoints"
DATA_DIR = "temperatures"
# potentially override some settings from command-line arguments
if __name__ == '__main__':
JOB_DIR, DATA_DIR = utils_args.read_args1(JOB_DIR, DATA_DIR)
ALL_FILEPATTERN = DATA_DIR + "/*.csv" # pattern matches all 1666 files
EVAL_FILEPATTERN = DATA_DIR + "/USC000*2.csv" # pattern matches 8 files
# pattern USW*.csv -> 298 files, pattern USW*0.csv -> 28 files
print('Reading data from "{}".\nWrinting checkpoints to "{}".'.format(DATA_DIR, JOB_DIR))
Explanation: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values: SEQLEN=128, RESAMPLE_BY=5.***
</div>
End of explanation
all_filenames = gfile.get_matching_files(ALL_FILEPATTERN)
eval_filenames = gfile.get_matching_files(EVAL_FILEPATTERN)
train_filenames = list(set(all_filenames) - set(eval_filenames))
# By default, this utility function loads all the files and places data
# from them as-is in an array, one file per line. Later, we will use it
# to shape the dataset as needed for training.
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames)
evtemps, _, evdates, _, _ = next(ite) # gets everything
print('Pattern "{}" matches {} files'.format(ALL_FILEPATTERN, len(all_filenames)))
print('Pattern "{}" matches {} files'.format(EVAL_FILEPATTERN, len(eval_filenames)))
print("Evaluation files: {}".format(len(eval_filenames)))
print("Training files: {}".format(len(train_filenames)))
print("Initial shape of the evaluation dataset: " + str(evtemps.shape))
print("{} files, {} data points per file, {} values per data point"
" (Tmin, Tmax, is_interpolated) ".format(evtemps.shape[0], evtemps.shape[1],evtemps.shape[2]))
# You can adjust the visualisation range and dataset here.
# Interpolated regions of the dataset are marked in red.
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 18262 # 18262 = Dec 31st 2009
visu_temperatures = evtemps[WEATHER_STATION,START_DATE:END_DATE]
visu_dates = evdates[START_DATE:END_DATE]
utils_display.picture_this_4(visu_temperatures, visu_dates)
Explanation: Temperature data
This is what our temperature datasets looks like: sequences of daily (Tmin, Tmax) from 1960 to 2010. They have been cleaned up and eventual missing values have been filled by interpolation. Interpolated regions of the dataset are marked in red on the graph.
End of explanation
# This time we ask the utility function to average temperatures over 5-day periods (RESAMPLE_BY=5)
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames, RESAMPLE_BY, tminmax=True)
evaltemps, _, evaldates, _, _ = next(ite)
# display five years worth of data
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 365*5//RESAMPLE_BY # 5 years
visu_temperatures = evaltemps[WEATHER_STATION, START_DATE:END_DATE]
visu_dates = evaldates[START_DATE:END_DATE]
plt.fill_between(visu_dates, visu_temperatures[:,0], visu_temperatures[:,1])
plt.show()
Explanation: <a name="resampling"></a>
Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temperatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
End of explanation
# The function rnn_multistation_sampling_temperature_sequencer puts one weather station per line in
# a batch and continues with data from the same station in corresponding lines in the next batch.
# Features and labels are returned with shapes [BATCHSIZE, SEQLEN, 2]. The last dimension of size 2
# contains (Tmin, Tmax).
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
nb_epochs=1,
tminmax=True)
# load 6 training sequences (each one contains data for all weather stations)
visu_data = [next(ite) for _ in range(6)]
# Check that consecutive training sequences from the same weather station are indeed consecutive
WEATHER_STATION = 4
utils_display.picture_this_5(visu_data, WEATHER_STATION)
Explanation: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
End of explanation
def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):
X = features # shape [BATCHSIZE, SEQLEN, 2], 2 for (Tmin, Tmax)
batchsize = tf.shape(X)[0] # allow for variable batch size
seqlen = tf.shape(X)[1] # allow for variable sequence length
# --- dummy model that does almost nothing (one trainable variable is needed) ---
    # --- The regression layer is missing too! ---
    # --- When adding it, keep in mind we are predicting two values: Tmin, Tmax ---
Yr = X * tf.Variable(tf.ones([]), name = "dummy")
H = Hin
# --- end of dummy model ---
Yout = Yr[:,-N_FORWARD:,:] # Last N_FORWARD outputs. Yout [BATCHSIZE, N_FORWARD, 2]
loss = tf.losses.mean_squared_error(Yr, labels) # labels[BATCHSIZE, SEQLEN, 2]
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
return Yout, H, loss, train_op, Yr
Explanation: <a name="assignment2"></a><a name="assignment3"></a>
The model definition
<div class="alert alert-block alert-info">
***Assignment #2*** Implement the RNN model. You can copy-paste it from the previous exercise but you will have to make one modification: we are now predicting vectors of 2 values (Tmin, Tmax) instead of single values. Train, then evaluate to see if you are getting better results.
</div>
<div class="alert alert-block alert-info">
***Assignment #3*** Temperatures are noisy. If we ask the model to predict the next data point, noise might drown the trend and the model will not train. The trend should be clearer if we ask the model to look further ahead. You can use the [hyperparameter](#hyperparameters) N_FORWARD to shift the target sequences by more than 1. Try values between 4 and 16 and see how [training sequences](#trainseq) look.<br/>
<br/>
If the model predicts N_FORWARD in advance, you will also need it to output N_FORWARD predicted values instead of 1. Please check that the output of your model is indeed `Yout = Yr[:,-N_FORWARD:,:]`. The inference part has already been adjusted to generate the sequence by blocks of N_FORWARD points. You can have a [look at it](#inference).<br/>
<br/>
Train and evaluate to see if you are getting better results. ***In the end, use this value: N_FORWARD=8***
</div>
<a name="assignment4"></a>
<div class="alert alert-block alert-info">
***Assignment #4*** Try adjusting the following parameters:<ol><ol>
<li> Use a stacked RNN cell with 2 layers with in the model:<br/>
```
cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE) for _ in range(N_LAYERS)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
```
<br/>Do not forget to set N_LAYERS=2 in [hyperparameters](#hyperparameters)
</li>
<li>Increase RNN_CELLSIZE -> 128 to allow the cells to model more complex behaviors.</li>
<li>Regularisation: add a decaying learning rate. Replace learning_rate=0.01 with:<br/>
```
learning_rate = 0.001 + tf.train.exponential_decay(0.01, step, 1000, 0.5) # 0.001+0.01*0.5^(step/1000)
``` </li>
<li>Regularisation: add dropout between cell layers.<br/>
```
cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob = dropout_pkeep) for cell in cells]
```
<br/>
Check that you have a good value for DROPOUT_PKEEP in [hyperparameters](#hyperparameters). 0.7 should do. Also check that dropout is deactivated i.e. dropout_pkeep=1.0 during [inference](#inference).
</li>
</ol></ol>
Play with these options until you get a good fit for at least 1.5 years.
</div>
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 2]<br/>
Y shape [BATCHSIZE, SEQLEN, 2]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
When executed, this function instantiates the Tensorflow graph for our model.
End of explanation
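# For reference, a rough sketch of one possible way to fill in the dummy block
# of model_rnn_fn following the hints above (GRU cells, optional stacking and
# dropout, a dense regression layer for the 2 outputs, decaying learning rate).
# This is only an illustration under the notebook's naming, not the official
# solution, and is left commented out so the cell above remains the one in use:
#
#   cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE, activation=ACTIVATION) for _ in range(N_LAYERS)]
#   cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=dropout_pkeep) for cell in cells]
#   cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
#   Yn, H = tf.nn.dynamic_rnn(cell, X, initial_state=Hin)
#   Yr = tf.layers.dense(Yn, 2)  # regression layer: predict (Tmin, Tmax) at every step
#   learning_rate = 0.001 + tf.train.exponential_decay(0.01, step, 1000, 0.5)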
tf.reset_default_graph() # restart model graph from scratch
# placeholder for inputs
Hin = tf.placeholder(tf.float32, [None, RNN_CELLSIZE * N_LAYERS])
features = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
labels = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
step = tf.placeholder(tf.int32)
dropout_pkeep = tf.placeholder(tf.float32)
# instantiate the model
Yout, H, loss, train_op, Yr = model_rnn_fn(features, Hin, labels, step, dropout_pkeep)
Explanation: Instantiate the model
End of explanation
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run([init])
saver = tf.train.Saver(max_to_keep=1)
Explanation: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
End of explanation
losses = []
indices = []
last_epoch = 99999
last_fileid = 99999
for i, (next_features, next_labels, dates, epoch, fileid) in enumerate(
utils_batching.rnn_multistation_sampling_temperature_sequencer(train_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
NB_EPOCHS, tminmax=True)):
# reinintialize state between epochs or when starting on data from a new weather station
if epoch != last_epoch or fileid != last_fileid:
batchsize = next_features.shape[0]
H_ = np.zeros([batchsize, RNN_CELLSIZE * N_LAYERS])
print("State reset")
#train
feed = {Hin: H_, features: next_features, labels: next_labels, step: i, dropout_pkeep: DROPOUT_PKEEP}
Yout_, H_, loss_, _, Yr_ = sess.run([Yout, H, loss, train_op, Yr], feed_dict=feed)
# print progress
if i%20 == 0:
print("{}: epoch {} loss = {} ({} weather stations this epoch)".format(i, epoch, np.mean(loss_), fileid+1))
sys.stdout.flush()
if i%10 == 0:
losses.append(np.mean(loss_))
indices.append(i)
# This visualisation can be helpful to see how the model "locks" on the shape of the curve
# if i%100 == 0:
# plt.figure(figsize=(10,2))
# plt.fill_between(dates, next_features[0,:,0], next_features[0,:,1]).set_alpha(0.2)
# plt.fill_between(dates, next_labels[0,:,0], next_labels[0,:,1])
# plt.fill_between(dates, Yr_[0,:,0], Yr_[0,:,1]).set_alpha(0.8)
# plt.show()
last_epoch = epoch
last_fileid = fileid
# save the trained model
SAVEDMODEL = JOB_DIR + "/ckpt" + str(int(time.time()))
tf.saved_model.simple_save(sess, SAVEDMODEL,
inputs={"features":features, "Hin":Hin, "dropout_pkeep":dropout_pkeep},
outputs={"Yout":Yout, "H":H})
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value for scaling
plt.plot(indices, losses)
plt.show()
Explanation: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
End of explanation
def prediction_run(predict_fn, prime_data, run_length):
H = np.zeros([1, RNN_CELLSIZE * N_LAYERS]) # zero state initially
Yout = np.zeros([1, N_FORWARD, 2])
data_len = prime_data.shape[0]-N_FORWARD
# prime the state from data
if data_len > 0:
Yin = np.array(prime_data[:-N_FORWARD])
Yin = np.reshape(Yin, [1, data_len, 2]) # reshape as one sequence of pairs (Tmin, Tmax)
r = predict_fn({'features': Yin, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
    # initially, put real data on the inputs, not predictions
Yout = np.expand_dims(prime_data[-N_FORWARD:], axis=0)
# Yout shape [1, N_FORWARD, 2]: batch of a single sequence of length N_FORWARD of (Tmin, Tmax) data pointa
# run prediction
# To generate a sequence, run a trained cell in a loop passing as input and input state
# respectively the output and output state from the previous iteration.
results = []
for i in range(run_length//N_FORWARD+1):
r = predict_fn({'features': Yout, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
results.append(Yout[0]) # shape [N_FORWARD, 2]
return np.concatenate(results, axis=0)[:run_length]
Explanation: <a name="inference"></a>
Inference
This is a generative model: run a trained RNN cell in a loop. This time, with N_FORWARD>1, we generate the sequence by blocks of N_FORWARD data points instead of point by point. The RNN is unrolled across N_FORWARD steps, takes in the last N_FORWARD data points and predicts the next N_FORWARD data points, and so on in a loop. State must be passed around correctly.
End of explanation
QYEAR = 365//(RESAMPLE_BY*4)
YEAR = 365//(RESAMPLE_BY)
# Try starting predictions from January / March / July (resp. OFFSET = YEAR or YEAR+QYEAR or YEAR+2*QYEAR)
# Some start dates are more challenging for the model than others.
OFFSET = 4*YEAR+1*QYEAR
PRIMELEN=7*YEAR
RUNLEN=4*YEAR
RMSELEN=3*365//(RESAMPLE_BY*2) # accuracy of predictions 1.5 years in advance
# Restore the model from the last checkpoint saved previously.
# Alternative checkpoints:
# Once you have trained on all 1666 weather stations on Google Cloud ML Engine, you can load the checkpoint from there.
# SAVEDMODEL = "gs://{BUCKET}/sinejobs/sines_XXXXXX_XXXXXX/ckptXXXXXXXX"
# A sample checkpoint is provided with the lab. You can try loading it for comparison.
# You will have to use the following parameters and re-run the entire notebook:
# N_FORWARD = 8, RESAMPLE_BY = 5, RNN_CELLSIZE = 128, N_LAYERS = 2
# SAVEDMODEL = "temperatures_best_checkpoint"
predict_fn = tf.contrib.predictor.from_saved_model(SAVEDMODEL)
for evaldata in evaltemps:
prime_data = evaldata[OFFSET:OFFSET+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
utils_display.picture_this_6(evaldata, evaldates, prime_data, results, PRIMELEN, RUNLEN, OFFSET, RMSELEN)
rmses = []
bad_ones = 0
for offset in [YEAR, YEAR+QYEAR, YEAR+2*QYEAR]:
for evaldata in evaltemps:
prime_data = evaldata[offset:offset+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
rmse = math.sqrt(np.mean((evaldata[offset+PRIMELEN:offset+PRIMELEN+RMSELEN] - results[:RMSELEN])**2))
rmses.append(rmse)
if rmse>7: bad_ones += 1
print("RMSE on {} predictions (shaded area): {}".format(RMSELEN, rmse))
print("Average RMSE on {} weather stations: {} ({} really bad ones, i.e. >7.0)".format(len(evaltemps), np.mean(rmses), bad_ones))
sys.stdout.flush()
Explanation: <a name="valid"></a>
Validation
End of explanation |
6,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
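# Hypothetical example of the call described above (placeholder name and
# email, kept commented out so no fake author is recorded):
# DOC.set_author("Jane Doe", "jane.doe@example.org")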
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
6,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot MST Measures
Here we plot different MST measures from the computed connectivity matrices.
Step1: MST Dataset Structure
The order of frequency bands is as follows
Step2: Plotting of MST Measures | Python Code:
from pprint import pprint
import scipy
import pandas as pd
from pandas.tools.plotting import parallel_coordinates
from pandas import concat
import matplotlib
# Set backend to pgf
matplotlib.use('pgf')
import matplotlib.pyplot as plt
import numpy as np
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.rcParams['axes.color_cycle'] = ['g', 'b', 'r']
plt.gray()
%matplotlib inline
from pylab import *
from scipy.io import loadmat
import operator
pwd
randomSeed = 20;
# TODO: Change accordingly
CONNECTIVITY_MEASURE = 'dWPLI'
DATASETS_FOLDER = '/home/dragos/DTC/MSc/SummerProject/processed_data/features/'
DATASETS_FOLDER = DATASETS_FOLDER + CONNECTIVITY_MEASURE + '/mst/datasets/'
nameOfDataFileMat = 'datasetMSTGraphMeasures.mat'
nameOfDataFileCSV = 'datasetMSTGraphMeasures.csv'
Explanation: Plot MST Measures
Here we plot different MST measures from the computed connectivity matrices.
End of explanation
# store frequencies of interest order as they appear in the dataset table
FoQ_table_order = dict([('delta', 3), ('theta', 5),
('alpha', 1), ('beta', 2),
('gamma', 4)])
# store plot order (in order of frequency values of each bands)
plot_order = dict([ (1, 'delta'), (2, 'theta'), (3, 'alpha'),
(4, 'beta'), (5, 'gamma') ])
# stores class labels
classLabels = dict([ (1, "CS"), (2, "MCI"), (3, "AD") ])
# stores the order in which the measures are specified in the MST dataset matrix
graphMeasures = dict([('leaf no', 1), ('L', 2), ('diameter', 6)])
graphMeasuresForPlots = dict([('leaf no', 1), ('L', 2), ('diameter', 3)])
#### CAREFUL! I have 6 MST measures instead of 5 classical measures! ###
# Although I'm interested in only leaf no, L and diameter, I need the actual no of measures in the table for indexing!
NO_OF_GRAPH_MEASURES = 6
print NO_OF_GRAPH_MEASURES
### The following code snippet is from http://stackoverflow.com/a/22937095 ###
# this is just a helper class to keep things clean
class MyAxis(object):
def __init__(self,ax,fig):
# this flag tells me if there is a plot in these axes
self.empty = False
self.ax = ax
self.fig = fig
self.pos = self.ax.get_position()
def del_ax(self):
# delete the axes
self.empty = True
self.fig.delaxes(self.ax)
def swap(self,other):
# swap the positions of two axes
#
# THIS IS THE IMPORTANT BIT!
#
new_pos = other.ax.get_position()
self.ax.set_position(new_pos)
other.ax.set_position(self.pos)
self.pos = new_pos
for bandOrderIdx in plot_order.keys():
print bandOrderIdx
data_file_path = DATASETS_FOLDER + nameOfDataFileMat
print data_file_path
Explanation: MST Dataset Structure
The order of frequency bands is as follows:
Alpha | Beta | Delta | Gamma | Theta | Class
Each band has 6 columns, where each column corresponds to a graph feature, in the following order:
no of leaves
characteristic path length (L)
global efficiency (GE)
average eccentricity (avgECC)
radius
diameter
End of explanation
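As a quick illustration of the indexing implied by this layout, the flat column index of a given (band, measure) pair can be computed as follows (a minimal sketch; the dictionaries simply restate the table order described above):
# column index of a (band, measure) pair in the flattened MST dataset matrix
band_order = {'alpha': 1, 'beta': 2, 'delta': 3, 'gamma': 4, 'theta': 5}
measure_order = {'leaf no': 1, 'L': 2, 'GE': 3, 'avgECC': 4, 'radius': 5, 'diameter': 6}
def mst_column_index(band, measure, n_measures=6):
    # zero-based column of `measure` within `band` in the dataset matrix
    return n_measures * (band_order[band] - 1) + (measure_order[measure] - 1)
print(mst_column_index('theta', 'diameter'))  # -> 29, the last feature column before the class label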
myFig, axes = plt.subplots(nrows=5, ncols=3)
my_axes = [MyAxis(ax,myFig) for ax in axes.ravel()]
#myFig.tight_layout()
left = 0.125 # the left side of the subplots of the figure
right = 0.9 # the right side of the subplots of the figure
bottom = 0.1 # the bottom of the subplots of the figure
top = 0.93 # the top of the subplots of the figure
wspace = 0.4 # the amount of width reserved for blank space between subplots
hspace = 0.3 # the amount of height reserved for white space between subplots
myFig.subplots_adjust(left, bottom, right, top, wspace, hspace)
for thisAx in my_axes:
# delete all axes
thisAx.del_ax()
#my_axes[6].del_ax()
myFig.set_size_inches(11,15)
myFig.suptitle('MST Measures for dWPLI', fontsize=15)
### Here we generate dictionaries of graph measures, where each graph measure points to a
### dict of bands of interest (alfa, beta..); each band of interest points to a dictionary of classes (MCI, AD..);
### Summary: graph measure --> band --> class
measureToBand = dict()
# signficiant results/plots by indices
significants = [1,4,5,6]
# for each graph measure
for currentMeasure, measureOrder in sorted(graphMeasures.iteritems(), key=operator.itemgetter(1)):
bandToClass = dict()
# for each frequency band of interest
for bandOrderIdx in plot_order.keys():
bandName = plot_order[bandOrderIdx]
# plot bands as columns and MST measures as rows
plotIndex = len(graphMeasures)*(bandOrderIdx-1)+graphMeasuresForPlots[currentMeasure]
bandPlot = myFig.add_subplot(5,3,plotIndex)
bandPlot.hold(True)
# if significant p value, put (*)
if plotIndex in significants:
bandPlot.set_title(bandName + " (*)")
else:
bandPlot.set_title(bandName)
#leg = []
#legp = []
bandPlot.set_ylabel(currentMeasure)
for currentClass in sorted(classLabels.keys()):
data_file_path = DATASETS_FOLDER + nameOfDataFileMat
#load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
classIdxs = np.where(targets == currentClass)
classFeatures = features[classIdxs]
measureIndex = NO_OF_GRAPH_MEASURES*(FoQ_table_order[bandName]-1) + (graphMeasures[currentMeasure]-1)
if currentClass == 1:
CSdata = classFeatures[:, measureIndex]
elif currentClass == 2:
MCIdata = classFeatures[:, measureIndex]
elif currentClass == 3:
ADdata = classFeatures[:, measureIndex]
bandPlot.boxplot([CSdata, MCIdata, ADdata])
plt.xticks([1,2,3],['CON','MCI','AD'])
myFig.savefig('MSTMeasuresdWPLI.pdf')
pwd
Explanation: Plotting of MST Measures
End of explanation |
6,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Re-referencing the EEG signal
Load raw data and apply some EEG referencing schemes.
Step1: Apply different EEG referencing schemes and plot the resulting evokeds. | Python Code:
# Authors: Marijn van Vliet <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from matplotlib import pyplot as plt
print(__doc__)
# Setup for reading the raw data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.read_events(event_fname)
# The EEG channels will be plotted to visualize the difference in referencing
# schemes.
picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, exclude='bads')
Explanation: Re-referencing the EEG signal
Load raw data and apply some EEG referencing schemes.
End of explanation
reject = dict(eeg=180e-6, eog=150e-6)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, reject=reject)
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True)
# No reference. This assumes that the EEG has already been referenced properly.
# This explicitly prevents MNE from adding a default EEG reference.
raw.set_eeg_reference([])
evoked_no_ref = mne.Epochs(raw, **epochs_params).average()
evoked_no_ref.plot(axes=ax1, titles=dict(eeg='EEG Original reference'))
# Average reference. This is normally added by default, but can also be added
# explicitly.
raw.set_eeg_reference()
evoked_car = mne.Epochs(raw, **epochs_params).average()
evoked_car.plot(axes=ax2, titles=dict(eeg='EEG Average reference'))
# Re-reference from an average reference to the mean of channels EEG 001 and
# EEG 002.
raw.set_eeg_reference(['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw, **epochs_params).average()
evoked_custom.plot(axes=ax3, titles=dict(eeg='EEG Custom reference'))
Explanation: Apply different EEG referencing schemes and plot the resulting evokeds.
End of explanation |
6,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connecting to the database
Step1: Create a list of dictionaries that have the following structure
Step2: Create a dataframe from list of dicts
Step3: FIVE NUMBER SUMMARY for the length of time between creating the complaint and closing the complaint
minimum, maximum, first quartile, median, third quartile
Step4: A quick boxplot of hours a complaint is open, grouped by type of complaint
*the alignment on this graph is shifted one to the right | Python Code:
conn = pg8000.connect(user = 'dot_student', database='training', port=5432, host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', password='qgis')
conn.rollback()
cursor = conn.cursor()
cursor.execute("SELECT column_name FROM information_schema.columns WHERE table_name='dot_311'")
# run the commented out code to see list of all column names
# cursor.fetchall()
cursor.execute("SELECT complaint, descriptor, created_date, closed_date FROM dot_311")
dot311 = cursor.fetchall()
# dot311
Explanation: Connecting to the database
End of explanation
dot311_list=[]
for item in dot311:
dot311_dict={}
dot311_dict['complaint'] = [item[0]]
dot311_dict['descriptor'] = [item[1]]
dot311_dict['created_date'] = [item[2]]
dot311_dict['closed_date'] = [item[3]]
# checks that created_date and closed_date are not NoneTypes and that the closed date is after the created date.
if dot311_dict['created_date'][0] != None and dot311_dict['closed_date'][0] != None and dot311_dict['closed_date']>dot311_dict['created_date']:
# subtracting datetime.datetime objects gives you a dateime.timedelta object
# helpful: http://stackoverflow.com/questions/2861770/how-do-i-subtract-two-dates-in-django-python
dot311_dict['time_open'] = dot311_dict['closed_date'][0] - dot311_dict['created_date'][0]
else:
dot311_dict['time_open'] = None
dot311_list.append(dot311_dict)
# dot311_list
Explanation: Create a list of dictionaries that have the following structure:
[
...
{'closed_date': [datetime.datetime(2016, 2, 2, 9, 1)],
'complaint': ['Street Condition'],
'created_date': [datetime.datetime(2016, 2, 1, 7, 0)],
'descriptor': ['Pothole'],
'time_open': datetime.timedelta(1, 7260)},
{'closed_date': [datetime.datetime(2016, 2, 2, 12, 50)],
'complaint': ['Street Condition'],
'created_date': [datetime.datetime(2016, 2, 1, 7, 0)],
'descriptor': ['Pothole'],
'time_open': datetime.timedelta(1, 21000)},
...
]
End of explanation
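A small aside on the time_open field: subtracting two datetime objects yields a datetime.timedelta, which can later be converted to hours. Using the first record shown above:
import datetime
created = datetime.datetime(2016, 2, 1, 7, 0)
closed = datetime.datetime(2016, 2, 2, 9, 1)
time_open = closed - created                    # datetime.timedelta(1, 7260)
print(time_open.total_seconds() / 60 / 60)      # ~26.02 hours open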
df = pd.DataFrame.from_dict(dot311_list)
df.head(50)
Explanation: Create a dataframe from list of dicts
End of explanation
minimum = df['time_open'].min()
print(minimum)
# dang, so fast.
maximum = df['time_open'].max()
print(maximum)
first_qtr = df['time_open'].quantile(q=0.25)
print(first_qtr)
median = df['time_open'].quantile(q=0.5)
print(median)
third_qtr = df['time_open'].quantile(q=0.75)
print(third_qtr)
df['time_open_hours'] = df['time_open'].apply(lambda x: x.total_seconds()) / 60 / 60
# note: this was necessary because the by parameter of the box plot only accepts sequences, i.e. tuples and other such things
# if you try to use the original complaint column it returns an error saying that "by" only accepts sequences, not groupby sequences
df['complaint_'] = df['complaint'].apply(lambda x: (x[0]))
Explanation: FIVE NUMBER SUMMARY for the length of time between creating the complaint and closing the complaint
minimum, maximum, first quartile, median, third quartile
End of explanation
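For reference, the same five-number summary can also be pulled out in a single call (a sketch using pandas built-ins; describe() additionally reports the count and mean):
# quantiles 0 and 1 correspond to the minimum and maximum
print(df['time_open'].quantile([0, 0.25, 0.5, 0.75, 1.0]))
# df['time_open'].describe() gives a similar overview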
ax = df.boxplot(column='time_open_hours', by='complaint_', figsize=(20,20), rot=45, fontsize=14)
Explanation: A quick boxplot of hours a complaint is open, grouped by type of complaint
*the alignment on this graph is shifted one to the right
End of explanation |
6,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation Code
The code below will generate a test volume and populate it with a set of clusters.
Step1: Cluster Extraction Validation
First, be sure that our cluster extraction algorithm works as intended on the test volume
Step2: The nonzero xor between the volumes proves that clustering the volume maintains the integrity of the cluster lists. This test is run 5 times in a row below to validate that the 0 difference metric is valid across trials
Step3: The set of 5 zeros validates that the error in the F1 code is not the extraction of the cluster lists
Precision Recall F1 2.0
Step4: Precision Recall F1 2.0 Testing
First, the algorithm will be tested on a case where it should get 100% precision and 100% recall
Step5: The 1 precision, recall and f1 metrics show that the algorithm performs as expected on the case of labels being their own predicions
Next, the recall will be tested by randomly selecting a percentile of the true clusters to be passed to the prediction. This should modulate only the recall, and not the precision, of the data, as all clusters being passed are still "correct" predictions. If the algorithm works as expected, I should see a constant precision metric and an upwardly sloping recall metric with slope of 10% That is, for every additional 10% of the labels included in the predictions, the recall sould increase by 10%
Step6: The linearly increasing nature of recall plot demonstrates that the recall portion of the code indeed corresponds to the ratio of true labels passed in.
Additionally, the precision plot being constant shows that modulating only the recall has no affect on the precision of the data
Next, the data will be diluted such that it contains a portion of noise clusters. This should change the precision, but not the recall, of the data. If this algorithm works as expected, it will produce a constant recall and a downward sloping precision curve with a slope of 10%. That is, for every additional 10% of noise added to the predictions, the precision should drop by 10%
Step7: As these plots demonstrate, adding noise to the list of all true clusters delivers the expected result, that the precision drops, and the recall remains constant.
In our data, there is not a guarantee that the predicted clusters and the actual clusters will exactly overlap. In fact, this is likely not the case. However, we would not like to consider a cluster a false positive if it only differs from the true cluster by one pixel. For this reason, I have included an overlapRatio parameter to vary how much overlap between a prediction and a true cluster must exist for the prediction to be considered correct
In the following simulation, the cluster labels will be evenly divided and then eroded between 10% and 100%. I will then run the precision recall code against them with an ever increasing percent overlap metric. If the code works, I expect both the precision and the recall to drop by about 10% for every 10% increase in the percent overlap metric. | Python Code:
# NOTE (assumption): `rand` is an integer random helper, and `clusterThresh`, `Cluster`
# and `mv` come from the project's own modules; they would normally be imported in an
# earlier cell. The two imports below are a guess that makes this cell self-contained.
import numpy as np
from random import randint as rand
def generatePointSet():
center = (rand(0, 9), rand(0, 999), rand(0, 999))
toPopulate = []
for z in range(-3, 2):
for y in range(-3, 2):
for x in range(-3, 2):
curPoint = (center[0]+z, center[1]+y, center[2]+x)
#only populate valid points
valid = True
for dim in range(3):
if curPoint[dim] < 0 or curPoint[dim] >= 1000:
valid = False
if valid:
toPopulate.append(curPoint)
return set(toPopulate)
def generateTestVolume():
#create a test volume
volume = np.zeros((10, 1000, 1000))
myPointSet = set()
for _ in range(rand(1000, 2000)):
potentialPointSet = generatePointSet()
#be sure there is no overlap
while len(myPointSet.intersection(potentialPointSet)) > 0:
potentialPointSet = generatePointSet()
for elem in potentialPointSet:
myPointSet.add(elem)
#populate the true volume
for elem in myPointSet:
volume[elem[0], elem[1], elem[2]] = rand(40000, 60000)
#introduce noise
noiseVolume = np.copy(volume)
for z in range(noiseVolume.shape[0]):
for y in range(noiseVolume.shape[1]):
for x in range(noiseVolume.shape[2]):
if not (z, y, x) in myPointSet:
toPop = rand(0, 10)
if toPop == 5:
noiseVolume[z][y][x] = rand(0, 60000)
return volume, noiseVolume
Explanation: Simulation Code
The code below will generate a test volume and populate it with a set of clusters.
End of explanation
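A brief usage sketch (note that generating the full 10x1000x1000 volume takes a little while to run):
true_volume, noisy_volume = generateTestVolume()
print(true_volume.shape)               # (10, 1000, 1000)
print(np.count_nonzero(true_volume))   # number of voxels that belong to true clusters
# a noisy copy is returned as well, but the cells below discard it with `_`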
myTestVolume, _ = generateTestVolume()
#passed arbitrarily large param for upper bound to get all clusters
testList = clusterThresh(myTestVolume, 0, 100000)
#generate an annotation volume from the test list
annotations = mv.generateAnnotations(testList, *(myTestVolume.shape))
print np.count_nonzero(np.logical_xor(annotations, myTestVolume))
Explanation: Cluster Extraction Validation
First, be sure that our cluster extraction algorithm works as intended on the test volume
End of explanation
for i in range(5):
myTestVolume, _ = generateTestVolume()
testList = clusterThresh(myTestVolume, 0, 100000)
annotations = mv.generateAnnotations(testList, *(myTestVolume.shape))
print np.count_nonzero(np.logical_xor(annotations, myTestVolume))
Explanation: The nonzero xor between the volumes proves that clustering the volume maintains the integrity of the cluster lists. This test is run 5 times in a row below to validate that the 0 difference metric is valid across trials
End of explanation
def precision_recall_f1(labels, predictions, overlapRatio):
if len(predictions) == 0:
print 'ERROR: prediction list is empty'
return 0., 0., 0.
labelFound = np.zeros(len(labels))
truePositives = 0
falsePositives = 0
for prediction in predictions:
#casting to set is ok here since members are uinque
predictedMembers = set([tuple(elem) for elem in prediction.getMembers()])
detectionCutoff = overlapRatio * len(predictedMembers)
found = False
for idx, label in enumerate(labels):
labelMembers = set([tuple(elem) for elem in label.getMembers()])
#if the predictedOverlap is over the detectionCutoff ratio
if len(predictedMembers & labelMembers) >= detectionCutoff:
truePositives +=1
found=True
labelFound[idx] = 1
if not found:
falsePositives +=1
precision = truePositives/float(truePositives + falsePositives)
recall = np.count_nonzero(labelFound)/float(len(labels))
f1 = 0
try:
f1 = 2 * (precision*recall)/(precision + recall)
except ZeroDivisionError:
f1 = 0
return precision, recall, f1
Explanation: The set of 5 zeros validates that the error in the F1 code is not the extraction of the cluster lists
Precision Recall F1 2.0
End of explanation
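For reference, the quantities computed by precision_recall_f1 above are the usual ones,
$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{\#\text{labels detected}}{\#\text{labels}}, \qquad F_1 = 2\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}},$$
where a predicted cluster counts as a true positive as soon as its overlap with some label reaches overlapRatio times the prediction's own size.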
myTestVolume, _ = generateTestVolume()
testList = clusterThresh(myTestVolume, 0, 100000)
#run the code on a test volume identical labels
precision, recall, f1 = precision_recall_f1(testList, testList, 1)
print precision, recall, f1
Explanation: Precision Recall F1 2.0 Testing
First, the algorithm will be tested on a case where it should get 100% precision and 100% recall
End of explanation
statList = []
for i in range(1, 11):
percentile = i/10.
predictions = np.random.choice(testList, int(percentile * len(testList)), replace=False)
precision, recall, f1 = precision_recall_f1(testList, predictions, 1)
statList.append([percentile, precision, recall])
print 'Percentile: ', percentile
print '\t precision: ', precision
print '\t recall: ', recall
fig = plt.figure()
elemwiseStats = zip(*(statList))
plt.title('Recall vs Percentile Passed')
plt.scatter(elemwiseStats[0], elemwiseStats[2])
plt.show()
fig = plt.figure()
plt.title('Precision vs Percentile Passed')
plt.scatter(elemwiseStats[0], elemwiseStats[1], c='r')
plt.show()
Explanation: The 1 precision, recall and f1 metrics show that the algorithm performs as expected on the case of labels being their own predicions
Next, the recall will be tested by randomly selecting a percentile of the true clusters to be passed to the prediction. This should modulate only the recall, and not the precision, of the data, as all clusters being passed are still "correct" predictions. If the algorithm works as expected, I should see a constant precision metric and an upwardly sloping recall metric with slope of 10% That is, for every additional 10% of the labels included in the predictions, the recall sould increase by 10%
End of explanation
statList = []
#get list points in data that I can populate noise clusters with
noCluster = zip(*(np.where(myTestVolume == 0)))
for i in range(0, 10):
#get the number of noise clusters that must be added for data to be diluted
#to target percent
percentile = i/10.
numNoise = int(percentile * len(testList)/float(1-percentile))
#generate the prediction + noise list
noiseList =[]
for j in range(numNoise):
badPoint = noCluster[rand(0, len(noCluster)-1)]
noiseList.append(Cluster([list(badPoint)]))
predictions = testList + noiseList
precision, recall, f1 = precision_recall_f1(testList, predictions, 1)
statList.append([percentile, precision, recall])
print 'Percentile: ', percentile
print '\t precision: ', precision
print '\t recall: ', recall
fig = plt.figure()
elemwiseStats = zip(*(statList))
plt.title('Recall vs Percentile of Data that is Not Noise')
plt.scatter(elemwiseStats[0], elemwiseStats[2])
plt.show()
fig = plt.figure()
plt.title('Precision vs Percent of Data that is Not Noise')
plt.scatter(elemwiseStats[0], elemwiseStats[1], c='r')
plt.show()
Explanation: The linearly increasing nature of recall plot demonstrates that the recall portion of the code indeed corresponds to the ratio of true labels passed in.
Additionally, the precision plot being constant shows that modulating only the recall has no affect on the precision of the data
Next, the data will be diluted such that it contains a portion of noise clusters. This should change the precision, but not the recall, of the data. If this algorithm works as expected, it will produce a constant recall and a downward sloping precision curve with a slope of 10%. That is, for every additional 10% of noise added to the predictions, the precision should drop by 10%
End of explanation
statList = []
#generate the list of eroded clusters
erodedList = []
for idx, cluster in enumerate(testList):
percentile = (idx%10)/10. + .1
members = cluster.getMembers()
erodedList.append(Cluster(members[:int(len(members)*percentile)]))
for i in range(1, 11):
percentile = i/10.
precision, recall, f1 = precision_recall_f1(erodedList, testList, percentile)
statList.append([percentile, precision, recall])
print 'Percentile: ', percentile
print '\t precision: ', precision
print '\t recall: ', recall
fig = plt.figure()
elemwiseStats = zip(*(statList))
plt.title('Recall vs Percent Required Overlap')
plt.scatter(elemwiseStats[0], elemwiseStats[2])
plt.show()
fig = plt.figure()
elemwiseStats = zip(*(statList))
plt.title('Precision vs Percent Required Overlap')
plt.scatter(elemwiseStats[0], elemwiseStats[1])
plt.show()
Explanation: As these plots demonstrate, adding noise to the list of all true clusters delivers the expected result, that the precision drops, and the recall remains constant.
In our data, there is not a guarantee that the predicted clusters and the actual clusters will exactly overlap. In fact, this is likely not the case. However, we would not like to consider a cluster a false positive if it only differs from the true cluster by one pixel. For this reason, I have included an overlapRatio parameter to vary how much overlap between a prediction and a true cluster must exist for the prediction to be considered correct
In the following simulation, the cluster labels will be evenly divided and then eroded between 10% and 100%. I will then run the precision recall code against them with an ever increasing percent overlap metric. If the code works, I expect both the precision and the recall to drop by about 10% for every 10% increase in the percent overlap metric.
End of explanation |
6,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 23 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you will
Step1: Task 2
Step2: Task 3
Step3: Task 4
Step4: Task 5
Step6: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Put your code here!
Explanation: Day 23 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you will:
Create and slice multi-dimensional numpy arrays
Plot 2D numpy arrays
Make an animation of 2D numpy arrays
Assignment instructions
First, work through the Numpy 2D array tutorial (Numpy_2D_array_tutorial.ipynb).
After that, write code to do the following things:
Task 1: Create a 2D Numpy array, named A, that is a 10x10 array integers, each of which is set to 0. Write a pair of for loops that iterate over A and sets A[i,j] = i + j. Print that array out to verify that it's behaving as expected - it should look like this:
[[ 0 1 2 3 4 5 6 7 8 9]
[ 1 2 3 4 5 6 7 8 9 10]
[ 2 3 4 5 6 7 8 9 10 11]
[ 3 4 5 6 7 8 9 10 11 12]
[ 4 5 6 7 8 9 10 11 12 13]
[ 5 6 7 8 9 10 11 12 13 14]
[ 6 7 8 9 10 11 12 13 14 15]
[ 7 8 9 10 11 12 13 14 15 16]
[ 8 9 10 11 12 13 14 15 16 17]
[ 9 10 11 12 13 14 15 16 17 18]]
End of explanation
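One possible solution sketch for Task 1 (names follow the task text):
import numpy as np
A = np.zeros((10, 10), dtype=int)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        A[i, j] = i + j
print(A)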
# Put your code here!
Explanation: Task 2: Use Numpy's array slicing capabilities to create a new array, B, which is a subset of the values in array A. Specifically, extract every second element in both dimensions in array A, starting with the second element (i.e., index=1) in each dimension. Store this in B, and print out array B. It should look like this:
[[ 2 4 6 8 10]
[ 4 6 8 10 12]
[ 6 8 10 12 14]
[ 8 10 12 14 16]
[10 12 14 16 18]]
End of explanation
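A sketch for Task 2: every second element in both dimensions, starting at index 1, is a single slice.
import numpy as np
A = np.add.outer(np.arange(10), np.arange(10))   # the same array as in Task 1
B = A[1::2, 1::2]
print(B)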
# Put your code here!
Explanation: Task 3: Using Numpy's slicing capabilities, extract the second and third rows of array B and store them in a third array, C. Print out C. It should look like this:
[[ 4 6 8 10 12]
[ 6 8 10 12 14]]
End of explanation
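A sketch for Task 3: the second and third rows of B are row indices 1 and 2.
import numpy as np
B = np.add.outer(np.arange(10), np.arange(10))[1::2, 1::2]
C = B[1:3, :]
print(C)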
# Put your code here!
Explanation: Task 4: Write a function, called add_neighborhood(), that:
Takes in an array as an argument
Creates an array, D, that is the same shape and data type as the incoming array but full of zeros.
Loops over all of the elements of the incoming array (using the shape() method to adjust for the fact that you don't know what its size is) and sets D[i,j] equal to the values of A[i,j] plus its four neighbors, A[i+1,j], A[i-1,j], A[i,j+1], A[i,j-1]. If you are at the edge or corner of the array (say, at A[0,0]) do not include any values that go over the edge of the array (into negative numbers or beyond the last index in any dimension).
Return the array D and print it out once it has been returned from the function.
Test this out using array A and B. When applied to array A, you should get this output:
[[ 2 5 9 13 17 21 25 29 33 27]
[ 5 10 15 20 25 30 35 40 45 39]
[ 9 15 20 25 30 35 40 45 50 43]
[13 20 25 30 35 40 45 50 55 47]
[17 25 30 35 40 45 50 55 60 51]
[21 30 35 40 45 50 55 60 65 55]
[25 35 40 45 50 55 60 65 70 59]
[29 40 45 50 55 60 65 70 75 63]
[33 45 50 55 60 65 70 75 80 67]
[27 39 43 47 51 55 59 63 67 52]]
and when you apply this function to array B, you should get this output:
[[10 18 26 34 30]
[18 30 40 50 46]
[26 40 50 60 54]
[34 50 60 70 62]
[30 46 54 62 50]]
Note: Make sure that the edges and corners have the right values!
End of explanation
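One way to write add_neighborhood() (a sketch; the inner loop simply skips neighbors that fall outside the array):
import numpy as np
def add_neighborhood(arr):
    D = np.zeros_like(arr)
    n_rows, n_cols = arr.shape
    for i in range(n_rows):
        for j in range(n_cols):
            total = arr[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n_rows and 0 <= nj < n_cols:
                    total += arr[ni, nj]
            D[i, j] = total
    return D
A = np.add.outer(np.arange(10), np.arange(10))
print(add_neighborhood(A))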
# Put your code here!
Explanation: Task 5: Using the pyplot matshow() method, plot array A in a plot that uses a color map of your choice (that is not the default color map!), and where the axes are invisible.
End of explanation
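A sketch for Task 5 ('plasma' is just one example of a non-default colormap):
import numpy as np
import matplotlib.pyplot as plt
A = np.add.outer(np.arange(10), np.arange(10))
plt.matshow(A, cmap='plasma')
plt.axis('off')   # hide the axes
plt.show()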
from IPython.display import HTML
HTML("""
<iframe
    src="https://goo.gl/forms/VwY5ods4ugnwidnG2?embedded=true"
    width="80%"
    height="1200px"
    frameborder="0"
    marginheight="0"
    marginwidth="0">
    Loading...
</iframe>
""")
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
6,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Query Coordinated Canyon Experiment database for BED information
Connect to a remote database, select specific data using Django queries
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system — this will take a few hours and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands
Step1: For each BED event (activity) print start and end times and locations
Step2: Validate the query by a spot check of one of the NetCDF files for the start and end data. Let's choose the penultimate one | Python Code:
acts = (Activity.objects.using('stoqs_cce2015')
.filter(name__contains='trajectory')
.order_by('name'))
Explanation: Query Coordinated Canyon Experiment database for BED information
Connect to a remote database, select specific data using Django queries
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system — this will take a few hours and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands:
vagrant ssh -- -X
cd /vagrant/dev/stoqsgit
source venv-stoqs/bin/activate
Connect to your Institution's STOQS database server using read-only credentials. (Note: firewalls typically limit unprivileged access to such resources.)
cd stoqs
ln -s mbari_campaigns.py campaigns.py
export DATABASE_URL=postgis://everyone:[email protected]:5433/stoqs
Launch Jupyter Notebook on your system with:
cd contrib/notebooks
../../manage.py shell_plus --notebook
navigate to this file and open it. You will then be able to execute the cells and experiment with this notebook.
For reference please see the STOQS schema diagram.
Get list of Activities that are BED event trajectories:
End of explanation
fmt = '\t{}: {}, {:.6f}, {:.6f}, {:.2f}'
for activity in acts:
measuredparameters = (MeasuredParameter.objects.using('stoqs_cce2015')
.filter(measurement__instantpoint__activity=activity)
.order_by('measurement__instantpoint__timevalue'))
start = measuredparameters.earliest('measurement__instantpoint__timevalue')
end = measuredparameters.latest('measurement__instantpoint__timevalue')
print('{}'.format(activity))
print(fmt.format('Start',
start.measurement.instantpoint,
start.measurement.geom.x,
start.measurement.geom.y,
start.measurement.depth))
print(fmt.format('End ',
end.measurement.instantpoint,
end.measurement.geom.x,
end.measurement.geom.y,
end.measurement.depth))
Explanation: For each BED event (activity) print start and end times and locations:
End of explanation
from coards import from_udunits
print(str(from_udunits(95896909, 'seconds since 2013-01-01 00:00:00')))
print(str(from_udunits(95896956.0000112, 'seconds since 2013-01-01 00:00:00')))
Explanation: Validate the query by a spot check of one of the NetCDF files for the start and end data. Let's choose the penultimate one: 50200057_trajectory.nc (stride=1). The OPeNDAP URL for the NetCDF file is http://elvis64.shore.mbari.org/opendap/data/CCE_Processed/BEDs/BED05/MBCCE_BED05_20151027_Event20160115/netcdf/50200057_trajectory.nc.html. From this form we can select the coordinate variables and choose the first and last indices to construct .ascii requests:
First:
```
http://elvis64.shore.mbari.org:8080/opendap/data/CCE_Processed/BEDs/BED05/MBCCE_BED05_20151027_Event20160115/netcdf/50200057_trajectory.nc.ascii?time[0:1:0],latitude[0:1:0],longitude[0:1:0],depth[0:1:0]
Dataset: 50200057_trajectory.nc
time, 95896909
latitude.time, 95896909
latitude.latitude, 36.797185
longitude.time, 95896909
longitude.longitude, -121.88547
depth.time, 95896909
depth.depth, 426.721524445596
Last:
http://elvis64.shore.mbari.org:8080/opendap/data/CCE_Processed/BEDs/BED05/MBCCE_BED05_20151027_Event20160115/netcdf/50200057_trajectory.nc.ascii?time[235:1:235],latitude[235:1:235],longitude[235:1:235],depth[235:1:235]
Dataset: 50200057_trajectory.nc
time, 95896956.0000112
latitude.time, 95896956.0000112
latitude.latitude, 36.797514
longitude.time, 95896956.0000112
longitude.longitude, -121.887633
depth.time, 95896956.0000112
depth.depth, 429.287069222699
```
The time values can be converted from 'seconds since 2013-01-01 00:00:00' to a string that we can compare to report from STOQS:
End of explanation |
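A small helper (hypothetical, using only the OPeNDAP base URL shown above) can build these .ascii requests from a record index, which makes it easy to spot-check other records:
```
# Build an OPeNDAP .ascii request for a single record index of the trajectory file.
base = ('http://elvis64.shore.mbari.org:8080/opendap/data/CCE_Processed/BEDs/BED05/'
        'MBCCE_BED05_20151027_Event20160115/netcdf/50200057_trajectory.nc.ascii')

def ascii_request(index):
    # one [i:1:i] hyperslab per coordinate variable
    vars_ = ','.join('{v}[{i}:1:{i}]'.format(v=v, i=index)
                     for v in ('time', 'latitude', 'longitude', 'depth'))
    return '{}?{}'.format(base, vars_)

print(ascii_request(0))    # first record
print(ascii_request(235))  # last record
```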
6,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objectives
Show effect and validity of Wiener-Khinchin
Method
Step1: Parameters
Step2: Signals and their spectra
Step3: Plotting | Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 10) )
Explanation: Content and Objectives
Show effect and validity of Wiener-Khinchin
Method: several windowed versions of the signal are generated and their FFTs are determined; longer windows concentrate the normalized spectrum around the sine frequency
Import
End of explanation
# set time resp. pulse interval and related parameters
t_min = -0.0
t_max = 50.0
t_s = 0.1 # sample time
t = np.arange(t_min, t_max+t_s, t_s)
# frequency of sine
f0=1
# frequency regime (mind properties of FFT)
f=np.arange( -1/(2*t_s), 1/(2*t_s)+1/(t_max-t_min), 1/(t_max-t_min))
Explanation: Parameters
End of explanation
# original signal
x = np.sin( 2*np.pi * f0 * t )
# windowed versions and FFT thereof
T1 = 2
x_T1 = x * [ (tau < T1) for tau in t]
X_T1 = abs( np.fft.fftshift(np.fft.fft( x_T1 )))**2 / (2*T1)
T2 = 10
x_T2 = x * [ (tau < T2) for tau in t]
X_T2 = abs( np.fft.fftshift(np.fft.fft( x_T2 )))**2 / (2*T2)
T3 = 50
x_T3 = x * [ (tau < T3) for tau in t]
X_T3 = abs( np.fft.fftshift(np.fft.fft( x_T3 )))**2 / (2*T3)
Explanation: Signals and their spectra
End of explanation
# plotting
plt.figure(1)
plt.subplot(321)
plt.plot(t, x_T1, label='$x_{2}(t)$')
plt.grid(True); #plt.xlabel('$t$');
#plt.ylabel('$x_{2}(t)$')
plt.legend( loc='upper right')
#plt.annotate('$T=2.0$', xy=(1.0,0.0), xytext=(10,0.5), )
plt.subplot(323)
plt.plot(t, x_T2, label='$x_{10}(t)$')
plt.grid(True); #plt.xlabel('$t$');
#plt.ylabel('$x_{10}(t)$')
plt.legend( loc='upper right')
#plt.annotate('$T=10.0$', xy=(1.0,0.0), xytext=(15,0.5), )
plt.subplot(325)
plt.plot(t, x_T3, label='$x_{50}(t)$')
plt.grid(True); plt.xlabel('$t/\mathrm{s}$');
#plt.ylabel('$x_{50}(t)$')
plt.legend( loc='upper right')
plt.subplot(322)
plt.plot(f, X_T1, label='$|X_{2}(f)|^2$')
plt.grid(True); #plt.xlabel('$f$');
plt.legend( loc='upper right')
#plt.ylabel('$|X_{2}(f)|^2/4$')
plt.subplot(324)
plt.plot(f, X_T2, label='$|X_{10}(f)|^2$')
plt.grid(True); #plt.xlabel('$f$');
#plt.ylabel('$|X_{10}(f)|^2/20$')
plt.legend( loc='upper right')
plt.subplot(326)
plt.plot(f, X_T3, label='$|X_{50}(f)|^2$')
plt.grid(True); plt.xlabel('$f/\mathrm{Hz}$');
#plt.ylabel('$|X_{50}(f)|^2/100$')
plt.legend( loc='upper right')
Explanation: Plotting
End of explanation |
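As an optional numerical sanity check of the Wiener-Khinchin relation (not in the original code), the squared magnitude spectrum of the longest windowed signal can be compared with the Fourier transform of its autocorrelation:
```
# Wiener-Khinchin check for the longest window: the periodogram equals the
# Fourier transform of the (deterministic) autocorrelation of the signal.
N = len(x_T3)
r_xx = np.correlate(x_T3, x_T3, mode='full')          # linear autocorrelation, lags -(N-1)..(N-1)
psd_from_acf = np.abs(np.fft.fft(r_xx))               # |.| removes the lag-ordering phase factor
psd_direct   = np.abs(np.fft.fft(x_T3, n=2*N-1))**2   # periodogram on the same frequency grid
print(np.allclose(psd_from_acf, psd_direct))          # expected: True (up to numerical precision)
```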
6,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Real-time Patient Monitoring with Streams and BioSPPy
This notebook shows how to use Python to analyze medical data in real time using Streams and existing Python modules like BioSPPy and SciPy.
It shows how the Streams health toolkit makes it easy for clinicians to get started developing applications to monitor patients in real time.
<img src="https
Step1: <a name="notcpd"></a>
Option 2
Step2: 1.2 Install required modules
This notebook requires version 1.14.13 or later of the streamsx package. Check the version below and upgrade if needed.
Step3: Import the healthdemo utility package
This is a set of utilities from the streamsx.health package used in this application
Step4: <a id="createfeed"></a>
Step 2
Step5: <a id="visualization"></a>
Part 4
Step6: <a id="providedata"></a>
4.2 Provide data for the graphs
This cell is responsible for propagating the graph objects with data in the view.
The view data contains vital data for all patients, and is continuously retrieved from the Streaming Analytics service in a background job. Each graph object receives data for a specified patient. The graph objects extract and store the data that is relevant for that particular graph.
Step7: <a id="displaygraphs"></a>
4.3 Display the graphs
This cell is responsible for laying out and displaying the graphs.
Each time a call to update() is made on a graph object, the next data point is retrieved and displayed. Each graph object maintains an internal queue so that each time a call to update() is made, the next element in the queue is retrieved and removed.
There is a loop that continuously calls the update() method on each of the graphs for 60 seconds. After each graph has been updated, a call to push_notebook() is made, which causes the notebook to update the graphics.
The graphs will stop updating after 60 seconds. To extend the period for graph update, change the timeout variable.
To restart graph updates after the timeout period | Python Code:
from icpd_core import icpd_util
from streamsx.topology.context import JobConfig
from streamsx.topology import context
streams_instance_name = ## Change this to Streams instance
try:
cfg=icpd_util.get_service_instance_details(name=streams_instance_name, instance_type="streams")
except TypeError:
cfg=icpd_util.get_service_instance_details(name=streams_instance_name)
def submit_topology(topo):
global cfg
# Disable SSL certificate verification if necessary
cfg[context.ConfigParams.SSL_VERIFY] = False
# Topology wil be deployed as a distributed app
contextType = context.ContextTypes.DISTRIBUTED
return context.submit (contextType, topo, config = cfg)
if cfg:
print("Successfully set up connection to Streams instance")
Explanation: Real-time Patient Monitoring with Streams and BioSPPy
This notebook shows how to use Python to analyze medical data in real time using Streams and existing Python modules like BioSPPy and SciPy.
It shows how the Streams health toolkit makes it easy for clinicians to get started developing applications to monitor patients in real time.
<img src="https://raw.githubusercontent.com/IBMStreams/streamsx.health/develop/samples/HealthcareJupyterDemo/images/notebook-viz.gif" alt="screenshot of running visualization"/>
<p style="text-align: center; font-size: 10px;"><em>Image showing the visualization using data from the Streams application.</em></p>
Streams Health Toolkit Overview
The toolkit includes microservices to:
- Ingest health data from popular devices and databases, like Physionet.
- Perform basic analysis: Early Warning Score (EWS) computation, ECG, etc.
So for example, this notebook is going to analyze ECG signals to compute Heart Rate Variability using scipy.
So instead of spending time writing a connector to the Physionet database to get the ECG data, we can just use the Physionet microservice from the health toolkit that gives us the data we need to start developing our application.
The following diagram outlines the architecture of this demo. The Ingest part is handled by launching the Physionet ingest service, the notebook handles the Analyze portion.
<img height="700" width="900" src='https://github.com/IBMStreams/streamsx.health/blob/develop/samples/HealthcareJupyterDemo/images/architecture_diagram.jpg?raw=true' alt="Demo Architecture" title="Demo Architecture"></img>
Prerequisites
This notebook can be used as-is from within an IBM Cloud Pak for Data project.
If you are not running this notebook from within IBM Cloud Pak for Data, follow these steps to make sure you have installed all the prerequisites.
<a name="setup"></a>
1. Set up a connection to the Streams instance
To submit the application for execution, you have to connect to the Streams instance. The information required to connect to the instance depends on the target installation of Streams.
Choose the option that matches your development environment.
Option 1: I'm running the notebook from an IBM Cloud for Data project
Option 2: I'm using IBM Watson Studio, Jupyter Notebooks, or any other development environment
<a name="cpd"></a>
<a name="cpd"></a>
Option 1: Connect to a Streams instance from an IBM Cloud Pak for Data project
If you are not running the notebook from a Cloud Pak for Data project, skip to the next section.
In order to submit a Streams application you need to provide the name of the Streams instance.
From the navigation menu, click Services > Instances. This will take you to a list of instances.
Find your streams instance and update the value of streams_instance_name in the cell below according to your Streams instance name.
Run the cell and skip to section 1.2
The cell below defines a function called submit_topology that will be used later on to submit the Topology once it is defined.
End of explanation
# paste connection code here
Explanation: <a name="notcpd"></a>
Option 2: Connect to a Streams instance from IBM Watson Studio and other environments
Skip this section if you are running the notebook from a Cloud Pak for Data project.
The code for each scenario is available in the development guide.
Each snippet will define a function called submit_topology that will be used later on to submit the Topology once it is defined.
Choose the tab that best matches your environment.
Copy the code under the heading Copy this code snippet.
Paste it in the cell below.
Connection instructions from the development guide
End of explanation
import sys
import streamsx.topology.context
print("INFO: streamsx package version: " + streamsx.topology.context.__version__)
#For more details uncomment line below.
#!pip show streamsx
# Uncomment this line to upgrade the streamsx package
#!pip install --user --upgrade streamsx
Explanation: 1.2 Install required modules
This notebook requires version 1.14.13 or later of the streamsx package. Check the version below and upgrade if needed.
End of explanation
!pip install "https://github.com/IBMStreams/streamsx.health/raw/develop/samples/HealthcareJupyterDemo/whl/healthdemo-1.0-py3-none-any.whl"
Explanation: Import the healthdemo utility package
This is a set of utilities from the streamsx.health package used in this application
End of explanation
from streamsx.topology import schema
from streamsx.topology.topology import Topology
from streamsx.topology.context import submit
## The healthdemo package provides tools to analyze patient data
## See https://github.com/IBMStreams/streamsx.health/tree/develop/samples/HealthcareJupyterDemo/package
from healthdemo.patientmonitoring_functions import streaming_rpeak
from healthdemo.healthcare_functions import GenTimestamp, aggregate
from healthdemo.windows import SlidingWindow
from healthdemo.utils import get_patient_id
topo = Topology('PatientMonitoringDemo')
## The ingest-physionet provides data at a rate of 125 tuples/sec
sample_rate = 125
## Subscribe to the topic
patients_data_source = topo.subscribe('ingest-physionet', schema.CommonSchema.Json)
## Add timestamp to the data, so you can perform windowing
patients_data_source = patients_data_source.map(GenTimestamp(sample_rate))
## Generate a window based on the provided sample_rate
patients_data_window = patients_data_source.last(size=sample_rate).trigger(sample_rate-1).partition(get_patient_id)
## Aggregate the data within the window and create a tuple
patients_data = patients_data_window.aggregate(aggregate)
## Process data from 'ECG Lead II' and calculate RPeak and RR delta
patients_data = streaming_rpeak(patients_data, sample_rate, data_label='ECG Lead II')
## Create view for viewing patient vital data
patients_vital = patients_data.view(name='patients_vitals')
## include the healthdemo package so it is accessible at runtime
topo.add_pip_package(requirement="https://github.com/IBMStreams/streamsx.health/raw/develop/samples/HealthcareJupyterDemo/whl/healthdemo-1.0-py3-none-any.whl",
name="healthdemo")
print ("Submitting topology for execution..")
result = submit_topology(topo)
if (result and result.job):
print ("Submitted job successfully, job id: " + str(result.job.id))
Explanation: <a id="createfeed"></a>
Step 2: Start the Physionet ingest service
We will analyze simulated data generated by a pre-compiled Streams application called the PhysionetIngestService. This is a microservice, or small application, that retrieves patient waveform and vital data from a Physionet database (https://www.physionet.org/) and makes it available to other applications. The Python application we will create later in this notebook will connect to the PhysionetIngestService.
To start the PhysionetIngestService,
Download and save the compiled application: https://github.com/IBMStreams/streamsx.health/releases/download/v0.1/com.ibm.streamsx.health.physionet.PhysionetIngestServiceMulti.sab.
First open the Streams Console:
From IBM Cloud Pak for Data:
From the navigation menu, click Services > Instances.
Click on your Streams instance.
In the details page that opens, look for the list of Streams external endpoints.
Click the Console link to open the Streams Console.
If you are not using Cloud Pak for Data, see this document for steps to open the Streams Console in your installation.
From the Streams Console, click Submit job:
Select the .sab file you downloaded earlier, and click Submit.
Click Submit.
Click OK in the Submission-time parameters dialog.
<a id="buildapp"></a>
Step 3: Build a streaming app
Now you're ready to create and run the HealthcareDemo Python streaming application.
The following cell contains source code for the Python Topology application. This is a Python streaming application that ingests the patient data from the ingest-physionet topic, and performs analysis on the patient data to calculate vital data for all patients. It finally creates a view for displaying the result of the analysis.
End of explanation
## load BokehJS visualization library (must be loaded in a separate cell)
from bokeh.io import output_notebook, push_notebook
from bokeh.resources import INLINE
output_notebook(resources=INLINE)
%autosave 0
%reload_ext autoreload
%autoreload 1
from healthdemo.medgraphs import ECGGraph, PoincareGraph, NumericText, ABPNumericText
## Select which patient's data to plot
patientId = 'patient-1'
graph = {
'leadII_poincare': PoincareGraph(signal_label='Poincare - ECG Lead II', title='Poincare - ECG Lead II'),
'ecg_leadII_graph': ECGGraph(signal_label='ECG Lead II', title='ECG Lead II',
plot_width=600, min_range=-0.5, max_range=2.0),
'ecg_leadV_graph': ECGGraph(signal_label='ECG Lead V', title='ECG Lead V', plot_width=600),
'resp_graph': ECGGraph(signal_label='Resp', title='Resp', min_range=-1, max_range=3, plot_width=600),
'pleth_graph': ECGGraph(signal_label='Pleth', title='Pleth', min_range=0, max_range=5, plot_width=600),
'hr_numeric': NumericText(signal_label='HR', title='HR', color='#7cc7ff'),
'pulse_numeric': NumericText(signal_label='PULSE', title='PULSE', color='#e71d32'),
'spo2_numeric': NumericText(signal_label='SpO2', title='SpO2', color='#8cd211'),
'abp_numeric': ABPNumericText(abp_sys_label='ABP Systolic', abp_dia_label='ABP Diastolic',
title='ABP', color='#fdd600')
}
print ("DONE")
Explanation: <a id="visualization"></a>
Part 4: Visualization
Complete the following steps to visualize the results of your app:
4.1 Set up graphs for plotting patient vitals<br>
4.2 Provide data for the graphs<br>
4.3 Display the graphs<br>
<a id="setupgraphs"></a>
4.1 Set up graphs for plotting patient vitals
This cell initializes the nine graphs which will be used to display one patient's vital data.
Each property of the patient's vital data is identified by the signal label. Each graph is initialized by providing the signal label it plots and a title.
End of explanation
from healthdemo.utils import get_patient_id
patients_vital = patients_vital
continue_data_collection = True
## retrieve data from Streams view in a background job
def data_collector(view, g):
queue = view.start_data_fetch()
while continue_data_collection:
tup = queue.get()
if patientId == get_patient_id(tup):
for graphtype in g:
g[graphtype].add(tup)
view.stop_data_fetch()
from IPython.lib import backgroundjobs as bg
jobs = bg.BackgroundJobManager()
jobs.new(data_collector, patients_vital, graph)
Explanation: <a id="providedata"></a>
4.2 Provide data for the graphs
This cell is responsible for propagating the graph objects with data in the view.
The view data contains vital data for all patients, and is continuously retrieved from the Streaming Analytics service in a background job. Each graph object receives data for a specified patient. The graph objects extract and store the data that is relevant for that particular graph.
End of explanation
import time
from bokeh.io import show
from bokeh.layouts import column, row, widgetbox
import bokeh
enableHandle = True
## display graphs for a patient
t = show(
row(
column(
graph['ecg_leadII_graph'].get_figure(),
graph['ecg_leadV_graph'].get_figure(),
graph['resp_graph'].get_figure(),
graph['pleth_graph'].get_figure()
),
column(
graph['leadII_poincare'].get_figure(),
widgetbox(graph['hr_numeric'].get_figure()),
widgetbox(graph['pulse_numeric'].get_figure()),
widgetbox(graph['spo2_numeric'].get_figure()),
widgetbox(graph['abp_numeric'].get_figure())
)
),
notebook_handle=enableHandle
)
## Timeout(in seconds) before stopping the graph
timeout = 30
endtime = time.time() + timeout
cnt = 0
while time.time() < endtime:
## update graphs
for graphtype in graph:
graph[graphtype].update()
## update notebook
cnt += 1
if cnt % 5 == 0:
#output_notebook()
#show(..., notebook_handle=True)
push_notebook(handle=t) ## refresh the graphs
cnt = 0
time.sleep(0.008)
# Stop data collection running in background thread
continue_data_collection = False
Explanation: <a id="displaygraphs"></a>
4.3 Display the graphs
This cell is responsible for laying out and displaying the graphs.
Each time a call to update() is made on a graph object, the next data point is retrieved and displayed. Each graph object maintains an internal queue so that each time a call to update() is made, the next element in the queue is retrieved and removed.
There is a loop that continuously calls the update() method on each of the graphs until the timeout is reached (30 seconds with the timeout value set in the cell above). After each graph has been updated, a call to push_notebook() is made, which causes the notebook to update the graphics.
The graphs will stop updating once the timeout period has elapsed. To extend the update period, increase the timeout variable.
To restart graph updates after the timeout period:
Rerun the cell in 4.2 Provide data for the graphs to restart the background thread to fetch data.
Rerun the cell in this section to restart graph updates.
End of explanation |
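As an optional refactoring of the cell above, the update loop can be wrapped in a function so it is easy to restart with a different timeout; this sketch only reuses objects already defined above (graph, t, push_notebook):
```
# Re-run the graph update loop on demand with a configurable timeout.
def run_graph_updates(timeout_seconds=60):
    endtime = time.time() + timeout_seconds
    cnt = 0
    while time.time() < endtime:
        for graphtype in graph:
            graph[graphtype].update()     # advance each graph by one data point
        cnt += 1
        if cnt % 5 == 0:
            push_notebook(handle=t)       # refresh the displayed figures
            cnt = 0
        time.sleep(0.008)

# run_graph_updates(timeout_seconds=120)  # remember to restart the data collector first
```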
6,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<a href="http
Step1: La commande suivante permet de verifier qu'une carte GPU est bien disponible sur la machine utilisée. Si c'est le cas et si Keras a bien été installé dans la configuration GPU (c'est généralement le cas dans l'environement virtuel GPU d'Anaconda), deux options vont apparaitre, une CPU et une GPU. La configuration GPU sera alors automatiquement utilisée.
Step2: Prise en charge des données
Structure des données
Les données originales peuvent être téléchargées à partir du site kaggle.
L'ensemble d'apprentissage contient 25.000 images. C'est beaucoup trop pour des machines usuelles à moins de se montrer très patient. Aussi, deux sous-échantillons d'apprentissage ont été créés et disposés dans le dépôt.
100 images de chats et 100 images de chiens plus un échantillon de validation consitué de 40 images de chats et 40 images de chien.
1000 images de chats et 1000 images de chiens plus un échantillon de validation consitué de 400 images de chats et 400 images de chien.
Pour utiliser certaines fonctionnalités de Keras, les données doivent être organisées selon une abrorescence précise. Les fichiers appartenant à une même classe doivent être dans un même dossier.
data_dir
└───subsample/
│ └───train/
│ │ └───cats/
│ │ │ │ cat.0.jpg
│ │ │ │ cat.1.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.0.jpg
│ │ │ │ dog.1.jpg
│ │ │ │ ...
│ └───test/
│ │ └───cats/
│ │ │ │ cat.1000.jpg
│ │ │ │ cat.1000.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.1000.jpg
│ │ │ │ dog.1000.jpg
│ │ │ │ ...
N.B. Des sous-échantillons plus importants créés à partir des données originales doivent être enregistrés en respectant scrupuleusement cette structure.
Création d'un jeu d'apprentissage et de validation
Spécifier le chemin du dossier contenant les données, si ce n'est pas le répertoire courant, ainsi que les tailles des échantillons d'apprentissage et de validation.
Step3: Illustration des données
La fonction load_img permet de charger une image comme une image PIL.
Step4: La fonction img_to_array génére un array numpy a partir d'une image PIL .
Step5: Pré-traitements
Les images du jeu de données sont de dimensions différentes
Step6: Or les images doivent être de même dimensions pour être utilisée dans un même réseau.
La fonction ImageDataGeneratorde Keras permet de remédier à ce problème.
Plus généralement cette fonction applique un certain nombre de traitements (transformation, normalisation) aléatoires sur les images de sorte que le modèle n'apprenne jamais deux fois la même image.
Quelques arguments de cette fonction
Step7: La commande .flow() genere de nouveaux exemples à partir de l'image originale et les sauve dans le dossier spécifié dans save_to_dir.
On force l'arrêt de cette génération après huits images générées.
Step8: Illustration des images transformées.
Step9: Classification d'image à l'aide du Deep Learning
Dans un premier temps, nous allons fixer le nombre d'epochs ainsi que la taille de notre batch afin que ces deux paramètres soit communs aux différentes méthodes que nous allons tester.
Queques règles à suivre pour le choix de ces paramètres
Step10: Réseau convolutionnel
Dans un premiers temps, on construit notre propre réseau de neurones convolutionnel.
Génération des données
On définit deux objets ImageDataGenerator
Step11: Définition du modèle
Le modèle est consitué de 3 blocs de convolution consitutés chacun de
Step12: Apprentissage
Step13: Prédiction
Step14: Q Commentez les valeurs de prédictions d'apprentissage et de validation. Comparez les avec les résultats de la dernière epochs d'apprentissage. Qu'observez vous? Est-ce normal?
Exercice Re-faites tournez ce modèle en ajoutant plus de transformation aléatoire dans le générateur d'image au moment de l'apprentissage. Que constatez-vous?
Réseau pré-entrainé
Step15: Création des caractéristiques
On applique alors les 5 blocs du modèle VGG16 sur les images de nos échantillons d'apprentissage et de validation.
Cette opération peut-être couteuse, c'est pourquoi on va sauver ces features dans des fichiers afin d'effectuer qu'une fois cette opération.
Si ces fichiers existent, les poids seront téléchargés, sinon il seront créés.
Step16: Construction d'un réseaux de neurone classique.
On construit un réseaux de neurones "classique", identique à la seconde partie du réseau précédent.
Attention
Step17: Apprentissage
Step18: Q Commentez les performances de ce nouveau modèle
Nous allons également sauver les poids de ce modèle afin de les réusiliser dans la prochaine partie.
Step19: Prédiction
Step20: Ajustement fin du réseau VGG16
Dans la partie précédente, nous avons configurer un bloc de réseaux de neurones, à même de prendre en entrée les features issues des transformation des 5 premiers blocs de convolution du modèle VGG16.
Dans cette partie, nous allons 'brancher' ce bloc directement sur les cinq premiers blocs du modèle VGG16 pour pouvoir affiner le modèle en itérant a la fois sur les blocs de convolution mais également sur notre bloc de réseau de neurone.
Création du modèle
On télécharge dans un premier temps le modèle VGG16, comme précédement.
Cependant, le modèle va cette fois être "entrainé" directement. Il ne va pas servir qu'a générer des features. Il faut donc préciser en paramètre la taille des images que l'on va lui donner.
Step21: On ajoute au modèle VGG, notre bloc de réseaux de neuronne construit précédemment pour générer des features.
Pour cela, on construit le bloc comme précédemment, puis on y ajoute les poids issus de l'apprentissage réalisé précédemment.
Step22: Enfin on assemble les deux parties du modèles
Step23: Gèle des 4 premiers blocs de convolution
En pratique, et pour pouvoir effectuer ces calculs dans un temps raisonable, nous allons "fine-tuner" seulement le dernier bloc de convolution du modèle, le bloc 5 (couches 16 à 19 dans le summary du modèle précédent) ainsi que le bloc de réseau de neurones que nous avons ajoutés.
Pour cela on va "geler" (Freeze) les 15 premières couches du modèle pour que leur paramètre ne soit pas optimiser pendant la phase d'apprentissage.
Step24: Generate Data
Step25: Apprentissage
Step26: Prédiction
Step27: Autres modèles
Keras possède un certain nombre d'autres modèles pré-entrainés | Python Code:
# Utils
import sys
import os
import shutil
import time
import pickle
import numpy as np
# Deep Learning Librairies
import tensorflow as tf
import keras.preprocessing.image as kpi
import keras.layers as kl
import keras.optimizers as ko
import keras.backend as k
import keras.models as km
import keras.applications as ka
# Visualisaiton des données
from matplotlib import pyplot as plt
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="max-width: 250px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
</center>
Ateliers: Technologies des grosses data
Reconnaissance d'images: cats vs. dogs
Tranfert d'apprentissage avec <a href="https://www.tensorflow.org/"><img src="https://avatars0.githubusercontent.com/u/15658638?s=200&v=4" width=100, style="display: inline" alt="TensorFlow"/></a> tensorflow et <a href="https://keras.io/"><img src="https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png" width=250, style="display: inline" alt="Keras"/></a>
Résumé
Apprentissage d'un réseau convolutionnel élémentaire puis utilisation de réseaux pré-entrainés (VGG16, InceptionV3) sur la base ImageNet afin de résoudre un autre exemple de reconnaissance d'images. Utilisation de Keras pour piloter la librairie tensorFlow. Comparaison des performances des réseaux et des environnements de calcul CPU et GPU.
Introduction
Objectifs
La reconnaissance d'images a franchi une étape majeure en 2012. L'empilement de couches de neurones, dont certaines convolutionnelles, ont conduit à des algorithmes nettement plus performants en reconnaissance d'image, traitement du langage naturel, et à l'origine d'un battage médiatique considérable autour de l'apprentissage épais ou deep learning. Néanmoins, apprendre un réseau profond comportant des milions de paramètres nécessite une base d'apprentissage excessivement volumineuse (e.g. ImageNet) avec des millions d'images labellisées.
L'apprentissage s'avère donc très couteux en temps de calcul, même avec des technologies adaptées (GPU). Pour résoudre ce problème il est possible d'utiliser des réseaux pré-entrainés. Ces réseaux possèdent une structure particulière, établie de façon heuristique dans différents départements de recherche (Microsoft: Resnet, Google: Inception V3, Facebook: ResNet) avant d'être ajustés sur des banques d'images publiques telles que ImageNet.
La stratégie de ce transfert d'apprentissage consiste à exploiter la connaissance acquise sur un problème de classification général pour l’appliquer à un problème particulier.
La librairie Keras permet de construire de tels réseaux en utlisant relativement simplement l'environnement tensorFlow de Google à partir de programmes récrits en Python. De plus Keras permet d'utiliser les performances d'une carte GPU afin d'atteindre des performances endant possible ce transfert d'apprentissage, même avec des réseaux complexes.
L'objectif de ce tutoriel est de montrer les capacités du transfert d'apprentissage permettant de résoudre des problèmes complexes avec des moyens de calcul modestes. Néanmoins, une carte GPU est vivement conseillé.
Ce tutoriel est en grande partie inspiré du blog de François Chollet à l'initiative de Keras.
Environnement matériel et logiciel
Keras et tensorFlow s'installent simplement à partir de la distribution Anaconda de Python.
End of explanation
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
MODE = "GPU" if "GPU" in [k.device_type for k in device_lib.list_local_devices()] else "CPU"
print(MODE)
Explanation: The following command checks that a GPU is available on the machine. If it is, and if Keras was installed with the GPU configuration (usually the case in the Anaconda GPU virtual environment), two devices will be listed, one CPU and one GPU. The GPU configuration is then used automatically.
End of explanation
data_dir = '' # chemin d'accès aux données
N_train = 200 #2000
N_val = 80 #800
data_dir_sub = data_dir+'subsample_%d_Ntrain_%d_Nval' %(N_train, N_val)
Explanation: Data handling
Data structure
The original data can be downloaded from the kaggle website.
The training set contains 25,000 images. That is far too many for ordinary machines unless you are very patient. Two training subsamples have therefore been created and placed in the repository:
100 cat images and 100 dog images, plus a validation sample of 40 cat images and 40 dog images.
1000 cat images and 1000 dog images, plus a validation sample of 400 cat images and 400 dog images.
To use some Keras features, the data must be organized in a specific directory tree. Files belonging to the same class must be in the same folder.
data_dir
└───subsample/
│ └───train/
│ │ └───cats/
│ │ │ │ cat.0.jpg
│ │ │ │ cat.1.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.0.jpg
│ │ │ │ dog.1.jpg
│ │ │ │ ...
│ └───test/
│ │ └───cats/
│ │ │ │ cat.1000.jpg
│ │ │ │ cat.1000.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.1000.jpg
│ │ │ │ dog.1000.jpg
│ │ │ │ ...
N.B. Larger subsamples created from the original data must be saved following this structure exactly.
Creating a training and a validation set
Specify the path of the folder containing the data, if it is not the current directory, as well as the sizes of the training and validation samples.
End of explanation
img = kpi.load_img(data_dir_sub+'/train/cats/cat.1.jpg') # this is a PIL image
img
Explanation: Illustrating the data
The load_img function loads an image as a PIL image.
End of explanation
x = kpi.img_to_array(img)
plt.imshow(x/255, interpolation='nearest')
plt.show()
Explanation: The img_to_array function builds a numpy array from a PIL image.
End of explanation
x_0 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.0.jpg"))
x_1 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg"))
x_0.shape, x_1.shape
Explanation: Preprocessing
The images in the dataset have different dimensions:
End of explanation
datagen = kpi.ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
Explanation: The images must all have the same dimensions to be used in the same network.
The Keras ImageDataGenerator function fixes this problem.
More generally, this function applies a number of random treatments (transformations, normalization) to the images so that the model never sees the exact same image twice.
Some arguments of this function:
* rotation_range: an interval of degrees within which the image may be rotated,
* width_shift and height_shift: ranges within which the data can be shifted horizontally or vertically,
* rescale: a value by which the data are multiplied,
* shear_range: shear transformation,
* zoom_range: allows zooming inside an image,
* horizontal_flip: randomly flips images along the horizontal axis,
* fill_mode: the strategy used to fill in pixels created by a transformation.
End of explanation
img_width = 150
img_height = 150
img = kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg") # this is a PIL image
x = kpi.img_to_array(img)
x_ = x.reshape((1,) + x.shape)
if not(os.path.isdir(data_dir_sub+"/preprocessing_example")):
os.mkdir(data_dir_sub+"/preprocessing_example")
i = 0
for batch in datagen.flow(x_, batch_size=1,save_to_dir=data_dir_sub+"/preprocessing_example", save_prefix='cat', save_format='jpeg'):
i += 1
if i > 7:
break
Explanation: The .flow() command generates new examples from the original image and saves them in the folder given by save_to_dir.
We stop this generation after eight images have been produced.
End of explanation
X_list=[]
for f in os.listdir(data_dir_sub+"/preprocessing_example"):
X_list.append(kpi.img_to_array(kpi.load_img(data_dir_sub+"/preprocessing_example/"+f)))
fig=plt.figure(figsize=(16,8))
fig.patch.set_alpha(0)
ax = fig.add_subplot(3,3,1)
ax.imshow(x/255, interpolation="nearest")
ax.set_title("Image original")
for i,xt in enumerate(X_list):
ax = fig.add_subplot(3,3,i+2)
ax.imshow(xt/255, interpolation="nearest")
ax.set_title("Random transformation %d" %(i+1))
plt.tight_layout()
plt.savefig("cats_transformation.png", dpi=100, bbox_to_anchor="tight", facecolor=fig.get_facecolor())
plt.show()
Explanation: Illustration of the transformed images.
End of explanation
epochs = 10
batch_size=20
Explanation: Image classification with deep learning
First, we fix the number of epochs and the batch size so that these two parameters are shared by the different methods we are going to test.
A few rules for choosing these parameters:
epochs: start with a relatively small number of epochs (2, 3) to gauge the computation time on your machine, then increase it accordingly.
batch_size: the batch size is the number of samples processed at each iteration within an epoch.
Important: with Keras, when the data are produced by a generator (see above), the batch size must be a divisor of the sample size. Otherwise the algorithm may behave abnormally without necessarily raising an error message. A helper for picking a valid batch size is sketched below.
End of explanation
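The divisor constraint mentioned above can be checked programmatically; the helper below is a sketch (valid_batch_sizes is not part of the original notebook):
```
# List batch sizes compatible with generator-based training, i.e. divisors of both sample sizes.
def valid_batch_sizes(n_train, n_val, max_size=64):
    return [b for b in range(1, max_size + 1) if n_train % b == 0 and n_val % b == 0]

print(valid_batch_sizes(N_train, N_val))  # with N_train=200, N_val=80: [1, 2, 4, 5, 8, 10, 20, 40]
```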
# this is the augmentation configuration we will use for training
train_datagen = kpi.ImageDataGenerator(
rescale=1./255,
)
# this is the augmentation configuration we will use for testing:
# only rescaling
valid_datagen = kpi.ImageDataGenerator(rescale=1./255)
# this is a generator that will read pictures found in
# subfolers of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/", # this is the target directory
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary') # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = valid_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
Explanation: Convolutional network
First, we build our own convolutional neural network.
Data generation
We define two ImageDataGenerator objects:
train_datagen: for training, where various transformations are applied, as before
valid_datagen: for validation, where only a rescale transformation is applied, so the data are not distorted.
It is also important to define the size to which the images will be resized. Here we choose 150x150 images.
End of explanation
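As a quick check of the label convention, the generator exposes the mapping it inferred from the directory names; with class_mode='binary' the yielded 0/1 labels follow this dictionary:
```
# Sketch: inspect the class-to-label mapping built by flow_from_directory.
print(train_generator.class_indices)  # typically {'cats': 0, 'dogs': 1}
```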
model_conv = km.Sequential()
model_conv.add(kl.Conv2D(32, (3, 3), input_shape=(img_width, img_height, 3), data_format="channels_last"))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(32, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(64, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Flatten()) # this converts our 3D feature maps to 1D feature vectors
model_conv.add(kl.Dense(64))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.Dropout(0.5))
model_conv.add(kl.Dense(1))
model_conv.add(kl.Activation('sigmoid'))
model_conv.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model_conv.summary()
Explanation: Model definition
The model consists of 3 convolution blocks, each made of:
A Convolution2D layer
A ReLU Activation layer
A MaxPooling2D layer
Followed by:
* A Flatten layer, converting the 2D feature maps into 1D feature vectors.
* A Dense (fully connected) layer
* A ReLU Activation layer
* A Dropout layer
* A Dense layer of size 1 followed by a sigmoid Activation for binary classification
The model is trained with the binary_crossentropy loss function.
End of explanation
ts = time.time()
model_conv.fit_generator(train_generator, steps_per_epoch=N_train // batch_size, epochs=epochs,
validation_data=validation_generator,validation_steps=N_val // batch_size)
te = time.time()
t_learning_conv_simple_model = te-ts
print("Learning TIme for %d epochs : %d seconds"%(epochs,t_learning_conv_simple_model))
model_conv.save(data_dir_sub+'/'+MODE+'_models_convolutional_network_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
Explanation: Training
End of explanation
ts = time.time()
score_conv_val = model_conv.evaluate_generator(validation_generator, N_val /batch_size, verbose=1)
score_conv_train = model_conv.evaluate_generator(train_generator, N_train / batch_size, verbose=1)
te = time.time()
t_prediction_conv_simple_model = te-ts
print('Train accuracy:', score_conv_train[1])
print('Validation accuracy:', score_conv_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_conv_simple_model )
Explanation: Prediction
End of explanation
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet')
model_VGG16_without_top.summary()
Explanation: Q Comment on the training and validation prediction scores. Compare them with the results of the last training epoch. What do you observe? Is this expected?
Exercise Re-run this model after adding more random transformations to the image generator used for training (see the sketch after this block). What do you observe?
Pre-trained network: VGG16
In this part we will see two ways of using a pre-trained model:
First, the model is used to extract features from the images, which are then fed to a "classical" fully connected network. These features are the result of the transformations applied to our images by the successive convolution blocks.
Second, we will plug that "classical" block directly onto the pre-trained model, which is then fine-tuned on its last convolution block.
Illustration of the network
Extracting new features
Downloading the model weights
If this is the first time you call the VGG16 application, the weights download starts automatically and they are stored in your home directory: "~/.keras/models".
We use the model with the option include_top = False, i.e. we do not download the last block (the fully connected classifier).
The summary function shows the structure described previously.
End of explanation
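For the exercise above, one possible augmented training generator might look like this (all options are standard ImageDataGenerator arguments; the variable name train_datagen_augmented is ours):
```
# Sketch of a training generator with stronger random augmentation.
train_datagen_augmented = kpi.ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
```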
features_train_path = data_dir_sub+'/features_train.npy'
features_validation_path = data_dir_sub+'/features_validation.npy'
if os.path.isfile(features_train_path) and os.path.isfile(features_validation_path):
print("Load Features")
features_train = np.load(open(features_train_path, "rb"))
features_validation = np.load(open(features_validation_path, "rb"))
else:
print("Generate Features")
datagen = kpi.ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
data_dir_sub+"/train",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None, # this means our generator will only yield batches of data, no labels
shuffle=False)
features_train = model_VGG16_without_top.predict_generator(generator, N_train / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_train_path, 'wb'), features_train)
generator = datagen.flow_from_directory(
data_dir_sub+"/validation",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False)
features_validation = model_VGG16_without_top.predict_generator(generator, N_val / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_validation_path, 'wb'), features_validation)
Explanation: Creating the features
We apply the 5 blocks of the VGG16 model to the images of the training and validation samples.
This operation can be expensive, so the features are saved to files in order to run it only once.
If these files exist, the features are loaded from them; otherwise they are created.
End of explanation
model_VGG_fcm = km.Sequential()
model_VGG_fcm.add(kl.Flatten(input_shape=features_train.shape[1:]))
model_VGG_fcm.add(kl.Dense(64, activation='relu'))
model_VGG_fcm.add(kl.Dropout(0.5))
model_VGG_fcm.add(kl.Dense(1, activation='sigmoid'))
model_VGG_fcm.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model_VGG_fcm.summary()
Explanation: Building a classical neural network
We build a "classical" fully connected network, identical to the second part of the previous network.
Warning: the first layer of this network (Flatten) must be configured to accept data with the dimensions of the features generated previously (see the shape check sketched below).
End of explanation
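As a quick check of that input shape, the saved feature arrays can be inspected; for 150x150 inputs the VGG16 convolutional base should produce 4x4x512 feature maps:
```
# Sketch: confirm the dimensions expected by the Flatten layer above.
print(features_train.shape)       # expected: (N_train, 4, 4, 512)
print(features_validation.shape)  # expected: (N_val, 4, 4, 512)
```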
# On créer des vecteurs labels
train_labels = np.array([0] * int((N_train/2)) + [1] * int((N_train/2)))
validation_labels = np.array([0] * int((N_val/2)) + [1] * int((N_val/2)))
model_VGG_fcm.fit(features_train, train_labels,
epochs=epochs,
batch_size=batch_size,
validation_data=(features_validation, validation_labels))
t_learning_VGG_fcm = te-ts
Explanation: Training
End of explanation
model_VGG_fcm.save_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
Explanation: Q Comment on the performance of this new model.
We also save the weights of this model in order to reuse them in the next part.
End of explanation
ts = time.time()
score_VGG_fcm_val = model_VGG_fcm.evaluate(features_validation, validation_labels)
score_VGG_fcm_train = model_VGG_fcm.evaluate(features_train, train_labels)
te = time.time()
t_prediction_VGG_fcm = te-ts
print('Train accuracy:', score_VGG_fcm_train[1])
print('Validation accuracy:', score_VGG_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_fcm)
Explanation: Prediction
End of explanation
# build the VGG16 network
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet', input_shape=(150,150,3))
print('Model loaded.')
Explanation: Fine-tuning the VGG16 network
In the previous part, we configured a block of fully connected layers able to take as input the features produced by the first 5 convolution blocks of the VGG16 model.
In this part, we plug this block directly onto the five convolution blocks of VGG16 so that the model can be refined by iterating both on the convolution blocks and on our fully connected block.
Creating the model
We first download the VGG16 model, as before.
However, this time the model will be trained directly; it is not only used to generate features. We therefore have to pass the size of the input images as a parameter.
End of explanation
# build a classifier model to put on top of the convolutional model
top_model = km.Sequential()
top_model.add(kl.Flatten(input_shape=model_VGG16_without_top.output_shape[1:]))
top_model.add(kl.Dense(64, activation='relu'))
top_model.add(kl.Dropout(0.5))
top_model.add(kl.Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
Explanation: We add to the VGG model the fully connected block built previously to generate features.
To do so, we build the block as before, then load into it the weights obtained during the previous training.
End of explanation
# add the model on top of the convolutional base
model_VGG_LastConv_fcm = km.Model(inputs=model_VGG16_without_top.input, outputs=top_model(model_VGG16_without_top.output))
model_VGG_LastConv_fcm.summary()
Explanation: Finally, we assemble the two parts of the model.
End of explanation
for layer in model_VGG_LastConv_fcm.layers[:15]:
layer.trainable = False
Explanation: Freezing the first 4 convolution blocks
In practice, and to keep the computation time reasonable, we fine-tune only the last convolution block of the model, block 5 (layers 16 to 19 in the summary of the previous model), together with the fully connected block we added.
To do so, we "freeze" the first 15 layers of the model so that their parameters are not updated during training.
End of explanation
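To verify which layers will actually be updated, the trainable flag of each layer can be listed (a small sketch):
```
# Sketch: print the trainable status of every layer after freezing the first 15.
for i, layer in enumerate(model_VGG_LastConv_fcm.layers):
    print(i, layer.name, layer.trainable)
```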
# prepare data augmentation configuration
train_datagen = kpi.ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
Explanation: Generate Data
End of explanation
model_VGG_LastConv_fcm.compile(loss='binary_crossentropy',
optimizer=ko.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# fine-tune the model
ts = time.time()
model_VGG_LastConv_fcm.fit_generator(
train_generator,
steps_per_epoch=N_train // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=N_val // batch_size)
te = time.time()
t_learning_VGG_LastConv_fcm = te-ts
Explanation: Training
End of explanation
ts = time.time()
score_VGG_LastConv_fcm_val = model_VGG_LastConv_fcm.evaluate_generator(validation_generator, N_val // batch_size)
score_VGG_LastConv_fcm_train = model_VGG_LastConv_fcm.evaluate_generator(train_generator, N_train // batch_size)
te = time.time()
t_prediction_VGG_LastConv_fcm = te-ts
print('Train accuracy:', score_VGG_LastConv_fcm_train[1])
print('Validation accuracy:', score_VGG_LastConv_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_LastConv_fcm)
Explanation: Prediction
End of explanation
data_dir_test = data_dir+'test/'
N_test = len(os.listdir(data_dir_test+"/test"))
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
data_dir_test,
#data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode=None,
shuffle=False)
test_prediction = model_VGG_LastConv_fcm.predict_generator(test_generator, N_test // batch_size)
images_test = [data_dir_test+"/test/"+k for k in os.listdir(data_dir_test+"/test")][:9]
x_test = [kpi.img_to_array(kpi.load_img(image_test))/255 for image_test in images_test] # this is a PIL image
fig = plt.figure(figsize=(10,10))
for k in range(9):
ax = fig.add_subplot(3,3,k+1)
ax.imshow(x_test[k], interpolation='nearest')
pred = test_prediction[k]
if pred >0.5:
title = "Probabiliy for dog : %.1f" %(pred*100)
else:
title = "Probabiliy for cat : %.1f" %((1-pred)*100)
ax.set_title(title)
plt.show()
Explanation: Other models
Keras provides a number of other pre-trained models:
Xception
VGG16
VGG19
ResNet50
InceptionV3
InceptionResNetV2
MobileNet
Some of them have a much more complex structure, notably InceptionV3. You can very easily replace the ka.VGG16 function with another one (e.g. ka.InceptionV3) to compare the performance and complexity of these models; a sketch follows this block.
Exercise You can repeat the previous experiments with other pre-trained models, taking the time to study their architecture.
Exercise You can also re-run these trainings on a larger dataset by building a new subsample from the original data.
Applying these exercises to the challenge data is strongly recommended :)
Prediction on the Kaggle test set
Let's now see how our network performs on a sample of the Kaggle test dataset.
End of explanation
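As a sketch of such a swap (hypothetical variable name; note that InceptionV3 expects larger inputs than 150x150, typically 299x299, so the generators would need to be adapted):
```
# Replace the frozen VGG16 feature extractor with InceptionV3.
model_inception_without_top = ka.InceptionV3(include_top=False, weights='imagenet',
                                             input_shape=(299, 299, 3))
model_inception_without_top.summary()
```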
6,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Variables
Frequently, when an experiment is performed, we are interested mainly in some function of the outcome as opposed to the actual outcome itself.
For instance,<br>
1) In a recent coin-flipping experiment, we may be interested in the total number of heads that occur and not care at all about the actual Head(H)–Tail(T) sequence that results. <br>
2) In throwing dice, we are often interested in the sum of the two dice and are not really concerned about the separate values of each die. That is, we may be interested in knowing that the sum is 7 and may not be concerned over whether the actual outcome was
Step1: As shown earlier in slide,<br>
A probability space $(\Omega, P)$ is an outcome space accompanied by the probabilities of all the outcomes.
<br>If you assume all eight outcomes of three tosses are equally likely, the probabilities are all 1/8
Step2: As you can see above, Product spaces(Probability spaces) get large very quickly.
If we are tossing 10 times, the outcome space would consist of the $2^{10}$ sequences of 10 elements where each element is H or T. <br>
The outcomes are a pain to list by hand, but computers are good at saving us that kind of pain.
Lets take example of rolling die,<br>
If we roll a die 5 times, there are almost 8,000 possible outcomes
Step3: A Function on the Outcome Space
Suppose you roll a die five times and add up the number of spots you see. If that seems artificial, be patient for a moment and you'll soon see why it's interesting.
The sum of the rolls is a numerical function on the outcome space $\Omega$ of five rolls. The sum is thus a random variable. Let's call it $S$ . Then, formally,
$S
Step4: Functions of Random Variables,
A random variable is a numerical function on $\Omega$ . Therefore by composition, a numerical function of a random variable is also a random variable.
For example, $S^2$ is a random variable, calculated as follows
Step5: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
We are informal with notation and write ${ S = 10 }$ instead of ${ S \in {10} }$
Step6: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Let's check this, to make sure that all the $\omega$ 's in the outcome space have been accounted for in the column of probabilities.
Step7: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
Probabilities in a distribution are non-negative and sum to 1.
Visualising Distribution | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from itertools import product
# from IPython.core.display import HTML
# css = open('media/style-table.css').read() + open('media/style-notebook.css').read()
# HTML('<style>{}</style>'.format(css))
one_toss = np.array(['H', 'T'])
two_tosses = list(product(one_toss, repeat=2))
two_tosses
# For three tosses, just change the number of repetitions:
three_tosses = list(product(one_toss, repeat=3))
three_tosses
Explanation: Random Variables
Frequently, when an experiment is performed, we are interested mainly in some function of the outcome as opposed to the actual outcome itself.
For instance,<br>
1) In a recent coin-flipping experiment, we may be interested in the total number of heads that occur and not care at all about the actual Head(H)–Tail(T) sequence that results. <br>
2) In throwing dice, we are often interested in the sum of the two dice and are not really concerned about the separate values of each die. That is, we may be interested in knowing that the sum is 7 and may not be concerned over whether the actual outcome was: (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), or (6, 1). <br>
These quantities of interest, or, more formally, these real-valued functions defined on the sample space, are known as 'random variables'.
Let's do an experiment with Python to demonstrate why we need random variables and to show their importance.
End of explanation
three_toss_probs = (1/8)*np.ones(8)
three_toss_space = pd.DataFrame({
'Omega':three_tosses,
'P(omega)':three_toss_probs
})
three_toss_space
Explanation: As shown earlier in the slides,<br>
A probability space $(\Omega, P)$ is an outcome space accompanied by the probabilities of all the outcomes.
<br>If you assume all eight outcomes of three tosses are equally likely, the probabilities are all 1/8:
End of explanation
die = np.arange(1, 7, 1)
five_rolls = list(product(die, repeat=5))
# five_rolls = [list(i) for i in product(die, repeat=5)]
five_roll_probs = (1/6**5)**np.ones(6**5)
five_roll_space = pd.DataFrame({
'Omega':five_rolls,
'P(omega)':five_roll_probs
})
five_roll_space
Explanation: As you can see above, product spaces (probability spaces) get large very quickly.
If we are tossing 10 times, the outcome space would consist of the $2^{10}$ sequences of 10 elements where each element is H or T. <br>
The outcomes are a pain to list by hand, but computers are good at saving us that kind of pain.
Let's take the example of rolling a die:<br>
If we roll a die 5 times, there are almost 8,000 possible outcomes:
End of explanation
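A quick sanity check on the size of this outcome space:
```
# Five rolls of a six-sided die give 6**5 = 7776 equally likely sequences.
len(five_rolls), 6 ** 5
```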
five_rolls_sum = pd.DataFrame({
'Omega':five_rolls,
'S(omega)':five_roll_space['Omega'].map(lambda val: sum(val)),
'P(omega)':five_roll_probs
})
five_rolls_sum
Explanation: A Function on the Outcome Space
Suppose you roll a die five times and add up the number of spots you see. If that seems artificial, be patient for a moment and you'll soon see why it's interesting.
The sum of the rolls is a numerical function on the outcome space $\Omega$ of five rolls. The sum is thus a random variable. Let's call it $S$ . Then, formally,
$S: \Omega \rightarrow \{ 5, 6, \ldots, 30 \}$
The range of $S$ is the integers 5 through 30, because each die shows at least one and at most six spots. We can also use the equivalent notation
$\Omega \stackrel{S}{\rightarrow} \{ 5, 6, \ldots, 30 \}$
From a computational perspective, the elements of $\Omega$ are in the column omega of five_roll_space. Let's apply this function and create a larger table.
End of explanation
five_rolls_sum[five_rolls_sum['S(omega)']==10]
Explanation: Functions of Random Variables
A random variable is a numerical function on $\Omega$ . Therefore by composition, a numerical function of a random variable is also a random variable.
For example, $S^2$ is a random variable, calculated as follows:
$S^2(\omega) = \big{(} S(\omega)\big{)}^2$
Thus for example $S^2(\text{[6 6 6 6 6]}) = 30^2 = 900$.
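As a quick sketch (assuming the five_rolls_sum table from the cell above is available), the composition can be computed column-wise:
# The composition S^2 applied to every outcome, without modifying the original table
S_squared = five_rolls_sum['S(omega)'] ** 2
S_squared.head()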
Events Determined by $S$
From the table five_rolls_sum it is hard to tell how many rows show a sum of 6, or 10, or any other value. To better understand the properties of $S$, we have to organize the information in five_rolls_sum.
For any subset $A$ of the range of $S$, define the event ${S \in A}$ as
$$
{S \in A } = {\omega: S(\omega) \in A }
$$
That is, ${ S \in A}$ is the collection of all $\omega$ for which $S(\omega)$ is in $A$.
If that definition looks unfriendly, try it out in a special case. Take $A = {5, 30}$. Then ${S \in A}$ if and only if either all the rolls show 1 spot or all the rolls show 6 spots. So
$$
{S \in A} = {\text{[1 1 1 1 1], [6 6 6 6 6]}}
$$
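A one-line check of this special case (again assuming five_rolls_sum is available):
# Keep only the outcomes whose sum lies in A = {5, 30}: the all-ones and all-sixes rolls
five_rolls_sum[five_rolls_sum['S(omega)'].isin([5, 30])]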
It is natural to ask about the chance the sum is a particular value, say 10. That's not easy to read off the table, but we can access the corresponding rows:
End of explanation
dist_S = five_rolls_sum.drop('Omega', axis=1).groupby('S(omega)', as_index=False).sum()
dist_S
Explanation: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
We are informal with notation and write ${ S = 10 }$ instead of ${ S \in {10} }$:
$$
P(S = 10) = \frac{126}{7776} = 1.62\%
$$
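The same probability can be checked directly from the table (a short sketch assuming five_rolls_sum is available):
# Count the outcomes with S = 10 and divide by the size of the outcome space
(five_rolls_sum['S(omega)'] == 10).sum() / 6**5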
This is how Random Variables help us quantify the results of experiments for the purpose of analysis.
That is, random variables provide numerical summaries of the experiment in question (Stat 110, Harvard; see also the paragraph below).
This definition is abstract but fundamental; one of the most important skills to develop when studying probability and statistics is the ability to go back and forth between abstract ideas and concrete examples. Relatedly, it is important to work on recognizing the essential pattern or structure of a problem and how it connects to problems you have studied previously. We will often discuss stories that involve tossing coins or drawing balls from urns because they are simple, convenient scenarios to work with, but many other problems are isomorphic: they have the same essential structure, but in a different guise.
Because random variables are real-valued functions, we can now apply ordinary mathematical operations to them.
Looking at Distributions
The table below shows all the possible values of $S$ along with all their probabilities. It is called a "Probability Distribution Table" for $S$ .
End of explanation
dist_S.iloc[:, 1].sum()  # the probabilities should add up to 1
Explanation: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Let's check this, to make sure that all the $\omega$ 's in the outcome space have been accounted for in the column of probabilities.
End of explanation
dist_S.iloc[:, 0], dist_S.iloc[:, 1]
s = dist_S.iloc[:, 0]    # possible values of S
p_s = dist_S.iloc[:, 1]  # their probabilities
dist_S = pd.concat([s, p_s], axis=1)
dist_S
dist_S.plot(x="S(omega)",y="P(omega)", kind="bar")
from prob140 import Plot
!pip install sympy
Explanation: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
Probabilities in a distribution are non-negative and sum to 1.
Visualising Distribution
End of explanation |
6,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Segmentation
Step1: Load the volume, it contains two spheres. You can either identify the regions of interest (ROIs) yourself or use the predefined rectangular regions of interest specified below ((min_x,max_x), (min_y, max_y), (min_z, max_z)).
To evaluate the sensitivity of the algorithms to the image content (varying size and shape of the ROI) you should identify the ROIs yourself.
Step2: We use a GUI to specify a region of interest. The GUI below allows you to specify a box shaped ROI. Draw a rectangle on the image (move and resize it) and specify the z range of the box using the range slider. You can then view the ROI overlaid onto the slices using the slice slider. The toolbar on the bottom of the figure allows you to zoom and pan. In zoom/pan mode the rectangle interaction is disabled. Once you exit zoom/pan mode (click the button again) you can specify a rectangle and interact with it.
We already specify two ROIs containing the two spheres found in the data (second row below).
To evaluate the sensitivity of the two approaches used in this notebook you should select the ROI on your own and see how the different sizes affect the results.
Step3: Get the user specified ROIs and select one of them.
Step4: Thresholding based approach
To see whether this approach is appropriate we look at the histogram of intensity values inside the ROI. We know that the spheres have higher intensity values. Ideally we would have a bimodal distribution with clear separation between the sphere and background.
Step5: Can you identify the region of the histogram associated with the sphere?
In our case it looks like we can automatically select a threshold separating the sphere from the background. We will use Otsu's method for threshold selection to segment the sphere and estimate its radius.
Step6: Based on your visual inspection, did the automatic threshold correctly segment the sphere or did it over/under segment it?
If automatic thresholding did not provide the desired result, you can correct it by allowing the user to modify the threshold under visual inspection. Implement this approach below.
Step7: Edge detection based approach
In this approach we will localize the sphere's edges in 3D using SimpleITK. We then compute a least squares sphere that optimally fits the 3D points using scipy/numpy. The mathematical formulation we use is as follows
Step8: Get the 3D location of the edge points and fit a sphere to them. | Python Code:
import SimpleITK as sitk
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
%matplotlib notebook
import gui
import matplotlib.pyplot as plt
import numpy as np
from scipy import linalg
from ipywidgets import interact, fixed
Explanation: Segmentation: Thresholding and Edge Detection <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F33_Segmentation_Thresholding_Edge_Detection.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
In this notebook our goal is to estimate the location and radius of spherical markers visible in a Cone-Beam CT volume.
We will use two approaches:
1. Segment the fiducial using a thresholding approach, derive the sphere's radius from the segmentation. This approach is solely based on SimpleITK.
2. Localize the fiducial's edges using the Canny edge detector and then fit a sphere to these edges using a least squares approach. This approach is a combination of SimpleITK and scipy/numpy.
Note that all of the operations, filtering and computations, are natively in 3D. This is the "magic" of ITK and SimpleITK at work.
The practical need for localizing spherical fiducials in CBCT images and additional algorithmic details are described in:
Z. Yaniv, "Localizing spherical fiducials in C-arm based cone-beam CT", Med. Phys., Vol. 36(11), pp. 4957-4966.
End of explanation
spherical_fiducials_image = sitk.ReadImage(fdata("spherical_fiducials.mha"))
roi_list = [((280, 320), (65, 90), (8, 30)), ((200, 240), (65, 100), (15, 40))]
Explanation: Load the volume, it contains two spheres. You can either identify the regions of interest (ROIs) yourself or use the predefined rectangular regions of interest specified below ((min_x,max_x), (min_y, max_y), (min_z, max_z)).
To evaluate the sensitivity of the algorithms to the image content (varying size and shape of the ROI) you should identify the ROIs yourself.
End of explanation
roi_acquisition_interface = gui.ROIDataAquisition(spherical_fiducials_image)
roi_acquisition_interface.set_rois(roi_list)
Explanation: We use a GUI to specify a region of interest. The GUI below allows you to specify a box shaped ROI. Draw a rectangle on the image (move and resize it) and specify the z range of the box using the range slider. You can then view the ROI overlaid onto the slices using the slice slider. The toolbar on the bottom of the figure allows you to zoom and pan. In zoom/pan mode the rectangle interaction is disabled. Once you exit zoom/pan mode (click the button again) you can specify a rectangle and interact with it.
We already specify two ROIs containing the two spheres found in the data (second row below).
To evaluate the sensitivity of the two approaches used in this notebook you should select the ROI on your own and see how the different sizes affect the results.
End of explanation
specified_rois = roi_acquisition_interface.get_rois()
# select the one ROI we will work on
ROI_INDEX = 0
roi = specified_rois[ROI_INDEX]
mask_value = 255
mask = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt8)
mask.CopyInformation(spherical_fiducials_image)
mask[
roi[0][0] : roi[0][1] + 1, roi[1][0] : roi[1][1] + 1, roi[2][0] : roi[2][1] + 1
] = mask_value
Explanation: Get the user specified ROIs and select one of them.
End of explanation
intensity_values = sitk.GetArrayViewFromImage(spherical_fiducials_image)
roi_intensity_values = intensity_values[
roi[2][0] : roi[2][1], roi[1][0] : roi[1][1], roi[0][0] : roi[0][1]
].flatten()
plt.figure()
plt.hist(roi_intensity_values, bins=100)
plt.title("Intensity Values in ROI")
plt.show()
Explanation: Thresholding based approach
To see whether this approach is appropriate we look at the histogram of intensity values inside the ROI. We know that the spheres have higher intensity values. Ideally we would have a bimodal distribution with clear separation between the sphere and background.
End of explanation
# Set pixels that are in [min_intensity,otsu_threshold] to inside_value, values above otsu_threshold are
# set to outside_value. The sphere's have higher intensity values than the background, so they are outside.
inside_value = 0
outside_value = 255
number_of_histogram_bins = 100
mask_output = True
labeled_result = sitk.OtsuThreshold(
spherical_fiducials_image,
mask,
inside_value,
outside_value,
number_of_histogram_bins,
mask_output,
mask_value,
)
# Estimate the sphere radius from the segmented image using the LabelShapeStatisticsImageFilter.
label_shape_analysis = sitk.LabelShapeStatisticsImageFilter()
label_shape_analysis.SetBackgroundValue(inside_value)
label_shape_analysis.Execute(labeled_result)
print(
"The sphere's location is: {0:.2f}, {1:.2f}, {2:.2f}".format(
*(label_shape_analysis.GetCentroid(outside_value))
)
)
print(
f"The sphere's radius is: {label_shape_analysis.GetEquivalentSphericalRadius(outside_value):.2f}mm"
)
# Visually evaluate the results of segmentation, just to make sure. Use the zoom tool, second from the right, to
# inspect the segmentation.
gui.MultiImageDisplay(
image_list=[
sitk.LabelOverlay(
sitk.Cast(
sitk.IntensityWindowing(
spherical_fiducials_image,
windowMinimum=-32767,
windowMaximum=-29611,
),
sitk.sitkUInt8,
),
labeled_result,
opacity=0.5,
)
],
title_list=["thresholding result"],
);
Explanation: Can you identify the region of the histogram associated with the sphere?
In our case it looks like we can automatically select a threshold separating the sphere from the background. We will use Otsu's method for threshold selection to segment the sphere and estimate its radius.
End of explanation
# Your code here:
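# One possible sketch (an assumption, not the notebook's official solution): expose the threshold
# as an interactive slider, re-segment the ROI for each choice and report the resulting radius
# estimate. The slider range and the 32767 upper bound are guesses based on the 16-bit display
# window used above; adjust them to your data.
def manual_threshold(threshold):
    roi_image = spherical_fiducials_image[
        roi[0][0] : roi[0][1], roi[1][0] : roi[1][1], roi[2][0] : roi[2][1]
    ]
    # Voxels above the user-chosen threshold are labeled as sphere (255), the rest as background (0)
    seg = sitk.BinaryThreshold(
        roi_image, lowerThreshold=threshold, upperThreshold=32767, insideValue=255, outsideValue=0
    )
    stats = sitk.LabelShapeStatisticsImageFilter()
    stats.Execute(seg)
    if 255 in stats.GetLabels():
        print(f"estimated radius: {stats.GetEquivalentSphericalRadius(255):.2f}mm")
    else:
        print("no voxels above the chosen threshold")

interact(manual_threshold, threshold=(-32767, -29611, 10));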
Explanation: Based on your visual inspection, did the automatic threshold correctly segment the sphere or did it over/under segment it?
If automatic thresholding did not provide the desired result, you can correct it by allowing the user to modify the threshold under visual inspection. Implement this approach below.
End of explanation
# Create a cropped version of the original image.
sub_image = spherical_fiducials_image[
roi[0][0] : roi[0][1], roi[1][0] : roi[1][1], roi[2][0] : roi[2][1]
]
# Edge detection on the sub_image with appropriate thresholds and smoothing.
edges = sitk.CannyEdgeDetection(
sitk.Cast(sub_image, sitk.sitkFloat32),
lowerThreshold=0.0,
upperThreshold=200.0,
variance=(5.0, 5.0, 5.0),
)
Explanation: Edge detection based approach
In this approach we will localize the sphere's edges in 3D using SimpleITK. We then compute a least squares sphere that optimally fits the 3D points using scipy/numpy. The mathematical formulation we use is as follows:
Given $m$ points in $\mathbb{R}^n$, $m>n+1$, we want to fit them to a sphere such that
the sum of the squared algebraic distances is minimized. The algebraic distance is:
$$
\delta_i = \mathbf{p_i}^T\mathbf{p_i} - 2\mathbf{p_i}^T\mathbf{c} + \mathbf{c}^T\mathbf{c}-r^2
$$
The optimal sphere parameters are computed as:
$$
[\mathbf{c}^\ast,r^\ast] = \arg\min_{\mathbf{c},r} \sum_{i=1}^m \delta_i^2
$$
setting $k=\mathbf{c}^T\mathbf{c}-r^2$ we obtain the following linear equation system ($Ax=b$):
$$
\left[\begin{array}{cc}
-2\mathbf{p_1}^T & 1\\
\vdots & \vdots \\
-2\mathbf{p_m}^T & 1
\end{array}
\right]
\left[\begin{array}{c}
\mathbf{c}\\ k
\end{array}
\right] =
\left[\begin{array}{c}
-\mathbf{p_1}^T\mathbf{p_1}\\
\vdots\\
-\mathbf{p_m}^T\mathbf{p_m}
\end{array}
\right]
$$
The solution of this equation system minimizes $\sum_{i=1}^m \delta_i^2 = \|Ax-b\|^2$.
Note that the equation system admits solutions where $k \geq \mathbf{c}^T\mathbf{c}$. That is, we have a solution that does not represent a valid sphere, as $r^2 \leq 0$. This situation can arise in the presence of outliers.
Note that this is not the geometric distance which is what we really want to minimize and that we are assuming that there are no outliers. Both issues were addressed in the original work ("Localizing spherical fiducials in C-arm based cone-beam CT").
End of explanation
edge_indexes = np.where(sitk.GetArrayViewFromImage(edges) == 1.0)
# Note the reversed order of access between SimpleITK and numpy (z,y,x)
physical_points = [
edges.TransformIndexToPhysicalPoint([int(x), int(y), int(z)])
for z, y, x in zip(edge_indexes[0], edge_indexes[1], edge_indexes[2])
]
# Setup and solve linear equation system.
A = np.ones((len(physical_points), 4))
b = np.zeros(len(physical_points))
for row, point in enumerate(physical_points):
A[row, 0:3] = -2 * np.array(point)
b[row] = -linalg.norm(point) ** 2
res, _, _, _ = linalg.lstsq(A, b)
print("The sphere's location is: {0:.2f}, {1:.2f}, {2:.2f}".format(*res[0:3]))
print(f"The sphere's radius is: {np.sqrt(linalg.norm(res[0:3])**2 - res[3]):.2f}mm")
# Visually evaluate the results of edge detection, just to make sure. Note that because SimpleITK is working in the
# physical world (not pixels, but mm) we can easily transfer the edges localized in the cropped image to the original.
# Use the zoom tool, second from the right, for close inspection of the edge locations.
edge_label = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt16)
edge_label.CopyInformation(spherical_fiducials_image)
e_label = 255
for point in physical_points:
edge_label[edge_label.TransformPhysicalPointToIndex(point)] = e_label
gui.MultiImageDisplay(
image_list=[
sitk.LabelOverlay(
sitk.Cast(
sitk.IntensityWindowing(
spherical_fiducials_image,
windowMinimum=-32767,
windowMaximum=-29611,
),
sitk.sitkUInt8,
),
edge_label,
opacity=0.5,
)
],
title_list=["edge detection result"],
);
Explanation: Get the 3D location of the edge points and fit a sphere to them.
End of explanation |
6,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
    out : the tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)  # feed the hidden layer, not z
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
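As a small aside (a sketch, not part of the required exercise), the trick described above can be wrapped in a tiny helper that both networks could reuse:
# Leaky ReLU built from tf.maximum, following the formula above
def leaky_relu(h, alpha=0.01):
    return tf.maximum(alpha * h, h)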
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
6,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using sci-analysis with pandas
Pandas is a python package that simplifies working with tabular or relational data. Because columns and rows of data in a pandas DataFrame are naturally array-like, using pandas with sci-analysis is the preferred way to use sci-analysis.
Let's create a pandas DataFrame to use for analysis
Step1: This creates a table (pandas DataFrame object) with 6 columns and an index which is the row id. The following command can be used to analyze the distribution of the column titled One
Step2: Anywhere you use a python list or numpy Array in sci-analysis, you can use a column or row of a pandas DataFrame (known in pandas terms as a Series). This is because a pandas Series has much of the same behavior as a numpy Array, causing sci-analysis to handle a pandas Series as if it were a numpy Array.
By passing two array-like arguments to the analyze() function, the correlation can be determined between the two array-like arguments. The following command can be used to analyze the correlation between columns One and Three
Step3: Since there isn't a correlation between columns One and Three, it might be useful to see where most of the data is concentrated. This can be done by adding the argument contours=True and turning off the best fit line with fit=False. For example
Step4: With a few points below -2.0, it might be useful to know which data points they are. This can be done by passing the ID column to the labels argument and then selecting which labels to highlight with the highlight argument
Step5: To check whether an individual Condition correlates between Column One and Column Three, the same analysis can be done, but this time by passing the Condition column to the groups argument. For example
Step6: The borders of the graph have boxplots for all the data points on the x-axis and y-axis, regardless of which group they belong to. The borders can be removed by adding the argument boxplot_borders=False.
According to the Spearman Correlation, there is no significant correlation among the groups. Group B is the only group with a negative slope, but it can be difficult to see the data points for Group B with so many colors on the graph. The Group B data points can be highlighted by using the argument highlight=['Group B']. In fact, any number of groups can be highlighted by passing a list of the group names using the highlight argument.
Step7: Performing a location test on data in a pandas DataFrame requires some explanation. A location test can be performed with stacked or unstacked data. One method will be easier than the other depending on how the data to be analyzed is stored. In the example DataFrame used so far, to perform a location test between the groups in the Condition column, the stacked method will be easier to use.
Let's start with an example. The following code will perform a location test using each of the four values in the Condition column
Step8: From the graph, there are four groups
Step9: To perform a location test using the unstacked method, the columns to be analyzed are passed in a list or tuple, and the groups argument needs to be a list or tuple of the group names. One thing to note is that the groups argument was used to explicitly define the group names. This will only work if the group names and order are known in advance. If they are unknown, a dictionary comprehension can be used instead of a list comprehension to get the group names along with the data
Step10: The output will be identical to the previous example. The analysis also shows that the variances are not equal, and the means are not matched. Also, because the data in column Three is not normally distributed, the Levene Test is used to test for equal variance instead of the Bartlett Test, and the Kruskal-Wallis Test is used instead of the Oneway ANOVA.
With pandas, it's possible to perform advanced aggregation and filtering functions using the GroupBy object's apply() method. Since the sample sizes were small for each month in the above examples, it might be helpful to group the data by annual quarters instead. First, let's create a function that adds a column called Quarter to the DataFrame where the value is either Q1, Q2, Q3 or Q4 depending on the month.
Step11: This function will take a GroupBy object called data, where data's DataFrame object was grouped by month, and set the variable quarter based off the month. Then, a new column called Quarter is added to data where the value of each row is equal to quarter. Finally, the resulting DataFrame object is returned.
Using the new function is simple. The same techniques from previous examples are used, but this time, a new DataFrame object called df2 is created by first grouping by the Month column then calling the apply() method which will run the set_quarter() function. | Python Code:
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
import pandas as pd
np.random.seed(987654321)
df = pd.DataFrame(
{
'ID' : np.random.randint(10000, 50000, size=60).astype(str),
'One' : st.norm.rvs(0.0, 1, size=60),
'Two' : st.norm.rvs(0.0, 3, size=60),
'Three' : st.weibull_max.rvs(1.2, size=60),
'Four' : st.norm.rvs(0.0, 1, size=60),
'Month' : ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] * 5,
'Condition' : ['Group A', 'Group B', 'Group C', 'Group D'] * 15
}
)
df
Explanation: Using sci-analysis with pandas
Pandas is a python package that simplifies working with tabular or relational data. Because columns and rows of data in a pandas DataFrame are naturally array-like, using pandas with sci-analysis is the preferred way to use sci-analysis.
Let's create a pandas DataFrame to use for analysis:
End of explanation
analyze(
df['One'],
name='Column One',
title='Distribution from pandas'
)
Explanation: This creates a table (pandas DataFrame object) with 6 columns and an index which is the row id. The following command can be used to analyze the distribution of the column titled One:
End of explanation
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
Explanation: Anywhere you use a python list or numpy Array in sci-analysis, you can use a column or row of a pandas DataFrame (known in pandas terms as a Series). This is because a pandas Series has much of the same behavior as a numpy Array, causing sci-analysis to handle a pandas Series as if it were a numpy Array.
By passing two array-like arguments to the analyze() function, the correlation can be determined between the two array-like arguments. The following command can be used to analyze the correlation between columns One and Three:
End of explanation
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
contours=True,
fit=False,
title='Bivariate Analysis between Column One and Column Three'
)
Explanation: Since there isn't a correlation between columns One and Three, it might be useful to see where most of the data is concentrated. This can be done by adding the argument contours=True and turning off the best fit line with fit=False. For example:
End of explanation
analyze(
df['One'],
df['Three'],
labels=df['ID'],
highlight=df[df['Three'] < -2.0]['ID'],
fit=False,
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
Explanation: With a few points below -2.0, it might be useful to know which data points they are. This can be done by passing the ID column to the labels argument and then selecting which labels to highlight with the highlight argument:
End of explanation
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
title='Bivariate Analysis between Column One and Column Three'
)
Explanation: To check whether an individual Condition correlates between Column One and Column Three, the same analysis can be done, but this time by passing the Condition column to the groups argument. For example:
End of explanation
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
boxplot_borders=False,
highlight=['Group B'],
title='Bivariate Analysis between Column One and Column Three'
)
Explanation: The borders of the graph have boxplots for all the data points on the x-axis and y-axis, regardless of which group they belong to. The borders can be removed by adding the argument boxplot_borders=False.
According to the Spearman Correlation, there is no significant correlation among the groups. Group B is the only group with a negative slope, but it can be difficult to see the data points for Group B with so many colors on the graph. The Group B data points can be highlighted by using the argument highlight=['Group B']. In fact, any number of groups can be highlighted by passing a list of the group names using the highlight argument.
End of explanation
analyze(
df['Two'],
groups=df['Condition'],
categories='Condition',
name='Column Two',
title='Oneway from pandas'
)
Explanation: Performing a location test on data in a pandas DataFrame requires some explanation. A location test can be performed with stacked or unstacked data. One method will be easier than the other depending on how the data to be analyzed is stored. In the example DataFrame used so far, to perform a location test between the groups in the Condition column, the stacked method will be easier to use.
Let's start with an example. The following code will perform a location test using each of the four values in the Condition column:
End of explanation
analyze(
[df['One'], df['Two'], df['Three'], df['Four']],
groups=['One', 'Two', 'Three', 'Four'],
categories='Columns',
title='Unstacked Oneway'
)
Explanation: From the graph, there are four groups: Group A, Group B, Group C and Group D in Column Two. The analysis shows that the variances are equal and there is no significant difference in the means. Noting the tests that are being performed, the Bartlett test is being used to check for equal variance because all four groups are normally distributed, and the Oneway ANOVA is being used to test if all means are equal because all four groups are normally distributed and the variances are equal. However, if not all the groups are normally distributed, the Levene Test will be used to check for equal variance instead of the Bartlett Test. Also, if the groups are not normally distributed or the variances are not equal, the Kruskal-Wallis test will be used instead of the Oneway ANOVA.
If instead the four columns One, Two, Three and Four are to be analyzed, the easier way to perform the analysis is with the unstacked method. The following code will perform a location test of the four columns:
End of explanation
analyze(
{'One': df['One'], 'Two': df['Two'], 'Three': df['Three'], 'Four': df['Four']},
categories='Columns',
title='Unstacked Oneway Using a Dictionary'
)
Explanation: To perform a location test using the unstacked method, the columns to be analyzed are passed in a list or tuple, and the groups argument needs to be a list or tuple of the group names. One thing to note is that the groups argument was used to explicitly define the group names. This will only work if the group names and order are known in advance. If they are unknown, a dictionary comprehension can be used instead of a list comprehension to get the group names along with the data:
End of explanation
def set_quarter(data):
month = data['Month']
if month.all() in ('Jan', 'Feb', 'Mar'):
quarter = 'Q1'
elif month.all() in ('Apr', 'May', 'Jun'):
quarter = 'Q2'
elif month.all() in ('Jul', 'Aug', 'Sep'):
quarter = 'Q3'
elif month.all() in ('Oct', 'Nov', 'Dec'):
quarter = 'Q4'
else:
quarter = 'Unknown'
data.loc[:, 'Quarter'] = quarter
return data
Explanation: The output will be identical to the previous example. The analysis also shows that the variances are not equal, and the means are not matched. Also, because the data in column Three is not normally distributed, the Levene Test is used to test for equal variance instead of the Bartlett Test, and the Kruskal-Wallis Test is used instead of the Oneway ANOVA.
With pandas, it's possible to perform advanced aggregation and filtering functions using the GroupBy object's apply() method. Since the sample sizes were small for each month in the above examples, it might be helpful to group the data by annual quarters instead. First, let's create a function that adds a column called Quarter to the DataFrame where the value is either Q1, Q2, Q3 or Q4 depending on the month.
End of explanation
quarters = ('Q1', 'Q2', 'Q3', 'Q4')
df2 = df.groupby(df['Month']).apply(set_quarter)
data = {quarter: data['Two'] for quarter, data in df2.groupby(df2['Quarter'])}
analyze(
[data[quarter] for quarter in quarters],
groups=quarters,
categories='Quarters',
name='Column Two',
title='Oneway of Annual Quarters'
)
Explanation: This function will take a GroupBy object called data, where data's DataFrame object was grouped by month, and set the variable quarter based off the month. Then, a new column called Quarter is added to data where the value of each row is equal to quarter. Finally, the resulting DataFrame object is returned.
Using the new function is simple. The same techniques from previous examples are used, but this time, a new DataFrame object called df2 is created by first grouping by the Month column then calling the apply() method which will run the set_quarter() function.
End of explanation |
6,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data input for BIDS datasets
DataGrabber and SelectFiles are great if you are dealing with generic datasets with arbitrary organization. However, if you have decided to use the Brain Imaging Data Structure (BIDS) to organize your data (or got your hands on a BIDS dataset), you can take advantage of the formal structure BIDS imposes. In this short tutorial you will learn how to do this.
pybids - a Python API for working with BIDS datasets
pybids is a lightweight python API for querying BIDS folder structure for specific files and metadata. You can install it from PyPi
Step1: Let's figure out what are the subject labels in this dataset
Step2: What modalities are included in this dataset?
Step3: What different data types are included in this dataset?
Step4: What are the different tasks included in this dataset?
Step5: We can also ask for all of the data for a particular subject.
Step6: We can also ask for a specific subset of data. Note that we are using extension filter to get just the imaging data (BIDS allows both .nii and .nii.gz so we need to include both).
Step7: You probably noticed that this method does not only return the file paths, but objects with relevant query fields. We can easily extract just the file paths.
Step8: Exercise 1
Step9: Ok we got our function. Now we need to wrap it inside a Node object.
Step10: Works like a charm! (hopefully
Step11: Exercise 2
Step12: Accessing additional metadata
Querying different files is nice, but sometimes you want to access more metadata. For example RepetitionTime. pybids can help with that as well
Step13: Can we incorporate this into our pipeline? Yes we can! | Python Code:
from bids.grabbids import BIDSLayout
layout = BIDSLayout("/data/ds102/")
!tree /data/ds102/
Explanation: Data input for BIDS datasets
DataGrabber and SelectFiles are great if you are dealing with generic datasets with arbitrary organization. However, if you have decided to use the Brain Imaging Data Structure (BIDS) to organize your data (or got your hands on a BIDS dataset), you can take advantage of the formal structure BIDS imposes. In this short tutorial you will learn how to do this.
pybids - a Python API for working with BIDS datasets
pybids is a lightweight python API for querying BIDS folder structure for specific files and metadata. You can install it from PyPi:
pip install pybids
Please note it should be already installed in the tutorial Docker image.
The layout object and simple queries
To begin working with pybids we need to initialize a layout object. We will need it to do all of our queries.
End of explanation
layout.get_subjects()
Explanation: Let's figure out what are the subject labels in this dataset
End of explanation
layout.get_modalities()
Explanation: What modalities are included in this dataset?
End of explanation
layout.get_types()
layout.get_types(modality='func')
Explanation: What different data types are included in this dataset?
End of explanation
layout.get_tasks()
Explanation: What are the different tasks included in this dataset?
End of explanation
layout.get(subject='01')
Explanation: We can also ask for all of the data for a particular subject.
End of explanation
layout.get(subject='01', type='bold', extensions=['nii', 'nii.gz'])
Explanation: We can also ask for a specific subset of data. Note that we are using extension filter to get just the imaging data (BIDS allows both .nii and .nii.gz so we need to include both).
End of explanation
[f.filename for f in layout.get(subject='01', type='T1w', extensions=['nii', 'nii.gz'])]
Explanation: You probably noticed that this method does not only return the file paths, but objects with relevant query fields. We can easily extract just the file paths.
End of explanation
def get_niftis(subject_id, data_dir):
    # Remember that all the necessary imports need to be INSIDE the function for the Function Interface to work!
from bids.grabbids import BIDSLayout
layout = BIDSLayout(data_dir)
bolds = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
return bolds
get_niftis('01', '/data/ds102')
Explanation: Exercise 1:
List all of the BOLD files for flanker task for subject 03, but only from the second run
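A possible sketch for this exercise (my own illustration, not part of the original notebook; the exact entity names such as task and run can vary between pybids versions):
[f.filename for f in layout.get(subject='03', task='flanker', run='2',
                                type='bold', extensions=['nii', 'nii.gz'])]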
Including pybids in your nipype workflow
This is great, but what we really want is to include this in our nipype workflows. How do we do this? We can create our own custom BIDSDataGrabber using a Function Interface. First we need a plain Python function that, for a given subject label and dataset location, will return a list of BOLD and T1w files.
End of explanation
from nipype.pipeline import Node, MapNode, Workflow
from nipype.interfaces.utility import IdentityInterface, Function
BIDSDataGrabber = Node(Function(function=get_niftis, input_names=["subject_id",
"data_dir"],
output_names=["bolds",
"T1ws"]), name="BIDSDataGrabber")
BIDSDataGrabber.inputs.data_dir = "/data/ds102"
BIDSDataGrabber.inputs.subject_id='01'
res = BIDSDataGrabber.run()
res.outputs
Explanation: Ok we got our function. Now we need to wrap it inside a Node object.
End of explanation
def printMe(paths):
print("\n\nanalyzing " + str(paths) + "\n\n")
analyzeBOLD = Node(Function(function=printMe, input_names=["paths"],
output_names=[]), name="analyzeBOLD")
wf = Workflow(name="bids_demo")
wf.connect(BIDSDataGrabber, "bolds", analyzeBOLD, "paths")
wf.run()
Explanation: Works like a charm! (hopefully :) Let's put it in a workflow. We are not going to analyze any data, but for demonstration purposes we will add a couple of nodes that pretend to analyze their inputs.
End of explanation
BIDSDataGrabber.iterables = ('subject_id', layout.get_subjects())
wf.run()
Explanation: Exercise 2:
Modify the BIDSDataGrabber and the workflow to include T1ws.
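One possible sketch (an illustration only, building on the cells above): extend the function to return both lists and keep the two output names already declared on the Node, e.g.
def get_niftis(subject_id, data_dir):
    from bids.grabbids import BIDSLayout
    layout = BIDSLayout(data_dir)
    bolds = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
    T1ws = [f.filename for f in layout.get(subject=subject_id, type='T1w', extensions=['nii', 'nii.gz'])]
    return bolds, T1ws
The "T1ws" output of BIDSDataGrabber can then be connected to a second analysis node in the workflow.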
Iterating over subject labels
In the previous example we demonstrated how to use pybids to "analyze" one subject. How can we scale it to all subjects? Easy - using iterables.
End of explanation
layout.get_metadata('/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz')
Explanation: Accessing additional metadata
Querying different files is nice, but sometimes you want to access more metadata. For example RepetitionTime. pybids can help with that as well
End of explanation
def printMetadata(path, data_dir):
from bids.grabbids import BIDSLayout
layout = BIDSLayout(data_dir)
print("\n\nanalyzing " + path + "\nTR: "+ str(layout.get_metadata(path)["RepetitionTime"]) + "\n\n")
analyzeBOLD2 = MapNode(Function(function=printMetadata, input_names=["path", "data_dir"],
output_names=[]), name="analyzeBOLD2", iterfield="path")
analyzeBOLD2.inputs.data_dir = "/data/ds102/"
wf = Workflow(name="bids_demo")
wf.connect(BIDSDataGrabber, "bolds", analyzeBOLD2, "path")
wf.run()
Explanation: Can we incorporate this into our pipeline? Yes we can!
End of explanation |
6,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: k - means
Randomly select starting locations for k points
Assign data points to closest k point
If no data changed its cluster membership stop
If there was a change, compute new means and repeat
Step4: Choosing k
Step9: Hierarchical Clustering | Python Code:
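# Assumed helpers: the code below relies on squared_distance, vector_mean and
# (later) distance from the book's linear_algebra chapter. Minimal stand-in
# definitions are sketched here (my assumption) so the cells run standalone.
import math
import random
def squared_distance(v, w):
    return sum((v_i - w_i) ** 2 for v_i, w_i in zip(v, w))
def distance(v, w):
    return math.sqrt(squared_distance(v, w))
def vector_mean(vectors):
    n = len(vectors)
    return [sum(components) / n for components in zip(*vectors)]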
class KMeans:
    """k-means algo"""
def __init__(self, k):
self.k = k # number of clusters
self.means = None # means of clusters
def classify(self, input):
        """return the index of the cluster closest to input"""
return min(range(self.k),
key = lambda i: squared_distance(input, self.means[i]))
def train(self, inputs):
# choose k rand points as initials
self.means = random.sample(inputs, self.k)
assignments = None
while True:
# Find new assignments
            new_assignments = list(map(self.classify, inputs))  # materialize so it can be compared and reused
# If nothing changed we're good to go
if assignments == new_assignments:
return
# otherwise keep
assignments = new_assignments
            # And compute new means based on assignments
for i in range(self.k):
# get points in cluster
i_points = [p for p,a in zip(inputs, assignments) if a == i]
# check for membership
if i_points:
self.means[i] = vector_mean(i_points)
inputs = [[-14,-5],[13,13],[20,23],[-19,-11],[-9,-16],[21,27],[-49,15],[26,13],[-46,5],[-34,-1],[11,15],[-49,0],[-22,-16],[19,28],[-12,-8],[-13,-19],[-41,8],[-11,-6],[-25,-9],[-18,-3]]
random.seed(0)
clusterer = KMeans(2)
clusterer.train(inputs)
clusterer.means
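# e.g. ask the trained clusterer which cluster a new point belongs to
# (illustrative addition, not part of the original notebook)
clusterer.classify([-15, -6])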
Explanation: k - means
Randomly select starting locations for k points
Assign data points to closest k point
If no data changed its cluster membership stop
If there was a change, compute new means and repeat
End of explanation
def squared_clustering_errors(inputs, k):
    """finds total square error for k"""
clusterer = KMeans(k)
clusterer.train(inputs)
means = clusterer.means
assignments = map(clusterer.classify, inputs)
return sum(squared_distance(input, means[cluster])
for input, cluster in zip(inputs, assignments))
ks = range(1, len(inputs) + 1)
errors = [squared_clustering_errors(inputs, k) for k in ks]
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
plt.plot(ks, errors, '.')
Explanation: Choosing k
End of explanation
def is_leaf(cluster):
    """a cluster is a leaf if it has len 1"""
return len(cluster) == 1
def get_children(cluster):
    """returns children of the cluster if merged else exception"""
if is_leaf(cluster):
raise TypeError("a leaf cluster has no children")
else:
return cluster[1]
def get_values(cluster):
    """returns the value in the cluster (if leaf)
    or all values in leaf clusters below"""
if is_leaf(cluster):
return cluster
else:
return [value
for child in get_children(cluster)
for value in get_values(child)]
def cluster_distance(cluster1, cluster2, distance_agg = min):
    """compute all pairwise distances btw clusters
    and apply distance_agg to the list"""
return distance_agg([distance(input1, input2)
for input1 in get_values(cluster1)
for input2 in get_values(cluster2)])
def get_merge_order(cluster):
if is_leaf(cluster):
return float('inf')
else:
return cluster[0]
def bottom_up_cluster(inputs, distance_agg = min):
# we start with all leaf clusters (this is bottom up after all)
clusters = [(input,) for input in inputs]
# Don't stop until we have one cluster
while len(clusters) > 1:
# the two clusters we want to merge
# are the clusters that are closest without touching
c1, c2 = min([(cluster1, cluster2)
for i, cluster1 in enumerate(clusters)
for cluster2 in clusters[:i]],
                     key = lambda pair: cluster_distance(pair[0], pair[1], distance_agg))  # no tuple unpacking in Python 3 lambdas
# the above is really inefficient in distance calc
# we should instead "look up" the distance
# once we merge them we remove them from the list
clusters = [c for c in clusters if c != c1 and c != c2]
# merge them with order = # of clusters left (so that last merge is "0")
merged_cluster = (len(clusters), [c1, c2])
# append the merge
clusters.append(merged_cluster)
return clusters[0]
base_cluster = bottom_up_cluster(inputs)
base_cluster
def generate_clusters(base_cluster, num_clusters):
clusters = [base_cluster]
# keep going till we have the desired number of clusters
while len(clusters) < num_clusters:
# choose the last-merge
next_cluster = min(clusters, key = get_merge_order)
# remove it from the list
clusters = [c for c in clusters if c != next_cluster]
# add its children to the list (this is an unmerge)
clusters.extend(get_children(next_cluster))
return clusters
three_clusters = [get_values(cluster)
for cluster in generate_clusters(base_cluster, 3)]
three_clusters
Explanation: Hierarchical Clustering
End of explanation |
6,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script is dedicated to querying all needed statistics for the project.
Step1: Helpers
Step2: Distribution of job posts among job titles
Step3: Job posts distribution among standard job titles
Step4: Statistics for Domains
Note
Step5: Why no. of job titles in IT is reduced a lot after std?
Step6: Statistics for functions
Note | Python Code:
import my_util as my_util; from my_util import *
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
title_df = pd.read_csv(DATA_DIR + 'new_titles_2posts_up.csv')
Explanation: This script is dedicated to querying all needed statistics for the project.
End of explanation
def distTitle(agg_df, for_domain=False, for_func=False):
fig = plt.figure()
plt.hist(agg_df.n_title)
mean_n_title = round(agg_df.n_title.mean(), 1)
xl = '# job titles' + r'$(\mu = {})$'.format(mean_n_title)
plt.xlabel(xl, fontsize=16);
if for_domain: plt.ylabel('# domains', fontsize=16)
if for_func: plt.ylabel('# functions', fontsize=16)
plt.grid(True)
return fig
def aggBy(col, title_df):
by_col = title_df.groupby(col)
print('# {}: {}'.format(col, by_col.ngroups) )
agg_df = by_col.agg({'title': 'nunique','non_std_title': 'nunique','n_post': sum})
agg_df = agg_df.rename(columns={'title': 'n_title',
'std_title': 'n_std_title'}).reset_index()
return agg_df
Explanation: Helpers
End of explanation
title_stats = pd.read_csv(DATA_DIR + 'stats_job_titles.csv')
titles = title_stats['title']
print('# titles: %d' %len(titles))
by_n_post = pd.read_csv(DATA_DIR + 'stats_job_post_dist.csv')
by_n_post.head()
Explanation: Distribution of job posts among job titles
End of explanation
by_n_post_after_std = title_stats.groupby('n_post').agg({'title': len})
by_n_post_after_std = by_n_post_after_std.rename(columns={'title': 'n_title_after_std'}).reset_index()
quantile(by_n_post_after_std.n_post)
fig = vizJobPostDist(by_n_post)
plt.savefig(RES_DIR + 'fig/dist_job_post_by_title.pdf')
plt.show(); plt.close()
print('# job titles with >= 2 posts: {}'.format(title_df.shape[0]) )
Explanation: Job posts distribution among standard job titles
End of explanation
by_domain_agg = aggBy('domain', title_df)
by_domain_agg.sort_values('n_title', ascending=False, inplace=True)
by_domain_agg.to_csv(DATA_DIR + 'stats_domains.csv', index=False)
by_domain_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/domain_desc.csv')
by_domain_agg.describe().round(1)
plt.close('all')
fig = distTitle(by_domain_agg, for_domain=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_domain.pdf')
plt.show(); plt.close()
Explanation: Statistics for Domains
Note: The domains are domains of job titles with >= 2 posts.
End of explanation
title_df.query('domain == "information technology"').sort_values('std_title')
Explanation: Why no. of job titles in IT is reduced a lot after std?
End of explanation
by_func_agg = aggBy('pri_func', title_df)
by_func_agg.sort_values('n_title', ascending=False, inplace=True)
by_func_agg.to_csv(DATA_DIR + 'stats_pri_funcs.csv', index=False)
by_func_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/func_desc.csv')
by_func_agg.describe().round(1)
by_func_agg.head(10)
fig = distTitle(by_func_agg, for_func=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_func.pdf')
plt.show(); plt.close()
sum(title_df.domain == 'information technology')
title_df.std_title[title_df.pri_func == 'technician'].nunique()
job_df = pd.read_csv(DATA_DIR + 'jobs.csv')
print job_df.shape
job_df.head(1)
full_job_df = pd.read_csv(DATA_DIR + 'job_posts.csv')
print full_job_df.shape
full_job_df.head(1)
full_job_df = pd.merge(full_job_df, job_df[['job_id', 'doc']])
print full_job_df.shape
print('# job ids including dups: %d' %len(full_job_df.job_id))
print('# unique job ids: %d' % full_job_df.job_id.nunique())
full_job_df.head(1)
full_job_df.to_csv(DATA_DIR + 'job_posts.csv', index=False)
Explanation: Statistics for functions
Note: Functions are limited to those of job titles with >= 2 posts.
End of explanation |
6,852 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
making predictions for a regression problem
| Python Code::
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_regression
from sklearn.preprocessing import MinMaxScaler
from numpy import array
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=1)
scalarX, scalarY = MinMaxScaler(), MinMaxScaler()
scalarX.fit(X)
scalarY.fit(y.reshape(100,1))
X = scalarX.transform(X)
y = scalarY.transform(y.reshape(100,1))
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=1000, verbose=0)
Xnew = array([[0.29466096, 0.30317302]])
ynew = model.predict(Xnew)
|
6,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit Models
Exploration of models for the basic constituent units of a neural network.
Perceptron
Starting with the perceptron, an early model based on binary inputs and a step function.
Step1: Sigmoid Neuron
Allowing real-valued inputs (range 0-1) and using a non-linear activation function.
Step2: Neural Network for Linear Regression
Solve a line-fitting problem using a vanilla keras neural network
Setup Data
Step3: Train with TensorFlow
Step4: Train with Keras | Python Code:
import numpy as np
# weights
W = np.array([-2,-2])
# bias
b = 3
# threshold. Can be discarded using the bias instead (bias=-threshold)
#threshold = 3
# perceptron firing rule
perceptron = lambda x: 1 if np.dot(x, W) + b > 0 else 0  # use the argument x rather than the global X
# input array
X = np.array([1,1])
# compute perceptron output
perceptron(X)
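# quick check over all binary inputs (illustrative addition): with W = [-2,-2]
# and b = 3 this perceptron computes the NAND function
for X in [np.array(p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]:
    print(X, perceptron(X))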
Explanation: Unit Models
Exploration of models for the basic constituent units of a neural network.
Perceptron
Starting with the perceptron, an early model based on binary inputs and a step function.
End of explanation
# neuron firing rule
neuron = lambda x: 1/(1 + np.exp(-np.dot(x, W) - b))  # use the argument x rather than the global X
# input array
X = np.array([1,1])
# compute neuron output
neuron(X)
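# the sigmoid output varies smoothly with the input (illustrative addition)
for X in [np.array(p) for p in [(0, 0), (1, 0), (1, 1)]]:
    print(X, neuron(X))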
Explanation: Sigmoid Neuron
Allowing real-valued inputs (range 0-1) and using a non-linear activation function.
End of explanation
# line function
def line(intercept, slope, x):
return x*slope + intercept
# sin modulated line function
def sin_line(x):
return np.sin(x)
# create our random data (line)
n = 1000
slope = 1.5
intercept = 5.
x = np.random.random(n)
y = line(intercept, slope, x)
# create our random data (sin line)
n = 1000
x_data = np.linspace(-10., 10., n)
y_data = sin_line(x_data) + np.random.uniform(-0.5, 0.5, n)
# plot data
sns.regplot(x_data, y_data)
sns.plt.show()
Explanation: Neural Network for Linear Regression
Solve a line-fitting problem using a vanilla keras neural network
Setup Data
End of explanation
import tensorflow as tf
# Network parameters
X = tf.placeholder(tf.float32, name='X')
y = tf.placeholder(tf.float32, name='Y')
W = tf.Variable(tf.random_normal([1], dtype=tf.float32, stddev=0.1), name='weight')
b = tf.Variable(tf.constant([0], dtype=tf.float32), name='bias')
# computation
y_pred = W*X+b
# cost definition
def cost_fun(y, y_pred):
return tf.abs(y-y_pred)
cost = tf.reduce_mean(cost_fun(y, y_pred))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
n_iters = 10000
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(n_iters):
sess.run(optimizer, feed_dict={X: x_data, y: y_data})
training_cost = sess.run(cost, feed_dict={X: x_data, y: y_data})
if i%100 == 0:
print(training_cost)
ys_pred = y_pred.eval(feed_dict={X: x_data}, session=sess)
fig, ax = plt.subplots(1, 1)
sns.regplot(x_data, y_data, fit_reg=False, ax=ax)
sns.regplot(x_data, ys_pred, fit_reg=False, ax=ax)
plt.show()
Explanation: Train with TensorFlow
End of explanation
from keras import models
from keras import layers
# Create neural network
nn = models.Sequential()
nn.add(layers.Dense(1, input_dim=1))
nn.compile(optimizer='sgd', loss='mse')
# train model
# dummy way of training, for the sake of retrieving our weights at each single training step
theta_history = [] #weights
loss_history = []
for i in range(1000):
loss_history.append(nn.fit(x, y, nb_epoch=1, verbose=0).history['loss'][0])
theta_history.append((nn.layers[0].get_weights()[0][0][0], nn.layers[0].get_weights()[1][0]))
# Plot SGD animation
from matplotlib import pyplot as plt, animation
fig = sns.plt.figure(dpi=100, figsize=(5, 4))
# original data
sns.regplot(x, y, fit_reg=False)
# initial parameters
init_slope, init_intercept = theta_history[0]
line, = plt.plot([0, 1.0], [init_intercept, line(init_intercept, init_slope, 1.0)], 'k-')
epoch_text = sns.plt.text(0, 0, "Epoch 0")
sns.plt.show()
def animate(i):
current_slope, current_intercept = theta_history[i]
line.set_ydata([current_intercept, line(current_intercept, current_slope, 1.0)])
epoch_text.set_text("Epoch {}, cost {:.3f}".format(i, loss_history[i]))
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, len(theta_history)), interval=10)
Explanation: Train with Keras
End of explanation |
6,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mini-Assignment 2
Step1: Let's have a look at what the data looks like for our example paper
Step5: Some Utility Functions
We'll define some utility functions that allow us to tokenize a string into terms, perform linguistic preprocessing on a list of terms, as well as a function to display information about a paper in a nice way. Note that these tokenization and preprocessing functions are rather naive - you may have to make them smarter in a later assignment.
Step6: Creating our first index
We will now create an inverted index based on the words in the abstracts of the papers in our dataset.
We will implement our inverted index as a Python dictionary with terms as keys and posting lists as values. For the posting lists, instead of using Python lists and then implementing the different operations on them ourselves, we will use Python sets and use the predefined set operations to process these posting "lists". This will also ensure that each document is added at most once per term. The use of Python sets is not the most efficient solution but will work for our purposes. (As an optional additional exercise, you can try to implement the posting lists as Python lists for this and the following mini-assignments.)
Not every paper in our dataset has an abstract; we will only index papers for which an abstract is available.
Step7: Let's see what's in the index for the example term 'network'
Step8: We can now use this inverted index to answer simple one-word queries, for example to show all papers that contain the word 'amsterdam'
Step9: Assignments
Your name
Step10: Task 2
Construct a second function called or_query that works in the same way as and_query you just implemented, but returns documents that contain at least one of the words in the query. Demonstrate the working of this second function also with an example (again, choose one that leads to fewer than 100 hits).
Step11: Task 3
Show how many hits the query "the who" returns for your two query functions (and_query and or_query). | Python Code:
Summaries_file = 'data/malaria__Summaries.pkl.bz2'
Abstracts_file = 'data/malaria__Abstracts.pkl.bz2'
import pickle, bz2
from collections import namedtuple
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
Explanation: Mini-Assignment 2: Building a Simple Search Index
In this mini-assignment, we will build a simple search index, which we will use later for Boolean retrieval. The assignment tasks are again at the bottom of this document.
Loading the Data
End of explanation
Summaries[24130474]
Abstracts[24130474]
Explanation: Let's have a look at what the data looks like for our example paper:
End of explanation
def tokenize(text):
    """Function that tokenizes a string in a rather naive way. Can be extended later."""
return text.split(' ')
def preprocess(tokens):
    """Perform linguistic preprocessing on a list of tokens. Can be extended later."""
result = []
for token in tokens:
result.append(token.lower())
return result
print(preprocess(tokenize("Lorem ipsum dolor sit AMET")))
from IPython.display import display, HTML
import re
def display_summary( id, show_abstract=False, show_id=True, extra_text='' ):
    """Function for printing a paper's summary through IPython's Rich Display System.
    Trims long author lists, and adds a link to the paper's DOI (when available)."""
s = Summaries[id]
lines = []
title = s.title
if s.doi != '':
title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)
title = '<strong>' + title + '</strong>'
lines.append(title)
authors = ', '.join( s.authors[:20] ) + ('' if len(s.authors) <= 20 else ', ...')
lines.append(str(s.year) + '. ' + authors)
if (show_abstract):
lines.append('<small><strong>Abstract:</strong> <em>%s</em></small>' % Abstracts[id])
if (show_id):
lines.append('[ID: %d]' % id)
if (extra_text != ''):
lines.append(extra_text)
display( HTML('<br>'.join(lines)) )
display_summary(22433778)
display_summary(24130474, show_abstract=True)
Explanation: Some Utility Functions
We'll define some utility functions that allow us to tokenize a string into terms, perform linguistic preprocessing on a list of terms, as well as a function to display information about a paper in a nice way. Note that these tokenization and preprocessing functions are rather naive - you may have to make them smarter in a later assignment.
End of explanation
from collections import defaultdict
inverted_index = defaultdict(set)
# This may take a while:
for (id, abstract) in Abstracts.items():
for term in preprocess(tokenize(abstract)):
inverted_index[term].add(id)
Explanation: Creating our first index
We will now create an inverted index based on the words in the abstracts of the papers in our dataset.
We will implement our inverted index as a Python dictionary with terms as keys and posting lists as values. For the posting lists, instead of using Python lists and then implementing the different operations on them ourselves, we will use Python sets and use the predefined set operations to process these posting "lists". This will also ensure that each document is added at most once per term. The use of Python sets is not the most efficient solution but will work for our purposes. (As an optional additional exercise, you can try to implement the posting lists as Python lists for this and the following mini-assignments.)
Not every paper in our dataset has an abstract; we will only index papers for which an abstract is available.
End of explanation
print(inverted_index['network'])
Explanation: Let's see what's in the index for the example term 'network':
End of explanation
query_word = 'amsterdam'
for i in inverted_index[query_word]:
display_summary(i)
Explanation: We can now use this inverted index to answer simple one-word queries, for example to show all papers that contain the word 'amsterdam':
End of explanation
# Add your code here
Explanation: Assignments
Your name: ...
Task 1
Construct a function called and_query that takes as input a single string, consisting of one or more words, and returns a list of matching documents. and_query, as its name suggests, should require that all query terms are present in the documents of the result list. Demonstrate the working of your function with an example (choose one that leads to fewer than 100 hits to not overblow this notebook file).
(You can use the tokenize and preprocess functions we defined above to tokenize and preprocess your query. You can also exploit the fact that the posting lists are sets, which means you can easily perform set operations such as union, difference and intersect on them.)
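A minimal sketch of one possible solution (my own illustration, using set intersection over the posting sets built above):
def and_query(query_string):
    terms = preprocess(tokenize(query_string))
    result = set(inverted_index.get(terms[0], set()))
    for term in terms[1:]:
        result &= inverted_index.get(term, set())
    return result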
End of explanation
# Add your code here
Explanation: Task 2
Construct a second function called or_query that works in the same way as and_query you just implemented, but returns documents that contain at least one of the words in the query. Demonstrate the working of this second function also with an example (again, choose one that leads to fewer than 100 hits).
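Again only a sketch of one possible approach (illustration, not the required solution), this time taking the union of the posting sets:
def or_query(query_string):
    result = set()
    for term in preprocess(tokenize(query_string)):
        result |= inverted_index.get(term, set())
    return result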
End of explanation
# Add your code here
Explanation: Task 3
Show how many hits the query "the who" returns for your two query functions (and_query and or_query).
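For example, assuming the two sketch functions above:
print(len(and_query('the who')))
print(len(or_query('the who')))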
End of explanation |
6,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimizing a <span style="font-variant
Step1: The function cart_prod(A, B) computes the Cartesian product $A \times B$ of the sets $A$ and $B$ where $A \times B$ is defined as follows
Step2: The function separate takes four arguments
Step3: Given a state p and a Partition of the set of all states, the function find_equivalence_class(p, Partition) returns the equivalence class of p, i.e. it returns the set from Partition that contains x.
Step4: The function reachable(q0, Σ, 𝛿) takes three arguments
Step5: The function all_separable(Q, A, Σ, 𝛿) takes four arguments
Step6: The function minimize(A) takes a deterministic
<span style="font-variant | Python Code:
def arb(M):
for x in M:
return x
assert False, 'Error: arb called with empty set!'
Explanation: Minimizing a <span style="font-variant:small-caps;">Fsm</span>
The function arb(M) takes a non-empty set M as its argument and returns an arbitrary element from this set.
The set M is not changed.
End of explanation
def cart_prod(A, B):
return { (x, y) for x in A for y in B }
Explanation: The function cart_prod(A, B) computes the Cartesian product $A \times B$ of the sets $A$ and $B$ where $A \times B$ is defined as follows:
$$ A \times B := { (x, y) \mid x \in A \wedge y \in B }. $$
End of explanation
def separate(Pairs, States, Σ, 𝛿):
Result = { (q1, q2) for q1 in States
for q2 in States
for c in Σ
if (𝛿[q1, c], 𝛿[q2, c]) in Pairs
}
return Result
Explanation: The function separate takes four arguments:
- Pairs a set Pairs of pairs of states from some given <span style="font-variant:small-caps;">Fsm</span> $F$.
If $(p_1, p_2) \in \texttt{Pairs}$, then $p_1$ and $p_2$ are known to be separable.
- States is the set of all states of the <span style="font-variant:small-caps;">Fsm</span> $F$,
- Σ is the alphabet of the <span style="font-variant:small-caps;">Fsm</span> $F$.
- 𝛿 is the transition function of the <span style="font-variant:small-caps;">Fsm</span>
The function separate(Pairs, States, Σ, 𝛿) computes the set of pairs of states $(q_1, q_2)$ that are separable because there is some character $c \in \Sigma$ such that
$$\delta(q_1,c) = p_1, \quad \textrm{but} \quad \delta(q_2,c) = p_2. $$
End of explanation
def find_equivalence_class(p, Partition):
return arb({ C for C in Partition if p in C })
Explanation: Given a state p and a Partition of the set of all states, the function find_equivalence_class(p, Partition) returns the equivalence class of p, i.e. it returns the set from Partition that contains p.
End of explanation
def reachable(q0, Σ, 𝛿):
Result = { q0 }
while True:
NewStates = { 𝛿[p, c] for p in Result for c in Σ }
if NewStates <= Result:
return Result
Result |= NewStates
Explanation: The function reachable(q0, Σ, 𝛿) takes three arguments:
* q0 is the start state of an Fsm,
* Σ is the alphabet.
* 𝛿 is the transition function. The transition function is assumed to be complete. 𝛿 is represented as a dictionary.
It returns the set of all states that can be reached from the start state q0 by reading strings of characters from Σ.
End of explanation
def all_separable(Q, A, Σ, 𝛿):
Separable = cart_prod(Q - A, A) | cart_prod(A, Q - A)
while True:
NewPairs = separate(Separable, Q, Σ, 𝛿)
if NewPairs <= Separable:
return Separable
Separable |= NewPairs
Explanation: The function all_separable(Q, A, Σ, 𝛿) takes four arguments:
* Q is the set of states of the Fsm.
* A is the set of all accepting states,
* Σ is the alphabet.
* 𝛿 is the transition function.
𝛿 is represented as a dictionary.
The function computes the set of all Pairs (p, q) such that p and q are separable, i.e. all pairs such that
$$ \exists s \in \Sigma^*: \bigl(\delta^*(p, s) \in A \wedge \delta^*(q,s) \not\in A\bigr) \vee
   \bigl(\delta^*(p, s) \not\in A \wedge \delta^*(q,s) \in A\bigr).
$$
End of explanation
def minimize(F):
Q, Σ, 𝛿, q0, A = F
Q = reachable(q0, Σ, 𝛿)
Separable = all_separable(Q, A, Σ, 𝛿)
Equivalent = cart_prod(Q, Q) - Separable
EquivClasses = { frozenset({ p for p in Q if (p, q) in Equivalent })
for q in Q
}
newQ0 = arb({ M for M in EquivClasses if q0 in M })
newAccept = { M for M in EquivClasses if arb(M) in A }
newDelta = {}
    for q in Q:
        # determine q's class once so it is available in both branches below
        classOfQ = find_equivalence_class(q, EquivClasses)
        for c in Σ:
            p = 𝛿.get((q, c))
            if p is not None:
                classOfP = find_equivalence_class(p, EquivClasses)
                newDelta[(classOfQ, c)] = classOfP
            else:
                newDelta[(classOfQ, c)] = frozenset()
return EquivClasses, Σ, newDelta, newQ0, newAccept
Explanation: The function minimize(A) takes a deterministic
<span style="font-variant:small-caps;">Fsm</span> F as its input.
Here F is a 5-tuple of the form
$$ F = (Q, \Sigma, \delta, q_0, A) $$
The algorithm performs the following steps:
1. All unreachable states are eliminated.
2. All accepting states are separated from all non-accepting states.
3. States are separated as long as possible.
Two states $p_1$ and $p_2$ are separable if there is a character
$c \in \Sigma$ such that
$$\delta(p_1,c) = q_1, \quad \delta(p_2,c) = q_2, \quad \textrm{and} \quad
\mbox{$q_1$ and $q_2$ are separable.}
$$
4. States that are not separable are equivalent and are therefore identified and grouped
in equivalence classes. The states in an equivalence class are then identified.
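As a small illustration (a hypothetical toy automaton added here, not part of the original notebook), states 0 and 2 below behave identically and get merged:
Q = {0, 1, 2}
Σ = {'a', 'b'}
𝛿 = {(0, 'a'): 1, (0, 'b'): 2,
     (1, 'a'): 1, (1, 'b'): 2,
     (2, 'a'): 1, (2, 'b'): 2}
minimize((Q, Σ, 𝛿, 0, {1}))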
End of explanation |
6,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
API calls
Step1: Now, we can run all the rows together to print the tweets in one go. A try/except block is also added to catch possible errors.
Step2: User tweets
The library also provides the opportunity to access the timeline of a user, i.e. the tweets he or she has posted. | Python Code:
from TwitterSearch import *
with open('token.txt','r') as f:
token = f.read().split()
# pass your credentials to the TwitterSearch class to create an object called "ts"
ts = TwitterSearch(
consumer_key = token[0],
consumer_secret = token[1],
access_token = token[2],
access_token_secret = token[3]
)
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(['python']) # let's define all words we would like to have a look for
tso.set_language('en') # we want to see English tweets only
tso.set_include_entities(False) # and don't give us all those entity information
# this is where the fun actually starts :)
for tweet in ts.search_tweets_iterable(tso):
print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
tweet_list = []
for tweet in ts.search_tweets_iterable(tso):
tweet_list.append(tweet['text'])
len(tweet_list)
Explanation: API calls: Twitter
This notebook introduces a simple yet powerful package developed by University of Munich researchers and called TwitterSearch. One can easily install it by opening the command prompt and running the following command:
pip install twittersearch
The package provides the opportunity to easily search all the tweets on given keyword(s). The user can even set the language of the tweet, as far as the latter is supported by the twitter API. To be able to make use of the package one needs to go to the developers page on Twitter, create an account, create a new app and create an access token. Then Consumer key/secret and Access token/secret should be copied to be used in the code.
WARNING ! The above mentioned keys and secrets are confidential. Please, do not share yours with others.
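The code below assumes token.txt simply contains these four values (consumer key, consumer secret, access token, access token secret) in that order, separated by whitespace, matching the token = f.read().split() call, e.g.:
&lt;consumer_key&gt; &lt;consumer_secret&gt; &lt;access_token&gt; &lt;access_token_secret&gt;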
End of explanation
try:
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(['python', 'r']) # let's define all words we would like to have a look for
tso.set_language('en') # we want to see English tweets only
tso.set_include_entities(False) # and don't give us all those entity information
# it's about time to create a TwitterSearch object with our secret tokens
ts = TwitterSearch(
consumer_key = token[0],
consumer_secret = token[1],
access_token = token[2],
access_token_secret = token[3]
)
# this is where the fun actually starts :)
for tweet in ts.search_tweets_iterable(tso):
print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
except TwitterSearchException as e: # take care of all those ugly errors if there are some
print(e)
Explanation: Now, we can run all the rows together to print the tweets in one go. A try/except block is also added to catch possible errors.
End of explanation
try:
tuo = TwitterUserOrder('wef') # create a TwitterUserOrder
# it's about time to create TwitterSearch object again
ts = TwitterSearch(
consumer_key = token[0],
consumer_secret = token[1],
access_token = token[2],
access_token_secret = token[3]
)
# start asking Twitter about the timeline
for tweet in ts.search_tweets_iterable(tuo):
print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
except TwitterSearchException as e: # catch all those ugly errors
print(e)
Explanation: User tweets
The library also provides the opportunity to access the timeline of a user, i.e. the tweets he or she has posted.
End of explanation |
6,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SAM
This notebook explores parsing and understanding the SAM (Sequence Alignment/Map) and related BAM format. SAM is an extremely common format for representing read alignments. Most widely-used read alignment tools output SAM alignments.
SAM is a text format. There is a closely related binary format called BAM. There are two types of BAM files
Step1: SAM fields
As you can see from this last example, each SAM record is on a separate line, and it consists of several tab-delimited fields. The SAM specification is the authoritative source for information on what all the fields mean exactly, but here's a brief summary of each of the fields (in order)
Step2: Next we construct a function to parse the MD
Step3: Now we can write a function that takes a read sequence, a parsed CIGAR string, and a parsed MD
Step4: From the stacked alignment, it's easy to do other things. E.g. we can turn a stacked alignment into a new CIGAR string that uses the = and X operations instead of the less specific M operation | Python Code:
# Here's a string representing a three-line SAM file. I'm temporarily
# ignoring the fact that SAM files usually have several header lines at
# the beginning.
samStr = '''\
r1 0 gi|9626243|ref|NC_001416.1| 18401 42 122M * 0 0 TGAATGCGAACTCCGGGACGCTCAGTAATGTGACGATAGCTGAAAACTGTACGATAAACNGTACGCTGAGGGCAGAAAAAATCGTCGGGGACATTNTAAAGGCGGCGAGCGCGGCTTTTCCG +"@6<:27(F&5)9)"B:%B+A-%5A?2$HCB0B+0=D<7E/<.03#!.F77@6B==?C"7>;))%;,3-$.A06+<-1/@@?,26">=?*@'0;$:;??G+:#+(A?9+10!8!?()?7C> AS:i:-5 XN:i:0 XM:i:3 XO:i:0 XG:i:0 NM:i:3 MD:Z:59G13G21G26 YT:Z:UU
r2 0 gi|9626243|ref|NC_001416.1| 8886 42 275M * 0 0 NTTNTGATGCGGGCTTGTGGAGTTCAGCCGATCTGACTTATGTCATTACCTATGAAATGTGAGGACGCTATGCCTGTACCAAATCCTACAATGCCGGTGAAAGGTGCCGGGATCACCCTGTGGGTTTATAAGGGGATCGGTGACCCCTACGCGAATCCGCTTTCAGACGTTGACTGGTCGCGTCTGGCAAAAGTTAAAGACCTGACGCCCGGCGAACTGACCGCTGAGNCCTATGACGACAGCTATCTCGATGATGAAGATGCAGACTGGACTGC (#!!'+!$""%+(+)'%)%!+!(&++)''"#"#&#"!'!("%'""("+&%$%*%%#$%#%#!)*'(#")(($&$'&%+&#%*)*#*%*')(%+!%%*"$%"#+)$&&+)&)*+!"*)!*!("&&"*#+"&"'(%)*("'!$*!!%$&&&$!!&&"(*"$&"#&!$%'%"#)$#+%*+)!&*)+(""#!)!%*#"*)*')&")($+*%%)!*)!('(%""+%"$##"#+(('!*(($*'!"*('"+)&%#&$+('**$$&+*&!#%)')'(+(!%+ AS:i:-14 XN:i:0 XM:i:8 XO:i:0 XG:i:0 NM:i:8 MD:Z:0A0C0G0A108C23G9T81T46 YT:Z:UU
r3 16 gi|9626243|ref|NC_001416.1| 11599 42 338M * 0 0 GGGCGCGTTACTGGGATGATCGTGAAAAGGCCCGTCTTGCGCTTGAAGCCGCCCGAAAGAAGGCTGAGCAGCAGACTCAAGAGGAGAAAAATGCGCAGCAGCGGAGCGATACCGAAGCGTCACGGCTGAAATATACCGAAGAGGCGCAGAAGGCTNACGAACGGCTGCAGACGCCGCTGCAGAAATATACCGCCCGTCAGGAAGAACTGANCAAGGCACNGAAAGACGGGAAAATCCTGCAGGCGGATTACAACACGCTGATGGCGGCGGCGAAAAAGGATTATGAAGCGACGCTGTAAAAGCCGAAACAGTCCAGCGTGAAGGTGTCTGCGGGCGAT 7F$%6=$:9B@/F'>=?!D?@0(:A*)7/>9C>6#1<6:C(.CC;#.;>;2'$4D:?&B!>689?(0(G7+0=@37F)GG=>?958.D2E04C<E,*AD%G0.%$+A:'H;?8<72:88?E6((CF)6DF#.)=>B>D-="C'B080E'5BH"77':"@70#4%A5=6.2/1>;9"&-H6)=$/0;5E:<8G!@::1?2DC7C*;@*#.1C0.D>H/20,!"C-#,6@%<+<D(AG-).?�.00'@)/F8?B!&"170,)>:?<A7#1(A@0E#&A.*DC.E")AH"+.,5,2>5"2?:G,F"D0B8D-6$65D<D!A/38860.*4;4B<*31?6 AS:i:-22 XN:i:0 XM:i:8 XO:i:0 XG:i:0 NM:i:8 MD:Z:80C4C16A52T23G30A8T76A41 YT:Z:UU'''
# I'll read this string in line-by-line as though it were a file.
# I'll (lightly) parse the alignment records as I go.
import string
from io import StringIO # reading from string rather than file
for ln in StringIO(samStr):
qname, flag, rname, pos, mapq, cigar, rnext, \
pnext, tlen, seq, qual, extras = str.split(ln, '\t', 11)
print(qname, len(seq)) # print read name, length of read sequence
Explanation: SAM
This notebook explores parsing and understanding the SAM (Sequence Alignment/Map) and related BAM format. SAM is an extremely common format for representing read alignments. Most widely-used read alignment tools output SAM alignments.
SAM is a text format. There is a closely related binary format called BAM. There are two types of BAM files: unsorted or sorted. Various tools, notably SAMtools and Picard, can convert back and forth between SAM and BAM, and can sort an existing BAM file. When we say a BAM file is sorted we almost always mean that the alignments are sorted left-to-right along the reference genome. (It's also possible to sort a BAM file by read name, though that's only occassionally useful.)
Once you have an interesting set of read alignments that you would like to keep for a while and perhaps analyze further, it's a good idea to keep them in sorted BAM files. This is because:
They will be well compressed. BAM files are smaller than corresponding SAM files, and sorted BAM files are smaller than corresponding unsorted BAM files.
From a sorted BAM file, it's easy to extract just the alignments that overlap a specified stretch of the genome, making it easy to convert from sorted BAM to many other useful formats.
That said, most alignment tools output SAM (not BAM), and the alignments come out in an arbitrary order -- not sorted.
An authoritative and complete document describing the SAM and BAM formats is the SAM specification. This document is thorough, but, being a specification, it does not describe is the various "dialects" of legal SAM emitted by popular tools. I'll cover some of that here.
End of explanation
def cigarToList(cigar):
''' Parse CIGAR string into a list of CIGAR operations. For more
info on CIGAR operations, see SAM spec:
http://samtools.sourceforge.net/SAMv1.pdf '''
ret, i = [], 0
op_map = {'M':0, # match or mismatch
'=':0, # match
'X':0, # mismatch
'I':1, # insertion in read w/r/t reference
'D':2, # deletion in read w/r/t reference
'N':3, # long gap due e.g. to splice junction
'S':4, # soft clipping due e.g. to local alignment
'H':5, # hard clipping
'P':6} # padding
# Seems like = and X together are strictly more expressive than M.
# Why not just have = and X and get rid of M? Space efficiency,
# mainly. The titans discuss: http://www.biostars.org/p/17043/
while i < len(cigar):
run = 0
while i < len(cigar) and cigar[i].isdigit():
# parse one more digit of run length
run *= 10
run += int(cigar[i])
i += 1
assert i < len(cigar)
# parse cigar operation
op = cigar[i]
i += 1
assert op in op_map
# append to result
ret.append([op_map[op], run])
return ret
cigarToList('10=1X10=')
Explanation: SAM fields
As you can see from this last example, each SAM record is on a separate line, and it consists of several tab-delimited fields. The SAM specification is the authoritative source for information on what all the fields mean exactly, but here's a brief summary of each of the fields (in order):
qname is the name of the read
flags is a bit field encoding some yes/no pieces of information about whether and how the read aligned
rname is the name of the reference sequence that the read aligned to (if applicable). E.g., might be "chr17" meaning "chromosome 17"
pos is the 1-based offset into the reference sequence where the read aligned.
mapq for an aligned read, this is a confidence value; high when we're very confident we've found the correct alignment, low when we're not confident
cigar indicates where any gaps occur in the alignment
rnext only relevant for paired-end reads; name of the reference sequence where other end aligned
pnext only relevant for paired-end reads; 1-based offset into the reference sequence where other end aligned
tlen only relevant for paired-end reads; fragment length inferred from alignment
seq read sequence. If read aligned to reference genome's reverse-complement, this is the reverse complement of the read sequence.
qual quality sequence. If read aligned to reference genome's reverse-complement, this is the reverse of the quality sequence.
extras tab-separated "extra" fields, usually optional and aligner-specific but often very important!
Field 1, qname is the name of the read. Read names often contain information about:
The scientific study for which the read was sequenced.
The sequencing instrument, and the exact part of the sequencing instrument, where the DNA was sequenced.
Field 5, mapq, encodes the probability p that the alignment reported is incorrect. The probability is encoded as an integer Q on the Phred scale:
$$ Q = -10 \cdot \log_{10}(p) $$
Fields 7, 8 and 9 (rnext, pnext and tlen) are only relevant if the read is part of a pair. By way of background, sequencers can be configured to report pairs of DNA snippets that appear close to each other in the genome. To accomplish this, the sequencer sequences both ends of a longer fragment of DNA. When this is the case rnext and pnext tell us where the other end of the pair aligned, and tlen tells us the length of the fragment, as inferred from the alignments of the two ends.
Field 10, seq, is the nucleotide sequence of the read. The nucleotide sequence is reverse complemented if the read aligned to the reverse complement of the reference genome. (This is equivalent to the reverse complement of the read aligning to the genome.) seq can contain the character "N". N essentially means "no confidence." The sequencer knows there's a nucleotide there but doesn't know whether it's an A, C, G or T.
Field 11, qual, is the quality sequence of the read. Each nucleotide in seq has a corresponding quality value in this string. A nucleotide's quality value encodes the probability that the nucleotide was incorrectly called by the sequencing instrument and its software. For details on this encoding, see the FASTQ notebook.
Flags
The flags field is a bitfield. Individual bits correspond to certain yes/no properties of the alignment. Here are the most relevant ones:
Bit 0 (least significant): 1 if read is paired-end, 0 otherwise
Bit 1: for paired-end reads only: 1 if the pair aligns concordantly, 0 otherwise
Bit 2: 1 if read failed to align, 0 otherwise
Bit 3: for paried-end reads only: 1 if the other end failed to align, 0 otherwise
Bit 4: 1 if read aligned to Crick strand, 0 if Watson strand
Bit 5: for paired-end reads only: 1 if the other end aligned to Crick strand, 0 if Watson strand
Bit 6: for paired-end reads only: 1 if this is the first (#1) end, 0 if this is the second (#2) end
Bit 7: for paired-end reads only: 0 if this is the first (#1) end, 1 if this is the second (#2) end
There are a few more that are used less often; see the SAM specification for details.
Alignment score
How do we know how good an alignment is, i.e., how well the read sequence matches the corresponding referene sequence. This information is spread across a few places:
The AS:i extra field
The cigar field
The MD:Z extra field
While the AS:i extra field is not required by the specification, all the most popular tools output it. The integer appearing in this field is an alignment score. The higher the score, the more similar the read sequence is to the reference sequence.
Different tools use differen scales for AS:i. Sometimes (e.g. in Bowtie 2's --end-to-end alignment mode)
End-to-end versus local alignment
Some alignment tools such as Bowtie, BWA and Bowtie 2 in --end-to-end mode, will attempt to align a sequencing read end-to-end. In other words, the read will align such that every nucleotide of the read participates.
Alignment shape
We would like to know exactly how the read aligned to the reference, including where all the mismatches and gaps are, and which characters appear opposite the gaps. This is not possible just by looking at the read sequence together with the CIGAR string. That doesn't tell us what reference characters appear in the mismatched position, or in the positions involved in deletions. Instead, we combine information from the (a) read sequence, (b) CIGAR string, and (c) MD:Z string. The CIGAR and MD:Z strings are both described in the SAM specification.
First we construct a function to parse the CIGAR field:
End of explanation
def mdzToList(md):
''' Parse MD:Z string into a list of operations, where 0=match,
1=read gap, 2=mismatch. '''
i = 0;
ret = [] # list of (op, run, str) tuples
while i < len(md):
if md[i].isdigit(): # stretch of matches
run = 0
while i < len(md) and md[i].isdigit():
run *= 10
run += int(md[i])
i += 1 # skip over digit
if run > 0:
ret.append([0, run, ""])
elif md[i].isalpha(): # stretch of mismatches
mmstr = ""
while i < len(md) and md[i].isalpha():
mmstr += md[i]
i += 1
assert len(mmstr) > 0
ret.append([1, len(mmstr), mmstr])
elif md[i] == "^": # read gap
i += 1 # skip over ^
refstr = ""
while i < len(md) and md[i].isalpha():
refstr += md[i]
i += 1 # skip over inserted character
assert len(refstr) > 0
ret.append([2, len(refstr), refstr])
else:
raise RuntimeError('Unexpected character in MD:Z: "%d"' % md[i])
return ret
# Each element in the list returned by this call is itself a list w/ 3
# elements. Element 1 is the MD:Z operation (0=match, 1=mismatch,
# 2=deletion). Element 2 is the length and element 3 is the relevant
# sequence of nucleotides from the reference.
mdzToList('10A5^AC6')
Explanation: Next we construct a function to parse the MD:Z extra field:
End of explanation
def cigarMdzToStacked(seq, cgp, mdp_orig):
''' Takes parsed CIGAR and parsed MD:Z, generates a stacked alignment:
a pair of strings with gap characters inserted (possibly) and where
characters at at the same offsets are opposite each other in the
alignment. Only knows how to handle CIGAR ops M=XDINSH right now.
'''
mdp = mdp_orig[:]
rds, rfs = [], []
mdo, rdoff = 0, 0
for c in cgp:
op, run = c
skipping = (op == 4 or op == 5)
assert skipping or mdo < len(mdp)
if op == 0: # CIGAR op M, = or X
# Look for block matches and mismatches in MD:Z string
mdrun = 0
runleft = run
while runleft > 0 and mdo < len(mdp):
op_m, run_m, st_m = mdp[mdo]
run_comb = min(runleft, run_m)
runleft -= run_comb
assert op_m == 0 or op_m == 1
rds.append(seq[rdoff:rdoff + run_comb])
if op_m == 0: # match from MD:Z string
rfs.append(seq[rdoff:rdoff + run_comb])
else: # mismatch from MD:Z string
assert len(st_m) == run_comb
rfs.append(st_m)
mdrun += run_comb
rdoff += run_comb
# Stretch of matches in MD:Z could span M and I CIGAR ops
if run_comb < run_m:
assert op_m == 0
mdp[mdo][1] -= run_comb
else:
mdo += 1
elif op == 1: # CIGAR op I
rds.append(seq[rdoff:rdoff + run])
rfs.append("-" * run)
rdoff += run
elif op == 2: # D
op_m, run_m, st_m = mdp[mdo]
assert op_m == 2
assert run == run_m
assert len(st_m) == run
mdo += 1
rds.append("-" * run)
rfs.append(st_m)
elif op == 3: # N
rds.append("-" * run)
rfs.append("-" * run)
elif op == 4: # S
rds.append(seq[rdoff:rdoff + run].lower())
rfs.append(' ' * run)
rdoff += run
elif op == 5: # H
rds.append('!' * run)
rfs.append(' ' * run)
elif op == 6: # P
raise RuntimeError("Don't know how to handle P in CIGAR")
else:
raise RuntimeError('Unexpected CIGAR op: %d' % op)
assert mdo == len(mdp)
return ''.join(rds), ''.join(rfs)
# Following example includes gaps and mismatches
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGATAAACC', cigarToList('12M2D17M2I14M'), mdzToList('12^AT30G0'))
# Following example also includes soft clipping (CIGAR: S)
# SAM spec: Soft clipping: "clipped sequences present in SEQ"
# We print them in lowercase to emphasize their clippedness
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC', cigarToList('12M2D17M2I8M6S'), mdzToList('12^AT25'))
# Following example also includes hard clipping (CIGAR: H)
# SAM spec: Hard clipping: "clipped sequences NOT present in SEQ"
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC', cigarToList('12M2D17M2I8M6S3H'), mdzToList('12^AT25'))
# Note: don't see hard clipping in practice much
# Following example also includes skipping (CIGAR: N), as seen in
# TopHat alignments
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC',
cigarToList('12M2D10M10N7M2I8M6S3H'),
mdzToList('12^AT25'))
Explanation: Now we can write a fucntion that takes a read sequennce, a parsed CIGAR string, and a parse MD:Z string and combines information from all three to make what I call a "stacked alignment."
End of explanation
def cigarize(rds, rfs):
off = 0
oplist = []
lastc, cnt = '', 0
for i in range(len(rds)):
c = None
if rfs[i] == ' ':
c = 'S'
elif rds[i] == '-' and rfs[i] == '-':
c = 'N'
elif rds[i] == '-':
c = 'D'
elif rfs[i] == '-':
c = 'I'
elif rds[i] != rfs[i]:
c = 'X'
else:
c = '='
if c == lastc:
cnt += 1
else:
if len(lastc) > 0:
oplist.append((lastc, cnt))
lastc, cnt = c, 1
if len(lastc) > 0:
oplist.append((lastc, cnt))
return ''.join(map(lambda x: str(x[1]) + x[0], oplist))
x, y = cigarMdzToStacked('ACGTACGT', cigarToList('8M'), mdzToList('4G3'))
cigarize(x, y)
x, y = cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC',
cigarToList('12M2D10M10N7M2I8M6S3H'),
mdzToList('12^AT25'))
cigarize(x, y)
Explanation: From the stacked alignment, it's easy to do other things. E.g. we can turn a stacked alignment into a new CIGAR string that uses the = and X operations instead of the less specific M operation:
End of explanation |
6,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
문제1.
NumPy 명령을 사용하여 다음 행렬을 생성하는 코드를 1줄로 작성하세요.
Step1: 문제2.
X 행렬이 다음과 같을 때 NumPy 슬라이싱 인덱싱을 사용하여 행렬의 짝수 부분만을 선택하여 행렬로 만드는 NumPy코드를 작성하세요.(행렬 인덱싱을 사용하지 말 것!)
Step2: 문제3.
X행렬이 다음과 같을 때 행렬 인덱싱을 사용하여 4의 배수만을 선택하여 하나의 벡터로 만드는 NumPy 코드를 작성하세요.(NumPy 슬라이싱 인덱싱을 사용하지 말 것!)
Step3: 문제4.
모든 원소의 값이 1인 (5,4)행렬 X와 모든 원소의 값이 0인 (5,4) 행렬 Y를 순서대로 합쳐서 크기가 (5,8)인 행렬을 만드는 NumPy코드를 3줄로 작성하세요.
Step4: 문제 5.
arange명령과 reshape명령만을 사용하여 다음 행렬을 만드는 NumPy 코드를 1줄로 작성하세요.
Step5: 문제6.
다음과 같이 x배열 변수가 존재하는 경우 이 x배열과 newaxis명령, 그리고 +연산만을 이용하여 다음 행렬을 만드는 코드를 1줄로 작성하세요.
Step6: 문제7.
다음 행렬 X가 5명의 학생이 3번 시험 본 성적이라고 가정하고 각 학생의 최고 성적을 구하는 코드를 1줄로 작성하세요.
Step7: 문제10
meshgrid명령과 scatter명령을 사용하여 다음 그림을 그리는 코드를 1줄로 작성하라.
Step8: 선형대수
문제12.
다음 중 일반적으로 성립하지 않는 것을 모두 고르세요.
2.det(cA) = c*det(A)
6.det(A+B) = det(A) + det(B)
9.tr(A)-1(역함수) = tr(A-1(역)) | Python Code:
X = np.array([[11, 12], [21, 22], [31, 32]])
X
Explanation: NumPy
문제1.
NumPy 명령을 사용하여 다음 행렬을 생성하는 코드를 1줄로 작성하세요.
End of explanation
X = np.array([[1,1,1,1], [1,2,4,8], [1,3,5,7], [1,4,16,32], [1,5,9,13]])
X
X[1::2, 1:]
Explanation: 문제2.
X 행렬이 다음과 같을 때 NumPy 슬라이싱 인덱싱을 사용하여 행렬의 짝수 부분만을 선택하여 행렬로 만드는 NumPy코드를 작성하세요.(행렬 인덱싱을 사용하지 말 것!)
End of explanation
X = np.array([[1,1,1,1], [1,2,4,8], [1,3,5,7],[1,4,16,32],[1,5,9,13]])
X
X[X%4==0]
Explanation: 문제3.
X행렬이 다음과 같을 때 행렬 인덱싱을 사용하여 4의 배수만을 선택하여 하나의 벡터로 만드는 NumPy 코드를 작성하세요.(NumPy 슬라이싱 인덱싱을 사용하지 말 것!)
End of explanation
X = np.ones((5,4))
Y = np.zeros((5,4))
np.hstack([X, Y])
Explanation: 문제4.
모든 원소의 값이 1인 (5,4)행렬 X와 모든 원소의 값이 0인 (5,4) 행렬 Y를 순서대로 합쳐서 크기가 (5,8)인 행렬을 만드는 NumPy코드를 3줄로 작성하세요.
End of explanation
np.arange(1,6)
np.arange(1,6).reshape(5,1)
Explanation: 문제 5.
arange명령과 reshape명령만을 사용하여 다음 행렬을 만드는 NumPy 코드를 1줄로 작성하세요.
End of explanation
x = np.arange(5)
x
x[:, np.newaxis] + 1
10 * (x[:, np.newaxis] + 1)
10 * (x[:, np.newaxis] + 1) + x
Explanation: 문제6.
다음과 같이 x배열 변수가 존재하는 경우 이 x배열과 newaxis명령, 그리고 +연산만을 이용하여 다음 행렬을 만드는 코드를 1줄로 작성하세요.
End of explanation
np.random.seed(0)
X = np.random.random_integers(0, 100, (5,3))
X
X.max(axis=1)
Explanation: 문제7.
다음 행렬 X가 5명의 학생이 3번 시험 본 성적이라고 가정하고 각 학생의 최고 성적을 구하는 코드를 1줄로 작성하세요.
End of explanation
plt.scatter(*np.meshgrid(range(5), range(6)));
Explanation: 문제10
meshgrid명령과 scatter명령을 사용하여 다음 그림을 그리는 코드를 1줄로 작성하라.
End of explanation
A = np.array([[1,2], [3,4]])
B = np.array([[5,6,], [7,8]])
print(np.linalg.det(3*A), 3*np.linalg.det(A)) #no.2
print(np.linalg.det(A+B), np.linalg.det(A) + np.linalg.det(B)) #no.6
print(np.trace(A), np.trace(np.linalg.inv(A))) #no.9
Explanation: 선형대수
문제12.
다음 중 일반적으로 성립하지 않는 것을 모두 고르세요.
2.det(cA) = c*det(A)
6.det(A+B) = det(A) + det(B)
9.tr(A)-1(역함수) = tr(A-1(역))
End of explanation |
6,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1 - Setup
We will be using
* PIL (Python Image Library) for getting the picture
* pytesseract for the OCR (Object Character Recognition)
* googlemaps and gmaps for mapping
Step1: Part 2 - Use OCR to read the address
On Android this can be done with Google Vision
Step2: Testing location
This might not be neccessary as google will take addresses
Step3: Google Maps
This api does not have a waypoint optimization
Here are more details on how to do it | Python Code:
from PIL import Image
import pytesseract
import googlemaps
import gmaps as jupmap
import sys
from datetime import datetime
# get my private keys for google maps and gmaps
f = open('private.key', 'r')
for line in f:
temp = line.rstrip('').replace(',','').replace('\n','').split(" ")
exec(temp[0])
myMap = googlemaps.Client(key=googlemap_key)
jupmap.configure(api_key=jupmap_key)
Explanation: Part 1 - Setup
We will be using
* PIL (Python Image Library) for getting the picture
* pytesseract for the OCR (Object Character Recognition)
* googlemaps and gmaps for mapping
End of explanation
img = Image.open('mm_address.jpg')
label = pytesseract.image_to_string(img)
print(label)
clientLocation = label.splitlines()[2] + ', ' + label.splitlines()[3]
print(clientLocation)
testLocation = '2403 Englewood Ave, Durham, NC 27705'
print(testLocation)
Explanation: Part 2 - Use OCR to read the address
On Android this can be done with Google Vision
End of explanation
testGeoCode = myMap.geocode(testLocation)[0]
lat = testGeoCode.get('geometry').get('location').get('lat')
lng = testGeoCode.get('geometry').get('location').get('lng')
print(lat, ' ', lng )
clientList = ['300 N Roxboro St, Durham, NC 27701','911 W Cornwallis Rd, Durham, NC 27707', '345 W Main Street, Durham, NC 27701' ]
wp=[]
for x in clientList:
testGeoCode = myMap.geocode(x)[0]
lat = testGeoCode.get('geometry').get('location').get('lat')
lng = testGeoCode.get('geometry').get('location').get('lng')
wp.append([lat,lng])
print(wp)
Explanation: Testing location
This might not be neccessary as google will take addresses
End of explanation
m = jupmap.Map()
home = (36.0160282, -78.9321707)
foodLion = (36.0193147,-78.9603636)
church = (35.9969749, -78.9091543)
dl = jupmap.directions_layer(church, home, waypoints=wp)
#googlemaps has an optimize_waypoints=True but I can't find it in jupyter gmaps
m.add_layer(dl)
m
Explanation: Google Maps
This api does not have a waypoint optimization
Here are more details on how to do it: https://developers.google.com/maps/documentation/directions/intro
End of explanation |
6,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Übung 2
Problem 2.1 - Lineare Regression mit Least Squares
Step1: Numpy Dokumentation
Step2: Versuch zur Bestimmung einer linearen Gleichungsfunktion.
Hier wird die händische Berechnung durchgeführt. Im späteren Verlauf wird auf die Funktion scipy.optimize.curve_fit verwendet. Eine Gleichung dazu ist in der Form $f(x) = \alpha + \beta x$ mit
$\beta = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}$
$\alpha = \bar{y} - \beta\bar{x}$
$\bar{x} = \frac{\sum_{i=1}^{n}x_i}{n}$ und $\bar{y} = \frac{\sum_{i=1}^{n}y_i}{n}$
See Wikipedia
Step4: 2.1.1 Polynomial fit
The coefficients $\beta$ of polynomials of degree 1, 2 and 3 are to be determined.
$f(x) = a + bx$
$f(x) = a + bx + cx^2$
$f(x) = a + bx + cx^2 + dx^3$
The first-degree polynomial was already determined above via the linear fit function and does not have to be computed again. In the following, the coefficients are determined solely with the function scipy.optimize.curve_fit.
Least Squares Fitting in Python
Step5: 2.1.2 Plot of the residuals
Step6: 2.1.3 Determining the goodness of fit
Step7: 2.1.4 Examining the video stream data
Step8: Test with an exponential function
$f(x) = a e^{-bx} + c$
As a reminder, here are the exponential functions
$f(x) = e^x$ and $f(x) = e^{-x}$
Step9: The squared residuals indicate a very good fit. Nevertheless, the plot seems to show a rather large distance; this is due to the wider spread of the y-values. This becomes clearer later, when comparing with the squared residuals of the polynomials.
Step10: 2.1.5 Evaluation with polynomials
Step11: Squared residuals of the polynomials next to the exponential function
Step12: Sum of squared residuals | Python Code:
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
Explanation: Exercise 2
Problem 2.1 - Linear regression with least squares
End of explanation
def print_sample(genfromtxt):
# column names
print(genfromtxt.dtype.names)
# data
for row in genfromtxt[:min((len(genfromtxt), 10))]:
print(row)
# laden der daten mit numpy
x_y = np.genfromtxt('experiment1.csv', delimiter=',', names=True)
loss_qoe = np.genfromtxt('experiment2.csv', delimiter=',', names=True)
print_sample(x_y)
print_sample(loss_qoe)
# extraktion der x und y werte
xs = [i[0] for i in x_y]
ys = [i[1] for i in x_y]
# plot in punkten. andere darstellung zb durch - oder o
plt.plot(xs, ys, '.')
# hinzufuegen eines innenabstands zum plot
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
plt.title('Messwerte X-Y')
plt.show()
Explanation: Numpy documentation: numpy.genfromtxt
Loading the data including the header.
End of explanation
x_avg = sum(xs) / len(xs)
y_avg = sum(ys) / len(ys)
b_numerator = sum([(xs[i] - x_avg) * (ys[i] - y_avg) for i in range(len(xs))])
b_denominator = sum([np.power(xs[i] - x_avg, 2) for i in range(len(xs))])
b = b_numerator / b_denominator
a = y_avg - b * x_avg
print('Formel: f(x) = {:.3f} + {:.3f}x'.format(a, b))
# plot der punkte
xs = [i[0] for i in x_y]
ys = [i[1] for i in x_y]
plt.plot(xs, ys, '.')
# plot der linearen funktion mit gegebenen x werten
plt.plot((0, 1), (a, a + b))
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
plt.title('Lineare Fit-Funktion')
plt.show()
Explanation: Attempt to determine a linear fit function.
Here the calculation is carried out by hand. Later on, the function scipy.optimize.curve_fit is used instead. The corresponding equation has the form $f(x) = \alpha + \beta x$ with
$\beta = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}$
$\alpha = \bar{y} - \beta\bar{x}$
$\bar{x} = \frac{\sum_{i=1}^{n}x_i}{n}$ and $\bar{y} = \frac{\sum_{i=1}^{n}y_i}{n}$
See Wikipedia
End of explanation
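As a quick cross-check of the hand-computed coefficients, numpy's polyfit gives the same least-squares line; a short sketch (polyfit returns the coefficients from the highest degree down):
# Cross-check: np.polyfit returns [slope, intercept] for degree 1
b_np, a_np = np.polyfit(xs, ys, 1)
print('by hand : a = {:.3f}, b = {:.3f}'.format(a, b))
print('polyfit : a = {:.3f}, b = {:.3f}'.format(a_np, b_np))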
from scipy.optimize import curve_fit
poly_1 = lambda x, a, b: a + b*x
poly_2 = lambda x, a, b, c: a + b*x + c*x*x
poly_3 = lambda x, a, b, c, d: a + b*x + c*x*x + d*x*x*x
# plot der punkte
xs = [i[0] for i in x_y]
ys = [i[1] for i in x_y]
plt.plot(xs, ys, '.')
# im folgenden enthalten popt die koeffizienten der jeweiligen funktionen
# pcov ist das 2d-array was die kovarianz matrix enthaelt
x_points = np.linspace(0, 1, 20)
def plot_fit(func, label, x, y):
'''
Performs a curve fit with the given function and
x and y values, and plots the result with the
given label.
'''
popt, pcov = curve_fit(func, x, y)
y_i = [func(_x, *popt) for _x in x_points]
plt.plot(x_points, y_i, label=label)
plot_fit(poly_1, '1', xs, ys)
plot_fit(poly_2, '2', xs, ys)
plot_fit(poly_3, '3', xs, ys)
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
# legende hinzufuegen
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Polynomgrad')
plt.title('Curve Fit in den Graden 1, 2 und 3')
plt.show()
Explanation: 2.1.1 Polynomial fit
The coefficients $\beta$ of polynomials of degree 1, 2 and 3 are to be determined.
$f(x) = a + bx$
$f(x) = a + bx + cx^2$
$f(x) = a + bx + cx^2 + dx^3$
The first-degree polynomial was already determined above via the linear fit function and does not have to be computed again. In the following, the coefficients are determined solely with the function scipy.optimize.curve_fit.
Least Squares Fitting in Python
End of explanation
def calc_residuals(func, x, y):
popt, pcov = curve_fit(func, x, y)
# r^2 an der stelle x_i
r_squares = [np.power(y[i] - func(x[i], *popt), 2) for i in range(len(x))]
# hinzufuegen des vorzeichens
r_squared_unsigned = [-r_squares[i] if y[i] < func(x[i], *popt)
else r_squares[i] for i in range(len(r_squares)) ]
return r_squared_unsigned
f, axes = plt.subplots(ncols=3, nrows=1, sharex=True, sharey=True, figsize=(12, 3))
ax1, ax2, ax3 = axes.ravel()
ax1.set_title('linear')
linear_r = calc_residuals(poly_1, xs, ys)
ax1.bar(xs, linear_r, width=.01)
quadratic_r = calc_residuals(poly_2, xs, ys)
ax2.set_title('quadratic')
ax2.bar(xs, quadratic_r, width=.01)
cubic_r = calc_residuals(poly_3, xs, ys)
ax3.set_title('cubic')
ax3.bar(xs, cubic_r, width=.01)
plt.show()
Explanation: 2.1.2 Plot of the residuals
End of explanation
from IPython.display import display, Math
def sums_r(func, x, y):
popt, pcov = curve_fit(func, x, y)
return sum([(y[i] - func(x[i], *popt)) ** 2 for i in range(len(x))])
sums_r_linear = sums_r(poly_1, xs, ys)
sums_r_quadratic = sums_r(poly_2, xs, ys)
sums_r_cubic = sums_r(poly_3, xs, ys)
display(Math(r'{:s} = {:.5f}'.format('SS_r(linear)', sums_r_linear)))
display(Math(r'{:s} = {:.5f}'.format('SS_r(quadratic)', sums_r_quadratic)))
display(Math(r'{:s} = {:.5f}'.format('SS_r(cubic)', sums_r_cubic)))
print()
def sums_tot(func, y):
yd = sum(y) / len(y)
return sum(np.power(np.array(y) - yd, 2))
# equivalent: return sum([(ys[i] - yd) ** 2 for i in range(len(ys))])
def r_squared(sums_r_value, sums_tot_value):
return 1 - sums_r_value / sums_tot_value
display(Math(r'{:s} = {:.5f}'.format('R^2(linear)', r_squared(sums_r_linear, sums_tot(poly_1, ys)))))
display(Math(r'{:s} = {:.5f}'.format('R^2(quadratic)', r_squared(sums_r_quadratic, sums_tot(poly_2, ys)))))
display(Math(r'{:s} = {:.5f}'.format('R^2(cubic)', r_squared(sums_r_cubic, sums_tot(poly_3, ys)))))
Explanation: 2.1.3 Determining the goodness of fit
End of explanation
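If scikit-learn is available, its r2_score can serve as an independent check of the hand-rolled r_squared function; a sketch under that assumption, comparing the observed y values with the linear model predictions:
from sklearn.metrics import r2_score
popt, pcov = curve_fit(poly_1, xs, ys)
y_pred = [poly_1(x, *popt) for x in xs]
# Should agree with the R^2(linear) value computed above
print(r2_score(ys, y_pred))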
x_vid = [i[0] for i in loss_qoe]
y_vid = [i[1] for i in loss_qoe]
plt.plot(x_vid, y_vid, '.')
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
plt.title('Messwerte Videostream')
plt.show()
Explanation: 2.1.4 Examining the video stream data
End of explanation
x_e = np.linspace(0, 1, 20)
y_e = np.exp(x_e)
plt.plot(x_e, y_e, '-', label=r'$e^x$')
y_e = np.exp(-x_e)
plt.plot(x_e, y_e, '-', label=r'$e^{-x}$')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='E-Funktionen')
plt.show()
# plot der rohdaten
x_vid = [i[0] for i in loss_qoe]
y_vid = [i[1] for i in loss_qoe]
plt.plot(x_vid, y_vid, '.')
# Curve fitting funktion
e_func = lambda x, a, b, c: a * np.exp(-b * x) + c
# plot des fits
plot_fit(e_func, r'$a e^{-bx} + c$', x_vid, y_vid)
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Funktion')
plt.title('Curve Fit mittels Exponentialfunktion')
plt.ylabel('qoe')
plt.xlabel('loss')
plt.show()
sums_r_exp = sums_r(e_func, x_vid, y_vid)
display(Math(r'{:s} = {:.5f}'.format('SS_r(exp)', sums_r_exp)))
display(Math(r'{:s} = {:.5f}'.format('R^2(exp)', r_squared(sums_r_exp, sums_tot(e_func, y_vid)))))
Explanation: Test with an exponential function
$f(x) = a e^{-bx} + c$
As a reminder, here are the exponential functions
$f(x) = e^x$ and $f(x) = e^{-x}$
End of explanation
residuals_e = calc_residuals(e_func, x_vid, y_vid)
plt.title(r'Residuenquadrat nach $a e^{-bx} + c$')
plt.bar(x_vid, residuals_e, width=.005)
plt.show()
Explanation: The squared residuals indicate a very good fit. Nevertheless, the plot seems to show a rather large distance; this is due to the wider spread of the y-values. This becomes clearer later, when comparing with the squared residuals of the polynomials.
End of explanation
x_vid = [i[0] for i in loss_qoe]
y_vid = [i[1] for i in loss_qoe]
plt.plot(x_vid, y_vid, '.')
plot_fit(poly_2, r'$ax^2 + bx + c$', x_vid, y_vid)
plot_fit(poly_3, r'$ax^3 + bx^2 + cx + d$', x_vid, y_vid)
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .05, x_max + .05, y_min - .05, y_max + .05))
# legende hinzufuegen
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Funktion')
plt.title('Curve Fit mit Polynomen')
plt.ylabel('qoe')
plt.xlabel('loss')
plt.show()
Explanation: 2.1.5 Evaluation with polynomials
End of explanation
f, axes = plt.subplots(ncols=3, nrows=1, sharex=True, sharey=True, figsize=(12, 3))
ax1, ax2, ax3 = axes.ravel()
residuals_e = calc_residuals(e_func, x_vid, y_vid)
ax1.set_title(r'exponential')
ax1.bar(x_vid, residuals_e, width=.01)
quadratic_r = calc_residuals(poly_2, x_vid, y_vid)
ax2.set_title('quadratic')
ax2.bar(x_vid, quadratic_r, width=.01)
cubic_r = calc_residuals(poly_3, x_vid, y_vid)
ax3.set_title('cubic')
ax3.bar(x_vid, cubic_r, width=.01)
plt.show()
Explanation: Squared residuals of the polynomials next to the exponential function
End of explanation
sums_r_quadratic = sums_r(poly_2, x_vid, y_vid)
sums_r_cubic = sums_r(poly_3, x_vid, y_vid)
display(Math(r'{:s} = {:.5f}'.format('R^2(exp)', r_squared(sums_r_exp, sums_tot(e_func, y_vid)))))
display(Math(r'{:s} = {:.5f}'.format('R^2(quadratic)', r_squared(sums_r_quadratic, sums_tot(poly_2, y_vid)))))
display(Math(r'{:s} = {:.5f}'.format('R^2(cubic)', r_squared(sums_r_cubic, sums_tot(poly_3, y_vid)))))
Explanation: Sum of squared residuals
End of explanation |
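The covariance matrix pcov returned by curve_fit (mentioned in the comments above but never used) also provides one-sigma uncertainties for the fitted parameters; a short sketch for the exponential model:
popt, pcov = curve_fit(e_func, x_vid, y_vid)
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties of a, b, c
for name, value, err in zip(['a', 'b', 'c'], popt, perr):
    print('{} = {:.3f} +/- {:.3f}'.format(name, value, err))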
6,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning - Hands on Lab - Session #1
Lecturer
Step1: 2. Re-Import the Check NaN Function from Part 2
Step2: 3. Loading in the Data from Session #2
Step3: 3. Transforming Categorical Data
Let's go for some One Hot Encoding - replacing the categorical fields in the dataset with multiple columns representing one value from each column.
Step4: 5. Mid - Conclusion
We had 14 Columns, we now have 147 columns.
It seems like a lot; however, we have mostly just restructured the information and only created 10 new columns
6. Adding new data
6.1. Understanding session.csv
Exactly like we did with the training / testing data. We now investigate session data.
Step5: 6.2. Cleaning and Transforming the Data
6.2.1. Extract the primary and secondary devices for each user
The first piece of information we are going to extract is the primary and secondary device for each user.
How do we determine what the user's primary and secondary devices are? We look at how much time they spent on each device.
Step6: 6.2.2. Determine Counts of Actions
The next thing we are going to do is take counts of how many times each action was taken by each user. This is a two-step process.
To handle the multiple action columns, we repeat these steps for each column individually, effectively creating three separate tables.
Because we have now created tables where each row represents one user, we can now join these three tables together on the basis of the user id.
Step7: 6.2.3. Combine Data Sets
The final steps are to combine the various datasets we have created into one large dataset.
we combine the two device dataframes (df_primary and df_secondary) to create a device dataframe
we combine the device dataframe with the actions dataframe to create a sessions dataframe with all the features we extracted from sessions.csv
Finally, we combine the sessions dataframe with the training and testing data dataframe
Step8: 7. Saving the DataFrame to csv | Python Code:
import os
from datetime import datetime
import numpy as np
import pandas as pd
import sklearn as sk
Explanation: Machine Learning - Hands on Lab - Session #1
Lecturer: Jonathan DEKHTIAR
Date: 2017-03-13
<br/><br/>
Contact: [email protected]
Twitter: @born2data
LinkedIn: JonathanDEKHTIAR
Personal Website: JonathanDEKHTIAR
RSS Feed: FeedCrunch.io
Tech. Blog: born2data.com
Github: DEKHTIARJonathan
<br/><br/>
```
2017 March 13
In place of a legal notice, here is a blessing:
May you do good and not evil.
May you find forgiveness for yourself and forgive others.
May you share freely, never taking more than you give.
**
```
1. Loading the Python libraries
End of explanation
def check_NaN_Values_in_df(df):
# search for NaN values in all the columns
for col in df:
nan_count = df[col].isnull().sum()
if nan_count != 0:
print (col + " => "+ str(nan_count) + " NaN Values")
Explanation: 2. Re-Import the Check NaN Function from Part 2
End of explanation
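For reference, the same check can be written as a pandas one-liner; a sketch for any DataFrame df (the helper name below is illustrative, not part of the original lab):
def check_NaN_values_oneliner(df):
    # Count NaN per column and keep only the columns that actually have missing values
    nan_counts = df.isnull().sum()
    return nan_counts[nan_counts > 0]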
df_all = pd.read_csv(
"output/cleaned.csv",
dtype={
'country_destination': str
}
)
# We transform again the date column into datetime
df_all['date_account_created'] = pd.to_datetime(df_all['date_account_created'], format='%Y-%m-%d %H:%M:%S')
df_all['timestamp_first_active'] = pd.to_datetime(df_all['timestamp_first_active'], format='%Y-%m-%d %H:%M:%S')
# Check for NaN Values => We must find: country_destination => 62096 NaN Values
check_NaN_Values_in_df(df_all)
df_all.sample(n=5) # Only display a few lines and not the whole dataframe
Explanation: 3. Loading in the Data from Session #2
End of explanation
# Home made One Hot Encoding function
def convert_to_binary(df, column_to_convert):
categories = list(df[column_to_convert].drop_duplicates())
for category in categories:
cat_name = str(category).replace(" ", "_").replace("(", "").replace(")", "").replace("/", "_").replace("-", "").lower()
col_name = column_to_convert[:5] + '_' + cat_name[:10]
df[col_name] = 0
df.loc[(df[column_to_convert] == category), col_name] = 1
return df
columns_to_convert = [
'gender',
'signup_method',
'signup_flow',
'language',
'affiliate_channel',
'affiliate_provider',
'first_affiliate_tracked',
'signup_app',
'first_device_type',
'first_browser'
]
# One Hot Encoding
for column in columns_to_convert:
df_all = convert_to_binary(df=df_all, column_to_convert=column)
df_all.drop(column, axis=1, inplace=True)
df_all.sample(n=5)
# Add new date related fields
df_all['day_account_created'] = df_all['date_account_created'].dt.weekday
df_all['month_account_created'] = df_all['date_account_created'].dt.month
df_all['quarter_account_created'] = df_all['date_account_created'].dt.quarter
df_all['year_account_created'] = df_all['date_account_created'].dt.year
df_all['hour_first_active'] = df_all['timestamp_first_active'].dt.hour
df_all['day_first_active'] = df_all['timestamp_first_active'].dt.weekday
df_all['month_first_active'] = df_all['timestamp_first_active'].dt.month
df_all['quarter_first_active'] = df_all['timestamp_first_active'].dt.quarter
df_all['year_first_active'] = df_all['timestamp_first_active'].dt.year
df_all['created_less_active'] = (df_all['date_account_created'] - df_all['timestamp_first_active']).dt.days
# Drop unnecessary columns
columns_to_drop = ['date_account_created', 'timestamp_first_active', 'date_first_booking', 'country_destination']
for column in columns_to_drop:
if column in df_all.columns:
df_all.drop(column, axis=1, inplace=True)
print ("Dataframe Shape:", df_all.shape)
df_all.sample(n=5)
Explanation: 3. Transforming Categorical Data
Let's go for some One Hot Encoding - replacing the categorical fields in the dataset with multiple columns representing one value from each column.
End of explanation
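pandas ships a built-in alternative to the home-made convert_to_binary function; a sketch of how the same one-hot encoding could have been done with pd.get_dummies instead of the loop above (column prefixes then default to the original column names):
# Built-in one-hot encoding: one 0/1 column per category, original columns dropped
df_dummies = pd.get_dummies(df_all, columns=columns_to_convert)
print(df_dummies.shape)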
df_sessions = pd.read_csv("data/sessions.csv")
print ("DF Session Shape:", df_sessions.shape)
df_sessions.head(n=5) # Only display a few lines and not the whole dataframe
Explanation: 5. Mid - Conclusion
We had 14 Columns, we now have 147 columns.
It seems like a lot; however, we have mostly just restructured the information and only created 10 new columns
6. Adding new data
6.1. Understanding session.csv
Exactly like we did with the training / testing data. We now investigate session data.
End of explanation
# Determine primary device
sessions_device = df_sessions.loc[:, ['user_id', 'device_type', 'secs_elapsed']]
aggregated_lvl1 = sessions_device.groupby(['user_id', 'device_type'], as_index=False, sort=False).aggregate(np.sum)
idx = aggregated_lvl1.groupby(['user_id'], sort=False)['secs_elapsed'].transform(max) == aggregated_lvl1['secs_elapsed']
df_primary = pd.DataFrame(aggregated_lvl1.loc[idx , ['user_id', 'device_type', 'secs_elapsed']])
df_primary.rename(columns = {'device_type':'primary_device', 'secs_elapsed':'primary_secs'}, inplace=True)
df_primary = convert_to_binary(df=df_primary, column_to_convert='primary_device')
df_primary.drop('primary_device', axis=1, inplace=True)
df_primary.sample(n=5)
# Determine Secondary device
remaining = aggregated_lvl1.drop(aggregated_lvl1.index[idx])
idx = remaining.groupby(['user_id'], sort=False)['secs_elapsed'].transform(max) == remaining['secs_elapsed']
df_secondary = pd.DataFrame(remaining.loc[idx , ['user_id', 'device_type', 'secs_elapsed']])
df_secondary.rename(columns = {'device_type':'secondary_device', 'secs_elapsed':'secondary_secs'}, inplace=True)
df_secondary = convert_to_binary(df=df_secondary, column_to_convert='secondary_device')
df_secondary.drop('secondary_device', axis=1, inplace=True)
df_secondary.sample(n=5)
Explanation: 6.2. Cleaning and Transforming the Data
6.2.1. Extract the primary and secondary devices for each user
The first piece of information we are going to extract is the primary and secondary device for each user.
How do we determine what the user's primary and secondary devices are? We look at how much time they spent on each device.
End of explanation
# Count occurrences of value in a column
def convert_to_counts(df, id_col, column_to_convert):
id_list = df[id_col].drop_duplicates()
df_counts = df.loc[:,[id_col, column_to_convert]]
df_counts['count'] = 1
df_counts = df_counts.groupby(by=[id_col, column_to_convert], as_index=False, sort=False).sum()
new_df = df_counts.pivot(index=id_col, columns=column_to_convert, values='count')
new_df = new_df.fillna(0)
# Rename Columns
categories = list(df[column_to_convert].drop_duplicates())
for category in categories:
cat_name = str(category).replace(" ", "_").replace("(", "").replace(")", "").replace("/", "_").replace("-", "").lower()
col_name = column_to_convert + '_' + cat_name
new_df.rename(columns = {category:col_name}, inplace=True)
return new_df
# Aggregate and combine actions taken columns
session_actions = df_sessions.loc[:,['user_id', 'action', 'action_type', 'action_detail']]
columns_to_convert = ['action', 'action_type', 'action_detail']
session_actions = session_actions.fillna('not provided')
first = True
for column in columns_to_convert:
print("Converting " + column + " column...")
current_data = convert_to_counts(df=session_actions, id_col='user_id', column_to_convert=column)
# If first loop, current data becomes existing data, otherwise merge existing and current
if first:
first = False
actions_data = current_data
else:
actions_data = pd.concat([actions_data, current_data], axis=1, join='inner')
actions_data.sample(n=5)
Explanation: 6.2.2. Determine Counts of Actions
The next thing we are going to do is take counts of how many times each action was taken by each user. This is a two-step process.
To handle the multiple action columns, we repeat these steps for each column individually, effectively creating three separate tables.
Because we have now created tables where each row represents one user, we can now join these three tables together on the basis of the user id.
End of explanation
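The per-user count tables can also be produced directly with pd.crosstab; a sketch for the 'action' column (the variable name is illustrative, and NaN handling differs slightly from the fillna approach above):
# One row per user, one column per action value, cell = number of occurrences
action_counts = pd.crosstab(df_sessions['user_id'], df_sessions['action'])
print(action_counts.head())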
# Merge device datasets
df_primary.set_index('user_id', inplace=True)
df_secondary.set_index('user_id', inplace=True)
device_data = pd.concat([df_primary, df_secondary], axis=1, join="outer")
# Merge device and actions datasets
combined_results = pd.concat([device_data, actions_data], axis=1, join='outer')
df_sessions = combined_results.fillna(0)
# Merge user and session datasets
df_all.set_index('id', inplace=True)
df_all = pd.concat([df_all, df_sessions], axis=1, join='inner')
df_all.sample(n=5)
df_sessions = df_sessions.astype(int)
df_sessions.sample(n=5)
#Just recheck we don't have new NaN Values before saving our data
# We must find no NaN, the column "country_destination" has been deleted
check_NaN_Values_in_df(df_all)
Explanation: 6.2.3. Combine Data Sets
The final steps are to combine the various datasets we have created into one large dataset.
we combine the two device dataframes (df_primary and df_secondary) to create a device dataframe
we combine the device dataframe with the actions dataframe to create a sessions dataframe with all the features we extracted from sessions.csv
Finally, we combine the sessions dataframe with the training and testing data dataframe
End of explanation
# We create the output directory if necessary
if not os.path.exists("output"):
os.makedirs("output")
# We export to csv
df_all.to_csv("output/enriched.csv", sep=',')
Explanation: 7. Saving the DataFrame to csv
End of explanation |
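A quick sanity check that the export round-trips cleanly; a sketch (index_col=0 restores the user id index):
# Read the file back and verify we get the same shape as the in-memory frame
df_check = pd.read_csv("output/enriched.csv", index_col=0)
print(df_check.shape, df_all.shape)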
6,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python
Python has several advantages that motivated its choice for this course
Step1: Numbers
Step2: Strings
Step3: Lists and dictionaries
Step4: Control structures (loops)
Step5: Functions
We create a file
Step6: Classes
Step7: Files
Step8: Modules
The tools built into Python are meant to be very general. A great number of tools aimed at specific applications exist, but they are not directly part of Python; they are instead available as modules or packages. As an example, if we try to use basic mathematical functions in Python, we notice that they do not exist
Step9: However, they are available in several libraries (modules or packages in Python). There are many of them, which makes it possible to do almost anything. Here we focus on the most essential ones
Step10: Scipy
Scipy, or scientific Python, is a library that contains most of the classic algorithms of numerical methods.
Matplotlib
Matplotlib is a plotting module that produces high-quality figures in all useful formats. Here are a few examples
Step11: Plot of a field $z = \sin (2 \pi x) \sin (2 \pi y) / \sqrt{x^2 + y^2}$ | Python Code:
print 'Hello World !'
a = 5.
b = 7.
a + b
Explanation: Python
Python has several advantages that motivated its choice for this course:
It is a general-purpose language used in many fields: scientific computing, the web, databases, video games, graphics, etc. It is a versatile tool that an engineer can use for all of his or her numerical tasks.
It is available on most platforms: Windows, Mac OS, Linux, Unix, Android
It is a free and open language with a large user community. Your programs can therefore be shared without constraints, and your questions will always find answers on the internet. Finally, Python itself contains very few bugs.
Installation
On Windows, we recommend the Anaconda distribution, which installs all the tools needed for the examples covered here.
Introduction
First steps
You can think of Python as a big calculator: you can type commands and see the result instantly:
End of explanation
a = 3. # On définit un flottant (64 bits par défaut)
b = 7 # On définit un entier (32 bits par défaut)
type(a), type(b) # On demande le type de a et b
a + b # On additionne a et b, on remarque que le résultat est un flottant.
c = a + b # On assigne à c la valeur a + b
Explanation: Numbers
End of explanation
mon_texte = 'salade verte' # Une chaîne de caractères
mon_texte[0] # Premier caractère
mon_texte[1] # Second caractère
mon_texte[-1] # Dernier caractère
motif = 'Les {0} sont {1}' # Une comportant des balises de formatage
motif.format('lapins', 'rouges') # Formatage de la chaine
motif.format('tortues', 5)
Explanation: Strings
End of explanation
ma_liste = [] # On crée une liste vide
ma_liste.append(45) # On ajoute 45 à la fin de la liste.
mon_texte = 'Les lapins ont des grandes oreilles' # On définit une chaine de caractères nommé mon_texte
ma_liste.append(mon_texte) # On ajoute mon_texte à la fin de ma_liste.
ma_liste # On demande à voir le contenu de ma_liste
ma_liste[0] # On demande le premier élément de la liste (Python compte à partir de 0)
ma_liste[1]
ma_liste[0] = a + b # On écrase le premier élément de ma_liste avec a + b
ma_liste
mon_dict = {} # On définit un dictionnaire
mon_dict['lapin'] = 'rabbit' # On associe à la clé 'lapin' la valeur 'rabbit'
mon_dict[1] = 'one' # On associe à la clé 1 la valeur 'one'
mon_dict
mon_dict[1]
mon_dict.keys() # Liste des clés
mon_dict.values() # Liste des valeurs
Explanation: Lists and dictionaries
End of explanation
# Boucles en Python
# Boucle FOR
print 'Boucle FOR'
ma_liste = ['rouge', 'vert', 'noir', 56]
for truc in ma_liste:
print truc # Bien remarquer le decalage de cette ligne (ou indentation) qui delimite le bloc de code qui appartient a la boucle. Dans Python, les blocs sont toujours definis par une indentation.
# Boucle IF
print 'Boucle IF'
nombre = raw_input(' 2 + 2 = ')
if nombre == 4:
print 'Bon'
else:
print 'Pas bon'
# Boucle WHILE
print 'boucle WHILE'
nombre = 3.
while nombre < 4.:
nombre = raw_input('Donnez un nombre plus petit que 4: ')
Explanation: Control structures (loops)
End of explanation
# Definition d'une fonction
def ma_fonction(x, k = 1.): # On declare la fonction et ses arguments
'''
Renvoie k*x**2 avec k ayant une valeur par defaut de 1.
'''
out = k * x**2 # On fait les calculs necessaires
return out # La commande return permet de renvoyer un resultat
ma_fonction(3)
ma_fonction(5.)
ma_fonction(5., k = 5)
help(ma_fonction)
Explanation: Functions
We create a file :download:fonctions.py <Python/Example_code/fonctions.py>:
End of explanation
# Creation d'une classe de vecteurs
class vecteur:
'''
Classe vecteur: decrit le comportement d'un vecteur a 3 dimensions.
'''
def __init__(self, x = 0., y = 0., z = 0.): # Constructeur: c'est la fonction (ou methode) qui est lancee lors de la creation d'un exemplaire de la classe.
self.x = float(x)
self.y = float(y)
self.z = float(z)
def norme(self): # Une methode qui renvoie la norme
x, y, z = self.x, self.y, self.z
return (x**2 + y**2 + z**2)**.5
def __repr__(self): # On definit comment la classe apparait dans le terminal
x, y, z = self.x, self.y, self.z
return '<vecteur: ({0}, {1}, {2})>'.format(x, y, z)
# Addition
def __add__(self, other): # On definit le comportement de la classe vis-a-vis de l'addition
x, y, z = self.x, self.y, self.z
if type(other) in [float, int]: # Avec un nombre
return vecteur(x + other, y + other, z + other)
if isinstance(other, vecteur): # Avec un vecteur
return vecteur(x + other.x, y + other.y, z + other.z)
__radd__ = __add__ # On definit l'addition a gauche pour garantir la commutativite
# Multiplication:
def __mul__(self, other): # On definit le comportement de la classe vis-a-vis de la multiplication
x, y, z = self.x, self.y, self.z
if type(other) in [float, int]: # Avec un nombre
return vecteur(x * other, y * other, z * other)
if isinstance(other, vecteur): # Avec un vecteur: produit vectoriel
x2, y2, z2 = other.x, other.y, other.z
xo = y * z2 - y2 * z
yo = z * x2 - z2 * x
zo = x * y2 - x2 * y
return vecteur(xo, yo, zo)
__rmul__ = __mul__ # On definit le produit vectoriel a gauche
def scalaire(self, other):
'''
Effectue le produit scalaire entre 2 vecteurs.
'''
x, y, z = self.x, self.y, self.z
x2, y2, z2 = other.x, other.y, other.z
return x * x2 + y * y2 + z * z2
def normaliser(self):
'''
Normalise le vecteur.
'''
x, y, z = self.x, self.y, self.z
n = self.norme()
self.x, self.y, self.z = x / n, y / n , z / n
v = vecteur(1, 0, 0)
v + 4
w = vecteur(0, 1, 0)
v + w
v * w
v.scalaire(w)
q = v + w
q
q.norme()
k = vecteur(2, 5, 6)
k.normaliser()
k
k.norme()
Explanation: Classes
End of explanation
f = open("fichier.txt", "wb") # On ouvre un fichier en ecriture
f.write("Very important data") # On ecrit
f.close() # On ferme de fichier
f = open("fichier.txt", "r") # On ouvre le fichier en lecture
data = f.read()
data
Explanation: Files
End of explanation
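In modern Python the same file handling is usually written with a context manager, which closes the file automatically; a short sketch:
# The with statement closes the file automatically, even if an error occurs
with open("fichier.txt", "w") as f:
    f.write("Very important data")
with open("fichier.txt", "r") as f:
    data = f.read()
print(data)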
sin(0)
Explanation: Modules
The tools built into Python are meant to be very general. A great number of tools aimed at specific applications exist, but they are not directly part of Python; they are instead available as modules or packages. As an example, if we try to use basic mathematical functions in Python, we notice that they do not exist:
End of explanation
ma_liste = [1., 3., 5., 10. ] # Une liste
ma_liste + ma_liste # Somme de liste = concaténation
ma_liste*2 # Produit = concaténation aussi....
import numpy as np # On import numpy et on le renomme np par commodité.
mon_array = np.array(ma_liste) # On crée un array à partir de la liste.
mon_array
mon_array * 2 # array * entier = produit terme à terme
mon_array +5 # array + array = somme terme à terme
mon_array.sum() # Somme du array
mon_array.mean() # Valeur moyenne
mon_array.std() # Ecart type
np.where(mon_array > 3., 1., 0.) # Seuillage avec le très puissant where
Explanation: However, they are available in several libraries (modules or packages in Python). There are many of them, which makes it possible to do almost anything. Here we focus on the most essential ones:
Numpy: the indispensable tool for numerical computing
End of explanation
# PACKAGES
import numpy as np
from matplotlib import pyplot as plt # On import pyplot (un sous module de Matplotlib) et on le renomme plt
%matplotlib nbagg
# FONCTIONS
def ma_fonction(x):
'''
Une fonction a tracer.
'''
return np.sin(2 * np.pi * x ) / x
# DEFINITIONS DIVERSES
x = np.linspace(1., 10., 500) # On demande un array contenant 100 points equirepartis entre 0 et 5.
y = ma_fonction(x) # Grace a numpy, on applique la fonction a tous les points x d'un coup
# TRACE DE LA COURBE
fig = plt.figure() # On cree une figure
plt.clf() # On purge la figure
plt.plot(x, y, 'b-', linewidth = 2.) # On trace y en fonction de x
plt.xlabel('$x$') # On definit le label de l'axe x
plt.ylabel('$y$')
plt.grid() # On demande d'avoir une grille
plt.title(r'$y = \sin (2 \pi x) / x$') # On definit le titre et on utilise la syntaxe de LaTeX pour y introduire des maths.
plt.show()
Explanation: Scipy
Scipy, or scientific Python, is a library that contains most of the classic algorithms of numerical methods.
Matplotlib
Matplotlib is a plotting module that produces high-quality figures in all useful formats. Here are a few examples:
Plot of a curve $y = \dfrac{\sin( 2 \pi x)}{x}$:
End of explanation
# FONCTIONS
def ma_fonction(x,y):
'''
Une fonction a tracer.
'''
return np.sin(np.pi * 2 *x) * np.sin(np.pi * 2 *y) / (x**2 + y**2)**.5
# DEFINITIONS DIVERSES
x = np.linspace(1., 5., 100) # On demande un array contenant 100 points equirepartis entre 0 et 5.
y = np.linspace(1., 5., 100)
X, Y = np.meshgrid(x,y) # On cree des grilles X, Y qui couvrent toutes les combinaisons de x et de y
Z = ma_fonction(X, Y) # Grace a numpy, on applique la fonction a tous les points x d'un coup
# TRACE DE LA COURBE
niveaux = 20
fig = plt.figure() # On cree une figure
plt.clf() # On purge la figure
plt.contourf(X, Y, Z, niveaux)
cbar = plt.colorbar()
plt.contour(X, Y, Z, niveaux, colors = 'black')
cbar.set_label('Z')
plt.xlabel('$X$') # On definit le label de l'axe x
plt.ylabel('$Y$')
plt.grid() # On demande d'avoir une grille
plt.title(r'$z = \sin (2 \pi x) \sin (2 \pi y) / \sqrt{x^2 + y^2}$') # On definit le titre et on utilise la syntaxe de LaTeX pour y introduire des maths.
plt.show()
Explanation: Plot of a field $z = \sin (2 \pi x) \sin (2 \pi y) / \sqrt{x^2 + y^2}$:
End of explanation |
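The Modules section above mentions Scipy without showing it in action; a minimal sketch using scipy.integrate.quad to integrate the same kind of function numerically (variable names are illustrative):
import numpy as np
from scipy import integrate
# Numerical integration of sin(2*pi*x)/x between 1 and 10
result, error = integrate.quad(lambda x: np.sin(2 * np.pi * x) / x, 1., 10.)
print(result, error)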
6,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Statistics
Step1: Is that counter-intuitive? Not really... The false positive rate is actually fairly high, and with the numbers as given, the chance is about ten times higher that you are a false positive than that you are a real positive
Step2: This is what bayesiansism is all about
Step4: The question now is
Step5: The Bayesian way
Bayes would say that it makes little sense tot talk about the probability of data, given the true flux, but that it rather should be the other way around
Step7: In order to call emcee, we will have to define a few parameters in order to tell emcee how to do the sampling
Step8: A somewhat more complicated problem
Let us do something very similar to the example of the stellar flux, but now sampling values from a distribution that itself is Gaussian, with a fixed mean and given standard deviation. The "measurement error" on the flux is still given by the square root of the actually measured value.
Below I will plot the example, and what we are after is a best guess for the real average flux and the standard deviation of that flux distribution (both with some confidence intervals around them, preferably). Note that we are still dealing with nice gaussian distributions to keep things simple.
Step10: Good-old frequentist methods
Like before, maximizing the likelihood should give us the best guess for the parameters we are after. The vector theta of model parameters now has two unknowns. The likelihood is now a convolution between the intrinsic distribution for F and the error distribution, like we saw before. This can be expressed as
Step11: Bootstrap methods to estimate uncertainties
The frequentist method results in an estimate for the mean of the gaussian that describes the variation of the true mean flux and the standard deviation of that gaussian. Note that this standard deviation is not the error on the mean! So how can a frequentist find out how well determined the mean and standard deviation are?
One (of many) option(s) is to do a bootstrap of the results. I do not want to go into too much detail, because we better spend our time on more natural bayesian methods, but I will quickly show what the frequentist would (or should) do in this case. A bootstrap is a method that does not assume anything about the underlying data (so normality is not an issue) and does in principle always work. How it works is as follows
Step12: We can see that even though the original estimate of the true standard deviation was pretty far off, the bootstrap resampling shows that the uncertainty of this value is reasonably large and does indeed include the original value within one standard deviation.
And now for the bayesians
The big advantage of bayesian methods is about to show now. Even though the problem became only slightly more complicated, the frequentist needed a whole new arsenal of methods to go about. For the bayesian, the problem stays of roughly equal complexity, even though we do also need the slightly more complicated likelihood function that the frequentists start out with as well.
The model parameter vector theta now consists of two elements
Step13: MCMC runs now result in a two-dimenional space of posterior samplings, one for mu and one for sigma. The package astroML, that we also used for the bootstrapping above, has a convenient plot routine for the sampled posterior that is pretty good at drawing iso-pdf contours.
Step14: Notice the asymmetry! The previous alluded to interaction between the two unknowns results in a non-elliptical posterior. An elliptical posterior could result in purely Gaussian projections on either of the two axes, but the present posterior clearly won't (especially obvious along the $\sigma$-axis).
As you can see, the true value for both ingredients of theta are not on the maximum of the posterior. There is no reason for it to be there, the inner solid curve shows the range whithin which the real values should lie at 95% confidence. Run the same code with a different random number seed and the posterior will look different.
In principle, the posterior is the end point with most meaning. It doesn't make a lot of sense to quote something like $\sigma = X \pm Y$, as the errors are asymmetric, and in general even of very irregular shape.
In the next exercise, you will play with this estimator, and introduce your own, non-flat priors!
And now for something completely different - Nuisance parameters
When bayesian and frequentist methods seem to disagree
Let's walk through an example that is very close to something Bayes himself once used. Imagine a billiard table. Someone rolls a ball until it stops. The imaginary line along the longest axis on the table through that ball determines two parts of the table
Step15: Bayesians wouldn't be bayesians if they wouldn't immediately pull out bayes' theorem. Unfortunately the derivation of the bayesian result is somewhat complicated, involving beta-functions, due to a marginalization over the parameter $p$ that encapsulates the unknown location of the first ball. The full derivation can be found on a the second part of the blog by VanderPlas.
In short, it goes as follows. Given that we do not know $p$ we will have to marginalize over all possible values of $p$, such that the probability of Bob winning given the data can be written as
$$
P(B~|~D) \equiv \int_{-\infty}^\infty P(B,p~|~D) {\mathrm d}p
$$
If you were to use Bayes' rule, and some manipulation in the link above, you arrive at a result that is given by
$$
P(B~|~D) = \frac{\int_0^1 (1 - p)^6 p^5 dp}{\int_0^1 (1 - p)^3 p^5 dp}
$$
I personally am not a huge fan of such integrals, but luckily they turn out to be pecial cases of the Beta Function
Step16: That's not the same! The difference, as you might expect, is caused by the marginalization over the different values for p, the location of the first ball. The value for p is not equal to 5/8, but is a pdf, in which 5/8 is the maximum likelihood value. It is, nevertheless, quite skewed. On top of that, the propagation fo different values for p into chance of Bob still winning is non-linear, so for another not-very-improbable value of p the chance of Bob winning can become much bigger. This works in such a way that the marginalisation
$$P (B | D) = \int P(B, p | D) \textrm{d}p$$
results in a much higher chance of Bob winning than that for taking p=5/8. That parameter p is called the nuisance parameter
Step17: This result is not meant to show you that frequentists methods are incorrect. In fact, also a frequentist could marginalze over values of p (although one might argue that the frequentist is then using methods that are basically bayesian). What it does mean to say is that, while a frequentist needs to both think of the necessary marginalization (rather than just using the maximum likelihood for p, as they would do for any other parameter) and resort to methods that go beyond the usual. The bayesian, on the other hand, will more naturally do the numerical integration over the nuisance parameter, and the method a bayesian uses is not any different from any other problem. In short
Step18: Not by accident, the outliers at low X are too high, whereas the one at high X is too low, to make sure that conventional naive methods are likely to underestimate the slope of a function describing the non-outlier points.
Let's fit a simple linear model, with a slope and an intercept, taken together in parameter vector $\theta$
Step19: Bayesian objections
The quadratic loss function follows directly from a Gaussian likelihood, so why would you opt for another arbitrary loss function? And why this one? And what value for c should one pick and why? That a real-life data set contains outliers is something one often just has to live with and it would be good to use a stable method with a minimum number of arbitrary decisions.
Introduce nuisance parameters!
One line of attack used by Bayesians is to fit a model that allows every point to be an outlier, by adding a term on top of the linear relation. There are several ways to do so anf the metod chosen here looks somewhat like a "signal with a background".
We write the model as follows.
$
\begin{array}{ll}
p({x_i}, {y_i},{e_i}~|~\theta,{g_i},\sigma,\sigma_b) = & \frac{g_i}{\sqrt{2\pi e_i^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2e_i^2}\right] \
&+ \frac{1 - g_i}{\sqrt{2\pi \sigma_B^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2\sigma_B^2}\right]
\end{array}
$
The idea is to introduce weights
Step20: The markov chains (or rather
Step21: There is an exercise to identify what's up with those points in the upper right part of the plot.
Step22: All a matter of interpretation...
We have seen frequentist and Bayesian methods in action side by side. It is now time to get back to that difference in interpretation between the two. Roughly
Step26: Now assume that we have data of failure moments of devices given by $D={10, 12, 15}$ and the question would be what the value of $\theta$ might be. Common sense will tell you that $\theta<10$, but probably also that $\theta>5$ or there would be more data close to 5. So what is it?
The details are, as you may have expected by now on a blog by Jake VanderPlas. In the interest of time we will not go into the nitty-gritty, but the result is outlined here.
Because of the low number of measurements, namely 3, the commonly employed "normal approximation", that is worked out in the given link, breaks down. This normal approximation happens to give nonsense results that have our intuitive answer outside the 95% confidence interval, which is [10.2 - 12.5].
For small $N$, the normal approximation will not apply, and we must instead compute the confidence integral from the actual sampling distribution, which is the distribution of the mean of $N$ variables each distributed according to $p(\theta)$. The sum of random variables is distributed according to the convolution of the distributions for individual variables, so we can exploit the convolution theorem and use the method of characteristic functions to find the following sampling distribution for the sum of $N$ variables distributed according to our particular $p(x~|~\theta)$
Step28: The exact confidence interval is slightly different than the approximate one, but still reflects the same problem | Python Code:
P_positive_if_ill = .99
P_positive_if_notill = .01
P_ill = 1.e-3
P_notill = 1 - P_ill
print("P_ill_if_positive = ",
P_positive_if_ill * P_ill / (P_positive_if_ill * P_ill + P_positive_if_notill * P_notill ))
Explanation: Bayesian Statistics: the what, the why, and the how
This notebook contains illustration material for a workshop about Bayesian statistics. It is not meant to be a complete set of documentation about everything that is discussed. It should, however, be sufficient to follow the whole story, without hearing the author speak. There is, therefore, slightly more text than one generally wants to display on a projector screen, but alas.
The workshop is meant to give a conceptual introduction to the topic of Bayesian statistics. Sometimes, some mathematical "rigor" is helpful to understand the concept. I will use it for that purpose only. The focus will instead be on what it means, and how one can use Bayesian statistics from a Pythonic point of view. Along this notebook with instruction material, there is a notebook with exercises that you are free to explore or ignore. There is also a solutions notebook that has a (at points somehwat thin bodied) guide to what the exercises meant to result in. Often, there is no "right or wrong", but qualitatively your solution should correspond to the one in that notebook. At points, stuff is left out of this instruction notebook, and you will actually learn new things from the exercises. Most of the time, however, the instructor will have you make some exercises and discuss the results later from this notebook. It is, therefore, not advised to scroll down much further in the notebook than what you see on the screen.
I did not want or need to reinvent any wheels, so much of this material is adapted from material from the web. Wherever applicable, I have referenced the source. Like the material used, this notebook (as well as the corresponding exercises and solution notebooks) are BSD licensed, so free to use and distribute.
Without any further ado, let's dive into the material. As if it should surprise anybody, the one equation all of Bayesian statistics revolves around is Bayes' theorem:
<img src="figures/Bayes_Theorem.jpg" alt="Bayes theorem">
This equation is trivially derived from conditional probabilities and is not subject to any criticism from the "frequentist" camp, which is what "conventional" statistics is often called in this context. The difference will become clear in throughout this workshop. The probability that both $A$ and $B$ happen is denoted $P(A \wedge B)$ is given by two symmetric conditional probabilities, namely the probability that $A$ happens given that $B$ already happened ($P(A \,|\, B)$), times the probability that $B$ happens and vice versa:
$$ P (A \wedge B) = P(A \,| \,B) P(B) = P(B\,|\,A) P(A)$$
In the discussion of frequentist versus Bayesian statistics, the interpretation of probabilities is often the crux of (often philosophical) disagreement. Bayes' theorem itself is not subject of discussion. It is a basic result from probability theory, but has fairly drastic consequences for the interpretation of statistical results. Neither ferquentism nor bayesianism is wrong, but they answer fundamentally different questions. Far too often bayesian questions are answered with a frequentist method. From today onwards, you do not need to make such mistakes any longer!
As will hopefully become clear from the examples, the main benefits of bayesian methods are:
- That it is much easier, and in fact trivial, to include prior knowledge about the subject of study
- That uninteresting, but important quantities to derive quantities of interest are included in a much more natural way.
Testing positively to a serious disease
As an example of the use of Bayes' theorem, let's assume the following situation. There is a fairly rare disease, from which about 1 in a 1000 people suffer. You tested positively to this disease in a test that correctly identifies 99% of infected people. It is also 99% accurate for healthy people. What is the chance that you do indeed have the disease?
Let's fill out Bayes' theorem:
$$ P(\textrm{ill } | \textrm{ positive}) = \frac{P(\textrm{positive } | \textrm{ ill}) \cdot P(\textrm{ill}) }{ P(\textrm{positive}) } $$
in which the denominator is given by the sum of testing positive while ill times the chance you are ill and the chance of testing positive while not ill times the chance not being ill:
$$ P(\textrm{positive}) = P(\textrm{positive } | \textrm{ ill}) \cdot P(\textrm{ill}) + P(\textrm{positive } | -\textrm{ill}) \cdot P(-\textrm{ill}) $$
And given that this is a Python notebook, let's just solve this with "code":
End of explanation
P_positive_if_ill = .99
P_positive_if_notill = .01
P_ill = 0.09016
P_notill = 1 - P_ill
print("P_ill_if_positive = ",
P_positive_if_ill * P_ill / (P_positive_if_ill * P_ill + P_positive_if_notill * P_notill ))
Explanation: Is that counter-intuitive? Not really... The false positive rate is actually fairly high, and with the numbers as given, the chance is about ten times higher that you are a false positive than that you are a real positive: of all 999 people in a thousand that are not ill, 10 (or, rather, 9.99) will be tested positive!
Exercise 1 in the exercise notebook lets you play with the numbers if you don't want to change them in this notebook.
Now this you could have easily calculated without Bayes' theorem. One needs to think of the false positives though, something that is hard to forget when filling out Bayes' theorem. Now what if you would do a second test, exactly like the first, but independent? How can you incorporate the knowledge from the first test and update the probability of being ill, given the second positive result, after you already know the probability resulting from the first test?
Simple, plug in the result from the first go as the probability that you are ill:
End of explanation
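The same updating step can be put in a loop, which makes the "posterior becomes the new prior" idea explicit; a small sketch assuming the same test characteristics as above:
P_positive_if_ill = 0.99
P_positive_if_notill = 0.01
P_ill = 1.e-3  # prior before any test
for test in range(3):  # three consecutive positive tests
    evidence = P_positive_if_ill * P_ill + P_positive_if_notill * (1 - P_ill)
    P_ill = P_positive_if_ill * P_ill / evidence  # posterior becomes the new prior
    print("after positive test", test + 1, ":", P_ill)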
# A random seed, for reproducibility.
np.random.seed(42)
F_true = 1000 # True value that we will try to recover
N = 50 # Number of measurements
F = stats.poisson(F_true).rvs(N)  # N measurements of a Poisson distributed F
e = np.sqrt(F) # Poisson errors
fig, ax = plt.subplots()
ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5)
ax.axvline(x=F_true, linewidth=5, alpha=0.2)
ax.set_xlabel("F");ax.set_ylabel("Measurement number");
Explanation: This is what bayesiansism is all about: The new (posterior) probability about a phenomenon is determined by what you already knew (the prior probability), updated with new data.
In exercise 2 you can play around with updating knowledge.
Frequentism vs. Bayesianism - philosophy
Text here, as well as examples, are largely taken and adapted from a blog by Jake VanderPlas of the eScience Institute at the University of Washington (blog made in Jupyter notebook). There is a paper that belongs to this blog post.
For frequentists, probability only has meaning in terms of a limiting case of repeated measurements; probabilities are fundamentally related to frequencies of events.
For Bayesians, the concept of probability is extended to cover degrees of certainty about statements; probabilities are fundamentally related to our own knowledge about an event.
Some terminology
In Bayes' theorem, right of the equality sign there are the likelihood, the prior (what do you know on beforehand about the quantity you're interested in, this is one of the main reasons for criticism and discussion) and the data probablility (also called evidence, which is often used merely as a normalization factor, which we will justify below). The left side of the equation is our quantity of interest, the posterior: what do you know about the quantity of interest after you updated prior knowledge with new data?
The flux of photons from a star
In this example we are interested in measuring the flux of photons from a star, that we assume to be constant in time. We measure the flux some number of times with some uncertainty (for simplicity we take the approximation that the error on the number of photons meaured is equal to the square root of the measured number, which is a reasonable approximation for large numbers):
End of explanation
w = 1. / e ** 2
print("""
F_t   = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))
Explanation: The question now is: given these measurements, what is the best guess for the real flux, F?
Let's call all measurements $F_i$ with error $e_i$ combined a data point $D_i = {F_i, e_i }$.
The frequentist method: maximum likelihood
Given the assumed Gaussian distributed errors, one can determine the likelhood by taking all measurements together. The probability of a given data point $i$, given $F_{true}$ is nothing else than the Gaussian PDF:
$$ P(D_i | F_{true}) = \frac1{\sqrt{2 \pi e_i^2}} \exp \Big( \frac{-(F_i - F_t)^2}{2e_i^2} \Big) $$
This doesn't help much by itself, because you do not know $F_{t}$. Therefore, one constructs the likelihood, which is the product of all $ P(D_i | F_{true})$, maximize that and isolate $F_t$:
$$ \mathcal{L} (D | F_t) = \prod_i P(D_i | F_{true}) \Leftrightarrow \log \mathcal{L} (D | F_t) = -\frac1{2} \sum_{i=1}^N \Big( \log(2 \pi e_i^2) + \frac{(F_i - F_t)^2}{e_i^2} \Big)$$
After taking the logarithm, the maximum does not change ($\textrm{d} \log \mathcal{L} / \textrm{d} F_t = 0$).
The result is that the best estimate for the flux is nothing else than a weighted average of all data points:
$$ F_{\textrm{est}} = \frac{\sum w_i F_i}{\sum w_i}$$
in which $w_i = 1/e_i^2$. With all errors being equal, $ F_{\textrm est}$ is the mean of the measurements. The standard error of the estimate of $F_t$ is given by
$$ \sigma_{\textrm{est}} = \Big( \sum_{i=1}^{N} w_i\Big)^{-1/2}$$
Simple! Let's fill out the measurements of our star and see how well we do:
End of explanation
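The analytic weighted mean can also be cross-checked numerically by maximizing the same log-likelihood with scipy; a sketch that minimizes the negative log-likelihood, starting from the sample mean (function and variable names are illustrative):
import numpy as np
from scipy import optimize
def neg_log_likelihood(theta, F, e):
    return 0.5 * np.sum(np.log(2 * np.pi * e ** 2) + (F - theta[0]) ** 2 / e ** 2)
result = optimize.minimize(neg_log_likelihood, x0=[F.mean()], args=(F, e))
print("F_est (numerical) =", result.x[0])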
def log_prior(theta):
return 1 # flat prior
def log_likelihood(theta, F, e):
return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)
+ (F - theta[0]) ** 2 / e ** 2)
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
Explanation: The Bayesian way
Bayes would say that it makes little sense tot talk about the probability of data, given the true flux, but that it rather should be the other way around: the data is given and you should be interested in the probability of your estimate. Therefore, we would like to obtain
$$ P (F_t | D) = \frac{P(D | F_t) P(F_t)}{ P(D)} $$
On the right hand side of the equation we recognize the prior, $P(F_t)$, about which we might know nothing yet. In that case it is good practice to use a so-called uninformative prior; a prior that does not favor some outcomes in the posterior over others. Often this is called a "flat prior", but not in all situations is the functional form of an uninformative prior really flat. In the simple example here it is, though (and in case of scalar probablities for a stochastic variable rather than probability density functions it would be 1).
Aside from the denominator we see that a flat prior results in a posterior that is equivalent to the likelihood function that is used in the frequentist approach. In a simple toy problem like the one of the star's constant flux, both methods are bound to give the exact same answer. Nevertheless, we will use this toy example to illustrate the way in which a Bayesian would go about this problem. This will show useful in more difficult examples where the frequentist approach is less trivial.
Our goal is to determine $P (F_t | D)$ as a function of the model parameter $F_t$. Our knowledge of that model parameter is encoded in the posterior distribution function: it resembles what we already knew (nothing), updated with our new data (via the likelihood function). This posterior is a nice one-dimensional PDF, that will turn out to have a very nice unimodal shape. In genereal this thing may be messy, multimodal, multi-dimensional and very hard to parametrize.
It is mostly the multi-dimensionality that requires a careful sampling of the posterior distribution function, before inferences about model parameters can be made. In the example of our flux measurements we want to sample the posterior, which we don't know a priori, so instead we:
- take guesses for the model parameters,
- evaluate the prior and likelihood for those guesses of model parameters (given the data)
- obtain a value for the posterior (evidence doesn't depend on model parameters)
- try to improve on that in a next iteration.
Those iterations are typically done with Markov Chain Monte Carlo simulations. Monte Carlo is a typical random iteration process, and the Markov Chain bit indicates that iterations are not independent of the previous iterations, like they would be in classical Monte Carlo.
These samplers work as follows. Many different algorithms for building up the Markov Chains exist, but most are fairly similar in general. Given a random initialisation of the guess for the model parameters (or better: sampled from the prior PDF), the first value of the posterior PDF is calculated. See "Iteration 1" in the figure below. From that, a jump is taken to another place in parameter space. This will give a new value for prior and likelihood, and thus for posterior. Based on a comparison of the old and the new value the jump is accepted or not. It will be accepted:
- if the new posterior pdf has a higher value for the new parameters: $P_{i+1} > P_i$
- with a probability $P_{i+1} / P_i$ if $P_{i+1} < P_i$
<img src="figures/walkers_early.png" alt="Metropolis algoritme">
Different choices can be made in accepting the jump or not, and this is what defines some of the sampling algorithms. Note that this is also why, for the chain, the evidence term doesn't matter. The fact that jumps with lower posterior probability density can also be accepted ensures that the chains walk to a local maximum and then swarm around it, thus sampling the full high-posterior-probability region well. In fact, the density of sampled points is supposed to be directly proportional to the probability density.
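To make the acceptance rule above concrete, a minimal random-walk Metropolis sketch on a toy one-dimensional target could look like the following (purely illustrative; the toy target, step size, starting point and chain length are arbitrary choices and are not used elsewhere in this notebook):
import numpy as np
def toy_log_post(theta):
    # unnormalized log-posterior of a toy Gaussian target (mean 1000, sigma 10)
    return -0.5 * ((theta - 1000.) / 10.) ** 2
def metropolis(log_post, theta0=900., step=5., nsteps=5000):
    chain = [theta0]
    logp = log_post(theta0)
    for _ in range(nsteps):
        proposal = chain[-1] + step * np.random.randn()
        logp_new = log_post(proposal)
        # accept if the posterior improves, or with probability P_new/P_old otherwise
        if np.log(np.random.rand()) < logp_new - logp:
            chain.append(proposal)
            logp = logp_new
        else:
            chain.append(chain[-1])
    return np.array(chain)
toy_chain = metropolis(toy_log_post)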
Two independent walkers will result in two chains, which after sampling the posterior to a fair extent may look like:
<img src="figures/walkers.png" alt="Markov chain">
As you can see, the first few steps are necessary to get in the vicinity of the maximum of the posterior. These steps in the realm of low posterior PDF are often called "burn-in", and they can be removed later, based on the evolution of the posterior probability density.
There is a non-zero probability that a Markov Chain jumps from one local maximum to another. This probability is pretty low though, so evolving one Markov chain further is generally not a wise method to find all local maxima, if there are several. A better idea is to use several walkers that sample the allowed space well. In practice, for an uninformative prior, this may be hard to determine, so experimentation is advised. For a fairly well determined prior, make sure to sample the whole prior space well and evolve from there. It may even be wise to force a bunch of walkers to start from very low prior probability, especially if the likelihood and prior probability seem to be in tension.
Examples of action, using the emcee package
A light-weight package, that can still do much of the heavy lifting, is brought to you by a team of mostly astronomers:
<img src="figures/emcee.JPG" alt="emcee">
The ADS listing is here.
It implements an affine-invariant ensemble sampler for Markov chain Monte Carlo and has a very intuitive Python API. Many choices are made for you, which has the advantage that it is easy to use (and requires very little hand-tuning), but the disadvantage that you cannot play with these choices. Below we will mention some other packages with more freedom (which are, by the law of conservation of misery, more difficult to use).
What the package needs from us is a functional form of the posterior. To keep things in line with what we have already seen, and given that we know how it depends on prior and likelihood, it is insightful to define the latter two explicitly and then pass the posterior as a combination of them. Because numbers tend to get small, and because additions are easier than multiplications, it is common practice to pass around the logarithm of the distribution functions instead. In fact, emcee expects us to do so.
The prior should be a function of the (vector of) model parameters, called theta, and the likelihood should be a function of the model parameters vector, the data vector and the vector of corresponding errors (F and e below).
For the example of the stellar flux, with a flat prior and a Gaussian likelihood, this looks like:
End of explanation
# The dimensionality of the problem, or: how many parameters do I want to infer?
# This is the length of the theta vector.
ndim = 1
# The number of independent walkers, i.e. Markov Chains, you want to construct
nwalkers = 50
# The number of steps you want the walkers to take along one chain (this is the number of accepted steps)
nsteps = 2000
# Pick starting guesses for all values in the vector of model parameters, for all walkers.
# Here we take them random between zero and 2000.
starting_guesses = 2000 * np.random.rand(nwalkers, ndim)
# Import the package! This will typically be done on top of the notebook.
import emcee
# This necessary first step only initializes a session, where the full set of parameters is given
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
# The object now has a method to actually run the MCMC sampler, from the starting points for nsteps steps
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # Result after run_mcmc; shape = (nwalkers, nsteps, ndim)
# The number of burn-in points has to be set by hand (you don't actually need to do this),
# how to pick the number is an exercise later today.
nburn = 1000
sample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points
# plot a histogram of the sample
plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, density=True)
# plot a "best-fit Gaussian"
F_fit = np.linspace(980, 1020, num=500)
pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)
plt.plot(F_fit, pdf, '-k')
plt.xlabel("F"); plt.ylabel("P(F)")
plt.title('Posterior for F');
print("""
F_true = {0}
F_est  = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, np.mean(sample), np.std(sample), N))
Explanation: In order to call emcee, we will have to define a few parameters in order to tell emcee how to do the sampling:
End of explanation
np.random.seed(42)
N = 100
mu_true, sigma_true = 1000, 10 # True flux at time of measurement is distributed following a gaussian.
F_true = stats.norm(mu_true, sigma_true).rvs(N)  # unknown true values, now with scatter
F = stats.poisson(F_true).rvs()  # observed values, with (Poisson) errors
e = np.sqrt(F) # root-N error
fig, ax = plt.subplots()
ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5)
ax.vlines([F_true], 0, N, linewidth=5, alpha=0.1)
ax.set_xlabel("F");ax.set_ylabel("Measurement number");
Explanation: A somewhat more complicated problem
Let us do something very similar to the example of the stellar flux, but now sampling values from a distribution that itself is Gaussian, with a fixed mean and given standard deviation. The "measurement error" on the flux is still given by the square root of the actually measured value.
Below I will plot the example, and what we are after is a best guess for the real average flux and the standard deviation of that flux distribution (both with some confidence intervals around them, preferably). Note that we are still dealing with nice gaussian distributions to keep things simple.
End of explanation
def log_likelihood(theta, F, e):
return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
+ (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))
# maximize likelihood <--> minimize negative likelihood
def neg_log_likelihood(theta, F, e):
return -log_likelihood(theta, F, e)
from scipy import optimize
theta_guess = [900, 50] # You will have to start your optimization somewhere...
theta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e)) # Black box?
print("""
Maximum likelihood estimate for {0} data points:
mu = {theta[0]:.0f}, sigma = {theta[1]:.0f}
""".format(N, theta=theta_est))
Explanation: Good-old frequentist methods
Like before, maximizing the likelihood should give us the best guess for the parameters we are after. The vector theta of model parameters now has two unknowns. The likelihood is now a convolution between the intrinsic distribution for F and the error distribution, like we saw before. This can be expressed as:
$$\mathcal{L}(D~|~\theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi(\sigma^2 + e_i^2)}}\exp\left[\frac{-(F_i - \mu)^2}{2(\sigma^2 + e_i^2)}\right]$$
So for the best guess we can now write:
$$\mu_{est} = \frac{\sum w_i F_i}{\sum w_i};\, \textrm{where}\, w_i = \frac{1}{\sigma^2+e_i^2} $$
And here we run into a problem! The best value for $\mu$ depends on the best value for $\sigma$! Luckily, we can numerically find the solution to this problem, by using the optimization functions in scipy:
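As a quick illustration of the coupling, once the optimizer in the accompanying code has produced theta_est, we can plug its fitted sigma back into the weighted-mean expression above and recover (approximately) its mu — a sanity check, not part of the original analysis:
w = 1. / (theta_est[1] ** 2 + e ** 2)  # weights computed with the fitted sigma
mu_check = np.sum(w * F) / np.sum(w)
print(mu_check, theta_est[0])  # the two values should be close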
End of explanation
from astroML.resample import bootstrap
def fit_samples(sample):
# sample is an array of size [n_bootstraps, n_samples]
# compute the maximum likelihood for each bootstrap.
return np.array([optimize.fmin(neg_log_likelihood, theta_guess,
args=(F, np.sqrt(F)), disp=0)
for F in sample])
samples = bootstrap(F, 1000, fit_samples) # 1000 bootstrap resamplings
mu_samp = samples[:, 0]
sig_samp = abs(samples[:, 1])
print(" mu = {0:.0f} +/- {1:.0f}".format(mu_samp.mean(), mu_samp.std()))
print(" sigma = {0:.0f} +/- {1:.0f}".format(sig_samp.mean(), sig_samp.std()))
Explanation: Bootstrap methods to estimate uncertainties
The frequentist method results in an estimate for the mean of the gaussian that describes the variation of the true mean flux and the standard deviation of that gaussian. Note that this standard deviation is not the error on the mean! So how can a frequentist find out how well determined the mean and standard deviation are?
One (of many) option(s) is to do a bootstrap of the results. I do not want to go into too much detail, because we'd better spend our time on more natural Bayesian methods, but I will quickly show what the frequentist would (or should) do in this case. A bootstrap is a method that does not assume anything about the underlying data (so normality is not an issue) and does in principle always work. It works as follows: for many resamples of the data (drawn with replacement), you determine the maximum likelihood estimates as well, and you investigate the distribution of the means and standard deviations of all of those resulting fits.
There is quite a lot of literature about the ins and outs of bootstrap resampling, which is beyond the scope of this workshop, but I do want to advise you to have a careful look at this before applying it blindly. I will apply it blindly here though:
End of explanation
def log_prior(theta):
# sigma needs to be positive.
if theta[1] <= 0:
return -np.inf
else:
return 0
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
# same setup as above:
ndim, nwalkers = 2, 50
nsteps, nburn = 2000, 1000
starting_guesses = np.random.rand(nwalkers, ndim)
starting_guesses[:, 0] *= 2000 # start mu between 0 and 2000
starting_guesses[:, 1] *= 50 # start sigma between 0 and 50
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, 2)
Explanation: We can see that even though the original estimate of the true standard deviation was pretty far off, the bootstrap resampling shows that the uncertainty of this value is reasonably large and does indeed include the original value within one standard deviation.
And now for the bayesians
The big advantage of bayesian methods is about to show now. Even though the problem became only slightly more complicated, the frequentist needed a whole new arsenal of methods to go about. For the bayesian, the problem stays of roughly equal complexity, even though we do also need the slightly more complicated likelihood function that the frequentists start out with as well.
The model parameter vector theta now consists of two elements: the mean and standard deviation of the distribution of true fluxes. This of course means that the starting points are a two-dimensional array as well. The rest of the procedure is completely in line with the simpler problem:
End of explanation
from astroML.plotting import plot_mcmc
fig = plt.figure()
ax = plot_mcmc(sample.T, fig=fig, labels=[r'$\mu$', r'$\sigma$'], colors='k', )
ax[0].plot(sample[:, 0], sample[:, 1], '.k', alpha=0.1, ms=4)
ax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10);
Explanation: MCMC runs now result in a two-dimensional space of posterior samplings, one for mu and one for sigma. The package astroML, which we also used for the bootstrapping above, has a convenient plot routine for the sampled posterior that is pretty good at drawing iso-pdf contours.
End of explanation
p_hat = 5. / 8.
freq_prob = (1 - p_hat) ** 3
print("Naive frequentist probability of Bob winning: {0:.3f}".format(freq_prob))
print('In terms of odds: {0:.0f} against 1'.format((1. - freq_prob) / freq_prob))
Explanation: Notice the asymmetry! The previously alluded-to interaction between the two unknowns results in a non-elliptical posterior. An elliptical posterior would result in purely Gaussian projections onto either of the two axes, but the present posterior clearly won't (this is especially obvious along the $\sigma$-axis).
As you can see, the true value for both ingredients of theta are not on the maximum of the posterior. There is no reason for it to be there, the inner solid curve shows the range whithin which the real values should lie at 95% confidence. Run the same code with a different random number seed and the posterior will look different.
In principle, the posterior is the end point with most meaning. It doesn't make a lot of sense to quote something like $\sigma = X \pm Y$, as the errors are asymmetric, and in general even of very irregular shape.
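If a compact per-parameter summary is nevertheless needed, percentiles of the posterior samples are a reasonable, asymmetry-aware choice — for example (a small sketch reusing the sample array from the run above):
mu_lo, mu_med, mu_hi = np.percentile(sample[:, 0], [2.5, 50, 97.5])
sig_lo, sig_med, sig_hi = np.percentile(sample[:, 1], [2.5, 50, 97.5])
print("mu    = {0:.1f} (+{1:.1f}/-{2:.1f})".format(mu_med, mu_hi - mu_med, mu_med - mu_lo))
print("sigma = {0:.1f} (+{1:.1f}/-{2:.1f})".format(sig_med, sig_hi - sig_med, sig_med - sig_lo))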
In the next exercise, you will play with this estimator, and introduce your own, non-flat priors!
And now for something completely different - Nuisance parameters
When bayesian and frequentist methods seem to disagree
Let's walk through an example that is very close to something Bayes himself once used. Imagine a billiard table. Someone rolls a ball until it stops. The imaginary line along the longest axis on the table through that ball determines two parts of the table: one with points for A(lice) and the other with points for B(ob):
<img src="figures/billiards.png" alt="Bayesian Billiards">
After that, more balls are rolled on the table. Each time, the person on whose part of the table the ball ends up gets a point, and whoever reaches 6 points first wins.
In problems like these, the location of that first, table-dividing ball can be a nuisance parameter if it is unknown in a given setting. The location is in principle not the parameter of interest, but it is important for the outcome of the analysis. Imagine a problem like this: in a particular game, after 8 balls Alice has 5 points and Bob has 3. What is the probability that Bob wins after all?
In a naive (frequentist?) approach, you could say that the best guess for a ball to end on Bob's terrain is 3/8, so Bob's chance of still winning would be:
End of explanation
from scipy.special import beta
bayes_prob = beta(6 + 1, 5 + 1) / beta(3 + 1, 5 + 1)
print("P(B|D) = {0:.3f}".format(bayes_prob))
print("Bayesian odds against Bob winning: {0:.0f} to 1".format((1. - bayes_prob) / bayes_prob))
Explanation: Bayesians wouldn't be Bayesians if they wouldn't immediately pull out Bayes' theorem. Unfortunately the derivation of the Bayesian result is somewhat involved, requiring beta functions, due to a marginalization over the parameter $p$ that encapsulates the unknown location of the first ball. The full derivation can be found in the second part of the blog by VanderPlas.
In short, it goes as follows. Given that we do not know $p$ we will have to marginalize over all possible values of $p$, such that the probability of Bob winning given the data can be written as
$$
P(B~|~D) \equiv \int_{-\infty}^\infty P(B,p~|~D) {\mathrm d}p
$$
If you were to use Bayes' rule, and some manipulation in the link above, you arrive at a result that is given by
$$
P(B~|~D) = \frac{\int_0^1 (1 - p)^6 p^5 dp}{\int_0^1 (1 - p)^3 p^5 dp}
$$
I personally am not a huge fan of such integrals, but luckily they turn out to be special cases of the Beta function:
$$
\beta(n, m) = \int_0^1 (1 - p)^{n - 1} p^{m - 1} \, dp
$$
The Beta function can be further expressed in terms of gamma functions (i.e. factorials), but for simplicity we'll compute them directly using Scipy's beta function implementation:
End of explanation
np.random.seed(0)
# 100,000 games with a random dividing point on the table, between 0 and 1
p = np.random.random(100000)
# Given the situation, 11 balls are sufficient and necessary
rolls = np.random.random((11, len(p)))
# Did either win yet?
Alice_count = np.cumsum(rolls < p, 0)
Bob_count = np.cumsum(rolls >= p, 0)
# Select games that fit the current situation
good_games = Bob_count[7] == 3
print("Number of suitable games: {0}".format(good_games.sum()))
Alice_count = Alice_count[:, good_games]
Bob_count = Bob_count[:, good_games]
# Which ones did Bob win?
bob_won = np.sum(Bob_count[10] == 6)
print("Number won by Bob: {0}".format(bob_won.sum()))
# So the probability is...
mc_prob = bob_won.sum() * 1. / good_games.sum()
print("Monte Carlo Probability of Bob winning: {0:.3f}".format(mc_prob))
print("MC Odds against Bob winning: {0:.0f} to 1".format((1. - mc_prob) / mc_prob))
Explanation: That's not the same! The difference, as you might expect, is caused by the marginalization over the different values for p. The value for p is not simply equal to 5/8; it has a pdf, in which 5/8 is the maximum likelihood value but which is nevertheless quite skewed. On top of that, the propagation of different values for p into the chance of Bob still winning is non-linear, so for another not-very-improbable value of p the chance of Bob winning can become much bigger. This works in such a way that the marginalisation
$$P (B | D) = \int P(B, p | D) \textrm{d}p$$
results in a much higher chance of Bob winning than that for taking p=5/8. That parameter p is called the nuisance parameter: it is important for the result, but not of actual interest.
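The same marginalization can also be checked directly with numerical quadrature (a quick sanity check, independent of the beta-function shortcut):
from scipy import integrate
num, _ = integrate.quad(lambda p: (1 - p) ** 6 * p ** 5, 0, 1)
den, _ = integrate.quad(lambda p: (1 - p) ** 3 * p ** 5, 0, 1)
print("P(B|D) by quadrature: {0:.3f}".format(num / den))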
A simple Monte Carlo simulation can show that the Bayesian result indeed is correct:
End of explanation
x = np.array([ 0, 3, 9, 14, 15, 19, 20, 21, 30, 35,
40, 41, 42, 43, 54, 56, 67, 69, 72, 88])
y = np.array([33, 68, 34, 34, 37, 71, 37, 44, 48, 49,
53, 49, 50, 48, 56, 60, 61, 63, 44, 71])
e = np.array([ 3.6, 3.9, 2.6, 3.4, 3.8, 3.8, 2.2, 2.1, 2.3, 3.8,
2.2, 2.8, 3.9, 3.1, 3.4, 2.6, 3.4, 3.7, 2.0, 3.5])
plt.errorbar(x, y, e, fmt='.k', ecolor='gray')
plt.xlabel('X');plt.ylabel('Y');
Explanation: This result is not meant to show you that frequentist methods are incorrect. In fact, a frequentist could also marginalize over values of p (although one might argue that the frequentist is then using methods that are basically Bayesian). What it does mean is that a frequentist needs both to think of the necessary marginalization (rather than just using the maximum likelihood for p, as they would do for any other parameter) and to resort to methods that go beyond the usual. The Bayesian, on the other hand, will more naturally do the numerical integration over the nuisance parameter, and the method a Bayesian uses is not any different from any other problem. In short: in Bayesian analysis the treatment of nuisance parameters is more natural, and easier.
I must admit that I myself would probably shoot for the simple, numeric Monte Carlo simulation as above. I am a practical statistician and love the methods that Jake VanderPlas also advertises in his talk about statistics for hackers.
Example two: a linear fit with outliers
A second, perhaps slightly more fair comparison between bayesian and frequentist methods is a linear fit to data with outliers. We first construct a data set, with errors:
End of explanation
from scipy import optimize
def squared_loss(theta, x=x, y=y, e=e):
dy = y - theta[0] - theta[1] * x
return np.sum(0.5 * (dy / e) ** 2)
theta1 = optimize.fmin(squared_loss, [0, 0], disp=False)
xfit = np.linspace(0, 100)
# plt.errorbar(x, y, e, fmt='.k', ecolor='gray')
# plt.plot(xfit, theta1[0] + theta1[1] * xfit, '-k')
# plt.title('Maximum Likelihood fit: Squared Loss');plt.xlabel('X');plt.ylabel('Y');
t = np.linspace(-20, 20)
def huber_loss(t, c=3):
return ((abs(t) < c) * 0.5 * t ** 2
+ (abs(t) >= c) * -c * (0.5 * c - abs(t)))
def total_huber_loss(theta, x=x, y=y, e=e, c=3):
return huber_loss((y - theta[0] - theta[1] * x) / e, c).sum()
theta2 = optimize.fmin(total_huber_loss, [0, 0], disp=False)
plt.errorbar(x, y, e, fmt='.k', ecolor='gray')
plt.plot(xfit, theta1[0] + theta1[1] * xfit, color='lightgray', label="Standard ML")
plt.plot(xfit, theta2[0] + theta2[1] * xfit, color='black', label="Including Huber loss")
plt.legend(loc='lower right')
plt.title('Maximum Likelihood fit: frequentists');plt.xlabel('X');plt.ylabel('Y');
Explanation: Not by accident, the outliers at low X are too high, whereas the one at high X is too low, to make sure that conventional naive methods are likely to underestimate the slope of a function describing the non-outlier points.
Let's fit a simple linear model, with a slope and an intercept, taken together in parameter vector $\theta$:
$$
\hat{y}(x~|~\theta) = \theta_0 + \theta_1 x
$$
With this model, the Gaussian likelihood for all points is given by:
$$
p(x_i,y_i,e_i~|~\theta) \propto \exp\left[-\frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2\right]
$$
As before, the likelihood is the product of all points, and the log of that looks like:
$$
\log \mathcal{L}(D~|~\theta) = \mathrm{const} - \sum_i \frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2
$$
Maximising this to find the best value for $\theta$ is the same as minimizing the sum, which is called the loss:
$$
\mathrm{loss} = \sum_i \frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2
$$
This probably all looks familiar in terms of $\chi^2$, and the method is commonly known as minimizing $\chi^2$; it is a special, and very common, form of loss minimization.
In the bayesian form this will result in the same expression for the posterior in case of a flat prior (which in this case is a doubtful choice). Frequentists can choose to do the minimization analytically, but for completeness we can solve it using optimization routines in scipy, as illustrated below.
There exist frequentist methods to deal with outliers as well. For example, one could use non-quadratic loss functions, like the Huber loss (see VanderPlas' blog for details) or by iteratively refitting on the sample with outliers excluded.
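For completeness, an iterative refit with sigma-clipping might look roughly like the sketch below (illustrative only; the 3-sigma threshold and the number of clipping rounds are arbitrary choices):
def clipped_fit(x, y, e, nclip=5, threshold=3.0):
    mask = np.ones(len(x), dtype=bool)
    theta = [0., 0.]
    for _ in range(nclip):
        theta = optimize.fmin(squared_loss, theta, args=(x[mask], y[mask], e[mask]), disp=False)
        # flag points more than `threshold` error bars away from the current line
        resid = np.abs(y - theta[0] - theta[1] * x) / e
        mask = resid < threshold
    return theta, mask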
End of explanation
# theta will be an array of length 2 + N, where N is the number of points
# theta[0] is the intercept, theta[1] is the slope,
# and theta[2 + i] is the weight g_i
def log_prior(theta):
#g_i needs to be between 0 and 1
if (all(theta[2:] > 0) and all(theta[2:] < 1)):
return 0
else:
return -np.inf # log(0) = -inf
def log_likelihood(theta, x, y, e, sigma_B):
dy = y - theta[0] - theta[1] * x
g = np.clip(theta[2:], 0, 1) # g<0 or g>1 leads to NaNs in logarithm
logL1 = np.log(g) - 0.5 * np.log(2 * np.pi * e ** 2) - 0.5 * (dy / e) ** 2
logL2 = np.log(1 - g) - 0.5 * np.log(2 * np.pi * sigma_B ** 2) - 0.5 * (dy / sigma_B) ** 2
return np.sum(np.logaddexp(logL1, logL2))
def log_posterior(theta, x, y, e, sigma_B):
return log_prior(theta) + log_likelihood(theta, x, y, e, sigma_B)
# Note that this step will take a few minutes to run!
ndim = 2 + len(x) # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 10000 # "burn-in" period to let chains stabilize
nsteps = 15000 # number of MCMC steps to take
# set theta near the maximum likelihood, with
np.random.seed(42)
starting_guesses = np.zeros((nwalkers, ndim))
starting_guesses[:, :2] = np.random.normal(theta1, 1, (nwalkers, 2))
starting_guesses[:, 2:] = np.random.normal(0.5, 0.1, (nwalkers, ndim - 2))
import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x, y, e, 50])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, ndim)
Explanation: Bayesian objections
The quadratic loss function follows directly from a Gaussian likelihood, so why would you opt for another arbitrary loss function? And why this one? And what value for c should one pick and why? That a real-life data set contains outliers is something one often just has to live with and it would be good to use a stable method with a minimum number of arbitrary decisions.
Introduce nuisance parameters!
One line of attack used by Bayesians is to fit a model that allows every point to be an outlier, by adding a term on top of the linear relation. There are several ways to do so, and the method chosen here looks somewhat like a "signal with a background".
We write the model as follows.
$$
\begin{array}{ll}
p(\{x_i\}, \{y_i\}, \{e_i\}~|~\theta,\{g_i\},\sigma,\sigma_B) = & \frac{g_i}{\sqrt{2\pi e_i^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2e_i^2}\right] \\
 & + \frac{1 - g_i}{\sqrt{2\pi \sigma_B^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2\sigma_B^2}\right]
\end{array}
$$
The idea is to introduce weights: ${g_i}$. When such a weight is zero, you are dealing with a noise term with a high standard deviation $\sigma_B$ (e.g. 50, or some arbitrarily high number, or something drawn from a very wide distribution — anything, as long as there is the freedom to reach essentially any value for Y, independent of X). This "nuisance parameter", ${g_i}$, is a vector with a value for every data point. What we do is change our model from the two-dimensional problem of just fitting an intercept and slope into a twenty-two-dimensional problem with the same slope and intercept plus a "noise weight" for every data point.
End of explanation
plt.plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1)
plt.xlabel('intercept')
plt.ylabel('slope');
# Or with astroML to easily overplot the contours:
fig = plt.figure()
ax = plot_mcmc(sample[:,:2].T, fig=fig, labels=[r'Intercept', r'slope'], colors='k')
ax[0].plot(sample[:, 0], sample[:, 1], '.k', alpha=0.1, ms=4);
# ax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10);
Explanation: The Markov chains (or rather: the parts of the Markov chains after burn-in) sample the posterior pdf — all of it. Marginalisation of a parameter is little more than ignoring that parameter when looking at the posterior (given that the posterior space for that parameter is well sampled)! So, here we want to marginalise over all 20 noise weight factors and just obtain the posterior in the slope and intercept.
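Since marginalizing here just means ignoring the other columns of the sample array, credible intervals for the intercept and slope follow directly from percentiles (a short sketch):
for name, col in zip(["intercept", "slope"], [0, 1]):
    lo, med, hi = np.percentile(sample[:, col], [2.5, 50, 97.5])
    print("{0}: {1:.2f} (95% credible region {2:.2f} to {3:.2f})".format(name, med, lo, hi))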
End of explanation
# Let's have a look at all the walkers and a choice of the burn in
for i in range(len(sampler.lnprobability)):
plt.plot(sampler.lnprobability[i,:], linewidth=0.3, color='k', alpha=0.1)
plt.ylabel('ln(P)')
plt.xlabel('Step number')
plt.axvline(nburn, linestyle='dotted')
plt.text(nburn*1.01, -140, "Burn-in");
# And now the linear fit that results, see below for the outlier definition.
theta3 = np.mean(sample[:, :2], 0)
g = np.mean(sample[:, 2:], 0)
outliers = (g < 0.5)
plt.errorbar(x, y, e, fmt='.k', ecolor='gray')
plt.plot(xfit, theta1[0] + theta1[1] * xfit, color='lightgray', label="Standard ML")
plt.plot(xfit, theta2[0] + theta2[1] * xfit, color='lightblue', label="ML + Huber loss")
plt.plot(xfit, theta3[0] + theta3[1] * xfit, color='black', label="Bayesian marginalization")
plt.scatter(x[outliers], y[outliers], edgecolor='red', facecolor='none', s=500, label="Outliers - for FREE!")
plt.xlim([0,100])
plt.ylim([20,80])
plt.legend(loc="lower right")
plt.title('Linear fit: Comparison of methods');
# What about the nuisance parameters?
plt.plot(sample[:, 2], sample[:, 3], ',k', alpha=0.1)
plt.xlabel('$g_1$')
plt.ylabel('$g_2$')
print("g1 mean: {0:.2f}".format(sample[:, 2].mean()))
print("g2 mean: {0:.2f}".format(sample[:, 3].mean()))
# Distribution of the means of all nuisance parameters
plt.hist(g, bins=5);
Explanation: There is an exercise to identify what's up with those points in the upper right part of the plot.
End of explanation
def p(x, theta):
return (x > theta) * np.exp(theta - x)
x = np.linspace(5, 18, 1000)
plt.fill(x, p(x, 5), alpha=0.3)
plt.ylim(0, 1.2)
plt.xlabel('x')
plt.ylabel('p(x)');
Explanation: All a matter of interpretation...
We have seen frequentist and Bayesian methods in action side by side. It is now time to get back to that difference in interpretation between the two. Roughly:
- Frequentists say: If you repeat the experiment many times, then the real value of $\theta$ will fall within the confidence interval in 95% of the trials.
- Bayesians say: Given the data, there is a 95% probability that the real value of $\theta$ falls within the credible region.
So the basic difference is that frequentists see the value of $\theta$ as given, while the data are a random variable; Bayesians, on the other hand, take the data as given and regard the model parameters as random variables. After all, in general the experiment won't be repeated many times...
So does that matter? Yes! As it turns out, many scientists answer Bayesian questions with frequentist methods, and that is just fundamentally wrong. Let's look at an example again.
Jaynes' truncated exponential
Jaynes' truncated exponential is a concept that comes from technology and it is often used for failure of a device, after a chemical prohibitor runs out. The model for the time of device failure is given by
$$
p(x~|~\theta) = \left\{
\begin{array}{lll}
\exp(\theta - x) &,& x > \theta \\
0 &,& x < \theta
\end{array}
\right\}
$$
So for $\theta=5$, that looks like:
End of explanation
from scipy.special import gammaincc, erfinv
from scipy import optimize
# This approximate CI is the result of the much simpler normal approximation,
# it serves as a simple helper function here, for the optimization down below.
def approx_CI(D, sig=0.95):
    """Approximate truncated exponential confidence interval"""
# use erfinv to convert percentage to number of sigma
Nsigma = np.sqrt(2) * erfinv(sig)
D = np.asarray(D)
N = D.size
theta_hat = np.mean(D) - 1
return [theta_hat - Nsigma / np.sqrt(N),
theta_hat + Nsigma / np.sqrt(N)]
def exact_CI(D, frac=0.95):
    """Exact truncated exponential confidence interval"""
D = np.asarray(D)
N = D.size
theta_hat = np.mean(D) - 1
def f(theta, D):
z = theta_hat + 1 - theta
return (z > 0) * z ** (N - 1) * np.exp(-N * z)
def F(theta, D):
return gammaincc(N, np.maximum(0, N * (theta_hat + 1 - theta))) - gammaincc(N, N * (theta_hat + 1))
def eqns(CI, D):
        """Equations which should be equal to zero"""
theta1, theta2 = CI
return (F(theta2, D) - F(theta1, D) - frac,
f(theta2, D) - f(theta1, D))
guess = approx_CI(D, 0.68) # use 1-sigma interval as a guess
result = optimize.root(eqns, guess, args=(D,))
if not result.success:
print("warning: CI result did not converge!")
return result.x
D = [10, 12, 15]
print("CI: ({0:.1f}, {1:.1f})".format(*exact_CI(D)))
print("The approximate CI using the normal approximation would give: ({0:.1f}, {1:.1f})".format(*approx_CI(D)))
Explanation: Now assume that we have data on the failure moments of devices, given by $D=\{10, 12, 15\}$, and the question is what the value of $\theta$ might be. Common sense will tell you that $\theta<10$, but probably also that $\theta>5$, or there would be more data close to 5. So what is it?
The details are, as you may have expected by now, in a blog post by Jake VanderPlas. In the interest of time we will not go into the nitty-gritty, but the result is outlined here.
Because of the low number of measurements, namely 3, the commonly employed "normal approximation", which is worked out in the given link, breaks down. This normal approximation happens to give nonsense results that place our intuitive answer outside the 95% confidence interval, which is [10.2, 12.5].
For small $N$, the normal approximation will not apply, and we must instead compute the confidence interval from the actual sampling distribution, which is the distribution of the mean of $N$ variables each distributed according to $p(\theta)$. The sum of random variables is distributed according to the convolution of the distributions for individual variables, so we can exploit the convolution theorem and use the method of characteristic functions to find the following sampling distribution for the sum of $N$ variables distributed according to our particular $p(x~|~\theta)$:
$$
f(\theta~|~D) \propto
\left\{
\begin{array}{lll}
z^{N - 1}\exp(-z) &,& z > 0 \\
0 &,& z < 0
\end{array}
\right\}
;~ z = N(\hat{\theta} + 1 - \theta)
$$
To compute the 95% confidence interval, we can start by computing the cumulative distribution: we integrate $f(\theta~|~D)$ from $0$ to $\theta$ (note that we are not actually integrating over the parameter $\theta$, but over the estimate of $\theta$. Frequentists cannot integrate over parameters).
This integral is relatively painless if we make use of the expression for the incomplete gamma function:
$$
\Gamma(a, x) = \int_x^\infty t^{a - 1}e^{-t} dt
$$
which looks strikingly similar to our $f(\theta)$.
Using this to perform the integral, we find that the cumulative distribution is given by
$$
F(\theta~|~D) = \frac{1}{\Gamma(N)}\left[ \Gamma\left(N, \max[0, N(\hat{\theta} + 1 - \theta)]\right) - \Gamma\left(N,~N(\hat{\theta} + 1)\right)\right]
$$
The 95% confidence interval $(\theta_1, \theta_2)$ satisfies the following equation:
$$
F(\theta_2~|~D) - F(\theta_1~|~D) = 0.95
$$
and the probability density is equal at either side of the interval:
$$
f(\theta_2~|~D) = f(\theta_1~|~D)
$$
Solving this system of two nonlinear equations will give us the desired confidence interval. Let's compute this numerically:
End of explanation
def bayes_CR(D, frac=0.95):
    """Bayesian Credibility Region"""
D = np.asarray(D)
N = float(D.size)
theta2 = D.min()
theta1 = theta2 + np.log(1. - frac) / N
return theta1, theta2
print("Bayesian 95% CR = ({0:.1f}, {1:.1f})".format(*bayes_CR(D)))
Explanation: The exact confidence interval is slightly different than the approximate one, but still reflects the same problem: we know from common-sense reasoning that $\theta$ can't be greater than 10, yet the 95% confidence interval is entirely in this forbidden region! The confidence interval seems to be giving us unreliable results.
Let's have a look at a bayesian approach.
For the Bayesian solution, we start by writing Bayes' rule:
$$
p(\theta~|~D) = \frac{p(D~|~\theta)p(\theta)}{P(D)}
$$
Using a constant prior $p(\theta)$, and with the likelihood
$$
p(D~|~\theta) = \prod_{i=1}^N p(x~|~\theta)
$$
we find
$$
p(\theta~|~D) \propto \left\{
\begin{array}{lll}
N\exp\left[N(\theta - \min(D))\right] &,& \theta < \min(D) \\
0 &,& \theta > \min(D)
\end{array}
\right\}
$$
where $\min(D)$ is the smallest value in the data $D$, which enters because of the truncation of $p(x~|~\theta)$.
Because $p(\theta~|~D)$ increases exponentially up to the cutoff, the shortest 95% credibility interval $(\theta_1, \theta_2)$ will be given by
$$
\theta_2 = \min(D)
$$
and $\theta_1$ given by the solution to the equation
$$
\int_{\theta_1}^{\theta_2} N\exp[N(\theta - \theta_2)]d\theta = f
$$
this can be solved analytically by evaluating the integral, which gives
$$
\theta_1 = \theta_2 + \frac{\log(1 - f)}{N}
$$
Let's write a function which computes this:
End of explanation |
6,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auto-Batched Joint Distributions
Step1: <table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 전제 조건
Step3: 데시데라타(Desiderata)
확률적 추론에서는 종종 두 가지 기본 연산을 수행하려고 합니다.
sample
Step4: 첫 번째 시도는 모델을 코드로 직접 변환하는 것입니다. 기울기 m과 바이어스 b는 간단합니다. Y는 lambda 함수를 사용하여 정의됩니다. 일반적인 패턴은 JointDistributionSequential(JDS)에서 $k$ 인수의 lambda 함수가 모델의 이전 $k$ 분포를 사용한다는 것입니다. '역'순서에 유의하세요.
샘플을 생성하는 데 사용된 샘플 및 기본 '하위 분포'를 모두 반환하는 sample_distributions를 호출합니다(sample을 호출하여 샘플만 생성할 수 있습니다. 튜토리얼의 뒷부분에서 이들 분포가 있으면 편리할 것입니다). 적절한 샘플은 다음과 같습니다.
Step5: 하지만 log_prob는 원하지 않는 형상이 있는 결과를 생성합니다.
Step6: 여러 샘플 추출하기도 동작하지 않습니다.
Step7: 무엇이 잘못되었는지 이해하려고 해봅시다.
간략한 검토
Step8: 그리고 분포는 다음과 같습니다.
Step9: 로그 확률은 각 부분의 일치하는 요소에서 하위 분포의 로그 확률을 합산하여 계산됩니다.
Step10: 따라서 한 가지 설명으로는 log_prob_parts의 3번째 서브 구성 요소가 7-텐서이므로 로그 확률 계산이 7-텐서를 반환한다는 것입니다. 이유는 무엇일까요?
수학 공식에서 Y에 대한 분포에 해당하는 dists의 마지막 요소가 [7]의 batch_shape를 가짐을 확인합니다. 즉, Y에 대한 분포는 7개의 독립적인 정규 분포(평균이 다르며 이 경우 규모는 같음)입니다.
이제 무엇이 잘못되었는지 이해했습니다. JDS에서 Y 대한 분포는 batch_shape=[7]이고, JDS의 샘플은 m과 b에 대한 스칼라와 7개의 독립적 정규 분포의 '배치'를 나타냅니다. log_prob는 7개의 개별 로그 확률을 계산하며, 각각은 m과 b를 추출할 로그 확률과 특정 X[i]에서 단일 관측치 Y[i]를 나타냅니다.
Independent로 log_prob(sample()) 수정하기
dists[2]에는 event_shape=[]과 batch_shape=[7]이 있음을 상기하세요.
Step11: 배치 차원을 이벤트 차원으로 변환하는 TFP의 Independent 메타분포를 사용하여 event_shape=[7] 및 batch_shape=[]의 분포로 변환할 수 있습니다(Y의 분포이므로 y_dist_i 이름을 변경하고 _i는 Independent 래핑을 대신합니다).
Step12: 이제 7-벡터의 log_prob는 스칼라입니다.
Step13: 내부적으로 배치에 대한 Independent 합계는 다음과 같습니다.
Step14: 그리고 실제로 이를 사용하여 log_prob가 스칼라를 반환하는 새로운 jds_i(i는 다시 Independent 나타냄)를 생성할 수 있습니다.
Step15: 몇 가지 참고 사항입니다.
jds_i.log_prob(s)은 tf.reduce_sum(jds.log_prob(s))와 같지 않습니다. 전자는 결합 분포의 '올바른' 로그 확률을 생성합니다. 후자는 7-텐서에 대해 합하고, 각 요소는 m, b 및 Y 로그 확률의 단일 요소의 합이므로 m과 b를 초과합니다(log_prob(m) + log_prob(b) + log_prob(Y)는 TFP가 TF 및 NumPy의 브로드캐스팅 규칙을 따르므로 예외로 처리하지 않고 결과를 반환합니다. 벡터에 스칼라를 추가하면 벡터 크기의 결과가 생성됩니다).
이 특정 경우에는 문제를 해결하고 Independent(Normal(...)) 대신 MultivariateNormalDiag를 사용하여 같은 결과를 얻을 수 있습니다. MultivariateNormalDiag는 벡터 값 분포입니다(즉, 이미 벡터 이벤트 형상이 있음). MultivariateNormalDiag는 Independent와 Normal의 구성으로 구현될 수 있지만 실제로 구현되지는 않습니다. 벡터 V가 주어지면 n1 = Normal(loc=V)와 n2 = MultivariateNormalDiag(loc=V)의 샘플은 구별할 수 없음을 기억하는 것이 좋습니다. 이러한 분포의 차이점은 n1.log_prob(n1.sample())이 벡터이고 n2.log_prob(n2.sample())은 스칼라라는 것입니다.
여러 샘플
여러 샘플 추출하기가 여전히 동작하지 않습니다.
Step16: 그 이유를 생각해봅시다. jds_i.sample([5, 3])을 호출할 때 먼저 m과 b 샘플을 각각 형상 (5, 3)으로 추출합니다. 그 후, 다음을 통해 Normal 분포를 구성하려고 합니다.
tfd.Normal(loc=m*X + b, scale=1.)
그러나 m이 형상 (5, 3)이고 X가 형상 7이면 이 둘을 함께 곱할 수 없으며 실제로 이러한 오류가 발생합니다.
Step17: 이 문제를 해결하기 위해 Y에 대한 분포에 어떤 속성이 있어야 하는지 생각해 보겠습니다. jds_i.sample([5, 3])을 호출했다면 m과 b가 모두 형상(5, 3)을 가질 것임을 압니다. Y 분포에서 sample에 대한 호출은 어떤 형상을 생성해야 할까요? 분명한 대답은 (5, 3, 7)입니다. 각 배치 지점에 대해 X와 같은 크기의 샘플이 필요합니다. TensorFlow의 브로드캐스팅 기능으로 추가 차원을 더하여 이를 달성할 수 있습니다.
Step18: m과 b 모두에 축을 추가하면 여러 샘플을 지원하는 새 JDS를 정의할 수 있습니다.
Step19: 추가 검사로 단일 배치 지점에 대한 로그 확률이 이전과 일치하는지 확인합니다.
Step20: <a id="AutoBatching-For-The-Win"></a>
성공적으로 자동 일괄 처리하기
아주 좋습니다. 이제 모든 데시데라타를 처리하는 JointDistribution의 버전을 갖추었습니다. log_prob는 tfd.Independent를 사용하여 스칼라를 반환하며, 추가 축을 더하여 브로드캐스팅을 수정했으므로 이제 여러 샘플이 제대로 동작합니다.
더 쉽고 더 좋은 방법이 있다면 어떨까요? 그 방법은 바로 JointDistributionSequentialAutoBatched(JDSAB)라고 합니다.
Step21: 어떻게 동작하나요? 깊은 이해를 위해 코드 읽기를 시도할 수 있지만 대부분의 사용 사례에 대해 충분한, 간략한 개요를 제공합니다.
첫 번째 문제는 Y에 batch_shape=[7]와 event_shape=[]가 있고Independent를 사용하여 배치 차원을 이벤트 차원으로 변환했다는 점을 기억하세요. JDSAB는 구성 요소 분포의 배치 형상을 무시합니다. 대신 batch_ndims > 0을 설정하여 달리 지정하지 않는 한, 배치 형상을 모델의 전체 속성으로 처리하며 []로 간주합니다. 이 효과는 위에서 수동으로 수행한 것처럼 tfd.Independent를 사용하여 구성 요소 분포의 <em>모든</em> 배치 차원을 이벤트 차원으로 변환하는 것과 같습니다.
두 번째 문제는 여러 샘플을 만들 때 X로 적절하게 브로드캐스팅할 수 있도록 m과 b의 형상을 조정해야 한다는 것이었습니다. JDSAB를 사용하면 모델을 작성하여 단일 샘플을 생성하고 전체 모델을 '리프트(lift)'하여 TensorFlow의 vectorized_map으로 여러 샘플을 생성합니다(이 특성은 JAX의 vmap과 유사합니다).
배치 형상 문제를 더 자세히 살펴보면, 원래의 '불량' 결합 분포 jds, 배치 고정 분포 jds_i, jds_ia 및 자동 일괄 처리된 jds_ab의 배치 형상을 비교할 수 있습니다.
Step22: 원본 jds에는 배치 형상이 다른 하위 분포가 있습니다. jds_i와 jds_ia는 같은 배치 형상(비어 있음)으로 하위 분포를 만들어 이 문제를 해결합니다. jds_ab에는 단일 배치 형상(비어 있음)만 있습니다.
JointDistributionSequentialAutoBatched가 몇 가지 추가 일반성을 무료로 제공한다는 점은 주목할 만합니다. 공변량 X(및 암시적으로 관측치 Y)를 2차원으로 만든다고 가정합니다.
Step23: JointDistributionSequentialAutoBatched는 변경 없이 동작합니다(X의 형상이 jds_ab.log_prob로 캐싱되므로 모델을 재정의해야 합니다).
Step24: On the other hand, our carefully crafted JointDistributionSequential no longer works. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Auto-Batched Joint Distributions: A Gentle Tutorial
Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Import and set ups{ display-mode: "form" }
import functools
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
Explanation: <table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Modeling_with_JointDistribution"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a>
</td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
Getting started
TensorFlow Probability (TFP) offers a number of JointDistribution abstractions that make probabilistic inference easier by allowing a user to express a probabilistic graphical model in a near-mathematical form; the abstraction generates methods for sampling from the model and for evaluating the log probability of samples from the model. In this tutorial, we review the 'autobatched' variants, which were developed after the original JointDistribution abstractions. Relative to the original, non-autobatched abstractions, the autobatched versions are simpler to use and more ergonomic, allowing many models to be expressed with less boilerplate. In this colab, we explore a simple model in (perhaps tedious) detail, making clear the problems autobatching solves, and hopefully teaching the reader more about TFP shape concepts along the way.
Prior to the introduction of autobatching, there were a few different variants of JointDistribution, corresponding to different syntactic styles for expressing probabilistic models: JointDistributionSequential, JointDistributionNamed, and JointDistributionCoroutine. Autobatching exists as a mixin, so we now have AutoBatched variants of all of these. In this tutorial, we explore the differences between JointDistributionSequential and JointDistributionSequentialAutoBatched; everything we do here, however, applies to the other variants with essentially no changes.
Dependencies and prerequisites
End of explanation
X = np.arange(7)
X
Explanation: Prerequisites: a Bayesian regression problem
We'll consider a very simple Bayesian regression scenario:
$$ \begin{align} m & \sim \text{Normal}(0, 1) \\ b & \sim \text{Normal}(0, 1) \\ Y & \sim \text{Normal}(mX + b, 1) \end{align} $$
In this model, m and b are drawn from standard normals, and the observations Y are drawn from a normal distribution whose mean depends on the random variables m and b, and some (nonrandom, known) covariates X. (For simplicity, in this example, we assume the scale of all random variables is known.)
To do inference in this model we'd need to know both the covariates X and the observations Y, but for the purposes of this tutorial we'll only need X, so we define a simple dummy X:
End of explanation
jds = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
Explanation: Desiderata
In probabilistic inference, we often want to perform two basic operations:
sample: drawing samples from the model.
log_prob: computing the log probability of a sample from the model.
The key contribution of TFP's JointDistribution abstractions (as with many other approaches to probabilistic programming) is to allow users to write a model once and have access to both sample and log_prob computations.
Noting that we have 7 points in our data set (X.shape = (7,)), we can now state the desiderata for an excellent JointDistribution:
sample() should produce a list of Tensors having shape [(), (), (7,)], corresponding to the scalar slope, scalar bias, and vector observations, respectively.
log_prob(sample()) should produce a scalar: the log probability of a particular slope, bias, and observations.
sample([5, 3]) should produce a list of Tensors having shape <code>[(5, 3), (5, 3), (5, 3, 7)]</code>, representing a (5, 3)-<em>batch</em> of samples from the model.
log_prob(sample([5, 3])) should produce a Tensor with shape (5, 3).
We'll now look at a succession of JointDistribution models, see how to achieve the desiderata above, and hopefully learn a little more about TFP shapes along the way.
Spoiler alert: the approach that satisfies the desiderata above without added boilerplate is autobatching.
First attempt: JointDistributionSequential
End of explanation
dists, sample = jds.sample_distributions()
sample
Explanation: Our first attempt is a direct translation of the model into code. The slope m and bias b are straightforward. Y is defined via a lambda function: the general pattern in JointDistributionSequential (JDS) is that a lambda function of $k$ arguments uses the previous $k$ distributions in the model (note the 'reverse' order).
We call sample_distributions, which returns both a sample and the underlying 'sub-distributions' used to generate it (we could have produced just the sample by calling sample; having the distributions will be convenient later in the tutorial). The sample we produce is fine:
End of explanation
jds.log_prob(sample)
Explanation: But log_prob produces a result with an undesired shape:
End of explanation
try:
jds.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
Explanation: Drawing multiple samples doesn't work either:
End of explanation
sample
Explanation: Let's try to understand what goes wrong.
A brief review: batch and event shape
In TFP, an ordinary (non-JointDistribution) probability distribution has an event shape and a batch shape, and understanding the difference is crucial to using TFP effectively:
Event shape describes the shape of a single draw from the distribution; the draw may be dependent across dimensions. For scalar distributions the event shape is []; for a 5-dimensional MultivariateNormal it is [5].
Batch shape describes independent, not identically distributed draws, a.k.a. a 'batch' of distributions. Representing a batch of distributions in a single Python object is one of the key ways TFP achieves efficiency at scale.
For our purposes, the critical fact to keep in mind is that if we call log_prob on a single sample from a distribution, the result will always have a shape that matches (i.e., has as rightmost dimensions) the batch shape.
For a more in-depth discussion of shapes, see the 'Understanding TensorFlow Distributions Shapes' tutorial.
Why doesn't log_prob(sample()) produce a scalar?
Let's use our knowledge of batch and event shape to explore what's happening with log_prob(sample()). Here's our sample again:
End of explanation
dists
Explanation: And here are the distributions:
End of explanation
log_prob_parts = [dist.log_prob(s) for (dist, s) in zip(dists, sample)]
log_prob_parts
np.sum(log_prob_parts) - jds.log_prob(sample)
Explanation: The log probability is computed by summing the log probabilities of the sub-distributions at the matching elements of the parts:
End of explanation
dists[2]
Explanation: So, one level of explanation is that the log probability computation returns a 7-Tensor because the third subcomponent of log_prob_parts is a 7-Tensor. Why?
We see that the last element of dists, which corresponds to our distribution over Y in the mathematical formulation, has a batch_shape of [7]. In other words, our distribution over Y is a batch of 7 independent normals (with different means and, in this case, the same scale).
We now understand what's wrong: in the JDS, the distribution over Y has batch_shape=[7], so a sample from the JDS represents scalars for m and b and a 'batch' of 7 independent normals, and log_prob computes 7 separate log probabilities, each representing the log probability of drawing m and b plus a single observation Y[i] at a particular X[i].
Fixing log_prob(sample()) with Independent
Recall that dists[2] has event_shape=[] and batch_shape=[7]:
End of explanation
y_dist_i = tfd.Independent(dists[2], reinterpreted_batch_ndims=1)
y_dist_i
Explanation: Using TFP's Independent metadistribution, which converts batch dimensions to event dimensions, we can convert this into a distribution with event_shape=[7] and batch_shape=[] (we rename it y_dist_i because it's the distribution over Y, with the _i standing in for the Independent wrapping):
End of explanation
y_dist_i.log_prob(sample[2])
Explanation: Now the log_prob of a 7-vector is a scalar:
End of explanation
y_dist_i.log_prob(sample[2]) - tf.reduce_sum(dists[2].log_prob(sample[2]))
Explanation: Under the covers, Independent sums over the batch:
End of explanation
jds_i = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m*X + b, scale=1.),
reinterpreted_batch_ndims=1)
])
jds_i.log_prob(sample)
Explanation: And indeed, we can use this to construct a new jds_i (the i again stands for Independent) whose log_prob returns a scalar:
End of explanation
try:
jds_i.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
Explanation: A couple of notes.
jds_i.log_prob(s) is not the same as tf.reduce_sum(jds.log_prob(s)). The former produces the 'correct' log probability of the joint distribution. The latter sums over a 7-Tensor, each element of which is the sum of the log probability of m, b, and a single element of the log probability of Y, so it overcounts m and b (log_prob(m) + log_prob(b) + log_prob(Y) doesn't throw an exception because TFP follows TF and NumPy's broadcasting rules: adding a scalar to a vector produces a vector-sized result).
In this particular case, we could have solved the problem and achieved the same result using MultivariateNormalDiag instead of Independent(Normal(...)). MultivariateNormalDiag is a vector-valued distribution (i.e., it already has vector event shape); indeed it could be (but isn't) implemented as a composition of Independent and Normal. It's worth remembering that given a vector V, samples from n1 = Normal(loc=V) and n2 = MultivariateNormalDiag(loc=V) are indistinguishable; the difference is that n1.log_prob(n1.sample()) is a vector while n2.log_prob(n2.sample()) is a scalar — a quick shape check follows below.
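A minimal shape check of that last point (illustrative only; the zero loc vector is an arbitrary choice):
V = tf.zeros([7])
n1 = tfd.Normal(loc=V, scale=1.)
n2 = tfd.MultivariateNormalDiag(loc=V, scale_diag=tf.ones([7]))
print(n1.log_prob(n1.sample()).shape)  # (7,) -- a vector, one log-prob per batch member
print(n2.log_prob(n2.sample()).shape)  # ()   -- a scalar, the whole vector is one event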
Multiple samples
Drawing multiple samples still doesn't work:
End of explanation
m = tfd.Normal(0., 1.).sample([5, 3])
try:
m * X
except tf.errors.InvalidArgumentError as e:
print(e)
Explanation: Let's think about why. When we call jds_i.sample([5, 3]), we first draw samples for m and b, each with shape (5, 3). Next, we try to construct a Normal distribution via
tfd.Normal(loc=m*X + b, scale=1.)
But if m has shape (5, 3) and X has shape 7, we can't multiply them together, and indeed this is the error we hit:
End of explanation
m[..., tf.newaxis].shape
(m[..., tf.newaxis] * X).shape
Explanation: To resolve this problem, let's think about what properties the distribution over Y must have. If we've called jds_i.sample([5, 3]), then we know m and b will both have shape (5, 3). What shape should a call to sample on the Y distribution produce? The obvious answer is (5, 3, 7): for each batch point, we want a sample of the same size as X. We can achieve this by using TensorFlow's broadcasting capabilities and adding an extra dimension:
End of explanation
jds_ia = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
reinterpreted_batch_ndims=1)
])
shaped_sample = jds_ia.sample([5, 3])
shaped_sample
jds_ia.log_prob(shaped_sample)
Explanation: Adding an axis to both m and b, we can define a new JDS that supports multiple samples:
End of explanation
(jds_ia.log_prob(shaped_sample)[3, 1] -
jds_i.log_prob([shaped_sample[0][3, 1],
shaped_sample[1][3, 1],
shaped_sample[2][3, 1, :]]))
Explanation: As an extra check, we verify that the log probability for a single batch point matches what we had before:
End of explanation
jds_ab = tfd.JointDistributionSequentialAutoBatched([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
jds_ab.log_prob(jds.sample())
shaped_sample = jds_ab.sample([5, 3])
jds_ab.log_prob(shaped_sample)
jds_ab.log_prob(shaped_sample) - jds_ia.log_prob(shaped_sample)
Explanation: <a id="AutoBatching-For-The-Win"></a>
Auto-batching for the win
Excellent! We now have a version of JointDistribution that handles all our desiderata: log_prob returns a scalar thanks to the use of tfd.Independent, and multiple samples work now that we've fixed broadcasting by adding extra axes.
What if I told you there was an easier, better way? There is, and it's called JointDistributionSequentialAutoBatched (JDSAB):
End of explanation
jds.batch_shape
jds_i.batch_shape
jds_ia.batch_shape
jds_ab.batch_shape
Explanation: How does this work? While you could attempt to read the code for a deep understanding, we'll give a brief overview which is sufficient for most use cases.
Recall that our first problem was that our distribution for Y had batch_shape=[7] and event_shape=[], and we used Independent to convert the batch dimension to an event dimension. JDSAB ignores the batch shapes of the component distributions; instead it treats batch shape as an overall property of the model, which is assumed to be [] unless specified otherwise by setting batch_ndims > 0. The effect is equivalent to using tfd.Independent to convert <em>all</em> batch dimensions of the component distributions into event dimensions, as we did manually above.
Our second problem was the need to massage the shapes of m and b so that they could broadcast appropriately with X when creating multiple samples. With JDSAB, we write a model that generates a single sample, and we 'lift' the entire model to generate multiple samples using TensorFlow's vectorized_map (this feature is analogous to JAX's vmap).
Exploring the batch shape issue in more detail, we can compare the batch shapes of our original 'bad' joint distribution jds, our batch-fixed distributions jds_i and jds_ia, and our autobatched jds_ab:
End of explanation
X = np.arange(14).reshape((2, 7))
X
Explanation: The original jds has sub-distributions with different batch shapes. jds_i and jds_ia fix this by creating sub-distributions with the same (empty) batch shape. jds_ab has only a single (empty) batch shape.
It's worth noting that JointDistributionSequentialAutoBatched offers some additional generality for free. Suppose we make the covariates X (and, implicitly, the observations Y) two-dimensional:
End of explanation
jds_ab = tfd.JointDistributionSequentialAutoBatched([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
shaped_sample = jds_ab.sample([5, 3])
shaped_sample
jds_ab.log_prob(shaped_sample)
Explanation: Our JointDistributionSequentialAutoBatched works with no changes (we need to redefine the model because the shape of X gets cached by jds_ab.log_prob):
End of explanation
jds_ia = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
reinterpreted_batch_ndims=1)
])
try:
jds_ia.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
Explanation: On the other hand, our carefully crafted JointDistributionSequential no longer works:
End of explanation |
6,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment logger
The logger stores experimental data in a single SQLite database. It is intended to be fast and lightweight, but record all necessary meta data and timestamps for experimental trials.
As a consequence, SQLiteBrowser can be used to browse the log results without having to do any coding.
Most of the entries are stored as JSON strings in the database tables; any object that can be serialised by Python's json module can be added directly.
Structure
Log ExperimentLog has a single master log which records all logged data as JSON (with a timestamp) in a single series. The log is annotated with different streams that represent distinct sensors or inputs.
Session The log is indexed by sessions, where a session is a logical part of an experiment (a whole experiment, a condition, a repetition, etc.).
Metadata JSON Metadata about any log stream, session, run, user and the whole dataset can be recorded in the database, so there is a single, consistent record of everything to do with the experimental trials.
Binding
ExperimentLog uses the idea of binding metadata to sessions. So if you have a user who is doing an experiment, you can create a metadata entry for that user, and then bind it to the sessions that involve that user.
Session structures are hierarchical, and bindings apply to sessions and all of their children; so if a user is bound to an experiment, they are also bound to all the conditions, sub-conditions, repetitions, etc.
Runs
Run ExperimentLog also tracks runs of the experimental software. A run exists from the start of the experimental software until the process exits. Each session can be part of a single run, or a session can be spread over many runs (e.g. if only part of the data is collected at one time).
Debug logging
The logger also provides a custom logging handler for the standard Python logging module (via the get_logger() method), so that any debug messages can be stored in the DB and cross-referenced against experimental runs.
Simplest possible example
ExperimentLog allows the addition of metadata and structure, but doesn't mandate it. The simplest example would be something like
Step1: Using paths
A filesystem-like structure is provided to make it easy to separate data
Step2: A more complex example
Step3: Setting up the database
When a log is set up for the first time, the database needs to be configured for the experimental sessions.
Each sensor/information stream can be registered with the database. This could be individual sensors like a mouse (x,y) time series, or questionnaire results.
Step4: Sessions
ExperimentLog uses the concept of sessions to manage experimental data. Sessions are much like folders in a filesystem and usually form a hierarchy, for example
Step5: We'd usually only want to do this metadata creation once-ever; this setup procedure can be recorded by changing the database stage
Step6: Users
Each instance of a session (usually) involves experimental subjects. Each user should be registered, and then attached to a recording session. Multiple users can be attached to one session (e.g. for experiments with groups) but normally there will just be one user.
The pseudo module can generate pronounceable, random, verifiable pseudonyms for subjects.
Step7: Test how fast we can write into the database
Step8: Post-processing
Once all data is logged, it is wise to add indices so that logs can be accessed quickly.
Step10: SQL format
There are a few basic tables in the ExperimentLog
Step11: Sync points
If you are recording media alongside an experimental trial (e.g. a video), you can use sync_ext() to record the link between external media and the master log.
Usage | Python Code:
# import sqlexperiment as sqle
# from sqlexperiment import experimentlog
from explogger import ExperimentLog
# log some JSON data
e = ExperimentLog(":memory:", ntp_sync=False)
e.log("mouse", data={"x":0, "y":0})
e.log("mouse", data={"x":0, "y":1})
e.log("mouse", data={"x":0, "y":2})
e.close()
# from experimentlog import ExperimentLog
from explogger import extract
import logging
e = ExperimentLog(":memory:", ntp_sync=False)
## shows how to add the SQL logger
sql_handler = e.get_logger()
sql_handler.setLevel(logging.INFO)
log_formatter = logging.Formatter(fmt="%(asctime)s [%(levelname)-5.5s] %(message)s",
datefmt='%m-%d %H:%M')
sql_handler.setFormatter(log_formatter)
logging.getLogger().addHandler(sql_handler)
# use the logger
logging.info("Some information")
logging.info("Some more information")
# get the extracted logs as a line-separated single string
print("All logs")
print(extract.get_logs(e.cursor))
print("Just the log for this run")
print(extract.get_logs(e.cursor, run=e.run_id))
e.close()
Explanation: Experiment logger
The logger stores experimental data in a single SQLite database. It is intended to be fast and lightweight, but record all necessary meta data and timestamps for experimental trials.
As a consequence, SQLiteBrowser can be used to browse the log results without having to do any coding.
Most of the entries are stored as JSON strings in the database tables; any object that can be serialised by Python's json module can be added directly.
Structure
Log ExperimentLog has a single master log which records all logged data as JSON (with a timestamp) in a single series. The log is annotated with different streams that represent distinct sensors or inputs.
Session The log is indexed by sessions, where a session is a logical part of an experiment (a whole experiment, a condition, a repetition, etc.).
Metadata JSON Metadata about any log stream, session, run, user and the whole dataset can be recorded in the database, so there is a single, consistent record of everything to do with the experimental trials.
Binding
ExperimentLog uses the idea of binding metadata to sessions. So if you have a user who is doing an experiment, you can create a metadata entry for that user, and then bind it to the sessions that involve that user.
Session structures are hierarchical, and bindings apply to sessions and all of their children; so if a user is bound to an experiment, they are also bound to all the conditions, sub-conditions, repetitions, etc.
Runs
Run ExperimentLog also tracks runs of the experimental software. A run exists from the start of the experimental software until the process exits. Each session can be part of a single run, or a session can be spread over many runs (e.g. if only part of the data is collected at one time).
Debug logging
The logger also provides a custom logging handler for the standard Python logging module (via the get_logger() method), so that any debug messages can be stored in the DB and cross-referenced against experimental runs.
Simplest possible example
ExperimentLog allows the addition of metadata and structure, but doesn't mandate it. The simplest example would be something like:
End of explanation
e = ExperimentLog(":memory:", ntp_sync=False)
e.cd("/Experiment/Condition1")
e.log("mouse", data={"x":0, "y":0})
e.log("mouse", data={"x":0, "y":1})
e.log("mouse", data={"x":0, "y":2})
e.cd("/Experiment/Condition2")
e.log("mouse", data={"x":0, "y":0})
e.log("mouse", data={"x":0, "y":1})
e.log("mouse", data={"x":0, "y":2})
e.close()
import IPython.nbconvert
from IPython.core.display import HTML
def md_html(md):
return HTML(IPython.nbconvert.filters.markdown.markdown2html(md))
from explogger import report
md_html(report.string_report(e.cursor))
Explanation: Using paths
A filesystem-like structure is provided to make it easy to separate data:
End of explanation
# from experimentlog import ExperimentLog, np_to_str, str_to_np
import numpy as np
## open a connection to a database; will be created if it does not exist.
# here we use a memory database so the results are not stored to disk
e = ExperimentLog(":memory:", ntp_sync=False)
Explanation: A more complex example
End of explanation
# check if we've already set everything up
# note we use the special .meta field to access persistent metadata
if e.meta.stage=="init":
e.create("STREAM", name="mouse", description="A time series of x,y cursor positions",
# the data is optional, and can contain anything you want
data={
"sample_rate": 60,
"dpi": 3000,
"mouse_device":"Logitech MX600"})
# and a post-condition questionnaire
e.create("STREAM", name="satisfaction",
description="A simple satisfaction score",
# here, we store the questions used for future reference
data={
"questions":["How satisfied were you with your performance?",
"How satisfied were you with the interface?"]}
)
Explanation: Setting up the database
When a log is set up for the first time, the database needs to be configured for the experimental sessions.
Each sensor/information stream can be registered with the database. This could be individual sensors like a mouse (x,y) time series, or questionnaire results.
End of explanation
if e.meta.stage=="init":
# We'll register an experiment, with three different conditions
e.create("SESSION", "Experiment", description="The main experiment",
data={"target_size":40.0, "cursor_size":5.0})
e.create("SESSION","ConditionA",description="Condition A:circular targets",
data={"targets":["circle"]})
e.create("SESSION","ConditionB", description="Condition B:square targets",
data={"targets":["square"]})
e.create("SESSION","ConditionC", description="Condition C:mixed targets",
data={"targets":["circle","square"]})
Explanation: Sessions
ExperimentLog uses the concept of sessions to manage experimental data. Sessions are much like folders in a filesystem and usually form a hierarchy, for example:
/
Experiment1/
ConditionA/
0/
1/
2/
ConditionB/
0/
1/
2/
Experiment 2
ConditionA/
0/
1/
2/
3/
ConditionC/
0/
1/
2/
3/
Each session can have metadata attached to it; for example giving the parameters for a given condition.
When an experiment is run, instances of sessions are created, like files inside the filesystem.
End of explanation
# mark the database as ready to log data
# meta is a special field that looks like an object, but is actually backed
# onto the database. Any field can be read or written to, as long as the value
# can be dumped to JSON
e.meta.stage="setup"
Explanation: We'd usually only want to do this metadata creation once-ever; this setup procedure can be recorded by changing the database stage:
End of explanation
from explogger import pseudo
user = pseudo.get_pseudo()
print(user)
# now register the user with the database
e.create("USER", name=user, data={"age":30, "leftright":"right"})
# note that passing the session="" parameter automatically
# binds to that session prototype at the start of the session
e.enter("Experiment", session="Experiment")
# attach the user to this experiment, and thus to all conditions, etc.
e.bind("USER", user)
e.enter("ConditionA", session="ConditionA")
# calling enter() without any argument creates a numbered repetition (in this case, 0)
e.enter()
print(e.session_path)
print(e.bindings)
# log some data
e.log("mouse", data={"x":0, "y":10})
e.log("mouse", data={"x":0, "y":20})
Explanation: Users
Each instance of a session (usually) involves experimental subjects. Each user should be registered, and then attached to a recording session. Multiple users can be attached to one session (e.g. for experiments with groups) but normally there will just be one user.
The pseudo module can generate pronounceable, random, verifiable pseudonyms for subjects.
End of explanation
%%timeit -n 50000
e.log("mouse", data={"x":20, "y":20})
# log questionnaire output
e.log("satisfaction", data={"q1":4,"q2":5})
# leave this repetition
e.leave()
# move out of condition A
e.leave()
e.enter("ConditionB")
# could log more stuff...
from explogger import ExperimentLog, np_to_str#, str_to_np
x = np.random.uniform(-1,1,(16,16))
# if we need to attach binary data to a log file (e.g. an image), we can do this:
# in general, it is best to avoid using blobs unless absolutely necessary
i = e.log("important_matrix", binary=np_to_str({"matrix":(x)}))
# back to the root -- here we mark this session (ConditionB) as being invalid.
e.leave(valid=False)
e.leave()
# end the run; normally you would not need to do this, since
# e.close() does this automatically -- but here we keep the DB
# open to make it quicker to demo querying it
e.end()
# print some results with raw SQL queries
mouse_log = e.cursor.execute("SELECT time, json FROM mouse", ())
print("\n".join([str(m) for m in mouse_log.fetchone()]))
from explogger import report
import IPython.nbconvert
from IPython.core.display import HTML
def md_html(md):
return HTML(IPython.nbconvert.filters.markdown.markdown2html(md))
md_html(report.string_report(e.cursor))
Explanation: Test how fast we can write into the database:
End of explanation
# should only do this when all data is logged; otherwise there may be
# a performance penalty
e.add_indices()
Explanation: Post-processing
Once all data is logged, it is wise to add indices so that logs can be accessed quickly.
End of explanation
# make the new table -- must have a reference to the main
# log table
e.execute(CREATE TABLE accelerometer
(id INTEGER PRIMARY KEY, device INT, x REAL, y REAL, z REAL, log INT,
FOREIGN KEY(log) REFERENCES log(id))
)
# register a new stream
e.create("STREAM", name="acc", description="A time series of accelerometer values")
# now log a new value, put it into the separate accelerometer table and link
# it to the main log
def log_acc(dev,x,y,z):
log_id = e.log("acc")
e.execute("INSERT INTO accelerometer VALUES (?,?,?,?,?)",
(dev, x, y, z, log_id))
Explanation: SQL format
There are a few basic tables in the ExperimentLog:
Metadata
meta:
id, Unique ID
mtype, Type of this metadata: one of LOG, SESSION, USER, PATH
name, Name of the object, e.g. user pseudonym
type, (Optional) type tag
description, (Optional) text description
json (Optional) JSON string holding any other metadata.
The metadata for a log, session or user, path. mtype specifies the kind of metadata it is. There are convenience views of this table:
stream, mtype=STREAM
users, mtype=USER
session_meta, mtype=SESSION
equipment, mtype=EQUIPMENT
dataset, mtype=DATASET
path, mtype=PATH
All have the same fields as above.
Session
session:
id, Unique ID
start_time, Time this session was started
end_time, Time this session was completed (if it was)
test_run, If this is a test run or not
random_seed, Random seed used for this session can be stored here
valid, If this session was marked valid or not
complete, If this session was marked completed or not
parent, ID of the session this session is a subsession of
path, ID of the full path this session belongs to
json, Any additional metadata
run_session: (maps sessions to runs)
id, Unique ID
run, ID of the run
session, ID of the session
meta_session:
id, Unique ID,
meta, ID of the metadata
session, Session this is bound to
time, Time at which this metadata was bound
Logs
log:
id, Unique ID
time, Timestamp
valid, Valid flag for this data (e.g. to mark faulty sensor data)
stream, ID of the stream this log belongs to
session, ID of the session this log entry belongs to
json, The log entry itself
tag, (optional) tag for this log entry
binary, (optional) ID of the binary table entry
binary:
id, Unique ID
binary, Blob representation of binary values
Debug logs
debug_logging:
id, Unique row ID
time, Timestamp
record, String from the log formatter
run, ID of the run this log belongs to
level, The numeric code of the debugging level (see stdlib logging module docs for detail)
Custom tables
If you want to log values with a custom table where the fields are not just plain JSON, you can add a new table to the database and just attach it to the log fields. The log() function returns the ID of the new log entry; use this as a foreign key in the new log table.
Example:
End of explanation
e.meta.title="TestSet-1"
e.meta.institution="University of Glasgow"
e.meta.funder="ABC:XXX:101"
e.meta.ethics="CSEnnnn"
e.meta.authors="John Williamson"
e.meta.license="CC-BY-SA 2.0"
e.meta.confidential="no"
e.meta.paper="'A good paper', Williamson J., Proceedings of Things International 2016, pp.44-46"
e.meta.description="A study of the experimental logging process. Includes numerous repetitive examples of simple logged data."
e.meta.short_description="A quick logging test."
e.meta.doi= "DOI:xxxxxx"
print(dir(e.meta))
md_html(report.string_readme(e.cursor))
Explanation: Sync points
If you are recording media alongside an experimental trial (e.g. a video), you can use sync_ext() to record the link between external media and the master log.
Usage:
# look up a log entry with a sync point
t = e.execute('SELECT time FROM log WHERE tag="video_sync_mark"').fetchone()[0]
sync_ext("videos/myvideo.mp4", start_time=t)
If you want to record a segment of a video as being aligned:
t = e.execute('SELECT time FROM log WHERE tag="video_sync_mark"').fetchone()[0]
# marks a synchronisation of myvideo.mp4, from 20.0 -> 25.0 to the log time starting at t
sync_ext("videos/myvideo_002.mp4", start_time=t, duration=5.0, media_start_time=20.0)
NTP
To ensure logs are consistent in their timings, ExperimentLog will try and sync to an NTP server on start-up and will record all times with the estimated clock offset already applied.
If you pass ntp_sync=False to the ExperimentLog constructor, this will be skipped. Custom NTP servers can also be passed as a list:
# don't sync to NTP (not recommended)
e = ExperimentLog(ntp_sync=False)
# use custom NTP servers
e = ExperimentLog(ntp_servers=["1.pool.ntp.org", "2.pool.ntp.org"])
Whole-dataset metadata
Dataset-wide metadata can be set using the special .meta field of ExperimentLog, which is backed to the database. The report auto-generator can use this to build automatic readme files suitable for deposit for open access.
The following fields should be set:
title (title of this dataset)
institution (institution(s) this dataset was recorded by)
authors (comma separated list of authors)
license (e.g. CC-BY-SA 2.0)
confidential (e.g. No, InternalOnly, ConsortiumOnly, Confidential, StrictlyConfidential), etc.
funder (name of funder and project name/code)
ethics (ethics board approval number)
paper (full details of associated paper)
short_description (one sentence description of the data set)
description (longer description of the dataset)
doi (DOI of this dataset)
End of explanation |
6,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Кафедра дискретной математики МФТИ
Курс математической статистики
Никита Волков
На основе http
Step1: Можно преобразовать список в массив.
Step2: print печатает массивы в удобной форме.
Step3: Класс ndarray имеет много методов.
Step4: Наш массив одномерный.
Step5: В $n$-мерном случае возвращается кортеж размеров по каждой координате.
Step6: size - это полное число элементов в массиве; len - размер по первой координате (в 1-мерном случае это то же самое).
Step7: numpy предоставляет несколько типов для целых (int16, int32, int64) и чисел с плавающей точкой (float32, float64).
Step8: Индексировать массив можно обычным образом.
Step9: Массивы - изменяемые объекты.
Step10: Массивы, разумеется, можно использовать в for циклах. Но при этом теряется главное преимущество numpy - быстродействие. Всегда, когда это возможно, лучше использовать операции над массивами как едиными целыми.
Step11: Массив чисел с плавающей точкой.
Step12: Точно такой же массив.
Step13: Преобразование данных
Step14: Массив, значения которого вычисляются функцией. Функции передаётся массив. Так что в ней можно использовать только такие операции, которые применимы к массивам.
Step15: Массивы, заполненные нулями или единицами. Часто лучше сначала создать такой массив, а потом присваивать значения его элементам.
Step16: Если нужно создать массив, заполненный нулями, длины другого массива, то можно использовать конструкцию
Step17: Функция arange подобна range. Аргументы могут быть с плавающей точкой. Следует избегать ситуаций, когда $(конец-начало)/шаг$ - целое число, потому что в этом случае включение последнего элемента зависит от ошибок округления. Лучше, чтобы конец диапазона был где-то посредине шага.
Step18: Последовательности чисел с постоянным шагом можно также создавать функцией linspace. Начало и конец диапазона включаются; последний аргумент - число точек.
Step19: Последовательность чисел с постоянным шагом по логарифмической шкале от $10^0$ до $10^1$.
Step20: Массив случайных чисел.
Step21: Случайные числа с нормальным (гауссовым) распределением (среднее 0, среднеквадратичное отклонение 1).
Step22: Операции над одномерными массивами
Арифметические операции проводятся поэлементно.
Step23: Когда операнды разных типов, они пиводятся к большему типу.
Step24: numpy содержит элементарные функции, которые тоже применяются к массивам поэлементно. Они называются универсальными функциями (ufunc).
Step25: Один из операндов может быть скаляром, а не массивом.
Step26: Сравнения дают булевы массивы.
Step27: Кванторы "существует" и "для всех".
Step28: Модификация на месте.
Step29: При выполнении операций над массивами деление на 0 не возбуждает исключения, а даёт значения np.nan или np.inf.
Step30: Сумма и произведение всех элементов массива; максимальный и минимальный элемент; среднее и среднеквадратичное отклонение.
Step31: Имеются встроенные функции
Step32: Иногда бывает нужно использовать частичные (кумулятивные) суммы. В нашем курсе такое пригодится.
Step33: Функция sort возвращает отсортированную копию, метод sort сортирует на месте.
Step34: Объединение массивов.
Step35: Расщепление массива в позициях 3 и 6.
Step36: Функции delete, insert и append не меняют массив на месте, а возвращают новый массив, в котором удалены, вставлены в середину или добавлены в конец какие-то элементы.
Step37: Есть несколько способов индексации массива. Вот обычный индекс.
Step38: Диапазон индексов. Создаётся новый заголовок массива, указывающий на те же данные. Изменения, сделанные через такой массив, видны и в исходном массиве.
Step39: Диапазон с шагом 2.
Step40: Массив в обратном порядке.
Step41: Подмассиву можно присвоить значение - массив правильного размера или скаляр.
Step42: Тут опять создаётся только новый заголовок, указывающий на те же данные.
Step43: Чтобы скопировать и данные массива, нужно использовать метод copy.
Step44: Можно задать список индексов.
Step45: Можно задать булев массив той же величины.
Step46: 2-мерные массивы
Step47: Атрибуту shape можно присвоить новое значение - кортеж размеров по всем координатам. Получится новый заголовок массива; его данные не изменятся.
Step48: Можно растянуть в одномерный массив
Step49: Арифметические операции поэлементные
Step50: Поэлементное и матричное (только в Python 3.5) умножение.
Step51: Умножение матрицы на вектор.
Step52: Если у вас Питон более ранней версии, то для работы с матрицами можно использовать класс np.matrix, в котором операция умножения реализуется как матричное умножение.
Step53: Внешнее произведение $a_{ij}=u_i v_j$
Step54: Двумерные массивы, зависящие только от одного индекса
Step55: Единичная матрица.
Step56: Метод reshape делает то же самое, что присваивание атрибуту shape.
Step57: Строка.
Step58: Цикл по строкам.
Step59: Столбец.
Step60: Подматрица.
Step61: Можно построить двумерный массив из функции.
Step62: Транспонированная матрица.
Step63: Соединение матриц по горизонтали и по вертикали.
Step64: Сумма всех элементов; суммы столбцов; суммы строк.
Step65: Аналогично работают prod, max, min и т.д.
Step66: След - сумма диагональных элементов.
Step67: Многомерные массивы
Step68: Суммирование (аналогично остальные операции)
Step69: Линейная алгебра
Step70: Обратная матрица.
Step71: Решение линейной системы $au=v$.
Step72: Проверим.
Step73: Собственные значения и собственные векторы
Step74: Проверим.
Step75: Функция diag от одномерного массива строит диагональную матрицу; от квадратной матрицы - возвращает одномерный массив её диагональных элементов.
Step76: Все уравнения $a u_i = \lambda_i u_i$ можно собрать в одно матричное уравнение $a u = u \Lambda$, где $\Lambda$ - диагональная матрица с собственными значениями $\lambda_i$ по диагонали.
Step77: Поэтому $u^{-1} a u = \Lambda$.
Step78: Найдём теперь левые собственные векторы $v_i a = \lambda_i v_i$ (собственные значения $\lambda_i$ те же самые).
Step79: Собственные векторы нормированы на 1.
Step80: Левые и правые собственные векторы, соответствующие разным собственным значениям, ортогональны, потому что $v_i a u_j = \lambda_i v_i u_j = \lambda_j v_i u_j$.
Step81: Интегрирование
Step82: Адаптивное численное интегрирование (может быть до бесконечности). err - оценка ошибки.
Step83: Сохранение в файл и чтение из файла
Step84: Получится такой файл
Step85: Теперь его можно прочитать
Step86: Библиотека scipy (модуль scipy.stats)
Нам пригодится только модуль scipy.stats.
Полное описание http
Step87: <b>Общий принцип
Step88: Cгенерируем выборку размера $N = 200$ из распределения $Bin(10, 0.6)$ и посчитаем некоторые статистики.
В терминах выше описанных функций у нас $X$ = sps.binom, а params = (n=10, p=0.6).
Step89: Отдельно есть класс для <b>многомерного нормального распределения</b>.
Для примера сгенерируем выборку размера $N=200$ из распределения $\mathscr{N} \left( \begin{pmatrix} 1 \ 1 \end{pmatrix}, \begin{pmatrix} 2 & 1 \ 1 & 2 \end{pmatrix} \right)$.
Step90: Некоторая хитрость
Step91: Бывает так, что <b>надо сгенерировать выборку из распределения, которого нет в scipy.stats</b>.
Для этого надо создать класс, который будет наследоваться от класса rv_continuous для непрерывных случайных величин и от класса rv_discrete для дискретных случайных величин.
Пример есть на странице http
Step92: Если дискретная случайная величина может принимать небольшое число значений, то можно не создавать новый класс, как показано выше, а явно указать эти значения и из вероятности. | Python Code:
import numpy as np
Explanation: Кафедра дискретной математики МФТИ
Курс математической статистики
Никита Волков
На основе http://www.inp.nsk.su/~grozin/python/
Библиотека numpy
Пакет numpy предоставляет $n$-мерные однородные массивы (все элементы одного типа); в них нельзя вставить или удалить элемент в произвольном месте. В numpy реализовано много операций над массивами в целом. Если задачу можно решить, произведя некоторую последовательность операций над массивами, то это будет столь же эффективно, как в C или matlab - львиная доля времени тратится в библиотечных функциях, написанных на C.
Одномерные массивы
End of explanation
a = np.array([0, 2, 1])
a, type(a)
Explanation: Можно преобразовать список в массив.
End of explanation
print(a)
Explanation: print печатает массивы в удобной форме.
End of explanation
set(dir(a)) - set(dir(object))
Explanation: Класс ndarray имеет много методов.
End of explanation
a.ndim
Explanation: Наш массив одномерный.
End of explanation
a.shape
Explanation: В $n$-мерном случае возвращается кортеж размеров по каждой координате.
End of explanation
len(a), a.size
Explanation: size - это полное число элементов в массиве; len - размер по первой координате (в 1-мерном случае это то же самое).
End of explanation
a.dtype, a.dtype.name, a.itemsize
Explanation: numpy предоставляет несколько типов для целых (int16, int32, int64) и чисел с плавающей точкой (float32, float64).
End of explanation
a[1]
Explanation: Индексировать массив можно обычным образом.
End of explanation
a[1] = 3
print(a)
Explanation: Массивы - изменяемые объекты.
End of explanation
for i in a:
print(i)
Explanation: Массивы, разумеется, можно использовать в for циклах. Но при этом теряется главное преимущество numpy - быстродействие. Всегда, когда это возможно, лучше использовать операции над массивами как едиными целыми.
End of explanation
b = np.array([0., 2, 1])
b.dtype
Explanation: Массив чисел с плавающей точкой.
End of explanation
c = np.array([0, 2, 1], dtype=np.float64)
print(c)
Explanation: Точно такой же массив.
End of explanation
print(c.dtype)
print(c.astype(int))
print(c.astype(str))
Explanation: Преобразование данных
End of explanation
def f(i):
print(i)
return i ** 2
a = np.fromfunction(f, (5,), dtype=np.int64)
print(a)
a = np.fromfunction(f, (5,), dtype=np.float64)
print(a)
Explanation: Массив, значения которого вычисляются функцией. Функции передаётся массив. Так что в ней можно использовать только такие операции, которые применимы к массивам.
End of explanation
a = np.zeros(3)
print(a)
b = np.ones(3, dtype=np.int64)
print(b)
Explanation: Массивы, заполненные нулями или единицами. Часто лучше сначала создать такой массив, а потом присваивать значения его элементам.
End of explanation
np.zeros_like(b)
Explanation: Если нужно создать массив, заполненный нулями, длины другого массива, то можно использовать конструкцию
End of explanation
a = np.arange(0, 9, 2)
print(a)
b = np.arange(0., 9, 2)
print(b)
Explanation: Функция arange подобна range. Аргументы могут быть с плавающей точкой. Следует избегать ситуаций, когда $(конец-начало)/шаг$ - целое число, потому что в этом случае включение последнего элемента зависит от ошибок округления. Лучше, чтобы конец диапазона был где-то посредине шага.
End of explanation
a = np.linspace(0, 8, 5)
print(a)
Explanation: Последовательности чисел с постоянным шагом можно также создавать функцией linspace. Начало и конец диапазона включаются; последний аргумент - число точек.
End of explanation
b = np.logspace(0, 1, 5)
print(b)
Explanation: Последовательность чисел с постоянным шагом по логарифмической шкале от $10^0$ до $10^1$.
End of explanation
print(np.random.random(5))
Explanation: Массив случайных чисел.
End of explanation
print(np.random.normal(size=5))
Explanation: Случайные числа с нормальным (гауссовым) распределением (среднее 0, среднеквадратичное отклонение 1).
End of explanation
print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a ** 2)
Explanation: Операции над одномерными массивами
Арифметические операции проводятся поэлементно.
End of explanation
i = np.ones(5, dtype=np.int64)
print(a + i)
Explanation: Когда операнды разных типов, они пиводятся к большему типу.
End of explanation
np.sin, type(np.sin)
print(np.sin(a))
Explanation: numpy содержит элементарные функции, которые тоже применяются к массивам поэлементно. Они называются универсальными функциями (ufunc).
End of explanation
print(a + 1)
print(2 * a)
Explanation: Один из операндов может быть скаляром, а не массивом.
End of explanation
print(a > b)
print(a == b)
c = a > 5
print(c)
Explanation: Сравнения дают булевы массивы.
End of explanation
np.any(c), np.all(c)
Explanation: Кванторы "существует" и "для всех".
End of explanation
a += 1
print(a)
b *= 2
print(b)
b /= a
print(b)
Explanation: Модификация на месте.
End of explanation
print(np.array([0.0, 0.0, 1.0, -1.0]) / np.array([1.0, 0.0, 0.0, 0.0]))
np.nan + 1, np.inf + 1, np.inf * 0, 1. / np.inf
Explanation: При выполнении операций над массивами деление на 0 не возбуждает исключения, а даёт значения np.nan или np.inf.
End of explanation
b.sum(), b.prod(), b.max(), b.min(), b.mean(), b.std()
x = np.random.normal(size=1000)
x.mean(), x.std()
Explanation: Сумма и произведение всех элементов массива; максимальный и минимальный элемент; среднее и среднеквадратичное отклонение.
End of explanation
print(np.sqrt(b))
print(np.exp(b))
print(np.log(b))
print(np.sin(b))
print(np.e, np.pi)
Explanation: Имеются встроенные функции
End of explanation
print(b.cumsum())
Explanation: Иногда бывает нужно использовать частичные (кумулятивные) суммы. В нашем курсе такое пригодится.
End of explanation
print(np.sort(b))
print(b)
b.sort()
print(b)
Explanation: Функция sort возвращает отсортированную копию, метод sort сортирует на месте.
End of explanation
a = np.hstack((a, b))
print(a)
Explanation: Объединение массивов.
End of explanation
np.hsplit(a, [3, 6])
Explanation: Расщепление массива в позициях 3 и 6.
End of explanation
a = np.delete(a, [5, 7])
print(a)
a = np.insert(a, 2, [0, 0])
print(a)
a = np.append(a, [1, 2, 3])
print(a)
Explanation: Функции delete, insert и append не меняют массив на месте, а возвращают новый массив, в котором удалены, вставлены в середину или добавлены в конец какие-то элементы.
End of explanation
a = np.linspace(0, 1, 11)
print(a)
b = a[2]
print(b)
Explanation: Есть несколько способов индексации массива. Вот обычный индекс.
End of explanation
b = a[2:6]
print(b)
b[0] = -0.2
print(b)
print(a)
Explanation: Диапазон индексов. Создаётся новый заголовок массива, указывающий на те же данные. Изменения, сделанные через такой массив, видны и в исходном массиве.
End of explanation
b = a[1:10:2]
print(b)
b[0] = -0.1
print(a)
Explanation: Диапазон с шагом 2.
End of explanation
b = a[len(a):0:-1]
print(b)
Explanation: Массив в обратном порядке.
End of explanation
a[1:10:3] = 0
print(a)
Explanation: Подмассиву можно присвоить значение - массив правильного размера или скаляр.
End of explanation
b = a[:]
b[1] = 0.1
print(a)
Explanation: Тут опять создаётся только новый заголовок, указывающий на те же данные.
End of explanation
b = a.copy()
b[2] = 0
print(b)
print(a)
Explanation: Чтобы скопировать и данные массива, нужно использовать метод copy.
End of explanation
print(a[[2, 3, 5]])
Explanation: Можно задать список индексов.
End of explanation
b = a > 0
print(b)
print(a[b])
Explanation: Можно задать булев массив той же величины.
End of explanation
a = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(a)
a.ndim
a.shape
len(a), a.size
a[1, 0]
Explanation: 2-мерные массивы
End of explanation
b = np.linspace(0, 3, 4)
print(b)
b.shape
b.shape = 2, 2
print(b)
Explanation: Атрибуту shape можно присвоить новое значение - кортеж размеров по всем координатам. Получится новый заголовок массива; его данные не изменятся.
End of explanation
print(b.ravel())
Explanation: Можно растянуть в одномерный массив
End of explanation
print(a + 1)
print(a * 2)
print(a + [0, 1]) # второе слагаемое дополняется до матрицы копированием строк
print(a + np.array([[0, 2]]).T) # .T - транспонирование
print(a + b)
Explanation: Арифметические операции поэлементные
End of explanation
print(a * b)
print(a @ b)
print(b @ a)
Explanation: Поэлементное и матричное (только в Python 3.5) умножение.
End of explanation
v = np.array([1, -1], dtype=np.float64)
print(b @ v)
print(v @ b)
Explanation: Умножение матрицы на вектор.
End of explanation
np.matrix(a) * np.matrix(b)
Explanation: Если у вас Питон более ранней версии, то для работы с матрицами можно использовать класс np.matrix, в котором операция умножения реализуется как матричное умножение.
End of explanation
u = np.linspace(1, 2, 2)
v = np.linspace(2, 4, 3)
print(u)
print(v)
a = np.outer(u, v)
print(a)
Explanation: Внешнее произведение $a_{ij}=u_i v_j$
End of explanation
x, y = np.meshgrid(u, v)
print(x)
print(y)
Explanation: Двумерные массивы, зависящие только от одного индекса: $x_{ij}=u_j$, $y_{ij}=v_i$
End of explanation
I = np.eye(4)
print(I)
Explanation: Единичная матрица.
End of explanation
print(I.reshape(16))
print(I.reshape(2, 8))
Explanation: Метод reshape делает то же самое, что присваивание атрибуту shape.
End of explanation
print(I[1])
Explanation: Строка.
End of explanation
for row in I:
print(row)
Explanation: Цикл по строкам.
End of explanation
print(I[:, 2])
Explanation: Столбец.
End of explanation
print(I[0:2, 1:3])
Explanation: Подматрица.
End of explanation
def f(i, j):
print(i)
print(j)
return 10 * i + j
print(np.fromfunction(f, (4, 4), dtype=np.int64))
Explanation: Можно построить двумерный массив из функции.
End of explanation
print(b.T)
Explanation: Транспонированная матрица.
End of explanation
a = np.array([[0, 1], [2, 3]])
b = np.array([[4, 5, 6], [7, 8, 9]])
c = np.array([[4, 5], [6, 7], [8, 9]])
print(a)
print(b)
print(c)
print(np.hstack((a, b)))
print(np.vstack((a, c)))
Explanation: Соединение матриц по горизонтали и по вертикали.
End of explanation
print(b.sum())
print(b.sum(axis=0))
print(b.sum(axis=1))
Explanation: Сумма всех элементов; суммы столбцов; суммы строк.
End of explanation
print(b.max())
print(b.max(axis=0))
print(b.min(axis=1))
Explanation: Аналогично работают prod, max, min и т.д.
End of explanation
np.trace(a)
Explanation: След - сумма диагональных элементов.
End of explanation
X = np.arange(24).reshape(2, 3, 4)
print(X)
Explanation: Многомерные массивы
End of explanation
# суммируем только по нулевой оси, то есть для фиксированных j и k суммируем только элементы с индексами (*, j, k)
print(X.sum(axis=0))
# суммируем сразу по двум осям, то есть для фиксированной i суммируем только элементы с индексами (i, *, *)
print(X.sum(axis=(1, 2)))
Explanation: Суммирование (аналогично остальные операции)
End of explanation
np.linalg.det(a)
Explanation: Линейная алгебра
End of explanation
a1 = np.linalg.inv(a)
print(a1)
print(a @ a1)
print(a1 @ a)
Explanation: Обратная матрица.
End of explanation
v = np.array([0, 1], dtype=np.float64)
print(a1 @ v)
u = np.linalg.solve(a, v)
print(u)
Explanation: Решение линейной системы $au=v$.
End of explanation
print(a @ u - v)
Explanation: Проверим.
End of explanation
l, u = np.linalg.eig(a)
print(l)
print(u)
Explanation: Собственные значения и собственные векторы: $a u_i = \lambda_i u_i$. l - одномерный массив собственных значений $\lambda_i$, столбцы матрицы $u$ - собственные векторы $u_i$.
End of explanation
for i in range(2):
print(a @ u[:, i] - l[i] * u[:, i])
Explanation: Проверим.
End of explanation
L = np.diag(l)
print(L)
print(np.diag(L))
Explanation: Функция diag от одномерного массива строит диагональную матрицу; от квадратной матрицы - возвращает одномерный массив её диагональных элементов.
End of explanation
print(a @ u - u @ L)
Explanation: Все уравнения $a u_i = \lambda_i u_i$ можно собрать в одно матричное уравнение $a u = u \Lambda$, где $\Lambda$ - диагональная матрица с собственными значениями $\lambda_i$ по диагонали.
End of explanation
print(np.linalg.inv(u) @ a @ u)
Explanation: Поэтому $u^{-1} a u = \Lambda$.
End of explanation
l, v = np.linalg.eig(a.T)
print(l)
print(v)
Explanation: Найдём теперь левые собственные векторы $v_i a = \lambda_i v_i$ (собственные значения $\lambda_i$ те же самые).
End of explanation
print(u.T @ u)
print(v.T @ v)
Explanation: Собственные векторы нормированы на 1.
End of explanation
print(v.T @ u)
Explanation: Левые и правые собственные векторы, соответствующие разным собственным значениям, ортогональны, потому что $v_i a u_j = \lambda_i v_i u_j = \lambda_j v_i u_j$.
End of explanation
from scipy.integrate import quad, odeint
from scipy.special import erf
def f(x):
return np.exp(-x ** 2)
Explanation: Интегрирование
End of explanation
res, err = quad(f, 0, np.inf)
print(np.sqrt(np.pi) / 2, res, err)
res, err = quad(f, 0, 1)
print(np.sqrt(np.pi) / 2 * erf(1), res, err)
Explanation: Адаптивное численное интегрирование (может быть до бесконечности). err - оценка ошибки.
End of explanation
x = np.arange(0, 25, 0.5).reshape((5, 10))
# Сохраняем в файл example.txt данные x в формате с двумя точками после запятой и разделителем ';'
np.savetxt('example.txt', x, fmt='%.2f', delimiter=';')
Explanation: Сохранение в файл и чтение из файла
End of explanation
! cat example.txt
Explanation: Получится такой файл
End of explanation
x = np.loadtxt('example.txt', delimiter=';')
print(x)
Explanation: Теперь его можно прочитать
End of explanation
import scipy.stats as sps
Explanation: Библиотека scipy (модуль scipy.stats)
Нам пригодится только модуль scipy.stats.
Полное описание http://docs.scipy.org/doc/scipy/reference/stats.html
End of explanation
sample = sps.norm.rvs(size=200, loc=1, scale=3)
print('Первые 10 значений выборки:\n', sample[:10])
print('Выборочное среденее: %.3f' % sample.mean())
print('Выборочная дисперсия: %.3f' % sample.var())
print('Плотность:\t\t', sps.norm.pdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Функция распределения:\t', sps.norm.cdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Квантили:', sps.norm.ppf([0.05, 0.1, 0.5, 0.9, 0.95], loc=1, scale=3))
Explanation: <b>Общий принцип:</b>
$X$ — некоторое распределение с параметрами params
<ul>
<li>`X.rvs(size=N, params)` — генерация выборки размера $N$ (<b>R</b>andom <b>V</b>ariate<b>S</b>). Возвращает `numpy.array`</li>
<li>`X.cdf(x, params)` — значение функции распределения в точке $x$ (<b>C</b>umulative <b>D</b>istribution <b>F</b>unction)</li>
<li>`X.logcdf(x, params)` — значение логарифма функции распределения в точке $x$</li>
<li>`X.ppf(q, params)` — $q$-квантиль (<b>P</b>ercent <b>P</b>oint <b>F</b>unction)</li>
<li>`X.mean(params)` — математическое ожидание</li>
<li>`X.median(params)` — медиана</li>
<li>`X.var(params)` — дисперсия (<b>Var</b>iance)</li>
<li>`X.std(params)` — стандартное отклонение = корень из дисперсии (<b>St</b>andard <b>D</b>eviation)</li>
</ul>
Кроме того для непрерывных распределений определены функции
<ul>
<li>`X.pdf(x, params)` — значение плотности в точке $x$ (<b>P</b>robability <b>D</b>ensity <b>F</b>unction)</li>
<li>`X.logpdf(x, params)` — значение логарифма плотности в точке $x$</li>
</ul>
А для дискретных
<ul>
<li>`X.pmf(k, params)` — значение дискретной плотности в точке $k$ (<b>P</b>robability <b>M</b>ass <b>F</b>unction)</li>
<li>`X.logpdf(k, params)` — значение логарифма дискретной плотности в точке $k$</li>
</ul>
Параметры могут быть следующими:
<ul>
<li>`loc` — параметр сдвига</li>
<li>`scale` — параметр масштаба</li>
<li>и другие параметры (например, $n$ и $p$ для биномиального)</li>
</ul>
Для примера сгенерируем выборку размера $N = 200$ из распределения $\mathscr{N}(1, 9)$ и посчитаем некоторые статистики.
В терминах выше описанных функций у нас $X$ = sps.norm, а params = (loc=1, scale=3).
End of explanation
sample = sps.binom.rvs(size=200, n=10, p=0.6)
print('Первые 10 значений выборки:\n', sample[:10])
print('Выборочное среденее: %.3f' % sample.mean())
print('Выборочная дисперсия: %.3f' % sample.var())
print('Дискретная плотность:\t', sps.binom.pmf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Функция распределения:\t', sps.binom.cdf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Квантили:', sps.binom.ppf([0.05, 0.1, 0.5, 0.9, 0.95], n=10, p=0.6))
Explanation: Cгенерируем выборку размера $N = 200$ из распределения $Bin(10, 0.6)$ и посчитаем некоторые статистики.
В терминах выше описанных функций у нас $X$ = sps.binom, а params = (n=10, p=0.6).
End of explanation
sample = sps.multivariate_normal.rvs(mean=[1, 1], cov=[[2, 1], [1, 2]], size=200)
print('Первые 10 значений выборки:\n', sample[:10])
print('Выборочное среденее:', sample.mean(axis=0))
print('Выборочная матрица ковариаций:\n', np.cov(sample.T))
Explanation: Отдельно есть класс для <b>многомерного нормального распределения</b>.
Для примера сгенерируем выборку размера $N=200$ из распределения $\mathscr{N} \left( \begin{pmatrix} 1 \ 1 \end{pmatrix}, \begin{pmatrix} 2 & 1 \ 1 & 2 \end{pmatrix} \right)$.
End of explanation
sample = sps.norm.rvs(size=10, loc=np.arange(10), scale=0.1)
print(sample)
Explanation: Некоторая хитрость :)
End of explanation
class cubic_gen(sps.rv_continuous):
def _pdf(self, x):
return 4 * x ** 3 / 15
cubic = cubic_gen(a=1, b=2, name='cubic')
sample = cubic.rvs(size=200)
print('Первые 10 значений выборки:\n', sample[:10])
print('Выборочное среденее: %.3f' % sample.mean())
print('Выборочная дисперсия: %.3f' % sample.var())
Explanation: Бывает так, что <b>надо сгенерировать выборку из распределения, которого нет в scipy.stats</b>.
Для этого надо создать класс, который будет наследоваться от класса rv_continuous для непрерывных случайных величин и от класса rv_discrete для дискретных случайных величин.
Пример есть на странице http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous
Для примера сгенерируем выборку из распределения с плотностью $f(x) = \frac{4}{15} x^3 I{x \in [1, 2] = [a, b]}$.
End of explanation
some_distribution = sps.rv_discrete(name='some_distribution', values=([1, 2, 3], [0.6, 0.1, 0.3]))
sample = some_distribution.rvs(size=200)
print('Первые 10 значений выборки:\n', sample[:10])
print('Выборочное среденее: %.3f' % sample.mean())
print('Частота значений по выборке:', (sample == 1).mean(), (sample == 2).mean(), (sample == 3).mean())
Explanation: Если дискретная случайная величина может принимать небольшое число значений, то можно не создавать новый класс, как показано выше, а явно указать эти значения и из вероятности.
End of explanation |
6,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
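In outline (a hedged sketch rather than a full solution), the training loop needs a boolean placeholder that is passed into the layer-building functions above and fed explicitly at every sess.run call:
# A sketch of the pieces train() would need (not a complete rewrite):
is_training = tf.placeholder(tf.bool)
# ...build the layers, passing is_training into conv_layer and fully_connected...
# During training, feed True so batch statistics are used and population statistics get updated:
#     sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# During validation/testing, feed False so the stored population statistics are used:
#     sess.run([model_loss, accuracy], {inputs: ..., labels: ..., is_training: False})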
6,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
==============================================
Read and visualize projections (SSP and other)
==============================================
This example shows how to read and visualize Signal Subspace Projectors (SSP)
vector. Such projections are sometimes referred to as PCA projections.
Step1: Load the FIF file and display the projections present in the file. Here the
projections are added to the file during the acquisition and are obtained
from empty room recordings.
Step2: Display the projections one by one
Step3: Use the function in mne.viz to display a list of projections
Step4: .. TODO
Step5: Displaying the projections from a raw object requires no extra information
since all the layout information is present in raw.info.
MNE is able to automatically determine the layout for some magnetometer and
gradiometer configurations but not the layout of EEG electrodes.
Here we display the ecg_projs individually and we provide extra parameters
for EEG. (Notice that planar projection refers to the gradiometers and axial
refers to magnetometers.)
Notice that the conditional is just for illustration purposes. We could
use raw.info in all cases to avoid the guesswork in plot_topomap and ensure
that the right layout is always found
Step6: The correct layout or a list of layouts from where to choose can also be
provided. Just for illustration purposes, here we generate the
possible_layouts from the raw object itself, but it can come from somewhere
else. | Python Code:
# Author: Joan Massich <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import read_proj
from mne.io import read_raw_fif
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
Explanation: ==============================================
Read and visualize projections (SSP and other)
==============================================
This example shows how to read and visualize Signal Subspace Projectors (SSP)
vector. Such projections are sometimes referred to as PCA projections.
End of explanation
raw = read_raw_fif(fname)
empty_room_proj = raw.info['projs']
# Display the projections stored in `info['projs']` from the raw object
raw.plot_projs_topomap()
Explanation: Load the FIF file and display the projections present in the file. Here the
projections are added to the file during the acquisition and are obtained
from empty room recordings.
End of explanation
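Each projector also behaves like a dictionary, so before plotting you can inspect, for example, its description and how many channels its projection vector spans (a small optional aside, not part of the original example):
for proj in empty_room_proj:
    # 'desc' holds the projector name, 'data' the projection vector and its channel names
    print('%s: %d channels' % (proj['desc'], len(proj['data']['col_names'])))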
fig, axes = plt.subplots(1, len(empty_room_proj))
for proj, ax in zip(empty_room_proj, axes):
proj.plot_topomap(axes=ax)
Explanation: Display the projections one by one
End of explanation
assert isinstance(empty_room_proj, list)
mne.viz.plot_projs_topomap(empty_room_proj)
Explanation: Use the function in mne.viz to display a list of projections
End of explanation
# read the projections
ecg_projs = read_proj(ecg_fname)
# add them to raw and plot everything
raw.add_proj(ecg_projs)
raw.plot_projs_topomap()
Explanation: .. TODO: add this when the tutorial is up: "As shown in the tutorial
:doc:../auto_tutorials/preprocessing/plot_projectors, ..."
The ECG projections can be loaded from a file and added to the raw object
End of explanation
fig, axes = plt.subplots(1, len(ecg_projs))
for proj, ax in zip(ecg_projs, axes):
if proj['desc'].startswith('ECG-eeg'):
proj.plot_topomap(axes=ax, info=raw.info)
else:
proj.plot_topomap(axes=ax)
Explanation: Displaying the projections from a raw object requires no extra information
since all the layout information is present in raw.info.
MNE is able to automatically determine the layout for some magnetometer and
gradiometer configurations but not the layout of EEG electrodes.
Here we display the ecg_projs individually and we provide extra parameters
for EEG. (Notice that planar projection refers to the gradiometers and axial
refers to magnetometers.)
Notice that the conditional is just for illustration purposes. We could
use raw.info in all cases to avoid the guesswork in plot_topomap and ensure
that the right layout is always found
End of explanation
possible_layouts = [mne.find_layout(raw.info, ch_type=ch_type)
for ch_type in ('grad', 'mag', 'eeg')]
mne.viz.plot_projs_topomap(ecg_projs, layout=possible_layouts)
Explanation: The correct layout or a list of layouts from where to choose can also be
provided. Just for illustration purposes, here we generate the
possible_layouts from the raw object itself, but it can come from somewhere
else.
End of explanation |
6,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in
Step1: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a
Step2: It may not be obvious when viewing this tutorial online, but by default, the
Step3: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of
Step4: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
Step5: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
Step6: Plotting sensor locations from Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The channel locations in a
Step7: Plotting projectors from Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As seen in the output of | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in :class:~mne.io.Raw objects.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the :class:~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
raw.plot()
Explanation: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a :class:~mne.io.Raw object using :doc:matplotlib
<matplotlib:index>, but :class:~mne.io.Raw objects also have several
built-in plotting methods:
:meth:~mne.io.Raw.plot
:meth:~mne.io.Raw.plot_psd
:meth:~mne.io.Raw.plot_psd_topo
:meth:~mne.io.Raw.plot_sensors
:meth:~mne.io.Raw.plot_projs_topomap
The first three are discussed here in detail; the last two are shown briefly
and covered in-depth in other tutorials.
Interactive data browsing with Raw.plot()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :meth:~mne.io.Raw.plot method of :class:~mne.io.Raw objects provides
a versatile interface for exploring continuous data. For interactive viewing
and data quality checking, it can be called with no additional parameters:
End of explanation
raw.plot_psd(average=True)
Explanation: It may not be obvious when viewing this tutorial online, but by default, the
:meth:~mne.io.Raw.plot method generates an interactive plot window with
several useful features:
It spaces the channels equally along the y-axis.
20 channels are shown by default; you can scroll through the channels
using the :kbd:↑ and :kbd:↓ arrow keys, or by clicking on the
colored scroll bar on the right edge of the plot.
The number of visible channels can be adjusted by the n_channels
parameter, or changed interactively using :kbd:page up and :kbd:page
down keys.
You can toggle the display to "butterfly" mode (superimposing all
channels of the same type on top of one another) by pressing :kbd:b,
or start in butterfly mode by passing the butterfly=True parameter.
It shows the first 10 seconds of the :class:~mne.io.Raw object.
You can shorten or lengthen the window length using :kbd:home and
:kbd:end keys, or start with a specific window duration by passing the
duration parameter.
You can scroll in the time domain using the :kbd:← and
:kbd:→ arrow keys, or start at a specific point by passing the
start parameter. Scrolling using :kbd:shift:kbd:→ or
:kbd:shift:kbd:← scrolls a full window width at a time.
It allows clicking on channels to mark/unmark as "bad".
When the plot window is closed, the :class:~mne.io.Raw object's
info attribute will be updated, adding or removing the newly
(un)marked channels to/from the :class:~mne.Info object's bads
field (AKA raw.info['bads']).
.. TODO: discuss annotation snapping in the below bullets
It allows interactive :term:annotation <annotations> of the raw data.
This allows you to mark time spans that should be excluded from future
computations due to large movement artifacts, line noise, or other
distortions of the signal. Annotation mode is entered by pressing
:kbd:a. See annotations-tutorial for details.
It automatically applies any :term:projectors <projector> before plotting
the data.
These can be enabled/disabled interactively by clicking the Proj
button at the lower right corner of the plot window, or disabled by
default by passing the proj=False parameter. See
tut-projectors-background for more info on projectors.
These and other keyboard shortcuts are listed in the Help window, accessed
through the Help button at the lower left corner of the plot window.
Other plot properties (such as color of the channel traces, channel order and
grouping, simultaneous plotting of :term:events, scaling, clipping,
filtering, etc.) can also be adjusted through parameters passed to the
:meth:~mne.io.Raw.plot method; see the docstring for details.
Plotting spectral density of continuous data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To visualize the frequency content of continuous data, the
:class:~mne.io.Raw object provides a :meth:~mne.io.Raw.plot_psd method to plot
the spectral density of the data.
End of explanation
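As a quick illustration of the raw.plot parameters described above (the values here are arbitrary), a non-default call might look like:
raw.plot(start=10, duration=20, n_channels=10, proj=False)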
midline = ['EEG 002', 'EEG 012', 'EEG 030', 'EEG 048', 'EEG 058', 'EEG 060']
raw.plot_psd(picks=midline)
Explanation: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of :meth:~mne.io.Raw.plot_psd for full details):
End of explanation
raw.plot_psd_topo()
Explanation: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
:meth:~mne.io.Raw.plot_psd_topo:
End of explanation
raw.copy().pick_types(meg=False, eeg=True).plot_psd_topo()
Explanation: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
:meth:~mne.io.Raw.plot_psd on that channel).
By default, :meth:~mne.io.Raw.plot_psd_topo will show only the MEG
channels if MEG channels are present; if only EEG channels are found, they
will be plotted instead:
End of explanation
raw.plot_sensors(ch_type='eeg')
Explanation: Plotting sensor locations from Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The channel locations in a :class:~mne.io.Raw object can be easily plotted
with the :meth:~mne.io.Raw.plot_sensors method. A brief example is shown
here; notice that channels in raw.info['bads'] are plotted in red. More
details and additional examples are given in the tutorial
tut-sensor-locations.
End of explanation
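If you prefer a 3D rendering of the sensor positions, the same method accepts a kind argument (an optional extra, not part of the original figure):
raw.plot_sensors(kind='3d', ch_type='eeg')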
raw.plot_projs_topomap()
Explanation: Plotting projectors from Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As seen in the output of :meth:mne.io.read_raw_fif above, there are
:term:projectors <projector> included in the example :class:~mne.io.Raw
file (representing environmental noise in the signal, so it can later be
"projected out" during preprocessing). You can visualize these projectors
using the :meth:~mne.io.Raw.plot_projs_topomap method. By default it will
show one figure per channel type for which projectors are present, and each
figure will have one subplot per projector. The three projectors in this file
were only computed for magnetometers, so one figure with three subplots is
generated. More details on working with and plotting projectors are given in
tut-projectors-background and tut-artifact-ssp.
End of explanation |
6,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
Ideally, before installation, create a clean python3 virtual environment to deploy the package, using virtualenvwrapper for example (see http://www.simononsoftware.com/virtualenv-tutorial-part-2/).
Step1: Installation with pip from github
Install the package with pip3. All the required dependencies will be automatically installed.
Step2: To update the package
Step3: Usage
The package is meant to be used in a jupyter notebook 4.0.0 +
Notebook setup
Launch the notebook in a terminal
Step4: If it does not autolaunch your web browser, open manually the following URL http://localhost:8888/tree
Step5: Default pylab parameters can be defined at the beginning of the notebook as well (see http://matplotlib.org/users/customizing.html for more options)
Step6: Using JGV
JGV is first initialized with a reference genome. Then annotation and alignment files can be added. Finally, coverage and feature localization plots can be generated.
Each function has specific options that are comprehensively detailed in the testing notebook provided with the package or in html version on nbviewer
Step7: One can also import the jprint and jhelp functions from JGV to improve the default print and help functions in jupyter
Step8: A sample test file can be loaded from the package as well
Step9: Initialize JGV with a reference genome
JGV starts by creating a Reference object from a fasta file
Step10: One can also give a list of chromosomes to select in the fasta file
Step11: Finally, instead of a fasta file, one can provide a tab separated index file containing at least 2 columns with the refid(chromosome name) and the length of the sequence, such as a fasta index create by faidx or with the output_index option of JGV
Step12: Adding annotation files
Once initialized a JGV object can parse and save annotation files (gff3, gtf and bed).
Step13: Several annotation can be loaded. Warnings will be thrown if there are chromosomes found in the reference sequence have no feature in the annotation file
Step14: Information about the annotations can be obtained with annotation_summary
Step15: Adding alignment files
JGV objects can also parse and compute the coverage from alignment files (bam, sam and bed).
Step16: Similar to annotation, JGV also has an alignment_summary function
Step17: Generate a plot of coverage per refid
Simple visualization to have a first idea of the sequencing coverage, with many customization options
Step18: Plotting the coverage and annotation features of a specific window
interval_plot is undoubtedly the most useful function of the package. It has a large panel of option to customize the plots and will adapt automatically to plot all the annotation and alignment coverage over a defined genomic interval or an entire chromosome | Python Code:
#### REMOVE in README.md ####
import JGV as package
from IPython.core.display import display, Markdown
if "__install_requires__" in package.__dict__:
display(Markdown("## Python packages dependencies:\n"))
for dep in package.__install_requires__:
display(Markdown("* {}\n".format(dep)))
#############################
Explanation: Installation
Ideally, before installation, create a clean python3 virtual environment to deploy the package, using virtualenvwrapper for example (see http://www.simononsoftware.com/virtualenv-tutorial-part-2/).
End of explanation
pip3 install git+https://github.com/a-slide/JupyterGenoViewer.git --process-dependency-links
Explanation: Installation with pip from github
Install the package with pip3. All the required dependencies will be automatically installed.
End of explanation
pip3 install git+https://github.com/a-slide/JupyterGenoViewer.git --upgrade --process-dependency-links
Explanation: To update the package:
End of explanation
jupyter notebook
Explanation: Usage
The package is meant to be used in a jupyter notebook 4.0.0 +
Notebook setup
Launch the notebook in a terminal
End of explanation
import matplotlib.pyplot as pl
%matplotlib inline
Explanation: If it does not autolaunch your web browser, open manually the following URL http://localhost:8888/tree
From Jupyter home page you can navigate to the directory you want to work in. Then, create a new Python3 Notebook.
In the notebook, import matplotlib and use the jupyter magic command to enable direct plotting in the current Notebook.
End of explanation
pl.rcParams['figure.figsize'] = 20,7
pl.rcParams['font.family'] = 'sans-serif'
pl.rcParams['font.sans-serif'] = ['DejaVu Sans']
pl.style.use('ggplot')
Explanation: Default pylab parameters can be defined at the beginning of the notebook as well (see http://matplotlib.org/users/customizing.html for more options)
End of explanation
from JGV.JGV import JGV
Explanation: Using JGV
JGV is first initialized with a reference genome. Then annotation and alignment files can be added. Finally, coverage and feature localization plots can be generated.
Each function has specific options that are comprehensively detailed in the testing notebook provided with the package or in html version on nbviewer: Test_notebook
Import package
End of explanation
from JGV.JGV import jhelp, jprint
Explanation: One can also import the jprint and jhelp functions from JGV to improve the default print and help functions in jupyter
End of explanation
example_bam = JGV.example_bam()
example_fasta = JGV.example_fasta()
example_gtf = JGV.example_gtf()
example_gff3 = JGV.example_gff3()
jprint(example_bam)
jprint(example_fasta)
jprint(example_gtf)
jprint(example_gff3)
Explanation: A sample test file can be loaded from the package as well
End of explanation
j = JGV(fp=example_fasta, verbose=True)
Explanation: Initialize JGV with a reference genome
JGV starts by creating a Reference object from a fasta file
End of explanation
j = JGV(fp=example_fasta, verbose=True, ref_list=["I","II","III"])
Explanation: One can also give a list of chromosomes to select in the fasta file
End of explanation
j = JGV(fp=example_fasta, verbose=True, output_index=True)
index = "/home/aleg/Programming/Python3/JupyterGenoViewer/JGV/data/yeast.tsv"
j = JGV(index, verbose=True)
Explanation: Finally, instead of a fasta file, one can provide a tab separated index file containing at least 2 columns with the refid (chromosome name) and the length of the sequence, such as a fasta index created by faidx or with the output_index option of JGV
End of explanation
j.add_annotation(example_gtf, name="yeastMine")
Explanation: Adding annotation files
Once initialized a JGV object can parse and save annotation files (gff3, gtf and bed).
End of explanation
j.add_annotation(example_gff3, name="Ensembl")
Explanation: Several annotations can be loaded. Warnings will be thrown if there are chromosomes in the reference sequence that have no feature in the annotation file
End of explanation
j.annotation_summary()
Explanation: Information about the annotations can be obtained with annotation_summary
End of explanation
j.add_alignment(example_bam, name="RNA-Seq")
Explanation: Adding alignment files
JGV objects can also parse and compute the coverage from alignment files (bam, sam and bed).
End of explanation
j.alignment_summary()
Explanation: Similar to annotation, JGV also has an alignment_summary function
End of explanation
r = j.refid_coverage_plot()
r = j.refid_coverage_plot(norm_depth=False, norm_len=False, log=True, color="dodgerblue", alpha=0.5)
Explanation: Generate a plot of coverage per refid
Simple visualization to have a first idea of the sequencing coverage, with many customization options
End of explanation
j.interval_plot("VI", feature_types=["gene", "transcript", "CDS"])
j.interval_plot("VI", start=220000, end=225000)
Explanation: Plotting the coverage and annotation features of a specific window
interval_plot is undoubtedly the most useful function of the package. It has a large panel of options to customize the plots and will adapt automatically to plot all the annotation and alignment coverage over a defined genomic interval or an entire chromosome
End of explanation |
6,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Relationships in the GO
Alex Warwick Vesztrocy, March 2016
For some analyses, it is possible to only use the <code>is_a</code> definitions given in the Gene Ontology.
However, it is important to remember that this isn't always the case. As such, <code>GOATOOLS</code> includes the option to load the relationship definitions also.
Loading GO graph with the relationship tags
This is possible by using the <code>optional_attrs</code> argument, upon instantiating a <code>GODag</code>.
Step1: Viewing relationships in the GO graph
So now, when looking at an individual term (which has a relationship defined in the GO) these are listed in a nested manner. As an example, look at <code>GO
Step2: These different relationship types are stored as a dictionary within the relationship attribute on a GO term.
Step3: Example use case
One example use case for the relationship terms, would be to look for all functions which regulate pseudohyphal growth (<code>GO
Step4: First, find the relationship types which contain "regulates"
Step5: Now, search through the terms in the tree for those with a relationship in this list and add them to a dictionary dependent on the type of regulation.
Step6: Now <code>regulating_terms</code> contains the GO terms which relate to regulating protein localisation to the nucleolus. | Python Code:
import os
from goatools.obo_parser import GODag
if not os.path.exists('go-basic.obo'):
!wget http://geneontology.org/ontology/go-basic.obo
go = GODag('go-basic.obo', optional_attrs=['relationship'])
Explanation: Relationships in the GO
Alex Warwick Vesztrocy, March 2016
For some analyses, it is possible to only use the <code>is_a</code> definitions given in the Gene Ontology.
However, it is important to remember that this isn't always the case. As such, <code>GOATOOLS</code> includes the option to load the relationship definitions also.
Loading GO graph with the relationship tags
This is possible by using the <code>optional_attrs</code> argument, upon instantiating a <code>GODag</code>.
End of explanation
eg_term = go['GO:1901990']
eg_term
Explanation: Viewing relationships in the GO graph
So now, when looking at an individual term (which has a relationship defined in the GO) these are listed in a nested manner. As an example, look at <code>GO:1901990</code> which has a single <code>regulates</code> relationship.
End of explanation
print(eg_term.relationship.keys())
print(eg_term.relationship['regulates'])
Explanation: These different relationship types are stored as a dictionary within the relationship attribute on a GO term.
End of explanation
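To make that structure explicit, you can loop over the dictionary — each key is a relationship type and each value is a set of GOTerm objects (a small illustrative sketch):
for rel_type, targets in eg_term.relationship.items():
    # print the relationship type and the GO ids it points to
    print(rel_type, sorted(t.id for t in targets))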
term_of_interest = go['GO:0007124']
Explanation: Example use case
One example use case for the relationship terms, would be to look for all functions which regulate pseudohyphal growth (<code>GO:0007124</code>). That is:
A pattern of cell growth that occurs in conditions of nitrogen limitation and abundant fermentable carbon source. Cells become elongated, switch to a unipolar budding pattern, remain physically attached to each other, and invade the growth substrate.
Source: https://www.ebi.ac.uk/QuickGO/GTerm?id=GO:0007124#term=info&info=1
End of explanation
regulates = frozenset([typedef
for typedef in go.typedefs.keys()
if 'regulates' in typedef])
print(regulates)
Explanation: First, find the relationship types which contain "regulates":
End of explanation
from collections import defaultdict
regulating_terms = defaultdict(list)
for t in go.values():
if hasattr(t, 'relationship'):
for typedef in regulates.intersection(t.relationship.keys()):
if term_of_interest in t.relationship[typedef]:
regulating_terms['{:s}d_by'.format(typedef[:-1])].append(t)
Explanation: Now, search through the terms in the tree for those with a relationship in this list and add them to a dictionary dependent on the type of regulation.
End of explanation
print('{:s} ({:s}) is:'.format(term_of_interest.name, term_of_interest.id))
for regulate_desc, goterms in regulating_terms.items():
print('\n - {:s}:'.format(regulate_desc))
for goterm in goterms:
print(' -- {:s} {:s}'.format(goterm.id, goterm.name))
for gochild in goterm.children:
print(' -- {:s} {:s}'.format(gochild.id, gochild.name))
Explanation: Now <code>regulating_terms</code> contains the GO terms which relate to regulating pseudohyphal growth.
End of explanation |
6,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../static/images/iterables.png" width="240">
Iterables
Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin, called iterables.
The main homepage has a nice section about MapNode and iterables if you want to learn more. Also, if you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out synchronize and intersource.
For example, let's assume we have a node (A) that does simple skull stripping, followed by a node (B) that does isometric smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 4mm, 8mm and 16mm.
Step1: Create a smoothing Node with IsotropicSmooth
Step2: Now, to use iterables and therefore smooth with different fwhm is as simple as that
Step3: And to wrap it up. We need to create a workflow, connect the nodes and finally, can run the workflow in parallel.
Step4: If we visualize the graph with exec, we can see where the parallelization actually takes place.
Step5: If you look at the structure in the workflow directory, you can also see, that for each smoothing, a specific folder was created, i.e. _fwhm_16.
Step6: Now, let's visualize the results!
Step7: IdentityInterface (special use case of iterabels)
A special use case of iterables is the IdentityInterface. The IdentityInterface interface allows you to create Nodes that do simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, if you want to run a preprocessing workflow over 5 subjects, each having two runs, and apply 2 different smoothing kernels (as is done in the Preprocessing Example), we can do this as follows
Step8: Now, we can create the IdentityInterface Node
Step9: That's it. Now, we can connect the output fields of this infosource node like any other node to wherever we want. | Python Code:
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth
# Initiate a skull stripping Node with BET
skullstrip = Node(BET(mask=True,
in_file='/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'),
name="skullstrip")
Explanation: <img src="../static/images/iterables.png" width="240">
Iterables
Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin, called iterables.
The main homepage has a nice section about MapNode and iterables if you want to learn more. Also, if you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out synchronize and intersource.
For example, let's assume we have a node (A) that does simple skull stripping, followed by a node (B) that does isometric smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 4mm, 8mm and 16mm.
End of explanation
isosmooth = Node(IsotropicSmooth(), name='iso_smooth')
Explanation: Create a smoothing Node with IsotropicSmooth
End of explanation
isosmooth.iterables = ("fwhm", [4, 8, 16])
Explanation: Now, to use iterables and therefore smooth with different fwhm is as simple as that:
End of explanation
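As an aside (not needed for this example), if you ever want two iterable fields to vary together element-wise instead of as a full cross-product, you can synchronize them; a hedged sketch with illustrative output filenames:
sync_smooth = Node(IsotropicSmooth(), name='sync_smooth')
sync_smooth.iterables = [('fwhm', [4, 8]),
                         ('out_file', ['smooth_a.nii.gz', 'smooth_b.nii.gz'])]
sync_smooth.synchronize = True  # pairs fwhm=4 with smooth_a and fwhm=8 with smooth_b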
# Create the workflow
wf = Workflow(name="smoothflow")
wf.base_dir = "/output"
wf.connect(skullstrip, 'out_file', isosmooth, 'in_file')
# Run it in parallel (one core for each smoothing kernel)
wf.run('MultiProc', plugin_args={'n_procs': 3})
Explanation: And to wrap it up. We need to create a workflow, connect the nodes and finally, can run the workflow in parallel.
End of explanation
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/output/smoothflow/graph_detailed.dot.png')
Explanation: If we visualize the graph with exec, we can see where the parallelization actually takes place.
End of explanation
!tree /output/smoothflow -I '*txt|*pklz|report*|*.json|*js|*.dot|*.html'
Explanation: If you look at the structure in the workflow directory, you can also see, that for each smoothing, a specific folder was created, i.e. _fwhm_16.
End of explanation
%pylab inline
from nilearn import plotting
plotting.plot_anat(
'/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz', title='original',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/skullstrip/sub-01_ses-test_T1w_brain.nii.gz', title='skullstripped',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_4/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=4',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_8/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=8',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_16/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=16',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
Explanation: Now, let's visualize the results!
End of explanation
# First, let's specify the list of input variables
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-01', 'run-02']
fwhm_widths = [4, 8]
Explanation: IdentityInterface (special use case of iterables)
A special use case of iterables is the IdentityInterface. The IdentityInterface interface allows you to create Nodes that do simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, if you want to run a preprocessing workflow over 5 subjects, each having two runs, and apply 2 different smoothing kernels (as is done in the Preprocessing Example), we can do this as follows:
End of explanation
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'session_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('session_id', session_list),
('fwhm_id', fwhm_widths)]
Explanation: Now, we can create the IdentityInterface Node
End of explanation
infosource.outputs
Explanation: That's it. Now, we can connect the output fields of this infosource node like any other node to wherever we want.
End of explanation |
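For example, using a toy downstream node just to show the wiring (in a real pipeline this would typically be a SelectFiles/DataGrabber or preprocessing node):
from nipype import Node, Workflow, IdentityInterface
# a dummy node with matching input fields, only for demonstration
downstream = Node(IdentityInterface(fields=['subject_id', 'session_id', 'fwhm_id']),
                  name='downstream')
demo_wf = Workflow(name='iterable_demo')
demo_wf.connect(infosource, 'subject_id', downstream, 'subject_id')
demo_wf.connect(infosource, 'session_id', downstream, 'session_id')
demo_wf.connect(infosource, 'fwhm_id', downstream, 'fwhm_id')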
6,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Transfer Learning for the Audio Domain with Model Maker
In this notebook, you'll learn how to use Model Maker for the Audio Domain.
It is part of the Codelab to Customize an Audio model and deploy on Android.
You'll use a custom birds dataset and export a TFLite model that can be used on a phone, a TensorFlow.JS model that can be used for inference in the browser and also a SavedModel version that you can use for serving.
Installing dependencies
Model Maker for the Audio domain needs TensorFlow 2.5 to work.
Step2: Import TensorFlow, Model Maker and other libraries
Among the dependencies that are needed, you'll use TensorFlow and Model Maker. Aside those, the others are for audio manipulation, playing and visualizations.
Step3: The Birds dataset
The Birds dataset is an education collection of 5 types of birds songs
Step4: Explore the data
The audios are already split in train and test folders. Inside each split folder, there's one folder for each bird, using their bird_code as name.
The audios are all mono and with 16kHz sample rate.
For more information about each file, you can read the metadata.csv file. It contains all the files' authors, licenses and some more information. You won't need to read it yourself in this tutorial.
Step5: Playing some audio
To have a better understanding of the data, let's listen to a random audio file from the test split.
Note
Step6: Training the Model
When using Model Maker for audio, you have to start with a model spec. This is the base model that your new model will extract information to learn about the new classes. It also affects how the dataset will be transformed to respect the models spec parameters like
Step7: Loading the data
Model Maker has the API to load the data from a folder and have it in the expected format for the model spec.
The train and test split are based on the folders. The validation dataset will be created as 20% of the train split.
Note
Step8: Training the model
the audio_classifier has the create method that creates a model and already start training it.
You can customize many parameters; for more information you can read more details in the documentation.
On this first try you'll use all the default configurations and train for 100 epochs.
Note
Step9: The accuracy looks good but it's important to run the evaluation step on the test data and vefify your model achieved good results on unseed data.
Step11: Understanding your model
When training a classifier, it's useful to see the confusion matrix. The confusion matrix gives you detailed knowledge of how your classifier is performing on test data.
Model Maker already creates the confusion matrix for you.
Step12: Testing the model [Optional]
You can try the model on a sample audio from the test dataset just to see the results.
First you get the serving model.
Step13: Coming back to the random audio you loaded earlier
Step14: The model created has a fixed input window.
For a given audio file, you'll have to split it in windows of data of the expected size. The last window might need to be filled with zeros.
Step15: You'll loop over all the splitted audio and apply the model for each one of them.
The model you've just trained has 2 outputs
Step16: Exporting the model
The last step is exporting your model to be used on embedded devices or on the browser.
The export method exports both formats for you.
Step17: You can also export the SavedModel version for serving or using on a Python environment. | Python Code:
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
! pip install tflite-model-maker tensorflow==2.5
Explanation: Transfer Learning for the Audio Domain with Model Maker
In this notebook, you'll learn how to use Model Maker for the Audio Domain.
It is part of the Codelab to Customize an Audio model and deploy on Android.
You'll use a custom birds dataset and export a TFLite model that can be used on a phone, a TensorFlow.JS model that can be used for inference in the browser and also a SavedModel version that you can use for serving.
Installing dependencies
Model Maker for the Audio domain needs TensorFlow 2.5 to work.
End of explanation
import tensorflow as tf
import tflite_model_maker as mm
from tflite_model_maker import audio_classifier
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
import glob
import random
from IPython.display import Audio, Image
from scipy.io import wavfile
print(f"TensorFlow Version: {tf.__version__}")
print(f"Model Maker Version: {mm.__version__}")
Explanation: Import TensorFlow, Model Maker and other libraries
Among the dependencies that are needed, you'll use TensorFlow and Model Maker. Aside those, the others are for audio manipulation, playing and visualizations.
End of explanation
birds_dataset_folder = tf.keras.utils.get_file('birds_dataset.zip',
'https://storage.googleapis.com/laurencemoroney-blog.appspot.com/birds_dataset.zip',
cache_dir='./',
cache_subdir='dataset',
extract=True)
Explanation: The Birds dataset
The Birds dataset is an education collection of 5 types of birds songs:
White-breasted Wood-Wren
House Sparrow
Red Crossbill
Chestnut-crowned Antpitta
Azara's Spinetail
The original audio came from Xeno-canto which is a website dedicated to sharing bird sounds from all over the world.
Let's start by downloading the data.
End of explanation
# @title [Run this] Util functions and data structures.
data_dir = './dataset/small_birds_dataset'
bird_code_to_name = {
'wbwwre1': 'White-breasted Wood-Wren',
'houspa': 'House Sparrow',
'redcro': 'Red Crossbill',
'chcant2': 'Chestnut-crowned Antpitta',
'azaspi1': "Azara's Spinetail",
}
birds_images = {
'wbwwre1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg/640px-Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg', # Alejandro Bayer Tamayo from Armenia, Colombia
'houspa': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/52/House_Sparrow%2C_England_-_May_09.jpg/571px-House_Sparrow%2C_England_-_May_09.jpg', # Diliff
'redcro': 'https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Red_Crossbills_%28Male%29.jpg/640px-Red_Crossbills_%28Male%29.jpg', # Elaine R. Wilson, www.naturespicsonline.com
'chcant2': 'https://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Chestnut-crowned_antpitta_%2846933264335%29.jpg/640px-Chestnut-crowned_antpitta_%2846933264335%29.jpg', # Mike's Birds from Riverside, CA, US
'azaspi1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Synallaxis_azarae_76608368.jpg/640px-Synallaxis_azarae_76608368.jpg', # https://www.inaturalist.org/photos/76608368
}
test_files = os.path.join('/content', data_dir, 'test/*/*.wav')
def get_random_audio_file():
test_list = glob.glob(test_files)
random_audio_path = random.choice(test_list)
return random_audio_path
def show_bird_data(audio_path):
sample_rate, audio_data = wavfile.read(audio_path, 'rb')
bird_code = audio_path.split('/')[-2]
print(f'Bird name: {bird_code_to_name[bird_code]}')
print(f'Bird code: {bird_code}')
display(Image(birds_images[bird_code]))
plttitle = f'{bird_code_to_name[bird_code]} ({bird_code})'
plt.title(plttitle)
plt.plot(audio_data)
display(Audio(audio_data, rate=sample_rate))
print('functions and data structures created')
Explanation: Explore the data
The audios are already split in train and test folders. Inside each split folder, there's one folder for each bird, using their bird_code as name.
The audios are all mono and with 16kHz sample rate.
For more information about each file, you can read the metadata.csv file. It contains all the files' authors, licenses and some more information. You won't need to read it yourself in this tutorial.
End of explanation
random_audio = get_random_audio_file()
show_bird_data(random_audio)
Explanation: Playing some audio
To have a better understanding of the data, let's listen to a random audio file from the test split.
Note: later in this notebook you'll run inference on this audio for testing
End of explanation
spec = audio_classifier.YamNetSpec(
keep_yamnet_and_custom_heads=True,
frame_step=3 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH,
frame_length=6 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH)
Explanation: Training the Model
When using Model Maker for audio, you have to start with a model spec. This is the base model that your new model will extract information to learn about the new classes. It also affects how the dataset will be transformed to respect the models spec parameters like: sample rate, number of channels.
YAMNet is an audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology.
Its input is expected to be at 16kHz with 1 channel.
You don't need to do any resampling yourself. Model Maker takes care of that for you.
frame_length decides how long each training sample is; in this case, 6 * EXPECTED_WAVEFORM_LENGTH.
frame_step decides how far apart consecutive training samples start; in this case, the ith sample starts 3 * EXPECTED_WAVEFORM_LENGTH after the (i-1)th sample.
The reason to set these values is to work around a limitation of real world datasets.
For example, in the bird dataset, birds don't sing all the time. They sing, rest and sing again, with noises in between. Having a long frame would help capture the singing, but setting it too long will reduce the number of samples for training.
End of explanation
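To sanity-check those window sizes you can print the constant and convert it to seconds; the 16000 Hz sample rate below is assumed from YAMNet's expected input rate rather than read from the spec:
samples = audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH
print(f'EXPECTED_WAVEFORM_LENGTH: {samples} samples')
# assuming 16 kHz input audio
print(f'frame_length ~ {6 * samples / 16000:.2f}s, frame_step ~ {3 * samples / 16000:.2f}s')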
train_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'train'), cache=True)
train_data, validation_data = train_data.split(0.8)
test_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'test'), cache=True)
Explanation: Loading the data
Model Maker has the API to load the data from a folder and have it in the expected format for the model spec.
The train and test split are based on the folders. The validation dataset will be created as 20% of the train split.
Note: The cache=True is important to make training later faster but it will also require more RAM to hold the data. For the birds dataset that is not a problem since it's only 300MB, but if you use your own data you have to pay attention to it.
End of explanation
batch_size = 128
epochs = 100
print('Training the model')
model = audio_classifier.create(
train_data,
spec,
validation_data,
batch_size=batch_size,
epochs=epochs)
Explanation: Training the model
the audio_classifier has the create method that creates a model and already starts training it.
You can customize many parameters; for more information you can read more details in the documentation.
On this first try you'll use all the default configurations and train for 100 epochs.
Note: The first epoch takes longer than all the other ones because it's when the cache is created. After that each epoch takes close to 1 second.
End of explanation
print('Evaluating the model')
model.evaluate(test_data)
Explanation: The accuracy looks good but it's important to run the evaluation step on the test data and verify your model achieved good results on unseen data.
End of explanation
def show_confusion_matrix(confusion, test_labels):
Compute confusion matrix and normalize.
confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)
axis_labels = test_labels
ax = sns.heatmap(
confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
cmap='Blues', annot=True, fmt='.2f', square=True)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
confusion_matrix = model.confusion_matrix(test_data)
show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)
Explanation: Understanding your model
When training a classifier, it's useful to see the confusion matrix. The confusion matrix gives you detailed knowledge of how your classifier is performing on test data.
Model Maker already creates the confusion matrix for you.
End of explanation
serving_model = model.create_serving_model()
print(f'Model\'s input shape and type: {serving_model.inputs}')
print(f'Model\'s output shape and type: {serving_model.outputs}')
Explanation: Testing the model [Optional]
You can try the model on a sample audio from the test dataset just to see the results.
First you get the serving model.
End of explanation
# if you want to try another file just uncomment the line below
random_audio = get_random_audio_file()
show_bird_data(random_audio)
Explanation: Coming back to the random audio you loaded earlier
End of explanation
sample_rate, audio_data = wavfile.read(random_audio, 'rb')
audio_data = np.array(audio_data) / tf.int16.max
input_size = serving_model.input_shape[1]
splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0)
print(f'Test audio path: {random_audio}')
print(f'Original size of the audio data: {len(audio_data)}')
print(f'Number of windows for inference: {len(splitted_audio_data)}')
Explanation: The model created has a fixed input window.
For a given audio file, you'll have to split it in windows of data of the expected size. The last window might need to be filled with zeros.
End of explanation
print(random_audio)
results = []
print('Result of the window ith: your model class -> score, (spec class -> score)')
for i, data in enumerate(splitted_audio_data):
yamnet_output, inference = serving_model(data)
results.append(inference[0].numpy())
result_index = tf.argmax(inference[0])
spec_result_index = tf.argmax(yamnet_output[0])
t = spec._yamnet_labels()[spec_result_index]
result_str = f'Result of the window {i}: ' \
f'\t{test_data.index_to_label[result_index]} -> {inference[0][result_index].numpy():.3f}, ' \
f'\t({spec._yamnet_labels()[spec_result_index]} -> {yamnet_output[0][spec_result_index]:.3f})'
print(result_str)
results_np = np.array(results)
mean_results = results_np.mean(axis=0)
result_index = mean_results.argmax()
print(f'Mean result: {test_data.index_to_label[result_index]} -> {mean_results[result_index]}')
Explanation: You'll loop over all the split audio windows and apply the model to each one of them.
The model you've just trained has 2 outputs: The original YAMNet's output and the one you've just trained. This is important because the real world environment is more complicated than just bird sounds. You can use the YAMNet's output to filter out non relevant audio, for example, on the birds use case, if YAMNet is not classifying Birds or Animals, this might show that the output from your model might have an irrelevant classification.
Below both outputs are printed to make it easier to understand their relation. Most of the mistakes your model makes are when YAMNet's prediction is not related to your domain (eg: birds).
End of explanation
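As a sketch of that filtering idea (illustrative only — the substring test is a naive heuristic, and the loop simply repeats the per-window inference done above):
kept_scores = []
for data in splitted_audio_data:
    yamnet_output, inference = serving_model(data)
    yamnet_label = spec._yamnet_labels()[tf.argmax(yamnet_output[0])]
    # only keep windows where YAMNet itself hears something bird/animal-like
    if 'Bird' in yamnet_label or 'Animal' in yamnet_label:
        kept_scores.append(inference[0].numpy())
if kept_scores:
    filtered_mean = np.array(kept_scores).mean(axis=0)
    print('Filtered prediction:', test_data.index_to_label[filtered_mean.argmax()])
else:
    print('No windows passed the YAMNet filter')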
models_path = './birds_models'
print(f'Exporting the TFLite model to {models_path}')
model.export(models_path, tflite_filename='my_birds_model.tflite')
Explanation: Exporting the model
The last step is exporting your model to be used on embedded devices or on the browser.
The export method exports both formats for you.
End of explanation
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Explanation: You can also export the SavedModel version for serving or using on a Python environment.
End of explanation |
6,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sorting Objects in Instance Catalogs
Bryce Kalmbach
This notebook provides a series of commands that take a Twinkles Phosim Instance Catalog and creates different pandas dataframes for different types of objects in the catalog. It first separates the full sets of objects in the Instance Catalogs before picking out the sprinkled strongly lensed systems for further analysis. The complete object dataframes contain
Step1: Parsing the Instance Catalog
Here we run through the instance catalog and store which rows belong to which class of object. This is necessary since the catalog objects do not all have the same number of properties so we cannot just import them all and then sort within a dataframe.
Step2: Populating Dataframes
Now we load the dataframes for the overall sets of objects.
Step3: Sort out sprinkled Strong Lensing Systems
Now we will pick out the pieces of strongly lensed systems that were sprinkled into the instance catalogs for the Twinkles project.
Lensed AGN
We start with the Lensed AGN. In Twinkles Instance Catalogs the lensed AGN have larger uniqueIds than normal since we added information about the systems into the uniqueIds. We use this to find them in the AGN dataframe.
Step4: Below we see a pair of lensed images from a double.
Step5: Now we will extract the extra information we have stored in the uniqueId. This information is the Twinkles System number in our custom OM10 catalog in the data directory in Twinkles and the Twinkles Image Number which identifies which image in that particular system refers to that line in the catalog.
Step6: We once again look at the two images we showed earlier. We see that they are image 0 and image 1 from Twinkles System 24.
Step7: We now add this information into our sprinkled AGN dataframe and reset the indices.
Step8: The last step is to now add a column with the lens galaxy uniqueId for each system so that we can cross-reference between the lensed AGN and the lens galaxy dataframe we will create next. We start by finding the uniqueIds for the lens galaxies.
Step9: We now see that the same system has the same lens galaxy uniqueId as we expect.
Step10: Lens Galaxies
Now we will create a dataframe with the Lens Galaxies.
Step11: We now have the lens galaxies in their own dataframe that can be joined on the lensed AGN dataframe by the uniqueId.
Step12: And we can check how many systems there are by checking the length of this dataframe.
Step13: Showing that we have 198 systems in the Twinkles field!
Lensed AGN Host Galaxies (Not in Twinkles 1 catalogs)
In Twinkles 1 catalogs we do not have host galaxies around our lensed AGN, but in the future we will want to be able to include this. We experimented with this at the 2017 DESC SLAC Collaboration Meeting Hack Day since Nan Li, Matt Wiesner and others are working on adding lensed hosts into images.
Therefore, I have included the capacity to find the host galaxies here for future use.
To start we once again cut based on the uniqueId which will be larger than a normal galaxy.
Step14: Then like the lensed AGN we add in the info from the longer Ids and the lens galaxy info along with resetting the index.
Step15: Notice that there are different numbers of sprinkled AGN and host galaxy entries.
Step16: This is because some host galaxies have both bulge and disk components, but not all do. The example we have been using does have both components and thus we have four entries for the doubly lensed system in the host galaxy dataframe. | Python Code:
import pandas as pd
import numpy as np
Explanation: Sorting Objects in Instance Catalogs
Bryce Kalmbach
This notebook provides a series of commands that take a Twinkles Phosim Instance Catalog and creates different pandas dataframes for different types of objects in the catalog. It first separates the full sets of objects in the Instance Catalogs before picking out the sprinkled strongly lensed systems for further analysis. The complete object dataframes contain:
* Stars: All stars in the Instance Catalog
* Galaxies: All bulge and disk components of galaxies in the Instance Catalog
* AGN: All AGN in the Instance Catalog
* SNe: The supernovae that are present in the Instance Catalog
Then there are sprinkled strongly lensed systems dataframes containing:
* Sprinkled AGN galaxies: The images of the lensed AGNs
* Lens Galaxies: These are the foreground galaxies in the lens system.
* (Not Default) Sprinkled AGN Host galaxies: While these were turned off in Run 1 of Twinkles the original motivation for this notebook was to find these objects in a catalog to help development of lensed hosts at the DESC 2017 SLAC Collaboration Meeting Hack Day.
Requirements
If you already have an instance catalog from Twinkles on hand all you need now are:
* Pandas
* Numpy
End of explanation
filename = 'twinkles_phosim_input_230.txt'
i = 0
not_star_rows = []
not_galaxy_rows = []
not_agn_rows = []
not_sne_rows = []
with open(filename, 'r') as f:
for line in f:
new_str = line.split(' ')
#Skip through the header
if len(new_str) < 4:
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
i+=1
continue
if new_str[5].startswith('starSED'):
#star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('galaxySED'):
#galaxy_rows.append(i)
not_star_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('agnSED'):
#agn_rows.append(i)
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('spectra_files'):
#sne_rows.append(i)
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
i += 1
Explanation: Parsing the Instance Catalog
Here we run through the instance catalog and store which rows belong to which class of object. This is necessary since the catalog objects do not all have the same number of properties so we cannot just import them all and then sort within a dataframe.
End of explanation
df_star = pd.read_csv(filename, delimiter=' ', header=None,
names = ['prefix', 'uniqueId', 'raPhosim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows=not_star_rows)
df_star[:3]
df_galaxy = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath',
'redshift', 'shear1', 'shear2', 'kappa',
'raOffset', 'decOffset', 'spatialmodel',
'majorAxis', 'minorAxis', 'positionAngle', 'sindex',
'internalExtinctionModel', 'internalAv', 'internalRv',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows=not_galaxy_rows)
df_galaxy[:3]
df_agn = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows = not_agn_rows)
df_agn[:3]
df_sne = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'shorterFileNames', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows = not_sne_rows)
df_sne[:3]
Explanation: Populating Dataframes
Now we load the dataframes for the overall sets of objects.
End of explanation
sprinkled_agn = df_agn[df_agn['uniqueId'] > 20000000000]
Explanation: Sort out sprinkled Strong Lensing Systems
Now we will pick out the pieces of strongly lensed systems that were sprinkled into the instance catalogs for the Twinkles project.
Lensed AGN
We start with the Lensed AGN. In Twinkles Instance Catalogs the lensed AGN have larger uniqueIds than normal since we added information about the systems into the uniqueIds. We use this to find them in the AGN dataframe.
End of explanation
sprinkled_agn[:2]
Explanation: Below we see a pair of lensed images from a double.
End of explanation
# This step undoes the step in CatSim that gives each component of a galaxy a different offset
twinkles_nums = []
for agn_id in sprinkled_agn['uniqueId']:
twinkles_ids = np.right_shift(agn_id-28, 10)
twinkles_nums.append(twinkles_ids)
#This parses the information added in the last 4 digits of the unshifted ID
twinkles_system_num = []
twinkles_img_num = []
for lens_system in twinkles_nums:
lens_system = str(lens_system)
twinkles_id = lens_system[-4:]
twinkles_id = np.int(twinkles_id)
twinkles_base = np.int(np.floor(twinkles_id/4))
twinkles_img = twinkles_id % 4
twinkles_system_num.append(twinkles_base)
twinkles_img_num.append(twinkles_img)
Explanation: Now we will extract the extra information we have stored in the uniqueId. This information is the Twinkles System number in our custom OM10 catalog in the data directory in Twinkles and the Twinkles Image Number which identifies which image in that particular system refers to that line in the catalog.
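As a standalone illustration of this decoding, here is a small sketch run on a single made-up uniqueId (the value below is purely illustrative, not taken from the catalog — it was chosen so that it decodes to system 24, image 1, like the example discussed next — and the offset of 28 and the 10-bit shift are the same ones used in the loop above):

```python
import numpy as np

example_unique_id = 1259619356  # hypothetical packed id, for illustration only

shifted = np.right_shift(example_unique_id - 28, 10)  # undo the AGN offset and the 10-bit shift
last_four = int(str(shifted)[-4:])                    # the Twinkles info lives in the last 4 digits
system_number = last_four // 4                        # which system in the custom OM10 catalog
image_number = last_four % 4                          # which image (0-3) of that system
print("shifted = %d -> system %d, image %d" % (shifted, system_number, image_number))
```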
End of explanation
print(twinkles_system_num[:2], twinkles_img_num[:2])
Explanation: We once again look at the two images we showed earlier. We see that they are image 0 and image 1 from Twinkles System 24.
End of explanation
sprinkled_agn = sprinkled_agn.reset_index(drop=True)
sprinkled_agn['twinkles_system'] = twinkles_system_num
sprinkled_agn['twinkles_img_num'] = twinkles_img_num
sprinkled_agn.iloc[:2, [1, 2, 3, -2, -1]]
Explanation: We now add this information into our sprinkled AGN dataframe and reset the indices.
End of explanation
#The lens galaxy ids do not have the extra 4 digits at the end so we remove them
#and then do the shift back to the `uniqueID`.
lens_gal_ids = np.left_shift(np.array(twinkles_nums) // 10000, 10) + 26
sprinkled_agn['lens_galaxy_uID'] = lens_gal_ids
Explanation: The last step is to now add a column with the lens galaxy uniqueId for each system so that we can cross-reference between the lensed AGN and the lens galaxy dataframe we will create next. We start by finding the uniqueIds for the lens galaxies.
End of explanation
sprinkled_agn.iloc[:2, [1, 2, 3, -3, -2, -1]]
Explanation: We now see that the same system has the same lens galaxy uniqueId as we expect.
End of explanation
lens_gal_locs = []
for idx in lens_gal_ids:
lens_gal_locs.append(np.where(df_galaxy['uniqueId'] == idx)[0])
lens_gals = df_galaxy.iloc[np.unique(lens_gal_locs)]
lens_gals = lens_gals.reset_index(drop=True)
Explanation: Lens Galaxies
Now we will create a dataframe with the Lens Galaxies.
End of explanation
lens_gals[:1]
Explanation: We now have the lens galaxies in their own dataframe that can be joined on the lensed AGN dataframe by the uniqueId.
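As a quick illustration (a sketch, assuming both dataframes were built exactly as above), the cross-reference is a plain pandas merge on the lens-galaxy uniqueId; the suffixes keep the image-level and lens-level columns apart after the join:

```python
# join every lensed image onto its foreground lens galaxy
joined = sprinkled_agn.merge(lens_gals, left_on='lens_galaxy_uID',
                             right_on='uniqueId', suffixes=('_img', '_lens'))
joined[['uniqueId_img', 'uniqueId_lens', 'twinkles_system', 'twinkles_img_num']].head()
```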
End of explanation
len(lens_gals)
Explanation: And we can check how many systems there are by checking the length of this dataframe.
End of explanation
host_gals = df_galaxy[df_galaxy['uniqueId'] > 170000000000]
host_gals[:2]
Explanation: Showing that we have 198 systems in the Twinkles field!
Lensed AGN Host Galaxies (Not in Twinkles 1 catalogs)
In Twinkles 1 catalogs we do not have host galaxies around our lensed AGN, but in the future we will want to be able to include this. We experimented with this at the 2017 DESC SLAC Collaboration Meeting Hack Day, since Nan Li, Matt Wiesner and others are working on adding lensed hosts into images.
Therefore, I have included the capacity to find the host galaxies here for future use.
To start we once again cut based on the uniqueId which will be larger than a normal galaxy.
End of explanation
twinkles_gal_nums = []
for gal_id in host_gals['uniqueId']:
twinkles_ids = np.right_shift(gal_id-26, 10)
twinkles_gal_nums.append(twinkles_ids)
host_twinkles_system_num = []
host_twinkles_img_num = []
for host_gal in twinkles_gal_nums:
host_gal = str(host_gal)
host_twinkles_id = host_gal[-4:]
host_twinkles_id = np.int(host_twinkles_id)
host_twinkles_base = np.int(np.floor(host_twinkles_id/4))
host_twinkles_img = host_twinkles_id % 4
host_twinkles_system_num.append(host_twinkles_base)
host_twinkles_img_num.append(host_twinkles_img)
host_lens_gal_ids = np.left_shift(np.array(twinkles_gal_nums) // 10000, 10) + 26
host_gals = host_gals.reset_index(drop=True)
host_gals['twinkles_system'] = host_twinkles_system_num
host_gals['twinkles_img_num'] = host_twinkles_img_num
host_gals['lens_galaxy_uID'] = host_lens_gal_ids
host_gals.iloc[:2, [1, 2, 3, -3, -2, -1]]
Explanation: Then like the lensed AGN we add in the info from the longer Ids and the lens galaxy info along with resetting the index.
End of explanation
len(sprinkled_agn), len(host_gals)
Explanation: Notice that there are different numbers of sprinkled AGN and host galaxy entries.
End of explanation
host_gals[host_gals['lens_galaxy_uID'] == 21393434].iloc[:, [1, 2, 3, -3, -2, -1]]
Explanation: This is because some host galaxies have both bulge and disk components, but not all do. The example we have been using does have both components and thus we have four entries for the doubly lensed system in the host galaxy dataframe.
End of explanation |
6,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression (predicting unmasked value given (x, y, z, synapses))
Step 1
Step1: Now graphing this data
Step2: Step 4/5/6 part b
Step3: Now graphing it
Step4: Step 7 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import urllib2
%matplotlib inline
sample_size = 10000
k_fold = 10
np.random.seed(1)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
mins = [np.min(csv[:,i]) for i in xrange(5)]
maxs = [np.max(csv[:,i]) for i in xrange(5)]
domains = zip(mins, maxs)
Y_range = domains[3]
del domains[3]
null_X = np.array([[np.random.randint(*domains[i]) for i in xrange(4)] for k in xrange(sample_size)])
null_Y = np.array([[np.random.randint(*Y_range)] for k in xrange(sample_size)])
# Sample sizes from each synthetic data distribution
S = np.array((100, 120, 200, 320,
400, 800, 1000, 2500, 5000, 7500))
# load our regressions
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.neighbors import KNeighborsRegressor as KNN
from sklearn.ensemble import RandomForestRegressor as RF
from sklearn.preprocessing import PolynomialFeatures as PF
from sklearn.pipeline import Pipeline
from sklearn import cross_validation
names = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']
regressions = [LinearRegression(),
LinearSVR(C=1.0),
KNN(n_neighbors=10, algorithm='auto'),
RF(max_depth=5, max_features=1),
Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]
r2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))
#iterate over sample sizes and regression algos
for idx1, N in enumerate(S):
# Randomly sample from synthetic data with sample size N
a = np.random.permutation(np.arange(sample_size))[:N]
X = null_X[a]
Y = null_Y[a]
Y = np.ravel(Y)
for idx2, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2[idx1, idx2, :] = [scores.mean(), scores.std()]
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[idx2], scores.mean(), scores.std() * 2))
Explanation: Regression (predicting unmasked value given (x, y, z, synapses))
Step 1: Assumptions
Assume that unmasked values, Y, follow some joint distribution $F_{Y \mid X}$ where $X$ is the set of data, which are vectors in $\mathbb{R}^4$ whose elements correspond to the x coordinate, y coordinate, z coordinate, and synapse count, respectively.
Step 2: Define model
Let the true values of unmasked correspond to the set $Y$, and let the joint distribution be parameterized by $\theta$. So for each $x_i \in X$ and $y_i \in Y$, $F(x_i;\theta)=y_i$.
We want to find parameters $\hat \theta$ such that we minimize the loss function $l(\hat y, y)$, where $\hat y = F(x;\hat \theta)$.
Step 3: Algorithms
Linear Regression
Support Vector Regression (SVR)
K-Nearest Neighbor Regression (KNN)
Random Forest Regression (RF)
Polynomial Regression
Step 4/5/6 part A: Null distribution
No relationship, i.e. all variables are independent, so the joint distribution can be factored into marginals. Let's just let all marginals be uniform across their respective min and max in the actual dataset. So the target variable Y, i.e. unmasked, follows a uniform distribution and carries no information about X.
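Since every comparison in this notebook is scored with scoring='r2', here is a quick hand computation of that score on the null data, as a sanity check (a sketch; it reuses null_X, null_Y and sample_size from above, and the single train/test split is only for illustration — we expect a value near zero or slightly negative, since X carries no information about Y under the null):

```python
from sklearn.linear_model import LinearRegression

n_train = sample_size // 2
lr = LinearRegression().fit(null_X[:n_train], null_Y[:n_train])
y_true = null_Y[n_train:]
y_pred = lr.predict(null_X[n_train:])
ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
print("manual R^2 on held-out null data: %0.3f" % (1 - ss_res / ss_tot))
```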
End of explanation
plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])
plt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])
plt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, label=names[2])
plt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])
plt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])
plt.xscale('log')
plt.axhline(1, color='red', linestyle='--')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
Explanation: Now graphing this data:
End of explanation
alt_X = np.apply_along_axis(lambda row : np.hstack((row[0:3], np.average(row[0:3]))), 1, null_X)
std_dev = np.sqrt(np.average(alt_X[:, 3]))
alt_Y = alt_X[:, 3]/4 + np.random.normal(scale=std_dev, size=(sample_size,))
r2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))
#iterate over sample sizes and regression algos
for idx1, N in enumerate(S):
# Randomly sample from synthetic data with sample size N
a = np.random.permutation(np.arange(sample_size))[:N]
X = alt_X[a]
Y = alt_Y[a]
Y = np.ravel(Y)
for idx2, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)
r2[idx1, idx2, :] = [scores.mean(), scores.std()]
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[idx2], scores.mean(), scores.std() * 2))
Explanation: Step 4/5/6 part b: Alternate distribution
Here we want a strong relationship between variables. Let's keep x, y, z uniformly distributed across the sample space, but let the # of synapses, s, be a deterministic function, f, of x, y, z. Let $s=f(x,y,z)=\frac{x+y+z}{3}$. Now let's say our random variable is $Y=(s/4)+\epsilon$, where $\epsilon$ is Gaussian noise (in the code its variance is set to the average of $s$, just to make this synthetic data slightly more realistic).
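A quick visual sanity check of this construction (a sketch; it reuses alt_X and alt_Y built above): plotting Y against the noiseless signal s/4 should show a clear linear trend with substantial scatter from the added noise.

```python
signal = alt_X[:, 3] / 4.0   # the noiseless part of Y
plt.scatter(signal, alt_Y, alpha=0.1)
plt.xlabel('s / 4 (noiseless signal)')
plt.ylabel('Y (signal + Gaussian noise)')
plt.show()
```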
End of explanation
plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])
plt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])
plt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, label=names[2])
plt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])
plt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])
plt.xscale('log')
plt.axhline(1, color='red', linestyle='--')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
Explanation: Now graphing it:
End of explanation
X = csv[:, [0, 1, 2, 4]]
Y = csv[:, 3]
for idx2, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[idx2], scores.mean(), scores.std() * 2))
Explanation: Step 7: Apply on actual data
End of explanation |
6,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. Since the repo is already included alongside this notebook, all that remains is to download the parameter file, which the next cell does.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 16
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, (None, 224, 224, 3))
with tf.name_scope('content_vgg'):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
codes_batch = sess.run(vgg.relu6, feed_dict={input_: images})
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
labels_vecs = lb.fit_transform(labels) # Your one-hot encoded labels array here
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels_vecs)
train_idx, val_idx = next(splitter)
half_val = int(len(val_idx) / 2)
test_idx = val_idx[:half_val]
val_idx = val_idx[half_val:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
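An optional check (a sketch; it uses the labels array and the index arrays created above) that the stratified split really did preserve the class balance across the three sets:

```python
from collections import Counter

print("train:", Counter(labels[train_idx]))
print("val:  ", Counter(labels[val_idx]))
print("test: ", Counter(labels[test_idx]))
```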
End of explanation
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
out = tf.layers.dense(inputs_, 256)
out = tf.maximum(0., out)
logits = tf.layers.dense(out, labels_vecs.shape[1]) # output layer logits
cost = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_) # cross entropy loss
cost = tf.reduce_mean(cost)
optimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
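A quick illustration of how the batches come out (a sketch; the shapes assume the training split sizes shown earlier) — note that the last batch absorbs the remainder, so every sample is used exactly once per pass:

```python
for b, (bx, by) in enumerate(get_batches(train_x, train_y, n_batches=10)):
    print("batch", b, "->", bx.shape, by.shape)
```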
End of explanation
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
loss, _ = sess.run([cost, optimizer], feed_dict={inputs_: x, labels_: y})
print ('Epoch: {}/{}, iteration: {}, training loss: {:.5f}'.format(e, epochs, iteration, loss))
iteration += 1
if iteration % 5 == 0:
val_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
print ('Epoch:{}/{}, iteration: {}, validation accuracy: {:.5f}'.
format(e, epochs, iteration, val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
6,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An algorithmics exercise: paragraph layout, greedy and dynamic-programming solutions
Step1: For now, I coded this up quickly in Bash to compute the cost of the two files
Step2: We see that the greedy solution has a cost of 54873 while the non-greedy (optimal) solution has a cost of 16000.
Making the difference between the two costs grow to infinity
We can simply produce these two lines $n$ times; the cost of the greedy solution will be $54873 n$ and the optimal cost will be $16000 n$.
This shows that the difference between the two costs is not bounded.
Bonus
Step3: Examples
A first simple example
Step4: Can we recover the following solution, which was computed by hand?
Step5: Let's check this
Step6: Question 2.
2. Give a dynamic programming algorithm solving the problem. Analyze its time and space complexity. Implement it in the language of your choice. Check that it gives the optimal answer on the example found in question 1 (or at least, a better answer).
We will first write down the optimization problem to solve, then a recurrence relation.
By writing a naive recursive algorithm with memoization, we obtain a dynamic programming algorithm.
Optimization problem to solve
We are given $M\in\mathbb{N}^*$, the line length, and a number $N\in\mathbb{N}^*$ of objects, of lengths $l_k \in [1,\dots,M]$.
We want to minimize the following cost, which depends on
Step7: Examples
Step8: Let's check this
Step9: Can we recover the following solution, which was computed by hand?
Step10: Let's check this
Step11: And with the dynamic programming solution
Step12: Question 3.
3. Suppose that for the cost function to minimize, we had simply chosen the sum of the numbers of spacing characters at the end of each line. Can we do better, complexity-wise, than for the previous question?
Yes: the greedy solution, which is at most linear in time and needs only constant extra working memory (or memory bounded by the length of the longest word, depending on whether len(mot) is $O(1)$ or $O(|\text{mot}|)$), is then optimal.
Question 4. Why a cubic cost rather than a linear one?
4. (More informal) In your opinion, what can justify the choice of taking the cubes rather than simply the numbers of spacing characters at the end of each line?
If the cost is linear, then the greedy solution is optimal (or at least a constant-factor approximation).
But it is also that the display would make no difference between the two examples below, whereas we are clearly happier with the visual rendering of the second one, which balances the two lines better.
(I am not too sure about all this)
TODO: explain this better!
We can check the solution found with a quadratic cost instead of a cubic one | Python Code:
%%bash
cat << EOF > /tmp/test_nongreedy_optimal.txt
AA AA AA AA AA AA B ;
AA AA AA AA AA AA B ;
EOF
cat /tmp/test_nongreedy_optimal.txt
%%bash
cat << EOF > /tmp/test_greedy_suboptimal.txt
AA AA AA AA AA AA B AA AA AA AA AA AA ;
B ;
EOF
cat /tmp/test_greedy_suboptimal.txt
Explanation: An algorithmics exercise: paragraph layout, greedy and dynamic-programming solutions
Source: http://lacl.fr/~lpellissier/Algo1/TD3.pdf, author: Luc Pélissier (2020-21).
The problem studied is the balanced printing of a paragraph on a printer.
The input text is modeled as a sequence of $n$ words of lengths $l_1,l_2, \dots, l_n$ (measured in characters, all assumed to have the same width - this is the case, for example, with a so-called fixed-width font).
We want to print this paragraph in a balanced way over a certain number of lines, each containing at most $M\geq1$ characters.
The balance criterion is the following:
If a given line contains words $i$ through $j$ (with $i \leq j$) and exactly one space is left between two consecutive words, the number of extra spacing characters at the end of the line is $M - j + i - \sum\limits_{k=i}^j l_k$, which must be nonnegative for the words to fit on the line.
The objective is to minimize the sum, over all lines except the last one, of the cubes of the numbers of spacing characters left at the end of each line: this corresponds to $f(s) = s^3$.
Author: Lilian Besson
Date: Thursday 04/02/2021
License: MIT
Link: https://github.com/Naereen/notebooks/tree/master/agreg/
Note: to make the trailing spaces easy to see, each line of the test files ends with ;.
Question 1.
1. Does the greedy algorithm, which fills the lines one by one by putting as many words as possible on the current line each time, yield the optimum?
Answer: no!
A counter-example of fixed size
Since the cost is the sum of the cubes of the trailing spaces of each line, we can think of a counter-example that exploits the fact that $(2x)^3 \gg 2 x^3$, and produce a text that has two identical lines (each with $k$ trailing spaces) when laid out optimally, but one almost full line and an almost empty second line when laid out greedily:
End of explanation
%%bash
clear
for file in /tmp/test_*txt; do
echo $file
hr
cat $file
hr
n=0
echo $n
for line in $(cat $file | grep -o ' *;' | sed s/';'/''/g | tr ' ' 'X'); do
echo $line; i=$(echo $line | wc -c)
i=$((i-1))
echo "n = $n, i = $i"; n=$((n + i*i*i))
echo "=> n = $n, i = $i"
done
done
Explanation: For now, I coded this up quickly in Bash to compute the cost of the two files:
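The same check can be done directly in Python rather than Bash. Here is a small sketch that reads one of the laid-out files and sums f(s) over the trailing-space count s of every line, mirroring what the Bash loop above computes (it assumes the two files created above exist and that each line ends with the ';' marker):

```python
def cost_of_file(path, f=lambda s: s ** 3):
    """Sum f(number of trailing spaces) over every line of a laid-out file."""
    total = 0
    with open(path) as lines:
        for line in lines:
            body = line.rstrip('\n').rstrip(';')            # drop the newline and the ';' marker
            total += f(len(body) - len(body.rstrip(' ')))   # count the trailing spaces
    return total

for path in ('/tmp/test_nongreedy_optimal.txt', '/tmp/test_greedy_suboptimal.txt'):
    print(path, cost_of_file(path))
```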
End of explanation
from typing import Tuple, List
def longueur_ligne(ligne: List[str]) -> int:
return sum(len(mot) for mot in ligne)
def mise_en_page_paragraphe_gloutonne(longueur_max:int, mots: List[str]) -> List[List[str]]:
print(f"Longueur maximum de la ligne = {longueur_max}")
print(f"Longueur des mots = {longueurs_mots}")
assert all(
1 <= len(mot) <= longueur_max
for mot in mots
)
    mots = list(mots)[::-1]  # reversed, so that we can pop words from the end
paragraphes = []
ligne_actuelle = []
longueur_ligne_actuelle = 0
while mots:
# print(f"mots = {mots}")
mot_a_placer = mots.pop()
# print(f" mot_a_placer = {mot_a_placer}")
# print(f" ligne_actuelle = {ligne_actuelle}")
if longueur_ligne(ligne_actuelle) + len(mot_a_placer) <= longueur_max:
ligne_actuelle += [mot_a_placer]
longueur_ligne_actuelle += len(mot_a_placer)
if longueur_ligne_actuelle < longueur_max:
ligne_actuelle += [" "]
longueur_ligne_actuelle += 1
            # +1 because the separating space is also added
if longueur_ligne_actuelle + 1 >= longueur_max:
paragraphes.append(ligne_actuelle)
ligne_actuelle = []
longueur_ligne_actuelle = 0
# print(f" ligne_actuelle = {ligne_actuelle}")
# print(f" paragraphes = {paragraphes}")
    # last line, if it has not been added yet
    if ligne_actuelle:
        paragraphes.append(ligne_actuelle)
    # then pad every line with trailing spaces
for ligne in paragraphes:
espaces_fin_paragraphe = longueur_max - longueur_ligne(ligne)
ligne += [" "] * espaces_fin_paragraphe
assert all(
longueur_ligne(ligne) == longueur_max
for ligne in paragraphes
)
return paragraphes
def print_paragraphes(paragraphes: List[List[str]]):
print(f"\n# Mise en page finale d'un texte de {len(paragraphes)} lignes ")
for ligne in paragraphes:
print("".join(ligne) + ";")
from typing import Callable
def cout_paragraphes(paragraphes: List[List[str]], cout: Callable[[int], int]) -> int:
lignes = [ "".join(ligne) for ligne in paragraphes ]
espaces_de_fin = [
len(ligne) - len(ligne.rstrip())
for ligne in lignes
]
return sum(cout(es) for es in espaces_de_fin)
def print_couts(paragraphes):
print("- Nombre d'espaces en fin de lignes =", cout_paragraphes(paragraphes, cout= lambda i: i))
print("- Somme des carrés des nombres d'espaces en fin de lignes =", cout_paragraphes(paragraphes, cout= lambda i: i**2))
print("- Somme des cubes des nombres d'espaces en fin de lignes =", cout_paragraphes(paragraphes, cout= lambda i: i**3))
Explanation: We see that the greedy solution has a cost of 54873 while the non-greedy (optimal) solution has a cost of 16000.
Making the difference between the two costs grow to infinity
We can simply produce these two lines $n$ times; the cost of the greedy solution will be $54873 n$ and the optimal cost will be $16000 n$.
This shows that the difference between the two costs is not bounded.
Bonus: making the ratio grow to infinity?
We should also be able to make the ratio of the two costs grow to infinity: instead of generating these $n$ identical lines, we just have to increase the length of the lines (and keep only two of them, but very long ones).
Since the cost is cubic in the number of spaces, the ratio between the greedy (suboptimal) cost and the optimal cost is indeed unbounded.
Corollary: this shows that the greedy solution is not a k-approximation of the problem under study.
Python code for the greedy method
Even though it is not efficient, let's start by writing this greedy method:
End of explanation
longueur_max = len("AA AA ") # sans le ;
mots = ["AA", "AA", "AA", "B"]
paragraphes = mise_en_page_paragraphe_gloutonne(longueur_max, mots)
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: Examples
A first simple example:
End of explanation
cat /tmp/test_greedy_suboptimal.txt
longueur_max = len("AA AA AA AA AA AA AA AA AA AA AA AA AA ") # sans le ;
mots = ["AA"]*13 + ["B"]*1
Explanation: Can we recover the following solution, which was computed by hand?
End of explanation
paragraphes = mise_en_page_paragraphe_gloutonne(longueur_max, mots)
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: Let's check this:
End of explanation
from typing import List, Tuple
from functools import lru_cache as memoize
couts = {
"lineaire": lambda i: i,
"quadratique": lambda i: i**2,
"cubique": lambda i: i**3,
}
@memoize(maxsize=None)
def mise_en_page_paragraphe(
longueur_max:int,
mots: Tuple[str],
choix_cout: str="cubique",
) -> List[List[str]]:
print(f"Longueur maximum de la ligne = {longueur_max}")
mots = list(mots)
print(f"Longueur des mots = {mots}")
assert len(mots) > 0
if len(mots) == 1:
return [ [mots[0]] ]
else:
cout = couts[choix_cout]
        # first option: put the first two words together on the same line
mots1 = [mots[0] + " " + mots[1]] + mots[2:]
cout1 = float('+inf')
if len(mots1[0]) <= longueur_max:
solution1 = mise_en_page_paragraphe(longueur_max, tuple(mots1))
cout1 = cout_paragraphes(solution1, cout)
        # second option: put mots[0] alone on its own line, and solve for the remaining words
sous_solution2 = mise_en_page_paragraphe(longueur_max, tuple(mots[1:]))
morceau_gauche2 = [ mots[0] ] + [" "] * (longueur_max - len(mots[0]))
solution2 = [ morceau_gauche2 ] + sous_solution2
cout2 = cout_paragraphes(solution2, cout)
if cout1 < cout2:
recombinaison_1 = []
for ligne in solution1:
mots_ici = "".join(ligne).split(" ")
ligne_ici = [ mots_ici[0] ]
for mot in mots_ici[1:]:
if mot:
ligne_ici += [" ", mot]
ligne_ici += [" "] * (longueur_max - longueur_ligne(ligne_ici))
recombinaison_1.append(ligne_ici)
return recombinaison_1
else:
recombinaison_2 = solution2
return recombinaison_2
Explanation: Question 2.
2. Give a dynamic programming algorithm solving the problem. Analyze its time and space complexity. Implement it in the language of your choice. Check that it gives the optimal answer on the example found in question 1 (or at least, a better answer).
We will first write down the optimization problem to solve, then a recurrence relation.
By writing a naive recursive algorithm with memoization, we obtain a dynamic programming algorithm.
Optimization problem to solve
We are given $M\in\mathbb{N}^*$, the line length, and a number $N\in\mathbb{N}^*$ of objects, of lengths $l_k \in [1,\dots,M]$.
We want to minimize the following cost, which depends on:
$L$, the number of lines,
$\forall x \in\{1,\dots,L-1\}$, $\ell_x$ gives the index of the last word placed on line $x$, with $\ell_0 = 0$ denoting an empty line 0.
$$
\min_{
\substack{
L\in\{1,\dots,N\}, \\
\ell_1,\dots,\ell_{L-1}\in\{1,\dots,N\}, \\
\forall x\in\{1,\dots,L-1\},\ \ell_{x+1} \geq \ell_x + 1
}
}
\sum_{x=1}^{L-1}
\Big(M - \ell_{x+1} + \ell_x - \sum_{k=\ell_x}^{\ell_{x+1}} l_k\Big)^3
$$
We do not count the spaces of the last line, hence the $L-1$ in the sum.
Recurrence relation
Initialization:
If there is only one word, the solution is trivial: we put it on the first line and we are done.
Induction step:
Consider the first word $l_1$ and the second word $l_2$.
The cost of the optimal solution is the minimum of the costs of the optimal solutions of the two following subproblems (of strictly smaller size):
either we put the first two words together, so we replace $l_1,l_2$ by $l_1' := l_1 + l_2 + 1$, and the remaining words are just shifted: $l_k' := l_{k+1}$. This case has $N-1$ words;
or we put the first word on its own line (base case), and we solve for the remaining words: $l_k' := l_{k+1}$. This case also has $1$ and $N-1$ words in its two subproblems.
TODO
Naive implementation with memoization
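Before the memoized version, here is a more conventional, index-based bottom-up formulation of the same idea, as a reference sketch. It works on the word lengths only, follows the statement's convention that the last line costs nothing (unlike the helper cout_paragraphes above, which also counts the last line, so the numbers differ), and runs in O(N·min(N, M)) time and O(N) space:

```python
def optimal_breaks(lengths, M, f=lambda s: s ** 3):
    """Bottom-up DP: best[i] = minimal cost of laying out words i..N-1,
    given that word i starts a new line; the paragraph's last line is free."""
    N = len(lengths)
    INF = float('inf')
    best = [INF] * N + [0]      # best[N] = 0: nothing left to place
    split = [N] * (N + 1)       # split[i] = index of the first word of the next line
    for i in range(N - 1, -1, -1):
        width = -1              # width of words i..j separated by single spaces
        for j in range(i, N):
            width += lengths[j] + 1
            if width > M:
                break
            cost = 0 if j == N - 1 else f(M - width)   # last line costs nothing
            if cost + best[j + 1] < best[i]:
                best[i], split[i] = cost + best[j + 1], j + 1
    lines, i = [], 0
    while i < N:                # recover the chosen line breaks
        lines.append((i, split[i]))
        i = split[i]
    return best[0], lines

# the counter-example from question 1: "AA"*6 + "B", twice
lengths = [2] * 6 + [1] + [2] * 6 + [1]
print(optimal_breaks(lengths, M=len("AA AA AA AA AA AA B ")))
```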
End of explanation
longueur_max = len("AA AA AA")
mots = ["AA", "AA", "AA", "B"]
mots = tuple(mots)  # make it hashable for @memoize
Explanation: Examples
End of explanation
paragraphes = mise_en_page_paragraphe(longueur_max, mots)
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: Let's check this:
End of explanation
cat /tmp/test_nongreedy_optimal.txt
longueur_max = len("AA AA AA AA AA AA B ")
mots = (["AA"]*6 + ["B"]*1) * 2
mots = tuple(mots)  # make it hashable for @memoize
Explanation: Can we recover the following solution, which was computed by hand?
End of explanation
paragraphes = mise_en_page_paragraphe_gloutonne(longueur_max, mots)
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: Let's check this:
End of explanation
paragraphes = mise_en_page_paragraphe(longueur_max, mots)
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: And with the dynamic programming solution:
End of explanation
paragraphes = mise_en_page_paragraphe(longueur_max, mots, choix_cout="quadratique")
print_paragraphes(paragraphes)
print_couts(paragraphes)
Explanation: Question 3.
3. Suppose that for the cost function to minimize, we had simply chosen the sum of the numbers of spacing characters at the end of each line. Can we do better, complexity-wise, than for the previous question?
Yes: the greedy solution, which is at most linear in time and needs only constant extra working memory (or memory bounded by the length of the longest word, depending on whether len(mot) is $O(1)$ or $O(|\text{mot}|)$), is then optimal.
Question 4. Why a cubic cost rather than a linear one?
4. (More informal) In your opinion, what can justify the choice of taking the cubes rather than simply the numbers of spacing characters at the end of each line?
If the cost is linear, then the greedy solution is optimal (or at least a constant-factor approximation).
But it is also that a linear cost makes no difference between the two examples below, whereas we are clearly happier with the visual rendering of the second one, which balances the two lines better: for instance, trailing-space counts of (0, 6) and (3, 3) both sum to 6, but their cubes give 216 versus 54, so the cubic cost rewards the more balanced layout.
(I am not too sure about all this)
TODO: explain this better!
We can check the solution found with a quadratic cost instead of a cubic one:
End of explanation |
6,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # of games, total # of medals. Use this dataset to answer the questions below.
Step1: Question 0 (Example)
What is the first country in df?
This function should return a Series.
Step2: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
Step3: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
Step4: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
Step5: Question 4
Write a function that creates a Series called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
Step6: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint
Step7: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
Step8: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint
Step9: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index). | Python Code:
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split(r'\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
Explanation: You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # of games, total # of medals. Use this dataset to answer the questions below.
End of explanation
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
Explanation: Question 0 (Example)
What is the first country in df?
This function should return a Series.
End of explanation
def answer_one():
return "YOUR ANSWER HERE"
Explanation: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
End of explanation
def answer_two():
return "YOUR ANSWER HERE"
Explanation: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
End of explanation
def answer_three():
return "YOUR ANSWER HERE"
Explanation: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
End of explanation
def answer_four():
return "YOUR ANSWER HERE"
Explanation: Question 4
Write a function that creates a Series called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
End of explanation
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
return "YOUR ANSWER HERE"
Explanation: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)
This function should return a single string value.
End of explanation
def answer_six():
return "YOUR ANSWER HERE"
Explanation: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
End of explanation
def answer_seven():
return "YOUR ANSWER HERE"
Explanation: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)
e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.
This function should return a single string value.
End of explanation
def answer_eight():
return "YOUR ANSWER HERE"
Explanation: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).
End of explanation |
6,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving the Cournot Oligopoly Model by Collocation
DEMAPP09 Cournot Oligopolist Problem
<br>
This example is taken from section 6.8.1, page(s) 159-162 of
Step1: and set the $\alpha$ and $\eta$ parameters
Step2: For convenience, we define a lambda function to represent the demand. Note
Step3: We will approximate the solution for prices in the $p\in [a, b]$ interval, using 25 collocation nodes. The compecon library provides the BasisChebyshev class to make computations with Chebyshev bases
Step4: Let's assume that our first guess is $S(p)=1$. To that end, we set the value of S to one in each of the nodes
Step5: It is important to highlight that in this problem the unknowns are the $c_k$ coefficients from the Chebyshev basis; however, an object of BasisChebyshev class automatically adjusts those coefficients so they are consistent with the values we set for the function at the nodes (here indicated by the .y property).
<br>
We are now ready to define the objective function, which we will call resid. This function takes as its argument a vector with the 25 Chebyshev basis coefficients and returns the left-hand side of the 25 equations defined by (5).
Step6: Note that the resid function takes a single argument (the coefficients for the Chebyshev basis). All other parameters (Q, p, eta, alpha must be declared in the main script, where Python will find their values.
<br>
To use Newton's method, it is necessary to compute the Jacobian matrix of the function whose roots we are looking for. In certain occasions, like in the problem we are dealing with, coding the computation of this Jacobian matrix correctly can be quite cumbersome. The NLP class provides, besides the Newton's method (which we used in the last example), the Broyden's Method, whose main appeal is that it does not require the coding of the Jacobian matrix (the method itself will approximate it). To learn more about Broyden's Method, click on the hyperlink above and see Quasi-Newton Methods in section 3.4, page(s) 39-42 of the text.
Step7: After 20 iterations, Broyden's method converges to the desired solution. We can visualize this in Figure 3, which shows the value of the function on 501 different points within the approximation interval. Notice that the residual plot crosses the horizontal axis 25 times; this occurs precisely at the collocation nodes (represented by red dots). This figure also shows the precision of the approximation
Step8: Figure 3
Step9: Figure 4
Step10: In Figure 4 notice how the equilibrium price and quantity change as the number of firms increases.
Figure 5 | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from compecon import BasisChebyshev, NLP, nodeunif
from compecon.demos import demo
Explanation: Solving the Cournot Oligopoly Model by Collocation
DEMAPP09 Cournot Oligopolist Problem
<br>
This example is taken from section 6.8.1, page(s) 159-162 of:
Miranda, M. J., & Fackler, P. L. (2002). Applied computational economics and finance (P. L. Fackler, ed.). Cambridge, Mass. : MIT Press.
<br>
To illustrate the implementation of the collocation method for implicit function problems, consider the case of a Cournot oligopoly. In the standard microeconomic model of the firm, the firm maximizes its profits by matching marginal revenue to marginal cost (MC). An oligopolistic firm, recognizing that its actions affect the price, knows that its marginal revenue is $p + q \frac{dp}{dq}$, where $p$ is the price, $q$ the quantity produced, and $\frac{dp}{dq}$ is the marginal impact of the product on the market price. Cournot's assumption is that the company acts as if none of its production changes would provoke a reaction from its competitors. This implies that:
\begin{equation}
\frac{dp}{dq} = \frac{1}{D'(p)} \tag{1}
\end{equation}
where $D(p)$ is the market demand curve.
<br>
Suppose we want to derive the firm's effective supply function, which specifies the amount $q = S(p)$ that it will supply at each price. The effective supply function of the firm is characterized by the functional equation
\begin{equation}
p + \frac{S(p)}{D'(p)} - MC(S(p)) = 0 \tag{2}
\end{equation}
for every price $p>0$. In simple cases, this function can be found explicitly. However, in more complicated cases, there is no explicit solution. Suppose for example that demand and marginal cost are given by
\begin{equation}
D(p) = p^{-\eta} \qquad\qquad MC(q) = \alpha\sqrt{q} + q^2
\end{equation}
so that the functional equation to be solved for $S(p)$ is
\begin{equation} \label{eq:funcional}
\left[p - \frac{S(p)p^{\eta+1}}{\eta}\right] - \left[\alpha\sqrt{S(p)} + S(p)^2\right] = 0 \tag{3}
\end{equation}
The collocation method
In equation (3), the unknown is the supply function $S(p)$, which makes (3) an infinite-dimensional equation. Instead of solving the equation directly, we will approximate its solution using $n$ Chebyshev polynomials $\phi_i(x)$, which are defined recursively for $x \in [-1,1]$ as:
\begin{align}
\phi_0(x) & = 1 \
\phi_1(x) & = x \
\phi_{k + 1}(x) & = 2x \phi_k(x) - \phi_{k-1}(x), \qquad \text{for} \; k = 1,2, \dots
\end{align}
<br>
In addition, instead of requiring that both sides of the equation be exactly equal over the entire domain of $p \in \Re^+$, we will choose $n$ Chebyshev nodes $p_i$ in the interval $[a, b]$:
\begin{equation} \label{eq:chebynodes}
p_i = \frac{a + b}{2} + \frac{b-a}{2}\cos\left(\frac{n-i + 0.5}{n}\pi\right), \qquad\text{for } i = 1,2, \dots, n \tag{4}
\end{equation}
<br>
Thus, the supply is approximated by
\begin{equation}
S(p_i) = \sum_{k = 0}^{n-1} c_{k}\phi_k(p_i)
\end{equation}
Substituting this last expression in (3) for each of the collocation nodes (Chebyshev in this case) results in a non-linear system of $n$ equations (one for each node) in $n$ unknowns $c_k$ (one for each Chebyshev polynomial), which in principle can be solved by Newton's method, as in the last example. Thus, in practice, the system to be solved is
\begin{equation} \label{eq:collocation}
\left[p_i - \frac{\left(\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)\right)p_i^{\eta+1}}{\eta}\right] - \left[\alpha\sqrt{\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)} + \left(\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)\right)^2\right] = 0 \tag{5}
\end{equation}
for $i=1,2,\dots, n$, where the unknowns are the coefficients $c_k$ with $k=0,1,\dots,n-1$.
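Before turning to the library, it may help to see equation (4) in action. The short sketch below is not part of the original demo; it simply restates formula (4) with NumPy for the interval $[0.1, 3.0]$ and $n=25$ used later, and, up to ordering, the values should match the nodes that the BasisChebyshev object exposes through its nodes attribute.
import numpy as np
n, a, b = 25, 0.1, 3.0
i = np.arange(1, n + 1)
# equation (4): Chebyshev collocation nodes on [a, b]
p_nodes = (a + b) / 2 + (b - a) / 2 * np.cos((n - i + 0.5) / n * np.pi)
print(p_nodes)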
Solving the model with Python
To solve this model we start a new Python session:
End of explanation
alpha = 1.0
eta = 1.5
Explanation: and set the $\alpha$ and $\eta$ parameters
End of explanation
D = lambda p: p ** (-eta)
Explanation: For convenience, we define a lambda function to represent the demand. Note: A lambda function is a small anonymous function in Python that can take any number of arguments, but can have only one expression. If you are curious to learn more Google "Lambda Functions in Python".
End of explanation
n = 25
a = 0.1
b = 3.0
S= BasisChebyshev(n, a, b, labels= ['price'], l=['supply'])
Explanation: We will approximate the solution for prices in the $p\in [a, b]$ interval, using 25 collocation nodes. The compecon library provides the BasisChebyshev class to make computations with Chebyshev bases:
End of explanation
p = S.nodes
S.y = np.ones_like(p)
Explanation: Let's assume that our first guess is $S(p)=1$. To that end, we set the value of S to one in each of the nodes
End of explanation
def resid(c):
    S.c = c  # update interpolation coefficients
    q = S(p)  # compute quantity supplied at price nodes
    return p - q * (p ** (eta + 1) / eta) - alpha * np.sqrt(q) - q ** 2
Explanation: It is important to highlight that in this problem the unknowns are the $c_k$ coefficients from the Chebyshev basis; however, an object of BasisChebyshev class automatically adjusts those coefficients so they are consistent with the values we set for the function at the nodes (here indicated by the .y property).
<br>
We are now ready to define the objective function, which we will call resid. This function takes as its argument a vector with the 25 Chebyshev basis coefficients and returns the left-hand side of the 25 equations defined by (5).
End of explanation
cournot = NLP(resid)
S.c = cournot.broyden(S.c, tol=1e-12)
Explanation: Note that the resid function takes a single argument (the coefficients for the Chebyshev basis). All other parameters (Q, p, eta, alpha must be declared in the main script, where Python will find their values.
<br>
To use Newton's method, it is necessary to compute the Jacobian matrix of the function whose roots we are looking for. In certain occasions, like in the problem we are dealing with, coding the computation of this Jacobian matrix correctly can be quite cumbersome. The NLP class provides, besides the Newton's method (which we used in the last example), the Broyden's Method, whose main appeal is that it does not require the coding of the Jacobian matrix (the method itself will approximate it). To learn more about Broyden's Method, click on the hyperlink above and see Quasi-Newton Methods in section 3.4, page(s) 39-42 of the text.
End of explanation
nFirms = 5
pplot = nodeunif(501, a, b)
demo.figure('Cournot Effective Firm Supply Function',
'Quantity', 'Price', [0, nFirms], [a, b])
plt.plot(nFirms * S(pplot), pplot, D(pplot), pplot)
plt.legend(('Supply','Demand'))
plt.show();
Explanation: After 20 iterations, Broyden's method converges to the desired solution. We can visualize this in Figure 3, which shows the value of the function on 501 different points within the approximation interval. Notice that the residual plot crosses the horizontal axis 25 times; this occurs precisely at the collocation nodes (represented by red dots). This figure also shows the precision of the approximation: outside nodes, the function is within $\approx 1\times10^{-17}$ units from zero.
<br>
One of the advantages of working with the BasisChebyshev class is that, once the collocation coefficients have been found, we can evaluate the supply function by calling the S object as if it were a Python function. Thus, for example, to find out the quantity supplied by the firm when the price is 1.2, we simply evaluate print(S(1.2)), which returns 0.3950. We use this feature next to compute the effective supply curve when there are 5 identical firms in the market; the result is shown in Figure 2.
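As a minimal usage check (assuming the Broyden iteration above has converged), the evaluation mentioned in the text is a one-liner:
print(S(1.2))  # quantity supplied by a single firm at price p = 1.2, approximately 0.395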
Figure 2 Supply and demand when there are 5 firms
End of explanation
p = pplot
demo.figure('Residual Function for Cournot Problem',
'Quantity', 'Residual')
plt.hlines(0, a, b, 'k', '--', lw=2)
plt.plot(pplot, resid(S.c))
plt.plot(S.nodes, np.zeros_like(S.nodes), 'r*')
plt.show();
Explanation: Figure 3: Approximation residuals for equation (5)
This block generates Figure 3.
End of explanation
m = np.array([1, 3, 5, 10, 15, 20])
demo.figure('Supply and Demand Functions', 'Quantity', 'Price', [0, 13])
plt.plot(np.outer(S(pplot), m), pplot)
plt.plot(D(pplot), pplot, linewidth=2, color='black')
plt.legend(['m = 1', 'm = 3', 'm = 5', 'm = 10', 'm = 15', 'm = 20', 'demand'])
plt.show();
Explanation: Figure 4: Change in the effective supply as the number of firms increases
We now plot the effective supply for a varying number of firms; the result is shown in Figure 4.
End of explanation
# Bisection search for the market-clearing price at each industry size m
pp = (b + a) / 2
dp = (b - a) / 2
m = np.arange(1, 26)
for i in range(50):
    dp /= 2
    pp = pp - np.sign(S(pp) * m - D(pp)) * dp
demo.figure('Cournot Equilibrium Price as Function of Industry Size',
'Number of Firms', 'Price')
plt.bar(m, pp);
plt.show();
Explanation: In Figure 4 notice how the equilibrium price and quantity change as the number of firms increases.
Figure 5: Equilibrium price as a function of the number of firms
The last figure in this example (Figure 5), shows the equilibrium price as a function of the number of firms.
End of explanation |
6,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dolsek and Fajfar (2004)
This methodology makes use of work by Dolsek and Fajfar (2004) to estimate the inelastic displacement of a SDOF system based on its elastic displacement and the proposed R−μ−T relationship. Record-to-record dispersion from Ruiz-García and Miranda (2007) can be included in the derivation of fragility curves. It is suitable for single-building fragility curve estimation and is applicable to any kind of multi-linear capacity curves. Individual fragility curves can be later combined into a single fragility curve that considers inter-building uncertainty.
<img src="../../../../../figures/DF_r_mu_T.jpg" width="400" align="middle">
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Please also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.
Step2: Idealise pushover curves
In order to use this methodology the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The valid options for this methodology are "bilinear" and "quadrilinear". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only interstorey drift damage model type is supported.
Step4: Calculate fragility functions
The damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations.
1. Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.
2. Please also define the constant acceleration-constant velocity and constant velocity-constant displacement corner periods of a Newmark-Hall type spectrum using the parameter corner_periods.
Step5: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step6: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step7: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step8: Plot vulnerability function
Step9: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
from rmtk.vulnerability.derivation_fragility.R_mu_T_no_dispersion.dolsek_fajfar import DF2004
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Dolsek and Fajfar (2004)
This methodology makes use of work by Dolsek and Fajfar (2004) to estimate the inelastic displacement of a SDOF system based on its elastic displacement and the proposed R−μ−T relationship. Record-to-record dispersion from Ruiz-García and Miranda (2007) can be included in the derivation of fragility curves. It is suitable for single-building fragility curve estimation and is applicable to any kind of multi-linear capacity curves. Individual fragility curves can be later combined into a single fragility curve that considers inter-building uncertainty.
<img src="../../../../../figures/DF_r_mu_T.jpg" width="400" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv"
input_spectrum = "../../../../../../rmtk_data/FEMAP965spectrum.txt"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
Sa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Please also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.
End of explanation
idealised_type = "quadrilinear"
idealised_capacity = utils.idealisation(idealised_type, capacity_curves)
utils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)
Explanation: Idealise pushover curves
In order to use this methodology the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The valid options for this methodology are "bilinear" and "quadrilinear". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only interstorey drift damage model type is supported.
End of explanation
montecarlo_samples = 50
corner_periods = [0.5, 1.8]
fragility_model = DF2004.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, corner_periods)
Explanation: Calculate fragility functions
The damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations.
1. Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.
2. Please also define the constant acceleration-constant velocity and constant velocity-constant displacement corner periods of a Newmark-Hall type spectrum using the parameter corner_periods.
End of explanation
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
6,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Report on Laboratory Assignment No. 3
Results of determining the cache memory parameters
Description of my laptop's processor from cpu-world.com
Step1: Graphs based on the data obtained from main_tooled.cpp
Step2: Graphs based on the data obtained from valgrind
%matplotlib inline
from matplotlib import pyplot
def prepare_stats(path='compact_report_stats.txt'):
graph = dict()
curStats = list()
for line in open(path):
k,v = map(str.strip, line.strip().split('='))
if k == 'n':
curStats = dict()
graph.setdefault(int(v), list()).append(curStats)
else:
if not '.' in v:
v = int(v)
elif v[-1] == '%':
v = float(v[:-1]) / 100.0
else:
v = float(v)
curStats[k] = v
return graph
stats = prepare_stats()
def build_graphs(stats):
graphs = dict() # cache_lvl -> {param -> [(size,val),..]}
for size, stat in stats.iteritems():
for lvl, perLvlStat in enumerate(stat):
for param, val in perLvlStat.iteritems():
graphs.setdefault(lvl, dict()).setdefault(param, list()).append((size, val))
return graphs
graphs = build_graphs(stats)
def draw_graphs(graphs, figsize):
fig = pyplot.figure(figsize=figsize)
col_count = len(graphs)
for col_num, (lvl, perParamGraph) in enumerate(graphs.iteritems()):
row_count = len(perParamGraph)
for row_num, (param, graph) in enumerate(perParamGraph.iteritems()):
graph.sort()
ax = fig.add_subplot(row_count, col_count, 1 + col_num + col_count * row_num)
ax.set_title('Cache lvl {} - {}'.format(lvl + 1, param))
ax.plot([_[0] for _ in graph], [_[1] for _ in graph], color='blue', linestyle='-', marker='o',
markerfacecolor='green', markersize=12)
ax.set_yscale('log')
pyplot.show()
valgrind_data = dict()
size = 0
for line in open('valgrind.txt'):
if 'n = ' in line:
size = int(line.split('=')[1].strip())
elif 'D1 misses' in line:
valgrind_data.setdefault(0, dict()).setdefault('total', list()).append(
(size, int(line.split(':')[1].split('(')[0].replace(',', '')))
)
elif 'LLd misses' in line:
valgrind_data.setdefault(2, dict()).setdefault('total', list()).append(
(size, int(line.split(':')[1].split('(')[0].replace(',', '')))
)
Explanation: Report on Laboratory Assignment No. 3
Results of determining the cache memory parameters
Description of my laptop's processor from cpu-world.com:
* Level 1 cache size:
* 2 x 32 KB 8-way set associative instruction caches
* 2 x 32 KB 8-way set associative data caches
* Level 2 cache size:
* 2 x 256 KB 8-way set associative caches
* Level 3 cache size:
* 3 MB 12-way set associative shared cache
* Data width:
* 64 bit
* The number of cores:
* 2
Some of the parameters can be obtained with the sysctl system utility:
$> sysctl -a | grep machdep.cpu.cache
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 8
machdep.cpu.cache.size: 256
This gives the following configuration:
2 CPU
CacheLine = 64b = 2^6
L1 = 32KB, 8-way, per-cpu
L2 = 256KB, 8-way, per-cpu
L3 = 3MB, 12-way, shared
32Kb / 64b = 512 blocks
512 / 8 = 2^6 = 64 sets
64bit pointer = [52 bit - tag][6 bit - index][6 bit - offset]
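The same arithmetic can be checked with a few lines of Python (a small stand-alone sketch, independent of the C++ sources listed below):
line_size = 64            # bytes per cache line
l1_size = 32 * 1024       # L1 data cache size in bytes
ways = 8
blocks = l1_size // line_size              # 512 blocks
sets = blocks // ways                      # 64 sets
offset_bits = line_size.bit_length() - 1   # 6
index_bits = sets.bit_length() - 1         # 6
tag_bits = 64 - index_bits - offset_bits   # 52
print(blocks, sets, tag_bits, index_bits, offset_bits)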
Code for analyzing the number of cache misses
Source file without tooling and miss counting - main.cpp
Source file with tooling for miss counting - main_tooled.cpp
Build script - compile.sh
Note that main.cpp was built without optimizations, because the optimizer was so aggressive that processing a problem of any size finished in a fraction of a second.
When main.cpp was used without modifications (with the block-based implementations), -O2 did not pull off that trick.
Script that runs the program with tooling - run.sh
Script that runs valgrind on the program without tooling - valgrind.sh
Cache-miss graphs
Below we read and parse the logs from the runs of the tooled program and of the program under valgrind.
End of explanation
draw_graphs(graphs, figsize=(17, 17))
Explanation: Graphs based on the data obtained from main_tooled.cpp
End of explanation
draw_graphs(valgrind_data, figsize=(17, 5))
Explanation: Graphs based on the data obtained from valgrind
End of explanation |
6,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright © 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Restart the Runtime
Note
You need to restart the Colab Runtime Engine after installing the required Python packages (Menu > Runtime > Restart runtime...)
Step2: Import relevant packages
Step3: Check GPU Availability
Check if your Colab notebook is configured to use Graphical Processing Units (GPUs). If zero GPUs are available, check if the Colab notebook is configured to use GPUs (Menu > Runtime > Change Runtime Type).
Step4: Download the IMDB Dataset from TensorFlow Datasets
For our demo example, we are using the IMDB data set to train a sentiment model based on the pre-trained BERT model. The data set is provided through TensorFlow Datasets (TFDS). Our ML pipeline can read TFRecords, however, it expects only TFRecord files in the data folder. This is why we need to delete the additional files provided by TFDS.
Step5: Helper function to load the BERT model as Keras layer
We are reusing the BERT Layer from tf.hub in two locations within our pipeline components
Step6: TFX Pipeline
The TensorFlow Extended Pipeline is more or less following the example setup shown here. We'll only note deviations from the original setup.
Initialize the Interactive TFX Pipeline
Step7: Load the dataset
Step8: TensorFlow Data Validation
Step12: TensorFlow Transform
This is where we perform the BERT processing.
Step13: Check the Output Data Structure of the TF Transform Operation
Step18: Train the Keras Model
Step19: TensorFlow Model Evaluation
Step20: Model Export for Serving
Step21: Test your Exported Model
Step22: Upload the Exported Model to GDrive | Python Code:
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install -Uq tfx==0.25.0
!pip install -Uq tensorflow-text # The tf-text version needs to match the tf version
print("Restart your runtime after installing the packages")
Explanation: Copyright © 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Example TFX Pipeline demonstrating the usage of BERT
This pipeline example is an extension of the BERT pipeline. It demonstrates data preprocessing and training as described in the TensorFlow Blog post Part 1: Fast, scalable and accurate NLP: Why TFX is a perfect match for deploying BERT. The code in this example is different in that the exported model will expect raw JSON as input data, instead of the standard tf.Example data structures commonly used with the BERT pipeline.
<table class="tfo-notebook-buttons" width="100%">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/workshops/blob/master/blog/TFX_Pipeline_for_Bert_Preprocessing.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/workshops/blob/master/blog/TFX_Pipeline_for_Bert_Preprocessing.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Motivation
Instead of converting the input of a transformer model into token ids on the client side, the model exported from this pipeline allows that conversion to happen on the server side.
The pipeline takes advantage of the broad TensorFlow Ecosystem, including:
* Loading the IMDB dataset via TensorFlow Datasets
* Loading a pre-trained model via tf.hub
* Manipulating the raw input data with tf.text
* Building a simple model architecture with Keras
* Composing the model pipeline with TensorFlow Extended, e.g. TensorFlow Transform, TensorFlow Data Validation and then consuming the tf.Keras model with the latest Trainer component from TFX
The structure of the overall pipeline follows the TFX Taxi Cab example.
Outline
Install Required Packages
Load the training data set
Create the TFX Pipeline
Export the trained Model
Test the exported Model
Non-Colab users
This notebook is intended to run in a Google Colab environment. However, it should also be possible to run it in any other Jupyter environment. In that case, update the file and directory paths and install TensorFlow>=2.2.0 manually.
Project Setup
Install Required Packages
End of explanation
# Restart the Colab notebook programmatically
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Runtime
Note
You need to restart the Colab Runtime Engine after installing the required Python packages (Menu > Runtime > Restart runtime...)
End of explanation
import glob
import os
import pprint
import re
import tempfile
from shutil import rmtree
from typing import List, Dict, Tuple, Union
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_hub as hub
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import (dataset_metadata, dataset_schema,
metadata_io, schema_utils)
from tfx.components import (Evaluator, ExampleValidator, ImportExampleGen,
ModelValidator, Pusher, ResolverNode, SchemaGen,
StatisticsGen, Trainer, Transform)
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.proto import evaluator_pb2, example_gen_pb2, pusher_pb2, trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing
from tfx.utils.dsl_utils import external_input
import tensorflow_datasets as tfds
import tensorflow_model_analysis as tfma
import tensorflow_text as text
from tfx.orchestration.experimental.interactive.interactive_context import \
InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
Explanation: Import relevant packages
End of explanation
num_gpus_available = len(tf.config.experimental.list_physical_devices('GPU'))
print("Num GPUs Available: ", num_gpus_available)
assert num_gpus_available > 0
Explanation: Check GPU Availability
Check if your Colab notebook is configured to use Graphical Processing Units (GPUs). If zero GPUs are available, check if the Colab notebook is configured to use GPUs (Menu > Runtime > Change Runtime Type).
End of explanation
!mkdir /content/tfds/
def clean_before_download(base_data_dir: str) -> None:
rmtree(base_data_dir)
def delete_unnecessary_files(base_path: str) -> None:
counter = 0
file_list = ["dataset_info.json", "label.labels.txt", "feature.json"]
for f in file_list:
try:
os.remove(os.path.join(base_path, f))
counter += 1
except OSError:
pass
for f in glob.glob(base_path + "imdb_reviews-unsupervised.*"):
os.remove(f)
counter += 1
print(f"Deleted {counter} files")
def get_dataset(name: str = "imdb_reviews", version: str = "1.0.0") -> Tuple[Tuple, List]:
base_data_dir = "/content/tfds/"
config="plain_text"
version="1.0.0"
clean_before_download(base_data_dir)
tfds.disable_progress_bar()
builder = tfds.text.IMDBReviews(data_dir=base_data_dir,
config=config,
version=version)
download_config = tfds.download.DownloadConfig(
download_mode=tfds.GenerateMode.FORCE_REDOWNLOAD)
builder.download_and_prepare(download_config=download_config)
base_tfrecords_filename = os.path.join(base_data_dir, "imdb_reviews", config, version, "")
train_tfrecords_filename = base_tfrecords_filename + "imdb_reviews-train*"
test_tfrecords_filename = base_tfrecords_filename + "imdb_reviews-test*"
label_filename = os.path.join(base_tfrecords_filename, "label.labels.txt")
labels = [label.rstrip('\n') for label in open(label_filename)]
delete_unnecessary_files(base_tfrecords_filename)
return (train_tfrecords_filename, test_tfrecords_filename), labels
tfrecords_filenames, labels = get_dataset()
Explanation: Download the IMDB Dataset from TensorFlow Datasets
For our demo example, we are using the IMDB data set to train a sentiment model based on the pre-trained BERT model. The data set is provided through TensorFlow Datasets (TFDS). Our ML pipeline can read TFRecords, however, it expects only TFRecord files in the data folder. This is why we need to delete the additional files provided by TFDS.
End of explanation
%%skip_for_export
%%writefile bert.py
import tensorflow as tf
import tensorflow_hub as hub
BERT_TFHUB_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"
def load_bert_layer(model_url: str = BERT_TFHUB_URL) -> tf.keras.layers.Layer:
# Load the pre-trained BERT model as layer in Keras
bert_layer = hub.KerasLayer(
handle=model_url,
trainable=True)
return bert_layer
Explanation: Helper function to load the BERT model as Keras layer
We are reusing the BERT Layer from tf.hub in two locations within our pipeline components:
* in the model architecture when we define our Keras model
* in our preprocessing function when we extract the BERT settings (casing and vocab file path) to reuse the settings during the tokenization
End of explanation
context = InteractiveContext()
Explanation: TFX Pipeline
The TensorFlow Extended Pipeline is more or less following the example setup shown here. We'll only note deviations from the original setup.
Initialize the Interactive TFX Pipeline
End of explanation
output = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=45),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=5)
]))
# Load the data from our prepared TFDS folder
examples = external_input("/content/tfds/imdb_reviews/plain_text/1.0.0")
example_gen = ImportExampleGen(input=examples, output_config=output)
context.run(example_gen)
%%skip_for_export
for artifact in example_gen.outputs['examples'].get():
print(artifact.uri)
Explanation: Load the dataset
End of explanation
%%skip_for_export
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# context.show(statistics_gen.outputs['statistics'])
%%skip_for_export
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=True)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
%%skip_for_export
# Check the data schema for the type of input tensors
tfdv.load_schema_text(schema_gen.outputs['schema'].get()[0].uri + "/schema.pbtxt")
%%skip_for_export
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
context.show(example_validator.outputs['anomalies'])
Explanation: TensorFlow Data Validation
End of explanation
%%skip_for_export
%%writefile transform.py
from typing import Dict, Union
import tensorflow as tf
import tensorflow_text as text
from bert import load_bert_layer
MAX_SEQ_LEN = 512 # Max number is 512
do_lower_case = load_bert_layer().resolved_object.do_lower_case.numpy()
def preprocessing_fn(inputs: Dict) -> Dict:
  """Preprocess input column of text into transformed columns of
  * input token ids
  * input mask
  * input type ids
  """
CLS_ID = tf.constant(101, dtype=tf.int64)
SEP_ID = tf.constant(102, dtype=tf.int64)
PAD_ID = tf.constant(0, dtype=tf.int64)
vocab_file_path = load_bert_layer().resolved_object.vocab_file.asset_path
bert_tokenizer = text.BertTokenizer(vocab_lookup_table=vocab_file_path,
token_out_type=tf.int64,
lower_case=do_lower_case)
def tokenize_text(
text: Union[tf.Tensor, tf.SparseTensor], sequence_length: int = MAX_SEQ_LEN
) -> tf.Tensor:
    """Perform the BERT preprocessing from text -> input token ids."""
# Convert text into token ids
tokens = bert_tokenizer.tokenize(text)
# Flatten the output ragged tensors
tokens = tokens.merge_dims(1, 2)[:, :sequence_length]
# Add start and end token ids to the id sequence
start_tokens = tf.fill([tf.shape(text)[0], 1], CLS_ID)
end_tokens = tf.fill([tf.shape(text)[0], 1], SEP_ID)
tokens = tokens[:, :sequence_length - 2]
tokens = tf.concat([start_tokens, tokens, end_tokens], axis=1)
# Truncate sequences greater than MAX_SEQ_LEN
tokens = tokens[:, :sequence_length]
# Pad shorter sequences with the pad token id
tokens = tokens.to_tensor(default_value=PAD_ID)
pad = sequence_length - tf.shape(tokens)[1]
tokens = tf.pad(tokens, [[0, 0], [0, pad]], constant_values=PAD_ID)
# And finally reshape the word token ids to fit the output
# data structure of TFT
return tf.reshape(tokens, [-1, sequence_length])
def preprocess_bert_input(text):
    """Convert input text into the input_word_ids, input_mask, input_type_ids."""
input_word_ids = tokenize_text(text)
input_mask = tf.cast(input_word_ids > 0, tf.int64)
input_mask = tf.reshape(input_mask, [-1, MAX_SEQ_LEN])
zeros_dims = tf.stack(tf.shape(input_mask))
input_type_ids = tf.fill(zeros_dims, 0)
input_type_ids = tf.cast(input_type_ids, tf.int64)
return (
input_word_ids,
input_mask,
input_type_ids
)
input_word_ids, input_mask, input_type_ids = \
preprocess_bert_input(tf.squeeze(inputs['text'], axis=1))
return {
'input_word_ids': input_word_ids,
'input_mask': input_mask,
'input_type_ids': input_type_ids,
'label': inputs['label']
}
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath("transform.py"))
context.run(transform)
Explanation: TensorFlow Transform
This is where we perform the BERT processing.
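For intuition, here is a hedged stand-alone sketch of what the tokenizer inside preprocessing_fn produces for a toy review; it reuses the same calls as transform.py above, and the exact token ids depend on the downloaded vocabulary.
vocab_file = load_bert_layer().resolved_object.vocab_file.asset_path
# lower_case=True is assumed here because the uncased BERT model is used
tokenizer = text.BertTokenizer(vocab_lookup_table=vocab_file, token_out_type=tf.int64, lower_case=True)
print(tokenizer.tokenize(tf.constant(["A great movie"])).merge_dims(1, 2))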
End of explanation
from tfx_bsl.coders.example_coder import ExampleToNumpyDict
pp = pprint.PrettyPrinter()
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = transform.outputs['transformed_examples'].get()[0].uri
print(train_uri)
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_folders = [os.path.join(train_uri, name) for name in os.listdir(train_uri)]
tfrecord_filenames = []
for tfrecord_folder in tfrecord_folders:
for name in os.listdir(tfrecord_folder):
tfrecord_filenames.append(os.path.join(tfrecord_folder, name))
# Create a TFRecordDataset to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(1):
serialized_example = tfrecord.numpy()
example = ExampleToNumpyDict(serialized_example)
pp.pprint(example)
Explanation: Check the Output Data Structure of the TF Transform Operation
End of explanation
%%skip_for_export
%%writefile trainer.py
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from typing import Text
import absl
import tensorflow as tf
from tensorflow import keras
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
_LABEL_KEY = 'label'
BERT_TFHUB_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"
def _gzip_reader_fn(filenames):
  """Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
def load_bert_layer(model_url=BERT_TFHUB_URL):
# Load the pre-trained BERT model as layer in Keras
bert_layer = hub.KerasLayer(
handle=model_url,
trainable=False) # Model can be fine-tuned
return bert_layer
def get_model(tf_transform_output, max_seq_length=512):
# Dynamically create inputs for all outputs of our transform graph
feature_spec = tf_transform_output.transformed_feature_spec()
feature_spec.pop(_LABEL_KEY)
inputs = {
key: tf.keras.layers.Input(shape=(max_seq_length), name=key, dtype=tf.int64)
for key in feature_spec.keys()
}
input_word_ids = tf.cast(inputs["input_word_ids"], dtype=tf.int32)
input_mask = tf.cast(inputs["input_mask"], dtype=tf.int32)
input_type_ids = tf.cast(inputs["input_type_ids"], dtype=tf.int32)
bert_layer = load_bert_layer()
encoder_inputs = dict(
input_word_ids=tf.reshape(input_word_ids, (-1, max_seq_length)),
input_mask=tf.reshape(input_mask, (-1, max_seq_length)),
input_type_ids=tf.reshape(input_type_ids, (-1, max_seq_length)),
)
outputs = bert_layer(encoder_inputs)
# Add additional layers depending on your problem
x = tf.keras.layers.Dense(256, activation='relu')(outputs["pooled_output"])
dense = tf.keras.layers.Dense(64, activation='relu')(x)
pred = tf.keras.layers.Dense(1, activation='sigmoid')(dense)
keras_model = tf.keras.Model(
inputs=[
inputs['input_word_ids'],
inputs['input_mask'],
inputs['input_type_ids']],
outputs=pred)
keras_model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=['accuracy']
)
return keras_model
def _get_serve_tf_examples_fn(model, tf_transform_output):
  """Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(raw_features):
# Conversion from raw_features to features only needed to adjust
# the Tensor shape to fit Keras' inputs
features = dict()
features['text'] = tf.reshape(raw_features['text'], [-1, 1])
transformed_features = model.tft_layer(features)
outputs = model(transformed_features)
return {'outputs': outputs}
return serve_tf_examples_fn
def _input_fn(file_pattern: Text,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 32) -> tf.data.Dataset:
  """Generates features and label for tuning/training.

  Args:
    file_pattern: input tfrecord file pattern.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch.

  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_LABEL_KEY)
return dataset
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 32)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 32)
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = get_model(tf_transform_output=tf_transform_output)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
features_spec = dict(
text=tf.TensorSpec(shape=(None), dtype=tf.string),
)
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
features_spec
),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
# NOTE: Adjust the number of training and evaluation steps when training in an production setup
TRAINING_STEPS = 10000
EVALUATION_STEPS = 1000
trainer = Trainer(
module_file=os.path.abspath("trainer.py"),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=TRAINING_STEPS),
eval_args=trainer_pb2.EvalArgs(num_steps=EVALUATION_STEPS))
context.run(trainer)
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)
Explanation: Train the Keras Model
End of explanation
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key='label')],
slicing_specs=[tfma.SlicingSpec()],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='CategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-2})))
])
]
)
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config
)
context.run(evaluator)
# Check the blessing
!ls {evaluator.outputs['blessing'].get()[0].uri}
Explanation: TensorFlow Model Evaluation
End of explanation
!mkdir /content/serving_model_dir
serving_model_dir = "/content/serving_model_dir"
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=serving_model_dir)))
context.run(pusher)
Explanation: Model Export for Serving
End of explanation
push_uri = pusher.outputs.model_push.get()[0].uri
latest_version_path = os.path.join(push_uri)
loaded_model = tf.saved_model.load(latest_version_path)
example_str = b"This is the finest show ever produced for TV. Each episode is a triumph. The casting, the writing, the timing are all second to none. This cast performs miracles."
f = loaded_model.signatures["serving_default"]
print(f(tf.constant([example_str])))
Explanation: Test your Exported Model
End of explanation
from google.colab import drive
drive.mount('/content/drive')
!mkdir /content/drive/My\ Drive/exported_model
!cp -r {pusher.outputs.model_push.get()[0].uri} /content/drive/My\ Drive/exported_model/
drive.flush_and_unmount()
print('Exported model has been uploaded to your Google Drive.')
Explanation: Upload the Exported Model to GDrive
End of explanation |
6,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
Step1: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
Step2: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
Step3: 4. Enter CM API To BigQuery Parameters
Write the current state of accounts, subaccounts, profiles, advertisers, campaigns, sites, roles, and reports to BigQuery for a given list of CM accounts.
1. Specify the name of the dataset, several tables will be created here.
1. If the dataset exists, it is unchanged.
1. Add CM account ids for the accounts to pull data from.
Modify the values below for your use case, can be done multiple times, then click play.
Step4: 5. Execute CM API To BigQuery
This does NOT need to be modified unless you are changing the recipe; just click play.
!pip install git+https://github.com/google/starthinker
Explanation: 1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
Explanation: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
End of explanation
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
Explanation: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
End of explanation
FIELDS = {
'endpoint': '',
'auth_read': 'user', # Credentials used for reading data.
'auth_write': 'service', # Credentials used for writing data.
'dataset': '', # Google BigQuery dataset to create tables in.
'accounts': '', # Comma separated CM account ids.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 4. Enter CM API To BigQuery Parameters
Write the current state of accounts, subaccounts, profiles, advertisers, campaigns, sites, roles, and reports to BigQuery for a given list of CM accounts.
1. Specify the name of the dataset, several tables will be created here.
1. If the dataset exists, it is unchanged.
1. Add CM account ids for the accounts to pull data from.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dcm_api': {
'auth': 'user',
'endpoints': {'field': {'name': 'endpoint','kind': 'choice','choices': ['accountPermissionGroups','accountPermissions','accountUserProfiles','accounts','ads','advertiserGroups','advertiserLandingPages','advertisers','browsers','campaigns','changeLogs','cities','connectionTypes','contentCategories','countries','creativeFields','creativeGroups','creatives','directorySites','dynamicTargetingKeys','eventTags','files','floodlightActivities','floodlightActivityGroups','floodlightConfigurations','languages','metros','mobileApps','mobileCarriers','operatingSystemVersions','operatingSystems','placementGroups','placementStrategies','placements','platformTypes','postalCodes','projects','regions','remarketingLists','reports','sites','sizes','subaccounts','targetableRemarketingLists','targetingTemplates','userprofiles','userRolePermissionGroups','userRolePermissions','userRoles','videoFormats'],'default': ''}},
'accounts': {
'single_cell': True,
'values': {'field': {'name': 'accounts','kind': 'integer_list','order': 2,'default': '','description': 'Comma separated CM account ids.'}}
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 1,'default': '','description': 'Google BigQuery dataset to create tables in.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
Explanation: 5. Execute CM API To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
6,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook implements an array-based version of Heapsort.
Heapsort
The function call swap(A, i, j) takes an array A and two indexes i and j and exchanges the elements at these indexes.
Step1: The procedure sink takes three arguments.
- A is the array representing the heap.
- k is an index into the array A.
- n is the upper bound of the part of this array that has to be transformed into a heap.
The array A itself might actually have more than $n+1$ elements, but for the
purpose of the method sink we restrict our attention to the subarray
A[k
Step2: The function call heapSort(A) has the task to sort the array A and proceeds in two phases.
- In phase one our goal is to transform the array A into a heap that is stored in A.
In order to do so, we traverse the array A in reverse in a loop.
The invariant of this loop is that before
sink is called, all trees rooted at an index greater than
k satisfy the heap condition. Initially this is true because the trees that
are rooted at indices greater than $(n + 1) // 2 - 1$ are trivial, i.e. they only
consist of their root node.
In order to maintain the invariant for index k, sink is called with
argument k, since at this point, the tree rooted at index k satisfies
the heap condition except possibly at the root. It is then the job of $\texttt{sink}$ to
establish the heap condition at index k. If the element at the root has a
priority that is too low, sink ensures that this element sinks down in the tree
as far as necessary.
- In phase two we remove the elements from the heap one-by-one and insert them at the end of
the array.
When the while-loop starts, the array A contains a heap. Therefore,
the smallest element is found at the root of the heap. Since we want to sort the
array A descendingly, we move this element to the end of the array A and in
return move the element from the end of the array A to the front.
After this exchange, the sublist A[0
Step3: The version of heap_sort given below adds some animation.
Step4: Testing
Step5: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
Step6: Next, we sort a million random integers. It is not as fast as merge sort, but we do not need an auxiliary array and hence we don't need additional storage. | Python Code:
def swap(A, i, j):
A[i], A[j] = A[j], A[i]
Explanation: This notebook implements an array-based version of Heapsort.
Heapsort
The function call swap(A, i, j) takes an array A and two indexes i and j and exchanges the elements at these indexes.
End of explanation
def sink(A, k, n):
while 2 * k + 1 <= n:
j = 2 * k + 1
if j + 1 <= n and A[j] > A[j + 1]:
j += 1
if A[k] < A[j]:
return
swap(A, k, j)
k = j
Explanation: The procedure sink takes three arguments.
- A is the array representing the heap.
- k is an index into the array A.
- n is the upper bound of the part of this array that has to be transformed into a heap.
The array A itself might actually have more than $n+1$ elements, but for the
purpose of the method sink we restrict our attention to the subarray
A[k:n].
When calling sink, the assumption is that A[k:n+1] should represent a heap
that possibly has its heap condition violated at its root, i.e. at index k. The
purpose of the procedure sink is to restore the heap condition at index k.
- We compute the index j of the left subtree below index k.
- We check whether there also is a right subtree at position j+1.
This is the case if j + 1 <= n.
- If the heap condition is violated at index k, we exchange the element at position k
with the child that has the higher priority, i.e. the child that is smaller.
- Next, we check in line 9 whether the heap condition is violated at index k.
If the heap condition is satisfied, there is nothing left to do and the procedure returns.
Otherwise, the element at position k is swapped with
the element at position j.
Of course, after this swap it is possible that the heap condition is
violated at position j. Therefore, k is set to j and the while-loop continues
as long as the node at position k has at least one child, i.e. as long as
2 * k + 1 <= n.
End of explanation
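To make the heap condition discussed above concrete, a small checker can be written (a minimal sketch; is_min_heap is a helper introduced here only for illustration, it is not part of the algorithm):
def is_min_heap(A, n):
    # The heap condition requires A[k] <= A[child] for every node k whose children lie within A[0:n+1].
    for k in range((n + 1) // 2):
        left, right = 2 * k + 1, 2 * k + 2
        if left <= n and A[k] > A[left]:
            return False
        if right <= n and A[k] > A[right]:
            return False
    return True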
def heap_sort(A):
n = len(A) - 1
for k in range((n + 1) // 2 - 1, -1, -1):
sink(A, k, n)
while n >= 1:
swap(A, 0, n)
n -= 1
sink(A, 0, n)
Explanation: The function call heapSort(A) has the task to sort the array A and proceeds in two phases.
- In phase one our goal is to transform the array A into a heap that is stored in A.
In order to do so, we traverse the array A in reverse in a loop.
The invariant of this loop is that before
sink is called, all trees rooted at an index greater than
k satisfy the heap condition. Initially this is true because the trees that
are rooted at indices greater than $(n + 1) // 2 - 1$ are trivial, i.e. they only
consist of their root node.
In order to maintain the invariant for index k, sink is called with
argument k, since at this point, the tree rooted at index k satisfies
the heap condition except possibly at the root. It is then the job of $\texttt{sink}$ to
establish the heap condition at index k. If the element at the root has a
priority that is too low, sink ensures that this element sinks down in the tree
as far as necessary.
- In phase two we remove the elements from the heap one-by-one and insert them at the end of
the array.
When the while-loop starts, the array A contains a heap. Therefore,
the smallest element is found at the root of the heap. Since we want to sort the
array A descendingly, we move this element to the end of the array A and in
return move the element from the end of the array A to the front.
After this exchange, the sublist A[0:n-1] represents a heap, except that the
heap condition might now be violated at the root. Next, we decrement n, since the
last element of the array A is already in its correct position.
In order to reestablish the heap condition at the root, we call sink with index
0.
End of explanation
def heap_sort(A):
n = len(A) - 1
for k in range((n + 1) // 2 - 1, -1, -1):
sink(A, k, n)
while n >= 1:
swap(A, 0, n)
n -= 1
sink(A, 0, n)
Explanation: The version of heap_sort given below adds some animation.
End of explanation
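A quick sanity check of heap_sort on a small list (a minimal sketch):
L = [13, 5, 2, 7, 11, 3]
heap_sort(L)
print(L)   # the list is now sorted descendingly: [13, 11, 7, 5, 3, 2]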
import random as rnd
def isOrdered(L):
for i in range(len(L) - 1):
assert L[i] >= L[i+1]
from collections import Counter
def sameElements(L, S):
assert Counter(L) == Counter(S)
Explanation: Testing
End of explanation
def testSort(n, k):
for i in range(n):
L = [ rnd.randrange(2*k) for x in range(k) ]
oldL = L[:]
heap_sort(L)
isOrdered(L)
sameElements(L, oldL)
assert len(L) == len(oldL)
print('.', end='')
print()
print("All tests successful!")
%%time
testSort(100, 20_000)
Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
End of explanation
%%time
k = 1000_000
L = [ rnd.randrange(2 * k) for x in range(k) ]
S = heap_sort(L)
Explanation: Next, we sort a million random integers. It is not as fast as merge sort, but we do not need an auxiliary array and hence we don't need additional storage.
End of explanation |
6,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Average driving speed against GDP
In this post I will attempt to combine two datasets I have worked on. Firstly the Average driving speeds by country as estimated by Google's location API, and secondly the GDP by country using data from the world bank.
Import the data
Firstly we import the data as two separate data frames, this assumes the data exists as per the other posts and also does the cleaning without explanation.
Step1: An issue
Unfortunately the two data sets can't be coerced together easily. The reason is that the country codes used in the average speed investigation were taken from the geonames website, which uses a two-letter code, while the country codes for the GDP use the proper three-letter code. Thankfully most of the codes can be matched by simply pairing the first two letters of the 3-letter code (e.g. "USA" with "US"), so we first create all of these pairs
Step2: For those with multiple matches, some we can easily add in by hand
Step3: The rest we will ignore for now.
Plotting the data
Now we plot the data | Python Code:
import numpy as np
import pandas as pd
df_GDP = pd.read_csv("../data_sets/GDP_by_Country_WorldBank/ny.gdp.mktp.cd_Indicator_en_csv_v2.csv",
                     quotechar='"', skiprows=2)
colnames_to_drop = df_GDP.columns[np.array([2, 3, -2, -1])]
for c in colnames_to_drop:
df_GDP.drop(c, 1, inplace=True)
df_GDP = df_GDP[~df_GDP['Country Code'].isnull()]
df_AS = pd.read_csv("AverageSpeedsByCountry.txt", skipinitialspace=True)
Explanation: Average driving speed against GDP
In this post I will attempt to combine two datasets I have worked on. Firstly the Average driving speeds by country as estimated by Google's location API, and secondly the GDP by country using data from the world bank.
Import the data
Firstly we import the data as two separate data frames, this assumes the data exists as per the other posts and also does the cleaning without explanation.
End of explanation
pairs = []
for Country in df_AS.Country:
matches = [Country in CC[:2] for CC in df_GDP['Country Code'].values]
matched_values = df_GDP['Country Code'][matches].values
if len(matched_values) == 1:
pairs.append([Country, matched_values[0]])
elif len(matched_values) > 1:
print "For {} I found these matches:".format(Country), " ".join(matched_values)
else:
print "No matches found for {}".format(Country)
Explanation: An issue
Unfortunately the two data sets can't be coerced together easily. The reason is that the country codes used in the average speed investigation were taken from the geonames website, which uses a two-letter code, while the country codes for the GDP use the proper three-letter code. Thankfully most of the codes can be matched by simply pairing the first two letters of the 3-letter code (e.g. "USA" with "US"), so we first create all of these pairs:
End of explanation
pairs_by_hand = [['BR', 'BRA'],
['CA', 'CAN'],
['FR', 'FRA'],
['AU', 'AUS'],
['AR', 'ARG'],
['IN', 'IND']]
for pair in pairs_by_hand:
pairs.append(pair)
Explanation: For those with multiple matches, some we can easily add in by hand:
End of explanation
from matplotlib.text import TextPath
import matplotlib.pyplot as plt
ax = plt.subplot(111)
for [AveSpeedCC, GDPCC] in pairs:
GDP = df_GDP[df_GDP['Country Code'] == GDPCC]['2013'].values[0]
AveS = df_AS[df_AS.Country == AveSpeedCC].Ave.values
#ax.scatter(AveS, GDP, c="r", marker=TextPath((0, 0), AveSpeedCC, size=10000), s=1000)
m = r"$\mathrm{{{}}}$".format(AveSpeedCC)
ax.plot(AveS, GDP, marker=m, markersize=20)
ax.set_yscale("log")
ax.set_ylabel("GDP")
ax.set_xlabel("Average Speed")
plt.show()
Explanation: The rest we will ignore for now.
Plotting the data
Now we plot the data
End of explanation |
6,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. The _ builtin _ module
1.1 apply
Step1: This has the same effect as:
Step2: So why use apply at all?
Step3: What are "keyword arguments"?
apply passes keyword arguments with a dictionary: the dictionary keys are the function's parameter names and the dictionary values are the actual argument values (formal vs. actual parameters).
Judging from the example above, when passing arguments partially you can only supply the later keyword arguments, not the earlier ones. ????
A common use of the apply function is to pass constructor arguments from a subclass up to a base class, especially when the constructor takes many parameters.
What exactly are subclasses and base classes?
Step4: I don't know how to use the second function ????
Fix: the subclass must take the parent class as a parameter!!!
Step5: Use * to mark a tuple and ** to mark a dictionary.
apply's first argument is the function name, its second argument is a tuple, and its third argument is a dictionary, so the notation above expresses this perfectly.
Step6: The calls above are equivalent.
Extending this idea:
Step7: 1.2 import
Step8: 3. The os module
Step9: os.path.splitext: splits off the file extension
os.remove: remove a file. Why does the program above need to call remove?
Step10: 5. The stat module
Step11: os.stat reads out a file's attributes, which are then processed with the stat module; there are several ways to process them, so we have to look at what stat provides.
6. The string module
Step12: Splitting turns the string into a list
Step13: join is the inverse of split.
Step14: replace takes its arguments in this order:
1. the whole string 2. the substring to be replaced 3. the replacement substring
Step15: The result of the replace call above does not affect the original text.
With find, the position is returned when the substring is found, and -1 when it is not.
Step16: Just like arithmetic operations, these methods come in unary and multi-argument flavors:
upper, lower, split, and join are unary; join operates on a list.
replace, find, and count need arguments beyond the text itself: replace needs two extra arguments specifying the replacement, find only needs the substring to search for, and count needs the character to count.
Note in particular: replace does not modify the original string object. (How odd.)
Step17: 7. The re module
Step18: 8. The math and cmath modules
Step19: 10. The operator module
Step20: The four arithmetic operations in operator each work on exactly two numbers, i.e. they only take two arguments. To chain an operation over many numbers you need reduce: it combines two values, treats the result as a single value, and keeps combining it with the next element until the sequence is exhausted. It feels somewhat like apply in that the first argument is a function and the second is a sequence (the concrete arguments).
Step21: getitem and indexOf are a pair of inverse operations: the former returns the value at a given position, the latter returns the position of a given value. Note that the latter is spelled with a capital letter O.
Step22: Tests whether a sequence contains a given value; the result is a boolean.
Step23: 15. The types module
Step24: The types module destroys the current exception state the first time it is imported. In other words, do not import this module (or any module that imports it) inside an exception handling block.
16. The gc module
gc 模块提供了到内建循环垃圾收集器的接口。 | Python Code:
def function(a,b):
print a, b
apply(function, ("wheather", "Canada?"))
apply(function, (1, 3+5))
Explanation: 1. The _ builtin _ module
1.1 apply: call a function with arguments taken from a tuple or dictionary
Python lets you build a function's argument list on the fly: just put all the arguments into a tuple and call the function through the built-in apply function.
End of explanation
function("wheather", "Canada?")
function(1, 3+5)
Explanation: This has the same effect as:
End of explanation
apply(function, (), {"a":"35cm", "b":"12cm"})
apply(function, ("v",), {"b":"love"})
apply(function, ( ,"v"), {"a":"hello"})
Explanation: So why use apply at all?
End of explanation
class Rectangle:
def __init__(self, color="white", width=10, height=10):
print "Create a ", color, self, "sized", width, "X", height
class RoundRectangle:
def __init__(self, **kw):
apply(Rectangle.__init__, (self,), kw)
rect = Rectangle(color="green", width=200, height=156)
rect = RoundRectangle(color="brown", width=20, height=15)
Explanation: What are "keyword arguments"?
apply passes keyword arguments with a dictionary: the dictionary keys are the function's parameter names and the dictionary values are the actual argument values (formal vs. actual parameters).
Judging from the example above, when passing arguments partially you can only supply the later keyword arguments, not the earlier ones. ????
A common use of the apply function is to pass constructor arguments from a subclass up to a base class, especially when the constructor takes many parameters.
What exactly are subclasses and base classes?
End of explanation
class RoundRectangle(Rectangle):
def __init__(self, **kw):
apply(Rectangle.__init__, (self,), kw)
rect2 = RoundRectangle(color= "blue", width=23, height=10)
Explanation: I don't know how to use the second function ????
Fix: the subclass must take the parent class as a parameter!!!
End of explanation
args = ("er",)
kwargs = {"b":"haha"}
function(*args, **kwargs)
apply(function, args, kwargs)
Explanation: Use * to mark a tuple and ** to mark a dictionary.
apply's first argument is the function name, its second argument is a tuple, and its third argument is a dictionary, so the notation above expresses this perfectly.
End of explanation
kw = {"color":"brown", "width":123, "height": 34}
rect3 = RoundRectangle(**kw)
rect4 = Rectangle(**kw)
arg=("yellow", 45, 23)
rect5 = Rectangle(*arg)
Explanation: The calls above are equivalent.
Extending this idea:
End of explanation
import glob, os
modules =[]
for module_file in glob.glob("*-plugin.py"):
try:
module_name, ext = os.path.splitext(os.path.basename(module_file))
module = __import__(module_name)
modules.append(module)
except ImportError:
pass #ignore broken modules
for module in modules:
module.hello()
# Running the loop above prints: example-plugin says hello
# The file example-plugin.py picked up by the glob pattern contains:
def hello():
    print "example-plugin says hello"
def getfunctionname(module_name, function_name):
module = __import__(module_name)
return getattr(module, function_name)
print repr(getfunctionname("dumbdbm","open"))
Explanation: 1.2 import
End of explanation
import os
import string
def replace(file, search_for, replace_with):
back = os.path.splitext(file)[0] + ".bak"
temp = os.path.splitext(file)[0] + ".tmp"
try:
os.remove(temp)
except os.error:
pass
fi = open(file)
fo = open(temp, "w")
for s in fi.readlines():
fo.write(string.replace(s, search_for, replace_with))
fi.close()
fo.close()
try:
os.remove(back)
except os.error:
pass
os.rename(file, back)
os.rename(temp, file)
file = "samples/sample.txt"
replace(file, "hello", "tjena")
replace(file, "tjena", "hello")
Explanation: 3. The os module
End of explanation
def replace1(file, search_for, replace_with):
back = os.path.splitext(file)[0] + ".bak"
temp = os.path.splitext(file)[0] + ".tmp"
try:
os.remove(temp)
except os.error:
pass
fi = open(file)
fo = open(temp, "w")
for s in fi.readlines():
fo.write(string.replace(s, search_for, replace_with))
fi.close()
fo.close()
try:
os.remove(back)
except os.error:
pass
os.rename(file, back)
os.rename(temp, file)
replace1(file, "hello", "tjena")
replace1(file, "tjena", "hello")
doc = os.path.splitext(file)[0] + ".doc"
for file in os.listdir("samples"):
print file
cwd = os.getcwd()
print 1, cwd
os.chdir("samples")
print 2, os.getcwd()
os.chdir(os.pardir)
print 3, os.getcwd()
Explanation: os.path.splitext: splits off the file extension
os.remove: remove a file. Why does the program above need to call remove?
End of explanation
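A quick look at what os.path.splitext actually returns (a small sketch):
print os.path.splitext("samples/sample.txt")   # -> ('samples/sample', '.txt')
print os.path.basename("samples/sample.txt")   # -> 'sample.txt'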
import stat
import os, time
st = os.stat("samples/sample.txt")
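# A sketch of reading a few fields from the os.stat result above via the stat module's index constants:
print "size in bytes:", st[stat.ST_SIZE]
print "last modified:", time.ctime(st[stat.ST_MTIME])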
Explanation: 5. The stat module
End of explanation
import string
text = "Monty Python's Flying Circus"
print "upper", "=>", string.upper(text)
print "lower", "=>", string.lower(text)
print "split", "=>", string.split(text)
Explanation: os.stat reads out a file's attributes, which are then processed with the stat module; there are several ways to process them, so we have to look at what stat provides.
6. The string module
End of explanation
print "join", "=>", string.join(string.split(text))
Explanation: Splitting turns the string into a list
End of explanation
print "replace", "=>", string.replace(text, "Python", "Cplus")
Explanation: join is the inverse of split.
End of explanation
print "find", "=>", string.find(text, "Python")
print "find", "=>", string.find(text, "Python"), string.find(text, "Cplus")
print text
Explanation: replace takes its arguments in this order:
1. the whole string 2. the substring to be replaced 3. the replacement substring
End of explanation
print "count", "=>", string.count(text,"n")
Explanation: The result of the replace call above does not affect the original text.
With find, the position is returned when the substring is found, and -1 when it is not.
End of explanation
print string.atoi("23")
type(string.atoi("23"))
int("234")
type(int("234"))
type(float("334"))
float("334")
string.atof("456")
Explanation: Just like arithmetic operations, these methods come in unary and multi-argument flavors:
upper, lower, split, and join are unary; join operates on a list.
replace, find, and count need arguments beyond the text itself: replace needs two extra arguments specifying the replacement, find only needs the substring to search for, and count needs the character to count.
Note in particular: replace does not modify the original string object. (How odd.)
End of explanation
import re
text = "The Attila the Hun Show"
m = re.match(".", text)
if m:
print repr("."), "=>", repr(m.group(0))
Explanation: 7. The re module
End of explanation
import math
math.pi
math.e
print math.hypot(3,4)
math.sqrt(25)
import cmath
print cmath.sqrt(-1)
Explanation: 8. The math and cmath modules
End of explanation
import operator
operator.add(3,5)
seq = 1,5,7,9
reduce(operator.add,seq)
reduce(operator.sub, seq)
reduce(operator.mul, seq)
float(reduce(operator.div, seq))
Explanation: 10. The operator module
End of explanation
operator.concat("ert", "erui")
operator.getitem(seq,1)
operator.indexOf(seq, 5)
Explanation: The four arithmetic operations in operator each work on exactly two numbers, i.e. they only take two arguments. To chain an operation over many numbers you need reduce: it combines two values, treats the result as a single value, and keeps combining it with the next element until the sequence is exhausted. It feels somewhat like apply in that the first argument is a function and the second is a sequence (the concrete arguments).
End of explanation
operator.sequenceIncludes(seq, 5)
Explanation: getitem and indexOf are a pair of inverse operations: the former returns the value at a given position, the latter returns the position of a given value. Note that the latter is spelled with a capital letter O.
End of explanation
import UserList
def dump(data):
print data,":"
print type(data),"=>",
if operator.isCallable(data):
print "is a CALLABLE data."
if operator.isMappingType(data):
print "is a MAP data."
if operator.isNumberType(data):
print "is a NUMBER data."
if operator.isSequenceType(data):
print "is a SEQUENCE data."
dump(0)
dump([3,4,5,6])
dump("weioiuernj")
dump({"a":"155cm", "b":"187cm"})
dump(len)
dump(UserList)
dump(UserList.UserList)
dump(UserList.UserList())
Explanation: Tests whether a sequence contains a given value; the result is a boolean.
End of explanation
import types
def check(object):
if type(object) is types.IntType:
print "INTEGER",
if type(object) is types.FloatType:
print "FLOAT",
if type(object) is types.StringType:
print "STRING",
if type(object) is types.ClassType:
print "CLASS",
if type(object) is types.InstanceType:
print "INSTANCE",
print
check(0)
check(0.0)
check("picklecai")
class A:
pass
check(A)
a = A()
check(a)
Explanation: 15. The types module
End of explanation
import gc
class Node:
def __init__(self, name):
self.name = name
self.parent = None
self.children = []
def addchild(self, node):
node.parent = self
self.children.append(node)
def __repr__(self):
return "<Node %s at %x" % (repr(self.name), id(self))
root = Node("monty")
root.addchild(Node("eric"))
root.addchild(Node("john"))
root.addchild(Node("michael"))
root.__init__("eric")
root.__repr__()
Explanation: The types module destroys the current exception state the first time it is imported. In other words, do not import this module (or any module that imports it) inside an exception handling block.
16. The gc module
The gc module provides an interface to the built-in cyclic garbage collector.
End of explanation |
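To actually watch the collector reclaim the parent/child reference cycles built above, something along these lines can be run (a minimal sketch; the exact count reported will vary):
root = None                 # drop the last external reference to the tree
print "unreachable objects collected:", gc.collect()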
6,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
author
Step1: Let's try plotting the results. We first need to import the required libraries and methods
Step2: Next, we create numpy arrays to store the (x,y) values
Step3: We have to rewrite the loop to store the values in the arrays. Remember that numpy arrays start from 0.
Step4: We could have saved effort by defining
Step6: Alternatively, and in order to re use code in future problems, we could have created a function.
Step7: Actually, for this particularly simple case, calling a function may introduce unecessary overhead, but it is a an example that we will find useful for future applications. For a simple function like this we could have used a "lambda" function (more about lambda functions <a href="http
Step8: Now, let's study the effects of different time steps on the convergence | Python Code:
T0 = 10. # initial temperature
Ts = 83. # temp. of the environment
r = 0.1 # cooling rate
dt = 0.05 # time step
tmax = 60. # maximum time
nsteps = int(tmax/dt) # number of steps
T = T0
for i in range(1,nsteps+1):
new_T = T - r*(T-Ts)*dt
T = new_T
print i,i*dt, T
# we can also do t = t - r*(t-ts)*dt
Explanation: author:
- 'Adrian E. Feiguin'
title: 'Computational Physics'
...
Ordinary differential equations
Let’s consider a simple 1st order equation:
$$\frac{dy}{dx}=f(x,y)$$
To solve this equation with a computer we need to discretize the differences: we
have to convert the differential equation into a “finite differences” equation. The simplest
solution is Euler’s method.
Euler’s method
Suppose that at a point $x_0$, the function $f$ has a value $y_0$. We
want to find the approximate value of $y$ in a point $x_1$ close to
$x_0$, $x_1=x_0+\Delta x$, with $\Delta x$ small. We assume that $f$,
the rate of change of $y$, is constant in this interval $\Delta x$.
Therefore we find: $$\begin{eqnarray}
&& dx \approx \Delta x &=&x_1-x_0, \\
&& dy \approx \Delta y &=&y_1-y_0,\end{eqnarray}$$ with
$y_1=y(x_1)=y(x_0+\Delta x)$. Then we re-write the differential equation in terms of discrete differences as:
$$\frac{\Delta y}{\Delta x}=f(x,y)$$ or
$$\Delta y = f(x,y)\Delta x$$
and approximate the value of $y_1$ as
$$y_1=y_0+f(x_0,y_0)(x_1-x_0)$$ We can generalize this formula to find
the value of $y$ at $x_2=x_1+\Delta x$ as
$$y_{2}=y_1+f(x_1,y_1)\Delta x,$$ or in the general case:
$$y_{n+1}=y_n+f(x_n,y_n)\Delta x$$
This is a good approximation as long as $\Delta x$ is “small”. What is
small? Depends on the problem, but it is basically defined by the “rate
of change”, or “smoothness” of $f$. $f(x)$ has to behave smoothly and
without rapid variations in the interval $\Delta x$.
Notice that Euler’s method is equivalent to a 1st order Taylor expansion
about the point $x_0$. The “local error” calculating $x_1$ is then
$O(\Delta x^2)$. If we use the method $N$ times to calculate $N$
consecutive points, the propagated “global” error will be
$NO(\Delta x^2)\approx O(\Delta
x)$. This error decreases linearly with decreasing step, so we need to
halve the step size to reduce the error in half. The numerical work for
each step consists of a single evaluation of $f$.
Exercise 1.1: Newton’s law of cooling
If the temperature difference between an object and its surroundings is
small, the rate of change of the temperature of the object is
proportional to the temperature difference: $$\frac{dT}{dt}=-r(T-T_s),$$
where $T$ is the temperature of the body, $T_s$ is the temperature of
the environment, and $r$ is a “cooling constant” that depends on the
heat transfer mechanism, the contact area with the environment and the
thermal properties of the body. The minus sign appears because if
$T>T_s$, the temperature must decrease.
Write a program to calculate the temperature of a body at a time $t$,
given the cooling constant $r$ and the temperature of the body at time
$t=0$. Plot the results for $r=0.1\frac{1}{min}$; $T_0=83^{\circ} C$
using different intervals $\Delta t$ and compare with exact (analytical)
results.
End of explanation
%matplotlib inline
import numpy as np
from matplotlib import pyplot
Explanation: Let's try plotting the results. We first need to import the required libraries and methods
End of explanation
my_time = np.zeros(nsteps)
my_temp = np.zeros(nsteps)
Explanation: Next, we create numpy arrays to store the (x,y) values
End of explanation
T = T0
my_temp[0] = T0
for i in range(1,nsteps):
T = T - r*(T-Ts)*dt
my_time[i] = i*dt
my_temp[i] = T
pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
Explanation: We have to rewrite the loop to store the values in the arrays. Remember that numpy arrays start from 0.
End of explanation
my_time = np.linspace(0.,tmax,nsteps)
pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
Explanation: We could have saved effort by defining
End of explanation
def euler(y, f, dx):
    """Computes y_new = y + f*dx

    Parameters
    ----------
    y : float
        old value of y_n at x_n
    f : float
        first derivative f(x,y) evaluated at (x_n,y_n)
    dx : float
        x step
    """
    return y + f*dx
T = T0
for i in range(1,nsteps):
T = euler(T, -r*(T-Ts), dt)
my_temp[i] = T
Explanation: Alternatively, and in order to reuse code in future problems, we could have created a function.
End of explanation
euler = lambda y, f, dx: y + f*dx
Explanation: Actually, for this particularly simple case, calling a function may introduce unnecessary overhead, but it is an example that we will find useful for future applications. For a simple function like this we could have used a "lambda" function (more about lambda functions <a href="http://www.secnetix.de/olli/Python/lambda_functions.hawk">here</a>).
End of explanation
dt = 1.
#my_color = ['#003366','#663300','#660033','#330066']
my_color = ['red', 'green', 'blue', 'black']
for j in range(0,4):
nsteps = int(tmax/dt) #the arrays will have different size for different time steps
my_time = np.linspace(dt,tmax,nsteps)
my_temp = np.zeros(nsteps)
T = T0
for i in range(1,nsteps):
T = euler(T, -r*(T-Ts), dt)
my_temp[i] = T
pyplot.plot(my_time, my_temp, color=my_color[j], ls='-', lw=3)
dt = dt/2.
pyplot.xlabel('time');
pyplot.ylabel('temperature');
pyplot.xlim(8,10);
pyplot.ylim(48,58);
Explanation: Now, let's study the effects of different time steps on the convergence:
End of explanation |
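For reference, the numerical curves can also be checked against the exact analytical solution $T(t) = T_s + (T_0 - T_s)e^{-rt}$ (a minimal sketch, assuming the variables defined in the cells above are still in scope):
t_exact = np.linspace(0., tmax, 200)
T_exact = Ts + (T0 - Ts)*np.exp(-r*t_exact)   # exact solution of dT/dt = -r(T - Ts)
pyplot.plot(t_exact, T_exact, color='gray', ls='--', lw=2, label='exact')
pyplot.plot(my_time, my_temp, color='red', ls='-', lw=1, label='Euler (smallest dt)')
pyplot.legend()
pyplot.xlabel('time')
pyplot.ylabel('temperature');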
6,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 8
Step1: Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python
Step2: Now just call the array method using our list from before!
Step3: To reference an element in the array, just use the same notation we did for lists
Step4: You can also separate dimensions by commas
Step5: Remember, with indexing matrices
Step6: 2
Step7: Part 2
Step8: Now, let's see the same operation, this time with NumPy arrays.
Step9: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together
Step10: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$
Step11: If you recall from algebra, matrix multiplication $A \times B$ involves multipliying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator
Step12: In almost every case, vectorized operations are far more efficient than loops written in Python to do the same thing.
Step13: If you're implementing loops in conjunction with arrays, see if there's any way to use vectorized operations instead.
In summary
NumPy arrays have all the abilities of lists (indexing, mutability, slicing) plus a whole lot of additional benefits, such as vectorized computations.
About the only limitation of NumPy arrays relative to Python lists is constructing them
Step14: With NumPy arrays, all the same functionality you know and love from lists is still there.
Step15: These operations all work whether you're using Python lists or NumPy arrays.
Multidimensional arrays
The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices.
To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists
Step16: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array() method
Step17: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way
Step18: With NumPy arrays, you can use that same notation...or you can use comma-separated indices (this may be more familiar to Matlab and R users)
Step19: It's not earth-shattering, but enough to warrant a heads-up.
When you index NumPy arrays, the nomenclature used is that of an axis
Step20: Here's a great visual aid of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3)
Step21: We know video is 3D because we can also access its ndim attribute.
Step22: Another example
Step23: Axis 0
Step24: Can you explain this number?
Step25: These are admittedly extreme examples, but they're to illustrate how flexible NumPy arrays are.
If in doubt
Step26: What is sliced.shape?
Step27: What is sliced_again.shape?
Step28: What is sliced_finally.shape?
Trick question! I've indexed all three axes, so the value I get back is no longer a NumPy array, but rather the value type I've filled the array with.
Step29: Part 4
Step30: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because broadcasting!
Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array.
Put another way
Step31: In this example, the scalar value 1 is broadcast to all the elements of the NumPy array zeros, converting the operation to element-wise addition.
This all happens under the NumPy/Python hood--we don't see it! It "just works"...most of the time.
There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as
both dimensions are of equal size (e.g., two 1D arrays of length 10 have equal-sized dimensions, so adding them together will work fine), OR
one of the dimensions is 1 (i.e., it's a scalar)
If these rules aren't met, you get all kinds of strange errors
Step32: On some intuitive level, this hopefully makes sense
Step33: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Part 5
Step34: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors.
It could be
7 people who are described by their height, weight, age, and 40-yard dash time.
7 video games, each described by their review rating, Steam downloads count, average number of active players, and total cheating complaints
7 different genes and their expression levels under 4 separate conditions or replicates
???
Whatever our data, a common first step before any analysis involves some kind of preprocessing. In this case, if the example we're looking at is the gene expression level from the previous slide, then perhaps we know that any negative values are recording errors.
So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops to go through each element of the matrix, but it's much easier (and faster) to use boolean indexing.
First, we create a mask. This is what it sounds like
Step35: Now, we can use our mask to access only the indices we want to set to 0.
Step36: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
One small caveat with boolean indexing.
Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals.
But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators
Step37: Fancy Indexing
"Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough
Step38: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
In addition to using regular integer indices, and masks to perform boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them.
First, let's say I want the first three rows. Well, we already know how to do that with slicing
Step39: Now, I want the first three even-numbered rows. You could do this with a loop, but it might be easier with fancy indexing
Step40: See how easy that is?
Now, let's say I want rows 7, 0, 5, and 2.
In that order!
Step41: Yep, the order in which I list the integers in the indices array is the ordering in which I get them back. Very convenient for retrieving specific data in a specific order!
But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array.
(bear with me, I promise this is as bad as it gets)
Step42: Let's step through this slowly.
When you pass in tuples as indices, they act as $(x, y)$ coordinate pairs | Python Code:
matrix = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9] ]
print(matrix)
Explanation: Lecture 8: Advanced Data Structures
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
Overview and Objectives
Before we get to some of the more advanced sequence analysis techniques, we need to cover some critical concepts using advanced Python data structures. In this lecture, we'll see how we can use an external library to make these computations much easier and much faster. By the end of this lecture, you should be able to:
Compare and contrast NumPy arrays to built-in Python lists
Define "broadcasting" in the context of vectorized programming
Use NumPy arrays in place of explicit loops for basic arithmetic operations
Understand the benefits of NumPy's "fancy indexing" capabilities and its advantages over built-in indexing
Part 1: Introduction to NumPy
NumPy, or Numerical Python, is an incredible library of basic functions and data structures that provide a robust foundation for computational scientists.
Put another way: if you're using Python and doing any kind of math, you'll probably use NumPy.
At this point, NumPy is so deeply embedded in so many other 3rd-party modules related to scientific computing that even if you're not making explicit use of it, at least one of the other modules you're using probably is.
NumPy's core: the ndarray
NumPy, or Numerical Python, is an incredible library of basic functions and data structures that provide a robust foundation for computational scientists.
End of explanation
import numpy
Explanation: Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python: they're easy to use, but oh so slow.
By contrast, in NumPy, we have the ndarray structure (short for "n-dimensional array") that is a highly optimized version of Python lists, perfect for fast and efficient computations. To make use of NumPy arrays, import NumPy (it's installed by default in Anaconda, and on JupyterHub):
End of explanation
arr = numpy.array(matrix)
print(arr)
Explanation: Now just call the array method using our list from before!
End of explanation
arr[0]
arr[2][2]
Explanation: To reference an element in the array, just use the same notation we did for lists:
End of explanation
arr[2, 2]
Explanation: You can also separate dimensions by commas:
End of explanation
a = numpy.array([45, 2, 59, -2, 70, 3, 6, 790])
print("Minimum: {}".format(numpy.min(a)))
print("Cosine of 1st element: {:.2f}".format(numpy.cos(a[0])))
Explanation: Remember, with indexing matrices: the first index is the row, the second index is the column.
NumPy's submodules
NumPy has an impressive array of utility modules that come along with it, optimized to use its ndarray data structure. I highly encourage you to use them, even if you're not using NumPy arrays.
1: Basic mathematical routines
All the core functions you could want; for example, all the built-in Python math routines (trig, logs, exponents, etc) all have NumPy versions. (numpy.sin, numpy.cos, numpy.log, numpy.exp, numpy.max, numpy.min)
End of explanation
print(numpy.random.randint(10)) # Random integer between 0 (inclusive) and 10 (exclusive)
print(numpy.random.randint(10)) # Another one!
print(numpy.random.randint(10)) # Yet another one!
Explanation: 2: Fourier transforms
If you do any signal processing using Fourier transforms (which we might, later!), NumPy has an entire sub-module full of tools for this type of analysis in numpy.fft
3: Linear algebra
This is most of your vector and matrix linear algebra operations, from vector norms (numpy.linalg.norm) to singular value decomposition (numpy.linalg.svd) to matrix determinants (numpy.linalg.det).
4: Random numbers
NumPy has a phenomenal random number library in numpy.random. In addition to generating uniform random numbers in a certain range, you can also sample from any known parametric distribution.
End of explanation
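A few quick one-liners illustrating the submodules just mentioned (a sketch):
print(numpy.linalg.norm([3.0, 4.0]))            # vector 2-norm -> 5.0
print(numpy.random.normal(loc=0.0, scale=1.0))  # one draw from a standard normal distribution
print(numpy.fft.fft([1.0, 0.0, 0.0, 0.0]))      # discrete Fourier transform of a unit impulse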
vector = [4.0, 15.0, 6.0, 2.0]
# To normalize this to unit length, we need to divide each element by the vector's magnitude.
# To learn it's magnitude, we need to loop through the whole vector.
# So. We need two loops!
magnitude = 0.0
for element in vector:
magnitude += element ** 2
magnitude = (magnitude ** 0.5) # square root
print("Original magnitude: {:.2f}".format(magnitude))
new_magnitude = 0.0
for i in range(len(vector)):
element = vector[i]
normalized = element / magnitude
vector[i] = normalized
new_magnitude += normalized ** 2
new_magnitude = (new_magnitude ** 0.5)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
Explanation: Part 2: Vectorized Arithmetic
"Vectorized arithmetic" refers to how NumPy allows you to efficiently perform arithmetic operations on entire NumPy arrays at once, as you would with "regular" Python variables.
For example: let's say you have a vector and you want to normalize it to be unit length; that involves dividing every element in the vector by a constant (the magnitude of the vector). With lists, you'd have to loop through them manually.
End of explanation
import numpy as np # This tends to be the "standard" convention when importing NumPy.
import numpy.linalg as nla
vector = [4.0, 15.0, 6.0, 2.0]
np_vector = np.array(vector) # Convert to NumPy array.
magnitude = nla.norm(np_vector) # Computing the magnitude: one-liner.
print("Original magnitude: {:.2f}".format(magnitude))
np_vector /= magnitude # Vectorized division!!! No loop needed!
new_magnitude = nla.norm(np_vector)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
Explanation: Now, let's see the same operation, this time with NumPy arrays.
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = x + y
print(z)
Explanation: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together:
End of explanation
A = np.array([ [1, 2], [3, 4] ])
B = np.array([ [5, 6], [7, 8] ])
Explanation: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$:
End of explanation
A @ B
Explanation: If you recall from algebra, matrix multiplication $A \times B$ involves multiplying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator: the @ symbol!
End of explanation
def multiply_loops(A, B):
    # Loop-based matrix multiplication: C[i, j] = sum over k of A[i, k] * B[k, j]
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            for k in range(A.shape[1]):
                C[i, j] += A[i, k] * B[k, j]
    return C
def multiply_vector(A, B):
return A @ B
X = np.random.random((100, 100))
Y = np.random.random((100, 100))
%timeit multiply_loops(X, Y)
%timeit multiply_vector(X, Y)
Explanation: In almost every case, vectorized operations are far more efficient than loops written in Python to do the same thing.
End of explanation
li = [1, 2, 3, 4, 5]
print(li)
print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(li[2:]) # Print element 2 and everything after that
print(li[:-1]) # Print everything BEFORE element -1 (the last one)
Explanation: If you're implementing loops in conjunction with arrays, see if there's any way to use vectorized operations instead.
In summary
NumPy arrays have all the abilities of lists (indexing, mutability, slicing) plus a whole lot of additional benefits, such as vectorized computations.
About the only limitation of NumPy arrays relative to Python lists is constructing them: if you're building an array from scratch, the best option would be to build the list and then pass that to numpy.array() to convert it. Adjusting the length of the NumPy array after it's constructed is more difficult than a standard list.
The Python ecosystem is huge. There is some functionality that comes with Python by default, and some of this default functionality is available immediately; the other default functionality is accessible using import statements. There is even more functionality from 3rd-party vendors, but it needs to be installed before it can be imported. NumPy falls in this lattermost category.
Vectorized operations are always, always preferred to loops. They're easier to write, easier to understand, and in almost all cases, much more efficient.
Part 3: NumPy Array Indexing and Slicing
Hopefully, you recall basic indexing and slicing from Lecture 4.
End of explanation
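As a side note on construction: when you are building an array from scratch, NumPy also provides direct constructors that skip the intermediate Python list (a quick sketch):
import numpy as np
print(np.zeros(5))               # five 0.0s
print(np.ones((2, 3)))           # a 2-by-3 matrix of 1.0s
print(np.arange(0, 10, 2))       # like range(), but returns an ndarray
print(np.linspace(0.0, 1.0, 5))  # 5 evenly spaced points from 0 to 1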
import numpy as np
x = np.array([1, 2, 3, 4, 5])
print(x)
print(x[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(x[2:]) # Print element 2 and everything after that
print(x[:-1]) # Print everything BEFORE element -1 (the last one)
Explanation: With NumPy arrays, all the same functionality you know and love from lists is still there.
End of explanation
python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]
print(python_matrix)
Explanation: These operations all work whether you're using Python lists or NumPy arrays.
Multidimensional arrays
The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices.
To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists:
End of explanation
numpy_matrix = np.array(python_matrix)
print(numpy_matrix)
Explanation: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array() method:
End of explanation
print(python_matrix) # The full list-of-lists
print(python_matrix[0]) # The inner-list at the 0th position of the outer-list
print(python_matrix[0][0]) # The 0th element of the 0th inner-list
Explanation: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way:
End of explanation
print(numpy_matrix)
print(numpy_matrix[0])
print(numpy_matrix[0, 0]) # Note the comma-separated format!
Explanation: With NumPy arrays, you can use that same notation...or you can use comma-separated indices (this may be more familiar to Matlab and R users):
End of explanation
x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
print(x)
print()
print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1.
Explanation: It's not earth-shattering, but enough to warrant a heads-up.
When you index NumPy arrays, the nomenclature used is that of an axis: you are indexing specific axes of a NumPy array object. In particular, when access the .shape attribute on a NumPy array, that tells you two things:
1: How many axes there are. This number is len(ndarray.shape), or the number of elements in the tuple returned by .shape. In our previous example, numpy_matrix.shape would return (3, 3), so it would have 2 axes.
2: How many elements are in each axis. In our above example, where numpy_matrix.shape returns (3, 3), there are 2 axes (since the length of that tuple is 2), and both axes have 3 elements (hence the numbers 3).
Here's the breakdown of axis notation and indices used in a 2D NumPy array:
<img src="https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/httpatomoreillycomsourceoreillyimages1346880.png" />
As with lists, if you want an entire axis, just use the colon operator all by itself:
End of explanation
video = np.empty(shape = (1920, 1080, 5000))
print("Axis 0 (frame height) : {}".format(video.shape[0])) # How many rows?
print("Axis 1 (frame width): {}".format(video.shape[1])) # How many columns?
print("Axis 2 (number of frames): {}".format(video.shape[2])) # How many frames?
Explanation: Here's a great visual aid of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3):
<img src="https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/httpatomoreillycomsourceoreillyimages1346882.png" width="50%" />
Putting the "multi" in multidimensional
Depending on your field, it's entirely possible that you'll go beyond 2D matrices. If so, it's important to be able to recognize what these structures "look" like.
For example, a video can be thought of as a 3D cube. Put another way, it's a NumPy array with 3 axes: the first axis is height, the second axis is width, and the third axis is number of frames.
End of explanation
print(video.ndim)
del video
Explanation: We know video is 3D because we can also access its ndim attribute.
End of explanation
tensor = np.empty(shape = (3, 640, 480, 360, 100))
print(tensor.shape)
Explanation: Another example: 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object:
Each time point is a 3D volume of some object of interest (x, y, z coordinates)
Time-lapse means a fourth dimension
Using three fluorescent markers--nuclear, f-actin, and mitochondria--means a fifth dimension
End of explanation
print(tensor.size)
Explanation: Axis 0: color channel, used to differentiate between fluorescent markers
Axis 1: height of video frames (i.e. rows)
Axis 2: width of video frames (i.e. columns)
Axis 3: depth of 3D volume at each time interval
Axis 4: time interval (frame number)
We can also ask how many elements there are total, using the size attribute:
End of explanation
del tensor
Explanation: Can you explain this number?
End of explanation
example = np.empty(shape = (3, 5, 9))
print(example.shape)
sliced = example[0] # Indexed the first axis.
Explanation: These are admittedly extreme examples, but they're to illustrate how flexible NumPy arrays are.
If in doubt: once you index the first axis, the NumPy array you get back has the shape of all the remaining axes.
Put another way: when you index an axis directly, that axis essentially "drops out", and you're left with an array that has all the remaining axes you didn't index.
End of explanation
print(sliced.shape)
sliced_again = example[0, 0] # Indexed the first and second axes.
Explanation: What is sliced.shape?
End of explanation
print(sliced_again.shape)
sliced_finally = example[0, 0, 0]
Explanation: What is sliced_again.shape?
End of explanation
type(sliced_finally)
print(sliced_finally)
Explanation: What is sliced_finally.shape?
Trick question! I've indexed all three axes, so the value I get back is no longer a NumPy array, but rather the value type I've filled the array with.
End of explanation
x = np.array([1, 2, 3, 4, 5])
x += 10
print(x)
Explanation: Part 4: NumPy Array Broadcasting
"Broadcasting" is a fancy term for how NumPy handles vectorized operations when arrays of differing shapes are involved. (this is, in some sense, "how the sausage is made")
When you write code like this:
End of explanation
zeros = np.zeros(shape = (3, 4)) # A 3-by-4 matrix of zeros.
ones = 1
zeros += ones
print(zeros)
Explanation: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because broadcasting!
Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array.
Put another way: Python will internally recognize that two NumPy arrays are of different shapes, but will nonetheless attempt to invisibly and temporarily "reshape" them so that the operation the programmer (you) wrote can still happen.
We saw this in our previous example: the low-dimensional scalar "10" was replicated, or broadcast, to each element of the array x so that the addition operation could be performed individually on each element of the array.
This concept can be generalized to higher-dimensional NumPy arrays.
End of explanation
x = np.zeros(shape = (3, 3)) # A 3-by-3 matrix of zeros.
y = np.ones(4) # A 1D array of four elements (all of them 1s).
x + y
Explanation: In this example, the scalar value 1 is broadcast to all the elements of the NumPy array zeros, converting the operation to element-wise addition.
This all happens under the NumPy/Python hood--we don't see it! It "just works"...most of the time.
There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as
both dimensions are of equal size (e.g., two 1D arrays of length 10 have equal-sized dimensions, so adding them together will work fine), OR
one of the dimensions is 1 (i.e., it's a scalar)
If these rules aren't met, you get all kinds of strange errors:
End of explanation
x = np.zeros(shape = (3, 4))
y = np.array([1, 2, 3, 4])
z = x + y
print(z)
Explanation: On some intuitive level, this hopefully makes sense: there's no reasonable arithmetic operation that can be performed when you have one $3 \times 3$ matrix and a vector of length 4.
To be rigorous: it's the trailing dimensions / axes that you want to make sure line up.
Recall how matrix-matrix multiplication works: the inner dimensions have to match for the multiplication to work at all.
If you do $A \times B$, where $A$ is $3 \times 5$ and $B$ is $5 \times 4$, the inner dimensions are the two 5s. Since these match, the multiplication will work.
But if you do $B \times A$, now the inner dimensions are 4 (for $B$) and 3 (for $A$). Since they don't match, you can't multiply them in this order.
End of explanation
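The "one of the dimensions is 1" rule also lets a column vector broadcast against a row vector (a small sketch):
col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,)
print(col + row)                   # broadcasts to shape (3, 4)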
x = np.random.standard_normal(size = (7, 4))
print(x)
Explanation: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Part 5: Advanced Indexing
Hopefully, the rules of indexing and broadcasting have made sense on some level so far.
Unfortunately, it gets still more complicated. These complications, however, are ultimately there to make life easier.
Boolean Indexing
We've already seen that you can index by slicing. Using the colon operator, you can even specify ranges, slicing out entire swaths of rows and columns.
But suppose we want something very specific; data in our array which satisfies certain criteria, as opposed to data which is found at certain indices.
Put another way: can we pull data out of an array that meets certain conditions?
Let's say you have some toy data:
End of explanation
mask = x < 0 # For every element of x, ask: is it < 0?
print(mask)
Explanation: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors.
It could be
7 people who are described by their height, weight, age, and 40-yard dash time.
7 video games, each described by their review rating, Steam downloads count, average number of active players, and total cheating complaints
7 different genes and their expression levels under 4 separate conditions or replicates
???
Whatever our data, a common first step before any analysis involves some kind of preprocessing. In this case, if the example we're looking at is the gene expression level from the previous slide, then perhaps we know that any negative values are recording errors.
So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops to go through each element of the matrix, but it's much easier (and faster) to use boolean indexing.
First, we create a mask. This is what it sounds like: it "masks" certain portions of the data, picking out only those numbers that meet the condition of the mask.
End of explanation
x[mask] = 0
print(x)
Explanation: Now, we can use our mask to access only the indices we want to set to 0.
End of explanation
mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5
x[mask] = 99
print(x)
Explanation: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
One small caveat with boolean indexing.
Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals.
But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators: & (for and) and | (for or).
End of explanation
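A small demonstration of why the arithmetic versions of the operators are required (a sketch):
a = np.array([0.2, 0.7, 1.5])
print((a > 0.5) & (a < 1.0))    # element-wise "and": [False  True False]
# (a > 0.5) and (a < 1.0)       # would raise: "The truth value of an array ... is ambiguous"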
import numpy as np
matrix = np.empty(shape = (8, 4))
for i in range(8):
matrix[i] = i # Broadcasting is happening here!
print(matrix)
Explanation: Fancy Indexing
"Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough: fancy indexing allows you to index arrays with other [integer] arrays.
Now, to demonstrate. Let's build a 2D array that, for the sake of simplicity, has across each row the index of that row.
End of explanation
print(matrix[0:3])
Explanation: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
In addition to using regular integer indices, and masks to perform boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them.
First, let's say I want the first three rows. Well, we already know how to do that with slicing:
End of explanation
indices = np.array([0, 2, 4])
print(matrix[indices])
Explanation: Now, I want the first three even-numbered rows. You could do this with a loop, but it might be easier with fancy indexing:
End of explanation
indices = np.array([7, 0, 5, 2])
print(matrix[indices])
Explanation: See how easy that is?
Now, let's say I want rows 7, 0, 5, and 2.
In that order!
End of explanation
matrix = np.arange(32).reshape((8, 4))
print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise.
indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays!
print(matrix[indices])
Explanation: Yep, the order in which I list the integers in the indices array is the ordering in which I get them back. Very convenient for retrieving specific data in a specific order!
But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array.
(bear with me, I promise this is as bad as it gets)
End of explanation
( np.array([1, 7, 4]), np.array([3, 0, 1]) )
Explanation: Let's step through this slowly.
When you pass in tuples as indices, they act as $(x, y)$ coordinate pairs: the first NumPy array of the tuple is the list of $x$ coordinates, while the second NumPy array is the list of corresponding $y$ coordinates.
In this way, the corresponding elements of the two NumPy arrays in the tuple give you the row and column indices to be selected from the original NumPy array.
In our previous example, this was our tuple of indices:
End of explanation |
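One related trick worth noting (our addition, not part of the original notebook): if you want a rectangular sub-block rather than individual elements, np.ix_ builds the row/column index arrays for you:
import numpy as np
matrix = np.arange(32).reshape((8, 4))
block = matrix[np.ix_([1, 7, 4], [3, 0, 1])]   # rows 1, 7, 4 crossed with columns 3, 0, 1 -> a 3x3 block
print(block)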
6,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
304 - Medical Entity Extraction with a BiLSTM
In this tutorial we use a Bidirectional LSTM entity extractor from the MMLSpark
model downloader to extract entities from PubMed medical abstracts.
Our goal is to identify useful entities in a block of free-form text. This is a
nontrivial task because entities might be referenced in the text using a variety of
synonyms, abbreviations, or formats. Our target output for this model is a set
of tags that specify what kind of entity is referenced. The model we use was
trained on a large dataset of publicly tagged PubMed abstracts. An example
annotated sequence is given below, "O" represents no tag
Step1: Get the model and extract the data.
Step2: Download the embeddings and the tokenizer
We use the nltk punkt sentence and word tokenizers and a set of embeddings trained on PubMed Articles
Step3: Load the embeddings and create functions for encoding sentences
Step4: Run the CNTKModel
Step5: Show the annotated text | Python Code:
from mmlspark import CNTKModel, ModelDownloader
from pyspark.sql.functions import udf, col
from pyspark.sql.types import IntegerType, ArrayType, FloatType, StringType
from pyspark.sql import Row
from os.path import abspath, join
import numpy as np
import pickle
from nltk.tokenize import sent_tokenize, word_tokenize
import os, tarfile, pickle
import urllib.request
import nltk
Explanation: 304 - Medical Entity Extraction with a BiLSTM
In this tutorial we use a Bidirectional LSTM entity extractor from the MMLSpark
model downloader to extract entities from PubMed medical abstracts.
Our goal is to identify useful entities in a block of free-form text. This is a
nontrivial task because entities might be referenced in the text using a variety of
synonyms, abbreviations, or formats. Our target output for this model is a set
of tags that specify what kind of entity is referenced. The model we use was
trained on a large dataset of publicly tagged PubMed abstracts. An example
annotated sequence is given below, "O" represents no tag:
|I-Chemical | O |I-Chemical | O | O |I-Chemical | O |I-Chemical | O | O | O | O |I-Disease |I-Disease| O | O |
|:---: |:---:|:---: |:---:|:---:|:---: |:---:|:---: |:---:|:---: |:---:|:---:|:---: |:---: |:---:|:---: |
|Baricitinib| , |Methotrexate| , | or |Baricitinib|Plus |Methotrexate| in |Patients|with |Early|Rheumatoid|Arthritis| Who |Had...|
End of explanation
modelName = "BiLSTM"
modelDir = abspath("models")
d = ModelDownloader(spark, "wasb://" + modelDir)
modelSchema = d.downloadByName(modelName)
modelName = "BiLSTM"
modelDir = abspath("models")
d = ModelDownloader(spark, "file://" + modelDir)
modelSchema = d.downloadByName(modelName)
Explanation: Get the model and extract the data.
End of explanation
nltk.download("punkt", download_dir=modelDir)
nltk.data.path.append(modelDir)
wordEmbFileName = "WordEmbeddings_PubMed.pkl"
pickleFile = join(abspath("models"), wordEmbFileName)
if not os.path.isfile(pickleFile):
urllib.request.urlretrieve("https://mmlspark.blob.core.windows.net/datasets/" + wordEmbFileName, pickleFile)
Explanation: Download the embeddings and the tokenizer
We use the nltk punkt sentence and word tokenizers and a set of embeddings trained on PubMed Articles
End of explanation
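As a quick aside we've added (not part of the original notebook), this is roughly what the punkt tokenizers do to a snippet of abstract text:
from nltk.tokenize import sent_tokenize, word_tokenize
# relies on the punkt data downloaded above being on nltk.data.path
sample = "Baricitinib improved disease activity. MTX monotherapy was the active comparator."
print(sent_tokenize(sample))   # two sentences
print(word_tokenize(sample))   # individual tokens, with punctuation split off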
pickleContent = pickle.load(open(pickleFile, "rb"), encoding="latin-1")
wordToIndex = pickleContent["word_to_index"]
wordvectors = pickleContent["wordvectors"]
classToEntity = pickleContent["class_to_entity"]
nClasses = len(classToEntity)
nFeatures = wordvectors.shape[1]
maxSentenceLen = 613
content = "Baricitinib, Methotrexate, or Baricitinib Plus Methotrexate in Patients with Early Rheumatoid\
Arthritis Who Had Received Limited or No Treatment with Disease-Modifying-Anti-Rheumatic-Drugs (DMARDs):\
Phase 3 Trial Results. Keywords: Janus kinase (JAK), methotrexate (MTX) and rheumatoid arthritis (RA) and\
Clinical research. In 2 completed phase 3 studies, baricitinib (bari) improved disease activity with a\
satisfactory safety profile in patients (pts) with moderately-to-severely active RA who were inadequate\
responders to either conventional synthetic1 or biologic2DMARDs. This abstract reports results from a\
phase 3 study of bari administered as monotherapy or in combination with methotrexate (MTX) to pts with\
early active RA who had limited or no prior treatment with DMARDs. MTX monotherapy was the active comparator."
sentences = sent_tokenize(content)
df = spark.createDataFrame(enumerate(sentences), ["index","sentence"])
# Add the tokenizers to all worker nodes
def prepNLTK(partition):
localPath = abspath("nltk")
nltk.download("punkt", localPath)
nltk.data.path.append(localPath)
return partition
df = df.rdd.mapPartitions(prepNLTK).toDF()
tokenizeUDF = udf(word_tokenize, ArrayType(StringType()))
df = df.withColumn("tokens",tokenizeUDF("sentence"))
countUDF = udf(len, IntegerType())
df = df.withColumn("count",countUDF("tokens"))
def wordToEmb(word):
return wordvectors[wordToIndex.get(word.lower(), wordToIndex["UNK"])]
def featurize(tokens):
X = np.zeros((maxSentenceLen, nFeatures))
X[-len(tokens):,:] = np.array([wordToEmb(word) for word in tokens])
return [float(x) for x in X.reshape(maxSentenceLen, nFeatures).flatten()]
featurizeUDF = udf(featurize, ArrayType(FloatType()))
df = df.withColumn("features", featurizeUDF("tokens"))
df.show()
Explanation: Load the embeddings and create functions for encoding sentences
End of explanation
model = CNTKModel() \
.setModelLocation(spark, modelSchema.uri) \
.setInputCol("features") \
.setOutputCol("probs") \
.setOutputNodeIndex(0) \
.setMiniBatchSize(1)
df = model.transform(df).cache()
df.show()
def probsToEntities(probs, wordCount):
reshaped_probs = np.array(probs).reshape(maxSentenceLen, nClasses)
reshaped_probs = reshaped_probs[-wordCount:,:]
return [classToEntity[np.argmax(probs)] for probs in reshaped_probs]
toEntityUDF = udf(probsToEntities,ArrayType(StringType()))
df = df.withColumn("entities", toEntityUDF("probs", "count"))
df.show()
Explanation: Run the CNTKModel
End of explanation
# Color Code the Text based on the entity type
colors = {
"B-Disease": "blue",
"I-Disease":"blue",
"B-Drug":"lime",
"I-Drug":"lime",
"B-Chemical":"lime",
"I-Chemical":"lime",
"O":"black",
"NONE":"black"
}
def prettyPrint(words, annotations):
formattedWords = []
for word,annotation in zip(words,annotations):
formattedWord = "<font size = '2' color = '{}'>{}</font>".format(colors[annotation], word)
if annotation in {"O","NONE"}:
formattedWords.append(formattedWord)
else:
formattedWords.append("<b>{}</b>".format(formattedWord))
return " ".join(formattedWords)
prettyPrintUDF = udf(prettyPrint, StringType())
df = df.withColumn("formattedSentence", prettyPrintUDF("tokens", "entities")) \
.select("formattedSentence")
sentences = [row["formattedSentence"] for row in df.collect()]
df.registerTempTable("df")
from IPython.core.display import display, HTML
for sentence in sentences:
display(HTML(sentence))
%%sql -q -o df
select * from df
%%local
sentences =df["formattedSentence"]
from IPython.core.display import display, HTML
for sentence in sentences:
display(HTML(sentence))
Explanation: Show the annotated text
End of explanation |
6,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: 1. Create the river network model grid
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
Step2: Our network consists of seven links between eight nodes. X and Y, above, represent the plan-view coordinates of the node locations. nodes_at_link describes the node indices that are connected by each link. For example, link 2 connects node 1 and node 7.
Next, we need to populate the grid with the relevant topographic information
Step3: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river bed in the absence of modeled sediment).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity.
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables
Step4: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which ranges from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
Step5: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example
Step6: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time
Step7: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
Step8: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
Step9: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initalize and run a Landlab flow director component
Step10: Then, we initialize the network sediment transporter
Step11: Now we are ready to run the model forward in time
Step12: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
For a thorough tutorial on the plotting tools, see this notebook.
Below, each link (represented as a line) is colored by the total volume of sediment on the link. Each parcel is colored by the parcel grain size.
Step13: In addition, the results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total transport distance of all parcels through the model run as a function of parcel diameter. | Python Code:
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import numpy as np
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import graph
from landlab.plot import plot_network_and_parcels
%matplotlib inline
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using the Landlab NetworkSedimentTransporter component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to model the transport of coarse sediment through a synthetic river network using the NetworkSedimentTransporter Landlab component.
For an equivalent tutorial demonstrating initialization of the NetworkSedimentTransporter with a shapefile river network, click here.
In this example we will:
- create a synthetic Landlab grid to represent a river network
- create sediment "parcels" that will transport through the river network, represented as items in a Landlab DataRecord
- run the component
- plot the results of the model run
Import the necessary libraries, plus a bit of magic so that we can plot within this notebook:
End of explanation
y_of_node = (0, 100, 200, 200, 300, 400, 400, 125)
x_of_node = (0, 0, 100, -50, -100, 50, -150, -100)
nodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6))
grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)
plt.figure(0)
graph.plot_graph(grid, at="node,link")
Explanation: 1. Create the river network model grid
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
End of explanation
grid.at_node["topographic__elevation"] = [0.0, 0.08, 0.25, 0.15, 0.25, 0.4, 0.8, 0.8]
grid.at_node["bedrock__elevation"] = [0.0, 0.08, 0.25, 0.15, 0.25, 0.4, 0.8, 0.8]
grid.at_link["flow_depth"] = 2.5 * np.ones(grid.number_of_links) # m
grid.at_link["reach_length"] = 200 * np.ones(grid.number_of_links) # m
grid.at_link["channel_width"] = 1 * np.ones(grid.number_of_links) # m
Explanation: Our network consists of seven links between eight nodes. X and Y, above, represent the plan-view coordinates of the node locations. nodes_at_link describes the node indices that are connected by each link. For example, link 2 connects node 1 and node 7.
Next, we need to populate the grid with the relevant topographic information:
End of explanation
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid.number_of_links), 30)
element_id = np.expand_dims(element_id, axis=1)
volume = 0.05 * np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.085 # m
mu = np.log(medianD)
sigma = np.log(2) # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu, sigma, np.shape(element_id)
) # (m) the diameter of grains in each parcel
Explanation: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river bed in the absence of modeled sediment).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity.
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables:
End of explanation
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
Explanation: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which ranges from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
End of explanation
lithology = ["quartzite"] * np.size(element_id)
Explanation: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example:
End of explanation
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"lithology": (["item_id"], lithology),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume),
}
Explanation: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time :
End of explanation
items = {"grid_element": "link", "element_id": element_id}
parcels = DataRecord(
grid,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
Explanation: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
End of explanation
timesteps = 10 # total number of timesteps
dt = 60 * 60 * 24 * 1 # length of timestep (seconds)
Explanation: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
End of explanation
fd = FlowDirectorSteepest(grid, "topographic__elevation")
fd.run_one_step()
Explanation: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initalize and run a Landlab flow director component:
End of explanation
nst = NetworkSedimentTransporter(
grid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
Explanation: Then, we initialize the network sediment transporter:
End of explanation
for t in range(0, (timesteps * dt), dt):
nst.run_one_step(dt)
print("Model time: ", t / dt, "timesteps passed")
Explanation: Now we are ready to run the model forward in time:
End of explanation
fig = plot_network_and_parcels(
grid,
parcels,
parcel_time_index=0,
parcel_color_attribute="D",
link_attribute="sediment_total_volume",
parcel_size=10,
parcel_alpha=1.0,
)
Explanation: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
For a thorough tutorial on the plotting tools, see this notebook.
Below, each link (represented as a line) is colored by the total volume of sediment on the link. Each parcel is colored by the parcel grain size.
End of explanation
plt.loglog(parcels.dataset.D[:, -1], nst._distance_traveled_cumulative, ".")
plt.xlabel("Parcel grain size (m)")
plt.ylabel("Cumulative parcel travel distance")
# Note: some of the smallest grain travel distances can exceed the length of the
# grid by "overshooting" during a single timestep of high transport rate
Explanation: In addition, the results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total transport distance of all parcels through the model run as a function of parcel diameter.
End of explanation |
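A further check along the same lines (a sketch we've added, not part of the original notebook): because the component adjusts topographic__elevation as sediment moves, we can compare the final bed surface against the fixed bedrock surface at each node.
import matplotlib.pyplot as plt
plt.figure()
plt.plot(grid.at_node["bedrock__elevation"], "k--", label="bedrock elevation")
plt.plot(grid.at_node["topographic__elevation"], "b-", label="final bed surface")
plt.xlabel("node id")
plt.ylabel("elevation (m)")
plt.legend()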
6,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representing Data and Engineering Features
Categorical Variables
One-Hot-Encoding (Dummy variables)
Step1: Checking string-encoded categorical data
Step2: Numbers can encode categoricals
Step3: Binning, Discretization, Linear Models and Trees
Step4: Interactions and Polynomials
Step5: Univariate Non-linear transformations
Step6: Automatic Feature Selection
Univariate statistics
Step7: Model-based Feature Selection
Step8: Recursive Feature Elimination
Step9: Sequential Feature Selection
Step10: Exercises
Choose either the Boston housing dataset or the adult dataset from above. Compare a linear model with interaction features against one without interaction features.
Use feature selection to determine which interaction features were most important. | Python Code:
import pandas as pd
# The file has no headers naming the columns, so we pass header=None and provide the column names explicitly in "names"
data = pd.read_csv("data/adult.data", header=None, index_col=False,
names=['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'gender',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income'])
# For illustration purposes, we only select some of the columns:
data = data[['age', 'workclass', 'education', 'gender', 'hours-per-week', 'occupation', 'income']]
# print the first 5 rows
data.head()
Explanation: Representing Data and Engineering Features
Categorical Variables
One-Hot-Encoding (Dummy variables)
End of explanation
data.gender.value_counts()
print("Original features:\n", list(data.columns), "\n")
data_dummies = pd.get_dummies(data)
print("Features after get_dummies:\n", list(data_dummies.columns))
data_dummies.head()
# Get only the columns containing features, that is all columns from 'age' to 'occupation_ Transport-moving'
# This range contains all the features but not the target
features = data_dummies.loc[:, 'age':'occupation_ Transport-moving']  # .loc replaces the deprecated .ix indexer
# extract numpy arrays
X = features.values
y = data_dummies['income_ >50K'].values
print(X.shape, y.shape)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print(logreg.score(X_test, y_test))
Explanation: Checking string-encoded categorical data
End of explanation
# create a dataframe with an integer feature and a categorical string feature
demo_df = pd.DataFrame({'Integer Feature': [0, 1, 2, 1], 'Categorical Feature': ['socks', 'fox', 'socks', 'box']})
demo_df
pd.get_dummies(demo_df)
demo_df['Integer Feature'] = demo_df['Integer Feature'].astype(str)
pd.get_dummies(demo_df)
Explanation: Numbers can encode categoricals
End of explanation
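An equivalent route we've added for illustration: get_dummies accepts an explicit columns list, which encodes the integer column without the astype(str) conversion above:
demo_df2 = pd.DataFrame({'Integer Feature': [0, 1, 2, 1],
                         'Categorical Feature': ['socks', 'fox', 'socks', 'box']})
pd.get_dummies(demo_df2, columns=['Integer Feature', 'Categorical Feature'])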
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
X, y = mglearn.datasets.make_wave(n_samples=100)
plt.plot(X[:, 0], y, 'o')
line = np.linspace(-3, 3, 1000)[:-1].reshape(-1, 1)
reg = LinearRegression().fit(X, y)
plt.plot(line, reg.predict(line), label="linear regression")
reg = DecisionTreeRegressor(min_samples_split=3).fit(X, y)
plt.plot(line, reg.predict(line), label="decision tree")
plt.ylabel("regression output")
plt.xlabel("input feature")
plt.legend(loc="best")
np.set_printoptions(precision=2)
bins = np.linspace(-3, 3, 11)
bins
which_bin = np.digitize(X, bins=bins)
print("\nData points:\n", X[:5])
print("\nBin membership for data points:\n", which_bin[:5])
from sklearn.preprocessing import OneHotEncoder
# transform using the OneHotEncoder.
encoder = OneHotEncoder(sparse=False)
# encoder.fit finds the unique values that appear in which_bin
encoder.fit(which_bin)
# transform creates the one-hot encoding
X_binned = encoder.transform(which_bin)
print(X_binned[:5])
X_binned.shape
line_binned = encoder.transform(np.digitize(line, bins=bins))
plt.plot(X[:, 0], y, 'o')
reg = LinearRegression().fit(X_binned, y)
plt.plot(line, reg.predict(line_binned), label='linear regression binned')
reg = DecisionTreeRegressor(min_samples_split=3).fit(X_binned, y)
plt.plot(line, reg.predict(line_binned), label='decision tree binned')
for bin in bins:
plt.plot([bin, bin], [-3, 3], ':', c='k')
plt.legend(loc="best")
plt.suptitle("linear_binning")
Explanation: Binning, Discretization, Linear Models and Trees
End of explanation
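As a side note we've added (not in the original): newer scikit-learn versions (0.20+) bundle the digitize-plus-one-hot steps above into a single transformer, roughly like this for the same wave data:
from sklearn.preprocessing import KBinsDiscretizer
kb = KBinsDiscretizer(n_bins=10, encode="onehot-dense", strategy="uniform")
X_binned_alt = kb.fit_transform(X)   # one indicator column per bin, same idea as np.digitize + OneHotEncoder
print(X_binned_alt.shape)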
X_combined = np.hstack([X, X_binned])
print(X_combined.shape)
plt.plot(X[:, 0], y, 'o')
reg = LinearRegression().fit(X_combined, y)
line_combined = np.hstack([line, line_binned])
plt.plot(line, reg.predict(line_combined), label='linear regression combined')
for bin in bins:
plt.plot([bin, bin], [-3, 3], ':', c='k')
plt.legend(loc="best")
X_product = np.hstack([X_binned, X * X_binned])
print(X_product.shape)
plt.plot(X[:, 0], y, 'o')
reg = LinearRegression().fit(X_product, y)
line_product = np.hstack([line_binned, line * line_binned])
plt.plot(line, reg.predict(line_product), label='linear regression combined')
for bin in bins:
plt.plot([bin, bin], [-3, 3], ':', c='k')
plt.legend(loc="best")
from sklearn.preprocessing import PolynomialFeatures
# include polynomials up to x ** 10:
poly = PolynomialFeatures(degree=10)
poly.fit(X)
X_poly = poly.transform(X)
X_poly.shape
poly.get_feature_names()
plt.plot(X[:, 0], y, 'o')
reg = LinearRegression().fit(X_poly, y)
line_poly = poly.transform(line)
plt.plot(line, reg.predict(line_poly), label='polynomial linear regression')
plt.legend(loc="best")
from sklearn.svm import SVR
plt.plot(X[:, 0], y, 'o')
for gamma in [1, 10]:
svr = SVR(gamma=gamma).fit(X, y)
plt.plot(line, svr.predict(line), label='SVR gamma=%d' % gamma)
plt.legend(loc="best")
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=0)
# rescale data:
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
poly = PolynomialFeatures(degree=2).fit(X_train_scaled)
X_train_poly = poly.transform(X_train_scaled)
X_test_poly = poly.transform(X_test_scaled)
print(X_train.shape)
print(X_train_poly.shape)
print(poly.get_feature_names())
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train_scaled, y_train)
print("score without interactions: %f" % ridge.score(X_test_scaled, y_test))
ridge = Ridge().fit(X_train_poly, y_train)
print("score with interactions: %f" % ridge.score(X_test_poly, y_test))
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=100).fit(X_train_scaled, y_train)
print("score without interactions: %f" % rf.score(X_test_scaled, y_test))
rf = RandomForestRegressor(n_estimators=100).fit(X_train_poly, y_train)
print("score with interactions: %f" % rf.score(X_test_poly, y_test))
rf.apply(X_test_poly)
rf.apply(X_test_poly).shape
Explanation: Interactions and Polynomials
End of explanation
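One variation we've added for reference: PolynomialFeatures can be restricted to pure interaction terms, dropping the squared features, if that is all the model needs:
from sklearn.preprocessing import PolynomialFeatures
interactions = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_train_interactions = interactions.fit_transform(X_train_scaled)
print(X_train_interactions.shape)   # fewer columns than the full degree-2 expansion used above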
rnd = np.random.RandomState(0)
X_org = rnd.normal(size=(1000, 3))
w = rnd.normal(size=3)
X = np.random.poisson(10 * np.exp(X_org))
y = np.dot(X_org, w)
np.bincount(X[:, 0])
bins = np.bincount(X[:, 0])
plt.bar(range(len(bins)), bins, color='w')
plt.ylabel("number of appearances")
plt.xlabel("value")
from sklearn.linear_model import Ridge
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Ridge().fit(X_train, y_train).score(X_test, y_test)
X_train_log = np.log(X_train + 1)
X_test_log = np.log(X_test + 1)
plt.hist(X_train_log[:, 0], bins=25, color='w');  # X_train_log is already log-transformed, so no second log is needed
Ridge().fit(X_train_log, y_train).score(X_test_log, y_test)
Explanation: Univariate Non-linear transformations
End of explanation
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectPercentile
from sklearn.model_selection import train_test_split
cancer = load_breast_cancer()
# get deterministic random numbers
rng = np.random.RandomState(42)
noise = rng.normal(size=(len(cancer.data), 50))
# add noise features to the data
# the first 30 features are from the dataset, the next 50 are noise
X_w_noise = np.hstack([cancer.data, noise])
X_train, X_test, y_train, y_test = train_test_split(
X_w_noise, cancer.target, random_state=0, test_size=.5)
# use f_classif (the default) and SelectPercentile to select 50% of the features:
select = SelectPercentile(percentile=50)
select.fit(X_train, y_train)
# transform training set:
X_train_selected = select.transform(X_train)
print(X_train.shape)
print(X_train_selected.shape)
from sklearn.feature_selection import f_classif, f_regression, chi2
F, p = f_classif(X_train, y_train)
plt.figure()
plt.plot(p, 'o')
mask = select.get_support()
print(mask)
# visualize the mask. black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
from sklearn.linear_model import LogisticRegression
# transform test data:
X_test_selected = select.transform(X_test)
lr = LogisticRegression()
lr.fit(X_train, y_train)
print("Score with all features: %f" % lr.score(X_test, y_test))
lr.fit(X_train_selected, y_train)
print("Score with only selected features: %f" % lr.score(X_test_selected, y_test))
Explanation: Automatic Feature Selection
Univariate statistics
End of explanation
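A closely related option (our addition): SelectKBest keeps a fixed number of features instead of a percentage, using the same univariate test scores:
from sklearn.feature_selection import SelectKBest, f_classif
select_k = SelectKBest(score_func=f_classif, k=40)
X_train_k = select_k.fit_transform(X_train, y_train)
print(X_train_k.shape)   # 40 of the 80 noisy columns are kept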
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=42), threshold="median")
select.fit(X_train, y_train)
X_train_l1 = select.transform(X_train)
print(X_train.shape)
print(X_train_l1.shape)
mask = select.get_support()
# visualize the mask. black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
X_test_l1 = select.transform(X_test)
LogisticRegression().fit(X_train_l1, y_train).score(X_test_l1, y_test)
Explanation: Model-based Feature Selection
End of explanation
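Any estimator exposing coef_ or feature_importances_ can drive SelectFromModel; an L1-penalised logistic regression is a common lightweight alternative (a sketch we've added):
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
select_l1 = SelectFromModel(
    LogisticRegression(C=0.1, penalty="l1", solver="liblinear"), threshold="median")
X_train_l1_logreg = select_l1.fit_transform(X_train, y_train)
print(X_train_l1_logreg.shape)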
from sklearn.feature_selection import RFE
select = RFE(RandomForestClassifier(n_estimators=100, random_state=42), n_features_to_select=40)
#select = RFE(LogisticRegression(penalty="l1"), n_features_to_select=40)
select.fit(X_train, y_train)
# visualize the selected features:
mask = select.get_support()
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
X_train_rfe = select.transform(X_train)
X_test_rfe = select.transform(X_test)
LogisticRegression().fit(X_train_rfe, y_train).score(X_test_rfe, y_test)
select.score(X_test, y_test)
Explanation: Recursive Feature Elimination
End of explanation
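If we would rather let cross-validation choose how many features to keep, RFECV (our addition, not in the original) wraps the same recursive idea:
from sklearn.feature_selection import RFECV
from sklearn.ensemble import RandomForestClassifier
rfecv = RFECV(RandomForestClassifier(n_estimators=100, random_state=42), cv=5)
rfecv.fit(X_train, y_train)
print("features chosen by cross-validation:", rfecv.n_features_)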
from mlxtend.feature_selection import SequentialFeatureSelector
sfs = SequentialFeatureSelector(LogisticRegression(), k_features=40,
forward=True, scoring='accuracy',cv=5)
sfs = sfs.fit(X_train, y_train)
mask = np.zeros(80, dtype='bool')
mask[np.array(sfs.k_feature_idx_)] = True
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
LogisticRegression().fit(sfs.transform(X_train), y_train).score(sfs.transform(X_test), y_test)
Explanation: Sequential Feature Selection
End of explanation
data = pd.read_csv("data/adult.data", header=None, index_col=False,
names=['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'gender',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income'])
y = data.income.values
X = pd.get_dummies(data.drop("income", axis=1))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = MinMaxScaler().fit(X_train)
X_train_ = scaler.transform(X_train)
X_test_ = scaler.transform(X_test)
LogisticRegression().fit(X_train_, y_train).score(X_test_, y_test)
X_train.shape
select = SelectFromModel(RandomForestClassifier(n_estimators=100), threshold="5 * median")
X_train_selected = select.fit_transform(X_train_, y_train)
X_test_selected = select.transform(X_test_)
LogisticRegression().fit(X_train_selected, y_train).score(X_test_selected, y_test)
X_train_selected.shape
poly = PolynomialFeatures(degree=2).fit(X_train_selected)
X_train_selected_poly = poly.transform(X_train_selected)
X_test_selected_poly = poly.transform(X_test_selected)
lr = LogisticRegression(C=0.01, penalty="l1", solver="liblinear").fit(X_train_selected_poly, y_train)  # liblinear supports the l1 penalty
lr.score(X_test_selected_poly, y_test)
np.array(poly.get_feature_names(X.columns[select.get_support()]))[lr.coef_.ravel() != 0]
Explanation: Exercises
Choose either the Boston housing dataset or the adult dataset from above. Compare a linear model with interaction features against one without interaction features.
Use feature selection to determine which interaction features were most important.
End of explanation |
6,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Project
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step11: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Step12: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step13: Let's try the one with the solid white lane on the right first ...
Step15: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step17: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step19: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
# Right/Left Slope
lslope = []
rslope = []
# Right/Left Centers
lcenter = []
rcenter = []
for line in lines:
for x1, y1, x2, y2 in line:
slope = (y2 - y1) / (x2 - x1)
center = [(x1 + x2) / 2, (y1 + y2) / 2]
if slope > 0.5 and slope < 1.0: # Right Lane
rslope.append(slope)
rcenter.append(center)
if slope < -0.5 and slope > -1.0: # Left Lane
lslope.append(slope)
lcenter.append(center)
lslope_avg = np.sum(lslope) / len(lslope)
rslope_avg = np.sum(rslope) / len(rslope)
lcenter_avg = np.divide(np.sum(lcenter, axis=0), len(lcenter))
rcenter_avg = np.divide(np.sum(rcenter, axis=0), len(rcenter))
ly1 = int(img.shape[0])
lx1 = int((ly1 - lcenter_avg[1]) / lslope_avg + lcenter_avg[0])
ly2 = int(img.shape[0] * 0.6)
lx2 = int((ly2 - lcenter_avg[1]) / lslope_avg + lcenter_avg[0])
ry1 = int(img.shape[0])
rx1 = int((ry1 - rcenter_avg[1]) / rslope_avg + rcenter_avg[0])
ry2 = int(img.shape[0] * 0.6)
rx2 = int((ry2 - rcenter_avg[1]) / rslope_avg + rcenter_avg[0])
cv2.line(img, (lx1, ly1), (lx2, ly2), color, thickness)
cv2.line(img, (rx1, ry1), (rx2, ry2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
minLineLength=min_line_len,
maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines, thickness=10)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, γ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
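cv2.inRange appears in the list above but not in the helpers; a sketch of how it could be used for white/yellow lane-color selection (the HSV thresholds here are illustrative assumptions, not values tuned for this project):
import cv2
import numpy as np

def select_white_yellow(img_rgb):
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 30, 255]))
    yellow = cv2.inRange(hsv, np.array([15, 80, 100]), np.array([35, 255, 255]))
    mask = cv2.bitwise_or(white, yellow)
    return cv2.bitwise_and(img_rgb, img_rgb, mask=mask)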
import os
os.listdir("test_images/")
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
low_threshold = 50 # Canny edge detection
high_threshold = 150 # Canny edge detection
kernel_size = 5 # Gaussian blurring
rho = 2 # Hough Transform, distance resolution in pixels of the Hough grid
theta = np.pi / 180 # Hough Transform, angular resolution in radians of Hough grid
threshold = 15 # Hough Transform, minimum number of votes (intersections in Hough grid cell)
min_line_len = 40 # Hough Transform, minimum number of pixels making up a line
max_line_gap = 20 # Hough Transform, maximum gap in pixels between connectable line segments
original_images = os.listdir('test_images/')
for image in original_images:
img = mpimg.imread('test_images/' + image)
vertices = np.array([[(0, img.shape[0]), (450, 320),
(510, 320), (img.shape[1], img.shape[0])]],
dtype=np.int32) # Image mask polygon
gray_img = grayscale(img) # Greyed out image
edge_img = canny(gray_img, low_threshold, high_threshold) # Canny edges
mask_img = region_of_interest(edge_img, vertices) # Region of interest
line_img = hough_lines(mask_img, rho, theta, threshold,
min_line_len, max_line_gap)
lane_line_img = weighted_img(img, line_img)
#cv2.imwrite('test_images_output/' + image, lane_line_img)
mpimg.imsave('test_images_output/' + image, lane_line_img)
#reading in an image
image = mpimg.imread('test_images_output/whiteCarLaneSwitch.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
vertices = np.array([[(0, image.shape[0]), (450, 320),
(510, 320), (image.shape[1], image.shape[0])]],
dtype=np.int32) # Image mask polygon
gray_img = grayscale(image) # Greyed out image
edge_img = canny(gray_img, low_threshold, high_threshold) # Canny edges
mask_img = region_of_interest(edge_img, vertices) # Region of interest
line_img = hough_lines(mask_img, rho, theta, threshold,
min_line_len, max_line_gap)
lane_line_img = weighted_img(image, line_img)
return lane_line_img
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
6,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Data setup</h1>
<h2>Use our function to read the data file</h2>
Step1: <h1>Plotting data on google maps</h1>
<h2>gmplot library</h2>
https
Step2: <h3>Our data dataframe contains latitudes and longitudes for each complaint.</h3>
<h3>We can draw a heatmap that will help us see the relative concentration of complaints using lats and lons</h3>
<h3>Set up the map</h3>
<h4>GoogleMapPlotter constructor</h4>
<ul>
<li>GoogleMapPlotter(center_lat, center_lng, zoom)
<li>from_geocode(location_string,zoom)
Step3: <h3>Then generate the heatmap passing the two data series (latitude and longitude) to the function</h3>
Step4: <h3>Save the heatmap to an html file</h3>
<h4>The html file can be viewed, printed, or included in another html page</h4>
Step5: <h1>Let's do some grouping operations</h1>
<h2>Incidents by Borough</h2>
Step6: <h2>Group data by borough and plot a bar chart of the incident count</h2>
Step7: <h1>Incidents by Agency</h1>
Step8: <h2>Let's combine the two in a single graph</h2>
Step9: <h2>This is quite unreadable and pointless!</h2>
<h3>We can unstack the groups so that we get borough by agency</h3>
Step10: <h3>Increase the size of the image and add a title</h3>
Step11: <h1>Digression
Step12: <h4>Group by country</h4>
Step13: <h4>Group by multiple columns</h4>
Step14: <h4>Group by age groups</h4>
Step15: <h2>Grouping by the values in a column</h2>
<h3>For example, grouping the data by values in a column that are greater than or less than zero</h3>
Step16: <h3>Write a function that takes three arguments - a dataframe, an index, and a column name and returns the grouping for that row</h3>
Step17: <h2>Now we can compute stats on these groups</h2>
Step18: <h1>Incidents by time</h1>
<p>We know the creation date of each incident so we can build a bar graph of number of incidents by month
<p>Not particularly useful with only a few months' data, but if we had all the data from 2010 onward, we could use this sort of
analysis to eyeball trends and seasonality
<p>We're going to need to do some data manipulation for this
<h3>We'll start by creating a new date field yyyymm
Step19: <h1>Examining agencies</h1>
<h2>We'll look at the frequency by agency and report the top 5 values</h2>
Step20: <h3>We can drill down into complaints by Agency by borough</h3>
Step21: <h3>We can create 'top 5 Agency' subplots for each borough</h3>
Step22: <h1>Processing time</h1>
<h2>We can compute simple statistics on processing time</h2>
Step23: <h3>But it is easier to convert the timedelta processing_time into floats for calculation purposes</h3>
Step24: <h2>Now we can compute stats easily</h2> | Python Code:
def read_311_data(datafile):
import pandas as pd
import numpy as np
#Add the fix_zip function
def fix_zip(input_zip):
try:
input_zip = int(float(input_zip))
except:
try:
input_zip = int(input_zip.split('-')[0])
except:
return np.NaN
if input_zip < 10000 or input_zip > 19999:
return np.NaN
return str(input_zip)
#Read the file
df = pd.read_csv(datafile,index_col='Unique Key')
#fix the zip
df['Incident Zip'] = df['Incident Zip'].apply(fix_zip)
#drop all rows that have any nans in them (note the easier syntax!)
df = df.dropna(how='any')
#get rid of unspecified boroughs
df = df[df['Borough'] != 'Unspecified']
#Convert times to datetime and create a processing time column
import datetime
df['Created Date'] = df['Created Date'].apply(lambda x:datetime.datetime.strptime(x,'%m/%d/%y %H:%M'))
df['Closed Date'] = df['Closed Date'].apply(lambda x:datetime.datetime.strptime(x,'%m/%d/%y %H:%M'))
df['processing_time'] = df['Closed Date'] - df['Created Date']
#Finally, get rid of negative processing times and return the final data frame
df = df[df['processing_time']>=datetime.timedelta(0,0,0)]
return df
datafile = "nyc_311_data_subset-2.csv"
data = read_311_data(datafile)
Explanation: <h1>Data setup</h1>
<h2>Use our function to read the data file</h2>
End of explanation
!pip install gmplot --upgrade
Explanation: <h1>Plotting data on google maps</h1>
<h2>gmplot library</h2>
https://github.com/vgm64/gmplot
End of explanation
import gmplot
#gmap = gmplot.GoogleMapPlotter(40.7128, -74.0059, 8)
gmap = gmplot.GoogleMapPlotter.from_geocode("New York",10)
Explanation: <h3>Our data DataFrame contains latitudes and longitudes for each complaint.</h3>
<h3>We can draw a heatmap that will help us see the relative concentration of complaints using lats and lons</h3>
<h3>Set up the map</h3>
<h4>GoogleMapPlotter constructor</h4>
<ul>
<li>GoogleMapPlotter(center_lat, center_lng, zoom)
<li>from_geocode(location_string,zoom)
End of explanation
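As a side sketch (not in the original notebook, and assuming the same gmplot API used above), the coordinate-based constructor works the same way, and individual complaints can be overlaid as a scatter layer; the center point and zoom level below are arbitrary choices.
import gmplot
# Explicit-coordinate constructor: center roughly on Manhattan at zoom 11 (arbitrary choices).
gmap_alt = gmplot.GoogleMapPlotter(40.7128, -74.0059, 11)
# Overlay each complaint as a small dot, then write the map to its own html file.
gmap_alt.scatter(data['Latitude'], data['Longitude'], size=40, marker=False)
gmap_alt.draw('incidents_scatter.html')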
#Then generate a heatmap using the latitudes and longitudes
gmap.heatmap(data['Latitude'], data['Longitude'])
Explanation: <h3>Then generate the heatmap passing the two data series (latitude and longitude) to the function</h3>
End of explanation
gmap.draw('incidents3.html')
Explanation: <h3>Save the heatmap to an html file</h3>
<h4>The html file can be viewed, printed, or included in another html page</h4>
End of explanation
%matplotlib inline
Explanation: <h1>Let's do some grouping operations</h1>
<h2>Incidents by Borough</h2>
End of explanation
borough_group = data.groupby('Borough')
borough_group.size().plot(kind='bar')
#kind can be 'hist', 'scatter'
Explanation: <h2>Group data by borough and plot a bar chart of the incident count</h2>
End of explanation
agency_group = data.groupby('Agency')
agency_group.size().plot(kind='bar')
Explanation: <h1>Incidents by Agency</h1>
End of explanation
agency_borough = data.groupby(['Agency','Borough'])
agency_borough.size().plot(kind='bar')
Explanation: <h2>Let's combine the two in a single graph</h2>
End of explanation
agency_borough.size().unstack().plot(kind='bar')
Explanation: <h2>This is quite unreadable and pointless!</h2>
<h3>We can unstack the groups so that we get borough by agency</h3>
End of explanation
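For reference, the same borough-by-agency table can be produced in one step with pd.crosstab; this is a sketch rather than part of the original notebook.
import pandas as pd
# Cross-tabulation: rows are agencies, columns are boroughs, values are incident counts.
agency_by_borough = pd.crosstab(data['Agency'], data['Borough'])
agency_by_borough.plot(kind='bar')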
agency_borough = data.groupby(['Agency','Borough'])
agency_borough.size().unstack().plot(kind='bar',title="Incidents in each Agency by Borough",figsize=(15,15))
Explanation: <h3>Increase the size of the image and add a title</h3>
End of explanation
import pandas as pd
writers = pd.DataFrame({'Author':['George Orwell','John Steinbeck',
'Pearl Buck','Agatha Christie'],
'Country':['UK','USA','USA','UK'],
'Gender':['M','M','F','F'],
'Age':[46,66,80,85]})
writers
Explanation: <h1>Digression: The pandas groupby function</h1>
<h4>You can use functions to group data</h4>
End of explanation
grouped = writers.groupby('Country')
#grouped.first()
#grouped.last()
#grouped.sum()
#grouped.mean()
grouped.apply(sum)
grouped.groups
Explanation: <h4>Group by country</h4>
End of explanation
grouped = writers.groupby(['Country','Gender'])
grouped.groups
Explanation: <h4>Group by multiple columns</h4>
End of explanation
def age_groups(df,index,col):
print(index,col)
if df[col].iloc[index] < 30:
return 'Young'
if df[col].iloc[index] < 60:
return 'Middle'
else:
return 'Old'
writers['Age'].iloc[0]
grouped = writers.groupby(lambda x: age_groups(writers,x,'Age'))
grouped.groups
Explanation: <h4>Group by age groups</h4>
End of explanation
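A more idiomatic alternative (a sketch, assuming the same writers DataFrame) is to bin the Age column with pd.cut and group on the resulting labels; the bin edges below mirror the cut-offs used in age_groups.
# pd.cut assigns each age to a labelled bin; groupby then works directly on those labels.
age_bins = pd.cut(writers['Age'], bins=[0, 30, 60, 200], labels=['Young', 'Middle', 'Old'])
writers.groupby(age_bins).size()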
import numpy as np
people = pd.DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
people
Explanation: <h2>Grouping by the values in a column</h2>
<h3>For example, grouping the data by values in a column that are greater than or less than zero</h3>
End of explanation
def GroupColFunc(df, ind, col):
if df[col].loc[ind] > 0:
return 'Group1'
else:
return 'Group2'
people.groupby(lambda x: GroupColFunc(people, x, 'a')).groups
Explanation: <h3>Write a function that takes three arguments - a dataframe, an index, and a column name and returns the grouping for that row</h3>
End of explanation
print(people.groupby(lambda x: GroupColFunc(people, x, 'a')).mean())
print(people.groupby(lambda x: GroupColFunc(people, x, 'a')).std())
Explanation: <h2>Now we can compute stats on these groups</h2>
End of explanation
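As a small extension (a sketch, not in the original notebook), agg() collects several statistics in a single pass instead of separate mean() and std() calls.
# One call returning the mean, standard deviation and size of each group.
people.groupby(lambda x: GroupColFunc(people, x, 'a')).agg(['mean', 'std', 'count'])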
import datetime
data['yyyymm'] = data['Created Date'].apply(lambda x:datetime.datetime.strftime(x,'%Y%m'))
data['yyyymm']
date_agency = data.groupby(['yyyymm','Agency'])
date_agency.size().unstack().plot(kind='bar',figsize=(15,15))
Explanation: <h1>Incidents by time</h1>
<p>We know the creation date of each incident, so we can build a bar graph of the number of incidents by month
<p>This is not particularly useful with only a few months of data, but if we had all the data from 2010 onwards, we could use this sort of
analysis to eyeball trends and seasonality
<p>We're going to need to do some data manipulation for this
<h3>We'll start by creating a new date field, yyyymm</h3>
End of explanation
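An alternative sketch (assuming Created Date is already a datetime, as produced by read_311_data) groups directly on a monthly pd.Grouper and avoids the intermediate string column.
# pd.Grouper(freq='M') buckets rows by the calendar month of 'Created Date';
# the result is equivalent to grouping on the yyyymm string column.
monthly_agency = data.groupby([pd.Grouper(key='Created Date', freq='M'), 'Agency']).size()
monthly_agency.unstack().plot(kind='bar', figsize=(15, 15))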
data.groupby('Agency').size().sort_values(ascending=False)
data.groupby('Agency').size().sort_values(ascending=False).plot(kind='bar', figsize=(20,4))
Explanation: <h1>Examining agencies</h1>
<h2>We'll look at the frequency by agency and report the top 5 values</h2>
End of explanation
agency_borough = data.groupby(['Agency', 'Borough']).size().unstack()
agency_borough
Explanation: <h3>We can drill down into complaints by Agency by borough</h3>
End of explanation
#We'll arrange the subplots in two rows and three columns.
#Since we have only 5 boroughs, one plot will be blank
COL_NUM = 2
ROW_NUM = 3
import matplotlib.pyplot as plt
fig, axes = plt.subplots(ROW_NUM, COL_NUM, figsize=(12,12))
for i, (label, col) in enumerate(agency_borough.iteritems()):
ax = axes[int(i/COL_NUM), i%COL_NUM]
col = col.sort_values(ascending=False)[:5]
col.plot(kind='barh', ax=ax)
ax.set_title(label)
plt.tight_layout()
for i, (label, col) in enumerate(agency_borough.iteritems()):
print(i,label,col)
Explanation: <h3>We can create 'top 5 Agency' subplots for each borough</h3>
End of explanation
grouped = data[['processing_time','Borough']].groupby('Borough')
grouped.describe()
Explanation: <h1>Processing time</h1>
<h2>We can compute simple statistics on processing time</h2>
End of explanation
import numpy as np
#The time it takes to process. Cleaned up
data['float_time'] =data['processing_time'].apply(lambda x:x/np.timedelta64(1, 'D'))
data
Explanation: <h3>But it is easier to convert the timedelta processing_time into floats for calculation purposes</h3>
End of explanation
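An equivalent conversion (a sketch, not from the original notebook) uses the .dt accessor on the timedelta column.
# total_seconds() / 86400 expresses the processing time in days as a float.
data['float_time_alt'] = data['processing_time'].dt.total_seconds() / (24 * 3600)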
grouped = data[['float_time','Agency']].groupby('Agency')
grouped.mean().sort_values('float_time',ascending=False)
data['float_time'].hist(bins=50)
Explanation: <h2>Now we can compute stats easily</h2>
End of explanation |
6,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Let's get started with some basic imports.
Step2: Overriding Computation Times
If compute_times is not empty (by either providing compute_times or compute_phases), the provided value will be used to compute the model instead of those in the times parameter.
In the case of a mesh dataset or orbit dataset, observations cannot be attached to the dataset, so a times parameter does not exist. In this case compute_times or compute_phases will always be used.
Step3: compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.
Step4: Phase-Time Conversion
In addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.
Step5: Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to phases_t0 from the top-level orbit in the hierarchy.
Note that in the case of time-dependent systems, this mapping will also adhere to phases_dpdt (in the case of dpdt and/or phases_period (in the case of apsidal motion (dperdt).
Step6: In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.
Step7: Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.
Also note that if directly passing compute_phases to b.add_dataset, the constraint will be flipped on our behalf. We would then need to flip the constraint in order to provide compute_times instead.
Finally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert these to times manually (or via b.to_time). This restriction is intentional
Step8: In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.
Step9: Determining & Plotting Residuals
One particularly useful case for interpolating is to compare a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling compute_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and models.
Note that if there are more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advanced and call compute_residuals on the filtered ParameterSet.
To see this in action, we'll first create a "fake" observational dataset, add some noise, recompute the model using compute_phases, and then calculate the residuals.
Step10: If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.
Step11: But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Advanced: compute_times & compute_phases
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,10,101), dataset='lc01')
Explanation: Let's get started with some basic imports.
End of explanation
print(b.filter(qualifier=['times', 'compute_times'], context='dataset'))
b.set_value('compute_times', phoebe.linspace(0,3,11))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
Explanation: Overriding Computation Times
If compute_times is not empty (by either providing compute_times or compute_phases), the provided value will be used to compute the model instead of those in the times parameter.
In the case of a mesh dataset or orbit dataset, observations cannot be attached to the dataset, so a times parameter does not exist. In this case compute_times or compute_phases will always be used.
End of explanation
b.run_compute(times=[0,0.2])
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
Explanation: compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.
End of explanation
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))
Explanation: Phase-Time Conversion
In addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.
End of explanation
print(b.get_constraint('compute_phases'))
print(b.get_parameter('phases_t0').choices)
Explanation: Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to phases_t0 from the top-level orbit in the hierarchy.
Note that in the case of time-dependent systems, this mapping will also adhere to phases_dpdt (in the case of dpdt) and/or phases_period (in the case of apsidal motion, dperdt).
End of explanation
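As a minimal sketch (not part of the original tutorial, and assuming the default_binary ephemeris), the same mapping can be exercised directly with b.to_time and b.to_phase; ignoring dpdt and dperdt it reduces to time = t0 + phase * period.
# Round-trip a phase array through the ephemeris used by the constraint.
phases = phoebe.linspace(0, 1, 11)
times_from_phases = b.to_time(phases)   # applies t = t0 + phase * period, with t0 chosen by phases_t0
phases_back = b.to_phase(times_from_phases)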
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,11))
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'phases_t0'], context='dataset'))
Explanation: In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.
End of explanation
b.get_parameter('fluxes', context='model').get_value()
b.get_parameter('fluxes', context='model').interp_value(times=1.0)
b.get_parameter('fluxes', context='model').interp_value(times=phoebe.linspace(0,3,101))
Explanation: Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.
Also note that if directly passing compute_phases to b.add_dataset, the constraint will be flipped on our behalf. We would then need to flip the constraint in order to provide compute_times instead.
Finally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert these to times manually (or via b.to_time). This restriction is intentional: we do not want the mapping between phase and time to change as the ephemeris is changed or fitted, rather we want to try to map from phase to time using the ephemeris that was originally used when the dataset was recorded (if possible, or the best possible guess).
Interpolating the Model
Whether or not we used compute_times/compute_phases, it is sometimes useful to be able to interpolate on the resulting model. In the case where we provided compute_times/compute_phases to "down-sample" from a large dataset, this can be particularly useful.
We can call interp_value on any FloatArrayParameter.
End of explanation
b.get_parameter('fluxes', context='model').interp_value(times=5)
Explanation: In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.
End of explanation
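A quick sketch (assuming b.to_phase behaves as described earlier) shows which phase the out-of-range time corresponds to.
# Time 5 lies outside the computed 0-3 range, so the interpolation happens at this phase.
print(b.to_phase(5))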
b.add_dataset('lc',
times=phoebe.linspace(0,10,1000),
dataset='lc01',
overwrite=True)
b.run_compute(irrad_method='none')
fluxes = b.get_value('fluxes', context='model')
b.set_value('fluxes', context='dataset', value=fluxes)
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,101))
b.set_value('teff', component='primary', value=5950)
b.run_compute(irrad_method='none')
print(len(b.get_value('fluxes', context='dataset')), len(b.get_value('fluxes', context='model')))
b.calculate_residuals()
Explanation: Determining & Plotting Residuals
One particularly useful case for interpolating is to compare a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling compute_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and models.
Note that if there is more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advance and call compute_residuals on the filtered ParameterSet).
To see this in action, we'll first create a "fake" observational dataset, add some noise, recompute the model using compute_phases, and then calculate the residuals.
End of explanation
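For intuition, here is a manual sketch of what compute_residuals is doing (assuming a single 'lc01' dataset and model, and up to the sign convention used by calculate_residuals).
# Interpolate the model fluxes at the observed times and difference against the data.
obs_times = b.get_value('times', context='dataset', dataset='lc01')
obs_fluxes = b.get_value('fluxes', context='dataset', dataset='lc01')
model_interp = b.get_parameter('fluxes', context='model', dataset='lc01').interp_value(times=obs_times)
manual_residuals = obs_fluxes - model_interp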
afig, mplfig = b.plot(show=True)
Explanation: If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.
End of explanation
afig, mplfig = b.plot(y='residuals', show=True)
Explanation: But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details.
End of explanation |
6,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
Chopstick length.
2. What is the dependent variable in the experiment?
Food-pinching performance.
3. How is the dependent variable operationally defined?
By count the number of peanuts picked and placed in a cup (PPPC)
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Peanut.
Cup
Junior college male student.
Primary school pupil.
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
Step1: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
Step2: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Effeciency" for each chopstick length. Run the block of code below.
Step3: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
240 mm | Python Code:
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
path = '/Users/Slimn/Desktop/Work/Project/Udacity/NanoDegree/P0/chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
Explanation: Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
Chopstick length.
2. What is the dependent variable in the experiment?
Food-pinching performance.
3. How is the dependent variable operationally defined?
By counting the number of peanuts picked and placed in a cup (PPPC).
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Peanut.
Cup
Junior college male student.
Primary school pupil.
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
End of explanation
dataFrame['Food.Pinching.Efficiency'].mean()
Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
End of explanation
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
End of explanation
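A small sketch (not part of the original notebook) picks the best-performing length programmatically instead of reading it off the table.
# idxmax() returns the index of the row with the highest mean efficiency.
best_row = meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]
print(best_row['Chopstick.Length'], best_row['Food.Pinching.Efficiency'])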
# Causes plots to display within the notebook rather than in a new window
#pylab inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
# title="")
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
240 mm
End of explanation |
6,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is the True Normal Human Body Temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct?
<div class="span5 alert alert-info">
<h3>Exercises</h3>
<p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p>
<p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p>
<ol>
<li> Is the distribution of body temperatures normal?
<ul>
<li> Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population.
</ul>
<li> Is the sample size large? Are the observations independent?
<ul>
<li> Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.
</ul>
<li> Is the true population mean really 98.6 degrees F?
<ul>
<li> Would you use a one-sample or two-sample test? Why?
<li> In this situation, is it appropriate to use the $t$ or $z$ statistic?
<li> Now try using the other test. How is the result be different? Why?
</ul>
<li> At what temperature should we consider someone's temperature to be "abnormal"?
<ul>
<li> Start by computing the margin of error and confidence interval.
</ul>
<li> Is there a significant difference between males and females in normal temperature?
<ul>
<li> What test did you use and why?
<li> Write a story with your conclusion in the context of the original problem.
</ul>
</ol>
You can include written notes in notebook cells using Markdown
Step1: The normal distribution test
Step2: To check if the distribution of temperature is normal, it is always better to visualize it. We plot the histogram of the values and plot the fitted values to obtain a normal distribution. We see that there are a few outliers in the distribution on the right side but still it correlates as a normal distribution.
Performing the Normaltest using Scipy's normal function and we obtain the p value of 0.25. Assuming the statistical significance to be 0.05 and the Null hypothesis being the distribution is normal. We can accept the Null hypothesis as the obtained p-value is greater than 0.05 which can also confirm the normal distribution.
Step3: We see the sample size is n= 130 and as a general rule of thumb inorder for CLT to be validated
it is necessary for n>30. Hence the sample size is compartively large.
Question 3
HO
Step4: Choosing one sample test vs two sample test
Step5: Any temperatures out of this range should be considered abnormal.
Question 5 | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
df = pd.read_csv('data/human_body_temperature.csv')
df.head()
Explanation: What is the True Normal Human Body Temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct?
<div class="span5 alert alert-info">
<h3>Exercises</h3>
<p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p>
<p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p>
<ol>
<li> Is the distribution of body temperatures normal?
<ul>
<li> Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population.
</ul>
<li> Is the sample size large? Are the observations independent?
<ul>
<li> Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.
</ul>
<li> Is the true population mean really 98.6 degrees F?
<ul>
<li> Would you use a one-sample or two-sample test? Why?
<li> In this situation, is it appropriate to use the $t$ or $z$ statistic?
<li> Now try using the other test. How is the result be different? Why?
</ul>
<li> At what temperature should we consider someone's temperature to be "abnormal"?
<ul>
<li> Start by computing the margin of error and confidence interval.
</ul>
<li> Is there a significant difference between males and females in normal temperature?
<ul>
<li> What test did you use and why?
<li> Write a story with your conclusion in the context of the original problem.
</ul>
</ol>
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
#### Resources
+ Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm
+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
****
</div>
End of explanation
x=df.sort_values("temperature",axis=0)
t=x["temperature"]
#print(np.mean(t))
plot_fit = stats.norm.pdf(t, np.mean(t), np.std(t))
plt.plot(t,plot_fit,'-o')
plt.hist(df.temperature, bins=20, density=True)  # 'density=True' replaces the removed 'normed=True' argument
plt.ylabel('Frequency')
plt.xlabel('Temperature')
plt.show()
stats.normaltest(t)
Explanation: The normal distribution test:
End of explanation
#Question 2:
no_of_samples=df["temperature"].count()
print(no_of_samples)
Explanation: To check whether the distribution of temperature is normal, it helps to visualize it first. We plot a histogram of the values and overlay the fitted normal density. There are a few outliers on the right side, but the data still looks approximately normal.
Running scipy.stats.normaltest gives a p-value of about 0.25. With a significance level of 0.05 and the null hypothesis that the distribution is normal, the p-value is greater than 0.05, so we fail to reject the null hypothesis, which is consistent with the distribution being normal.
End of explanation
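An optional additional check (a sketch, not in the original analysis) is a normal Q-Q plot; points lying close to the reference line are consistent with approximate normality.
from scipy import stats
import matplotlib.pyplot as plt
# Q-Q plot of the temperatures against a fitted normal distribution.
stats.probplot(df['temperature'], dist='norm', plot=plt)
plt.show()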
from statsmodels.stats.weightstats import ztest
from scipy.stats import ttest_ind
from scipy.stats import ttest_1samp
t_score=ttest_1samp(t,98.6)
t_score_abs=abs(t_score[0])
t_score_p_abs=abs(t_score[1])
z_score=ztest(t,value=98.6)
z_score_abs=abs(z_score[0])
p_value_abs=abs(z_score[1])
print("The z score is given by: %F and the p-value is given by %6.9F"%(z_score_abs,p_value_abs))
print("The t score is given by: %F and the p-value is given by %6.9F"%(t_score_abs,t_score_p_abs))
Explanation: We see that the sample size is n = 130, and as a general rule of thumb, for the CLT to apply
it is necessary that n > 30. Hence the sample size is comparatively large.
Question 3
H0: The true population mean is 98.6 degrees F (null hypothesis)
H1: The true population mean is not 98.6 degrees F (alternative hypothesis)
Alternatively, we can state this as
H0: μ = 98.6
H1: μ ≠ 98.6
End of explanation
#Question 4:
#For a 95% Confidence Interval the Confidence interval can be computed as:
variance_=np.std(t)/np.sqrt(no_of_samples)
mean_=np.mean(t)
confidence_interval = stats.norm.interval(0.95, loc=mean_, scale=variance_)
print("The Confidence Interval Lies between %F and %F"%(confidence_interval[0],confidence_interval[1]))
Explanation: Choosing one sample test vs two sample test:
The problem gives us a single sample that we need to test against a hypothesised population mean, so a one-sample test is appropriate rather than a two-sample test.
T-test vs Z-test:
The t-test is best suited when n < 30; since n = 130 here, we can use the z-test. Also, we are comparing the sample mean against a predetermined value (98.6), which is exactly the setting for a one-sample z-test. The t-test is more useful when we compare the means of two sample distributions and check whether there is a difference between them.
The p-value is 0.000000049, which is less than the usual significance level of 0.05, so we reject the null hypothesis and conclude that the population mean is not 98.6.
Trying the t-test: Since we are comparing the mean to a reference number, the calculation of the z score and the t score is the same, and hence the value remains the same. However, the p-value differs slightly between the two tests.
End of explanation
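For reference, the statistic and interval computed above correspond to the standard one-sample formulas (textbook results, not taken from the notebook):
$$ z = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} = \frac{\bar{x} - 98.6}{s/\sqrt{130}}, \qquad \text{95\% CI: } \bar{x} \pm z_{0.975}\,\frac{s}{\sqrt{n}} $$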
temp_male=df.temperature[df.gender=='M']
female_temp=df.temperature[df.gender=='F']
ttest_ind(temp_male,female_temp)
Explanation: Any temperatures out of this range should be considered abnormal.
Question 5:
Here we use the two-sample t-test because we want to compare the means of two groups, the male and the female temperatures, and the t-test is the appropriate choice for comparing two sample means.
End of explanation |
6,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
说明
目的: 为探讨在同一个样本点集下,cma-es算法对$f(x)$拟合效果与分段数的关系,实验记录每个分段方案下cma-es过程的迭代时间,关系矩阵变化过程,计算结果,cpu计算时间等进行对比。
函数:
<center>$ f(x)=10sin0.6x+uniform(-1.5,1.5)gauss(0,5),x \in[-7,7)$ </center>
分段: 为保证每段点数的一致性,通过$f(x)$在定义域内均匀分布300个样本点,段数分别为$PN(partition\ number) \in {5,10,15,20,25,30}$,对应CMA问题的求解维度从15至90,每段点数不小于10.
评价函数:
1. $M1$:分段测试函数的mse;
2. $M2$:在f1基础上增加间断点连续性判断指标:$\sum_{i=1}^{k} (e^{\Delta Y_i-\alpha}-1)$,其中$\Delta Y_i$ 代表拟合分段函数在间断点处的左右间断点差的绝对值,$\alpha $默认代表$Y$值域范围的$1\%$大小;
3. $M3$:在f2基础上增加间断点一阶导数评价指标:$\sum_{i=1}^{k}(e^{\frac{\Delta \sigma_i-\beta}{10e}}-1)$,其中$\Delta \sigma_i$代表左右间断点处的左右导数的$arctan$差的绝对值,$\beta$如未特殊说明全局默认为10度($\frac {\pi}{18}$);
实验
迭代次数与分段数关系图表
Step1: cpu计算耗时与分段数关系图表 | Python Code:
import makeData as md
%pylab inline
plt.rc('figure', figsize=(16, 9))
X=md.loadData('result.tl')
import numpy as np
import pandas as pd
b=[]
for i in range(5,35,5):
temp=[]
for j in range(3):
temp.append(X[i][j]['iter'])
b.append(temp)
bs=np.array(b).T
ind=range(5,35,5)
d={'M1':pd.Series(bs[0],index=ind),
'M2':pd.Series(bs[1],index=ind),
'M3':pd.Series(bs[2],index=ind)}
df = pd.DataFrame(d)
df.columns.name='function'
df.index.name='partition'
df
df.plot(kind='bar',fontsize=20)
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("iteration counts",fontsize=16)
df.plot()
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("iteration counts",fontsize=16)
Explanation: 说明
目的: 为探讨在同一个样本点集下,cma-es算法对$f(x)$拟合效果与分段数的关系,实验记录每个分段方案下cma-es过程的迭代时间,关系矩阵变化过程,计算结果,cpu计算时间等进行对比。
函数:
<center>$ f(x)=10sin0.6x+uniform(-1.5,1.5)gauss(0,5),x \in[-7,7)$ </center>
分段: 为保证每段点数的一致性,通过$f(x)$在定义域内均匀分布300个样本点,段数分别为$PN(partition\ number) \in {5,10,15,20,25,30}$,对应CMA问题的求解维度从15至90,每段点数不小于10.
评价函数:
1. $M1$:分段测试函数的mse;
2. $M2$:在f1基础上增加间断点连续性判断指标:$\sum_{i=1}^{k} (e^{\Delta Y_i-\alpha}-1)$,其中$\Delta Y_i$ 代表拟合分段函数在间断点处的左右间断点差的绝对值,$\alpha $默认代表$Y$值域范围的$1\%$大小;
3. $M3$:在f2基础上增加间断点一阶导数评价指标:$\sum_{i=1}^{k}(e^{\frac{\Delta \sigma_i-\beta}{10e}}-1)$,其中$\Delta \sigma_i$代表左右间断点处的左右导数的$arctan$差的绝对值,$\beta$如未特殊说明全局默认为10度($\frac {\pi}{18}$);
实验
迭代次数与分段数关系图表
End of explanation
b=[]
for i in range(5,35,5):
temp=[]
for j in range(3):
temp.append(X[i][j]['time'])
b.append(temp)
bs=np.array(b).T
ind=range(5,35,5)
d={'M1':pd.Series(bs[0],index=ind),
'M2':pd.Series(bs[1],index=ind),
'M3':pd.Series(bs[2],index=ind)}
df = pd.DataFrame(d)
df.columns.name='function'
df.index.name='partition'
df
df.plot(kind='bar',fontsize=20)
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("CPU time ",fontsize=16)
df.plot()
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("CPU time",fontsize=16)
Explanation: cpu计算耗时与分段数关系图表
End of explanation |
6,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step5: Exercise
Step6: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step7: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step8: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step9: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step10: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step11: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step12: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step13: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step14: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step15: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step16: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
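A quick sanity check (a sketch, not part of the original exercise) maps a few integers back to words to confirm the encoding.
# Invert the vocabulary mapping and decode the first ten tokens of the first review.
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
print([int_to_vocab[ii] for ii in reviews_ints[0][:10]])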
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
print(len(reviews_ints))
print(reviews_ints[1])
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
print(len(reviews))
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
print(len(features))
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
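To keep the shapes straight, here is a small sanity check (a sketch; with the placeholders defined earlier, the batch and sequence dimensions typically show up as unknown until feed time):
print(outputs.get_shape())   # (batch_size, time_steps, lstm_size); dynamic dims appear as ?
print(final_state)           # a tuple with one LSTMStateTuple per LSTM layer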
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
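As a design note, mean squared error on sigmoid outputs works for this binary task, but sigmoid cross-entropy on raw logits is a common alternative. A hedged sketch (not wired into the training loop below; the names logits and xent_cost are illustrative):
with graph.as_default():
    # Last time step as a raw logit, then cross-entropy against the labels
    logits = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=None)
    xent_cost = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.cast(labels_, tf.float32),
                                                logits=logits))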
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
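A quick way to sanity-check the generator (a sketch; train_x and train_y are the NumPy arrays produced by the earlier preprocessing cells):
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)   # e.g. (batch_size, seq_len) and (batch_size,)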
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
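The training loop above saves to checkpoints/sentiment.ckpt, so create that directory first if it does not exist (a small helper, not part of the original notebook):
import os
if not os.path.isdir('checkpoints'):
    os.makedirs('checkpoints')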
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
6,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: The data has been generated by running the template notebook
usALEX-5samples-PR-raw-dir_ex_aa-fit-AexAem
for each sample.
To recompute the PR data used by this notebook run the
8-spots paper analysis notebook.
Computation
Step2: Save coefficient to disk | Python Code:
data_file = 'results/usALEX-5samples-PR-raw-dir_ex_aa-fit-AexAem.csv'
Explanation: Executed: Mon Mar 27 11:38:35 2017
Duration: 3 seconds.
Direct excitation coefficient fit
This notebook extracts the direct excitation coefficient from the set of 5 us-ALEX smFRET measurements.
What it does?
This notebook performs a weighted average of the direct excitation coefficients fitted from each measurement.
Dependencies
This notebook reads the file:
End of explanation
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
sns.set_style('whitegrid')
palette = ('Paired', 10)
sns.palplot(sns.color_palette(*palette))
sns.set_palette(*palette)
data = pd.read_csv(data_file).set_index('sample')
data
data.columns
d = data[[c for c in data.columns if c.startswith('dir')]]
d.plot(kind='line', lw=3, title='Direct Excitation Coefficient')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., frameon=False);
dir_ex_aa = np.average(data.dir_ex_S_kde_w5, weights=data.n_bursts_aa)
'%.5f' % dir_ex_aa
Explanation: The data has been generated by running the template notebook
usALEX-5samples-PR-raw-dir_ex_aa-fit-AexAem
for each sample.
To recompute the PR data used by this notebook run the
8-spots paper analysis notebook.
Computation
End of explanation
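For clarity, the np.average call above is just the burst-weighted mean; an equivalent explicit form (a sketch using the same columns of data) is:
w = data.n_bursts_aa
dir_ex_aa_check = (data.dir_ex_S_kde_w5 * w).sum() / w.sum()
assert np.isclose(dir_ex_aa, dir_ex_aa_check)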
with open('results/usALEX - direct excitation coefficient dir_ex_aa.csv', 'w') as f:
f.write('%.5f' % dir_ex_aa)
Explanation: Save coefficient to disk
End of explanation |