3,100 | Given the following text problem statement, write Python code to implement the functionality described in the problem statement below
Problem:
I am trying to extract rows from a Pandas dataframe using a list of row names, but I can't get it to work. Here is an example | Problem:
import pandas as pd
import io
data = io.StringIO("""
rs alleles chrom pos strand assembly# center protLSID assayLSID
TP3 A/C 0 3 + NaN NaN NaN NaN
TP7 A/T 0 7 + NaN NaN NaN NaN
TP12 T/A 0 12 + NaN NaN NaN NaN
TP15 C/A 0 15 + NaN NaN NaN NaN
TP18 C/T 0 18 + NaN NaN NaN NaN
""")
df = pd.read_csv(data, delim_whitespace=True).set_index('rs')
test = ['TP3', 'TP7', 'TP18']
def g(df, test):
return df.loc[test]
result = g(df, test) |
3,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Euler Equations
The Euler equations in primitive variable form, $q = (\rho, u, p)^\intercal$ appear as
Step1: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
Step2: The right eigenvectors are what SymPy gives natively. For a given eigenvalue, $\lambda$, these
satisfy
Step3: 0-th right eigenvector
Step4: this corresponds to the eigenvalue
Step5: 1-st right eigenvector
Step6: this corresponds to the eigenvalue
Step7: 2-nd right eigenvector
Step8: this corresponds to the eigenvalue
Step9: Here they are as a matrix, $R$, in order from smallest to largest eigenvalue
Step10: Left Eigenvectors
The left eigenvectors satisfy
Step11: Traditionally, we normalize these such that $l^{(\mu)} \cdot r^{(\nu)} = \delta_{\mu\nu}$
Step12: 0-th left eigenvector
Step13: 1-st left eigenvector
Step14: 2-nd left eigenvector | Python Code:
from sympy import symbols, Matrix, zeros   # Matrix and zeros are used below
rho, u, c = symbols('rho u c')
A = Matrix([[u, rho, 0], [0, u, rho**-1], [0, c**2 * rho, u]])
A
Explanation: Euler Equations
The Euler equations in primitive variable form, $q = (\rho, u, p)^\intercal$ appear as:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{ccc} u & \rho & 0 \\
0 & u & 1/\rho \\
0 & \gamma p & u \end{array} \right )
$$
The sound speed is related to the adiabatic index, $\gamma$, as $c^2 = \gamma p /\rho$.
We can represent this matrix symbolically in SymPy and explore its eigensystem.
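As a quick sanity check, here is a small self-contained sketch (it re-creates the same matrix rather than reusing the objects above) showing that the eigenvalues come out as $u$, $u-c$, $u+c$:
```python
from sympy import symbols, Matrix, simplify

rho, u, c = symbols('rho u c', positive=True)
A = Matrix([[u, rho, 0], [0, u, 1/rho], [0, c**2 * rho, u]])

# the keys of eigenvals() are the eigenvalues; expect u, u - c, u + c in some order
print([simplify(ev) for ev in A.eigenvals().keys()])
```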
End of explanation
A.eigenvals()
Explanation: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
R = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
r.append(rtmp[0])
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
r[n] = v/v[0]
Explanation: The right eigenvectors are what SymPy gives natively. For a given eigenvalue, $\lambda$, these
satisfy:
$$A r = \lambda r$$
Right Eigenvectors
End of explanation
r[0]
Explanation: 0-th right eigenvector
End of explanation
lam[0]
Explanation: this corresponds to the eigenvalue
End of explanation
r[1]
Explanation: 1-st right eigenvector
End of explanation
lam[1]
Explanation: this corresponds to the eigenvalue
End of explanation
r[2]
Explanation: 2-nd right eigenvector
End of explanation
lam[2]
Explanation: this corresponds to the eigenvalue
End of explanation
R = zeros(3,3)
R[:,0] = r[1]
R[:,1] = r[0]
R[:,2] = r[2]
R
Explanation: Here they are as a matrix, $R$, in order from smallest to largest eigenvalue
End of explanation
B = A.transpose()
B
L = B.eigenvects()
l = []
laml = []
for (ev, _, ltmp) in L:
l.append(ltmp[0].transpose())
laml.append(ev)
Explanation: Left Eigenvectors
The left eigenvectors satisfy:
$$l A = \lambda l$$
SymPy doesn't have a method to get left eigenvectors directly, so we take the transpose of this expression:
$$(l A)^\intercal = A^\intercal l^\intercal = \lambda l^\intercal$$
Therefore, the transposes of the left eigenvectors, $l^\intercal$, are the right eigenvectors of the transpose of $A$.
End of explanation
for n in range(len(l)):
if lam[n] == laml[n]:
ltmp = l[n]
p = ltmp.dot(r[n])
l[n] = ltmp/p
Explanation: Traditionally, we normalize these such that $l^{(\mu)} \cdot r^{(\nu)} = \delta_{\mu\nu}$
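A short verification sketch, reusing the `l`, `r` lists built above (and assuming the eigenvalues came out in the same order in both lists): stacking the normalized left eigenvectors as rows and the right eigenvectors as columns should give matrices with $LR = I$.
```python
from sympy import eye, simplify, zeros

L_mat = zeros(3, 3)
R_mat = zeros(3, 3)
for n in range(3):
    L_mat[n, :] = l[n]   # l[n] is a 1 x 3 row vector
    R_mat[:, n] = r[n]   # r[n] is a 3 x 1 column vector

# expect the zero matrix if the normalization l . r = delta holds
print(simplify(L_mat * R_mat - eye(3)))
```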
End of explanation
l[0]
Explanation: 0-th left eigenvector
End of explanation
l[1]
Explanation: 1-st left eigenvector
End of explanation
l[2]
Explanation: 2-nd left eigenvector
End of explanation |
3,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Basic CFNCluster Setup</h1>
<h3 align="center">Author
Step1: 1. Install CFNCluster
Notice
Step2: 2. Upgrade CFNCluster
Step3: 3. Configure CFNCluster
To configure CFNCluster settings, you need to import the package CFNCluster. The below functions tell you how to insert AWS access keys, configure instance types, spot price and S3 resource.
Step4: After you finish configuration, you can call the below function to double check if your settings are correct.
Before you create a new cluster, you can list the currently running clusters, so that you do not reuse an existing cluster name, by calling the function below.
Step5: To create a new cluster, you need to set a cluster name and then call the function below. After the creation is complete, you will see output information including your cluster's IP address.
Step6: 4. Manage cluster
To manage your newly created cluster, you need to import ConnectionManager. The ConnectionManager can create the connection to the master node, execute commands on the master node, and transfer files to the master. To create a connection to the master node, you need to set the hostname, username and your private key file. The hostname IP address (MasterPublicIP) can be found when your cluster creation is complete. The private key file should be the same one you used when configuring CFNCluster.
Step7: After the job is done, you can call the below function to close the connection.
Step8: To delete the cluster, you just need to set the cluster name and call the below function. | Python Code:
import os
import sys
sys.path.append(os.getcwd().replace("notebooks", "cfncluster"))
## Input the AWS account access keys
aws_access_key_id = "/**aws_access_key_id**/"
aws_secret_access_key = "/**aws_secret_access_key**/"
## CFNCluster name
your_cluster_name = "cluster_name"
## The private key pair for accessing cluster.
private_key = "/path/to/private_key.pem"
## Whether to delete the cfncluster after the job is done.
delete_cfncluster = False
Explanation: <h1 align="center">Basic CFNCluster Setup</h1>
<h3 align="center">Author: Guorong Xu ([email protected]) </h3>
<h3 align="center">2016-9-19</h3>
The notebook is an example that tells you how to call API to install, configure CFNCluster package, create a cluster, and connect to the master node. Currently we only support Linux, Mac OS platforms.
<font color='red'>Notice:</font> First step is to fill in the AWS account access keys and then follow the instructions to install CFNCluster package and create a cluster.
End of explanation
import CFNClusterManager
CFNClusterManager.install_cfn_cluster()
Explanation: 1. Install CFNCluster
Notice: The CFNCluster package can only be installed on a Linux box that supports pip installation.
End of explanation
import CFNClusterManager
CFNClusterManager.upgrade_cfn_cluster()
Explanation: 2. Upgrade CFNCluster
End of explanation
import CFNClusterManager
## Configure cfncluster settings
CFNClusterManager.insert_access_keys(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
CFNClusterManager.config_key_name(private_key)
CFNClusterManager.config_instance_types(master_instance_type="m3.large", compute_instance_type="r3.2xlarge")
CFNClusterManager.config_initial_cluster_size(initial_cluster_size="0")
CFNClusterManager.config_spot_price(spot_price="0.7")
CFNClusterManager.config_volume_size(volume_size="300")
CFNClusterManager.config_ebs_snapshot_id(ebs_snapshot_id="snap-5faff708")
CFNClusterManager.config_aws_region_name(aws_region_name="us-west-2")
CFNClusterManager.config_post_install(post_install="s3://path/to/postinstall.sh")
CFNClusterManager.config_vpc_subnet_id(master_subnet_id="subnet-00000000", vpc_id="vpc-00000000")
CFNClusterManager.config_s3_resource(s3_read_resource="s3://bucket_name/", s3_read_write_resource="s3://bucket_name/")
Explanation: 3. Configure CFNCluster
To configure CFNCluster settings, you need to import the package CFNCluster. The below functions tell you how to insert AWS access keys, configure instance types, spot price and S3 resource.
End of explanation
CFNClusterManager.view_cfncluster_config()
CFNClusterManager.list_cfn_cluster()
Explanation: After you finish configuration, you can call the below function to double check if your settings are correct.
Before you create a new cluster, you can list the currently running clusters, so that you do not reuse an existing cluster name, by calling the function below.
End of explanation
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To create a new cluster, you need to set a cluster name and then call the function below. After the creation is complete, you will see output information including your cluster's IP address.
End of explanation
import ConnectionManager
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
Explanation: 4. Manage cluster
To manage your newly created cluster, you need to import ConnectionManager. The ConnectionManager can create the connection to the master node, execute commands on the master node, and transfer files to the master. To create a connection to the master node, you need to set the hostname, username and your private key file. The hostname IP address (MasterPublicIP) can be found when your cluster creation is complete. The private key file should be the same one you used when configuring CFNCluster.
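The ConnectionManager API is not documented in this notebook beyond `connect_master` and `close_connection`. As a rough sketch, assuming the object it returns behaves like a paramiko `SSHClient` (this is an assumption, not something stated here), running a command on the master could look like:
```python
# Sketch only: assumes ssh_client behaves like a paramiko SSHClient.
stdin, stdout, stderr = ssh_client.exec_command("qhost")  # e.g. list the compute nodes
print(stdout.read().decode())
print(stderr.read().decode())
```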
End of explanation
ConnectionManager.close_connection(ssh_client)
Explanation: After the job is done, you can call the below function to close the connection.
End of explanation
import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To delete the cluster, you just need to set the cluster name and call the below function.
End of explanation |
3,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A notebook to test and demonstrate KernelSteinTest. This implements the kernelized Stein discrepancy test of Chwialkowski et al., 2016 and Liu et al., 2016 in ICML 2016.
Step1: Problem
Step2: Test original implementation
Original implementation of Chwialkowski et al., 2016
Step3: | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import kgof
import kgof.data as data
import kgof.density as density
import kgof.goftest as gof
import kgof.kernel as ker
import kgof.util as util
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 18
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
Explanation: A notebook to test and demonstrate KernelSteinTest. This implements the kernelized Stein discrepancy test of Chwialkowski et al., 2016 and Liu et al., 2016 in ICML 2016.
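For reference, the statistic computed in the cells below is based on the Stein kernel (in the notation used by the code, with score $\nabla_x \log p(x)$ and kernel $k$):
$$
h_p(x, y) = \nabla_x \log p(x)^\top \nabla_y \log p(y)\, k(x,y)
+ \nabla_x \log p(x)^\top \nabla_y k(x,y)
+ \nabla_y \log p(y)^\top \nabla_x k(x,y)
+ \mathrm{tr}\left(\nabla_x \nabla_y k(x,y)\right),
$$
$$
\widehat{\mathrm{KSD}^2} = \frac{1}{n^2}\sum_{i,j} h_p(x_i, x_j),
$$
and the test statistic is $n$ times this V-statistic, with the null distribution simulated by a multiplier (wild/Rademacher) bootstrap.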
End of explanation
# true p
seed = 13
d = 2   # dimension of the data (assumed value; left unspecified here)
# sample
n = 800
mean = np.zeros(d)
variance = 1.0
qmean = mean.copy()
qmean[0] = 0
qvariance = variance
p = density.IsotropicNormal(mean, variance)
ds = data.DSIsotropicNormal(qmean, qvariance)
# ds = data.DSLaplace(d=d, loc=0, scale=1.0/np.sqrt(2))
dat = ds.sample(n, seed=seed+1)
X = dat.data()
# Test
alpha = 0.01
# Gaussian kernel with median heuristic
sig2 = util.meddistance(X, subsample=1000)**2
k = ker.KGauss(sig2)
# inverse multiquadric kernel
# From Gorham & Mackey 2017 (https://arxiv.org/abs/1703.01717)
# k = ker.KIMQ(b=-0.5, c=1.0)
bootstrapper = gof.bootstrapper_rademacher
kstein = gof.KernelSteinTest(p, k, bootstrapper=bootstrapper,
alpha=alpha, n_simulate=500, seed=seed+1)
kstein_result = kstein.perform_test(dat, return_simulated_stats=True,
return_ustat_gram=True)
kstein_result
#kstein.compute_stat(dat)
print('p-value: ', kstein_result['pvalue'])
print('reject H0: ', kstein_result['h0_rejected'])
sim_stats = kstein_result['sim_stats']
plt.figure(figsize=(10, 4))
plt.hist(sim_stats, bins=20, normed=True);
plt.stem([kstein_result['test_stat']], [0.03], 'r-o', label='Stat')
plt.legend()
Explanation: Problem: p = Isotropic normal distribution
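Since $p$ is an isotropic normal with mean $\mu$ and variance $\sigma^2$, the score function that the Stein test relies on is
$$
\nabla_x \log p(x) = -\frac{x - \mu}{\sigma^2},
$$
which is exactly what the `-(x-mean)/variance` expression in the reference implementation further down computes.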
End of explanation
from scipy.spatial.distance import squareform, pdist
def simulatepm(N, p_change):
'''
:param N:
:param p_change:
:return:
'''
X = np.zeros(N) - 1
change_sign = np.random.rand(N) < p_change
for i in range(N):
if change_sign[i]:
X[i] = -X[i - 1]
else:
X[i] = X[i - 1]
return X
class _GoodnessOfFitTest:
def __init__(self, grad_log_prob, scaling=1):
#scaling is the sigma^2 as in exp(-|x_y|^2/2*sigma^2)
self.scaling = scaling*2
self.grad = grad_log_prob
# construct (slow) multiple gradient handle if efficient one is not given
def grad_multiple(self, X):
#print self.grad
return np.array([(self.grad)(x) for x in X])
def kernel_matrix(self, X):
# check for stupid mistake
assert X.shape[0] > X.shape[1]
sq_dists = squareform(pdist(X, 'sqeuclidean'))
K = np.exp(-sq_dists/ self.scaling)
return K
def gradient_k_wrt_x(self, X, K, dim):
X_dim = X[:, dim]
assert X_dim.ndim == 1
differences = X_dim.reshape(len(X_dim), 1) - X_dim.reshape(1, len(X_dim))
return -2.0 / self.scaling * K * differences
def gradient_k_wrt_y(self, X, K, dim):
return -self.gradient_k_wrt_x(X, K, dim)
def second_derivative_k(self, X, K, dim):
X_dim = X[:, dim]
assert X_dim.ndim == 1
differences = X_dim.reshape(len(X_dim), 1) - X_dim.reshape(1, len(X_dim))
sq_differences = differences ** 2
return 2.0 * K * (self.scaling - 2 * sq_differences) / self.scaling ** 2
def get_statistic_multiple_dim(self, samples, dim):
num_samples = len(samples)
log_pdf_gradients = self.grad_multiple(samples)
# n x 1
log_pdf_gradients = log_pdf_gradients[:, dim]
# n x n
K = self.kernel_matrix(samples)
assert K.shape[0]==K.shape[1]
# n x n
gradient_k_x = self.gradient_k_wrt_x(samples, K, dim)
assert gradient_k_x.shape[0] == gradient_k_x.shape[1]
# n x n
gradient_k_y = self.gradient_k_wrt_y(samples, K, dim)
# n x n
second_derivative = self.second_derivative_k(samples, K, dim)
assert second_derivative.shape[0] == second_derivative.shape[1]
# use broadcasting to mimic the element wise looped call
pairwise_log_gradients = log_pdf_gradients.reshape(num_samples, 1) \
* log_pdf_gradients.reshape(1, num_samples)
A = pairwise_log_gradients * K
B = gradient_k_x * log_pdf_gradients
C = (gradient_k_y.T * log_pdf_gradients).T
D = second_derivative
V_statistic = A + B + C + D
#V_statistic = C
stat = num_samples * np.mean(V_statistic)
return V_statistic, stat
def compute_pvalues_for_processes(self, U_matrix, chane_prob, num_bootstrapped_stats=300):
N = U_matrix.shape[0]
bootsraped_stats = np.zeros(num_bootstrapped_stats)
with util.NumpySeedContext(seed=10):
for proc in range(num_bootstrapped_stats):
# W = np.sign(orsetinW[:,proc])
W = simulatepm(N, chane_prob)
WW = np.outer(W, W)
st = np.mean(U_matrix * WW)
bootsraped_stats[proc] = N * st
stat = N * np.mean(U_matrix)
return float(np.sum(bootsraped_stats > stat)) / num_bootstrapped_stats
def is_from_null(self, alpha, samples, chane_prob):
dims = samples.shape[1]
boots = 10 * int(dims / alpha)
num_samples = samples.shape[0]
U = np.zeros((num_samples, num_samples))
for dim in range(dims):
U2, _ = self.get_statistic_multiple_dim(samples, dim)
U += U2
p = self.compute_pvalues_for_processes(U, chane_prob, boots)
return p, U
#sigma = np.array([[1, 0.2, 0.1], [0.2, 1, 0.4], [0.1, 0.4, 1]])
def grad_log_correleted(x):
#sigmaInv = np.linalg.inv(sigma)
#return - np.dot(sigmaInv.T + sigmaInv, x) / 2.0
return -(x-mean)/variance
#me = _GoodnessOfFitTest(grad_log_correleted)
qm = _GoodnessOfFitTest(grad_log_correleted, scaling=sig2)
#X = np.random.multivariate_normal([0, 0, 0], sigma, 200)
p_val, U = qm.is_from_null(0.05, X, 0.1)
print(p_val)
plt.imshow(U, interpolation='none')
plt.colorbar()
# U-statistic matrix from the new implementation
H = kstein_result['H']
plt.imshow(H, interpolation='none')
plt.colorbar()
plt.imshow(U-H, interpolation='none')
plt.colorbar()
Explanation: Test original implementation
Original implementation of Chwialkowski et al., 2016
End of explanation
x = np.random.randint(1, 5, 5)
y = np.random.randint(1, 3, 3)
x
y
x[:, np.newaxis] - y[np.newaxis, :]
Explanation:
End of explanation |
3,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Behavioral Cloning Notebook
Overview
This notebook contains project files for the Behavioral Cloning Project.
In this project, I use my knowledge of deep neural networks and convolutional neural networks to clone driving behaviors. I train, validate and test a model using Keras. The model outputs a steering angle for an autonomous vehicle, given images collected from the car.
Udacity has provided a car simulator where you can steer a car around a track for data collection. The image data and steering angles are used to train a neural network and then a trained model is used to drive the car autonomously around the track.
Import Packages
Step1: Read and store lines from driving log csv file
Step2: Image Processing
Step3: Data visualisation
Step4: Take a validation set
Step5: Define Data Generator
Refer to https
Step6: Build Model Architecture
Step7: Train the model | Python Code:
import csv
from PIL import Image
import cv2
import numpy as np
import h5py
import os
from random import shuffle
import sklearn
Explanation: Behavioral Cloning Notebook
Overview
This notebook contains project files for the Behavioral Cloning Project.
In this project, I use my knowledge of deep neural networks and convolutional neural networks to clone driving behaviors. I train, validate and test a model using Keras. The model outputs a steering angle for an autonomous vehicle, given images collected from the car.
Udacity has provided a car simulator where you can steer a car around a track for data collection. The image data and steering angles are used to train a neural network and then a trained model is used to drive the car autonomously around the track.
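Once the model has been trained and saved as model.h5 further below, predicting a steering angle for a single RGB frame could look like the following sketch (the image file name here is hypothetical; the (160, 320, 3) shape matches the model input defined later):
```python
from keras.models import load_model
import numpy as np
from PIL import Image

model = load_model('model.h5')
frame = np.asarray(Image.open('data/IMG/center_example.jpg'))   # hypothetical file name
steering_angle = float(model.predict(frame[None, :, :, :], batch_size=1))
print(steering_angle)
```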
Import Packages
End of explanation
samples = []
with open('data/driving_log.csv') as csvfile:
reader = csv.reader(csvfile)
# if we added headers to row 1 we better skip this line
#iterlines = iter(reader)
#next(iterlines)
for line in reader:
samples.append(line)
Explanation: Read and store lines from driving log csv file
End of explanation
def gray_scale(image):
return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
def clahe_normalise(image):
# create a CLAHE object (Arguments are optional).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(5,5))
return clahe.apply(image)
def process_image(image):
# do some pre processing on the image
# TODO: Continue experimenting with colour, brightness adjustments
#image = gray_scale(image)
#image = clahe_normalise(image)
return image
# TODO: more testing with ImageDataGenerator
# from keras.preprocessing.image import ImageDataGenerator
# https://keras.io/preprocessing/image/
#train_datagen = ImageDataGenerator(
# featurewise_center=True,
# featurewise_std_normalization=True,
# rotation_range=0,
# width_shift_range=0.0,
# height_shift_range=0.0,
# horizontal_flip=True)
Explanation: Image Processing
End of explanation
import random
import matplotlib.pyplot as plt
from PIL import Image
index = random.randint(0, len(samples))
sample = samples[index]
print(sample)
print ("Sample Information")
print("Centre Image Location: ", sample[0])
print("Centre Image Location: ", sample[1])
print("Centre Image Location: ", sample[2])
print("Steering Centre: ", sample[3])
print ("Throttle: ", sample[4])
path = "data/IMG/"
# RGB
img_center = np.asarray(Image.open(path+os.path.basename(sample[0])))
img_left = np.asarray(Image.open(path+os.path.basename(sample[1])))
img_right = np.asarray(Image.open(path+os.path.basename(sample[2])))
# Gray
gray_img_center = gray_scale(img_center)
gray_img_left = gray_scale(img_left)
gray_img_right = gray_scale(img_right)
# Flipped
img_center_flipped = cv2.flip(gray_img_center,1)
img_left_flipped = cv2.flip(gray_img_left,1)
img_right_flipped = cv2.flip(gray_img_right,1)
# Normalised
img_center_flipped_normalised = clahe_normalise(img_center_flipped)
img_left_flipped_normalised = clahe_normalise(img_left_flipped)
img_right_flipped_normalised = clahe_normalise(img_right_flipped)
# Crop
img_center_cropped = img_center_flipped_normalised[65:160-22,0:320]
img_left_cropped = img_left_flipped_normalised[65:160-22,0:320]
img_right_cropped = img_right_flipped_normalised[65:160-22,0:320]
steering_center = float(sample[3]) # steering measurement for centre image
correction = 0.1 # steering offset for left and right images, tune this parameter
steering_left = steering_center + correction
steering_right = steering_center - correction
# And print the results
# RGB
plt.figure(figsize=(20,20))
plt.subplot(4,3,1)
plt.imshow(img_center)
plt.axis('off')
plt.title('Image Center', fontsize=10)
plt.subplot(4,3,2)
plt.imshow(img_left)
plt.axis('off')
plt.title('Image Left', fontsize=10)
plt.subplot(4,3,3)
plt.imshow(img_right)
plt.axis('off')
plt.title('Image Right', fontsize=10)
### Gray
plt.subplot(4,3,4)
plt.imshow(gray_img_center, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Gray Center', fontsize=10)
plt.subplot(4,3,5)
plt.imshow(gray_img_left, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Gray Left', fontsize=10)
plt.subplot(4,3,6)
plt.imshow(gray_img_right, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Gray Right', fontsize=10)
### Flipped Images
plt.subplot(4,3,7)
plt.imshow(img_center_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Center Flipped', fontsize=10)
plt.subplot(4,3,8)
plt.imshow(img_left_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Left Flipped', fontsize=10)
plt.subplot(4,3,9)
plt.imshow(img_right_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Right Flipped', fontsize=10)
### Normalised Images
plt.subplot(4,3,10)
plt.imshow(img_center_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Center Flipped', fontsize=10)
plt.subplot(4,3,11)
plt.imshow(img_left_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Left Flipped', fontsize=10)
plt.subplot(4,3,12)
plt.imshow(img_right_flipped_normalised, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Right Flipped', fontsize=10)
### Cropped Images (these reuse subplots 10-12 and overwrite the panels above)
plt.subplot(4,3,10)
plt.imshow(img_center_cropped, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Center Cropped', fontsize=10)
plt.subplot(4,3,11)
plt.imshow(img_left_cropped, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Left Cropped', fontsize=10)
plt.subplot(4,3,12)
plt.imshow(img_right_cropped, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Image Right Cropped', fontsize=10)
plt.subplots_adjust(wspace=0.2, hspace=0.2, top=0.5, bottom=0, left=0, right=0.5)
plt.savefig('plots/data_visualisation.png')
plt.show()
Explanation: Data visualisation
End of explanation
from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(samples, test_size=0.2)
Explanation: Take a validation set
End of explanation
from numpy import newaxis
def generator(samples, batch_size=32):
# Create empty arrays to contain batch of features and labels
num_samples = len(samples)
while True:
shuffle(samples)
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
batch_features = []
batch_labels = []
for batch_sample in batch_samples:
path = "data/IMG/"
img_center = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[0]))))
img_left = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[1]))))
img_right = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[2]))))
#We now want to create adjusted steering measurement for the side camera images
steering_center = float(batch_sample[3]) # steering measurement for centre image
correction = 0.1 # steering offset for left and right images, tune this parameter
steering_left = steering_center + correction
steering_right = steering_center - correction
# TODO: Add throttle information
batch_features.extend([img_center, img_left, img_right, cv2.flip(img_center,1), cv2.flip(img_left,1), cv2.flip(img_right,1)])
batch_labels.extend([steering_center, steering_left, steering_right, steering_center*-1.0, steering_left*-1.0, steering_right*-1.0])
X_train = np.array(batch_features)
# X_train = X_train[..., newaxis] # if converting to gray scale and normalising, may need to add another axis
# Do some image processing on the data
#train_datagen.fit(X_train)
y_train = np.array(batch_labels)
yield sklearn.utils.shuffle(X_train, y_train) # once we've got our processed batch send them off
Explanation: Define Data Generator
Refer to https://medium.com/@fromtheast/implement-fit-generator-in-keras-61aa2786ce98 for a good tutorial
End of explanation
train_generator = generator(train_samples, batch_size=32)
validation_generator = generator(validation_samples, batch_size=32)
# Imports to build the model Architecture
import matplotlib.pyplot as plt
from keras.models import Model
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda
from keras.layers import Conv2D
from keras.layers.core import Dropout
from keras.layers.noise import GaussianDropout
from keras.layers.pooling import MaxPooling2D
from keras.layers.convolutional import Cropping2D
# In the architecture we add a crop layer
crop_top = 65
crop_bottom = 22
# The input image dimensions
input_height = 160
input_width = 320
new_height = input_height - crop_bottom - crop_top
# Build the model architecture
# Based on http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf
model = Sequential()
model.add(Cropping2D(cropping=((crop_top,crop_bottom),(0,0)), input_shape=(input_height,input_width, 3)))
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(new_height,input_width,3)))
model.add(Conv2D(24,kernel_size=5,strides=(2, 2),activation='relu'))
model.add(Conv2D(36,kernel_size=5,strides=(2, 2),activation='relu'))
model.add(Conv2D(48,kernel_size=5,strides=(2, 2),activation='relu'))
model.add(Conv2D(64,kernel_size=3,strides=(1, 1),activation='relu'))
model.add(Conv2D(64,kernel_size=3,strides=(1, 1),activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(1164))
model.add(Dense(100))
model.add(Dense(50))
model.add(Dense(10))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
print("model summary: ", model.summary())
Explanation: Build Model Architecture
End of explanation
batch_size = 100
# Info: https://medium.com/@fromtheast/implement-fit-generator-in-keras-61aa2786ce98
history_object = model.fit_generator(
train_generator, steps_per_epoch=len(train_samples)/batch_size,
validation_data = validation_generator, validation_steps=len(validation_samples)/batch_size,
epochs=5, verbose=1)
model.save('model.h5')
### print the keys contained in the history object
print(history_object.history.keys())
### plot the training and validation loss for each epoch
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.show()
Explanation: Train the model
End of explanation |
3,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Improving Neural Network Performance
To improve a neural network's prediction performance and convergence speed, the following additional considerations are needed.
Improving the cost (objective) function
Step1: When a is around 5 or 10 the gradient is essentially 0, i.e. extremely small (the green curve case). With such values of a the weights barely get updated, which is the opposite of what we want: a larger error should produce a larger update. This is why the cross-entropy cost function below was introduced.
Cross-Entropy Cost Function
One way to solve this slow-convergence problem is to use a cross-entropy cost function instead of the sum-of-squares form.
$$
\begin{eqnarray}
C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right],
\end{eqnarray}
$$
The derivatives are as follows.
$$
\begin{eqnarray}
\frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left(
\frac{y }{z} -\frac{(1-y)}{1-z} \right)
\frac{\partial z}{\partial w_j} \\
& = & -\frac{1}{n} \sum_x \left(
\frac{y}{\sigma(a)}
-\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \\
& = &
\frac{1}{n}
\sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))}
(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x (z-y) x_j \\
\frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y)
\end{eqnarray}
$$
What we want is for $z$ to move toward $y$ (1 or 0), so this cost plays the same role as the MSE.
When the error is large the derivative is large, so the slope is steep and the weights are updated quickly; when the error is small the derivative is small, so the weights can be adjusted finely.
As the expression shows, the gradient is proportional to the prediction error $z-y$, so
when the error is large, convergence is fast, and
when the error is small, the update slows down, which prevents divergence.
Example cross-entropy implementation
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
Step6: The overfitting problem
A neural network model has many parameters compared to other models.
* (28x28)x(30)x(10) => 24,000
* (28x28)x(100)x(10) => 80,000
With this many parameters the chance of overfitting increases; that is, the test accuracy stops improving (or gets worse) even though the training cost keeps decreasing.
Example:
Step10: Hyper-Tangent Activation and Rectified Linear Unit (ReLu) Activation
Besides the sigmoid, the hyper-tangent and ReLu functions can also be used as activation functions.
The hyper-tangent activation can take negative values and generally converges faster than the sigmoid activation.
$$
\begin{eqnarray}
\tanh(w \cdot x+b),
\end{eqnarray}
$$
$$
\begin{eqnarray}
\tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sigma(a) = \frac{1+\tanh(a/2)}{2},
\end{eqnarray}
$$
Step11: The Rectified Linear Unit (ReLu) activation can produce activation values of unbounded size, and it has the advantage that its gradient does not become 0 (does not vanish) even when the weighted sum $a$ is large.
$$
\begin{eqnarray}
\max(0, w \cdot x+b).
\end{eqnarray}
$$ | Python Code:
sigmoid = lambda x: 1/(1+np.exp(-x))
sigmoid_prime = lambda x: sigmoid(x)*(1-sigmoid(x))
xx = np.linspace(-10, 10, 1000)
plt.plot(xx, sigmoid(xx));
plt.plot(xx, sigmoid_prime(xx));
Explanation: Improving Neural Network Performance
To improve a neural network's prediction performance and convergence speed, the following additional considerations are needed.
Improving the cost (objective) function: the cross-entropy cost function
Regularization
Weight initialization
Softmax output
Choice of activation function: hyper-tangent and ReLu
The gradient and convergence-speed problem
The commonly used sum-of-squares cost function has a near-zero gradient in most regions, so convergence is slow.
http://neuralnetworksanddeeplearning.com/chap3.html
$$
\begin{eqnarray}
z = \sigma (wx+b)
\end{eqnarray}
$$
$$
\begin{eqnarray}
C = \frac{(y-z)^2}{2},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & (z-y)\sigma'(a) x \\
\frac{\partial C}{\partial b} & = & (z-y)\sigma'(a)
\end{eqnarray}
$$
if $x=1$, $y=0$,
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & z \sigma'(a) \\
\frac{\partial C}{\partial b} & = & z \sigma'(a)
\end{eqnarray}
$$
$\sigma'$ is (close to) zero in most cases, so the updates are tiny.
End of explanation
%cd /home/dockeruser/neural-networks-and-deep-learning/src
%ls
import mnist_loader
import network2
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = network2.Network([784, 30, 10], cost=network2.QuadraticCost)
net.large_weight_initializer()
%time result1 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
%time result2 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
plt.plot(result1[1], 'bo-', label="quadratic cost")
plt.plot(result2[1], 'rs-', label="cross-entropy cost")
plt.legend(loc=0)
plt.show()
Explanation: When a is around 5 or 10 the gradient is essentially 0, i.e. extremely small (the green curve case). With such values of a the weights barely get updated, which is the opposite of what we want: a larger error should produce a larger update. This is why the cross-entropy cost function below was introduced.
Cross-Entropy Cost Function
One way to solve this slow-convergence problem is to use a cross-entropy cost function instead of the sum-of-squares form.
$$
\begin{eqnarray}
C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right],
\end{eqnarray}
$$
The derivatives are as follows.
$$
\begin{eqnarray}
\frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left(
\frac{y }{z} -\frac{(1-y)}{1-z} \right)
\frac{\partial z}{\partial w_j} \\
& = & -\frac{1}{n} \sum_x \left(
\frac{y}{\sigma(a)}
-\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \\
& = &
\frac{1}{n}
\sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))}
(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x (z-y) x_j \\
\frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y)
\end{eqnarray}
$$
What we want is for $z$ to move toward $y$ (1 or 0), so this cost plays the same role as the MSE.
When the error is large the derivative is large, so the slope is steep and the weights are updated quickly; when the error is small the derivative is small, so the weights can be adjusted finely.
As the expression shows, the gradient is proportional to the prediction error $z-y$, so
when the error is large, convergence is fast, and
when the error is small, the update slows down, which prevents divergence.
Example cross-entropy implementation (from network2.py):
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
```python
Define the quadratic and cross-entropy cost functions
class QuadraticCost(object):
@staticmethod
def fn(a, y):
Return the cost associated with an output ``a`` and desired output
``y``.
return 0.5*np.linalg.norm(a-y)**2
@staticmethod
def delta(z, a, y): # here, delta means the derivative (the output-layer error term)
Return the error delta from the output layer.
return (a-y) * sigmoid_prime(z)
class CrossEntropyCost(object):
@staticmethod
def fn(a, y): # the cost function itself
Return the cost associated with an output ``a`` and desired output
``y``. Note that np.nan_to_num is used to ensure numerical
stability. In particular, if both ``a`` and ``y`` have a 1.0
in the same slot, then the expression (1-y)*np.log(1-a)
returns nan. The np.nan_to_num ensures that that is converted
to the correct value (0.0).
return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a))) # nan can appear when an invalid value (0) goes into the log; np.nan_to_num converts it to 0
@staticmethod
def delta(z, a, y):
Return the error delta from the output layer. Note that the
parameter ``z`` is not used by the method. It is included in
the method's parameters in order to make the interface
consistent with the delta method for other cost classes.
return (a-y)
```
End of explanation
from ipywidgets import interactive
from IPython.display import Audio, display
def softmax_plot(z1=0, z2=0, z3=0, z4=0):
exps = np.array([np.exp(z1), np.exp(z2), np.exp(z3), np.exp(z4)])
exp_sum = exps.sum()
plt.bar(range(len(exps)), exps/exp_sum)
plt.xlim(-0.3, 4.1)
plt.ylim(0, 1)
plt.xticks([])
v = interactive(softmax_plot, z1=(-3, 5, 0.01), z2=(-3, 5, 0.01), z3=(-3, 5, 0.01), z4=(-3, 5, 0.01))
display(v)
Explanation: The overfitting problem
A neural network model has many parameters compared to other models.
* (28x28)x(30)x(10) => 24,000
* (28x28)x(100)x(10) => 80,000
With this many parameters the chance of overfitting increases; that is, the test accuracy stops improving (or gets worse) even though the training cost keeps decreasing.
Example:
```python
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data,
monitor_evaluation_accuracy=True, monitor_training_cost=True)
```
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting1.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting3.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting4.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting2.png" style="width:90%;">
L2 Regularization
To prevent this kind of overfitting, a regularization term is added to the cost function as follows.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i w_i^2
\end{eqnarray}
$$
๋๋
$$
\begin{eqnarray} C = C_0 + \frac{\lambda}{2n}
\sum_i w_i^2,
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} w \\
\frac{\partial C}{\partial b} & = & \frac{\partial C_0}{\partial b}
\end{eqnarray}
$$
$$
\begin{eqnarray}
w & \rightarrow & w-\eta \frac{\partial C_0}{\partial w}-\frac{\eta \lambda}{n} w \\
& = & \left(1-\frac{\eta \lambda}{n}\right) w -\eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
Example L2 regularization implementation
```python
def total_cost(self, data, lmbda, convert=False):
Return the total cost for the data set ``data``. The flag ``convert`` should be set to False if the data set is the
training data (the usual case), and to True if the data set is
the validation or test data. See comments on the similar (but
reversed) convention for the ``accuracy`` method, above.
cost = 0.0
for x, y in data:
a = self.feedforward(x)
if convert: y = vectorized_result(y)
cost += self.cost.fn(a, y)/len(data)
cost += 0.5*(lmbda/len(data))*sum(np.linalg.norm(w)**2 for w in self.weights)
return cost
def update_mini_batch(self, mini_batch, eta, lmbda, n):
Update the network's weights and biases by applying gradient
descent using backpropagation to a single mini batch. The
mini_batch is a list of tuples (x, y), eta is the
learning rate, lmbda is the regularization parameter, and
n is the total size of the training data set.
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)]
```
```python
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data, lmbda = 0.1,
monitor_evaluation_cost=True, monitor_evaluation_accuracy=True,
monitor_training_cost=True, monitor_training_accuracy=True)
```
<img src="http://neuralnetworksanddeeplearning.com/images/regularized1.png" style="width:90%;" >
<img src="http://neuralnetworksanddeeplearning.com/images/regularized2.png" style="width:90%;" >
L1 Regularization
Instead of L2 regularization, the following L1 regularization can also be used.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i \| w_i \|
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} = \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} \, {\rm sgn}(w)
\end{eqnarray}
$$
$$
\begin{eqnarray}
w \rightarrow w' = w-\frac{\eta \lambda}{n} \mbox{sgn}(w) - \eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
Dropout Regularization
Dropout regularization randomly drops out a fraction $100p$% (usually half) of the hidden-layer neurons in each epoch, excluding them from that optimization step. This keeps the weights from co-adapting (moving together) and gives a model-averaging effect.
<img src="http://neuralnetworksanddeeplearning.com/images/tikz31.png">
After the weight updates are finished, the weights are scaled by multiplying them by $p$ at test time.
<img src="https://datascienceschool.net/upfiles/8e5177d1e7dd46a69d5b316ee8748e00.png">
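A minimal numpy sketch of dropout (added for illustration, not part of the original text): during training a random mask keeps each hidden activation with probability $p$; at test time the weights or activations are scaled by $p$ instead, as described above.
```python
import numpy as np

p = 0.5
h = np.random.randn(5)            # some hidden-layer activations
mask = (np.random.rand(5) < p)    # keep each unit with probability p
h_train = h * mask                # training-time forward pass
h_test = h * p                    # test-time scaling (no units dropped)
print(h_train, h_test)
```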
Weight Initialization
As the number of inputs $n_{in}$ to a neuron increases, the standard deviation of the weighted sum $a$ also increases.
$$ \text{std}(a) \propto \sqrt{n_{in}} $$
<img src="http://neuralnetworksanddeeplearning.com/images/tikz32.png">
For example, if there are 1000 inputs and half of them are 1, the standard deviation is about 22.4.
$$ \sqrt{501} \approx 22.4 $$
<img src="https://docs.google.com/drawings/d/1PZwr7wS_3gg7bXtp16XaZCbvxj4tMrfcbCf6GJhaX_0/pub?w=608&h=153">
Because a large standard deviation like this slows down convergence, the standard deviation of the initial weights must be reduced according to the number of inputs, using the factor
$$\dfrac{1}{\sqrt{n_{in}} }$$
Example weight-initialization implementation
```python
def default_weight_initializer(self):
Initialize each weight using a Gaussian distribution with mean 0
and standard deviation 1 over the square root of the number of
weights connecting to the same neuron. Initialize the biases
using a Gaussian distribution with mean 0 and standard
deviation 1.
Note that the first layer is assumed to be an input layer, and
by convention we won't set any biases for those neurons, since
biases are only ever used in computing the outputs from later
layers.
self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
self.weights = [np.random.randn(y, x)/np.sqrt(x) for x, y in zip(self.sizes[:-1], self.sizes[1:])]
```
<img src="http://neuralnetworksanddeeplearning.com/images/weight_initialization_30.png" style="width:90%;">
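A quick numpy check of the claim above (a sketch added for illustration): with 1000 inputs, half of them equal to 1, N(0,1) weights give a weighted sum with standard deviation around $\sqrt{501} \approx 22.4$, while dividing the weights by $\sqrt{n_{in}}$ brings it back to order 1.
```python
import numpy as np

n_in = 1000
x = np.zeros(n_in); x[:500] = 1.0   # 500 inputs "on", plus a bias term

a_naive  = [np.random.randn(n_in) @ x + np.random.randn() for _ in range(2000)]
a_scaled = [(np.random.randn(n_in) / np.sqrt(n_in)) @ x + np.random.randn() for _ in range(2000)]
print(np.std(a_naive), np.std(a_scaled))   # roughly 22.4 vs roughly 1.2
```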
Softmax Output
The softmax function takes multiple inputs and produces multiple outputs. Because it rescales the outputs to sum to 1 without changing which output is the largest, the outputs can be given a probabilistic interpretation. It is usually applied at the final output layer of a neural network.
$$
\begin{eqnarray}
y^L_j = \frac{e^{a^L_j}}{\sum_k e^{a^L_k}},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sum_j y^L_j & = & \frac{\sum_j e^{a^L_j}}{\sum_k e^{a^L_k}} = 1
\end{eqnarray}
$$
<img src="https://www.tensorflow.org/versions/master/images/softmax-regression-scalargraph.png" style="width:60%;">
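For reference, a numerically stable softmax in plain numpy (a small sketch, not part of the original text): subtracting the maximum before exponentiating does not change the result but avoids overflow for large inputs $a_j$.
```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))    # sums to 1
print(softmax(np.array([1000.0, 1001.0])))   # no overflow
```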
End of explanation
z = np.linspace(-5, 5, 100)
a = np.tanh(z)
plt.plot(z, a)
plt.show()
Explanation: Hyper-Tangent Activation and Rectified Linear Unit (ReLu) Activation
Besides the sigmoid, the hyper-tangent and ReLu functions can also be used as activation functions.
The hyper-tangent activation can take negative values and generally converges faster than the sigmoid activation.
$$
\begin{eqnarray}
\tanh(w \cdot x+b),
\end{eqnarray}
$$
$$
\begin{eqnarray}
\tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sigma(a) = \frac{1+\tanh(a/2)}{2},
\end{eqnarray}
$$
End of explanation
z = np.linspace(-5, 5, 100)
a = np.maximum(z, 0)
plt.plot(z, a)
plt.show()
Explanation: The Rectified Linear Unit (ReLu) activation can produce activation values of unbounded size, and it has the advantage that its gradient does not become 0 (does not vanish) even when the weighted sum $a$ is large.
$$
\begin{eqnarray}
\max(0, w \cdot x+b).
\end{eqnarray}
$$
End of explanation |
3,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing the performance of our best model without modulation of the learning rate or momentum, deactivating the extensions.
Step1: So in the end it reaches almost the same log loss. I'm guessing it probably does this much faster? | Python Code:
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
cd ..
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/8aug_flat_lr.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['alt_picklepath'])
%run check_test_score.py run_settings/8aug_flat_lr.json
%run check_test_score.py run_settings/alexnet_based_norm_global_8aug.json
Explanation: Comparing the performance of our best model without modulation of the learning rate or momentum, deactivating the extensions.
End of explanation
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_norm_global_8aug.json", settings, force=True)
old = pylearn2.utils.serial.load(run_settings['pickle abspath'])
def plot_monitor(model,c = 'valid_y_nll'):
channel = model.monitor.channels[c]
plt.title(c)
plt.plot(channel.example_record,channel.val_record)
return None
plot_monitor(old)
plot_monitor(model)
plot_monitor(old, c="train_y_nll")
plot_monitor(model, c="train_y_nll")
Explanation: So in the end it reaches almost the same log loss. I'm guessing it probably does this much faster?
End of explanation |
3,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1-3
Step1: 2. Linear regression
Let's apply linear regression using the gradient descent method.<br>
If you don't remember the method, go back to Exercise 1-1.<br>
At this point you should already have created the file <tt>funcoes.py</tt>
Step2: Hmmm, something looks odd. Does it?
Think for a moment. Can you explain this result?
3. Logistic regression
Let's apply logistic regression. (Remember | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# Create an array with n numbers.
# Each of these numbers is an example x
# Then extend the examples: x ---> (1,x)
N = 14
x = np.array([0.2, 0.5, 1, 1.1, 1.2, 1.8, 2, 4.3, 4.4, 5.7, 6.9, 7.5, 8, 8.2])
X = np.vstack(zip(np.ones(N), x))
print 'Shape of array X:', X.shape
# Assume the examples in the first half are negative and the rest are
# positive
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
print 'Shape of array y:', y.shape
# show elements in distinct colors to discriminate negative from positive ones
for i in range(N):
if y[i]==1:
plt.plot(X[i,1], y[i], 'bo') # blue circles
else:
plt.plot(X[i,1], y[i], 'ro') # red circles
plt.ylim(-1,2)
plt.xlabel('x')
plt.ylabel('y (classe)')
plt.show()
Explanation: Exercise 1-3: Classification
<b>Objective:</b> Understand how to apply linear regression and logistic regression.
1. How to apply regression to a classification problem
The label $y$ (0 or 1 in our case) can be thought of as the value we want to predict for any observation $\mathbf{x}$.<br>
<br>
So that is exactly the idea. We will use the same data as in Exercise 0-1.<br>
In the plot below, the positive and negative examples appear at different heights (unlike the way they were shown in Exercise 0-1).
End of explanation
# We assume the file funcoes.py has already been created
from funcoes import gradientDescent, computeCost
# guess some initial weights and compute the initial cost
w = np.zeros(2)
initialCost = computeCost(X, y, w)
print 'Initial cost: ', initialCost
# Some gradient descent settings
iterations = 1500
alpha = 0.01
# run gradient descent
w, J_history = gradientDescent(X, y, w, alpha, iterations)
finalCost = computeCost(X, y, w)
print 'Final cost: ', finalCost
# plot the result
print 'Weight w found by gradient descent: (%f, %f)' % (w[0], w[1])
# Plot the linear fit
plt.plot(X[:7,1], y[:7], 'ro')
plt.plot(X[7:,1], y[7:], 'bo')
plt.plot(X[:,1], X.dot(w), '-')
plt.ylim(-1,2)
plt.xlabel('x')
plt.ylabel('y (classe)')
plt.show()
Explanation: 2. Linear regression
Let's apply linear regression using the gradient descent method.<br>
If you don't remember the method, go back to Exercise 1-1.<br>
At this point you should already have created the file <tt>funcoes.py</tt>
End of explanation
from funcoes2 import sigmoid, gradientDescent2, computeCost2
# guess some initial weights and compute the initial cost
w = np.zeros(2)
initialCost = computeCost2(X, y, w)
print 'Initial cost: ', initialCost
# Some gradient descent settings
iterations = 1000
alpha = 0.005
# run gradient descent
w, J_history = gradientDescent2(X, y, w, alpha, iterations)
finalCost = computeCost2(X, y, w)
print 'Final cost: ', finalCost
print w
R = X.dot(w)
plt.plot(X[:,1], X.dot(w), '-')
for i in range(N):
if R[i] > 0.5:
plt.plot(X[i,1], y[i], 'bo')
else:
plt.plot(X[i,1], y[i], 'ro')
plt.xlabel('x')
plt.ylabel('y (class)')
plt.show()
Explanation: Hmmm, something looks odd. Does it?
Think for a moment. Can you explain this result?
3. Logistic regression
Let's apply logistic regression. (Remember: despite the name, logistic regression does not aim to "fit" a function to the observations.)<br>
In logistic regression, the linear combination $\sum_{j=0}^{n} w_j\,x_j$ is passed through the sigmoid function $s(z) = \frac{1}{1+e^{-z}}$<br> That is, we compute:
$$
g(\mathbf{x}) = s(h(\mathbf{x})) = s(\sum_{j=0}^{n} w_j\,x_j)
$$
and compare $g(\mathbf{x})$ with $y$. The idea is that $g(\mathbf{x})$ approximates the posterior $P(y=1|\mathbf{x})$.<br>
For this part you will need the file <tt>funcoes2.py</tt> (which is part of the kit).
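The file funcoes2.py is part of the course kit and is not shown here, so the following is only a sketch of what its cross-entropy cost function might look like (names and details are assumptions):
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def computeCost2_sketch(X, y, w):
    # cross-entropy cost for logistic regression; eps avoids log(0)
    g = sigmoid(X.dot(w))
    eps = 1e-12
    return -np.mean(y * np.log(g + eps) + (1 - y) * np.log(1 - g + eps))
```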
End of explanation |
3,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to TensorFlow
TensorFlow is a deep learning framework that allows you to build neural networks more easily than by hand, and thus can speed up your deep learning development significantly. However, TensorFlow is not just a library for deep learning: it is really a general number-crunching library, similar to Numpy, with the difference that TensorFlow lets us perform machine-learning-specific operations (e.g. derivatives on huge matrices) easily. Using TensorFlow, we can also distribute these computations across our CPU cores and GPU cores, and even across a distributed network of computers.
We will be demonstrating TensorFlow in Python (obviously), but for those who are interested, they have APIs in the following languages
Step1: Writing and running programs in TensorFlow has the following steps
Step2: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step3: To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Quick aside
Step4: When we first defined x we did not have to specify a value for it. A placeholder is simply a variable that you will assign data to later when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening
Step6: Sigmoid in TensorFlow
So while we just saw that you can compute user defined functions, Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. Let's compute the sigmoid function of an input.
Step8: Computing the Cost
You can also use a built-in function to compute the cost of your neural network.
Let's implement the cross entropy loss. The function we will use is
Step10: One Hot Encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have to convert as follows
Step11: Building a Neural Network with TensorFlow
Now that we have seen a little bit about how TensorFlow works, let's build our first neural network. To begin, we will import the MNIST data set
Step13: So, what does this mean? In our data set, there are 55,000 examples of handwritten digits from zero to nine. Each example is a 28x28 pixel image flattened in an array with 784 values representing each pixelโs intensity.
Our goal is to build an algorithm capable of recognizing a digit with high accuracy. To do so, we are going to build a tensorflow neural network model.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX.
Create Placeholders
Our first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Step14: Note
Step16: Initialize Parameters
Step18: As expected, the parameters haven't been evaluated yet.
Forward Propogation in TensorFlow
Step20: Compute Cost
Step23: Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.
After you compute the cost function. You will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
Explanation: Introduction to TensorFlow
TensorFlow is a deep learning framework that allows you to build neural networks more easily than by hand, and thus can speed up your deep learning development significantly. However, TensorFlow is not just a library for deep learning: it is really a general number-crunching library, similar to Numpy, with the difference that TensorFlow lets us perform machine-learning-specific operations (e.g. derivatives on huge matrices) easily. Using TensorFlow, we can also distribute these computations across our CPU cores and GPU cores, and even across a distributed network of computers.
We will be demonstrating TensorFlow in Python (obviously), but for those who are interested, they have APIs in the following languages: C++, Haskell, Java, Go, and Rust. TensorFlow is avaliable as third party packages in C#, Julia, R, and Scala.
Important Note: A lot of this coding tutorial comes from Andrew Ng's Deep Learning course on Coursera
Alright, let's get started! First, let's import a couple of libraries we will be using.
End of explanation
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c) #Question: What should the output be?
Explanation: Writing and running programs in TensorFlow has the following steps:
Create Tensors (variables) that are not yet executed/evaluated.
Write operations between those Tensors.
Initialize your Tensors.
Create a Session.
Run the Session. This will run the operations you'd written above.
Now let us look at an easy example:
End of explanation
sess = tf.Session()
print(sess.run(c))
Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
End of explanation
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
Explanation: To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Quick aside: Note that there are two typical ways to create and use sessions in tensorflow:
Method 1:
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
```
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Let's see what that looks like.
End of explanation
X = tf.constant(np.random.randn(3,1), name="X")
W = tf.constant(np.random.randn(4,3), name="W")
b = tf.constant(np.random.randn(4,1), name="b")
Y = tf.add(tf.matmul(W, X), b)
sess = tf.Session()
result = sess.run(Y)
sess.close()
print("Result = " + str(result))
Explanation: When we first defined x we did not have to specify a value for it. A placeholder is simply a variable that you will assign data to later when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
Linear Function in TensorFlow
Let's start with a very simple exercise, by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1).
End of explanation
def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """
x = tf.placeholder(tf.float32, name="x")
sigmoid = tf.sigmoid(x)
with tf.Session() as sess:
result = sess.run(sigmoid, feed_dict={x: z})
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
Explanation: Sigmoid in TensorFlow
So while we just saw that you can compute user defined functions, Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. Let's compute the sigmoid function of an input.
End of explanation
def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Returns:
    cost -- runs the session of the cost (formula (2))
    """
z = tf.placeholder(tf.float32, name="z")
y = tf.placeholder(tf.float32, name="y")
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)
sess = tf.Session()
cost = sess.run(cost, feed_dict={z: logits, y: labels})
sess.close()
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
Explanation: Computing the Cost
You can also use a built-in function to compute the cost of your neural network.
Let's implement the cross entropy loss. The function we will use is:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)
We will input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \left( y^{(i)} \log \sigma(z^{(i)}) + (1-y^{(i)}) \log \left(1-\sigma(z^{(i)})\right) \right) \tag{2}$$
End of explanation
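To see what this built-in is computing, here is a small numpy rendering of formula (2), added purely as an illustrative cross-check (note that it expects raw logits z, whereas the cell above happens to pass sigmoid(z) in as the logits):
```python
import numpy as np

def sigmoid_cross_entropy_numpy(z, y):
    # elementwise -[ y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z)) ]
    a = 1.0 / (1.0 + np.exp(-z))
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

print(sigmoid_cross_entropy_numpy(np.array([0.2, 0.4, 0.7, 0.9]), np.array([0., 0., 1., 1.])))
```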
def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had label i, then entry (i,j)
    will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """
C = tf.constant(C, name="C")
    one_hot_matrix = tf.one_hot(labels, C, axis=0)
sess = tf.Session()
one_hot = sess.run(one_hot_matrix)
sess.close()
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
Explanation: One Hot Encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have to convert as follows:
<img src="images/one_hot.png">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
tf.one_hot(labels, depth, axis)
End of explanation
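For comparison, here is what the same conversion might look like in plain numpy (an illustrative sketch; the helper name is made up):
```python
import numpy as np

def one_hot_numpy(labels, C):
    one_hot = np.zeros((C, labels.size))           # classes as rows, examples as columns
    one_hot[labels, np.arange(labels.size)] = 1
    return one_hot

print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))
```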
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
X_train = mnist.train.images
Y_train = mnist.train.labels
X_test = mnist.test.images
Y_test = mnist.test.labels
print ("Number of training examples = " + str(X_train.shape[0]))
print ("Number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
Explanation: Building a Neural Network with TensorFlow
Now that we have seen a little bit about how TensorFlow works, let's build our first neural network. To begin, we will import the MNIST data set:
End of explanation
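Before building the model, it can help to eyeball one training example; a quick illustrative sketch (not required by the tutorial):
```python
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(X_train[0].reshape(28, 28), cmap='gray')   # each row of X_train is a flattened 28x28 image
plt.title('Label: ' + str(np.argmax(Y_train[0])))     # Y_train is one-hot encoded
plt.show()
```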
def create_placeholders(num_features, num_classes):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    num_features -- scalar, size of an image vector (num_px * num_px = 28 * 28 = 784)
    num_classes -- scalar, number of classes (from 0 to 9, so -> 10)

    Returns:
    X -- placeholder for the data input, of shape [None, num_features] and dtype "float"
    Y -- placeholder for the input labels, of shape [None, num_classes] and dtype "float"
    """
X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.float32, shape=[None, num_classes])
return X, Y
Explanation: So, what does this mean? In our data set, there are 55,000 examples of handwritten digits from zero to nine. Each example is a 28x28 pixel image flattened in an array with 784 values representing each pixel's intensity.
Our goal is to build an algorithm capable of recognizing a digit with high accuracy. To do so, we are going to build a tensorflow neural network model.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX.
Create Placeholders
Our first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
End of explanation
X, Y = create_placeholders(X_train.shape[1], Y_train.shape[1])
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: Note: When we assign None to our placeholder, it means the placeholder can be fed as many examples as you want to give it. In this case, our placeholder can be fed any multitude of 784-sized values.
End of explanation
def initialize_parameters(num_features, num_classes):
    """
    Initializes parameters to build a neural network with tensorflow.

    Arguments:
    num_features -- scalar, size of an image vector (num_px * num_px = 28 * 28 = 784)
    num_classes -- scalar, number of classes (from 0 to 9, so -> 10)

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """
W1 = tf.get_variable("W1", [num_features, 25], initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("b1", [1,25], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [25, 12], initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("b2", [1,12], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [12, num_classes], initializer=tf.contrib.layers.xavier_initializer())
b3 = tf.get_variable("b3", [1, num_classes], initializer = tf.zeros_initializer())
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
with tf.Session() as sess:
parameters = initialize_parameters(X_train.shape[1], Y_train.shape[1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Initialize Parameters
End of explanation
def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (number of examples, number of features)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  (the shapes are given in initialize_parameters)

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
# Numpy Equivalents:
Z1 = tf.add(tf.matmul(X, W1), b1) # Z1 = np.dot(X, W1) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(A1, W2), b2) # Z2 = np.dot(A1, W2) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(A2, W3), b3) # Z3 = np.dot(A2, W3) + b3
return Z3
Explanation: As expected, the parameters haven't been evaluated yet.
Forward Propagation in TensorFlow
End of explanation
def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost -- Tensor of the cost function
    """
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
return cost
Explanation: Compute Cost
End of explanation
def random_mini_batches(X, Y, mini_batch_size):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data
    Y -- true "label" vector
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
m = X.shape[0] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[permutation, :]
shuffled_Y = Y[permutation, :]
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning
for k in range(0, num_complete_minibatches):
mini_batch_X = shuffled_X[k * mini_batch_size : k * mini_batch_size + mini_batch_size, :]
mini_batch_Y = shuffled_Y[k * mini_batch_size : k * mini_batch_size + mini_batch_size, :]
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size : m, :]
mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size : m, :]
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 200, minibatch_size = 32, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set features
    Y_train -- training set class values
    X_test -- test set features
    Y_test -- test set class values
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 10 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
(m, num_features) = X_train.shape # (m : number of examples in the train set, n_features: input size)
num_classes = Y_train.shape[1] # n_classes : output size
costs = [] # To keep track of the cost
# Create placeholders
X, Y = create_placeholders(num_features, num_classes)
# Initialize parameters
parameters = initialize_parameters(num_features, num_classes)
# Forward propagation: Build the forward propagation in the tensorflow graph
Z3 = forward_propagation(X, parameters)
# Cost function: Add cost function to tensorflow graph
cost = compute_cost(Z3, Y)
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
minibatches = random_mini_batches(X_train, Y_train, minibatch_size)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
            # Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 10 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3,1), tf.argmax(Y,1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Backward propagation & parameter updates
This is where you become grateful for programming frameworks: all of the backpropagation and the parameter updates are taken care of in one line of code, and it is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
To make the optimization you would do:
python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.
Building the model
Now, you will bring it all together!
End of explanation |
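Once model() has run, the returned parameters are plain numpy arrays and can be reused for inference. A minimal sketch of how that could look (the predict_single helper is an illustrative assumption, not part of the original tutorial):
```python
import numpy as np
import tensorflow as tf

def predict_single(image, trained_parameters):
    # image: flattened (784,) numpy array; trained_parameters: numpy weights returned by model()
    x = tf.placeholder(tf.float32, shape=[1, image.shape[0]])
    params = {k: tf.constant(v) for k, v in trained_parameters.items()}
    z3 = forward_propagation(x, params)            # reuse the forward pass defined above
    with tf.Session() as sess:
        logits = sess.run(z3, feed_dict={x: image.reshape(1, -1)})
    return int(np.argmax(logits, axis=1)[0])

print(predict_single(X_test[0], parameters), np.argmax(Y_test[0]))
```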
3,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook simulates an antidromic stimulus reaching a pool of motoneurons and a renshaw cell.
Pablo Alejandro
Step1: The antidromic stimulus at the PTN.
Step2: The spike times of each MN along the simulation.
Step3: The force produced.
Step4: The membrane potential at the soma of the first motoneuron. | Python Code:
import sys
sys.path.insert(0, '..')
import time
import matplotlib.pyplot as plt
%matplotlib notebook
plt.rcParams['figure.figsize']= 7,7
import numpy as np
from Configuration import Configuration
from MotorUnitPool import MotorUnitPool
from InterneuronPool import InterneuronPool
from SynapsesFactory import SynapsesFactory
conf = Configuration('confAntidromicStimulationofMNandRC.rmto')
pools = dict()
pools[0] = MotorUnitPool(conf, 'SOL')
pools[1] = InterneuronPool(conf, 'RC', 'ext')
for i in xrange(0,len(pools[0].unit)):
pools[0].unit[i].createStimulus()
Syn = SynapsesFactory(conf, pools)
t = np.arange(0.0, conf.simDuration_ms, conf.timeStep_ms)
RC_mV = np.zeros_like(t)
MN_mV = np.zeros_like(t)
tic = time.clock()
for i in xrange(0, len(t)):
pools[0].atualizeMotorUnitPool(t[i]) # MN pool
pools[2].atualizePool(t[i]) # RC synaptic Noise
pools[1].atualizeInterneuronPool(t[i]) # RC pool
RC_mV[i] = pools[1].v_mV[0]
MN_mV[i] = pools[0].v_mV[1]
toc = time.clock()
print str(toc - tic) + ' seconds'
plt.figure()
plt.plot(t, pools[0].unit[0].nerveStimulus_mA)
Explanation: This notebook simulates an antidromic stimulus reaching a pool of motoneurons and a Renshaw cell.
Pablo Alejandro
End of explanation
pools[0].listSpikes()
plt.figure()
plt.plot(pools[0].poolSomaSpikes[:, 0],
pools[0].poolSomaSpikes[:, 1]+1, '.')
Explanation: The antidromic stimulus at the PTN.
End of explanation
plt.figure()
plt.plot(t, pools[0].Muscle.force, '-')
Explanation: The spike times of each MN along the simulation.
End of explanation
plt.figure()
plt.plot(t, MN_mV, '-')
Explanation: The force produced.
End of explanation
plt.figure()
plt.plot(t, RC_mV, '-')
plt.xlim((90,145))
Explanation: The membrane potential at the soma of the first motoneuron.
End of explanation |
3,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classificaรงรฃo por Regras Prรฉ-Definidas
O problema com o qual vamos lidar รฉ o de classificar automaticamente elementos de um conjunto atravรฉs de suas caracterรญsticas mensurรกveis. Trata-se, assim, do problema de observar elementos e, atravรฉs dessas observaรงรตes, inferir qual รฉ a classe ร qual o elemento pertence. Neste caderno, iremos utilizar um processo de inferรชncia baseado em regras prรฉ-definidas.
Objetivos
Ao final desta iteraรงรฃo, o estudante serรก capaz de
Step1: Conjunto de dados
No nosso estudo de caso, verificaremos se รฉ possรญvel identificar o esporte que um jogador pratica observando apenas suas caracterรญsticas fรญsicas. Para isso, utilizaremos dados reais de altura e peso dos jogadores das seleรงรตes brasileiras de futebol e vรดlei. Os dados estรฃo num arquivo CSV, que pode ser carregado para uma variรกvel de ambiente para nossa simulaรงรฃo.
Step2: Visualizando dados
Cada um dos elementos do conjunto de dados รฉ caracterizado por trรชs valores
Step3: O scatter plot nos permite verificar a relevรขncia de cada uma das caracterรญsticas que medimos para o problema de classificaรงรฃo em questรฃo. Observando a distribuiรงรฃo dos dados no eixo vertical, verificamos que jogadores de vรดlei, quase sempre, sรฃo mais altos que os jogadores de futebol. Observando a distribuiรงรฃo de dados no eixo horizontal, verificamos que jogadores futebol tendem a ser mais leves que os jogadores de vรดlei, mas nรฃo hรก uma divisรฃo tรฃo clara quanto no caso da altura.
Isso nos indica que poderรญamos escolher um limiar de altura acima do qual um jogador seria classificado como um jogador de vรดlei, e, consequentemente, abaixo do qual ele seria classificado como jogador de futebol. Implementei o classificador como uma funรงรฃo que recebe como entrada um valor de limiar e um conjunto de dados, e retorna os rรณtulos que devem ser associados a cada um dos pontos desse conjunto. A funรงรฃo aplica a regra do limiar a cada um dos elementos do vetor de dados recebido na entrada.
Step4: A escolha de um limiar de classificaรงรฃo pode ser interpretada como a divisรฃo do espaรงo definido pelas caracterรญsticas observadas em partiรงรตes, sendo que cada uma corresponde a uma classe. Se escolhermos um limiar de 1.90 m para a decisรฃo, observaremos o seguinte particionamento
Step5: Aplicando a regra de decisรฃo
Chega entรฃo o momento de aplicar, de fato, a regra de decisรฃo aos dados de nosso conjunto. Apรณs essa aplicaรงรฃo, poderemos comparar o resultado da classificaรงรฃo automรกtica com o gabarito (ground-truth), o que nos permite contar erros e acertos. Em especial, nos interessa contar erros e acertos separadamente para cada classe de jogadores.
Step6: Um resultado bastante interessante desta execuรงรฃo รฉ que, embora as figuras de scatter plot tenham mostrado apenas quatro jogadores de vรดlei prรณximos ร fronteira de decisรฃo (e, portanto, sujeitos a erros), o sistema de avaliaรงรฃo acusou cinco erros na classificaรงรฃo. Isso aconteceu porque alguns pontos foram sobrepostos ao serem desenhados na imagem. Uma possรญvel maneira de contornar esse problema รฉ adicionando um pequeno ruรญdo aleatรณrio ร posiรงรฃo de cada um dos pontos, evidenciando os elementos ocultos.
Step7: Esse procedimento evidencia caracterรญticas que poderiam ficar ocultas no conjunto de dados. Porรฉm, se aplicado em excesso, pode tornar a representaรงรฃo menos precisa. Uma ferramenta de anรกlise de dados que permite verificar quantos pontos estรฃo em cada posiรงรฃo รฉ o histograma.
Step8: O histograma traz uma representaรงรฃo mais clara do comportamento dos dados, evidenciando a frequรชncia de ocorrรชncia de cada faixa de valores em cada dimensรฃo. Porรฉm, ao mesmo tempo, nรฃo evidencia as correlaรงรตes entre variรกveis. De qualquer forma, trata-se de uma ferramenta importante para verificar quais caracterรญsticas sรฃo relevantes no processo de classificaรงรฃo.
Otimizando o processo de classificaรงรฃo
Nรฃo temos, neste momento, nenhum motivo para crer que nosso limiar inicial - 1.9m - seja o melhor possรญvel (ou
Step9: Deve ficar รณbvio que a otimizaรงรฃo do limiar atravรฉs da variaรงรฃo manual rapidamente se torna um processo laborioso. Embora algumas respostas sejam claramente piores que outras, existem vรกrias respostas que parecem boas dentro de um intervalo muito pequeno, e nรฃo temos como garantir que uma delas seja, necessariamente, รณtima. Porรฉm, podemos aumentar nossas chances de encontrar um valor รณtimo se automatizarmos o processo de busca exaustiva.
O cรณdigo abaixo executa o processo de busca exaustiva variando o limiar entre dois limites - inicial e final - com passos de tamanho conhecido. A cada passo, verifica se o resultado encontrado รฉ melhor que o melhor resultado armazenado atรฉ entรฃo, e, caso seja, armazena esse novo resultado. Verifique o que acontece com o resultado ao tornar o passo progressivamente mais refinado. | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
Explanation: Classification by Pre-Defined Rules
The problem we will deal with is that of automatically classifying the elements of a set based on their measurable characteristics. In other words, it is the problem of observing elements and, from those observations, inferring which class each element belongs to. In this notebook, we will use an inference process based on pre-defined rules.
Objectives
By the end of this iteration, the student will be able to:
* Understand the importance of adequate features in datasets
* Analyze the relevance of data features using scatter plots and histograms
* Understand the concept of a decision boundary
* Build classification rules from manual data analysis
* Optimize rule parameters using exhaustive search
End of explanation
import csv
with open("biometria.csv", 'rb') as f:
dados = list(csv.reader(f))
for d in dados:
print d
Explanation: Dataset
In our case study, we will check whether it is possible to identify the sport a player practices by observing only their physical characteristics. For this, we will use real height and weight data from the players of the Brazilian national football (soccer) and volleyball teams. The data is in a CSV file, which can be loaded into an environment variable for our simulation.
End of explanation
# Split the data into numpy arrays
rotulos_volei = [d[0] for d in dados[1:-1] if d[0] == 'V']
rotulos_futebol = [d[0] for d in dados[1:-1] if d[0] == 'F']
altura_volei = np.array([float(d[1]) for d in dados[1:-1] if d[0] == 'V'])
altura_futebol = np.array([float(d[1]) for d in dados[1:-1] if d[0] == 'F'])
peso_volei = np.array([float(d[2]) for d in dados[1:-1] if d[0] == 'V'])
peso_futebol = np.array([float(d[2]) for d in dados[1:-1] if d[0] == 'F'])
plt.figure();
plt.scatter(peso_volei, altura_volei, color='red');
plt.scatter(peso_futebol, altura_futebol, color='blue');
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['V', 'F'], loc=4);
Explanation: Visualizing the data
Each element of the dataset is characterized by three values: the sport the player practices (Football or Volleyball), their height and their weight. Visualizing all of this data as a table, however, is clearly impractical. We can imagine how even larger datasets would behave: a table with football and volleyball players from every country taking part in the world championship, for example, would obviously be far too large to analyze as raw numbers.
A very common way of visualizing data is the scatter plot. It is a type of figure in which the points of a set are drawn on a plane. We will use colors to identify the sport associated with each data point.
End of explanation
def classificador_limiar(limiar, dados, rotulos=('V', 'F')):
ans = []
for i in xrange(len(dados)):
if dados[i] > limiar:
ans.append(rotulos[0])
else:
ans.append(rotulos[1])
return ans
print "Exemplo: ", classificador_limiar(1.9, [1.99, 1.9, 1.89, 1.3, 2.1])
Explanation: The scatter plot lets us check how relevant each measured characteristic is for the classification problem at hand. Looking at the distribution of the data along the vertical axis, we see that volleyball players are almost always taller than football players. Looking at the distribution along the horizontal axis, we see that football players tend to be lighter than volleyball players, but the split is not as clear as it is for height.
This suggests that we could pick a height threshold above which a player is classified as a volleyball player and, consequently, below which they are classified as a football player. The classifier is implemented as a function that takes a threshold value and a dataset as input and returns the labels to be assigned to each point in that set. The function applies the threshold rule to every element of the input data vector.
End of explanation
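For reference, the same threshold rule can be written in vectorized form with numpy; this is an equivalent illustrative sketch, not part of the original notebook:
```python
import numpy as np

def classificador_limiar_np(limiar, dados, rotulos=('V', 'F')):
    dados = np.asarray(dados, dtype=float)
    return np.where(dados > limiar, rotulos[0], rotulos[1]).tolist()

print(classificador_limiar_np(1.9, [1.99, 1.9, 1.89, 1.3, 2.1]))
```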
plt.figure();
plt.scatter(peso_volei, altura_volei, color='red');
plt.scatter(peso_futebol, altura_futebol, color='blue');
plt.plot([60, 120], [1.9, 1.9], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
Explanation: Choosing a classification threshold can be interpreted as dividing the space defined by the observed characteristics into partitions, each one corresponding to a class. If we choose a threshold of 1.90 m for the decision, we obtain the following partitioning:
End of explanation
def comparar_resultados(resultado, gabarito):
acertos = 0
erros = 0
for i in range(len(resultado)):
if resultado[i] == gabarito[i]:
acertos += 1
else:
erros += 1
return acertos, erros
# Executar classificacao
classificacao_volei = classificador_limiar(1.9, altura_volei)
classificacao_futebol = classificador_limiar(1.9, altura_futebol)
# Comparar resultados com gabarito
resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei)
resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol)
# Mostrar resultados
print "Volei: ", resultados_volei
print "Futebol:", resultados_futebol
Explanation: Applying the decision rule
It is now time to actually apply the decision rule to the data in our set. After doing so, we can compare the result of the automatic classification with the ground truth, which lets us count hits and misses. In particular, we are interested in counting hits and misses separately for each class of players.
End of explanation
plt.figure();
plt.scatter(peso_volei + 2*np.random.random(peso_volei.shape), altura_volei, color='red');
plt.scatter(peso_futebol + 2*np.random.random(peso_futebol.shape), altura_futebol, color='blue');
plt.plot([60, 120], [1.9, 1.9], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
Explanation: A rather interesting outcome of this run is that, although the scatter plots showed only four volleyball players close to the decision boundary (and therefore prone to errors), the evaluation reported five classification errors. This happened because some points overlapped when drawn in the figure. One possible way around this is to add a small random jitter to the position of each point, revealing the hidden elements.
End of explanation
plt.figure();
plt.hist([altura_volei, altura_futebol], 10, normed=0, histtype='bar',
color=['red', 'blue'],
label=['V', 'F']);
plt.xlabel('Altura (m)');
plt.ylabel('Quantidade de jogadores');
plt.legend(loc=1);
plt.figure();
plt.hist([peso_volei, peso_futebol], 10, normed=0, histtype='bar',
color=['red', 'blue'],
label=['V', 'F']);
plt.xlabel('Peso (kg)');
plt.ylabel('Quantidade de jogadores');
plt.legend(loc=1);
Explanation: This procedure reveals characteristics that could otherwise stay hidden in the dataset. However, if applied in excess, it can make the representation less accurate. A data analysis tool that lets us check how many points fall in each position is the histogram.
End of explanation
limiar = 1.7
# Executar classificacao
classificacao_volei = classificador_limiar(limiar, altura_volei)
classificacao_futebol = classificador_limiar(limiar, altura_futebol)
# Comparar resultados com gabarito
resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei)
resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol)
# Mostrar resultados e limiar de classificaรงรฃo
plt.figure();
plt.scatter(peso_volei + 2*np.random.random(peso_volei.shape), altura_volei, color='red');
plt.scatter(peso_futebol + 2*np.random.random(peso_futebol.shape), altura_futebol, color='blue');
plt.plot([60, 120], [limiar, limiar], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
print "Total de acertos:", resultados_volei[0] + resultados_futebol[0]
Explanation: The histogram gives a clearer picture of how the data behaves, showing how frequently each range of values occurs in each dimension. At the same time, however, it does not show correlations between variables. Either way, it is an important tool for checking which characteristics are relevant to the classification process.
Optimizing the classification process
At this point we have no reason to believe that our initial threshold of 1.9 m is the best possible (i.e. optimal) one for the automatic classification we set out to do. In the code snippet below, you can vary the threshold value and then visualize the decision boundary and the total number of correct classifications. Before moving on, try a few threshold values and attempt to maximize the number of hits.
End of explanation
limiares = [] # limiares candidatos
respostas = []
# Limiares que serao testados
inicial = 1.6
passo = 0.001
final = 2.2
i = inicial
melhor_limiar = inicial
melhor_classificacao = 0
while i <= final :
# Executar classificacao
classificacao_volei = classificador_limiar(i, altura_volei);
classificacao_futebol = classificador_limiar(i, altura_futebol);
# Comparar resultados com gabarito
resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei);
resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol);
# Calcula o total de acertos e armazena o resultado
res = resultados_volei[0] + resultados_futebol[0]
respostas.append(res);
limiares.append(i);
# Verifica se consegui uma classificacao melhor
if res > melhor_classificacao:
melhor_classificacao = res
melhor_limiar = i
# Da mais um passo
i += passo;
# Mostrar resultados e limiar de classificaรงรฃo
plt.figure();
plt.plot(limiares, respostas);
plt.ylabel('Acertos');
plt.xlabel('Limiar');
print "Melhor limiar:", melhor_limiar, " Acertos:", melhor_classificacao
Explanation: It should be clear that optimizing the threshold by varying it manually quickly becomes laborious. Although some answers are clearly worse than others, several answers look good within a very small interval, and we cannot guarantee that any of them is necessarily optimal. We can, however, improve our chances of finding an optimal value by automating the exhaustive search process.
The code below runs an exhaustive search, varying the threshold between an initial and a final limit with steps of a known size. At each step, it checks whether the result found is better than the best result stored so far and, if so, stores this new result. Check what happens to the result as the step is made progressively finer.
End of explanation |
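As a follow-up to the exhaustive search above, the same sweep can be written in a few vectorized numpy lines; this sketch is an addition for illustration and reuses the arrays defined earlier in the notebook:
```python
import numpy as np

alturas = np.concatenate([altura_volei, altura_futebol])
gabarito = np.array(['V'] * len(altura_volei) + ['F'] * len(altura_futebol))
candidatos = np.arange(1.6, 2.2, 0.001)
# count, for each candidate threshold, how many labels the rule gets right
acertos = [np.sum(np.where(alturas > limiar, 'V', 'F') == gabarito) for limiar in candidatos]
print(candidatos[int(np.argmax(acertos))], max(acertos))
```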
3,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonnegative Matrix Factorization In The Movielens Dataset
This example continues illustrating using pandas-munging capabilities in estimators building features that draw from several rows, this time using NMF (nonnegative matrix factorization). We will use a single table from the Movielens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets
Step1: Munging NMF With Pandas
In Simple Row-Aggregating Features In The Movielens Dataset we looked at direct attributes obtainable from the rankings
Step2: We now use NMF for decomposition, and then find the user latent factors in U and item latent factors in I
Step3: Note that the Ibex version of NMF sets the indexes and columns of the U and I appropriately.
Step4: Pandas makes it easy to merge the user and item latent factors to the users and items, respectively.
Step5: Let's also merge into the results the number of occurrences of the users and items, respectively.
Step6: We now have a dataframe of latent variables. Let's build a random forest regressor, and use it on this dataframe.
Step7: Finally, let's check the feature importances.
Step8: Building A Pandas-Munging Estimator
We'll now build a Scikit-Learn / Pandas step doing the above.
Step9: We can now use cross validation to assess this scheme. | Python Code:
import os
from sklearn import base
import pandas as pd
import scipy as sp
import seaborn as sns
sns.set_style('whitegrid')
sns.despine()
import ibex
from ibex.sklearn import model_selection as pd_model_selection
from ibex.sklearn import decomposition as pd_decomposition
from ibex.sklearn import ensemble as pd_ensemble
%pylab inline
ratings = pd.read_csv(
'movielens_data/ml-100k/u.data',
sep='\t',
header=None,
names=['user_id', 'item_id', 'rating', 'timestamp'])
features = ['user_id', 'item_id']
ratings[features + ['rating']].head()
Explanation: Nonnegative Matrix Factorization In The Movielens Dataset
This example continues to illustrate the use of pandas-munging capabilities in estimators to build features that draw from several rows, this time using NMF (nonnegative matrix factorization). We will use a single table from the Movielens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS)).
Loading The Data
In this example too, we'll only use the dataset table describing the ratings themselves. I.e., each row is an instance of a single rating given by a specific user to a specific movie.
End of explanation
UI = pd.pivot_table(ratings, values='rating', index='user_id', columns ='item_id')
UI
Explanation: Munging NMF With Pandas
In Simple Row-Aggregating Features In The Movielens Dataset we looked at direct attributes obtainable from the rankings: the average user and item ranking. Here we'll use Pandas to bring the dataset to a form where we can find latent factors through NMF.
First we pivot the table so that we have a UI matrix of the users as rows, the items as columns, and the ratings as the values:
End of explanation
d = pd_decomposition.NMF(n_components=20)
U = d.fit_transform(UI.fillna(0))
I = d.components_
Explanation: We now use NMF for decomposition, and then find the user latent factors in U and item latent factors in I:
End of explanation
U.head()
I.head()
Explanation: Note that the Ibex version of NMF sets the indexes and columns of the U and I appropriately.
End of explanation
ratings.head()
rating_comps = pd.merge(
ratings,
U,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
I.T,
left_on='item_id',
right_index=True,
how='left')
rating_comps.head()
Explanation: Pandas makes it easy to merge the user and item latent factors to the users and items, respectively.
End of explanation
rating_comps = pd.merge(
rating_comps,
ratings.groupby(ratings.user_id).size().to_frame().rename(columns={0: 'user_id_count'}),
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
ratings.groupby(ratings.item_id).size().to_frame().rename(columns={0: 'item_id_count'}),
left_on='user_id',
right_index=True,
how='left')
prd_features = [c for c in rating_comps if 'comp_' in c] + ['user_id_count', 'item_id_count']
rating_comps.head()
Explanation: Let's also merge into the results the number of occurrences of the users and items, respectively.
End of explanation
prd = pd_ensemble.RandomForestRegressor().fit(rating_comps[prd_features], ratings.rating)
prd.score(rating_comps[prd_features], ratings.rating)
Explanation: We now have a dataframe of latent variables. Let's build a random forest regressor, and use it on this dataframe.
End of explanation
prd.feature_importances_.to_frame().plot(kind='barh');
Explanation: Finally, let's check the feature importances.
End of explanation
class RatingsFactorizer(base.BaseEstimator, base.TransformerMixin, ibex.FrameMixin):
def fit(self, X, y):
        X = pd.concat([X[['user_id', 'item_id']], y], axis=1)
        X.columns = ['user_id', 'item_id', 'rating']
        self._user_id_count = X.groupby(X.user_id).size().to_frame().rename(columns={0: 'user_id_count'})
        self._item_id_count = X.groupby(X.item_id).size().to_frame().rename(columns={0: 'item_id_count'})
        UI = pd.pivot_table(X, values='rating', index='user_id', columns='item_id')
d = pd_decomposition.NMF(n_components=10)
self._U = d.fit_transform(UI.fillna(0))
self._I = d.components_
return self
def transform(self, X):
rating_comps = pd.merge(
X[['user_id', 'item_id']],
self._U,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._I.T,
left_on='item_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._user_id_count,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._item_id_count,
left_on='user_id',
right_index=True,
how='left')
prd_features = [c for c in rating_comps if 'comp_' in c] + ['user_id_count', 'item_id_count']
return rating_comps[prd_features].fillna(0)
Explanation: Building A Pandas-Munging Estimator
We'll now build a Scikit-Learn / Pandas step doing the above.
End of explanation
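Before wiring the transformer into a pipeline, a quick sanity check of its output on its own can be useful; an illustrative (and somewhat slow, since it factorizes the full ratings matrix) sketch:
```python
rf = RatingsFactorizer().fit(ratings[features], ratings.rating)
rf.transform(ratings[features].head())   # latent factors plus count features for five ratings
```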
prd = RatingsFactorizer() | pd_ensemble.RandomForestRegressor()
hist(
pd_model_selection.cross_val_score(
prd,
ratings[features],
ratings.rating,
cv=20,
n_jobs=-1),
color='grey');
xlabel('CV Score')
ylabel('Num Occurrences')
figtext(
0,
-0.1,
'Histogram of cross-validated scores');
Explanation: We can now use cross validation to assess this scheme.
End of explanation |
3,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
Project
Step1: <a id='step1'></a>
Step 1
Step2: Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
Write a Human Face Detector
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
Step3: (IMPLEMENTATION) Assess the Human Face Detector
Question 1
Step4: We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning
Step5: <a id='step2'></a>
Step 2
Step6: Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
(IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.
Step7: (IMPLEMENTATION) Write a Dog Detector
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
Step8: (IMPLEMENTATION) Assess the Dog Detector
Question 2
Step9: We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
Step10: <a id='step3'></a>
Step 3
Step11: Question 3
Step12: Question 4
Step14: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.
Step15: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
Step16: <a id='step4'></a>
Step 4
Step17: (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.
Step18: Question 5
Step19: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.
Step20: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
Step21: (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.
Step22: <a id='step5'></a>
Step 5
Step23: <a id='step6'></a>
Step 6 | Python Code:
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
Explanation: Convolutional Neural Networks
Project: Write an Algorithm for a Dog Identification App
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).
In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
Step 0: Import Datasets
Step 1: Detect Humans
Step 2: Detect Dogs
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
Step 5: Write your Algorithm
Step 6: Test Your Algorithm
<a id='step0'></a>
Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
* Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.
* Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.
Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.
End of explanation
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Explanation: <a id='step1'></a>
Step 1: Detect Humans
In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
End of explanation
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
Explanation: Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
Write a Human Face Detector
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
End of explanation
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
Explanation: (IMPLEMENTATION) Assess the Human Face Detector
Question 1: Use the code cell below to test the performance of the face_detector function.
- What percentage of the first 100 images in human_files have a detected human face?
- What percentage of the first 100 images in dog_files have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.
Answer:
(You can print out your results and/or write your percentages in this cell)
End of explanation
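One possible way to carry out this check (a sketch, not the reference solution for the assignment):
```python
human_detected = np.mean([face_detector(f) for f in human_files_short])
dog_detected = np.mean([face_detector(f) for f in dog_files_short])
print('Faces detected in human_files_short: {:.0%}'.format(human_detected))
print('Faces detected in dog_files_short:   {:.0%}'.format(dog_detected))
```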
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
Explanation: We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
End of explanation
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
Explanation: <a id='step2'></a>
Step 2: Detect Dogs
In this section, we use a pre-trained model to detect dogs in images.
Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.
End of explanation
from PIL import Image
import torchvision.transforms as transforms
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
return None # predicted class index
Explanation: Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
(IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.
End of explanation
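For reference, one plausible way to complete such a function, using the conventional ImageNet preprocessing (the resize/crop sizes and normalization constants are the usual defaults, not something mandated by this notebook, and the helper name is made up):
```python
import torch
from PIL import Image
import torchvision.transforms as transforms

def VGG16_predict_sketch(img_path):
    img = Image.open(img_path).convert('RGB')
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])])
    x = preprocess(img).unsqueeze(0)        # add a batch dimension
    if use_cuda:
        x = x.cuda()
    VGG16.eval()
    with torch.no_grad():
        out = VGG16(x)
    return int(out.argmax(dim=1).item())    # integer in [0, 999]
```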
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
return None # true/false
Explanation: (IMPLEMENTATION) Write a Dog Detector
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
End of explanation
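A possible completion, reusing the hypothetical prediction helper sketched above (ImageNet indices 151 to 268 are the dog categories mentioned in the text):
```python
def dog_detector_sketch(img_path):
    idx = VGG16_predict_sketch(img_path)   # illustrative helper from the earlier sketch
    return 151 <= idx <= 268
```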
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
Explanation: (IMPLEMENTATION) Assess the Dog Detector
Question 2: Use the code cell below to test the performance of your dog_detector function.
- What percentage of the images in human_files_short have a detected dog?
- What percentage of the images in dog_files_short have a detected dog?
Answer:
End of explanation
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
Explanation: We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
End of explanation
import os
from torchvision import datasets
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
Explanation: <a id='step3'></a>
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!
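A possible set of loaders is sketched below; the transforms, the batch size of 20 and the use of augmentation are illustrative choices, not requirements. It uses the datasets import from the cell above.
import torch
from torchvision import transforms
train_transform = transforms.Compose([transforms.RandomResizedCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ToTensor()])
eval_transform = transforms.Compose([transforms.Resize(256),
                                     transforms.CenterCrop(224),
                                     transforms.ToTensor()])
train_data = datasets.ImageFolder('dogImages/train', transform=train_transform)
valid_data = datasets.ImageFolder('dogImages/valid', transform=eval_transform)
test_data = datasets.ImageFolder('dogImages/test', transform=eval_transform)
loaders_scratch = {
    'train': torch.utils.data.DataLoader(train_data, batch_size=20, shuffle=True),
    'valid': torch.utils.data.DataLoader(valid_data, batch_size=20, shuffle=False),
    'test': torch.utils.data.DataLoader(test_data, batch_size=20, shuffle=False),
}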
End of explanation
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
def forward(self, x):
## Define forward behavior
return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()
Explanation: Question 3: Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
Answer:
(IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
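For reference, one workable and deliberately small architecture for 224x224 RGB inputs and 133 breed classes might look like the sketch below; the exact layer sizes are assumptions. It uses the nn and F imports from the cell above.
class SmallNet(nn.Module):
    def __init__(self):
        super(SmallNet, self).__init__()
        # three conv blocks, each followed by 2x2 max pooling
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(64 * 28 * 28, 500)
        self.fc2 = nn.Linear(500, 133)
        self.dropout = nn.Dropout(0.25)
    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 224 -> 112
        x = self.pool(F.relu(self.conv2(x)))   # 112 -> 56
        x = self.pool(F.relu(self.conv3(x)))   # 56 -> 28
        x = x.view(x.size(0), -1)
        x = self.dropout(F.relu(self.fc1(x)))
        return self.fc2(x)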
End of explanation
import torch.optim as optim
### TODO: select loss function
criterion_scratch = None
### TODO: select optimizer
optimizer_scratch = None
Explanation: Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
Answer:
(IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.
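A common, minimal choice is sketched here; the learning rate is an assumption.
criterion_scratch = nn.CrossEntropyLoss()
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01)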
End of explanation
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
returns trained model
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
# return trained model
return model
# train the model
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Explanation: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.
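The TODO sections inside the template reduce to a standard PyTorch update step; a self-contained sketch of that step is given below, with a comment on how it plugs into the template's running-average bookkeeping.
def train_step(model, data, target, criterion, optimizer):
    # forward pass, loss, backprop, parameter update -- returns the batch loss
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    return loss.item()
# Inside the template's loop one would then do, for example:
#   loss_value = train_step(model, data, target, criterion, optimizer)
#   train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss_value - train_loss))
# and save the model (torch.save(model.state_dict(), save_path)) whenever
# valid_loss drops below valid_loss_min.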
End of explanation
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
Explanation: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
End of explanation
## TODO: Specify data loaders
Explanation: <a id='step4'></a>
Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).
If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.
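Since the folder structure is identical, the simplest option is to reuse the Step 3 loaders, assuming loaders_scratch was defined there:
loaders_transfer = loaders_scratch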
End of explanation
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
if use_cuda:
model_transfer = model_transfer.cuda()
Explanation: (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.
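One common setup is sketched below; using VGG-16 as the backbone is an illustrative choice, not a requirement.
model_transfer = models.vgg16(pretrained=True)
for param in model_transfer.features.parameters():
    param.requires_grad = False  # freeze the convolutional feature extractor
# replace the final classifier layer with one that outputs 133 breed classes
model_transfer.classifier[6] = nn.Linear(model_transfer.classifier[6].in_features, 133)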
End of explanation
criterion_transfer = None
optimizer_transfer = None
Explanation: Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer:
(IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.
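For example (only the un-frozen classifier parameters are optimized; the learning rate is an assumption):
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.classifier.parameters(), lr=0.001)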
End of explanation
# train the model
# model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
#model_transfer.load_state_dict(torch.load('model_transfer.pt'))
Explanation: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.
End of explanation
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Explanation: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
End of explanation
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
def predict_breed_transfer(img_path):
# load the image and return the predicted breed
return None
Explanation: (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.
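A sketch of one possible implementation, assuming model_transfer, use_cuda, torch and class_names from the cells above; the transform mirrors a typical evaluation pipeline.
from PIL import Image
import torchvision.transforms as transforms
def predict_breed_sketch(img_path):
    tf = transforms.Compose([transforms.Resize(256),
                             transforms.CenterCrop(224),
                             transforms.ToTensor()])
    tensor = tf(Image.open(img_path).convert('RGB')).unsqueeze(0)
    if use_cuda:
        tensor = tensor.cuda()
    model_transfer.eval()
    with torch.no_grad():
        idx = model_transfer(tensor).argmax(dim=1).item()
    return class_names[idx]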
End of explanation
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
## handle cases for a human face, dog, and neither
Explanation: <a id='step5'></a>
Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed.
- if a human is detected in the image, return the resembling dog breed.
- if neither is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and human_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!
(IMPLEMENTATION) Write your Algorithm
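The requested control flow in sketch form; it assumes the detectors and predict_breed_transfer defined earlier, and leaves out image display details.
def run_app_sketch(img_path):
    if dog_detector(img_path):
        print('Dog detected! Predicted breed: {}'.format(predict_breed_transfer(img_path)))
    elif face_detector(img_path):
        print('Hello, human! You resemble a: {}'.format(predict_breed_transfer(img_path)))
    else:
        print('Error: neither a dog nor a human face was detected in this image.')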
End of explanation
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
Explanation: <a id='step6'></a>
Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
(IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer: (Three possible points for improvement)
End of explanation |
3,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
05 Scraping data with Requests and Beautiful Soup
Up to now, we've covered means of grabbing data that are already formatted to be grabbed. The term 'web scraping' refers to the messier means of pulling material from web sites that were really meant for people, not for computers. Web sites, of course, can include a variety of objects
Step1: The requests package works a lot like the urllib package in that it sends a request to a server and stores the server's response in a variable, here named response.
Step2: BeautifulSoup is designed to intelligently read raw HTML code, i.e., what is stored in the response variable generated above. The command below reads in the raw HTML and parses it into logical components that we can command.
The lxml in the command specifies a particular parser for deconstructing the HTML...
Step3: Here we search the text of the web page's body for any instances of https | Python Code:
# Import the requests package; install if necessary
try:
import requests
except:
import pip
pip.main(['install','requests'])
import requests
# Import BeautifulSoup from the bs4 package; install bs4 if necessary
try:
from bs4 import BeautifulSoup
except:
import pip
pip.main(['install','bs4'])
from bs4 import BeautifulSoup
# Import re, a package for using regular expressions
import re
Explanation: 05 Scraping data with Requests and Beautiful Soup
Up to now, we've covered means of grabbing data that are already formatted to be grabbed. The term 'web scraping' refers to the messier means of pulling material from web sites that were really meant for people, not for computers. Web sites, of course, can include a variety of objects: text, images, video, flash, etc., and your success at scraping what you want will vary. In other words, scraping involves a bit of MacGyvering.
Useful packages for scraping are requests and bs4/BeautifulSoup; code to install them is included below.
We'll run through a few quick examples, but for more on this topic, I recommend:
* http://www.pythonforbeginners.com/python-on-the-web/beautifulsoup-4-python/
* http://web.stanford.edu/~zlotnick/TextAsData/Web_Scraping_with_Beautiful_Soup.html
* http://stanford.edu/~mgorkove/cgi-bin/rpython_tutorials/webscraping_with_lxml.php
End of explanation
# Send a request to a web page
response = requests.get('https://xkcd.com/869')
# The response object simply has the contents of the web page at the address provided
print(response.text)
Explanation: The requests package works a lot like the urllib package in that it sends a request to a server and stores the server's response in a variable, here named response.
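Beyond .text, the response object also exposes the HTTP status code and headers, which are handy for confirming that the request actually succeeded:
print(response.status_code)                    # 200 indicates success
print(response.headers.get('Content-Type'))    # e.g. 'text/html; charset=UTF-8'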
End of explanation
# BeautifulSoup
soup = BeautifulSoup(response.text, 'lxml')
type(soup)
Explanation: BeautifulSoup is designed to intelligently read raw HTML code, i.e., what is stored in the response variable generated above. The command below reads in the raw HTML and parses it into logical components that we can command.
The lxml in the command specifies a particular parser for deconstructing the HTML...
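Once parsed, the soup object can be queried directly, for example:
print(soup.title.text)             # the page's <title> text
print(len(soup.find_all('a')))     # number of <a> (link) tags on the page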
End of explanation
#Search the page for embedded links to PNG files
match = re.search('https://.*\.png', soup.body.text)
#What was found in the search
print(match.group())
#And here is some Jupyter code to display the picture resulting from it
from IPython.display import Image
Image(url=match.group())
Explanation: Here we search the text of the web page's body for any instances of https://....png, that is any link to a PNG image embedded in the page. This is done using re and implementing regular expressions (see https://developers.google.com/edu/python/regular-expressions for more info on this useful module...)
The match object returned by search() holds information about the nature of the match, including the original input string, the regular expression used, and the location within the original string where the pattern occurs. The group property of the match is the full string that's returned
End of explanation |
3,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
layer = tf.layers.dense(prev_layer,
num_units,
use_bias=False,
activation=None )
# When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
layer = tf.layers.batch_normalization(layer, training = is_training)
return tf.nn.relu(layer)
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth , is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False,activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training = is_training)
return tf.nn.relu(conv_layer)
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
is_training = tf.placeholder(tf.bool)
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training : True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training:False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training:False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels, is_training:False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]], is_training:False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
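One way the TODO can be completed is sketched below. It is only a sketch: it assumes is_training is a boolean tensor passed into the function (so the train function needs the same changes as in the tf.layers version), keeps trainable gamma/beta, tracks population statistics with an exponential moving average, and switches between batch and population statistics with tf.cond.
def fully_connected_bn(prev_layer, num_units, is_training):
    # linear layer without bias (beta below plays that role)
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))
    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
    epsilon = 1e-3
    def batch_norm_training():
        batch_mean, batch_variance = tf.nn.moments(layer, [0])
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance,
                                   pop_variance * decay + batch_variance * (1 - decay))
        # make sure the moving averages are updated while training
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance,
                                             beta, gamma, epsilon)
    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance,
                                         beta, gamma, epsilon)
    batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(batch_normalized_output)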
End of explanation
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
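The main convolution-specific differences are sketched here as comments: the statistics and the gamma/beta/population variables are per output channel, computed over the batch and both spatial dimensions, and the learned bias is dropped in favour of beta.
# Sketch of the convolution-specific parts (the tf.cond structure is the same
# as in the fully connected sketch above):
#   out_channels = layer_depth * 4
#   gamma = tf.Variable(tf.ones([out_channels]))
#   beta = tf.Variable(tf.zeros([out_channels]))
#   pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
#   pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
#   batch_mean, batch_variance = tf.nn.moments(conv_layer, [0, 1, 2])
# The ReLU is then applied to the normalized output instead of the raw
# convolution, and tf.nn.bias_add is no longer needed.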
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
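If the lower-level batch normalization sketched above is used, the train function changes mirror the tf.layers version earlier in the notebook; the relevant lines, as a comment sketch:
# is_training = tf.placeholder(tf.bool)
# layer = conv_layer(layer, layer_i, is_training)
# layer = fully_connected(layer, 100, is_training)
# sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# ...and pass is_training: False whenever loss or accuracy is evaluated.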
End of explanation |
3,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variance Component Analysis
This notebook illustrates variance components analysis for two-level
nested and crossed designs.
Step1: Make the notebook reproducible
Step2: Nested analysis
In our discussion below, "Group 2" is nested within "Group 1". As a
concrete example, "Group 1" might be school districts, with "Group
2" being individual schools. The function below generates data from
such a population. In a nested analysis, the group 2 labels that
are nested within different group 1 labels are treated as
independent groups, even if they have the same label. For example,
two schools labeled "school 1" that are in two different school
districts are treated as independent schools, even though they have
the same label.
Step3: Generate a data set to analyze.
Step4: Using all the default arguments for generate_nested, the population
values of "group 1 Var" and "group 2 Var" are 2^2=4 and 3^2=9,
respectively. The unexplained variance, listed as "scale" at the
top of the summary table, has population value 4^2=16.
Step5: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
Step6: Then we set up the variance components using the VCSpec class.
Step7: Finally we fit the model. It can be seen that the results of the
two fits are identical.
Step8: Crossed analysis
In a crossed analysis, the levels of one group can occur in any
combination with the levels of the another group. The groups in
Statsmodels MixedLM are always nested, but it is possible to fit a
crossed model by having only one group, and specifying all random
effects as variance components. Many, but not all crossed models
can be fit in this way. The function below generates a crossed data
set with two levels of random structure.
Step9: Generate a data set to analyze.
Step10: Next we fit the model, note that the groups vector is constant.
Using the default parameters for generate_crossed, the level 1
variance should be 2^2=4, the level 2 variance should be 3^2=9, and
the unexplained variance should be 4^2=16.
Step11: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
Step12: Here we fit the model without using formulas, it is simple to check
that the results for models 3 and 4 are identical. | Python Code:
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.mixed_linear_model import VCSpec
import pandas as pd
Explanation: Variance Component Analysis
This notebook illustrates variance components analysis for two-level
nested and crossed designs.
End of explanation
np.random.seed(3123)
Explanation: Make the notebook reproducible
End of explanation
def generate_nested(n_group1=200, n_group2=20, n_rep=10, group1_sd=2,
group2_sd=3, unexplained_sd=4):
# Group 1 indicators
group1 = np.kron(np.arange(n_group1), np.ones(n_group2 * n_rep))
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = np.kron(u, np.ones(n_group2 * n_rep))
# Group 2 indicators
group2 = np.kron(np.ones(n_group1), np.kron(np.arange(n_group2), np.ones(n_rep)))
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group1*n_group2)
effects2 = np.kron(u, np.ones(n_rep))
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y":y, "group1": group1, "group2": group2})
return df
Explanation: Nested analysis
In our discussion below, "Group 2" is nested within "Group 1". As a
concrete example, "Group 1" might be school districts, with "Group
2" being individual schools. The function below generates data from
such a population. In a nested analysis, the group 2 labels that
are nested within different group 1 labels are treated as
independent groups, even if they have the same label. For example,
two schools labeled "school 1" that are in two different school
districts are treated as independent schools, even though they have
the same label.
End of explanation
df = generate_nested()
Explanation: Generate a data set to analyze.
End of explanation
model1 = sm.MixedLM.from_formula("y ~ 1", re_formula="1", vc_formula={"group2": "0 + C(group2)"},
groups="group1", data=df)
result1 = model1.fit()
print(result1.summary())
Explanation: Using all the default arguments for generate_nested, the population
values of "group 1 Var" and "group 2 Var" are 2^2=4 and 3^2=9,
respectively. The unexplained variance, listed as "scale" at the
top of the summary table, has population value 4^2=16.
End of explanation
def f(x):
n = x.shape[0]
g2 = x.group2
u = g2.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g2[i]]] = 1
colnames = ["%d" % z for z in u]
return mat, colnames
Explanation: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
End of explanation
vcm = df.groupby("group1").apply(f).to_list()
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group2"]
vcs = VCSpec(names, [colnames], [mats])
Explanation: Then we set up the variance components using the VCSpec class.
End of explanation
oo = np.ones(df.shape[0])
model2 = sm.MixedLM(df.y, oo, exog_re=oo, groups=df.group1, exog_vc=vcs)
result2 = model2.fit()
print(result2.summary())
Explanation: Finally we fit the model. It can be seen that the results of the
two fits are identical.
End of explanation
def generate_crossed(n_group1=100, n_group2=100, n_rep=4, group1_sd=2,
group2_sd=3, unexplained_sd=4):
# Group 1 indicators
group1 = np.kron(np.arange(n_group1, dtype=np.int),
np.ones(n_group2 * n_rep, dtype=np.int))
group1 = group1[np.random.permutation(len(group1))]
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = u[group1]
# Group 2 indicators
group2 = np.kron(np.arange(n_group2, dtype=np.int),
np.ones(n_group2 * n_rep, dtype=np.int))
group2 = group2[np.random.permutation(len(group2))]
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group2)
effects2 = u[group2]
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y":y, "group1": group1, "group2": group2})
return df
Explanation: Crossed analysis
In a crossed analysis, the levels of one group can occur in any
combination with the levels of the another group. The groups in
Statsmodels MixedLM are always nested, but it is possible to fit a
crossed model by having only one group, and specifying all random
effects as variance components. Many, but not all crossed models
can be fit in this way. The function below generates a crossed data
set with two levels of random structure.
End of explanation
df = generate_crossed()
Explanation: Generate a data set to analyze.
End of explanation
vc = {"g1": "0 + C(group1)", "g2": "0 + C(group2)"}
oo = np.ones(df.shape[0])
model3 = sm.MixedLM.from_formula("y ~ 1", groups=oo, vc_formula=vc, data=df)
result3 = model3.fit()
print(result3.summary())
Explanation: Next we fit the model, note that the groups vector is constant.
Using the default parameters for generate_crossed, the level 1
variance should be 2^2=4, the level 2 variance should be 3^2=9, and
the unexplained variance should be 4^2=16.
End of explanation
def f(g):
n = len(g)
u = g.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g[i]]] = 1
colnames = ["%d" % z for z in u]
return [mat], [colnames]
vcm = [f(df.group1), f(df.group2)]
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group1", "group2"]
vcs = VCSpec(names, colnames, mats)
Explanation: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
End of explanation
oo = np.ones(df.shape[0])
model4 = sm.MixedLM(df.y, oo[:, None], exog_re=None, groups=oo, exog_vc=vcs)
result4 = model4.fit()
print(result4.summary())
Explanation: Here we fit the model without using formulas, it is simple to check
that the results for models 3 and 4 are identical.
End of explanation |
3,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use the WFSGeojsonLayer class
This class provides WFS layers for ipyleaflet from services that have GeoJSON output capabilities
We first have to create the WFS connection and instantiate the map
Step1: We can then retrieve the available layers. We will use these to create our WFS layer.
Step2: Next we create our WFS layer from one of the layers listed above. It is filtered by the extent of the map, seen above. This next function is a builder and will create, add and configure the map with its two default widgets.
Step3: The layer created above will have a refresh button, which can be pressed to refresh the WFS layer.
It will also have a property widget in the lower right corner of the map, and will show the feature ID of a feature after you click on it.
It's also possible to add a new property widget. We first need to retrieve the properties of a feature. The following code returns the properties of the first feature, which should be shared by all features.
Step4: We can create a new widget from any of the above properties
The widget_name parameter needs to be unique, else it will overwrite the existing one.
Step5: To replace the default property widget, the same function can be used with the 'main_widget' name.
This can be useful when there is no need for the feature ID, or on the off chance that the first property attribute does not contain the feature ID.
Step6: The geojson data is available. The results are also filtered by what is visible on the map.
Step7: A search by ID for features is also available. Let's set back the main widget to default so we can have access to feature IDs again
Step8: Now click on a feature and replace '748' in the cell below with a new ID number to get the full properties of that feature
Step9: To get rid of all the property widgets | Python Code:
from birdy import IpyleafletWFS
from ipyleaflet import Map
url = 'http://boreas.ouranos.ca/geoserver/wfs'
version = '2.0.0'
wfs_connection = IpyleafletWFS(url, version)
demo_map = Map(center=(46.42, -64.14), zoom=8)
demo_map
Explanation: How to use the WFSGeojsonLayer class
This class provides WFS layers for ipyleaflet from services that have GeoJSON output capabilities
We first have to create the WFS connection and instantiate the map:
End of explanation
wfs_connection.layer_list
Explanation: We can then retrieve the available layers. We will use these to create our WFS layer.
End of explanation
wfs_connection.build_layer(layer_typename='public:HydroLAKES_poly', source_map=demo_map)
Explanation: Next we create our WFS layer from one of the layers listed above. It is filtered by the extent of the map, seen above. This next function is a builder and will create, add and configure the map with its two default widgets.
End of explanation
wfs_connection.property_list
Explanation: The layer created above will have a refresh button, which can be pressed to refresh the WFS layer.
It will also have a property widget in the lower right corner of the map, and will show the feature ID of a feature after you click on it.
It's also possible to add a new property widget. We first need to retrieve the properties of a feature. The following code returns the properties of the first feature, which should be shared by all features.
End of explanation
wfs_connection.create_feature_property_widget(widget_name='Wshd_area', feature_property='Wshd_area', widget_position='bottomleft')
demo_map
Explanation: We can create a new widget from any of the above properties
The widget_name parameter needs to be unique, else it will overwrite the existing one.
End of explanation
wfs_connection.create_feature_property_widget(widget_name='main_widget', feature_property='Lake_area')
Explanation: To replace the default property widget, the same function can be used with the 'main_widget' name.
This can be useful when there is no need for the feature ID, or on the off chance that the first property attribute does not contain the feature ID.
End of explanation
gjson = wfs_connection.geojson
gjson['features'][0].keys()
gjson['totalFeatures']
Explanation: The geojson data is available. The results are also filtered by what is visible on the map.
End of explanation
wfs_connection.create_feature_property_widget(widget_name='main_widget')
demo_map
Explanation: A search by ID for features is also available. Let's set back the main widget to default so we can have access to feature IDs again
End of explanation
wfs_connection.feature_properties_by_id(748)
Explanation: Now click on a feature and replace '748' in the cell below with a new ID number to get the full properties of that feature
End of explanation
wfs_connection.clear_property_widgets()
demo_map
### And finally, to remove the layer from the map
wfs_connection.remove_layer()
Explanation: To get rid of all the property widgets:
End of explanation |
3,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is a Probability Model?
Problems with describing a data distribution
Describing the distribution of a data set with methods such as descriptive statistics is imprecise: it conveys only rough, approximate information and cannot deliver detailed or complete information.
For example, suppose we have 1,000 data points as shown below. The data were generated with SciPy's probability distribution commands.
[[school_notebook
Step1: Drawing a histogram of the data gives the following.
Step2: From this histogram we can see that values between -0.143394 and 0.437156 account for about 24% of the total. What if we want information about the interval from -0.01 to 0.01? To obtain information about finer intervals, the histogram bins have to be made smaller.
Step3: Increasing the number of bins for a more precise description causes several problems.
First, as the bin width shrinks, the number of samples falling in each bin gets smaller and smaller. If the number of bins approaches infinity, the width of each bin converges to zero and so does the number of samples in it, which makes it hard to see the relative shape of the distribution.
An even bigger problem is that the amount of information needed to describe the distribution grows. The whole point of describing a data distribution is to grasp the overall shape of the data quickly with a small number of values; if the amount of information required for the description keeps growing, that original purpose is lost.
Probability models
Probability models were created to solve these problems. A probability model defines a data distribution mathematically using random variables, which will be explained later.
Usually a pre-defined functional form, called a probability distribution function or probability density function, is used, and the coefficients of that function are called the parameters of the distribution.
For example, the Gaussian normal distribution, one of the most widely used probability models, defines its probability density function with the following formula.
$$ N(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
The independent variable of this function is $x$, the value of the data. The symbols $\mu$ and $\sigma$ in the formula are parameters that denote the mean and the standard deviation. The semicolon (;) in the function notation separates the independent variable from the parameters.
For the relationship between this formula and an actual distribution, see the following notebook on distribution functions.
[[school_notebook
Step4: For example, to describe the distribution of some data, we can convey the information completely, without drawing a histogram or listing descriptive statistics, simply by saying the following.
This data follows a Gaussian normal distribution with mean $\mu$ and standard deviation $\sigma$.
Probability models and data generation
A probability model describes a data distribution efficiently by using the relationship between the data and probability values. To understand that relationship, we first need to understand how the data are generated.
We usually collect data one observation at a time through a research process. This process can be compared to throwing a die: obtaining one data point through research is like throwing a die and writing down the number on the upward face on a piece of paper.
For example, look at the following data.
Step5: This data was in fact generated by rolling a computer die. The code that generates these numbers is shown below. | Python Code:
sp.random.seed(0)
x = sp.random.normal(size=1000)
Explanation: What is a Probability Model?
Problems with describing a data distribution
Describing the distribution of a data set with methods such as descriptive statistics is imprecise: it conveys only rough, approximate information and cannot deliver detailed or complete information.
For example, suppose we have 1,000 data points as shown below. The data were generated with SciPy's probability distribution commands.
[[school_notebook:175522b819ae4645907179462dabc5d4]]
End of explanation
ns, bins, ps = plt.hist(x, bins=10)
plt.show()
pd.DataFrame([bins, ns/1000])
Explanation: Drawing a histogram of the data gives the following.
End of explanation
ns, bins, ps = plt.hist(x, bins=100)
plt.show()
pd.DataFrame([bins, ns/1000])
Explanation: From this histogram we can see that values between -0.143394 and 0.437156 account for about 24% of the total. What if we want information about the interval from -0.01 to 0.01? To obtain information about finer intervals, the histogram bins have to be made smaller.
End of explanation
x = np.linspace(-3, 3, 100)
y = sp.stats.norm.pdf(x)
plt.plot(x, y)
plt.show()
Explanation: Increasing the number of bins for a more precise description causes several problems.
First, as the bin width shrinks, the number of samples falling in each bin gets smaller and smaller. If the number of bins approaches infinity, the width of each bin converges to zero and so does the number of samples in it, which makes it hard to see the relative shape of the distribution.
An even bigger problem is that the amount of information needed to describe the distribution grows. The whole point of describing a data distribution is to grasp the overall shape of the data quickly with a small number of values; if the amount of information required for the description keeps growing, that original purpose is lost.
Probability models
Probability models were created to solve these problems. A probability model defines a data distribution mathematically using random variables, which will be explained later.
Usually a pre-defined functional form, called a probability distribution function or probability density function, is used, and the coefficients of that function are called the parameters of the distribution.
For example, the Gaussian normal distribution, one of the most widely used probability models, defines its probability density function with the following formula.
$$ N(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
The independent variable of this function is $x$, the value of the data. The symbols $\mu$ and $\sigma$ in the formula are parameters that denote the mean and the standard deviation. The semicolon (;) in the function notation separates the independent variable from the parameters.
For the relationship between this formula and an actual distribution, see the following notebook on distribution functions.
[[school_notebook:4d74d1b5651245a7903583f30ae44608]]
The following figure, drawn with scipy, shows the shape of the standard normal distribution with mean 0 and standard deviation 1.
End of explanation
x = np.array([5, 6, 1, 4, 4, 4, 2, 4, 6, 3, 5, 1, 1, 5, 3, 2, 1, 2, 6, 2, 6, 1, 2,
5, 4, 1, 4, 6, 1, 3, 4, 1, 2, 4, 6, 4, 4, 1, 2, 2, 2, 1, 3, 5, 4, 4,
3, 5, 3, 1, 1, 5, 6, 6, 1, 5, 2, 5, 2, 3, 3, 1, 2, 2, 2, 2, 4, 4, 3,
4, 1, 4, 6, 5, 2, 3, 5, 4, 5, 5, 5, 4, 5, 5, 5, 1, 5, 4, 3, 6, 6, 6])
Explanation: For example, to describe the distribution of some data, we can convey the information completely, without drawing a histogram or listing descriptive statistics, simply by saying the following.
This data follows a Gaussian normal distribution with mean $\mu$ and standard deviation $\sigma$.
Probability models and data generation
A probability model describes a data distribution efficiently by using the relationship between the data and probability values. To understand that relationship, we first need to understand how the data are generated.
We usually collect data one observation at a time through a research process. This process can be compared to throwing a die: obtaining one data point through research is like throwing a die and writing down the number on the upward face on a piece of paper.
For example, look at the following data.
End of explanation
np.random.seed(0)
np.random.randint(1, 7, 92)
Explanation: This data was in fact generated by rolling a computer die. The code that generates these numbers is shown below.
End of explanation |
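As a quick sanity check (a sketch added here, not part of the original notebook; it assumes the dice array x and numpy from the cells above), counting how often each face appears gives relative frequencies close to 1/6, which is exactly the statement a probability model for a fair die makes.
faces, counts = np.unique(x, return_counts=True)    # faces 1..6 and how many times each occurred
print(dict(zip(faces, counts / len(x))))            # relative frequencies, each near 1/6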
3,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    # YOUR CODE HERE
    F = 1 / (np.exp((energy - mu) / kT) + 1)
    return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
$F(\epsilon)=\frac{1}{e^{(\epsilon-\mu)/kT}+1}$
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
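One extra sanity check (a sketch, not required by the exercise): at $\epsilon = \mu$ the exponent vanishes, so the occupation probability is exactly 1/2 at any temperature.
for kT_check in (0.1, 1.0, 10.0):
    assert np.allclose(fermidist(2.0, 2.0, kT_check), 0.5)   # F(mu, mu, kT) == 0.5 for every kT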
def plot_fermidist(mu, kT):
    """Plot the Fermi distribution F(energy) for the given mu and kT."""
    ax = plt.gca()
    energy = np.linspace(0, 10.0, 200)   # energies over [0, 10] with a suitable number of points
    plt.plot(energy, fermidist(energy, mu, kT))
    plt.xlim(0, 10.0)
    plt.ylim(0, 2.0)
    plt.xlabel(r'Energy $\epsilon$')
    plt.ylabel(r'$F(\epsilon)$')
    plt.title('Fermi-Dirac distribution')
    ax.spines['right'].set_visible(False)   # remove chart junk: top/right spines and their ticks
    ax.spines['top'].set_visible(False)
    ax.get_xaxis().tick_bottom()
    ax.get_yaxis().tick_left()
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu=[0.0, 5.0], kT=[0.1, 10.0])
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
For kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
3,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
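For illustration only (the choices below are placeholders, not a description of any real model), a cardinality 1.N ENUM such as this one is filled by calling DOC.set_value once per selected choice, mirroring the commented template above.
# Hypothetical example - uncomment and replace with the approximations your model actually uses:
# DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")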
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
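For a BOOLEAN property like the counter-gradient flag above, the value is passed unquoted; a hypothetical completed cell might read as follows (the choice of True is illustrative, not a recommendation).
# Hypothetical illustration only
DOC.set_value(True)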
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
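For a FLOAT property like the radar frequency above, the value is numeric; a hypothetical 94 GHz cloud-profiling radar would be recorded in Hz as shown below (the frequency is illustrative only).
# Hypothetical illustration only -- 94 GHz expressed in Hz
DOC.set_value(94.0e9)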
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview and Examples
A brief description of what linearsolve is for followed by examples.
What linearsolve Does
linearsolve defines a class - linearsolve.model - with several functions for approximating, solving, and simulating dynamic stochastic general equilibrium (DSGE) models. The equilibrium conditions for most DSGE models can be expressed as a vector function $f$
Step1: Example 2
Step2: Next, we need to define a function that returns the equilibrium conditions of the model. The function will take as inputs two vectors
Step3: Notice that inside the function we have to define the variables of the model from the elements of the input vectors variables_forward and variables_current.
Initializing the model
To initialize the model, we need to specify the total number of state variables in the model, the number of state variables with exogenous shocks, the names of the endogenous variables, and the parameters of the model.
It is essential that the variable names are ordered in the following way
Step4: Steady state
Next, we need to compute the nonstochastic steady state of the model. The .compute_ss() method can be used to compute the steady state numerically. The method's default is to use scipy's fsolve() function, but other scipy root-finding functions can be used
Step5: Note that the steady state is returned as a Pandas Series. Alternatively, you could compute the steady state directly and then set the rbc.ss attribute
Step6: Log-linearization and solution
Now we use the .log_linear() method to find the log-linear approximation to the model's equilibrium conditions. That is, we'll transform the nonlinear model into a linear model in which all variables are expressed as log-deviations from the steady state. Specifically, we'll compute the matrices $A$ and $B$ that satisfy
Step7: Finally, we need to obtain the solution to the log-linearized model. The solution is a pair of matrices $F$ and $P$ that specify
Step8: Impulse responses
Once the model is solved, use the .impulse() method to compute impulse responses to exogenous shocks to the state. The method creates the .irs attribute, which is a dictionary with keys equal to the names of the exogenous shocks and values that are Pandas DataFrames with the computed impulse responses. You can supply your own values for the shocks, but the default is 0.01 for each exogenous shock.
Step9: Plotting is easy.
Step10: Stochastic simulation
Creating a stochastic simulation of the model is straightforward with the .stoch_sim() method. In the following example, I create a 151 period (including t=0) simulation by first simulating the model for 251 periods and then dropping the first 100 values. The standard deviation of the shock to $A_t$ is set to 0.00763. The seed for the numpy random number generator is set to 0.
Step11: Example 3
Step12: Compute impulse responses and plot
Compute impulse responses of the endogenous variables to a one percent shock to each exogenous variable.
Step13: Construct a stochastic simulation and plot
Construct a 151 period stochastic simulation by first simulating the model for 251 periods and then dropping the first 100 values. The seed for the numpy random number generator is set to 0. | Python Code:
# Import numpy, pandas, linearsolve, matplotlib.pyplot
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
# Input model parameters
parameters = pd.Series(dtype=float)
parameters['alpha'] = .35
parameters['beta'] = 0.99
parameters['delta'] = 0.025
parameters['rhoa'] = .9
parameters['sigma'] = 1.5
parameters['A'] = 1
# Function that evaluates the equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
euler_eqn = p.beta*fwd.c**-p.sigma*(p.alpha*cur.a*fwd.k**(p.alpha-1)+1-p.delta) - cur.c**-p.sigma
# Goods market clearing
market_clearing = cur.c + fwd.k - (1-p.delta)*cur.k - cur.a*cur.k**p.alpha
# Exogenous technology
technology_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_eqn,
market_clearing,
technology_proc
])
# Initialize the model
model = ls.model(equations = equations,
n_states=2,
n_exo_states = 1,
var_names=['a','k','c'],
parameters = parameters)
# Compute the steady state numerically
guess = [1,1,1]
model.compute_ss(guess)
# Find the log-linear approximation around the non-stochastic steady state and solve
model.approximate_and_solve()
# Compute impulse responses and plot
model.impulse(T=41,t0=5,shocks=None)
fig = plt.figure(figsize=(12,4))
ax1 =fig.add_subplot(1,2,1)
model.irs['e_a'][['a','k','c']].plot(lw='5',alpha=0.5,grid=True,ax=ax1).legend(loc='upper right',ncol=3)
ax2 =fig.add_subplot(1,2,2)
model.irs['e_a'][['e_a','a']].plot(lw='5',alpha=0.5,grid=True,ax=ax2).legend(loc='upper right',ncol=2)
Explanation: Overview and Examples
A brief description of what linearsolve is for followed by examples.
What linearsolve Does
linearsolve defines a class - linearsolve.model - with several functions for approximating, solving, and simulating dynamic stochastic general equilibrium (DSGE) models. The equilibrium conditions for most DSGE models can be expressed as a vector function $f$:
\begin{align}
f(E_t X_{t+1}, X_t, \epsilon_{t+1}) = 0,
\end{align}
where 0 is an $n\times 1$ vector of zeros, $X_t$ is an $n\times 1$ vector of endogenous variables, and $\epsilon_{t+1}$ is an $m\times 1$ vector of exogenous structural shocks to the model. $E_tX_{t+1}$ denotes the expecation of the $t+1$ endogenous variables based on the information available to agents in the model as of time period $t$.
linearsolve.model has methods for computing linear and log-linear approximations of the model given above and methods for solving and simulating the linear model.
Example 1: Quickly Simulate an RBC Model
Here I demonstrate how relatively straightforward it is to approximate, solve, and simulate a DSGE model using linearsolve. In the example that follows, I describe the procedure more carefully.
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha A_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta)\right]\
C_t + K_{t+1} & = A_t K_t^{\alpha} + (1-\delta)K_t\
\log A_{t+1} & = \rho_a \log A_{t} + \epsilon_{t+1}
\end{align}
In the block of code that immediately follows, I input the model, solve for the steady state, compute the log-linear approximation of the equilibrium conditions, and compute some impulse responses following a shock to technology $A_t$.
End of explanation
# Input model parameters
parameters = pd.Series(dtype=float)
parameters['alpha'] = .35
parameters['beta'] = 0.99
parameters['delta'] = 0.025
parameters['rhoa'] = .9
parameters['sigma'] = 1.5
parameters['A'] = 1
Explanation: Example 2: An RBC Model with More Details
Consider the equilibrium conditions for a basic RBC model without labor:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha A_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta)\right]\
Y_t & = A_t K_t^{\alpha}\
I_t & = K_{t+1} - (1-\delta)K_t\
Y_t & = C_t + I_t\
\log A_t & = \rho_a \log A_{t-1} + \epsilon_t
\end{align}
In the nonstochastic steady state, we have:
\begin{align}
K & = \left(\frac{\alpha A}{1/\beta+\delta-1}\right)^{\frac{1}{1-\alpha}}\
Y & = AK^{\alpha}\
I & = \delta K\
C & = Y - I
\end{align}
Given values for the parameters $\beta$, $\sigma$, $\alpha$, $\delta$, and $A$, steady state values of capital, output, investment, and consumption are easily computed.
Initializing the model in linearsolve
To initialize the model, we need to first set the model's parameters. We do this by creating a Pandas Series variable called parameters:
End of explanation
# Define function to compute equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
euler_eqn = p.beta*fwd.c**-p.sigma*(p.alpha*fwd.y/fwd.k+1-p.delta) - cur.c**-p.sigma
# Production function
production_fuction = cur.a*cur.k**p.alpha - cur.y
# Capital evolution
capital_evolution = fwd.k - (1-p.delta)*cur.k - cur.i
# Goods market clearing
market_clearing = cur.c + cur.i - cur.y
# Exogenous technology
technology_proc = cur.a**p.rhoa- fwd.a
# Stack equilibrium conditions into a numpy array
return np.array([
euler_eqn,
production_fuction,
capital_evolution,
market_clearing,
technology_proc
])
Explanation: Next, we need to define a function that returns the equilibrium conditions of the model. The function will take as inputs two vectors: one vector of "current" variables and another of "forward-looking" or one-period-ahead variables. The function will return an array that represents the equilibrium conditions of the model. We'll enter each equation with all variables moved to one side of the equals sign. For example, here's how we'll enter the production function:
production_function = technology_current*capital_current**alpha - output_current
Here the variable production_function stores the production function equation set equal to zero. We can enter the equations in almost any way we want. For example, we could also have entered the production function this way:
production_function = 1 - output_current/technology_current/capital_current**alpha
One more thing to consider: the natural log in the equation describing the evolution of total factor productivity will create problems for the solution routine later on. So rewrite the equation as:
\begin{align}
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}\
\end{align}
So the complete system of equations that we enter into the program looks like:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha Y_{t+1} /K_{t+1}+ 1 - \delta)\right]\
Y_t & = A_t K_t^{\alpha}\
I_t & = K_{t+1} - (1-\delta)K_t\
Y_t & = C_t + I_t\
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}
\end{align}
Now let's define the function that returns the equilibrium conditions:
End of explanation
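To see that the exponential rewrite above is the same condition as the log form used in Example 1, a quick numerical check helps (an illustrative sketch only):
import numpy as np
rhoa, a_cur = 0.9, 1.02
a_fwd = a_cur**rhoa                        # the process evaluated with a zero shock
print(rhoa*np.log(a_cur) - np.log(a_fwd))  # log-form residual  -> 0.0
print(a_cur**rhoa - a_fwd)                 # level-form residual -> 0.0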
# Initialize the model
rbc = ls.model(equations = equations,
n_states=2,
n_exo_states=1,
var_names=['a','k','c','y','i'],
parameters=parameters)
Explanation: Notice that inside the function we have to define the variables of the model from the elements of the input vectors variables_forward and variables_current.
Initializing the model
To initialize the model, we need to specify the total number of state variables in the model, the number of state variables with exogenous shocks, the names of the endogenous variables, and the parameters of the model.
It is essential that the variable names are ordered in the following way: first the names of the state variables with exogenous shocks, then the state variables without shocks, and finally the control variables. Ordering within the groups doesn't matter.
End of explanation
# Compute the steady state numerically
guess = [1,1,1,1,1]
rbc.compute_ss(guess)
print(rbc.ss)
Explanation: Steady state
Next, we need to compute the nonstochastic steady state of the model. The .compute_ss() method can be used to compute the steady state numerically. The method's default is to use scipy's fsolve() function, but other scipy root-finding functions can be used: root, broyden1, and broyden2. The optional argument options lets the user pass keywords directly to the optimization function. Check out the documentation for Scipy's nonlinear solvers here: http://docs.scipy.org/doc/scipy/reference/optimize.html
End of explanation
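As a sketch of switching solvers, the call below follows the method and options keywords described above; treat the exact signature as an assumption rather than a guarantee:
# Hypothetical illustration: use scipy's root() with a tighter tolerance
rbc.compute_ss(guess, method='root', options={'tol': 1e-12})
print(rbc.ss)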
# Steady state solution
p = parameters
K = (p.alpha*p.A/(1/p.beta+p.delta-1))**(1/(1-p.alpha))
C = p.A*K**p.alpha - p.delta*K
Y = p.A*K**p.alpha
I = Y - C
rbc.set_ss([p.A,K,C,Y,I])
print(rbc.ss)
Explanation: Note that the steady state is returned as a Pandas Series. Alternatively, you could compute the steady state directly and then set the rbc.ss attribute:
End of explanation
# Find the log-linear approximation around the non-stochastic steady state
rbc.log_linear_approximation()
print('The matrix A:\n\n',np.around(rbc.a,4),'\n\n')
print('The matrix B:\n\n',np.around(rbc.b,4))
Explanation: Log-linearization and solution
Now we use the .log_linear() method to find the log-linear approximation to the model's equilibrium conditions. That is, we'll transform the nonlinear model into a linear model in which all variables are expressed as log-deviations from the steady state. Specifically, we'll compute the matrices $A$ and $B$ that satisfy:
\begin{align}
A E_t\left[ x_{t+1} \right] & = B x_t + \left[ \begin{array}{c} \epsilon_{t+1} \ 0 \end{array} \right],
\end{align}
where the vector $x_{t}$ denotes the log deviation of the endogenous variables from their steady state values.
End of explanation
# Solve the model
rbc.solve_klein(rbc.a,rbc.b)
# Display the output
print('The matrix F:\n\n',np.around(rbc.f,4),'\n\n')
print('The matrix P:\n\n',np.around(rbc.p,4))
Explanation: Finally, we need to obtain the solution to the log-linearized model. The solution is a pair of matrices $F$ and $P$ that specify:
The current values of the non-state variables $u_{t}$ as a linear function of the previous values of the state variables $s_t$.
The future values of the state variables $s_{t+1}$ as a linear function of the previous values of the state variables $s_t$ and the future realisation of the exogenous shock process $\epsilon_{t+1}$.
\begin{align}
u_t & = Fs_t\
s_{t+1} & = Ps_t + \epsilon_{t+1}.
\end{align}
We use the .klein() method to find the solution.
End of explanation
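To make the meaning of $F$ and $P$ concrete, we can propagate a small state vector by hand. This sketch assumes the state ordering [a, k] and control ordering [c, y, i] implied by var_names above:
import numpy as np
s = np.array([0.01, 0.0])     # 1% technology deviation, capital at its steady state
u = rbc.f @ s                 # implied controls (c, y, i)
s_next = rbc.p @ s            # next period's state (a, k) with no new shock
print(np.around(u, 4), np.around(s_next, 4))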
# Compute impulse responses and plot
rbc.impulse(T=41,t0=1,shocks=None,percent=True)
print('Impulse responses to a 0.01 unit shock to A:\n\n',rbc.irs['e_a'].head())
Explanation: Impulse responses
Once the model is solved, use the .impulse() method to compute impulse responses to exogenous shocks to the state. The method creates the .irs attribute, which is a dictionary with keys equal to the names of the exogenous shocks and values that are Pandas DataFrames with the computed impulse responses. You can supply your own values for the shocks, but the default is 0.01 for each exogenous shock.
End of explanation
rbc.irs['e_a'][['a','k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
rbc.irs['e_a'][['e_a','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
Explanation: Plotting is easy.
End of explanation
rbc.stoch_sim(T=121,drop_first=100,cov_mat=np.array([0.00763**2]),seed=0,percent=True)
rbc.simulated[['k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=4)
rbc.simulated[['a']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
rbc.simulated['e_a'].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
Explanation: Stochastic simulation
Creating a stochastic simulation of the model is straightforward with the .stoch_sim() method. In the following example, I create a 151 period (including t=0) simulation by first simulating the model for 251 periods and then dropping the first 100 values. The standard deviation of the shock to $A_t$ is set to 0.00763. The seed for the numpy random number generator is set to 0.
End of explanation
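Because .simulated is an ordinary Pandas DataFrame, summary business-cycle moments are one line away (illustrative only):
# Standard deviations and correlations of the simulated percent deviations
print(rbc.simulated[['y','c','i']].std())
print(rbc.simulated[['y','c','i']].corr())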
# Input model parameters
beta = 0.99
sigma= 1
eta = 1
omega= 0.8
kappa= (sigma+eta)*(1-omega)*(1-beta*omega)/omega
rhor = 0.9
phipi= 1.5
phiy = 0
rhog = 0.5
rhou = 0.5
rhov = 0.9
Sigma = 0.001*np.eye(3)
# Store parameters
parameters = pd.Series({
'beta':beta,
'sigma':sigma,
'eta':eta,
'omega':omega,
'kappa':kappa,
'rhor':rhor,
'phipi':phipi,
'phiy':phiy,
'rhog':rhog,
'rhou':rhou,
'rhov':rhov
})
# Define function that computes equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Exogenous demand
g_proc = p.rhog*cur.g - fwd.g
# Exogenous inflation
u_proc = p.rhou*cur.u - fwd.u
# Exogenous monetary policy
v_proc = p.rhov*cur.v - fwd.v
# Euler equation
euler_eqn = fwd.y -1/p.sigma*(cur.i-fwd.pi) + fwd.g - cur.y
# NK Phillips curve evolution
phillips_curve = p.beta*fwd.pi + p.kappa*cur.y + fwd.u - cur.pi
# interest rate rule
interest_rule = p.phiy*cur.y+p.phipi*cur.pi + fwd.v - cur.i
# Fisher equation
fisher_eqn = cur.i - fwd.pi - cur.r
# Stack equilibrium conditions into a numpy array
return np.array([
g_proc,
u_proc,
v_proc,
euler_eqn,
phillips_curve,
interest_rule,
fisher_eqn
])
# Initialize the nk model
nk = ls.model(equations=equations,
n_states=3,
n_exo_states = 3,
var_names=['g','u','v','i','r','y','pi'],
parameters=parameters)
# Set the steady state of the nk model
nk.set_ss([0,0,0,0,0,0,0])
# Find the log-linear approximation around the non-stochastic steady state
nk.linear_approximation()
# Solve the nk model
nk.solve_klein(nk.a,nk.b)
Explanation: Example 3: A New-Keynesian Model
Consider the new-Keynesian business cycle model from Walsh (2017), chapter 8 expressed in log-linear terms:
\begin{align}
y_t & = E_ty_{t+1} - \sigma^{-1} (i_t - E_t\pi_{t+1}) + g_t\
\pi_t & = \beta E_t\pi_{t+1} + \kappa y_t + u_t\
i_t & = \phi_x y_t + \phi_{\pi} \pi_t + v_t\
r_t & = i_t - E_t\pi_{t+1}\
g_{t+1} & = \rho_g g_{t} + \epsilon_{t+1}^g\
u_{t+1} & = \rho_u u_{t} + \epsilon_{t+1}^u\
v_{t+1} & = \rho_v v_{t} + \epsilon_{t+1}^v
\end{align}
where $y_t$ is the output gap (log-deviation of output from the natural rate), $\pi_t$ is the quarterly rate of inflation between $t-1$ and $t$, $i_t$ is the nominal interest rate on funds moving between period $t$ and $t+1$, $r_t$ is the real interest rate, $g_t$ is the exogenous component of demand, $u_t$ is an exogenous component of inflation, and $v_t$ is the exogenous component of monetary policy.
Since the model is already log-linear, there is no need to log-linearize the equilibrium conditions; we use the .linear_approximation() method to find the matrices $A$ and $B$ directly.
Initialize model and solve
End of explanation
# Compute impulse responses
nk.impulse(T=11,t0=1,shocks=None)
# Create the figure and axes
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(3,1,1)
ax2 = fig.add_subplot(3,1,2)
ax3 = fig.add_subplot(3,1,3)
# Plot commands
nk.irs['e_g'][['g','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Demand shock',ax=ax1).legend(loc='upper right',ncol=5)
nk.irs['e_u'][['u','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Inflation shock',ax=ax2).legend(loc='upper right',ncol=5)
nk.irs['e_v'][['v','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Interest rate shock',ax=ax3).legend(loc='upper right',ncol=5)
Explanation: Compute impulse responses and plot
Compute impulse responses of the endogenous variables to a one percent shock to each exogenous variable.
End of explanation
# Compute stochastic simulation
nk.stoch_sim(T=151,drop_first=100,cov_mat=Sigma,seed=0)
# Create the figure and axes
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
# Plot commands
nk.simulated[['y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Output, inflation, and interest rates',ax=ax1).legend(ncol=4)
nk.simulated[['g','u','v']].plot(lw='5',alpha=0.5,grid=True,title='Exogenous demand, inflation, and policy',ax=ax2).legend(ncol=4,loc='lower right')
# Plot simulated exogenous shocks
nk.simulated[['e_g','g']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_u','u']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_v','v']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
Explanation: Construct a stochastic simulation and plot
Construct a 151 period stochastic simulation by first simulating the model for 251 periods and then dropping the first 100 values. The seed for the numpy random number generator is set to 0.
End of explanation |
3,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Appendix A
Step1: Baseline evaluation
For tuning the BM25 parameters, we're going to use just a match query per field, combined using a bool should query. This will search for query terms across the url, title, and body fields, and we'll be attempting to optimize the BM25 parameters that are used in the scoring function for each field. In theory, each field could have its own BM25 similarity and parameters, but we'll leave that as an exercise to the reader.
Since BM25 parameters are actually index settings in Elasticsearch (they are theoretically query parameters, but they are implemented as index settings to be consistent with other similarity modules), we need to make sure to set the parameters before any evaluation step. At optimization time, we'll do the same process
Step2: That's the same baseline that we've seen in the "Query tuning" notebook, so we know we're set up correctly.
Optimization
Now we're ready to run the optimization procedure and see if we can improve on that, while holding the default query parameters constant.
We know that there's roughly a standard range for each parameter, so we use those. We also seed the optimization with some static initial points to try, based on some well-known default parameter values
Step3: Here's a look at the parameter space, which is easy to plot here since there are just two parameters.
Step4: It's interesting that we do see an improvement, but it's not very significant. One hypothesis is that our analyzers are already doing most of the work, leaving little left to tune. Let's try this again but using the default analyzers with the index msmarco-document.defaults.
First we set the baseline that we're comparing against. We expect it to be lower than the baseline with the custom analyzers. We saw this in the "Analyzers" notebook as well already.
Step5: Now let's optimize BM25. Before we do that, let's also increase the possible range of k1 to make sure we really see a maximum score from somewhere within the range and not at a maximum or minimum value in the range.
Step6: That's a much larger improvement over the baseline, and this optimized version with the default analyzers actually beats the tuned version with the custom analyzers! It goes to show that you can't make assumptions; you need to test your hypothesis!
Conclusion
Before we wrap up, it's good to set all the indices back to their default values, in case we use those indices for other experiments. | Python Code:
%load_ext autoreload
%autoreload 2
import importlib
import os
import sys
from elasticsearch import Elasticsearch
from skopt.plots import plot_objective
# project library
sys.path.insert(0, os.path.abspath('..'))
import qopt
importlib.reload(qopt)
from qopt.notebooks import evaluate_mrr100_dev, optimize_bm25_mrr100
from qopt.optimize import Config, set_bm25_parameters
# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)
es = Elasticsearch('http://localhost:9200')
# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls
max_concurrent_searches = 10
index = 'msmarco-document'
index_defaults = 'msmarco-document.defaults'
template_id = 'combined_matches'
# no query params
query_params = {}
# default Elasticsearch BM25 params
default_bm25_params = {'k1': 1.2, 'b': 0.75}
Explanation: Appendix A: Tuning BM25 parameters for the MSMARCO Document dataset
The following shows a principled, data-driven approach to tuning BM25 parameters with a basic query, using the MSMARCO Document dataset. This assumes familiarity with basic query tuning as shown in the "Query tuning" notebooks.
BM25 contains two parameters, k1 and b. Roughly speaking (very roughly), k1 controls the amount of term saturation (at some point, more occurrences of a term do not make a document more relevant) and b controls the importance of document length. A deeper look into these parameters is beyond the scope of this notebook, but our three-part blog series on understanding BM25 is very useful for that.
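For reference, the per-term BM25 score that these two parameters enter can be written as follows, with $f(t,d)$ the term frequency, $|d|$ the field length and $\mathrm{avgdl}$ the average field length:
$$\mathrm{score}(q,d)=\sum_{t\in q}\mathrm{IDF}(t)\cdot\frac{f(t,d)\,(k_1+1)}{f(t,d)+k_1\left(1-b+b\,\frac{|d|}{\mathrm{avgdl}}\right)}$$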
Be aware that not all query types will see improvements with BM25 tuning. Sometimes it's more impactful to just tune query parameters. As always, try it out on your datasets first and get concrete measurements. We recommend customizing index settings/analyzers first, then doing query parameter tuning to get your baseline measurements. Next, try the best index settings/analyzers with BM25 tuning, then do query parameter tuning and see if it makes any improvement on your baseline. If there's no significant difference, it's best to just stick with the default BM25 parameters for simplicity.
End of explanation
%%time
set_bm25_parameters(es, index, **default_bm25_params)
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, query_params)
Explanation: Baseline evaluation
For tuning the BM25 parameters, we're going to use just a match query per field, combined using a bool should query. This will search for query terms across the url, title, and body fields, and we'll be attempting to optimize the BM25 parameters that are used in the scoring function for each field. In theory, each field could have its own BM25 similarity and parameters, but we'll leave that as an exercise to the reader.
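As a rough illustration only (this is not the actual combined_matches template from the project repository, and the parameter name query_string is an assumption), the search body has this shape:
example_query = {
    "query": {
        "bool": {
            "should": [
                {"match": {"url": "{{query_string}}"}},
                {"match": {"title": "{{query_string}}"}},
                {"match": {"body": "{{query_string}}"}},
            ]
        }
    }
}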
Since BM25 parameters are actually index settings in Elasticsearch (they are theoretically query parameters, but they are implemented as index settings to be consistent with other similarity modules), we need to make sure to set the parameters before any evaluation step. At optimization time, we'll do the same process: set the BM25 parameters to try, then run the rank evaluation API on the training query dataset.
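For illustration, a helper along these lines is roughly what set_bm25_parameters has to do under the hood (a sketch only; it assumes the fields are mapped to a similarity named "default"). Because similarity settings are static, the index is closed and reopened around the update:
def sketch_set_bm25(es, index, k1, b):
    es.indices.close(index=index)
    es.indices.put_settings(
        index=index,
        body={"index": {"similarity": {"default": {"type": "BM25", "k1": k1, "b": b}}}},
    )
    es.indices.open(index=index)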
End of explanation
%%time
_, best_params, _, metadata = optimize_bm25_mrr100(es, max_concurrent_searches, index, template_id, query_params,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 40,
'num_initial_points': 20,
'space': {
'k1': { 'low': 0.5, 'high': 5.0 },
'b': { 'low': 0.3, 'high': 1.0 },
}
}))
Explanation: That's the same baseline that we've seen in the "Query tuning" notebook, so we know we're set up correctly.
Optimization
Now we're ready to run the optimization procedure and see if we can improve on that, while holding the default query parameters constant.
We know that there's roughly a standard range for each parameter, so we use those. We also seed the optimization with some static initial points to try, based on some well-known default parameter values:
Elasticsearch defaults: k1: 1.2, b: 0.75
Anserini [1] defaults: k1: 0.9, b: 0.4
[1] anserini is a commonly used tool in academia for research into search systems
End of explanation
_ = plot_objective(metadata, sample_source='result')
%%time
set_bm25_parameters(es, index, **best_params)
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, query_params)
Explanation: Here's a look at the parameter space, which is easy to plot here since there are just two parameters.
End of explanation
%%time
set_bm25_parameters(es, index_defaults, **default_bm25_params)
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index_defaults, template_id, query_params)
Explanation: It's interesting that we do see an improvement, but it's not very significant. One hypothesis is that our analyzers are already doing most of the work, leaving little left to tune. Let's try this again but using the default analyzers with the index msmarco-document.defaults.
First we set the baseline that we're comparing against. We expect it to be lower than the baseline with the custom analyzers. We saw this in the "Analyzers" notebook as well already.
End of explanation
%%time
_, best_params, _, metadata = optimize_bm25_mrr100(es, max_concurrent_searches, index_defaults, template_id, query_params,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 50,
'num_initial_points': 25,
'space': {
'k1': { 'low': 0.5, 'high': 10.0 },
'b': { 'low': 0.3, 'high': 1.0 },
}
}))
_ = plot_objective(metadata, sample_source='result')
%%time
set_bm25_parameters(es, index_defaults, **best_params)
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index_defaults, template_id, query_params)
Explanation: Now let's optimize BM25. Before we do that, let's also increase the possible range of k1 to make sure we really see a maximum score from somewhere within the range and not at a maximum or minimum value in the range.
End of explanation
set_bm25_parameters(es, index, **default_bm25_params)
set_bm25_parameters(es, index_defaults, **default_bm25_params)
Explanation: That's a much larger improvement over the baseline, and this optimized version with the default analyzers actually beats the tuned version with the custom analyzers! It goes to show that you can't make assumptions; you need to test your hypothesis!
Conclusion
Before we wrap up, it's good to set all the indices back to their default values, in case we use those indices for other experiments.
End of explanation |
3,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Private Heavy Hitters
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Background
Step4: Simulations
To run simulations to discover the most popular words (heavy hitters) in the Shakespeare dataset, first you need to create a TFF computation using the tff.analytics.heavy_hitters.iblt.build_iblt_computation API with the following parameters
Step5: Now you are ready to run simulations with TFF computation iblt_computation and the preprocess input dataset. The output iblt_computation of has four attributes
Step6: Private Heavy Hitters with Differential Privacy
To obtain private heavy hitters with central DP, a DP mechanism is applied for
open set histograms. The idea is to add noise to the counts of strings in the
aggregated histogram, then only keep the strings with counts above a certain
threshold. The noise and threshold depend on the (epsilon, delta)-DP budget, see
this doc
for detailed algorithm and proof. The noisy counts are rounded to integers as a
post-processing step, which does not weaken the DP guarantee. Note that you will
discover fewer heavy hitters when DP is required. This is because the
thresholding step filters out strings with low counts. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
# tensorflow_federated_nightly also bring in tf_nightly, which
# can causes a duplicate tensorboard install, leading to errors.
!pip install --quiet tensorflow-text-nightly
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
import tensorflow_text as tf_text
np.random.seed(0)
tff.backends.test.set_test_python_execution_context()
tff.federated_computation(lambda: 'Hello, World!')()
Explanation: Private Heavy Hitters
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/private_heavy_hitters"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/private_heavy_hitters.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/private_heavy_hitters.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/private_heavy_hitters.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
NOTE: This colab has been verified to work with the latest released version of the tensorflow_federated pip package. This colab may not be updated to work against main.
This tutorial shows how to use the tff.analytics.heavy_hitters.iblt.build_iblt_computation API to build a federated analytics computation to discover the most frequent strings (private heavy hitters) in the population.
Environment Setup
Please run the following to make sure that your environment is
correctly set up. If you don't see a greeting, please refer to the
Installation guide for instructions.
End of explanation
# Load the simulation data.
source, _ = tff.simulation.datasets.shakespeare.load_data()
# Preprocessing function to tokenize a line into words.
def tokenize(ds):
"""Tokenizes a line into words with alphanumeric characters."""
def extract_strings(example):
return tf.expand_dims(example['snippets'], 0)
def tokenize_line(line):
return tf.data.Dataset.from_tensor_slices(tokenizer.tokenize(line)[0])
def mask_all_symbolic_words(word):
return tf.math.logical_not(
tf_text.wordshape(word, tf_text.WordShape.IS_PUNCT_OR_SYMBOL))
tokenizer = tf_text.WhitespaceTokenizer()
ds = ds.map(extract_strings)
ds = ds.flat_map(tokenize_line)
ds = ds.map(tf_text.case_fold_utf8)
ds = ds.filter(mask_all_symbolic_words)
return ds
batch_size = 5
def client_data(n: int) -> tf.data.Dataset:
return tokenize(source.create_tf_dataset_for_client(
source.client_ids[n])).batch(batch_size)
# Pick a subset of client devices to participate in the computation.
dataset = [client_data(n) for n in range(10)]
Explanation: Background: Private Heavy Hitters in Federated Analytics
Consider the following setting: Each client has a list of strings, and each string is from an open set, which means it could be arbitrary. The goal is to discover the most popular strings (heavy hitters) and their counts privately in a federated setting. This colab demonstrates a solution to this problem with the following privacy properties:
Secure aggregation: Computes the aggregated string counts such that it should not be possible for the server to learn any client's individual value. See tff.federated_secure_sum for more information.
Differential privacy (DP): A widely used method for bounding and quantifying the privacy leakage of sensitive data in analytics. You can apply user-level central DP to the heavy hitter results.
The secure aggregation API tff.federated_secure_sum supports linear sums of integer vectors. If the strings are from a closed set of size n, then it is easy to encode each client's strings to a vector of size n: let the value at index i of the vector be the count of the i<sup>th</sup> string in the closed set. Then you can securely sum the vectors of all clients to get the counts of strings in the whole population. However, if the strings are from an open set, it is not obvious how to encode them properly for secure sum. In this work, you can encode the strings into Invertible Bloom Lookup Tables (IBLT), which is a probabilistic data structure that has the ability to encode items in a large (or open) domain in an efficient manner. IBLT sketches can be linearly summed, so they are compatible with secure sum.
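As a tiny illustration of the closed-set case described above (example values only, not part of the tutorial code):
import collections
vocabulary = ['to', 'be', 'or', 'not']            # hypothetical closed set of size n
client_strings = ['to', 'be', 'or', 'not', 'to', 'be']
counts = collections.Counter(client_strings)
count_vector = [counts[s] for s in vocabulary]    # [2, 2, 1, 1] -> summable across clients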
You can use tff.analytics.heavy_hitters.iblt.build_iblt_computation to create a TFF computation that encodes each client's local strings to an IBLT structure. These structures are securely summed via a cryptographic secure multi-party computation protocol into an aggregated IBLT structure which the server can decode. The server then can return the top heavy hitters. The following sections show how to use this API to create a TFF computation and run simulations with the Shakespeare dataset.
Load and Preprocess the Federated Shakespeare Data
The Shakespeare dataset contains lines of characters of Shakespeare plays. In this example, a subset of characters (that is, clients) are selected. A preprocessor converts each character's lines into a list of strings, and any string that is only punctuation or symbols is dropped.
End of explanation
max_words_per_user = 8
iblt_computation = tff.analytics.heavy_hitters.iblt.build_iblt_computation(
capacity=100,
max_string_length=20,
max_words_per_user=max_words_per_user,
max_heavy_hitters=10,
secure_sum_bitwidth=32,
multi_contribution=False,
batch_size=batch_size)
Explanation: Simulations
To run simulations to discover the most popular words (heavy hitters) in the Shakespeare dataset, first you need to create a TFF computation using the tff.analytics.heavy_hitters.iblt.build_iblt_computation API with the following parameters:
capacity: The capacity of the IBLT sketch. This number should be roughly the total number of unique strings that could appear in one round of computation. Defaults to 1000. If this number is too small, the decoding could fail due to collision of hashed values. If this number is too large, it would consume more memory than necessary.
max_string_length: The maximum length of a string in the IBLT. Defaults to 10. Must be positive. The strings longer than max_string_length will be truncated.
max_words_per_user: The maximum number of strings each client is allowed to contribute. If not None, must be a positive integer. Defaults to None, which means all the clients contribute all their strings.
max_heavy_hitters: The maximum number of items to return. If the decoded results have more than this number of items, will order decreasingly by the estimated counts and return the top max_heavy_hitters items. Defaults to None, which means to return all the heavy hitters in the result.
secure_sum_bitwidth: The bitwidth used for secure sum. The default value is
None, which disables secure sum. If not None, must be in the range [1,62]. See tff.federated_secure_sum.
multi_contribution: Whether each client is allowed to contribute multiple counts or only a count of one for each unique word. Defaults to True. This argument could improve the utility when differential privacy is required.
batch_size: The number of elements in each batch of the dataset. Defaults to 1, means the input dataset is processed by tf.data.Dataset.batch(1). Must be a positive integer.
End of explanation
def run_simulation(one_round_computation: tff.Computation, dataset):
output = one_round_computation(dataset)
heavy_hitters = output.heavy_hitters
heavy_hitters_counts = output.heavy_hitters_counts
heavy_hitters = [word.decode('utf-8', 'ignore') for word in heavy_hitters]
results = {}
for index in range(len(heavy_hitters)):
results[heavy_hitters[index]] = heavy_hitters_counts[index]
return output.clients, dict(results)
clients, result = run_simulation(iblt_computation, dataset)
print(f'Number of clients participated: {clients}')
print('Discovered heavy hitters and counts:')
print(result)
Explanation: Now you are ready to run simulations with the TFF computation iblt_computation and the preprocessed input dataset. The output of iblt_computation has four attributes:
clients: A scalar number of clients that participated in the computation.
heavy_hitters: A list of aggregated heavy hitters.
heavy_hitters_counts: A list of the counts of aggregated heavy hitters.
num_not_decoded: A scalar number of strings that are not successfully decoded.
End of explanation
iblt_computation = tff.analytics.heavy_hitters.iblt.build_iblt_computation(
capacity=100,
max_string_length=20,
max_words_per_user=max_words_per_user,
secure_sum_bitwidth=32,
multi_contribution=False,
batch_size=batch_size)
clients, result = run_simulation(iblt_computation, dataset)
# DP parameters
eps = 20
delta = 0.01
# Calculating scale for Laplace noise
scale = max_words_per_user / eps
# Calculating the threshold
tau = 1 + (max_words_per_user / eps) * np.log(max_words_per_user / (2 * delta))
result_with_dp = {}
for word in result:
noised_count = result[word] + np.random.laplace(scale=scale)
if noised_count >= tau:
result_with_dp[word] = int(noised_count)
print(f'Discovered heavy hitters and counts with central DP:')
print(result_with_dp)
Explanation: Private Heavy Hitters with Differential Privacy
To obtain private heavy hitters with central DP, a DP mechanism is applied for
open set histograms. The idea is to add noise to the counts of strings in the
aggregated histogram, then only keep the strings with counts above a certain
threshold. The noise and threshold depend on the (epsilon, delta)-DP budget, see
this doc
for detailed algorithm and proof. The noisy counts are rounded to integers as a
post-processing step, which does not weaken the DP guarantee. Note that you will
discover fewer heavy hitters when DP is required. This is because the
thresholding step filters out strings with low counts.
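For reference, the Laplace scale and threshold used in the code above correspond to (writing $\Delta$ for max_words_per_user):
$$b_{\mathrm{Laplace}}=\frac{\Delta}{\varepsilon},\qquad \tau=1+\frac{\Delta}{\varepsilon}\,\ln\frac{\Delta}{2\delta}$$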
End of explanation |
3,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Input
So we are going to try to solve this:
<img src=./examples/2x2b.png width = 120px></img>
First we have to come up with a way to type this in, in a way that is understandable both for the user (that's you) and for the program
Step2: Filtering
Our input contains all sorts of superfluous characters, which we need to filter out
Step3: Unknowns in the sudoku
Our input now contains dots, which means we don't yet know what belongs there; it could be 1, 2, 3 or 4. We are going to replace every such element in the input with all possibilities
Step4: Output
It would be handy to display the sudoku in a readable way, with all the options
{'3'} {'2', '3', '1', '4'} {'4'} {'2', '3', '1', '4'}
{'2', '3', '1', '4'} {'1'} {'2', '3', '1', '4'} {'3'}
{'2'} {'3'} {'2', '3', '1', '4'} {'2', '3', '1', '4'}
{'1'} {'2', '3', '1', '4'} {'2', '3', '1', '4'} {'2'}
Step5: New lines
Step6: Alignment
Not all strings are equally long; to line everything up neatly we can add spaces before or after the string
Step7: Functions
Now that we know how to conveniently handle the input and output, we can turn this into functions. Then we only have to write down once how something is done, and we can reuse it
Step9: input function
Step11: output function | Python Code:
# text can be entered by starting and ending with three double quotes
invoer = """
3 . | 4 .
. 1 | . 3
----+----
2 3 | . .
1 . | . 2
"""
print(invoer)
# the variable invoer now holds the following characters; \n means a new line
invoer
Explanation: Input
So we are going to try to solve this:
<img src=./examples/2x2b.png width = 120px></img>
First we have to come up with a way to type this in, in a way that is understandable both for the user (that's you) and for the program
End of explanation
# we now filter the contents of the input (invoer) so that only the allowed characters remain
TEKENS = ['1','2','3','4']
toegestane_invoer = TEKENS + ['.']
gefilterd = [teken for teken in invoer if teken in toegestane_invoer]
# we also change the sudoku from a 'string' (a sequence of characters)
# into a list (a list with contents)
print(gefilterd)
# from a list we can now take, for example, just the first element (start counting at zero):
print('first element:', gefilterd[0])
print('second to fourth element:', gefilterd[1:4])
Explanation: Filtering
Our input contains all sorts of superfluous characters, which we need to filter out
End of explanation
sudoku = [TEKENS if teken == '.' else teken for teken in gefilterd]
print(sudoku)
## Set
sudoku = [set(letters) for letters in sudoku]
print(sudoku)
Explanation: Unknowns in the sudoku
Our input now contains dots, which means we don't yet know what belongs there; it could be 1, 2, 3 or 4. We are going to replace every such element in the input with all possibilities
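A quick aside on why a set is convenient here (this snippet is not part of the original notebook): duplicates disappear automatically, and membership tests and removals are single calls.
options = set(['1', '2', '3', '4'])
options.discard('3')            # drop a candidate that is no longer possible
print(options, '2' in options)  # {'1', '2', '4'} True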
End of explanation
# the list of characters in sudoku can also be displayed neatly as a sudoku again
# show the sudoku one element at a time (each value on a new line)
for getal in [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]:
print(getal, sudoku[getal])
for i, s in enumerate(sudoku):
print(i, s)
Explanation: Output
It would be handy to display the sudoku in a readable way, with all the options
{'3'} {'2', '3', '1', '4'} {'4'} {'2', '3', '1', '4'}
{'2', '3', '1', '4'} {'1'} {'2', '3', '1', '4'} {'3'}
{'2'} {'3'} {'2', '3', '1', '4'} {'2', '3', '1', '4'}
{'1'} {'2', '3', '1', '4'} {'2', '3', '1', '4'} {'2'}
End of explanation
# everything in one row...
for i, s in enumerate(sudoku):
print(str(s),end='')
# modulus
KOLOMMEN = 4
for i, s in enumerate(sudoku):
print(i,i % KOLOMMEN, s)
# everything in a row, but sometimes a new line
# modulus
KOLOMMEN = 4
for i, s in enumerate(sudoku):
if ( (i+1) % KOLOMMEN) == 0:
print(s)
else:
print(s,end='')
Explanation: New lines
End of explanation
for s in ['1','123','12345']:
print('|',s,'|')
for s in ['1','123','12345']:
print('|',s.center(10),'|')
# the center function only works on a string
for i, s in enumerate(sudoku):
print(s.center(20))
for i, s in enumerate(sudoku):
print(str(s).center(20))
# everything in a row, but sometimes a new line
# modulus
KOLOMMEN = 4
for i, s in enumerate(sudoku):
s = str(s).center(24)
if ( (i+1) % KOLOMMEN) == 0:
print(s)
else:
print(s,end='')
Explanation: Alignment
Not all strings are equally long; to line everything up neatly we can add spaces before or after the string
End of explanation
def print_drie_keer(invoer):
print(invoer)
print(invoer)
print(invoer)
print_drie_keer('a')
print_drie_keer('abc')
Explanation: Functions
Now that we know how to conveniently handle the input and output, we can turn this into functions. Then we only have to write down once how something is done, and we can reuse it
End of explanation
TEKENS = ['1','2','3','4']
def lees_sudoku(invoer):
toegestane_invoer = TEKENS + ['.']
# filtering
gefilterd = [teken for teken in invoer if teken in toegestane_invoer]
# replace . with all characters, and convert to a set
sudoku = [set(TEKENS) if teken == '.' else set(teken) for teken in gefilterd]
return(sudoku)
invoer = """
3 . | 4 .
. 1 | . 3
----+----
2 3 | . .
1 . | . 2
"""
sudoku = lees_sudoku(invoer)
print(sudoku)
Explanation: input function
End of explanation
KOLOMMEN = 4
def print_sudoku(sudoku):
for i, s in enumerate(sudoku):
s = str(s).center(24)
if ( (i+1) % KOLOMMEN) == 0:
print(s)
else:
print(s,end='')
print_sudoku(sudoku)
invoer = """
3 . | 4 .
. 1 | . 3
----+----
2 3 | . .
1 . | . 2
"""
sudoku = lees_sudoku(invoer)
print_sudoku(sudoku)
Explanation: output function
End of explanation |
3,124 | Given the following text description, write Python code to implement the functionality described.
Description:
Find the Largest N digit perfect square number in Base B
Python3 implementation to find the largest N digit perfect square number in base B ; Function to find the largest N digit number ; Largest n - digit perfect square ; Print the result ; Driver Code
| Python Code:
import math
def nDigitPerfectSquares(n , b ) :
largest = pow(math . ceil(math . sqrt(pow(b , n ) ) ) - 1 , 2 )
print(largest )
N = 1
B = 8
nDigitPerfectSquares(N , B )
|
3,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Point cloud segmentation with PointNet
Author
Step1: Downloading Dataset
The ShapeNet dataset is an ongoing effort to establish a richly-annotated,
large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet
dataset with clean single 3D models and manually verified category and alignment
annotations. It covers 55 common object categories, with about 51,300 unique 3D models.
For this example, we use one of the 12 object categories of
PASCAL 3D+,
included as part of the ShapenetCore dataset.
Step2: Loading the dataset
We parse the dataset metadata in order to easily map model categories to their
respective directories and segmentation classes to colors for the purpose of
visualization.
Step3: In this example, we train PointNet to segment the parts of an Airplane model.
Step4: Structuring the dataset
We generate the following in-memory data structures from the Airplane point clouds and
their labels
Step5: Next, we take a look at some samples from the in-memory arrays we just generated
Step6: Now, let's visualize some of the point clouds along with their labels.
Step7: Preprocessing
Note that all the point clouds that we have loaded consist of a variable number of points,
which makes it difficult for us to batch them together. In order to overcome this problem, we
randomly sample a fixed number of points from each point cloud. We also normalize the
point clouds in order to make the data scale-invariant.
Step8: Let's visualize the sampled and normalized point clouds along with their corresponding
labels.
Step9: Creating TensorFlow datasets
We create tf.data.Dataset objects for the training and validation data.
We also augment the training point clouds by applying random jitter to them.
Step10: PointNet model
The figure below depicts the internals of the PointNet model family
Step12: We implement a regularizer (taken from
this example)
to enforce orthogonality in the feature space. This is needed to ensure
that the magnitudes of the transformed features do not vary too much.
Step14: The next piece is the transformation network which we explained earlier.
Step15: Finally, we piece the above blocks together and implement the segmentation model.
Step16: Instantiate the model
Step17: Training
For the training the authors recommend using a learning rate schedule that decays the
initial learning rate by half every 20 epochs. In this example, we resort to 15 epochs.
Step18: Finally, we implement a utility for running our experiments and launch model training.
Step19: Visualize the training landscape
Step20: Inference | Python Code:
import os
import json
import random
import numpy as np
import pandas as pd
from tqdm import tqdm
from glob import glob
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
Explanation: Point cloud segmentation with PointNet
Author: Soumik Rakshit, Sayak Paul<br>
Date created: 2020/10/23<br>
Last modified: 2020/10/24<br>
Description: Implementation of a PointNet-based model for segmenting point clouds.
Introduction
A "point cloud" is an important type of data structure for storing geometric shape data.
Due to its irregular format, it's often transformed into
regular 3D voxel grids or collections of images before being used in deep learning applications,
a step which makes the data unnecessarily large.
The PointNet family of models solves this problem by directly consuming point clouds, respecting
the permutation-invariance property of the point data. The PointNet family of
models provides a simple, unified architecture
for applications ranging from object classification, part segmentation, to
scene semantic parsing.
In this example, we demonstrate the implementation of the PointNet architecture
for shape segmentation.
References
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Point cloud classification with PointNet
Spatial Transformer Networks
Imports
End of explanation
dataset_url = "https://git.io/JiY4i"
dataset_path = keras.utils.get_file(
fname="shapenet.zip",
origin=dataset_url,
cache_subdir="datasets",
hash_algorithm="auto",
extract=True,
archive_format="auto",
cache_dir="datasets",
)
Explanation: Downloading Dataset
The ShapeNet dataset is an ongoing effort to establish a richly-annotated,
large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet
dataset with clean single 3D models and manually verified category and alignment
annotations. It covers 55 common object categories, with about 51,300 unique 3D models.
For this example, we use one of the 12 object categories of
PASCAL 3D+,
included as part of the ShapenetCore dataset.
End of explanation
with open("/tmp/.keras/datasets/PartAnnotation/metadata.json") as json_file:
metadata = json.load(json_file)
print(metadata)
Explanation: Loading the dataset
We parse the dataset metadata in order to easily map model categories to their
respective directories and segmentation classes to colors for the purpose of
visualization.
End of explanation
points_dir = "/tmp/.keras/datasets/PartAnnotation/{}/points".format(
metadata["Airplane"]["directory"]
)
labels_dir = "/tmp/.keras/datasets/PartAnnotation/{}/points_label".format(
metadata["Airplane"]["directory"]
)
LABELS = metadata["Airplane"]["lables"]
COLORS = metadata["Airplane"]["colors"]
VAL_SPLIT = 0.2
NUM_SAMPLE_POINTS = 1024
BATCH_SIZE = 32
EPOCHS = 60
INITIAL_LR = 1e-3
Explanation: In this example, we train PointNet to segment the parts of an Airplane model.
End of explanation
point_clouds, test_point_clouds = [], []
point_cloud_labels, all_labels = [], []
points_files = glob(os.path.join(points_dir, "*.pts"))
for point_file in tqdm(points_files):
point_cloud = np.loadtxt(point_file)
if point_cloud.shape[0] < NUM_SAMPLE_POINTS:
continue
# Get the file-id of the current point cloud for parsing its
# labels.
file_id = point_file.split("/")[-1].split(".")[0]
label_data, num_labels = {}, 0
for label in LABELS:
label_file = os.path.join(labels_dir, label, file_id + ".seg")
if os.path.exists(label_file):
label_data[label] = np.loadtxt(label_file).astype("float32")
num_labels = len(label_data[label])
# Point clouds having labels will be our training samples.
try:
label_map = ["none"] * num_labels
for label in LABELS:
for i, data in enumerate(label_data[label]):
label_map[i] = label if data == 1 else label_map[i]
label_data = [
LABELS.index(label) if label != "none" else len(LABELS)
for label in label_map
]
# Apply one-hot encoding to the dense label representation.
label_data = keras.utils.to_categorical(label_data, num_classes=len(LABELS) + 1)
point_clouds.append(point_cloud)
point_cloud_labels.append(label_data)
all_labels.append(label_map)
except KeyError:
test_point_clouds.append(point_cloud)
Explanation: Structuring the dataset
We generate the following in-memory data structures from the Airplane point clouds and
their labels:
point_clouds is a list of np.array objects that represent the point cloud data in
the form of x, y and z coordinates. Axis 0 represents the number of points in the
point cloud, while axis 1 represents the coordinates. all_labels is the list
that represents the label of each coordinate as a string (needed mainly for
visualization purposes).
test_point_clouds is in the same format as point_clouds, but doesn't have
the corresponding labels of the point clouds.
all_labels is a list of np.array objects that represent the point cloud labels
for each coordinate, corresponding to the point_clouds list.
point_cloud_labels is a list of np.array objects that represent the point cloud
labels for each coordinate in one-hot encoded form, corresponding to the point_clouds
list.
End of explanation
for _ in range(5):
i = random.randint(0, len(point_clouds) - 1)
print(f"point_clouds[{i}].shape:", point_clouds[0].shape)
print(f"point_cloud_labels[{i}].shape:", point_cloud_labels[0].shape)
for j in range(5):
print(
f"all_labels[{i}][{j}]:",
all_labels[i][j],
f"\tpoint_cloud_labels[{i}][{j}]:",
point_cloud_labels[i][j],
"\n",
)
Explanation: Next, we take a look at some samples from the in-memory arrays we just generated:
End of explanation
def visualize_data(point_cloud, labels):
df = pd.DataFrame(
data={
"x": point_cloud[:, 0],
"y": point_cloud[:, 1],
"z": point_cloud[:, 2],
"label": labels,
}
)
fig = plt.figure(figsize=(15, 10))
ax = plt.axes(projection="3d")
for index, label in enumerate(LABELS):
c_df = df[df["label"] == label]
try:
ax.scatter(
c_df["x"], c_df["y"], c_df["z"], label=label, alpha=0.5, c=COLORS[index]
)
except IndexError:
pass
ax.legend()
plt.show()
visualize_data(point_clouds[0], all_labels[0])
visualize_data(point_clouds[300], all_labels[300])
Explanation: Now, let's visualize some of the point clouds along with their labels.
End of explanation
for index in tqdm(range(len(point_clouds))):
current_point_cloud = point_clouds[index]
current_label_cloud = point_cloud_labels[index]
current_labels = all_labels[index]
num_points = len(current_point_cloud)
# Randomly sampling respective indices.
sampled_indices = random.sample(list(range(num_points)), NUM_SAMPLE_POINTS)
# Sampling points corresponding to sampled indices.
sampled_point_cloud = np.array([current_point_cloud[i] for i in sampled_indices])
# Sampling corresponding one-hot encoded labels.
sampled_label_cloud = np.array([current_label_cloud[i] for i in sampled_indices])
# Sampling corresponding labels for visualization.
sampled_labels = np.array([current_labels[i] for i in sampled_indices])
# Normalizing sampled point cloud.
norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0)
norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1))
point_clouds[index] = norm_point_cloud
point_cloud_labels[index] = sampled_label_cloud
all_labels[index] = sampled_labels
Explanation: Preprocessing
Note that all the point clouds that we have loaded consist of a variable number of points,
which makes it difficult for us to batch them together. In order to overcome this problem, we
randomly sample a fixed number of points from each point cloud. We also normalize the
point clouds in order to make the data scale-invariant.
End of explanation
visualize_data(point_clouds[0], all_labels[0])
visualize_data(point_clouds[300], all_labels[300])
Explanation: Let's visualize the sampled and normalized point clouds along with their corresponding
labels.
End of explanation
def load_data(point_cloud_batch, label_cloud_batch):
point_cloud_batch.set_shape([NUM_SAMPLE_POINTS, 3])
label_cloud_batch.set_shape([NUM_SAMPLE_POINTS, len(LABELS) + 1])
return point_cloud_batch, label_cloud_batch
def augment(point_cloud_batch, label_cloud_batch):
noise = tf.random.uniform(
tf.shape(label_cloud_batch), -0.005, 0.005, dtype=tf.float64
)
point_cloud_batch += noise[:, :, :3]
return point_cloud_batch, label_cloud_batch
def generate_dataset(point_clouds, label_clouds, is_training=True):
dataset = tf.data.Dataset.from_tensor_slices((point_clouds, label_clouds))
dataset = dataset.shuffle(BATCH_SIZE * 100) if is_training else dataset
dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(batch_size=BATCH_SIZE)
dataset = (
dataset.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
if is_training
else dataset
)
return dataset
split_index = int(len(point_clouds) * (1 - VAL_SPLIT))
train_point_clouds = point_clouds[:split_index]
train_label_cloud = point_cloud_labels[:split_index]
total_training_examples = len(train_point_clouds)
val_point_clouds = point_clouds[split_index:]
val_label_cloud = point_cloud_labels[split_index:]
print("Num train point clouds:", len(train_point_clouds))
print("Num train point cloud labels:", len(train_label_cloud))
print("Num val point clouds:", len(val_point_clouds))
print("Num val point cloud labels:", len(val_label_cloud))
train_dataset = generate_dataset(train_point_clouds, train_label_cloud)
val_dataset = generate_dataset(val_point_clouds, val_label_cloud, is_training=False)
print("Train Dataset:", train_dataset)
print("Validation Dataset:", val_dataset)
Explanation: Creating TensorFlow datasets
We create tf.data.Dataset objects for the training and validation data.
We also augment the training point clouds by applying random jitter to them.
End of explanation
def conv_block(x: tf.Tensor, filters: int, name: str) -> tf.Tensor:
x = layers.Conv1D(filters, kernel_size=1, padding="valid", name=f"{name}_conv")(x)
x = layers.BatchNormalization(momentum=0.0, name=f"{name}_batch_norm")(x)
return layers.Activation("relu", name=f"{name}_relu")(x)
def mlp_block(x: tf.Tensor, filters: int, name: str) -> tf.Tensor:
x = layers.Dense(filters, name=f"{name}_dense")(x)
x = layers.BatchNormalization(momentum=0.0, name=f"{name}_batch_norm")(x)
return layers.Activation("relu", name=f"{name}_relu")(x)
Explanation: PointNet model
The figure below depicts the internals of the PointNet model family:
Given that PointNet is meant to consume an unordered set of coordinates as its input data,
its architecture needs to match the following characteristic properties
of point cloud data:
Permutation invariance
Given the unstructured nature of point cloud data, a scan made up of n points has n!
permutations. The subsequent data processing must be invariant to the different
representations. In order to make PointNet invariant to input permutations, we use a
symmetric function (such as max-pooling) once the n input points are mapped to
higher-dimensional space. The result is a global feature vector that aims to capture
an aggregate signature of the n input points. The global feature vector is used alongside
local point features for segmentation.
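In symbols, the aggregation described above approximates a set function by applying a symmetric, elementwise max over per-point embeddings:
$$f(\{x_1,\dots,x_n\}) \approx g\Big(\underset{i=1,\dots,n}{\mathrm{MAX}}\; h(x_i)\Big)$$
where $h$ is the shared per-point MLP and $g$ maps the pooled vector to the output.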
Transformation invariance
Segmentation outputs should be unchanged if the object undergoes certain transformations,
such as translation or scaling. For a given input point cloud, we apply an appropriate
rigid or affine transformation to achieve pose normalization. Because each of the n input
points are represented as a vector and are mapped to the embedding spaces independently,
applying a geometric transformation simply amounts to matrix multiplying each point with
a transformation matrix. This is motivated by the concept of
Spatial Transformer Networks.
The operations comprising the T-Net are motivated by the higher-level architecture of
PointNet. MLPs (or fully-connected layers) are used to map the input points independently
and identically to a higher-dimensional space; max-pooling is used to encode a global
feature vector whose dimensionality is then reduced with fully-connected layers. The
input-dependent features at the final fully-connected layer are then combined with
globally trainable weights and biases, resulting in a 3-by-3 transformation matrix.
Point interactions
The interaction between neighboring points often carries useful information (i.e., a
single point should not be treated in isolation). Whereas classification need only make
use of global features, segmentation must be able to leverage local point features along
with global point features.
Note: The figures presented in this section have been taken from the
original paper.
Now that we know the pieces that compose the PointNet model, we can implement the model.
We start by implementing the basic blocks i.e., the convolutional block and the multi-layer
perceptron block.
End of explanation
class OrthogonalRegularizer(keras.regularizers.Regularizer):
"""Reference: https://keras.io/examples/vision/pointnet/#build-a-model"""
def __init__(self, num_features, l2reg=0.001):
self.num_features = num_features
self.l2reg = l2reg
self.identity = tf.eye(num_features)
def __call__(self, x):
x = tf.reshape(x, (-1, self.num_features, self.num_features))
xxt = tf.tensordot(x, x, axes=(2, 2))
xxt = tf.reshape(xxt, (-1, self.num_features, self.num_features))
return tf.reduce_sum(self.l2reg * tf.square(xxt - self.identity))
def get_config(self):
config = super(TransformerEncoder, self).get_config()
config.update({"num_features": self.num_features, "l2reg_strength": self.l2reg})
return config
Explanation: We implement a regularizer (taken from
this example)
to enforce orthogonality in the feature space. This is needed to ensure
that the magnitudes of the transformed features do not vary too much.
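In equation form, the penalty computed in OrthogonalRegularizer above is (with $A$ the learned $k \times k$ feature transform and $\lambda$ the l2reg weight):
$$\mathcal{L}_{\mathrm{reg}}=\lambda\,\lVert AA^{\top}-I\rVert_F^2$$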
End of explanation
def transformation_net(inputs: tf.Tensor, num_features: int, name: str) -> tf.Tensor:
"""Reference: https://keras.io/examples/vision/pointnet/#build-a-model.
The `filters` values come from the original paper:
https://arxiv.org/abs/1612.00593.
"""
x = conv_block(inputs, filters=64, name=f"{name}_1")
x = conv_block(x, filters=128, name=f"{name}_2")
x = conv_block(x, filters=1024, name=f"{name}_3")
x = layers.GlobalMaxPooling1D()(x)
x = mlp_block(x, filters=512, name=f"{name}_1_1")
x = mlp_block(x, filters=256, name=f"{name}_2_1")
return layers.Dense(
num_features * num_features,
kernel_initializer="zeros",
bias_initializer=keras.initializers.Constant(np.eye(num_features).flatten()),
activity_regularizer=OrthogonalRegularizer(num_features),
name=f"{name}_final",
)(x)
def transformation_block(inputs: tf.Tensor, num_features: int, name: str) -> tf.Tensor:
transformed_features = transformation_net(inputs, num_features, name=name)
transformed_features = layers.Reshape((num_features, num_features))(
transformed_features
)
return layers.Dot(axes=(2, 1), name=f"{name}_mm")([inputs, transformed_features])
Explanation: The next piece is the transformation network which we explained earlier.
End of explanation
def get_shape_segmentation_model(num_points: int, num_classes: int) -> keras.Model:
input_points = keras.Input(shape=(None, 3))
# PointNet Classification Network.
transformed_inputs = transformation_block(
input_points, num_features=3, name="input_transformation_block"
)
features_64 = conv_block(transformed_inputs, filters=64, name="features_64")
features_128_1 = conv_block(features_64, filters=128, name="features_128_1")
features_128_2 = conv_block(features_128_1, filters=128, name="features_128_2")
transformed_features = transformation_block(
features_128_2, num_features=128, name="transformed_features"
)
features_512 = conv_block(transformed_features, filters=512, name="features_512")
features_2048 = conv_block(features_512, filters=2048, name="pre_maxpool_block")
global_features = layers.MaxPool1D(pool_size=num_points, name="global_features")(
features_2048
)
global_features = tf.tile(global_features, [1, num_points, 1])
# Segmentation head.
segmentation_input = layers.Concatenate(name="segmentation_input")(
[
features_64,
features_128_1,
features_128_2,
transformed_features,
features_512,
global_features,
]
)
segmentation_features = conv_block(
segmentation_input, filters=128, name="segmentation_features"
)
outputs = layers.Conv1D(
num_classes, kernel_size=1, activation="softmax", name="segmentation_head"
)(segmentation_features)
return keras.Model(input_points, outputs)
Explanation: Finally, we piece the above blocks together and implement the segmentation model.
End of explanation
x, y = next(iter(train_dataset))
num_points = x.shape[1]
num_classes = y.shape[-1]
segmentation_model = get_shape_segmentation_model(num_points, num_classes)
segmentation_model.summary()
Explanation: Instantiate the model
End of explanation
training_step_size = total_training_examples // BATCH_SIZE
total_training_steps = training_step_size * EPOCHS
print(f"Total training steps: {total_training_steps}.")
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[training_step_size * 15, training_step_size * 15],
values=[INITIAL_LR, INITIAL_LR * 0.5, INITIAL_LR * 0.25],
)
steps = tf.range(total_training_steps, dtype=tf.int32)
lrs = [lr_schedule(step) for step in steps]
plt.plot(lrs)
plt.xlabel("Steps")
plt.ylabel("Learning Rate")
plt.show()
Explanation: Training
For training, the authors recommend using a learning rate schedule that decays the
initial learning rate by half every 20 epochs. In this example, we resort to 15 epochs.
End of explanation
def run_experiment(epochs):
segmentation_model = get_shape_segmentation_model(num_points, num_classes)
segmentation_model.compile(
optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
loss=keras.losses.CategoricalCrossentropy(),
metrics=["accuracy"],
)
checkpoint_filepath = "/tmp/checkpoint"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_loss",
save_best_only=True,
save_weights_only=True,
)
history = segmentation_model.fit(
train_dataset,
validation_data=val_dataset,
epochs=epochs,
callbacks=[checkpoint_callback],
)
segmentation_model.load_weights(checkpoint_filepath)
return segmentation_model, history
segmentation_model, history = run_experiment(epochs=EPOCHS)
Explanation: Finally, we implement a utility for running our experiments and launch model training.
End of explanation
def plot_result(item):
plt.plot(history.history[item], label=item)
plt.plot(history.history["val_" + item], label="val_" + item)
plt.xlabel("Epochs")
plt.ylabel(item)
plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14)
plt.legend()
plt.grid()
plt.show()
plot_result("loss")
plot_result("accuracy")
Explanation: Visualize the training landscape
End of explanation
validation_batch = next(iter(val_dataset))
val_predictions = segmentation_model.predict(validation_batch[0])
print(f"Validation prediction shape: {val_predictions.shape}")
def visualize_single_point_cloud(point_clouds, label_clouds, idx):
label_map = LABELS + ["none"]
point_cloud = point_clouds[idx]
label_cloud = label_clouds[idx]
visualize_data(point_cloud, [label_map[np.argmax(label)] for label in label_cloud])
idx = np.random.choice(len(validation_batch[0]))
print(f"Index selected: {idx}")
# Plotting with ground-truth.
visualize_single_point_cloud(validation_batch[0], validation_batch[1], idx)
# Plotting with predicted labels.
visualize_single_point_cloud(validation_batch[0], val_predictions, idx)
Explanation: Inference
End of explanation |
3,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Introduction
In this tutorial, you will learn how to do statistical analysis of your simulation data.
This is an important topic, because the statistics of your data determine how precise your simulation result is. Furthermore, knowing about the statistics can help you optimize your disk space usage.
ESPResSo provides a lot of ways to take measurements of your system. Usually, you will sample a quantity many times during a simulation and in the end average over all samples. Intuitively, the simulation result will be more precise the more samples are taken during the simulation. However, this is not the whole truth. There are some things that need to be considered, which we will cover in this tutorial.
Formally, if you determine a physical quantity by averaging over several samples, you only approximate the unknown, true mean value. Usually, the quantity is expected to fluctuate around its mean; therefore, you can never directly measure the mean. You are bound to take repeated measurements and in the end average over all samples (a finite number). In your report, you will present this average as your result. Additionally, you should express the precision of your measurements to give a proper meaning to your result. And this is where things get more involved.
There are several different ways to express the precision of your measurements. We will begin by briefly discussing what they are and what their differences are. After that, we will continue with the standard error of the mean as a viable option to be presented in your simulation results.
Standard deviation
The standard deviation is a measure for how much individual samples are expected to deviate from the mean. We want to use precise terminology, and therefore need to state that, in fact, we cannot directly measure the standard deviation but only estimate it. A commonly used estimator for the standard deviation is
$
\begin{align}
\hat{\sigma} = \sqrt{\frac{1}{N-1.5}\sum_{i=1}^{N}(X_i-\overline{X})^2}\tag{1}
\end{align}
$
where $\hat{\sigma}$ is the estimator of the standard deviation $\sigma$, $N$ the number of samples, $X_i$ the individual samples and $\overline{X}$ their mean. This estimator somewhat resembles the "square root of the variance". The curious $-1.5$ in the denominator is a necessary correction to make the estimator less biased (for further reading, see <a href='#[1]'>[1]</a>).
Standard error of the mean
The standard error of the mean (often abbreviated as SEM, or $s$, and its estimator is designated $\hat{\sigma}_\overline{X}$) describes how much the mean value of your sample is expected to deviate from the true mean value $\mu$. Imagine repeating the whole simulation over and over again, taking $N$ samples every time and averaging over them. The SEM quantifies how much those averages will fluctuate around the true mean $\mu$. In fact, it is defined as the standard deviation of the averages.
At first glance, it might seem to be very expensive to compute the SEM, because one would have to repeat the whole simulation many times. However, under the right circumstances, the SEM can be estimated from a single series of $N$ measurements. We will discuss how this can be done.
Confidence interval
A confidence interval (CI) specifies a range of numbers within which the unknown true mean value $\mu$ lies with a certain probability $1-\alpha$. A common confidence level is $1-\alpha=95~\%$. A $95~\%$ CI would contain the true value $\mu$ with probability $95~\%$. Care must be taken interpreting the CI, since the lower and upper bound of a CI are themselves random variables. Just as a simulation run drafts samples from the overall ensemble, determining a CI from a simulation run is drafting a CI from all possible CIs. When the upper and lower bound of a CI have been calculated, this range either contains the true value or not, so there no longer is a probability attached to it. However, for repeated simulations with subsequent computation of the corresponding CIs, on average $95~\%$ of CIs will contain the true value, while $5~\%$ won't.
If the samples are normally distributed and the SEM is known, the upper and lower bounds of the $95~\%$ CI are $\overline{X} \pm 1.96 \, \hat{\sigma}_\overline{X}$.
Interquartile range
The interquartile range denotes the range, within which the central $50~\%$ of all samples lie, if one were to order them by their size. This leaves one quarter of all samples lying below the interquartile range, and another quarter of all samples above it.
Now – what do we use?
We are interested in the precision of our overall, averaged, simulation result, and not in the precision of the individual samples. Those are expected to fluctuate, and in many cases, those fluctuations are uninteresting for the end result. Out of the options presented above, the SEM and the CI are the only ones doing this requirement justice. Since they are related, the question boils down to how to compute the SEM, which will be the topic of the rest of this tutorial.
Uncorrelated samples
How the SEM can be computed depends on the data itself. For uncorrelated samples, it is nearly trivial
Step2: One can clearly see that each sample lies in the vicinity of the previous one.
Below is an example for almost completely uncorrelated samples. The data points are taken from the same time series as in the previous example, but this time they are chosen with large gaps in between (every 800th sample is used). These samples appear to fluctuate a lot more randomly.
Step3: However, you should not trust your eye in deciding whether or not a time series is correlated. In fact, when running molecular dynamics simulations, your best guess is to always assume that samples are correlated, and that you should use one of the following techniques for statistical analysis, and rather not just use equation (2).
Binning analysis
Binning analysis is a straightforward method to calculate the SEM for correlated data. A time series of measurements of $N$ samples is divided into $N_\mathrm{B}$ equally long blocks called bins. If $N$ is not an integer multiple of $N_\mathrm{B}$, some data must be discarded to achieve this. The samples in every bin are averaged, giving the bin averages $\overline{X}_i$. It is important that the bin size $N/N_\mathrm{B}$ is significantly larger than the correlation time. Otherwise, binning analysis will yield the wrong SEM.
Once we have computed the bin averages $\overline{X}_i$, getting the SEM is straightforward
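For reference, the standard estimate used here is the standard error of the bin averages:
$$\hat{\sigma}_{\overline{X}} \approx \sqrt{\frac{1}{N_\mathrm{B}(N_\mathrm{B}-1)}\sum_{i=1}^{N_\mathrm{B}}\left(\overline{X}_i-\overline{X}\right)^2}$$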
Step4: Exercise
Determine the maximally possible number of bins of size BIN_SIZE with the data in time_series_1, and store it in a variable N_BINS.
Create a numpy array called bin_avgs of length N_BINS.
Compute the bin averages of time_series_1 and store them in bin_avgs.
python
N_BINS = N_SAMPLES // BIN_SIZE
bin_avgs = np.zeros(N_BINS)
for i in range(N_BINS)
Step5: Now we already have an estimate on how precise our simulation result is. But how do we know if we chose the appropriate bin size? The answer is, we can perform binning analysis for many different bin sizes and check when the SEM converges. For that we would like to define a function that does the binning analysis in one go.
Exercise
Define a function called do_binning_analysis that takes as arguments data (a numpy array containing the samples) and bin_size and returns the estimated SEM. You can reuse your code from the previous exercises and adapt it to be part of the function.
python
def do_binning_analysis(data, bin_size)
Step6: Even though the fit is not perfect, it suffices to give us the position of the asymptote, which is the final estimate for the standard error of the mean. You can see that binning analysis, in fact, managed to estimate the SEM very precisely compared to the analytical solution. This illustrates that most of the time, binning analysis will give you a very reasonable estimate for the SEM, and in fact, is often used in practice because of its simplicity.
However, in some cases, the statistics of your system can be quite challenging. Remember that in real applications, there won't be an analytical solution for the SEM. Therefore, you need to rely entirely on the statistical analysis. It is important to view the statistical analysis critically to decide whether the statistical analysis is trustworthy or not. To illustrate this, let's have a look at the binning analysis of the other time series that was generated at the start of the tutorial | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import sys
import logging
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
np.random.seed(43)
def ar_1_process(n_samples, c, phi, eps):
'''
Generate a correlated random sequence with the AR(1) process.
Parameters
----------
n_samples: :obj:`int`
Sample size.
c: :obj:`float`
Constant term.
phi: :obj:`float`
Correlation magnitude.
eps: :obj:`float`
Shock magnitude.
'''
ys = np.zeros(n_samples)
if abs(phi) >= 1:
raise ValueError("abs(phi) must be smaller than 1.")
# draw initial value from normal distribution with known mean and variance
ys[0] = np.random.normal(loc=c / (1 - phi), scale=np.sqrt(eps**2 / (1 - phi**2)))
for i in range(1, n_samples):
ys[i] = c + phi * ys[i - 1] + np.random.normal(loc=0., scale=eps)
return ys
# generate simulation data using the AR(1) process
logging.info("Generating data sets for the tutorial ...")
N_SAMPLES = 100000
C_1 = 2.0
PHI_1 = 0.85
EPS_1 = 2.0
time_series_1 = ar_1_process(N_SAMPLES, C_1, PHI_1, EPS_1)
C_2 = 0.05
PHI_2 = 0.999
EPS_2 = 1.0
time_series_2 = ar_1_process(N_SAMPLES, C_2, PHI_2, EPS_2)
logging.info("Done")
fig = plt.figure(figsize=(10, 6))
plt.title("The first 1000 samples of both time series")
plt.plot(time_series_1[0:1000], label="time series 1")
plt.plot(time_series_2[0:1000], label="time series 2")
plt.xlabel("$i$")
plt.ylabel("$X_i$")
plt.legend()
plt.show()
Explanation: Tutorial: Error Estimation - Part 1 (Introduction and Binning Analysis)
Table of contents
Data generation
Introduction
Uncorrelated samples
Binning analysis
References
Data generation
In this tutorial, you will learn how to estimate the accuracy of your simulation results. Because we are going to employ statistical methods, we need a fair amount of data to play with. The following code cell will generate two data sets which will be used throughout the tutorial.
End of explanation
fig = plt.figure(figsize=(10, 6))
plt.plot(time_series_1[1000:1050], "x")
fig.axes[0].margins(y=0.1)
plt.xlabel("$i$")
plt.ylabel("$X_i$")
plt.show()
Explanation: Introduction
In this tutorial, you will learn how to do statistical analysis of your simulation data.
This is an important topic, because the statistics of your data determine how precise your simulation result is. Furthermore, knowing about the statistics can help you optimize your disk space usage.
ESPResSo provides a lot of ways to take measurements of your system. Usually, you will sample a quantity many times during a simulation and in the end average over all samples. Intuitively, the simulation result will be more precise the more samples are taken during the simulation. However, this is not the whole truth. There are some things that need to be considered, which we will cover in this tutorial.
Formally, if you determine a physical quantity by averaging over several samples, you only approximate the unknown, true mean value. Usually, the quantity is expected to fluctuate around its mean; therefore, you can never directly measure the mean. You are bound to take repeated measurements and in the end average over all samples (a finite number). In your report, you will present this average as your result. Additionally, you should express the precision of your measurements to give a proper meaning to your result. And this is where things get more involved.
There are several different ways to express the precision of your measurements. We will begin by briefly discussing what they are and what their differences are. After that, we will continue with the standard error of the mean as a viable option to be presented in your simulation results.
Standard deviation
The standard deviation is a measure for how much individual samples are expected to deviate from the mean. We want to use precise terminology, and therefore need to state that, in fact, we cannot directly measure the standard deviation but only estimate it. A commonly used estimator for the standard deviation is
$
\begin{align}
\hat{\sigma} = \sqrt{\frac{1}{N-1.5}\sum_{i=1}^{N}(X_i-\overline{X})^2}\tag{1}
\end{align}
$
where $\hat{\sigma}$ is the estimator of the standard deviation $\sigma$, $N$ the number of samples, $X_i$ the individual samples and $\overline{X}$ their mean. This estimator somewhat resembles the "square root of the variance". The curious $-1.5$ in the denominator is a necessary correction to make the estimator less biased (for further reading, see <a href='#[1]'>[1]</a>).
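A quick numpy sketch of equation (1), using placeholder data (the array samples below is not one of this tutorial's data sets):
```python
import numpy as np
samples = np.random.normal(loc=5.0, scale=2.0, size=1000)   # placeholder samples
sigma_hat = np.sqrt(np.sum((samples - samples.mean())**2) / (len(samples) - 1.5))
```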
Standard error of the mean
The standard error of the mean (often abbreviated as SEM, or $s$, and its estimator is designated $\hat{\sigma}_\overline{X}$) describes how much the mean value of your sample is expected to deviate from the true mean value $\mu$. Imagine repeating the whole simulation over and over again, taking $N$ samples every time and averaging over them. The SEM quantifies how much those averages will fluctuate around the true mean $\mu$. In fact, it is defined as the standard deviation of the averages.
At first glance, it might seem to be very expensive to compute the SEM, because one would have to repeat the whole simulation many times. However, under the right circumstances, the SEM can be estimated from a single series of $N$ measurements. We will discuss how this can be done.
Confidence interval
A confidence interval (CI) specifies a range of numbers within which the unknown true mean value $\mu$ lies with a certain probability $1-\alpha$. A common confidence level is $1-\alpha=95~\%$. A $95~\%$ CI would contain the true value $\mu$ with probability $95~\%$. Care must be taken interpreting the CI, since the lower and upper bound of a CI are themselves random variables. Just as a simulation run draws samples from the overall ensemble, determining a CI from a simulation run is drawing a CI from all possible CIs. When the upper and lower bound of a CI have been calculated, this range either contains the true value or not, so there is no longer a probability attached to it. However, for repeated simulations with subsequent computation of the corresponding CIs, on average $95~\%$ of CIs will contain the true value, while $5~\%$ won't.
If the samples are normally distributed and the SEM is known, the upper and lower bounds of the $95~\%$ CI are $\overline{X} \pm 1.96 \, \hat{\sigma}_\overline{X}$.
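As a small illustration with placeholder numbers (the actual mean and SEM of the tutorial data are only computed later):
```python
from scipy.stats import norm
avg, sem = 5.0, 0.1                  # placeholder values for the sample mean and its SEM
z = norm.ppf(0.975)                  # about 1.96 for a 95% confidence level
ci = (avg - z * sem, avg + z * sem)
```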
Interquartile range
The interquartile range denotes the range, within which the central $50~\%$ of all samples lie, if one were to order them by their size. This leaves one quarter of all samples lying below the interquartile range, and another quarter of all samples above it.
Now – what do we use?
We are interested in the precision of our overall, averaged, simulation result, and not in the precision of the individual samples. Those are expected to fluctuate, and in many cases, those fluctuations are uninteresting for the end result. Out of the options presented above, the SEM and the CI are the only ones doing this requirement justice. Since they are related, the question boils down to how to compute the SEM, which will be the topic of the rest of this tutorial.
Uncorrelated samples
How the SEM can be computed depends on the data itself. For uncorrelated samples, it is nearly trivial:
$
\begin{align}
\hat\sigma_\overline{X} = \frac{\hat\sigma}{\sqrt{N}}\tag{2}
\end{align}
$
where $\hat\sigma_\overline{X}$ is the estimated SEM, $\hat\sigma$ is the estimated standard deviation (see eq. 1) and $N$ is the number of samples. But what does it mean for samples to be uncorrelated?
An example of uncorrelated samples would be the rolling of a die. The outcome of each trial is completely independent of the previous trials. We might guess any number from 1 to 6, regardless of what the last result was. The same could be true if we ran an experiment many times independently from one another and measured a quantity each time. By looking at one experimental value, we wouldn't be able to predict the next one. The best guess would simply be the mean value of the entire series. In the case of rolling a die, correlations could for example be observed if it was more probable to obtain the same result as in the previous dice roll rather than another result.
Usually, when you run a molecular dynamics simulation, the particles will only move by a tiny amount during a time step. Consequently, most observables also change only by a small amount during a time step and it is, therefore, more probable to obtain a similar result rather than a completely different result. If we were to sample an observable in every time step, we would get a lot of samples with very similar values. It is said that the samples are correlated. Only if we wait for a sufficiently long time, the system will eventually have evolved to a completely different configuration, and we can expect the observable to assume a truly independent, uncorrelated value.
It is often easy to see when samples are correlated. Execute the code cell below for an example, where a small part of time_series_1 is plotted.
End of explanation
fig = plt.figure(figsize=(10, 6))
plt.plot(np.arange(2000, 42000, 800), time_series_1[2000:42000:800], "x")
fig.axes[0].margins(y=0.1)
plt.xlabel("$i$")
plt.ylabel("$X_i$")
fig.axes[0].xaxis.set_major_locator(plt.MultipleLocator(base=8000))
plt.show()
Explanation: One can clearly see that each sample lies in the vicinity of the previous one.
Below is an example for almost completely uncorrelated samples. The data points are taken from the same time series as in the previous example, but this time they are chosen with large gaps in between (every 800th sample is used). These samples appear to fluctuate a lot more randomly.
End of explanation
BIN_SIZE = 2000
Explanation: However, you should not trust your eye in deciding whether or not a time series is correlated. In fact, when running molecular dynamics simulations, your best guess is to always assume that samples are correlated, and that you should use one of the following techniques for statistical analysis, and rather not just use equation (2).
Binning analysis
Binning analysis is a straightforward method to calculate the SEM for correlated data. A time series of measurements of $N$ samples is divided into $N_\mathrm{B}$ equally long blocks called bins. If $N$ is not an integer multiple of $N_\mathrm{B}$, some data must be discarded to achieve this. The samples in every bin are averaged, giving the bin averages $\overline{X}_i$. It is important that the bin size $N/N_\mathrm{B}$ is significantly larger than the correlation time. Otherwise, binning analysis will yield the wrong SEM.
Once we have computed the bin averages $\overline{X}_i$, getting the SEM is straightforward: we can simply treat $\overline{X}_i$ as an uncorrelated time series. In other words, we can compute the SEM by using equations (1) and (2)!
Let's implement this.
End of explanation
print(f"Best guess for measured quantity: {avg:.3f}")
print(f"Standard error of the mean: {sem:.3f}")
Explanation: Exercise
Determine the maximum possible number of bins of size BIN_SIZE with the data in time_series_1, and store it in a variable N_BINS.
Create a numpy array called bin_avgs of length N_BINS.
Compute the bin averages of time_series_1 and store them in bin_avgs.
python
N_BINS = N_SAMPLES // BIN_SIZE
bin_avgs = np.zeros(N_BINS)
for i in range(N_BINS):
bin_avgs[i] = np.average(time_series_1[i * BIN_SIZE:(i + 1) * BIN_SIZE])
Exercise
Compute the average of all bin averages and store it in avg. This is the overall average, our best guess for the measured quantity. Furthermore, compute the standard error of the mean using equations (1) and (2) from the values in bin_avgs and store it in sem.
python
avg = np.average(bin_avgs)
sem = np.sqrt(np.sum((bin_avgs - avg)**2) / (N_BINS - 1.5) / N_BINS)
End of explanation
from scipy.optimize import curve_fit
# only fit to the first couple of SEMs
CUTOFF = 600
# sizes of the corresponding bins
sizes_subset = np.arange(3, 3 + CUTOFF, dtype=int)
def fit_fn(x, a, b, c):
return -np.exp(-a * x) * b + c
fit_params, _ = curve_fit(fit_fn, sizes_subset, sems[:CUTOFF], (0.05, 1, 0.5))
fit_sems = fit_fn(sizes, *fit_params)
# compute analytical solutions for AR(1) process
AN_SIGMA_1 = np.sqrt(EPS_1 ** 2 / (1 - PHI_1 ** 2))
AN_TAU_EXP_1 = -1 / np.log(PHI_1)
AN_SEM_1 = np.sqrt(2 * AN_SIGMA_1 ** 2 * AN_TAU_EXP_1 / N_SAMPLES)
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x", label="binning analysis")
plt.plot(sizes[(0, -1),], np.repeat(AN_SEM_1, 2), "-.", label="analytical solution")
plt.plot(sizes, fit_sems, "-", label="fit")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.legend()
plt.show()
print(f"Final Standard Error of the Mean: {fit_params[2]:.4f}")
print(f"Analytical Standard Error of the Mean: {AN_SEM_1:.4f}")
Explanation: Now we already have an estimate on how precise our simulation result is. But how do we know if we chose the appropriate bin size? The answer is, we can perform binning analysis for many different bin sizes and check when the SEM converges. For that we would like to define a function that does the binning analysis in one go.
Exercise
Define a function called do_binning_analysis that takes as arguments data (a numpy array containing the samples) and bin_size and returns the estimated SEM. You can reuse your code from the previous exercises and adapt it to be part of the function.
python
def do_binning_analysis(data, bin_size):
n_samples = len(data)
n_bins = n_samples // bin_size
bin_avgs = np.mean(data[:n_bins * bin_size].reshape((n_bins, -1)), axis=1)
return np.std(bin_avgs, ddof=1.5) / np.sqrt(n_bins)
Exercise
Now take the data in time_series_1 and perform binning analysis for bin sizes from 3 up to 5000 and plot the estimated SEMs against the bin size with logarithmic x axis. Your SEM estimates should be stored in a numpy array called sems.
```python
sizes = np.arange(3, 5001, dtype=int)
sems = np.zeros(5001 - 3, dtype=float)
for s in range(len(sizes)):
sems[s] = do_binning_analysis(time_series_1, sizes[s])
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.show()
```
You should see that the series converges to a value between 0.04 and 0.05, before transitioning into a noisy tail. The tail becomes increasingly noisy, because as the block size increases, the number of blocks decreases, thus resulting in worse statistics.
To extract the correct SEM from this plot, we can fit an exponential function to the first part of the data, that doesn't suffer from too much noise.
End of explanation
sizes = np.arange(3, 5001, dtype=int)
sems = np.zeros(5001 - 3, dtype=float)
for s in range(len(sizes)):
sems[s] = do_binning_analysis(time_series_2, sizes[s])
# compute analytical solutions for AR(1) process
AN_SIGMA_2 = np.sqrt(EPS_2 ** 2 / (1 - PHI_2 ** 2))
AN_TAU_EXP_2 = -1 / np.log(PHI_2)
AN_SEM_2 = np.sqrt(2 * AN_SIGMA_2 ** 2 * AN_TAU_EXP_2 / N_SAMPLES)
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x", label="binning analysis")
plt.plot(sizes[(0, -1),], np.repeat(AN_SEM_2, 2), "-.", label="analytical solution")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.show()
Explanation: Even though the fit is not perfect, it suffices to give us the position of the asymptote, which is the final estimate for the standard error of the mean. You can see that binning analysis, in fact, managed to estimate the SEM very precisely compared to the analytical solution. This illustrates that most of the time, binning analysis will give you a very reasonable estimate for the SEM, and in fact, is often used in practice because of its simplicity.
However, in some cases, the statistics of your system can be quite challenging. Remember that in real applications, there won't be an analytical solution for the SEM. Therefore, you need to rely entirely on the statistical analysis. It is important to view the statistical analysis critically to decide whether the statistical analysis is trustworthy or not. To illustrate this, let's have a look at the binning analysis of the other time series that was generated at the start of the tutorial:
End of explanation |
3,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Train Points Importer</h1>
<hr style="border
Step1: <span>
We want to have this data in a more standard format, a CSV file for instance. More like this way
Step2: <span>
<h2>Data cleaning and preparation</h2>
</span>
<br>
<span>
We are going to create an importer and import the data from the Arduino raw serial into a more readable format, CSV.
</span>
<br><br>
<span>
First, set libraries path and import the Importer module
</span>
<br>
Step3: <span>
Create an Importer instance now
Step4: <span>
Now it is time to set some paths, in order to point the source raw data files and the target CSV files
Step5: <span>
That was for the left-side collected hit info.
</span>
Step6: <span>
That was for the right-side collected hit info.
</span>
<br>
<span>
Let's do the left-side import now
Step7: And the right-side import now | Python Code:
!head -10 train_points_import_data/arduino_raw_data.txt
Explanation: <h1>Train Points Importer</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>
Import Tool for transforming collected hits from Arduino serial port, to ATT readable hit format.
</h2>
<span>
<br>
</span>
<i>Import points from arduino format</i><br>
<br>
SOURCE FORMAT:<br>
"hit: { [tstamp]:[level] [tstamp]:[level] ... [tstamp]:[level] [side]}"<br>
from file: src/arduino/data/[file]<br>
<br>
<i>To internal format</i><br>
<br>
TARGET FORMAT:<br>
"[x_coord],[y_coord],[tstamp],[tstamp], ... ,[tstamp]"<br>
to file: src/python/data/[file]<br>
</span>
<hr>
<h2>Abstract</h2>
<br>
<span>
Let's have a look at the raw data,
</span>
End of explanation
!head -10 train_points_import_data/processed_data.csv
Explanation: <span>
We want to have this data in a more standard format, a CSV file for instance. More like this way:
</span>
End of explanation
# Import points from arduino format:
#
# "hit: { [tstamp]:[level] [tstamp]:[level] ... [tstamp]:[level] [side]}"
# from file: src/arduino/data/[file]
#
# To internal format:
# "[x_coord],[y_coord],[tstamp],[tstamp], ... ,[tstamp]"
# to file: src/python/data/[file]
import sys
#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
import hit.importer.train_points_importer as imp
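# Hedged illustration (not part of the importer module): roughly how one raw line of the
# source format could be split into its side flag and its timestamps. The example line and
# the parsing details here are assumptions; the actual conversion, including how the
# x/y coordinates are attached, is handled by TrainPointsImporter below.
example_raw_line = "hit: { 1001:512 1005:498 1010:530 L}"
body = example_raw_line.split("{", 1)[1].rsplit("}", 1)[0].split()
example_side = body[-1]
example_timestamps = [int(token.split(":")[0]) for token in body[:-1]]
print(example_side, example_timestamps)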
Explanation: <span>
<h2>Data cleaning and preparation</h2>
</span>
<br>
<span>
We are going to create an importer and import the data from the Arduino raw serial into a more readable format, CSV.
</span>
<br><br>
<span>
First, set libraries path and import the Importer module
</span>
<br>
End of explanation
importer = imp.TrainPointsImporter()
Explanation: <span>
Create an Importer instance now:
</span>
End of explanation
str_left_input_file = "../src/arduino/data/train_20160129_left.txt"
str_left_output_file = "../src/python/data/train_points_20160129_left.txt"
Explanation: <span>
Now it is time to set some paths, in order to point the source raw data files and the target CSV files:
</span>
End of explanation
str_right_input_file = "../src/arduino/data/train_20160129_right.txt"
str_right_output_file = "../src/python/data/train_points_20160129_right.txt"
Explanation: <span>
That was for the left-side collected hit info.
</span>
End of explanation
importer.from_file_to_file(str_left_input_file, str_left_output_file)
Explanation: <span>
That was for the right-side collected hit info.
</span>
<br>
<span>
Let's do the left-side import now:
</span>
End of explanation
importer.from_file_to_file(str_right_input_file, str_right_output_file)
Explanation: And the right-side import now:
End of explanation |
3,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test Data Prep
This page is meant to prepare test fixtures for use in hydrofunctions.
Table of Contents
Step1: Create sample NWIS responses to requests.
The cells below will generate the output a user would expect to get from successful requests by hf.NWIS.json().
In other words, these are not the raw JSON returned by the NWIS, but rather
the output from calling the .json() method on a Requests object. It actually
isn't json, but rather a Python dict. You can tell because the true and false
in the original json have been converted to True and False for Python.
Each of the fixtures will represent the response to a different type of request to the NWIS. These will be used
to test whether hydrofunctions is parsing the JSON correctly. Note that I need to call these directly from the NWIS
using Requests, not hydrofunctions!
NOTE
Step2: Check values of test fixtures
Several properties of the test dataframes can be checked, including
Step3: Check individual values within the mult_flags fixture
The mult_flags dataframe should have all three flags, with multiple flags in some locations.
- 2019-01-24T10
Step4: Check that missing index values are filled.
Some gages have missing records. These result in missing index values that should be filled in. The data values and the qualifiers should say 'NaN'.
Step5: Check that data requests with different frequencies of observations raise a warning.
The diff_freq request also has some other strange things about it that require special handling. For example, it reports a precipitation data series, but delivers the series with no data. Normally if you request a parameter with no data, it doesn't report a series.
Step6: Check Daylight Savings Time | Python Code:
import hydrofunctions as hf
print("Hydrofunctions version: ", hf.__version__)
import numpy as np
print("Numpy version: ", np.__version__)
import pandas as pd
print("Pandas version: ", pd.__version__)
import requests
print("Requests version: ", requests.__version__)
import matplotlib as plt
%matplotlib inline
Explanation: Test Data Prep
This page is meant to prepare test fixtures for use in hydrofunctions.
Table of Contents:
- Create sample NWIS responses to requests.
- Check values of test fixtures.
- Check individual values within the mult_flags fixture
- Check that missing index values are filled.
- Check that data requests with different frequencies of observations raise a warning.
- Check Daylight Savings Time
End of explanation
header = {'Accept-encoding': 'gzip','max-age': '120'}
JSON15min2day_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json%2C1.1&sites=03213700&parameterCd=00060&startDT=2016-09-01&endDT=2016-09-02", headers=header)
two_sites_two_params_iv_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000,01541200&period=P1D", headers=header)
nothing_avail_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&indent=on&stateCd=al&parameterCd=00001", headers=header)
mult_flags_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01542500&startDT=2019-01-24&endDT=2019-01-28&parameterCd=00060", headers=header)
diff_freq = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01570500,01541000&startDT=2018-06-01&endDT=2018-06-01", headers=header)
startDST_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-03-10&endDT=2018-03-12&parameterCd=00060", headers=header)
endDST_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-11-03&endDT=2018-11-05&parameterCd=00060", headers=header)
JSON15min2day_req.json()
two_sites_two_params_iv_req.json()
nothing_avail_req.json()
mult_flags_req.json()
diff_freq.json()
startDST_req.json()
endDST_req.json()
Explanation: Create sample NWIS responses to requests.
The cells below will generate the output a user would expect to get from successful requests by hf.NWIS.json().
In other words, these are not the raw JSON returned by the NWIS, but rather
the output from calling the .json() method on a Requests object. It actually
isn't json, but rather a Python dict. You can tell because the true and false
in the original json have been converted to True and False for Python.
Each of the fixtures will represent the response to a different type of request to the NWIS. These will be used
to test whether hydrofunctions is parsing the JSON correctly. Note that I need to call these directly from the NWIS
using Requests, not hydrofunctions!
NOTE: within the response from NWIS, there is a field called 'queryURL', which contains the query used to generate this response... except with one tiny modification. The 'queryURL' doesn't contain the '?' necessary between the static part of the URL and the requested parameters. Is this a bug?
| Request Name | NWIS request URL | Description |
|------------------|----------------------|------------------------------|
| JSON15min2month | http://nwis.waterservices.usgs.gov/nwis/iv/?format=json%2C1.1&sites=03213700&parameterCd=00060&startDT=2016-09-01&endDT=2017-11-01 | A fairly long request for discharge from one site. |
|two_sites_two_params_iv | http://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000,01541200&siteStatus=all | Returns stage & discharge for two sites, along with qualifier flags. |
| nothing_avail | http://waterservices.usgs.gov/nwis/iv/?format=json&indent=on&stateCd=al&parameterCd=00001&siteStatus=all | This is basically an empty dataset. NWIS returns this in response to filters that filter too much stuff, requests for parameters that don't exist, or don't exist at this station... |
| mult_flags | http://waterservices.usgs.gov/nwis/iv/?format=json&sites=01542500&startDT=2019-01-24&endDT=2019-01-28&parameterCd=00060 | NWIS returns data for discharge at one site, but with 3 _qualifier flags: 'e':estimated, 'P': provisional data, 'Ice'. |
| diff_freq | https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01570500,01541000&startDT=2018-06-01&endDT=2018-06-01 | The 01541000 site collects every 15 minutes; the 01570500 collects every 30 minutes. The 01570500 site also collects other parameters like precipitation; these have no data for these dates, but NWIS reports them as if there is a data series, but it just delivers an empty series. If you request just the parameter with no data, the NWIS doesn't return any series, so this requires a test during parsing. |
| startDST | https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-03-10&endDT=2018-03-12&parameterCd=00060 | If handled improperly, this request will have a gap as Daylight Savings jumps forward. |
| endDST | https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-11-03&endDT=2018-11-05&parameterCd=00060 | If handled improperly, this request will have an hour of duplicates as Daylight Savings ends and time goes back to normal. |
End of explanation
# Import the 'JSON' dicts from our test_data module.
from tests.test_data import JSON15min2day, two_sites_two_params_iv, nothing_avail, mult_flags, diff_freq, startDST, endDST
json = hf.extract_nwis_df(JSON15min2day, interpolate=False)
print("json shape: ", json.shape)
two = hf.extract_nwis_df(two_sites_two_params_iv, interpolate=False)
print("two shape: ", two.shape)
try:
nothing = hf.extract_nwis_df(nothing_avail, interpolate=False)
print("nothing shape: ", nothing.shape)
except hf.HydroNoDataError as err:
print(err)
mult = hf.extract_nwis_df(mult_flags, interpolate=False)
# This has missing observations that get replaced.
print("mult shape: ", mult.shape, "orig json length series1: ", len(mult_flags['value']['timeSeries'][0]['values'][0]['value']))
# This version gets missing observations replaced with interpolated values.
mult_interp = hf.extract_nwis_df(mult_flags, interpolate=True)
print("mult_interp shape: ", mult_interp.shape, "orig json length series1: ", len(mult_flags['value']['timeSeries'][0]['values'][0]['value']))
diff = hf.extract_nwis_df(diff_freq, interpolate=False)
print("diff shape: ", diff.shape)
diff_interp = hf.extract_nwis_df(diff_freq, interpolate=True)
print("diff_interp shape: ", diff_interp.shape)
diff
mult['2019-01-24 16-05:00']
mult_interp['2019-01-24 16-05:00']
shapes = {'two':two.shape, 'json':json.shape, 'mult':mult.shape, 'diff':diff.shape}
shapes
json.index.is_unique
two.index.is_monotonic
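# Hedged addition (not in the original notebook): .columns.values is listed among the
# checkable properties, so show the parsed column names for the two-site fixture as well.
two.columns.values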
Explanation: Check values of test fixtures
Several properties of the test dataframes can be checked, including:
- .shape : the number of columns and rows. (returns a tuple)
- .columns.values : the names of the columns (returns a list)
- .index.is_unique : are there any repeated values in the time index values? (True|False)
- .index.is_monotonic : index values increase in order. (True|False)
End of explanation
mult.loc['2019-01-24T10:30', 'USGS:01542500:00060:00000_qualifiers']
mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000_qualifiers']
mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000']
type(mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000'])
Explanation: Check individual values within the mult_flags fixture
The mult_flags dataframe should have all three flags, with multiple flags in some locations.
- 2019-01-24T10:30 has P & e
- 2019-01-28T16:00 has P & Ice; value is -999999; should be converted to np.nan
End of explanation
mult.plot()
mult_interp.plot()
start = mult.index.min()
stop = mult.index.max()
(stop-start)/pd.Timedelta('15 minutes')
# The length is the same as calculated above. No missing index values.
mult.shape
# Missing index values from Jan. 24th were filled in
mult['2019-01-24']
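# Hedged addition (not in the original notebook): the rows filled in for the Jan 24 gap
# should carry NaN data values, which isna() makes easy to confirm.
mult['2019-01-24'].isna().any()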
Explanation: Check that missing index values are filled.
Some gages have missing records. These result in missing index values that should be filled in. The data values and the qualifiers should say 'NaN'.
End of explanation
diff = hf.extract_nwis_df(diff_freq)
diff
diff['USGS:01570500:00060:00000'].plot()
Explanation: Check that data requests with different frequencies of observations raise a warning.
The diff_freq request also has some other strange things about it that require special handling. For example, it reports a precipitation data series, but delivers the series with no data. Normally if you request a parameter with no data, it doesn't report a series.
End of explanation
startDSTdf = hf.extract_nwis_df(startDST, interpolate=False)
# Three days at the start of DST should be 3 * 24 * 4 = 288, minus an hour * 4 = 284.
print(startDSTdf.shape)
startDSTdf.plot()
endDSTdf = hf.extract_nwis_df(endDST, interpolate=False)
# Three days at the end of DST should be 3 * 24 * 4 = 288 long, plus an extra hour *4 = 292.
print(endDSTdf.shape)
endDSTdf.plot()
startDSTdf['2018-03-11 20':'2018-03-12 05']
startDSTdf['USGS:01541000:00060:00000_qualifiers'].describe()
endDSTdf['USGS:01541000:00060:00000_qualifiers'].describe()
Explanation: Check Daylight Savings Time
End of explanation |
3,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background
This notebook is an interactive tool for evaluating scoring methods for frame design.
Step1: We must first set up a sample environment with a frame and components.
Step2: Scoring
Weight Distribution- X
Weight distribution in X (fore-aft) is scored using an exponential. Scoring curves are defined by two parameters
Step3: Weight Distribution- Y
Y weight distribution is scored symmetrically about the y centerline. The curve is a normal distribution, with a shape defined by 1) the frame width and 2) a shape factor.
Step4: Collisions
Step5: Time
Score decays slowly over time in a linear manner with every frame | Python Code:
import frame_methods
import engine_methods as em
import itertools
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm
import numpy as np
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
Explanation: Background
This notebook is an interactive tool for evaluating scoring methods for frame design.
End of explanation
### Setup frame environment
y_centerline = 200
left_buffer = 0
base_bom = frame_methods.read_bom(bom_name = "bom1.json")
part_shelf = base_bom.items
frame = frame_methods.frame_layout(bom_frame_spec = base_bom.frame_spec, frame_width = 35)
Explanation: We must first set up a sample environment with a frame and components.
End of explanation
# Define shape variable
# Get theoretical max-forward CG_x:
min_length = em.get_min_collapsible_x(part_shelf, strategy = 'avg')
print("Min Length:", min_length)
# Plot scoring method
CG_x = np.arange(0, 500, 0.01)
CG_x_precalc = (CG_x - min_length) / (frame.frame_length - min_length)
@interact(CG_x_shape=(0.05, 1, 0.05))
def plt_CG_x(CG_x_shape = 0.3):
CG_x_score = np.exp(-CG_x_precalc / CG_x_shape)
plt.plot(CG_x, CG_x_score)
plt.plot((min_length, min_length), (0, max(CG_x_score)), 'red')
plt.show()
def plt_CG_x_logit(s, min_length):
u = min_length + (frame.frame_length / 10)
CG_x_logit = 1 - (1 / (1 + np.exp(-(CG_x- u)/s)))
plt.plot(CG_x, CG_x_logit)
plt.plot((min_length, min_length), (0, max(CG_x_logit)), 'red')
plt.show()
interact(plt_CG_x_logit, s = (1, 500, 1))
Explanation: Scoring
Weight Distribution- X
Weight distribution in X (fore-aft) is scored using an exponential. Scoring curves are defined by two parameters: Shape and minimum theoretical collapsible length.
Alternatively, a logit curve is proposed as a CG_x scoring method.
End of explanation
@interact(CG_y_shape = (0.05, 1, 0.05))
def plt_CG_Y(CG_y_shape = 1.0):
rv = norm(loc = 0.0, scale = frame.frame_width * CG_y_shape)
x = np.arange(-60, 60, .1)
CG_y_score = rv.pdf(x) / max(rv.pdf(x))
#plot the pdfs of these normal distributions
plt.plot(x, CG_y_score)
plt.show()
Explanation: Weight Distribution- Y
Y weight distribution is scored symmetrically about the y centerline. The curve is a normal distribution, with a shape defined by 1) the frame width and 2) a shape factor.
End of explanation
# Collisions are scored on an exponential curve, dictated by a shape parameter
@interact(collision_shape = (0.01, 1, 0.01))
def plt_col(collision_shape = 0.15):
collisions = np.arange(0, len(part_shelf), 1)
collision_score = np.exp(-(collisions/len(part_shelf))/collision_shape)
plt.plot(collisions, collision_score)
plt.show()
Explanation: Collisions
End of explanation
seconds = np.arange(0, 300, 1/30)
time_penalty = seconds * -0.001
plt.plot(seconds, time_penalty)
plt.show()
Explanation: Time
Score decays slowly over time in a linear manner with every frame
End of explanation |
3,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Taming math and physics using SymPy
Tutorial based on the No bullshit guide series of textbooks by Ivan Savov
Abstract
Most people consider math and physics to be scary
beasts from which it is best to keep one's distance. Computers,
however, can help us tame the complexity and tedious arithmetic
manipulations associated with these subjects. Indeed, math and
physics are much more approachable once you have the power of
computers on your side.
This tutorial serves a dual purpose. On one hand, it serves
as a review of the fundamental concepts of mathematics for
computer-literate people. On the other hand, this tutorial serves
to demonstrate to students how a computer algebra system can
help them with their classwork. A word of warning is in order.
Please don't use SymPy to avoid the suffering associated with your
homework! Teachers assign homework problems to you because
they want you to learn. Do your homework by hand, but if you
want, you can check your answers using SymPy. Better yet, use
SymPy to invent extra practice problems for yourself.
Contents
Fundamentals of mathematics
Complex numbers
Calculus
Vectors
Mechanics
Linear algebra
Introduction
You can use a computer algebra system (CAS) to compute complicated
math expressions, solve equations, perform calculus procedures,
and simulate physics systems.
All computer algebra systems offer essentially the same functionality,
so it doesn't matter which system you use
Step1: Fundamentals of mathematics
Let's begin by learning about the basic SymPy objects and the
operations we can carry out on them. We'll learn the SymPy equivalents
of many math verbs like “to solve” (an equation), “to expand” (an
expression), “to factor” (a polynomial).
Numbers
In Python, there are two types of number objects
Step2: Integer objects in Python are a faithful representation of the set of
integers $\mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$. Floating point numbers are
approximate representations of the reals $\mathbb{R}$. Regardless of its absolute
size, a floating point number is only accurate to 16 decimals.
Special care is required when specifying rational numbers, because
integer division might not produce the answer you want. In other
words, Python will not automatically convert the answer to a floating
point number, but instead discard the fractional part (floor division)
Step3: To avoid this problem, you can force float division by using the
number 1.0 instead of 1
Step4: This result is better, but it's still only an approximation of the exact
number $\frac{1}{7} \in \mathbb{Q}$, since a float has 16 decimals while the decimal
expansion of $\frac{1}{7}$ is infinitely long. To obtain an exact representation
of $\frac{1}{7}$ you need to create a SymPy expression. You can sympify any
expression using the shortcut function S()
Step5: Note the input to S() is specified as a text string delimited by quotes.
We could have achieved the same result using S('1')/7 since a SymPy
object divided by an int is a SymPy object.
Except for the tricky Python division operator, other math operators
like addition +, subtraction -, and multiplication * work as you would
expect. The syntax ** is used in Python to denote exponentiation
Step6: When solving math problems, it's best to work with SymPy objects,
and wait to compute the numeric answer in the end. To obtain a
numeric approximation of a SymPy object as a float, call its .evalf()
method
Step7: The method .n() is equivalent to .evalf(). The global SymPy
function N() can also be used to compute numerical values. You can
easily change the number of digits of precision of the approximation.
Enter pi.n(400) to obtain an approximation of $\pi$ to 400 decimals.
Symbols
Python is a civilized language so there's no need to define variables
before assigning values to them. When you write a = 3, you define a
new name a and set it to the value 3. You can now use the name a
in subsequent calculations.
Most interesting SymPy calculations require us to define symbols,
which are the SymPy objects for representing variables and unknowns.
For your convenience, when live.sympy.org starts, it runs the
following commands automatically
Step8: The first statement instructs python to convert 1/7 to 1.0/7 when
dividing, potentially saving you from any int division confusion. The
second statement imports all the SymPy functions. The remaining
statements define some generic symbols x, y, z, and t, and several
other symbols with special properties.
Note the difference between the following two statements
Step9: The name x is defined as a symbol, so SymPy knows that x + 2 is an
expression; but the variable p is not defined, so SymPy doesn't know
what to make of p + 2. To use p in expressions, you must first define
it as a symbol
Step10: You can define a sequence of variables using the following notation
Step11: You can use any name you want for a variable, but it's best if you
avoid the letters Q,C,O,S,I,N and E because they have special uses
in SymPy
Step12: Expressions
You define SymPy expressions by combining symbols with basic math
operations and other functions
Step13: The function simplify can be used on any expression to simplify
it. The examples below illustrate other useful SymPy functions that
correspond to common mathematical operations on expressions
Step14: To substitute a given value into an expression, call the .subs()
method, passing in a python dictionary object { key
Step15: Note how we used .n() to obtain the expression's numeric value.
Solving equations
The function solve is the main workhorse in SymPy. This incredibly
powerful function knows how to solve all kinds of equations. In fact
solve can solve pretty much any equation! When high school students
learn about this function, they get really angry—why did they spend
five years of their life learning to solve various equations by hand,
when all along there was this solve thing that could do all the math
for them? Don't worry, learning math is never a waste of time.
The function solve takes two arguments. Use solve(expr,var) to
solve the equation expr==0 for the variable var. You can rewrite any
equation in the form expr==0 by moving all the terms to one side
of the equation; the solutions to $A(x) = B(x)$ are the same as the
solutions to $A(x) - B(x) = 0$.
For example, to solve the quadratic equation $x^2 + 2x - 8 = 0$, use
Step16: In this case the equation has two solutions so solve returns a list.
Check that $x = 2$ and $x = -4$ satisfy the equation $x^2 + 2x - 8 = 0$.
The best part about solve and SymPy is that you can obtain symbolic
answers when solving equations. Instead of solving one specific
quadratic equation, we can solve all possible equations of the form
$ax^2 + bx + c = 0$ using the following steps
Step17: In this case solve calculated the solution in terms of the symbols
a, b, and c. You should be able to recognize the expressions in the
solution—it's the quadratic formula $x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
To solve a specific equation like $x^2 + 2x - 8 = 0$, we can substitute
the coefficients $a = 1$, $b = 2$, and $c = -8$ into the general solution to
obtain the same result
Step18: To solve a system of equations, you can feed solve with the list of
equations as the first argument, and specify the list of unknowns you
want to solve for as the second argument. For example, to solve for $x$
and $y$ in the system of equations $x + y = 3$ and $3x - 2y = 0$, use
Step19: The function solve is like a Swiss Army knife you can use to solve
all kind of problems. Suppose you want to complete the square in the
expression $x^2 - 4x + 7$, that is, you want to find constants $h$ and $k$
such that $x^2 -4x + 7 = (x-h)^2 + k$. There is no special “complete the
square” function in SymPy, but you can call solve on the equation
$(x - h)^2 + k - (x^2 - 4x + 7) = 0$ to find the unknowns $h$ and $k$
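A sketch of how that call might look (the exact code in the tutorial's cell may differ slightly):
```python
from sympy import symbols, solve
x, h, k = symbols('x h k')
solve((x - h)**2 + k - (x**2 - 4*x + 7), [h, k])
# completing the square by hand gives h = 2 and k = 3, since x**2 - 4*x + 7 == (x - 2)**2 + 3
```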
Step20: Learn the basic SymPy commands and you'll never need to suffer
another tedious arithmetic calculation painstakingly performed by
hand again!
Rational functions
By default, SymPy will not combine or split rational expressions.
You need to use together to symbolically calculate the addition of
fractions
Step21: Alternately, if you have a rational expression and want to divide the
numerator by the denominator, use the apart function
Step22: Exponentials and logarithms
Euler's constant $e = 2.71828\dots$ is defined one of several ways,
$$
e \equiv \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n
\equiv \lim_{\epsilon\to 0}(1+\epsilon)^{1/\epsilon}
\equiv \sum_{n=0}^{\infty}\frac{1}{n!},
$$
and is denoted E in SymPy. Using exp(x) is equivalent to E**x.
The functions log and ln both compute the logarithm base $e$
Step23: By default, SymPy assumes the inputs to functions like exp and log are
complex numbers, so it will not expand certain logarithmic expressions.
However, indicating to SymPy that the inputs are positive real numbers
will make the expansions work
Step24: Polynomials
Let's define a polynomial $P$ with roots at $x = 1$, $x = 2$, and $x = 3$
Step25: To see the expanded version of the polynomial, call its expand method
Step26: When the polynomial is expressed in its expanded form $P(x) =
x^3 - 6x^2 + 11x - 6$, we can't immediately identify its roots. This is
why the factored form $P(x) = (x - 1)(x - 2)(x - 3)$ is preferable. To
factor a polynomial, call its factor method or simplify it
Step27: Recall that the roots of the polynomial $P(x)$ are defined as the
solutions to the equation $P(x) = 0$. We can use the solve function
to find the roots of the polynomial
Step28: Equality checking
In the last example, we used the simplify function to check whether
two expressions were equal. This way of checking equality works
because $P = Q$ if and only if $P - Q = 0$. This is the best way to
check if two expressions are equal in SymPy because it attempts all
possible simplifications when comparing the expressions. Below is
a list of other ways to check whether two quantities are equal with
example cases where they fail
Step29: Trigonometry
The trigonometric functions sin and cos take inputs in radians
Step30: For angles in degrees, you need a conversion factor of $\frac{\pi}{180}$[rad/$^\circ$]
Step31: The inverse trigonometric functions $\sin^{-1}(x) \equiv \arcsin(x)$ and
$\cos^{-1}(x) \equiv \arccos(x)$ are used as follows
Step32: Recall that $\tan(x) \equiv \frac{\sin(x)}{\cos(x)}$. The inverse function of $\tan(x)$ is $\tan^{-1}(x) \equiv \arctan(x) \equiv$ atan(x)
Step33: The function acos returns angles in the range $[0, \pi]$, while asin and
atan return angles in the range $[-\frac{\pi}{2},\frac{\pi}{2}]$.
Here are some trigonometric identities that SymPy knows
Step34: The function trigsimp does essentially the same job as simplify.
If instead of simplifying you want to expand a trig expression, you
should use expand_trig, because the default expand won't touch trig
functions
Step35: Hyperbolic trigonometric functions
The hyperbolic sine and cosine in SymPy are denoted sinh and cosh
respectively and SymPy is smart enough to recognize them when
simplifying expressions
Step36: Recall that $x = \cosh(\mu)$ and $y = \sinh(\mu)$ are defined as $x$ and $y$
coordinates of a point on the hyperbola with equation $x^2 - y^2 = 1$
and therefore satisfy the identity $\cosh^2 x - \sinh^2 x = 1$
Step37: Complex numbers
Ever since Newton, the word “number” has been used to refer to one
of the following types of math objects
Step38: The solutions are $x = i$ and $x = -i$, and indeed we can verify that
$i^2 + 1 = 0$ and $(-i)^2 + 1 = 0$ since $i^2 = -1$.
The complex numbers $\mathbb{C}$ are defined as $\{ a+bi \,|\, a,b \in \mathbb{R} \}$. Complex numbers
contain a real part and an imaginary part
Step39: The polar representation of a complex number is $z \equiv |z|\angle\theta \equiv |z|e^{i\theta}$.
For a complex number $z=a+bi$,
the quantity $|z|=\sqrt{a^2+b^2}$ is known as the absolute value of $z$,
and $\theta$ is its phase or its argument
Step40: The complex conjugate of $z = a + bi$ is the number $\bar{z} = a - bi$
Step41: Complex conjugation is important for computing the absolute value
of $z$ $\left(|z|\equiv\sqrt{z\bar{z}}\right)$ and for division by $z$ $\left(\frac{1}{z}\equiv\frac{\bar{z}}{|z|^2}\right)$.
Euler's formula
Euler's formula shows an important relation between the exponential
function $e^x$ and the trigonometric functions $sin(x)$ and $cos(x)$
Step42: Basically, $\cos(x)$ is the real part of $e^{ix}$, and $\sin(x)$ is the imaginary
part of $e^{ix}$. Whaaat? I know it's weird, but weird things are bound
to happen when you input imaginary numbers to functions.
Euler's formula is often used to rewrite the functions sin and cos in
terms of complex exponentials. For example,
Step43: Compare this expression with the definition of hyperbolic cosine.
Calculus
Calculus is the study of the properties of functions. The operations of
calculus are used to describe the limit behaviour of functions, calculate
their rates of change, and calculate the areas under their graphs. In
this section we'll learn about the SymPy functions for calculating
limits, derivatives, integrals, and summations.
Infinity
The infinity symbol is denoted oo (two lowercase os) in SymPy. Infinity
is not a number but a process
Step44: Limits
We use limits to describe, with mathematical precision, infinitely large
quantities, infinitely small quantities, and procedures with infinitely
many steps.
The number $e$ is defined as the limit $e \equiv \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$
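A sketch of how this limit can be checked with SymPy's limit function (the tutorial's own cell may differ):
```python
from sympy import symbols, limit, oo
n = symbols('n')
limit((1 + 1/n)**n, n, oo)   # returns E, SymPy's symbol for Euler's number
```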
Step45: This limit expression describes the annual growth rate of a loan with
a nominal interest rate of 100% and infinitely frequent compounding.
Borrow \$1000 in such a scheme, and you'll owe $2718.28 after one year.
Limits are also useful to describe the behaviour of functions. Consider
the function $f(x) = \frac{1}{x}$. The limit command shows us what happens
to $f(x)$ near $x = 0$ and as $x$ goes to infinity
Step46: As $x$ becomes larger and larger, the fraction $\frac{1}{x}$ becomes smaller
and smaller. In the limit where $x$ goes to infinity, $\frac{1}{x}$ approaches
zero
Step47: Limits are used to define the derivative and the integral operations.
Derivatives
The derivative function, denoted $f'(x)$, $\frac{d}{dx}f(x)$, $\frac{df}{dx}$, or $\frac{dy}{dx}$,
describes the rate of change of the function $f(x)$.
The SymPy function diff computes the derivative of any expression
Step48: The differentiation operation knows about the product rule $[f(x)g(x)]^\prime=f^\prime(x)g(x)+f(x)g^\prime(x)$,
the chain rule $f(g(x))' = f'(g(x))g'(x)$,
and the quotient rule $\left[\frac{f(x)}{g(x)}\right]^\prime = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}$
Step49: The second derivative of a function f is diff(f,x,2)
Step50: The exponential function $f(x)=e^x$ is special because it is equal to its derivative
Step51: A differential equation is an equation that relates some unknown function $f(x)$ to its derivative.
An example of a differential equation is $f'(x)=f(x)$.
What is the function $f(x)$ which is equal to its derivative?
You can either try to guess what $f(x)$ is or use the dsolve function
Step52: We'll discuss dsolve again in the section on mechanics.
Tangent lines
The tangent line to the function $f(x)$ at $x=x_0$ is
the line that passes through the point $(x_0, f(x_0))$ and has
the same slope as the function at that point.
The tangent line to the function $f(x)$ at the point $x=x_0$ is described by the equation
$$
T_1(x) = f(x_0) \ + \ f'(x_0)(x-x_0).
$$
What is the equation of the tangent line to $f(x)=\frac{1}{2}x^2$ at $x_0=1$?
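One way to build this tangent line in SymPy (a sketch, not necessarily the tutorial's exact code):
```python
from sympy import symbols, diff
x = symbols('x')
f = x**2 / 2
x0 = 1
T1 = f.subs(x, x0) + diff(f, x).subs(x, x0) * (x - x0)   # simplifies to x - 1/2
```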
Step53: The tangent line $T_1(x)$ has the same value and slope as the function $f(x)$ at $x=1$
Step54: Optimization
Optimization is about choosing an input for a function $f(x)$ that results in the best value for $f(x)$.
The best value usually means the maximum value
(if the function represents something desirable like profits)
or the minimum value
(if the function represents something undesirable like costs).
The derivative $f'(x)$ encodes the information about the slope of $f(x)$.
Positive slope $f'(x)>0$ means $f(x)$ is increasing,
negative slope $f'(x)<0$ means $f(x)$ is decreasing,
and zero slope $f'(x)=0$ means the graph of the function is horizontal.
The critical points of a function $f(x)$ are the solutions to the equation $f'(x)=0$.
Each critical point is a candidate to be either a maximum or a minimum of the function.
The second derivative $f^{\prime\prime}(x)$ encodes the information about the curvature of $f(x)$.
Positive curvature means the function looks like $x^2$,
negative curvature means the function looks like $-x^2$.
Let's find the critical points of the function $f(x)=x^3-2x^2+x$
and use the information from its second derivative
to find the maximum of the function
on the interval $x \in [0,1]$.
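A sketch of these steps in SymPy (the tutorial's own cell may differ):
```python
from sympy import symbols, diff, solve
x = symbols('x')
f = x**3 - 2*x**2 + x
critical_points = solve(diff(f, x), x)                             # x = 1/3 and x = 1
curvatures = [diff(f, x, 2).subs(x, c) for c in critical_points]   # negative at 1/3, positive at 1
```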
Step55: It will help to look at the graph of this function.
The point $x=\frac{1}{3}$ is a local maximum because it is a critical point of $f(x)$
where the curvature is negative, meaning $f(x)$ looks like the peak of a mountain at $x=\frac{1}{3}$.
The maximum value of $f(x)$ on the interval $x\in [0,1]$ is $f\left(\frac{1}{3}\right)=\frac{4}{27}$.
The point $x=1$ is a local minimum because it is a critical point
with positive curvature, meaning $f(x)$ looks like the bottom of a valley at $x=1$.
Integrals
The integral of $f(x)$ corresponds to the computation of the area under the graph of $f(x)$.
The area under $f(x)$ between the points $x=a$ and $x=b$ is denoted as follows
Step56: This is known as an indefinite integral since the limits of integration are not defined.
In contrast,
a definite integral computes the area under $f(x)$ between $x=a$ and $x=b$.
Use integrate(f, (x,a,b)) to compute the definite integrals of the form $A(a,b)=\int_a^b f(x) \, dx$
Step57: We can obtain the same area by first calculating the indefinite integral $F(c)=\int_0^c f(x)\,dx$,
then using $A(a,b) = F(x)\big\vert_a^b \equiv F(b) - F(a)$
Step58: Integrals correspond to signed area calculations
Step59: During the first half of its $2\pi$-cycle,
the graph of $\sin(x)$ is above the $x$-axis, so it has a positive contribution to the area under the curve.
During the second half of its cycle (from $x=\pi$ to $x=2\pi$),
$\sin(x)$ is below the $x$-axis, so it contributes negative area.
Draw a graph of $\sin(x)$ to see what is going on.
Fundamental theorem of calculus
The integral is the “inverse operation” of the derivative.
If you perform the integral operation followed by the derivative operation on some function,
you'll obtain the same function
Step60: Alternately, if you compute the derivative of a function followed by the integral,
you will obtain the original function $f(x)$ (up to a constant)
Step61: The fundamental theorem of calculus is important because it tells us how to solve differential equations.
If we have to solve for $f(x)$ in the differential equation $\frac{d}{dx}f(x) = g(x)$,
we can take the integral on both sides of the equation to obtain the answer $f(x) = \int g(x)\,dx + C$.
Sequences
Sequences are functions that take whole numbers as inputs.
Instead of continuous inputs $x\in \mathbb{R}$,
sequences take natural numbers $n\in\mathbb{N}$ as inputs.
We denote sequences as $a_n$ instead of the usual function notation $a(n)$.
We define a sequence by specifying an expression for its $n^\mathrm{th}$ term
Step62: Substitute the desired value of $n$ to see the value of the $n^\mathrm{th}$ term
Step63: The Python list comprehension syntax [item for item in list]
can be used to print the sequence values for some range of indices
Step64: Observe that $a_n$ is not properly defined for $n=0$ since $\frac{1}{0}$ is a division-by-zero error.
To be precise, we should say $a_n$'s domain is the positive naturals $a_n
Step65: Both $a_n=\frac{1}{n}$ and $b_n = \frac{1}{n!}$ converge to $0$ as $n\to\infty$.
Many important math quantities are defined as limit expressions.
An interesting example to consider is the number $\pi$,
which is defined as the area of a circle of radius $1$.
We can approximate the area of the unit circle by drawing a many-sided regular polygon around the circle.
Splitting the $n$-sided regular polygon into identical triangular slices,
we can obtain a formula for its area $A_n$.
In the limit as $n\to \infty$,
the $n$-sided-polygon approximation to the area of the unit-circle becomes exact
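One possible choice (an assumption for illustration; the tutorial's own formula may differ) is a regular n-gon circumscribed about the unit circle, whose area is n*tan(pi/n):
```python
from sympy import symbols, tan, pi, limit, oo
n = symbols('n', positive=True)
A_n = n * tan(pi / n)     # area of a regular n-gon circumscribed about the unit circle
limit(A_n, n, oo)         # pi
```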
Step66: Series
Suppose we're given a sequence $a_n$ and we want to compute the sum of all the values in this sequence $\sum_{n}^\infty a_n$.
Series are sums of sequences.
Summing the values of a sequence $a_n
Step67: We say the series $\sum a_n$ diverges to infinity (or is divergent) while the series $\sum b_n$ converges (or is convergent).
As we sum together more and more terms of the sequence $b_n$, the total becomes closer and closer to some finite number.
In this case, the infinite sum $\sum_{n=0}^\infty \frac{1}{n!}$ converges to the number $e=2.71828\ldots$.
The summation command is useful because it allows us to compute infinite sums,
but for most practical applications we don't need to take an infinite number of terms in a series to obtain a good approximation.
This is why series are so neat
Step68: Taylor series
Wait, there's more!
Not only can we use series to approximate numbers,
we can also use them to approximate functions.
A power series is a series whose terms contain different powers of the variable $x$.
The $n^\mathrm{th}$ term in a power series is a function of both the sequence index $n$ and the input variable $x$.
For example, the power series of the function $\exp(x)=e^x$ is
$$
\exp(x) \equiv 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots
= \sum_{n=0}^\infty \frac{x^n}{n!}.
$$
This is, IMHO, one of the most important ideas in calculus
Step69: Note that SymPy is actually smart enough to recognize that the infinite series
you're computing corresponds to the closed-form expression $e^5$
Step70: Taking as few as 35 terms in the series is sufficient to obtain an approximation to $e^5$
that is accurate to 16 decimals
Step71: The coefficients in the power series of a function (also known as the Taylor series) depend on the derivatives of the function.
The formula for the $n^\mathrm{th}$ term in the Taylor series of $f(x)$ expanded at $x=c$ is $a_n(x) = \frac{f^{(n)}(c)}{n!}(x-c)^n$,
where $f^{(n)}(c)$ is the value of the $n^\mathrm{th}$ derivative of $f(x)$ evaluated at $x=c$.
The term Maclaurin series refers to Taylor series expansions at $x=0$.
The SymPy function series is a convenient way to obtain the series of any function.
Calling series(expr,var,at,nmax)
will show you the series expansion of expr
near var=at
up to power nmax
Step72: Some functions are not defined at $x=0$, so we expand them at a different value of $x$.
For example, the power series of $\ln(x)$ expanded at $x=1$ is
Step73: Here, the result SymPy returns is misleading.
The Taylor series of $\ln(x)$ expanded at $x=1$ has terms of the form $(x-1)^n$
Step74: Vectors
A vector $\vec{v} \in \mathbb{R}^n$ is an $n$-tuple of real numbers.
For example, consider a vector that has three components
Step75: Dot product
The dot product of the 3-vectors $\vec{u}$ and $\vec{v}$ can be defined two ways
Step76: We can combine the algebraic and geometric formulas for the dot product
to obtain the cosine of the angle between the vectors
$$
\cos(\varphi)
= \frac{ \vec{u}\cdot\vec{v} }{ \|\vec{u}\|\|\vec{v}\| }
= \frac{ u_xv_x+u_yv_y+u_zv_z }{ \|\vec{u}\|\|\vec{v}\| },
$$
and use the acos function to find the angle measure
Step77: Just by looking at the coordinates of the vectors $\vec{u}$ and $\vec{v}$,
it's difficult to determine their relative direction.
Thanks to the dot product, however,
we know the angle between the vectors is $52.76^\circ$,
which means they kind of point in the same direction.
Vectors that are at an angle $\varphi=90^\circ$ are called orthogonal, meaning at right angles with each other.
The dot product of vectors for which $\varphi > 90^\circ$ is negative because they point mostly in opposite directions.
The notion of the “angle between vectors” applies more generally to vectors with any number of dimensions.
The dot product for $n$-dimensional vectors is $\vec{u}\cdot\vec{v}=\sum_{i=1}^n u_iv_i$.
This means we can talk about “the angle between” 1000-dimensional vectors.
That's pretty crazy if you think about it—there is no way we could possibly “visualize” 1000-dimensional vectors,
yet given two such vectors we can tell if they point mostly in the same direction,
in perpendicular directions, or mostly in opposite directions.
The dot product is a commutative operation $\vec{u}\cdot\vec{v} = \vec{v}\cdot\vec{u}$
Step78: Projections
Dot products are used for computing projections.
Assume you're given two vectors $\vec{u}$ and $\vec{n}$ and you want to find the component
of $\vec{u}$ that points in the $\vec{n}$ direction.
The following formula based on the dot product will give you the answer
Step79: In the case where the direction vector $\hat{n}$ is of unit length $\|\hat{n}\| = 1$,
the projection formula simplifies to $\Pi_{\hat{n}}( \vec{u} ) \equiv (\vec{u}\cdot\hat{n})\hat{n}$.
Consider now the plane $P$ defined by $(1,1,1)\cdot[(x,y,z)-(0,0,0)]=0$.
A plane is a two dimensional subspace of $\mathbb{R}^3$.
We can decompose any vector $\vec{u} \in \mathbb{R}^3$ into two parts $\vec{u}=\vec{v} + \vec{w}$
such that $\vec{v}$ lies inside the plane and $\vec{w}$ is perpendicular to the plane (parallel to $\vec{n}=(1,1,1)$).
To obtain the perpendicular-to-$P$ component of $\vec{u}$,
compute the projection of $\vec{u}$ in the direction $\vec{n}$
Step80: To obtain the in-the-plane-$P$ component of $\vec{u}$,
start with $\vec{u}$ and subtract the perpendicular-to-$P$ part
Step81: You should check on your own that $\vec{v}+\vec{w}=\vec{u}$ as claimed.
Cross product
The cross product, denoted $\times$, takes two vectors as inputs and produces a vector as output.
The cross products of individual basis elements are defined as follows
Step82: The vector $\vec{u}\times \vec{v}$ is orthogonal to both $\vec{u}$ and $\vec{v}$.
The norm of the cross product $\|\vec{u}\times \vec{v}\|$ is proportional to the lengths of the vectors
and the sine of the angle between them
Step83: The name “cross product” is well-suited for this operation since it is
calculated by “cross-multiplying” the coefficients of the vectors
Step84: The cross product is anti-commutative, $\vec{u}\times\vec{v} = -\vec{v}\times\vec{u}$
Step85: The product of two numbers and the dot product of two vectors are commutative operations.
The cross product, however, is not commutative
Step86: To express the answer in length-and-direction notation,
use norm to find the length of $\vec{F}_{\textrm{net}}$
and atan2 (The function atan2(y,x) computes the correct direction
for all vectors $(x,y)$, unlike atan(y/x) which requires corrections for angles in the range $[\frac{\pi}{2}, \frac{3\pi}{2}]$.) to find its direction
Step87: The net force on the object is $\vec{F}_{\textrm{net}}= 8.697\angle 16.7^\circ$[N].
Kinematics
Let $x(t)$ denote the position of an object,
$v(t)$ denote its velocity,
and $a(t)$ denote its acceleration.
Together $x(t)$, $v(t)$, and $a(t)$ are known as the equations of motion of the object.
The equations of motion are related by the derivative operation
Step88: You may remember these equations from your high school physics class.
They are the uniform accelerated motion (UAM) equations
Step89: The above calculation shows $v_f^2 - 2ax_f = -2ax_i + v_i^2$.
After moving the term $2ax_f$ to the other side of the equation, we obtain
\begin{align}
(v(t))^2 \ = \ v_f^2 = v_i^2 + 2a\Delta x \ = \ v_i^2 + 2a(x_f-x_i).
\end{align}
The fourth equation is important for practical purposes
because it allows us to solve physics problems in a time-less manner.
Example
Find the position function of an object at time $t=3[\mathrm{s}]$,
if it starts from $x_i=20[\mathrm{m}]$ with $v_i=10[\mathrm{m/s}]$ and undergoes
a constant acceleration of $a=5[\mathrm{m/s^2}]$.
What is the object's velocity at $t=3[\mathrm{s}]$?
Step90: If you think about it,
physics knowledge combined with computer skills is like a superpower!
General equations of motion
The procedure
$a(t) \ \overset{v_i+ \int\!dt }{\longrightarrow} \ v(t) \ \overset{x_i+ \int\!dt }{\longrightarrow} \ x(t)$
can be used to obtain the position function $x(t)$ even when the acceleration is not constant.
Suppose the acceleration of an object is $a(t)=\sqrt{k t}$;
what is its $x(t)$?
Step91: Potential energy
Instead of working with the kinematic equations of motion $x(t)$, $v(t)$, and $a(t)$ which depend on time,
we can solve physics problems using energy calculations.
A key connection between the world of forces and the world of energy is the concept of potential energy.
If you move an object against a conservative force (think raising a ball in the air against the force of gravity),
you can think of the work you do against the force as being stored in the potential energy of the object.
For each force $\vec{F}(x)$ there is a corresponding potential energy $U_F(x)$.
The change in potential energy associated with the force $\vec{F}(x)$ and displacement $\vec{d}$
is defined as the negative of the work done by the force during the displacement
Step92: Note the negative sign in the formula defining the potential energy.
This negative is canceled by the negative sign of the dot product $\vec{F}\cdot d\vec{x}$
Step93: Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t + \phi)$,
which is more commonly used to describe simple harmonic motion.
We can use the expand function with the argument trig=True to convince ourselves of this equivalence
Step94: If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$,
we obtain the form $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ that SymPy found.
Conservation of energy
We can verify that the total energy of the mass-spring system is conserved by showing
$E_T(t) = U_s(t) + K(t) = \textrm{constant}$
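Here is a minimal sketch of that verification (an illustrative aside, assuming the solution $x(t)=A\cos(\omega t+\phi)$ and the relation $k=m\omega^2$):
```
from sympy import symbols, cos, diff, simplify
t, A, w, phi, m = symbols('t A omega phi m', positive=True)
x = A*cos(w*t + phi)              # assumed solution of the mass-spring system
v = diff(x, t)                    # velocity
E_T = (m*w**2)*x**2/2 + m*v**2/2  # U_s + K, writing k = m*omega**2
simplify(E_T)                     # -> A**2*m*omega**2/2, independent of t
```
The simplified total energy contains no $t$, confirming that it stays constant throughout the motion.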
Step95: Linear algebra
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns.
To specify a matrix $A$, we specify the values for its $mn$ components $a_{11}, a_{12}, \ldots, a_{mn}$
as a list of lists
Step96: Use the square brackets to access the matrix elements or to obtain a submatrix
Step97: Some commonly used matrices can be created with shortcut methods
Step98: Standard algebraic operations like
addition +, subtraction -, multiplication *,
and exponentiation ** work as expected for Matrix objects.
The transpose operation flips the matrix through its diagonal
Step99: Recall that the transpose is also used to convert row vectors into column vectors and vice versa.
Row operations
Step100: The method row_op takes two arguments as inputs
Step101: Note the rref method returns a tuple of values
Step102: The column space of $A$ is the span of the columns of $A$ that contain the pivots
in the reduced row echelon form of $A$
Step103: Note we took columns from the original matrix $A$ and not its RREF.
To find the null space of $A$, call its nullspace method
Step104: Determinants
The determinant of a matrix,
denoted $\det(A)$ or $|A|$,
is a particular way to multiply the entries of the matrix to produce a single number.
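For example (a small illustrative sketch; the $2\times 2$ matrix here is an arbitrary choice):
```
from sympy import Matrix
M = Matrix([[1, 2],
            [3, 4]])
M.det()   # -> -2, computed as 1*4 - 2*3
```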
Step105: Determinants are used for all kinds of tasks
Step106: The matrix inverse $A^{-1}$ plays the role of division by $A$.
Eigenvectors and eigenvalues
When a matrix is multiplied by one of its eigenvectors the output
is the same eigenvector multiplied by a constant: $A\vec{e}_\lambda =\lambda\vec{e}_\lambda$.
The constant $\lambda$ (the Greek letter lambda) is called an eigenvalue of $A$.
To find the eigenvalues of a matrix, start from the definition $A\vec{e}_\lambda =\lambda\vec{e}_\lambda$,
insert the identity $\mathbb{1}$,
and rewrite it as a null-space problem
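In other words, $\lambda$ is an eigenvalue of $A$ exactly when $\det(A-\lambda\mathbb{1})=0$. A quick sketch of this calculation (the matrix below is just an illustrative example):
```
from sympy import Matrix, eye, symbols, solve
lamda = symbols('lamda')              # 'lambda' is a reserved word in Python
A = Matrix([[9, -2],
            [-2, 6]])
charpoly = (A - lamda*eye(2)).det()   # characteristic polynomial of A
solve(charpoly, lamda)                # -> [5, 10], the eigenvalues
A.eigenvals()                         # -> {5: 1, 10: 1}, the same answer directly
```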
Step107: Certain matrices can be written entirely in terms of their eigenvectors and their eigenvalues.
Consider the matrix $\Lambda$ (capital Greek L) that has the eigenvalues of the matrix $A$ on the diagonal,
and the matrix $Q$ constructed from the eigenvectors of $A$ as columns
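These matrices satisfy $A = Q\Lambda Q^{-1}$. A short sketch using SymPy's diagonalize method (again with an arbitrary example matrix):
```
from sympy import Matrix
A = Matrix([[9, -2],
            [-2, 6]])
Q, L = A.diagonalize()   # A == Q*L*Q**-1
L                        # eigenvalues of A on the diagonal
Q*L*Q**-1 == A           # -> True
```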
Step108: Not all matrices are diagonalizable.
You can check if a matrix is diagonalizable by calling its is_diagonalizable method | Python Code:
from sympy import init_session
init_session()
Explanation: Taming math and physics using SymPy
Tutorial based on the No bullshit guide series of textbooks by Ivan Savov
Abstract
Most people consider math and physics to be scary
beasts from which it is best to keep one's distance. Computers,
however, can help us tame the complexity and tedious arithmetic
manipulations associated with these subjects. Indeed, math and
physics are much more approachable once you have the power of
computers on your side.
This tutorial serves a dual purpose. On one hand, it serves
as a review of the fundamental concepts of mathematics for
computer-literate people. On the other hand, this tutorial serves
to demonstrate to students how a computer algebra system can
help them with their classwork. A word of warning is in order.
Please don't use SymPy to avoid the suffering associated with your
homework! Teachers assign homework problems to you because
they want you to learn. Do your homework by hand, but if you
want, you can check your answers using SymPy. Better yet, use
SymPy to invent extra practice problems for yourself.
Contents
Fundamentals of mathematics
Complex numbers
Calculus
Vectors
Mechanics
Linear algebra
Introduction
You can use a computer algebra system (CAS) to compute complicated
math expressions, solve equations, perform calculus procedures,
and simulate physics systems.
All computer algebra systems offer essentially the same functionality,
so it doesn't matter which system you use: there are free
systems like SymPy, Maxima, or Octave, and commercial systems like
Maple, MATLAB, and Mathematica. This tutorial is an introduction to
SymPy, which is a symbolic computer algebra system written in the
programming language Python. In a symbolic CAS, numbers and
operations are represented symbolically, so the answers obtained are
exact. For example, the number √2 is represented in SymPy as the
object Pow(2,1/2), whereas in numerical computer algebra systems
like Octave, the number √2 is represented as the approximation
1.41421356237310 (a float). For most purposes the approximation
is okay, but sometimes approximations can lead to problems:
float(sqrt(2))*float(sqrt(2)) = 2.00000000000000044 ≠ 2. Because
SymPy uses exact representations, you'll never run into such
problems: Pow(2,1/2)*Pow(2,1/2) = 2.
This tutorial is organized as follows. We'll begin by introducing the
SymPy basics and the bread-and-butter functions used for manipulating
expressions and solving equations. Afterward, we'll discuss the
SymPy functions that implement calculus operations like differentiation
and integration. We'll also introduce the functions used to deal with
vectors and complex numbers. Later we'll see how to use vectors and
integrals to understand Newtonian mechanics. In the last section,
we'll introduce the linear algebra functions available in SymPy.
This tutorial presents many explanations as blocks of code. Be sure
to try the code examples on your own by typing the commands into
SymPy. It's always important to verify for yourself!
Using SymPy
The easiest way to use SymPy, provided you're connected to the
Internet, is to visit http://live.sympy.org. You'll be presented with
an interactive prompt into which you can enter your commands—right
in your browser.
If you want to use SymPy on your own computer, you must install
Python and the python package sympy. You can then open a command
prompt and start a SymPy session using:
```
you@host$ python
Python X.Y.Z
[GCC a.b.c (Build Info)] on platform
Type "help", "copyright", or "license" for more information.
from sympy import *
```
The >>> prompt indicates you're in the Python shell which accepts
Python commands. The command from sympy import * imports all
the SymPy functions into the current namespace. All SymPy functions
are now available to you. To exit the python shell press CTRL+D.
I highly recommend you also install ipython, which is an improved
interactive python shell. If you have ipython and SymPy installed,
you can start an ipython shell with SymPy pre-imported using the
command isympy. For an even better experience, you can try ipython notebook,
which is a web frontend for the ipython shell.
You can start your session the same way isympy does, by running the following commands, which will be described in detail later.
End of explanation
3 # an int
3.0 # a float
Explanation: Fundamentals of mathematics
Let's begin by learning about the basic SymPy objects and the
operations we can carry out on them. We'll learn the SymPy equivalents
of many math verbs like “to solve” (an equation), “to expand” (an
expression), “to factor” (a polynomial).
Numbers
In Python, there are two types of number objects: ints and floats.
End of explanation
1/7 # int/int gives int
Explanation: Integer objects in Python are a faithful representation of the set of
integers $\mathbb{Z}={\ldots,-2,-1,0,1,2,\ldots}$. Floating point numbers are
approximate representations of the reals $\mathbb{R}$. Regardless of its absolute
size, a floating point number is only accurate to 16 decimals.
Special care is required when specifying rational numbers, because
integer division might not produce the answer you want. In other
words, Python will not automatically convert the answer to a floating
point number, but instead discard the remainder and return an integer (floor division):
End of explanation
1.0/7 # float/int gives float
Explanation: To avoid this problem, you can force float division by using the
number 1.0 instead of 1:
End of explanation
S('1/7') # = Rational(1,7)
Explanation: This result is better, but it's still only an approximation of the exact
number $\frac{1}{7} \in \mathbb{Q}$, since a float has 16 decimals while the decimal
expansion of $\frac{1}{7}$ is infinitely long. To obtain an exact representation
of $\frac{1}{7}$ you need to create a SymPy expression. You can sympify any
expression using the shortcut function S():
End of explanation
2**10 # same as S('2^10')
Explanation: Note the input to S() is specified as a text string delimited by quotes.
We could have achieved the same result using S('1')/7 since a SymPy
object divided by an int is a SymPy object.
Except for the tricky Python division operator, other math operators
like addition +, subtraction -, and multiplication * work as you would
expect. The syntax ** is used in Python to denote exponentiation:
End of explanation
pi
pi.evalf()
Explanation: When solving math problems, it's best to work with SymPy objects,
and wait to compute the numeric answer in the end. To obtain a
numeric approximation of a SymPy object as a float, call its .evalf()
method:
End of explanation
from __future__ import division
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
Explanation: The method .n() is equivalent to .evalf(). The global SymPy
function N() can also be used to compute numerical values. You can
easily change the number of digits of precision of the approximation.
Enter pi.n(400) to obtain an approximation of $\pi$ to 400 decimals.
Symbols
Python is a civilized language so there's no need to define variables
before assigning values to them. When you write a = 3, you define a
new name a and set it to the value 3. You can now use the name a
in subsequent calculations.
Most interesting SymPy calculations require us to define symbols,
which are the SymPy objects for representing variables and unknowns.
For your convenience, when live.sympy.org starts, it runs the
following commands automatically:
End of explanation
x + 2 # an Add expression
p + 2
Explanation: The first statement instructs python to convert 1/7 to 1.0/7 when
dividing, potentially saving you from any int division confusion. The
second statement imports all the SymPy functions. The remaining
statements define some generic symbols x, y, z, and t, and several
other symbols with special properties.
Note the difference between the following two statements:
End of explanation
p = Symbol('p') # the same as p = symbols('p')
p + 2 # = Add(Symbol('p'), Integer(2))
Explanation: The name x is defined as a symbol, so SymPy knows that x + 2 is an
expression; but the variable p is not defined, so SymPy doesn't know
what to make of p + 2. To use p in expressions, you must first define
it as a symbol:
End of explanation
a0, a1, a2, a3 = symbols('a0:4')
Explanation: You can define a sequence of variables using the following notation:
End of explanation
3+3
_*2
Explanation: You can use any name you want for a variable, but it's best if you
avoid the letters Q,C,O,S,I,N and E because they have special uses
in SymPy: I is the unit imaginary number $i \equiv \sqrt(-1)$, E is the base of
the natural logarithm, S() is the sympify function, N() is used to
obtain numeric approximations, and O is used for big-O notation.
The underscore symbol _ is a special variable that contains the result
of the last printed value. The variable _ is analogous to the ans button
on certain calculators, and is useful in multi-step calculations:
End of explanation
expr = 2*x + 3*x - sin(x) - 3*x + 42
simplify(expr)
Explanation: Expresions
You define SymPy expressions by combining symbols with basic math
operations and other functions:
End of explanation
factor( x**2-2*x-8 )
expand( (x-4)*(x+2) )
a, b = symbols('a b')
collect(x**2 + x*b + a*x + a*b, x) # collect terms for diff. pows of x
Explanation: The function simplify can be used on any expression to simplify
it. The examples below illustrate other useful SymPy functions that
correspond to common mathematical operations on expressions:
End of explanation
expr = sin(x) + cos(y)
expr
expr.subs({x:1, y:2})
expr.subs({x:1, y:2}).n()
Explanation: To substitute a given value into an expression, call the .subs()
method, passing in a python dictionary object { key:val, ... }
with the symbol–value substitutions you want to make:
End of explanation
solve( x**2 + 2*x - 8, x)
Explanation: Note how we used .n() to obtain the expression's numeric value.
Solving equations
The function solve is the main workhorse in SymPy. This incredibly
powerful function knows how to solve all kinds of equations. In fact
solve can solve pretty much any equation! When high school students
learn about this function, they get really angry—why did they spend
five years of their life learning to solve various equations by hand,
when all along there was this solve thing that could do all the math
for them? Don't worry, learning math is never a waste of time.
The function solve takes two arguments. Use solve(expr,var) to
solve the equation expr==0 for the variable var. You can rewrite any
equation in the form expr==0 by moving all the terms to one side
of the equation; the solutions to $A(x) = B(x)$ are the same as the
solutions to $A(x) - B(x) = 0$.
For example, to solve the quadratic equation $x^2 + 2x - 8 = 0$, use
End of explanation
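# Extra check (not in the original tutorial): substituting each root back
# into the quadratic expression should give zero.
[ (x**2 + 2*x - 8).subs({x:r}) for r in [2, -4] ]   # -> [0, 0]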
a, b, c = symbols('a b c')
solve( a*x**2 + b*x + c, x)
Explanation: In this case the equation has two solutions so solve returns a list.
Check that $x = 2$ and $x = -4$ satisfy the equation $x^2 + 2x - 8 = 0$.
The best part about solve and SymPy is that you can obtain symbolic
answers when solving equations. Instead of solving one specific
quadratic equation, we can solve all possible equations of the form
$ax^2 + bx + c = 0$ using the following steps:
End of explanation
gen_sol = solve( a*x**2 + b*x + c, x)
[ gen_sol[0].subs({'a':1,'b':2,'c':-8}),
gen_sol[1].subs({'a':1,'b':2,'c':-8}) ]
Explanation: In this case solve calculated the solution in terms of the symbols
a, b, and c. You should be able to recognize the expressions in the
solution—it's the quadratic formula $x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
To solve a specific equation like $x^2 + 2x - 8 = 0$, we can substitute
the coefficients $a = 1$, $b = 2$, and $c = -8$ into the general solution to
obtain the same result:
End of explanation
solve([x + y - 3, 3*x - 2*y], [x, y])
Explanation: To solve a system of equations, you can feed solve with the list of
equations as the first argument, and specify the list of unknowns you
want to solve for as the second argument. For example, to solve for $x$
and $y$ in the system of equations $x + y = 3$ and $3x - 2y = 0$, use
End of explanation
h, k = symbols('h k')
solve( (x-h)**2 + k - (x**2-4*x+7), [h,k] )
((x-2)**2+3).expand() # so h = 2 and k = 3, verify...
Explanation: The function solve is like a Swiss Army knife you can use to solve
all kind of problems. Suppose you want to complete the square in the
expression $x^2 - 4x + 7$, that is, you want to find constants $h$ and $k$
such that $x^2 -4x + 7 = (x-h)^2 + k$. There is no special “complete the
square” function in SymPy, but you can call solve on the equation
$(x - h)^2 + k - (x^2 - 4x + 7) = 0$ to find the unknowns $h$ and $k$:
End of explanation
a, b, c, d = symbols('a b c d')
a/b + c/d
together(a/b + c/d)
Explanation: Learn the basic SymPy commands and you'll never need to suffer
another tedious arithmetic calculation painstakingly performed by
hand again!
Rational functions
By default, SymPy will not combine or split rational expressions.
You need to use together to symbolically calculate the addition of
fractions:
End of explanation
apart( (x**2+x+4)/(x+2) )
Explanation: Alternately, if you have a rational expression and want to divide the
numerator by the denominator, use the apart function:
End of explanation
log(E**3) # same as ln(E**3)
Explanation: Exponentials and logarithms
Euler's number $e = 2.71828\dots$ is defined in one of several ways,
$$
e \equiv \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n
\equiv \lim_{\epsilon\to 0}(1+\epsilon)^{1/\epsilon}
\equiv \sum_{n=0}^{\infty}\frac{1}{n!},
$$
and is denoted E in SymPy. Using exp(x) is equivalent to E**x.
The functions log and ln both compute the logarithm base $e$:
End of explanation
x, y = symbols('x y')
log(x*y).expand()
a, b = symbols('a b', positive=True)
log(a*b).expand()
Explanation: By default, SymPy assumes the inputs to functions like exp and log are
complex numbers, so it will not expand certain logarithmic expressions.
However, indicating to SymPy that the inputs are positive real numbers
will make the expansions work:
End of explanation
P = (x-1)*(x-2)*(x-3)
P
Explanation: Polynomials
Let's define a polynomial $P$ with roots at $x = 1$, $x = 2$, and $x = 3$:
End of explanation
P.expand()
Explanation: To see the expanded version of the polynomial, call its expand method:
End of explanation
P.factor()
P.simplify()
Explanation: When the polynomial is expressed in it's expanded form $P(x) =
x^3 - 6x^2 + 11x - 6$, we can't immediately identify its roots. This is
why the factored form $P(x) = (x - 1)(x - 2)(x - 3)$ is preferable. To
factor a polynomial, call its factor method or simplify it:
End of explanation
roots = solve(P,x)
roots
# let's check if P equals (x-1)(x-2)(x-3)
simplify( P - (x-roots[0])*(x-roots[1])*(x-roots[2]) )
Explanation: Recall that the roots of the polynomial $P(x)$ are defined as the
solutions to the equation $P(x) = 0$. We can use the solve function
to find the roots of the polynomial:
End of explanation
p = (x-5)*(x+5)
q = x**2 - 25
p == q # fail
p - q == 0 # fail
simplify(p - q) == 0
sin(x)**2 + cos(x)**2 == 1 # fail
simplify( sin(x)**2 + cos(x)**2 - 1) == 0
Explanation: Equality checking
In the last example, we used the simplify function to check whether
two expressions were equal. This way of checking equality works
because $P = Q$ if and only if $P - Q = 0$. This is the best way to
check if two expressions are equal in SymPy because it attempts all
possible simplifications when comparing the expressions. Below is
a list of other ways to check whether two quantities are equal with
example cases where they fail:
End of explanation
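# An extra aside: expressions also have an .equals() method, which tests
# equality by simplifying the difference and sampling it at random points.
p.equals(q)   # -> True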
sin(pi/6)
cos(pi/6)
Explanation: Trigonometry
The trigonometric functions sin and cos take inputs in radians:
End of explanation
sin(30*pi/180) # 30 deg = pi/6 rads
Explanation: For angles in degrees, you need a conversion factor of $\frac{\pi}{180}$[rad/$^\circ$]:
End of explanation
asin(1/2)
acos(sqrt(3)/2)
Explanation: The inverse trigonometric functions $\sin^{-1}(x) \equiv \arcsin(x)$ and
$\cos^{-1}(x) \equiv \arccos(x)$ are used as follows:
End of explanation
tan(pi/6)
atan( 1/sqrt(3) )
Explanation: Recall that $\tan(x) \equiv \frac{\sin(x)}{\cos(x)}$. The inverse function of $\tan(x)$ is $\tan^{-1}(x) \equiv \arctan(x) \equiv$ atan(x)
End of explanation
sin(x) == cos(x - pi/2)
simplify( sin(x)*cos(y)+cos(x)*sin(y) )
e = 2*sin(x)**2 + 2*cos(x)**2
trigsimp(e)
trigsimp(log(e))
trigsimp(log(e), deep=True)
simplify(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)
Explanation: The function acos returns angles in the range $[0, \pi]$, while asin and
atan return angles in the range $[-\frac{\pi}{2},\frac{\pi}{2}]$.
Here are some trigonometric identities that SymPy knows:
End of explanation
expand(sin(2*x)) # = (sin(2*x)).expand()
expand_trig(sin(2*x)) # = (sin(2*x)).expand(trig=True)
Explanation: The function trigsimp does essentially the same job as simplify.
If instead of simplifying you want to expand a trig expression, you
should use expand_trig, because the default expand won't touch trig
functions:
End of explanation
simplify( (exp(x)+exp(-x))/2 )
simplify( (exp(x)-exp(-x))/2 )
Explanation: Hyperbolic trigonometric functions
The hyperbolic sine and cosine in SymPy are denoted sinh and cosh
respectively and SymPy is smart enough to recognize them when
simplifying expressions:
End of explanation
simplify( cosh(x)**2 - sinh(x)**2 )
Explanation: Recall that $x = \cosh(\mu)$ and $y = \sinh(\mu)$ are defined as the $x$- and $y$-coordinates
of a point on the hyperbola with equation $x^2 - y^2 = 1$,
and therefore satisfy the identity $\cosh^2 x - \sinh^2 x = 1$:
End of explanation
I*I
solve( x**2 + 1 , x)
Explanation: Complex numbers
Ever since Newton, the word “number” has been used to refer to one
of the following types of math objects: the naturals $\mathbb{N}$, the integers
$\mathbb{Z}$, the rationals $\mathbb{Q}$, and the real numbers $\mathbb{R}$. Each set of numbers is
associated with a different class of equations. The natural numbers
$\mathbb{N}$ appear as solutions of the equation $m + n = x$, where $m$ and $n$ are
natural numbers (denoted $m, n \in \mathbb{N}$). The integers $\mathbb{Z}$ are the solutions
to equations of the form $x + m = n$, where $m, n \in \mathbb{N}$. The rational
numbers $\mathbb{Q}$ are necessary to solve for $x$ in $mx = n$, with $m, n \in \mathbb{Z}$.
The solutions to $x^2 = 2$ are irrational (so $\not\in \mathbb{Q}$) so we need an even
larger set that contains all possible numbers: the set of real numbers $\mathbb{R}$.
A pattern emerges where more complicated equations require the
invention of new types of numbers.
Consider the quadratic equation $x^2 = -1$. There are no real solutions
to this equation, but we can define an imaginary number $i = \sqrt{-1}$
(denoted I in SymPy) that satisfies this equation:
End of explanation
z = 4 + 3*I
z
re(z)
im(z)
Explanation: The solutions are $x = i$ and $x = -i$, and indeed we can verify that
$i^2 + 1 = 0$ and $(-i)^2 + 1 = 0$ since $i^2 = -1$.
The complex numbers $\mathbb{C}$ are defined as ${ a+bi \,|\, a,b \in \mathbb{R} }$. Complex numbers
contain a real part and an imaginary part:
End of explanation
Abs(z)
arg(z)
Explanation: The polar representation of a complex number is $z \equiv |z|\angle\theta \equiv |z|e^{i\theta}$.
For a complex number $z=a+bi$,
the quantity $|z|=\sqrt{a^2+b^2}$ is known as the absolute value of $z$,
and $\theta$ is its phase or its argument:
End of explanation
conjugate( z )
Explanation: The complex conjugate of $z = a + bi$ is the number $\bar{z} = a - bi$:
End of explanation
x = symbols('x', real=True)
exp(I*x).expand(complex=True)
re( exp(I*x) )
im( exp(I*x) )
Explanation: Complex conjugation is important for computing the absolute value
of $z$ $\left(|z|\equiv\sqrt{z\bar{z}}\right)$ and for division by $z$ $\left(\frac{1}{z}\equiv\frac{\bar{z}}{|z|^2}\right)$.
Euler's formula
Euler's formula shows an important relation between the exponential
function $e^x$ and the trigonometric functions $\sin(x)$ and $\cos(x)$:
$$e^{ix} = \cos x + i \sin x.$$
To obtain this result in SymPy, you must specify that the number $x$ is
real and also tell expand that you're interested in complex expansions:
End of explanation
(cos(x)).rewrite(exp)
Explanation: Basically, $\cos(x)$ is the real part of $e^{ix}$, and $\sin(x)$ is the imaginary
part of $e^{ix}$. Whaaat? I know it's weird, but weird things are bound
to happen when you input imaginary numbers to functions.
Euler's formula is often used to rewrite the functions sin and cos in
terms of complex exponentials. For example,
End of explanation
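# Likewise (an extra example), sin(x) can be rewritten in terms of complex exponentials:
(sin(x)).rewrite(exp)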
oo+1
5000 < oo
1/oo
Explanation: Compare this expression with the definition of hyperbolic cosine.
Calculus
Calculus is the study of the properties of functions. The operations of
calculus are used to describe the limit behaviour of functions, calculate
their rates of change, and calculate the areas under their graphs. In
this section we'll learn about the SymPy functions for calculating
limits, derivatives, integrals, and summations.
Infinity
The infinity symbol is denoted oo (two lowercase os) in SymPy. Infinity
is not a number but a process: the process of counting forever. Thus,
$\infty + 1 = \infty$, $\infty$ is greater than any finite number, and $1/\infty$ is an
infinitely small number. Sympy knows how to correctly treat infinity
in expressions:
End of explanation
limit( (1+1/n)**n, n, oo)
Explanation: Limits
We use limits to describe, with mathematical precision, infinitely large
quantities, infinitely small quantities, and procedures with infinitely
many steps.
The number $e$ is defined as the limit $e \equiv \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$:
End of explanation
limit( 1/x, x, 0, dir="+")
limit( 1/x, x, 0, dir="-")
limit( 1/x, x, oo)
Explanation: This limit expression describes the annual growth rate of a loan with
a nominal interest rate of 100% and infinitely frequent compounding.
Borrow \$1000 in such a scheme, and you'll owe \$2718.28 after one year.
Limits are also useful to describe the behaviour of functions. Consider
the function $f(x) = \frac{1}{x}$. The limit command shows us what happens
to $f(x)$ near $x = 0$ and as $x$ goes to infinity:
End of explanation
limit(sin(x)/x, x, 0)
limit(sin(x)**2/x, x, 0)
limit(exp(x)/x**100,x,oo) # which is bigger e^x or x^100 ?
# exp f >> all poly f for big x
Explanation: As $x$ becomes larger and larger, the fraction $\frac{1}{x}$ becomes smaller
and smaller. In the limit where $x$ goes to infinity, $\frac{1}{x}$ approaches
zero: $\lim_{x\to\infty}\frac{1}{x} = 0$. On the other hand, when $x$ takes on smaller
and smaller positive values, the expression $\frac{1}{x}$ becomes infinite:
$\lim_{x\to0^+}\frac{1}{x} = \infty$. When $x$ approaches 0 from the left, we have
$\lim_{x\to0^-}\frac{1}{x}=-\infty$. If these calculations are not clear to you, study
the graph of $f(x) = \frac{1}{x}$.
Here are some other examples of limits:
End of explanation
diff(x**3, x)
Explanation: Limits are used to define the derivative and the integral operations.
Derivatives
The derivative function, denoted $f'(x)$, $\frac{d}{dx}f(x)$, $\frac{df}{dx}$, or $\frac{dy}{dx}$,
describes the rate of change of the function $f(x)$.
The SymPy function diff computes the derivative of any expression:
End of explanation
diff( x**2*sin(x), x )
diff( sin(x**2), x )
diff( x**2/sin(x), x )
Explanation: The differentiation operation knows about the product rule $[f(x)g(x)]^\prime=f^\prime(x)g(x)+f(x)g^\prime(x)$,
the chain rule $f(g(x))' = f'(g(x))g'(x)$,
and the quotient rule $\left[\frac{f(x)}{g(x)}\right]^\prime = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}$:
End of explanation
diff(x**3, x, 2) # same as diff(diff(x**3, x), x)
Explanation: The second derivative of a function f is diff(f,x,2):
End of explanation
diff( exp(x), x ) # same as diff( E**x, x )
Explanation: The exponential function $f(x)=e^x$ is special because it is equal to its derivative:
End of explanation
x = symbols('x')
f = symbols('f', cls=Function) # can now use f(x)
dsolve( f(x) - diff(f(x),x), f(x) )
Explanation: A differential equation is an equation that relates some unknown function $f(x)$ to its derivative.
An example of a differential equation is $f'(x)=f(x)$.
What is the function $f(x)$ which is equal to its derivative?
You can either try to guess what $f(x)$ is or use the dsolve function:
End of explanation
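# Extra check (not in the original): the general solution returned by dsolve
# really does satisfy f'(x) = f(x).
sol = dsolve( f(x) - diff(f(x),x), f(x) )
simplify( diff(sol.rhs, x) - sol.rhs )   # -> 0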
f = S('1/2')*x**2
f
df = diff(f,x)
df
T_1 = f.subs({x:1}) + df.subs({x:1})*(x - 1)
T_1
Explanation: We'll discuss dsolve again in the section on mechanics.
Tangent lines
The tangent line to the function $f(x)$ at $x=x_0$ is
the line that passes through the point $(x_0, f(x_0))$ and has
the same slope as the function at that point.
The tangent line to the function $f(x)$ at the point $x=x_0$ is described by the equation
$$
T_1(x) = f(x_0) \ + \ f'(x_0)(x-x_0).
$$
What is the equation of the tangent line to $f(x)=\frac{1}{2}x^2$ at $x_0=1$?
End of explanation
T_1.subs({x:1}) == f.subs({x:1})
diff(T_1,x).subs({x:1}) == diff(f,x).subs({x:1})
Explanation: The tangent line $T_1(x)$ has the same value and slope as the function $f(x)$ at $x=1$:
End of explanation
x = Symbol('x')
f = x**3-2*x**2+x
diff(f, x)
sols = solve( diff(f,x), x)
sols
diff(diff(f,x), x).subs( {x:sols[0]} )
diff(diff(f,x), x).subs( {x:sols[1]} )
Explanation: Optimization
Optimization is about choosing an input for a function $f(x)$ that results in the best value for $f(x)$.
The best value usually means the maximum value
(if the function represents something desirable like profits)
or the minimum value
(if the function represents something undesirable like costs).
The derivative $f'(x)$ encodes the information about the slope of $f(x)$.
Positive slope $f'(x)>0$ means $f(x)$ is increasing,
negative slope $f'(x)<0$ means $f(x)$ is decreasing,
and zero slope $f'(x)=0$ means the graph of the function is horizontal.
The critical points of a function $f(x)$ are the solutions to the equation $f'(x)=0$.
Each critical point is a candidate to be either a maximum or a minimum of the function.
The second derivative $f^{\prime\prime}(x)$ encodes the information about the curvature of $f(x)$.
Positive curvature means the function looks like $x^2$,
negative curvature means the function looks like $-x^2$.
Let's find the critical points of the function $f(x)=x^3-2x^2+x$
and use the information from its second derivative
to find the maximum of the function
on the interval $x \in [0,1]$.
End of explanation
integrate(x**3, x)
integrate(sin(x), x)
integrate(ln(x), x)
Explanation: It will help to look at the graph of this function.
The point $x=\frac{1}{3}$ is a local maximum because it is a critical point of $f(x)$
where the curvature is negative, meaning $f(x)$ looks like the peak of a mountain at $x=\frac{1}{3}$.
The maximum value of $f(x)$ on the interval $x\in [0,1]$ is $f\!\left(\frac{1}{3}\right)=\frac{4}{27}$.
The point $x=1$ is a local minimum because it is a critical point
with positive curvature, meaning $f(x)$ looks like the bottom of a valley at $x=1$.
Integrals
The integral of $f(x)$ corresponds to the computation of the area under the graph of $f(x)$.
The area under $f(x)$ between the points $x=a$ and $x=b$ is denoted as follows:
$$
A(a,b) = \int_a^b f(x) \: dx.
$$
The integral function $F$ corresponds to the area calculation as a function
of the upper limit of integration:
$$
F(c) \equiv \int_0^c \! f(x)\:dx\,.
$$
The area under $f(x)$ between $x=a$ and $x=b$ is obtained by
calculating the change in the integral function:
$$
A(a,b) = \int_a^b \! f(x)\:dx = F(b)-F(a).
$$
In SymPy we use integrate(f, x) to obtain the integral function $F(x)$ of any function $f(x)$:
$F(x) = \int_0^x f(u)\,du$.
End of explanation
integrate(x**3, (x,0,1)) # the area under x^3 from x=0 to x=1
Explanation: This is known as an indefinite integral since the limits of integration are not defined.
In contrast,
a definite integral computes the area under $f(x)$ between $x=a$ and $x=b$.
Use integrate(f, (x,a,b)) to compute the definite integrals of the form $A(a,b)=\int_a^b f(x) \, dx$:
End of explanation
F = integrate(x**3, x)
F.subs({x:1}) - F.subs({x:0})
Explanation: We can obtain the same area by first calculating the indefinite integral $F(c)=\int_0^c !f(x)\,dx$,
then using $A(a,b) = F(x)\big\vert_a^b \equiv F(b) - F(a)$:
End of explanation
integrate(sin(x), (x,0,pi))
integrate(sin(x), (x,pi,2*pi))
integrate(sin(x), (x,0,2*pi))
Explanation: Integrals correspond to signed area calculations:
End of explanation
f = x**2
F = integrate(f, x)
F
diff(F,x)
Explanation: During the first half of its $2\pi$-cycle,
the graph of $\sin(x)$ is above the $x$-axis, so it has a positive contribution to the area under the curve.
During the second half of its cycle (from $x=\pi$ to $x=2\pi$),
$\sin(x)$ is below the $x$-axis, so it contributes negative area.
Draw a graph of $\sin(x)$ to see what is going on.
Fundamental theorem of calculus
The integral is the “inverse operation” of the derivative.
If you perform the integral operation followed by the derivative operation on some function,
you'll obtain the same function:
$$
\left(\frac{d}{dx} \circ \int dx \right) f(x) \equiv \frac{d}{dx} \int_c^x f(u)\:du = f(x).
$$
End of explanation
f = x**2
df = diff(f,x)
df
integrate(df, x)
Explanation: Alternately, if you compute the derivative of a function followed by the integral,
you will obtain the original function $f(x)$ (up to a constant):
$$
\left( \int dx \circ \frac{d}{dx}\right) f(x) \equiv \int_c^x f'(u)\;du = f(x) + C.
$$
End of explanation
a_n = 1/n
b_n = 1/factorial(n)
Explanation: The fundamental theorem of calculus is important because it tells us how to solve differential equations.
If we have to solve for $f(x)$ in the differential equation $\frac{d}{dx}f(x) = g(x)$,
we can take the integral on both sides of the equation to obtain the answer $f(x) = \int g(x)\,dx + C$.
Sequences
Sequences are functions that take whole numbers as inputs.
Instead of continuous inputs $x\in \mathbb{R}$,
sequences take natural numbers $n\in\mathbb{N}$ as inputs.
We denote sequences as $a_n$ instead of the usual function notation $a(n)$.
We define a sequence by specifying an expression for its $n^\mathrm{th}$ term:
End of explanation
a_n.subs({n:5})
Explanation: Substitute the desired value of $n$ to see the value of the $n^\mathrm{th}$ term:
End of explanation
[ a_n.subs({n:i}) for i in range(0,8) ]
[ b_n.subs({n:i}) for i in range(0,8) ]
Explanation: The Python list comprehension syntax [item for item in list]
can be used to print the sequence values for some range of indices:
End of explanation
limit(a_n, n, oo)
limit(b_n, n, oo)
Explanation: Observe that $a_n$ is not properly defined for $n=0$ since $\frac{1}{0}$ is a division-by-zero error.
To be precise, we should say $a_n$'s domain is the positive naturals $a_n:\mathbb{N}^+ \to \mathbb{R}$.
Observe how quickly the factorial function $n!=1\cdot2\cdot3\cdots(n-1)\cdot n$ grows:
$7!= 5040$, $10!=3628800$, $20! > 10^{18}$.
We're often interested in calculating the limits of sequences as $n\to \infty$.
What happens to the terms in the sequence when $n$ becomes large?
End of explanation
A_n = n*tan(2*pi/(2*n))
limit(A_n, n, oo)
Explanation: Both $a_n=\frac{1}{n}$ and $b_n = \frac{1}{n!}$ converge to $0$ as $n\to\infty$.
Many important math quantities are defined as limit expressions.
An interesting example to consider is the number $\pi$,
which is defined as the area of a circle of radius $1$.
We can approximate the area of the unit circle by drawing a many-sided regular polygon around the circle.
Splitting the $n$-sided regular polygon into identical triangular slices,
we can obtain a formula for its area $A_n$.
In the limit as $n\to \infty$,
the $n$-sided-polygon approximation to the area of the unit-circle becomes exact:
End of explanation
a_n = 1/n
summation(a_n, [n, 1, oo])
b_n = 1/factorial(n)
summation(b_n, [n, 0, oo])
Explanation: Series
Suppose we're given a sequence $a_n$ and we want to compute the sum of all the values in this sequence $\sum_{n}^\infty a_n$.
Series are sums of sequences.
Summing the values of a sequence $a_n:\mathbb{N}\to \mathbb{R}$
is analogous to taking the integral of a function $f:\mathbb{R}\to \mathbb{R}$.
To work with series in SymPy,
use the summation function whose syntax is analogous to the integrate function:
End of explanation
import math
def b_nf(n):
return 1.0/math.factorial(n)
sum( [b_nf(n) for n in range(0,10)] )
E.evalf() # true value
Explanation: We say the series $\sum a_n$ diverges to infinity (or is divergent) while the series $\sum b_n$ converges (or is convergent).
As we sum together more and more terms of the sequence $b_n$, the total becomes closer and closer to some finite number.
In this case, the infinite sum $\sum_{n=0}^\infty \frac{1}{n!}$ converges to the number $e=2.71828\ldots$.
The summation command is useful because it allows us to compute infinite sums,
but for most practical applications we don't need to take an infinite number of terms in a series to obtain a good approximation.
This is why series are so neat: they represent a great way to obtain approximations.
Using standard Python commands,
we can obtain an approximation to $e$ that is accurate to six decimals by summing 10 terms in the series:
End of explanation
exp_xn = x**n/factorial(n)
summation( exp_xn.subs({x:5}), [n, 0, oo] ).evalf()
exp(5).evalf() # the true value
Explanation: Taylor series
Wait, there's more!
Not only can we use series to approximate numbers,
we can also use them to approximate functions.
A power series is a series whose terms contain different powers of the variable $x$.
The $n^\mathrm{th}$ term in a power series is a function of both the sequence index $n$ and the input variable $x$.
For example, the power series of the function $\exp(x)=e^x$ is
$$
\exp(x) \equiv 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots
= \sum_{n=0}^\infty \frac{x^n}{n!}.
$$
This is, IMHO, one of the most important ideas in calculus:
you can compute the value of $\exp(5)$ by taking the infinite sum of the terms in the power series with $x=5$:
End of explanation
summation( exp_xn.subs({x:5}), [n, 0, oo])
Explanation: Note that SymPy is actually smart enough to recognize that the infinite series
you're computing corresponds to the closed-form expression $e^5$:
End of explanation
import math # redo using only python
def exp_xnf(x,n):
return x**n/math.factorial(n)
sum( [exp_xnf(5.0,i) for i in range(0,35)] )
Explanation: Taking as few as 35 terms in the series is sufficient to obtain an approximation to $e^5$
that is accurate to 16 decimals:
End of explanation
series( sin(x), x, 0, 8)
series( cos(x), x, 0, 8)
series( sinh(x), x, 0, 8)
series( cosh(x), x, 0, 8)
Explanation: The coefficients in the power series of a function (also known as the Taylor series) depend on the derivatives of the function.
The formula for the $n^\mathrm{th}$ term in the Taylor series of $f(x)$ expanded at $x=c$ is $a_n(x) = \frac{f^{(n)}(c)}{n!}(x-c)^n$,
where $f^{(n)}(c)$ is the value of the $n^\mathrm{th}$ derivative of $f(x)$ evaluated at $x=c$.
The term Maclaurin series refers to Taylor series expansions at $x=0$.
The SymPy function series is a convenient way to obtain the series of any function.
Calling series(expr,var,at,nmax)
will show you the series expansion of expr
near var=at
up to power nmax:
End of explanation
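# Extra tip: call .removeO() on a series if you want the plain polynomial
# approximation without the O(x**8) error term.
series( sin(x), x, 0, 8).removeO()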
series(ln(x), x, 1, 6) # Taylor series of ln(x) at x=1
Explanation: Some functions are not defined at $x=0$, so we expand them at a different value of $x$.
For example, the power series of $\ln(x)$ expanded at $x=1$ is
End of explanation
series(ln(x+1), x, 0, 6) # Maclaurin series of ln(x+1)
Explanation: Here, the result SymPy returns is misleading.
The Taylor series of $\ln(x)$ expanded at $x=1$ has terms of the form $(x-1)^n$:
$$
\ln(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \frac{(x-1)^5}{5} + \cdots.
$$
Verify this is the correct formula by substituting $x=1$.
SymPy returns an answer in terms of coordinates relative to $x=1$.
Instead of expanding $\ln(x)$ around $x=1$,
we can obtain an equivalent expression if we expand $\ln(x+1)$ around $x=0$:
End of explanation
u = Matrix([[4,5,6]]) # a row vector = 1x3 matrix
v = Matrix([[7],
[8], # a col vector = 3x1 matrix
[9]])
v.T # use the transpose operation to convert a col vec to a row vec
u[0] # 0-based indexing for entries
u.norm() # length of u
uhat = u/u.norm() # unit-length vec in same dir as u
uhat
uhat.norm()
Explanation: Vectors
A vector $\vec{v} \in \mathbb{R}^n$ is an $n$-tuple of real numbers.
For example, consider a vector that has three components:
$$
\vec{v} = (v_1,v_2,v_3) \ \in \ (\mathbb{R},\mathbb{R},\mathbb{R}) \equiv \mathbb{R}^3.
$$
To specify the vector $\vec{v}$,
we specify the values for its three components $v_1$, $v_2$, and $v_3$.
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns.
A vector is a special type of matrix; we can think of a vector $\vec{v}\in \mathbb{R}^n$
either as a row vector ($1\times n$ matrix) or a column vector ($n \times 1$ matrix).
Because of this equivalence between vectors and matrices,
there is no need for a special vector object in SymPy,
and Matrix objects are used for vectors as well.
This is how we define vectors
and compute their properties:
End of explanation
u = Matrix([ 4,5,6])
v = Matrix([-1,1,2])
u.dot(v)
Explanation: Dot product
The dot product of the 3-vectors $\vec{u}$ and $\vec{v}$ can be defined two ways:
$$
\vec{u}\cdot\vec{v}
\equiv
\underbrace{u_xv_x+u_yv_y+u_zv_z}_{\textrm{algebraic def.}}
\equiv
\underbrace{\|\vec{u}\|\|\vec{v}\|\cos(\varphi)}_{\textrm{geometric def.}}
\quad \in \mathbb{R},
$$
where $\varphi$ is the angle between the vectors $\vec{u}$ and $\vec{v}$.
In SymPy,
End of explanation
acos(u.dot(v)/(u.norm()*v.norm())).evalf() # in radians = 52.76 degrees
Explanation: We can combine the algebraic and geometric formulas for the dot product
to obtain the cosine of the angle between the vectors
$$
\cos(\varphi)
= \frac{ \vec{u}\cdot\vec{v} }{ \|\vec{u}\|\|\vec{v}\| }
= \frac{ u_xv_x+u_yv_y+u_zv_z }{ \|\vec{u}\|\|\vec{v}\| },
$$
and use the acos function to find the angle measure:
End of explanation
u.dot(v) == v.dot(u)
Explanation: Just by looking at the coordinates of the vectors $\vec{u}$ and $\vec{v}$,
it's difficult to determine their relative direction.
Thanks to the dot product, however,
we know the angle between the vectors is $52.76^\circ$,
which means they kind of point in the same direction.
Vectors that are at an angle $\varphi=90^\circ$ are called orthogonal, meaning at right angles with each other.
The dot product of vectors for which $\varphi > 90^\circ$ is negative because they point mostly in opposite directions.
The notion of the “angle between vectors” applies more generally to vectors with any number of dimensions.
The dot product for $n$-dimensional vectors is $\vec{u}\cdot\vec{v}=\sum_{i=1}^n u_iv_i$.
This means we can talk about “the angle between” 1000-dimensional vectors.
That's pretty crazy if you think about it—there is no way we could possibly “visualize” 1000-dimensional vectors,
yet given two such vectors we can tell if they point mostly in the same direction,
in perpendicular directions, or mostly in opposite directions.
The dot product is a commutative operation $\vec{u}\cdot\vec{v} = \vec{v}\cdot\vec{u}$:
End of explanation
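# Extra examples (not in the original): perpendicular vectors have zero dot product,
# and the same angle formula works in any number of dimensions (4D chosen arbitrarily).
Matrix([1,0,0]).dot( Matrix([0,1,0]) )              # -> 0, so the vectors are orthogonal
a4 = Matrix([1,2,3,4])
b4 = Matrix([4,3,2,1])
acos( a4.dot(b4)/(a4.norm()*b4.norm()) ).evalf()    # angle between two 4D vectors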
u = Matrix([4,5,6])
n = Matrix([1,1,1])
(u.dot(n) / n.norm()**2)*n # projection of v in the n dir
Explanation: Projections
Dot products are used for computing projections.
Assume you're given two vectors $\vec{u}$ and $\vec{n}$ and you want to find the component
of $\vec{u}$ that points in the $\vec{n}$ direction.
The following formula based on the dot product will give you the answer:
$$
\Pi_{\vec{n}}( \vec{u} ) \equiv \frac{ \vec{u} \cdot \vec{n} }{ \| \vec{n} \|^2 } \vec{n}.
$$
This is how to implement this formula in SymPy:
End of explanation
w = (u.dot(n) / n.norm()**2)*n
w
Explanation: In the case where the direction vector $\hat{n}$ is of unit length $\|\hat{n}\| = 1$,
the projection formula simplifies to $\Pi_{\hat{n}}( \vec{u} ) \equiv (\vec{u}\cdot\hat{n})\hat{n}$.
Consider now the plane $P$ defined by $(1,1,1)\cdot[(x,y,z)-(0,0,0)]=0$.
A plane is a two dimensional subspace of $\mathbb{R}^3$.
We can decompose any vector $\vec{u} \in \mathbb{R}^3$ into two parts $\vec{u}=\vec{v} + \vec{w}$
such that $\vec{v}$ lies inside the plane and $\vec{w}$ is perpendicular to the plane (parallel to $\vec{n}=(1,1,1)$).
To obtain the perpendicular-to-$P$ component of $\vec{u}$,
compute the projection of $\vec{u}$ in the direction $\vec{n}$:
End of explanation
v = u - (u.dot(n)/n.norm()**2)*n # same as u - w
v
Explanation: To obtain the in-the-plane-$P$ component of $\vec{u}$,
start with $\vec{u}$ and subtract the perpendicular-to-$P$ part:
End of explanation
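# Quick check (extra): the two components add back up to u,
# and v is orthogonal to n, i.e. v really lies in the plane P.
v + w       # -> Matrix([4, 5, 6]), the same as u
v.dot(n)    # -> 0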
u = Matrix([ 4,5,6])
v = Matrix([-1,1,2])
u.cross(v)
Explanation: You should check on your own that $\vec{v}+\vec{w}=\vec{u}$ as claimed.
Cross product
The cross product, denoted $\times$, takes two vectors as inputs and produces a vector as output.
The cross products of individual basis elements are defined as follows:
$$
\hat{\imath}\times\hat{\jmath} =\hat{k}, \qquad
\hat{\jmath}\times\hat{k} =\hat{\imath}, \qquad
\hat{k}\times \hat{\imath}= \hat{\jmath}.
$$
Here is how to compute the cross product of two vectors in SymPy:
End of explanation
(u.cross(v).norm()/(u.norm()*v.norm())).n()
Explanation: The vector $\vec{u}\times \vec{v}$ is orthogonal to both $\vec{u}$ and $\vec{v}$.
The norm of the cross product $\|\vec{u}\times \vec{v}\|$ is proportional to the lengths of the vectors
and the sine of the angle between them:
End of explanation
u1,u2,u3 = symbols('u1:4')
v1,v2,v3 = symbols('v1:4')
Matrix([u1,u2,u3]).cross(Matrix([v1,v2,v3]))
Explanation: The name “cross product” is well-suited for this operation since it is
calculated by “cross-multiplying” the coefficients of the vectors:
$$
\vec{u}\times\vec{v}=
\left(
u_yv_z-u_zv_y, \ u_zv_x-u_xv_z, \ u_xv_y-u_yv_x
\right).
$$
By defining individual symbols for the entries of two vectors,
we can make SymPy show us the cross-product formula:
End of explanation
u.cross(v)
v.cross(u)
Explanation: The cross product is anti-commutative, $\vec{u}\times\vec{v} = -\vec{v}\times\vec{u}$:
End of explanation
F_1 = Matrix( [4,0] )
F_2 = Matrix( [5*cos(30*pi/180), 5*sin(30*pi/180) ] )
F_net = F_1 + F_2
F_net # in Newtons
F_net.evalf() # in Newtons
Explanation: The product of two numbers and the dot product of two vectors are commutative operations.
The cross product, however, is not commutative: $\vec{u}\times\vec{v} \neq \vec{v}\times\vec{u}$.
Mechanics
The module called sympy.physics.mechanics
contains elaborate tools for describing mechanical systems,
manipulating reference frames, forces, and torques.
These specialized functions are not necessary for a first-year mechanics course.
The basic SymPy functions like solve,
and the vector operations you learned in the previous sections are powerful enough for basic Newtonian mechanics.
Dynamics
The net force acting on an object is the sum of all the external forces acting on it $\vec{F}_{\textrm{net}} = \sum \vec{F}$.
Since forces are vectors,
we need to use vector addition to compute the net force.
Compute
$\vec{F}_{\textrm{net}}=\vec{F}_1 + \vec{F}_2$,
where $\vec{F}_1=4\hat{\imath}[\mathrm{N}]$ and $\vec{F}_2 = 5\angle 30^\circ[\mathrm{N}]$:
End of explanation
F_net.norm().evalf() # |F_net| in [N]
(atan2( F_net[1],F_net[0] )*180/pi).n() # angle in degrees
Explanation: To express the answer in length-and-direction notation,
use norm to find the length of $\vec{F}_{\textrm{net}}$
and atan2 (The function atan2(y,x) computes the correct direction
for all vectors $(x,y)$, unlike atan(y/x) which requires corrections for angles in the range $[\frac{\pi}{2}, \frac{3\pi}{2}]$.) to find its direction:
End of explanation
t, a, v_i, x_i = symbols('t a v_i x_i')
v = v_i + integrate(a, (t, 0,t) )
v
x = x_i + integrate(v, (t, 0,t) )
x
Explanation: The net force on the object is $\vec{F}_{\textrm{net}}= 8.697\angle 16.7^\circ$[N].
Kinematics
Let $x(t)$ denote the position of an object,
$v(t)$ denote its velocity,
and $a(t)$ denote its acceleration.
Together $x(t)$, $v(t)$, and $a(t)$ are known as the equations of motion of the object.
The equations of motion are related by the derivative operation:
$$
a(t) \overset{\frac{d}{dt} }{\longleftarrow} v(t) \overset{\frac{d}{dt} }{\longleftarrow} x(t).
$$
Assume we know the initial position $x_i\equiv x(0)$ and the initial velocity $v_i\equiv v(0)$ of the object
and we want to find $x(t)$ for all later times.
We can do this starting from the dynamics of the problem—the forces acting on the object.
Newton's second law $\vec{F}_{\textrm{net}} = m\vec{a}$ states that a net force $\vec{F}_{\textrm{net}}$
applied on an object of mass $m$ produces acceleration $\vec{a}$.
Thus, we can obtain an object's acceleration if we know the net force acting on it.
Starting from the knowledge of $a(t)$, we can obtain $v(t)$ by integrating
then find $x(t)$ by integrating $v(t)$:
$$
a(t) \ \ \ \overset{v_i+ \int\!dt }{\longrightarrow} \ \ \ v(t) \ \ \ \overset{x_i+ \int\!dt }{\longrightarrow} \ \ \ x(t).
$$
The reasoning follows from the fundamental theorem of calculus:
if $a(t)$ represents the change in $v(t)$,
then the total of $a(t)$ accumulated between $t=t_1$ and $t=t_2$
is equal to the total change in $v(t)$ between these times: $\Delta v = v(t_2) - v(t_1)$.
Similarly, the integral of $v(t)$ from $t=0$ until $t=\tau$ is equal to $x(\tau) - x(0)$.
Uniform acceleration motion (UAM)
Let's analyze the case where the net force on the object is constant.
A constant force causes a constant acceleration $a = \frac{F}{m} = \textrm{constant}$.
The acceleration function is therefore constant over time: $a(t)=a$.
We find $v(t)$ and $x(t)$ as follows:
End of explanation
(v*v).expand()
((v*v).expand() - 2*a*x).simplify()
Explanation: You may remember these equations from your high school physics class.
They are the uniform accelerated motion (UAM) equations:
\begin{align}
a(t) &= a, \\
v(t) &= v_i + at, \\[-2mm]
x(t) &= x_i + v_it + \frac{1}{2}at^2.
\end{align}
In high school, you probably had to memorize these equations.
Now you know how to derive them yourself starting from first principles.
For the sake of completeness, we'll now derive the fourth UAM equation,
which relates the object's final velocity to the initial velocity,
the displacement, and the acceleration, without reference to time:
End of explanation
x_i = 20 # initial position
v_i = 10 # initial velocity
a = 5 # acceleration (constant during motion)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
x
x.subs({t:3}).n() # x(3) in [m]
diff(x,t).subs({t:3}).n() # v(3) in [m/s]
Explanation: The above calculation shows $v_f^2 - 2ax_f = -2ax_i + v_i^2$.
After moving the term $2ax_f$ to the other side of the equation, we obtain
\begin{align}
(v(t))^2 \ = \ v_f^2 = v_i^2 + 2a\Delta x \ = \ v_i^2 + 2a(x_f-x_i).
\end{align}
The fourth equation is important for practical purposes
because it allows us to solve physics problems in a time-less manner.
Example
Find the position function of an object at time $t=3[\mathrm{s}]$,
if it starts from $x_i=20[\mathrm{m}]$ with $v_i=10[\mathrm{m/s}]$ and undergoes
a constant acceleration of $a=5[\mathrm{m/s^2}]$.
What is the object's velocity at $t=3[\mathrm{s}]$?
End of explanation
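# A hedged sketch of the "time-less" approach, reusing x, x_i, v_i, a, and t from
# the cells above: solve v_f**2 = v_i**2 + 2*a*delta_x for the final speed.
from sympy import symbols, solve
v_f = symbols('v_f', positive=True)
delta_x = x.subs({t: 3}) - x_i                       # displacement after 3 s
solve(v_f**2 - (v_i**2 + 2*a*delta_x), v_f)          # [25.0...], matching v(3) above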
t, v_i, x_i, k = symbols('t v_i x_i k')
a = sqrt(k*t)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t, 0,t) )
x
Explanation: If you think about it,
physics knowledge combined with computer skills is like a superpower!
General equations of motion
The procedure
$a(t) \ \overset{v_i+ \int\!dt }{\longrightarrow} \ v(t) \ \overset{x_i+ \int\!dt }{\longrightarrow} \ x(t)$
can be used to obtain the position function $x(t)$ even when the acceleration is not constant.
Suppose the acceleration of an object is $a(t)=\sqrt{k t}$;
what is its $x(t)$?
End of explanation
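# Sanity check (a sketch reusing x, t, and a = sqrt(k*t) from above; assumes diff
# and simplify are available from the earlier sympy import): differentiating the
# position twice should give back the acceleration we started from.
simplify( diff(x, t, 2) - a )   # should simplify to 0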
x, y = symbols('x y')
m, g, k, h = symbols('m g k h')
F_g = -m*g # Force of gravity on mass m
U_g = - integrate( F_g, (y,0,h) )
U_g # Grav. potential energy
F_s = -k*x # Spring force for displacement x
U_s = - integrate( F_s, (x,0,x) )
U_s # Spring potential energy
Explanation: Potential energy
Instead of working with the kinematic equations of motion $x(t)$, $v(t)$, and $a(t)$ which depend on time,
we can solve physics problems using energy calculations.
A key connection between the world of forces and the world of energy is the concept of potential energy.
If you move an object against a conservative force (think raising a ball in the air against the force of gravity),
you can think of the work you do against the force as being stored in the potential energy of the object.
For each force $\vec{F}(x)$ there is a corresponding potential energy $U_F(x)$.
The change in potential energy associated with the force $\vec{F}(x)$ and displacement $\vec{d}$
is defined as the negative of the work done by the force during the displacement: $U_F(x) = - W = - \int_{\vec{d}} \vec{F}(x)\cdot d\vec{x}$.
The potential energies associated with gravity $\vec{F}_g = -mg\hat{\jmath}$
and the force of a spring $\vec{F}_s = -k\vec{x}$ are calculated as follows:
End of explanation
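# Conversely, minus the derivative of each potential energy should give back the
# force it came from; a quick sketch reusing U_g, U_s, h, and x from above.
print( -diff(U_g, h) )   # recovers F_g = -m*g
print( -diff(U_s, x) )   # recovers F_s = -k*x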
t = Symbol('t') # time t
x = Function('x') # position function x(t)
w = Symbol('w', positive=True) # angular frequency w
sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
sol
x = sol.rhs
x
Explanation: Note the negative sign in the formula defining the potential energy.
This negative is canceled by the negative sign of the dot product $\vec{F}\cdot d\vec{x}$:
when the force acts in the direction opposite to the displacement,
the work done by the force is negative.
Simple harmonic motion
The force exerted by a spring is given by the formula $F=-kx$.
If the only force acting on a mass $m$ is the force of a spring,
we can use Newton's second law to obtain the following equation:
$$
F=ma
\quad \Rightarrow \quad
-kx = ma
\quad \Rightarrow \quad
-kx(t) = m\frac{d^2}{dt^2}\Big[x(t)\Big].
$$
The motion of a mass-spring system is described by the differential equation $\frac{d^2}{dt^2}x(t) + \omega^2 x(t)=0$,
where the constant $\omega = \sqrt{\frac{k}{m}}$ is called the angular frequency.
We can find the position function $x(t)$ using the dsolve method:
End of explanation
A, phi = symbols("A phi")
(A*cos(w*t - phi)).expand(trig=True)
Explanation: Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t - \phi)$,
which is more commonly used to describe simple harmonic motion.
We can use the expand function with the argument trig=True to convince ourselves of this equivalence:
End of explanation
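# A hedged sanity check on the dsolve result: substituting the solution back into
# the differential equation should give zero (sol, t, and w come from the cells above).
x_check = sol.rhs
simplify( diff(x_check, t, 2) + w**2*x_check )   # expected to be 0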
x = sol.rhs.subs({"C1":0,"C2":A})
x
v = diff(x, t)
v
E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
E_T
E_T.subs({k:m*w**2}).simplify() # = K_max
E_T.subs({w:sqrt(k/m)}).simplify() # = U_max
Explanation: If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$,
we obtain the form $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ that SymPy found.
Conservation of energy
We can verify that the total energy of the mass-spring system is conserved by showing
$E_T(t) = U_s(t) + K(t) = \textrm{constant}$:
End of explanation
A = Matrix( [[ 2,-3,-8, 7],
[-2,-1, 2,-7],
[ 1, 0,-3, 6]] )
Explanation: Linear algebra
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns.
To specify a matrix $A$, we specify the values for its $mn$ components $a_{11}, a_{12}, \ldots, a_{mn}$
as a list of lists:
End of explanation
A[0,1] # row 0, col 1 of A
A[0:2,0:3] # top-left 2x3 submatrix of A
Explanation: Use the square brackets to access the matrix elements or to obtain a submatrix:
End of explanation
eye(2) # 2x2 identity matrix
zeros(2, 3)
Explanation: Some commonly used matrices can be created with shortcut methods:
End of explanation
A.transpose() # the same as A.T
Explanation: Standard algebraic operations like
addition +, subtraction -, multiplication *,
and exponentiation ** work as expected for Matrix objects.
The transpose operation flips the matrix through its diagonal:
End of explanation
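# A brief sketch with small ad-hoc matrices (M1 and M2 are illustrative, not from
# the text) showing the basic operations, including that the transpose of a
# product reverses the order of the factors.
M1 = Matrix([[1, 2], [3, 4]])
M2 = Matrix([[0, 1], [1, 0]])
M1 + M2, 2*M1, M1*M2, M1**2        # sum, scaling, matrix product, square
(M1*M2).T == M2.T * M1.T           # True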
M = eye(3)
M.row_op(1, lambda v,j: v+3*M[0,j] )
M
Explanation: Recall that the transpose is also used to convert row vectors into column vectors and vice versa.
Row operations
End of explanation
A = Matrix( [[2,-3,-8, 7],
[-2,-1,2,-7],
[1, 0,-3, 6]])
A.rref() # RREF of A, location of pivots
Explanation: The method row_op takes two arguments as inputs:
the first argument specifies the 0-based index of the row you want to act on,
while the second argument is a function of the form f(val,j)
that describes how you want the value val=M[i,j] to be transformed.
The above call to row_op implements the row operation $R_2 \gets R_2 + 3R_1$.
Reduced row echelon form
The Gauss—Jordan elimination procedure is a sequence of row operations you can perform
on any matrix to bring it to its reduced row echelon form (RREF).
In SymPy, matrices have a rref method that computes their RREF:
End of explanation
[ A.rref()[0][r,:] for r in A.rref()[1] ] # R(A)
Explanation: Note the rref method returns a tuple of values:
the first value is the RREF of $A$,
while the second tells you the indices of the leading ones (also known as pivots) in the RREF of $A$.
To get just the RREF of $A$, select the $0^\mathrm{th}$ entry from the tuple: A.rref()[0].
Matrix fundamental spaces
Consider the matrix $A \in \mathbb{R}^{m\times n}$.
The fundamental spaces of a matrix are its column space $\mathcal{C}(A)$,
its null space $\mathcal{N}(A)$,
and its row space $\mathcal{R}(A)$.
These vector spaces are important when you consider the matrix product
$A\vec{x}=\vec{y}$ as “applying” the linear transformation $T_A:\mathbb{R}^n \to \mathbb{R}^m$
to an input vector $\vec{x} \in \mathbb{R}^n$ to produce the output vector $\vec{y} \in \mathbb{R}^m$.
Linear transformations $T_A:\mathbb{R}^n \to \mathbb{R}^m$ (vector functions)
are equivalent to $m\times n$ matrices.
This is one of the fundamental ideas in linear algebra.
You can think of $T_A$ as the abstract description of the transformation
and $A \in \mathbb{R}^{m\times n}$ as a concrete implementation of $T_A$.
By this equivalence,
the fundamental spaces of a matrix $A$
tell us facts about the domain and image of the linear transformation $T_A$.
The column space $\mathcal{C}(A)$ is the same as the image space $\textrm{Im}(T_A)$ (the set of all possible outputs).
The null space $\mathcal{N}(A)$ is the same as the kernel $\textrm{Ker}(T_A)$ (the set of inputs that $T_A$ maps to the zero vector).
The row space $\mathcal{R}(A)$ is the orthogonal complement of the null space.
Input vectors in the row space of $A$ are in one-to-one correspondence with the output vectors in the column space of $A$.
Okay, enough theory! Let's see how to compute the fundamental spaces of the matrix $A$ defined above.
The non-zero rows in the reduced row echelon form of $A$ are a basis for its row space:
End of explanation
[ A[:,c] for c in A.rref()[1] ] # C(A)
Explanation: The column space of $A$ is the span of the columns of $A$ that contain the pivots
in the reduced row echelon form of $A$:
End of explanation
A.nullspace() # N(A)
Explanation: Note we took columns from the original matrix $A$ and not its RREF.
To find the null space of $A$, call its nullspace method:
End of explanation
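# A hedged sketch making the "orthogonal complement" claim concrete: every
# row-space basis vector of A is perpendicular to every null-space vector.
rowsp = [ A.rref()[0][r,:] for r in A.rref()[1] ]   # basis of R(A), as above
nullsp = A.nullspace()                              # basis of N(A)
[ rv.dot(nv) for rv in rowsp for nv in nullsp ]     # every dot product is 0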
M = Matrix( [[1, 2, 3],
[2,-2, 4],
[2, 2, 5]] )
M.det()
Explanation: Determinants
The determinant of a matrix,
denoted $\det(A)$ or $|A|$,
is a particular way to multiply the entries of the matrix to produce a single number.
End of explanation
A = Matrix( [[1,2],
[3,9]] )
A.inv()
A.inv()*A
A*A.inv()
Explanation: Determinants are used for all kinds of tasks:
to compute areas and volumes,
to solve systems of equations,
and to check whether a matrix is invertible or not.
Matrix inverse
For every invertible matrix $A$,
there exists an inverse matrix $A^{-1}$ which undoes the effect of $A$.
The cumulative effect of the product of $A$ and $A^{-1}$ (in any order)
is the identity matrix: $AA^{-1}= A^{-1}A=\mathbb{1}$.
End of explanation
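# A typical use of the inverse is solving a linear system A*x = b; a short sketch
# with an ad-hoc right-hand side b (not from the text).
b = Matrix([7, 21])
A.det()              # 3, non-zero, so A is invertible
x_sol = A.inv()*b    # the unique solution of A*x = b
A*x_sol              # gives back b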
A = Matrix( [[ 9, -2],
[-2, 6]] )
A.eigenvals() # same as solve(det(A-eye(2)*x), x)
# return eigenvalues with their multiplicity
A.eigenvects()
Explanation: The matrix inverse $A^{-1}$ plays the role of division by $A$.
Eigenvectors and eigenvalues
When a matrix is multiplied by one of its eigenvectors the output
is the same eigenvector multiplied by a constant: $A\vec{e}_\lambda = \lambda\vec{e}_\lambda$.
The constant $\lambda$ (the Greek letter lambda) is called an eigenvalue of $A$.
To find the eigenvalues of a matrix, start from the definition $A\vec{e}_\lambda = \lambda\vec{e}_\lambda$,
insert the identity $\mathbb{1}$,
and rewrite it as a null-space problem:
$$
A\vec{e}_\lambda = \lambda\mathbb{1}\vec{e}_\lambda
\qquad
\Rightarrow
\qquad
\left(A - \lambda\mathbb{1}\right)\vec{e}_\lambda = \vec{0}.
$$
This equation will have a solution whenever $|A - \lambda\mathbb{1}|=0$. (The invertible matrix theorem states
that a matrix has a non-empty null space if and only if its determinant is zero.)
The eigenvalues of $A \in \mathbb{R}^{n \times n}$,
denoted $\{ \lambda_1, \lambda_2, \ldots, \lambda_n \}$,
are the roots of the characteristic polynomial $p(\lambda)=|A - \lambda \mathbb{1}|$.
End of explanation
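# A hedged sketch that builds the characteristic polynomial explicitly and solves
# it; lam is a fresh symbol introduced here for illustration.
lam = symbols('lam')
p = (A - lam*eye(2)).det()   # characteristic polynomial of A
solve(p, lam)                # [5, 10], matching A.eigenvals()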
Q, L = A.diagonalize()
Q # the matrix of eigenvectors as columns
Q.inv()
L # the matrix of eigenvalues
Q*L*Q.inv() # eigendecomposition of A
Q.inv()*A*Q # obtain L from A and Q
Explanation: Certain matrices can be written entirely in terms of their eigenvectors and their eigenvalues.
Consider the matrix $\Lambda$ (capital Greek L) that has the eigenvalues of the matrix $A$ on the diagonal,
and the matrix $Q$ constructed from the eigenvectors of $A$ as columns:
$$
\Lambda =
\begin{bmatrix}
\lambda_1 & \cdots & 0 \\
\vdots & \ddots & 0 \\
0 & 0 & \lambda_n
\end{bmatrix},
\ \
Q =
\begin{bmatrix}
| & & | \\
\vec{e}_{\lambda_1} & \cdots & \vec{e}_{\lambda_n} \\
| & & |
\end{bmatrix},
\ \
\textrm{then}
\ \
A = Q \Lambda Q^{-1}.
$$
Matrices that can be written this way are called diagonalizable.
To diagonalize a matrix $A$ is to find its $Q$ and $\Lambda$ matrices:
End of explanation
A.is_diagonalizable()
B = Matrix( [[1, 3],
[0, 1]] )
B.is_diagonalizable()
B.eigenvals() # eigenvalue 1 with multiplicity 2
B.eigenvects()
Explanation: Not all matrices are diagonalizable.
You can check if a matrix is diagonalizable by calling its is_diagonalizable method:
End of explanation |
3,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
As long as this is not well mastered, it is ...
The War of the Functions
1) Other specific functions to correct
2) Exercises on simplifying the code of functions
3) A few exercises
The goal of this course is to improve your ability to understand how algorithms work, by practising the tasks of DEBUGGING and SIMPLIFYING functions WITHOUT changing their meaning.
The functions presented here do not modify their INPUTS, so they are essentially determined by their OUTPUT value (after the return).
Once again, the local variables of the functions do not matter as long as they do not change the return value.
Examples are provided for each function, but they are not sufficient to guarantee that the function has not been modified at all.
The goal of the exercise is to evaluate you according to
Step1: Now we can use
%matplotlib inline
import matplotlib.pyplot as plt
to obtain the graph of the cube function made symmetric with respect to the y axis
- what changes are needed in the function cube_positif(x)?
- how many return statements are needed?
Step2: 2) Correcting functions in bulk (the following functions must be corrected)
2.1) Testing integers and primes
Step3: 2.2) Factorial
Step4: 3) Simplification exercises
3.1) Exercise 2
Step5: The following cell should display 2 => True, 3 => False
Step6: 3.2) Exercise 2
Step7: The following cell should display 4 => True, 6 => False
Step8: 3.3 Exercise 3
Step9: The following cell should display [6,15,24] and [6,8,10]
Step10: 3.4) Exercise 4
Step11: The following cell should display cas 1 => True, cas 2 => False
Step12: 3.5) Exercise 5 - fizzbuzz
We define a function fizzbuzz, which takes an argument n and, for every number from 1 to n
Step13: The following cell should display
Step14: 3.6) Exercise 7 - computing binomial coefficients
We want to compute the binomial coefficient of n and k using the math.factorial function. The formula is the following
Step15: The cell below should display C(4,2) => 6, C(6,3) => 20
Step16: 3.7) Exercise 8 - the maximum_pair function
Simplify the following function, which returns the largest even number of a list of positive numbers, or 0 if the list contains no even number.
It can reasonably be written in 5 lines (or even in one line with a list comprehension, but that is not part of the syllabus)
Step17: The following cell should display cas 1 => 12, cas 2 => 0
Step18: 3.8) Exercise 9 - the magic square function
We write a function that determines whether an input list corresponds to a magic square of size 3, i.e. a list of lists of the same length, whose sums over the rows and over the columns are equal and whose entries are all different.
Courage
Step19: The following cell should display True, False, False | Python Code:
def cube_positif( x ):
if abs( x*x*x >= 0.0):
return x*x*x
print("Erreur")
return
cube_positif(-4)
Explanation: As long as this is not well mastered, it is ...
The War of the Functions
1) Other specific functions to correct
2) Exercises on simplifying the code of functions
3) A few exercises
The goal of this course is to improve your ability to understand how algorithms work, by practising the tasks of DEBUGGING and SIMPLIFYING functions WITHOUT changing their meaning.
The functions presented here do not modify their INPUTS, so they are essentially determined by their OUTPUT value (after the return).
Once again, the local variables of the functions do not matter as long as they do not change the return value.
Examples are provided for each function, but they are not sufficient to guarantee that the function has not been modified at all.
The goal of the exercise is to evaluate you according to:
whether you have not altered the return value of the function
your ability to write simple and clear code
In the present case, "simplification" can be approximated by a reduction of the number of lines.
You can also improve a function by reducing the number of iterations it needs, if it uses loops.
In all cases, please add comments, or even docstrings.
Warning for the following exercises!!!
DO NOT MODIFY THE TWO COMMENT LINES AT THE TOP OF THE FUNCTIONS.
ANY MODIFICATION, even of a single SPACE character, will result in a penalty
They also serve to document the function
An indication (more or less precise) is given of the number of lines that can be reached.
Unless stated otherwise, you may use all the Python libraries seen in class (math, matplotlib).
You may also use the built-in functions seen in class.
1) Define the cube function made symmetric with respect to y
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
#We start without a function,
XX=[]
YY=[]
X=range(10,-10,-1)
for x in X:
XX.append(x)
YY.append(x*x*x)
plt.plot(XX,YY)
def cube_positif( x ):
if x*x*x >= 0.0:
return x*x*x
elif x*x*x <= 0.0:
return -1*x*x*x
#This function can be simplified
#We will use it as follows:
XX=[]
YY=[]
X=range(10,-10,-1)
for x in X:
XX.append(x)
YY.append(cube_positif( x ))
plt.plot(XX,YY)
plt.show()
Explanation: Now we can use
%matplotlib inline
import matplotlib.pyplot as plt
to obtain the graph of the cube function made symmetric with respect to the y axis
- what changes are needed in the function cube_positif(x)?
- how many return statements are needed?
End of explanation
def est_entier(x):
return x == int(x)
def est_premier(n):
if n < 2:
return False
for it in range(2,int(n**0.5)+1):
if n % it == 0:
return False
return True
n=float(input())
print("est entier ? : {}".format(est_entier(n)))
print("est premier ? : {}".format(est_premier(n)))
#What happens if we do not force n to be a number?
#What happens with a decimal number (float)?
#Does this number necessarily have to be an int, or can it be something else?
Explanation: 2) Correcting functions in bulk (the following functions must be corrected)
2.1) Testing integers and primes
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def factorielle( int( n ) ):
res = 1
for it in range(1,n+1):
res = res*it
return res
#Hint:
#This part of the code contains two errors
#A problem with how the variable is passed
#A problem related to block membership (indentation)
## DM - exercice nยฐ1
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def factorielle( int( n ) ):
res = 1
for it in range(1,n+1):
res = res*it
return res
print( factorielle( 10 ) )
#should give 3628800
Explanation: 2.2) Factorial: Exercise 1
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def est_pair(n):
if n % 2 == 0:
return True
else:
return False
## DM - exercice nยฐ2
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def est_pair(n):
if n % 2 == 0:
return True
else:
return False
Explanation: 3) Simplification exercises
3.1) Exercise 2: simplifying the est_pair() function
Simplify the following function, which returns True if a number is even, False otherwise.
Hint: a very, very short version is possible.
End of explanation
print( "2 => ", est_pair(2) )
print( "3 => ", est_pair(3) )
Explanation: The following cell should display 2 => True, 3 => False
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def possede_un_seul_diviseur(n):
nb_diviseur = 0
for it in range(2,n):
if n % it == 0:
nb_diviseur = nb_diviseur + 1
if nb_diviseur == 1:
return True
return False
## DM - exercice nยฐ3
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def possede_un_seul_diviseur(n):
nb_diviseur = 0
for it in range(2,n):
if n % it == 0:
nb_diviseur = nb_diviseur + 1
if nb_diviseur == 1:
return True
return False
Explanation: 3.2) Exercise 2: a function testing for a single divisor
Simplify the following function, which returns True if a number has exactly one divisor other than 1 and itself, False otherwise.
Here the code cannot be shortened much without changing the algorithm (for an equivalent result). By changing the algorithm, it can be written in one line (using the right functions)
End of explanation
print( "4 => ", possede_un_seul_diviseur(4) )
print( "6 => ", possede_un_seul_diviseur(6) )
Explanation: The following cell should display 4 => True, 6 => False
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def somme_partielle(liste):
if len(liste) != 9:
return None
a1 = 0
for it in range(0, len(liste[0:3])):
a1 = a1 + liste[it]
a2 = 0
for it in range(3, len(liste[3:6])):
a2 = a2 + liste[it]
a3 = 0
for it in range(6, len(liste[6:9])):
a3 = a3 + liste[it]
res = []
res.append(a1)
res.append(a2)
res.append(a3)
return res
## DM - exercice nยฐ4
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def somme_partielle(liste):
if len(liste) != 9:
return None
a1 = 0
for it in range(0, len(liste[0:3])):
a1 = a1 + liste[it]
a2 = 0
for it in range(3, 3 + len(liste[3:6])):
a2 = a2 + liste[it]
a3 = 0
for it in range(6, 6 + len(liste[6:9])):
a3 = a3 + liste[it]
res = []
res.append(a1)
res.append(a2)
res.append(a3)
return res
Explanation: 3.3 Exercise 3: the partial-sum function
We define a function somme_partielle, which takes a list of 9 elements as argument and returns a list of three elements: the sums of the first 3 elements, of the next 3, and of the last 3.
For example:
~~~ Python
a = somme_partielle( [1,2,3,4,5,6,7,8,9] )
print( a ) ## Displays [6, 15, 24]
b = somme_partielle( [0,1,5,0,1,7,0,1,9] )
print( b ) ## Displays [6,8,10]
~~~
This function can be written in 6 lines.
End of explanation
a = somme_partielle( [1,2,3,4,5,6,7,8,9] )
print( a ) ## Displays [6, 15, 24]
b = somme_partielle( [0,1,5,0,1,7,0,1,9] )
print( b ) ## Displays [6,8,10]
Explanation: The following cell should display [6,15,24] and [6,8,10]
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def verifier_si_tous_les_elements_sont_egaux(liste):
nb_egaux = 0
for it in liste:
for it2 in liste:
if it != it2:
return False
else:
nb_egaux += 1
if nb_egaux == len(liste)**2:
return True
else:
return False
## DM - exercice nยฐ5
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def verifier_si_tous_les_elements_sont_egaux(liste):
nb_egaux = 0
for it in liste:
for it2 in liste:
if it != it2:
return False
else:
nb_egaux += 1
if nb_egaux == len(liste)**2:
return True
else:
return False
Explanation: 3.4) Exercise 4: check that all the elements of a list are equal
We want to write a function that checks that all the elements of a list are equal to each other.
It can be shortened by roughly half.
End of explanation
print("Cas 1 =>", verifier_si_tous_les_elements_sont_egaux([1,1,1,1,1,1,1,1]))
print("Cas 2 =>", verifier_si_tous_les_elements_sont_egaux([1,1,1,1,1,1,1,2]))
Explanation: The following cell should display cas 1 => True, cas 2 => False
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def fizzbuzz(n):
it = 1
while it <= n:
if (it % 3 == 0) and (it % 5 == 0):
print("fizzbuzz")
if (it % 3 == 0) and (it % 5 != 0):
print("fizz")
if (it % 3 != 0) and (it % 5 == 0):
print("buzz")
if (it % 3 != 0) and (it % 5 != 0):
print(it)
it = it + 1
## DM - exercice nยฐ6
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def fizzbuzz(n):
it = 1
while it <= n:
if (it % 3 == 0) and (it % 5 == 0):
print("fizzbuzz")
if (it % 3 == 0) and (it % 5 != 0):
print("fizz")
if (it % 3 != 0) and (it % 5 == 0):
print("buzz")
if (it % 3 != 0) and (it % 5 != 0):
print(it)
it = it + 1
Explanation: 3.5) Exercise 5 - fizzbuzz
We define a function fizzbuzz, which takes an argument n and, for every number from 1 to n:
prints fizz if the number is divisible by 3
prints buzz if the number is divisible by 5
prints fizzbuzz if it is divisible by both 3 and 5
prints the number if none of the cases above applies
There is a lot to simplify in this function, although that will not necessarily translate into fewer lines. A few lines can still be saved.
End of explanation
fizzbuzz(20)
Explanation: The following cell should display:
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16
17
fizz
19
buzz
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
import math
def combinatoire(n,k):
a = math.factorial(n)
a = a // math.factorial(k)
a = a // math.factorial(n-k)
return a
## DM - exercice nยฐ7
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
import math
def combinatoire(n,k):
a = math.factorial(n)
a = a // math.factorial(k)
a = a // math.factorial(n-k)
return a
Explanation: 3.6) Exercise 7 - computing binomial coefficients
We want to compute the binomial coefficient of n and k using the math.factorial function. The formula is the following:
$$\displaystyle \frac{ n! }{ k!(n-k)! }$$
A very short version of this function can be obtained.
End of explanation
print( "C(4,2) =>", combinatoire(4,2) )
print( "C(6,3) =>", combinatoire(6,3) )
Explanation: The cell below should display C(4,2) => 6, C(6,3) => 20
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def maximum_pair(liste):
for it in liste:
if it % 2 == 0:
est_le_plus_grand = True
for it2 in liste:
if (it2 % 2 == 0) and it2 > it:
est_le_plus_grand = False
break
if est_le_plus_grand == True:
return it
return 0
## DM - exercice nยฐ8
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def maximum_pair(liste):
for it in liste:
if it % 2 == 0:
est_le_plus_grand = True
for it2 in liste:
if (it2 % 2 == 0) and it2 > it:
est_le_plus_grand = False
break
if est_le_plus_grand == True:
return it
return 0
Explanation: 3.7) Exercise 8 - the maximum_pair function
Simplify the following function, which returns the largest even number of a list of positive numbers, or 0 if the list contains no even number.
It can reasonably be written in 5 lines (or even in one line with a list comprehension, but that is not part of the syllabus)
End of explanation
print( "cas 1 =>", maximum_pair([1,2,4,13,12,10,8]))
print( "cas 2 =>", maximum_pair([1,21,41,13,125,109,87]))
Explanation: The following cell should display cas 1 => 12, cas 2 => 0
End of explanation
## Rรฉfรฉrence - ร priori ne la modifiez pas, s'il vous plaรฎt
def carre_magique(carre_magique):
if len(carre_magique) != 3:
return False
for it_ligne in carre_magique:
if len(it_ligne) != 3:
return False
for it_ligne_1 in carre_magique:
for it_ligne_2 in carre_magique:
if id(it_ligne_1) != id(it_ligne_2):
for it_elt1 in it_ligne_1:
for it_elt2 in it_ligne_2:
if it_elt1 == it_elt2:
return False
somme_ligne_1 = 0
somme_ligne_2 = 0
somme_ligne_3 = 0
ligne_1 = carre_magique[0]
ligne_2 = carre_magique[1]
ligne_3 = carre_magique[2]
for it in ligne_1:
somme_ligne_1 = somme_ligne_1 + it
for it in ligne_2:
somme_ligne_2 = somme_ligne_2 + it
for it in ligne_3:
somme_ligne_3 = somme_ligne_3 + it
if (somme_ligne_1 != somme_ligne_2) or (somme_ligne_1 != somme_ligne_3) or (somme_ligne_2 != somme_ligne_3):
return False
somme_colonne_1 = 0
somme_colonne_2 = 0
somme_colonne_3 = 0
for it in range(3):
somme_colonne_1 = somme_colonne_1 + carre_magique[it][0]
for it in range(3):
somme_colonne_2 = somme_colonne_2 + carre_magique[it][1]
for it in range(3):
somme_colonne_3 = somme_colonne_3 + carre_magique[it][2]
if (somme_colonne_1 != somme_colonne_2) or (somme_colonne_1 != somme_colonne_3) or (somme_colonne_2 != somme_colonne_3):
return False
return True
## DM - exercice nยฐ9
## Modifiez le code ci-dessous, et ne modifiez pas ces commentaires
def carre_magique(carre_magique):
if len(carre_magique) != 3:
return False
for it_ligne in carre_magique:
if len(it_ligne) != 3:
return False
for it_ligne_1 in carre_magique:
for it_ligne_2 in carre_magique:
if id(it_ligne_1) != id(it_ligne_2):
for it_elt1 in it_ligne_1:
for it_elt2 in it_ligne_2:
if it_elt1 == it_elt2:
return False
somme_ligne_1 = 0
somme_ligne_2 = 0
somme_ligne_3 = 0
ligne_1 = carre_magique[0]
ligne_2 = carre_magique[1]
ligne_3 = carre_magique[2]
for it in ligne_1:
somme_ligne_1 = somme_ligne_1 + it
for it in ligne_2:
somme_ligne_2 = somme_ligne_2 + it
for it in ligne_3:
somme_ligne_3 = somme_ligne_3 + it
if (somme_ligne_1 != somme_ligne_2) or (somme_ligne_1 != somme_ligne_3) or (somme_ligne_2 != somme_ligne_3):
return False
somme_colonne_1 = 0
somme_colonne_2 = 0
somme_colonne_3 = 0
for it in range(3):
somme_colonne_1 = somme_colonne_1 + carre_magique[it][0]
for it in range(3):
somme_colonne_2 = somme_colonne_2 + carre_magique[it][1]
for it in range(3):
somme_colonne_3 = somme_colonne_3 + carre_magique[it][2]
if (somme_colonne_1 != somme_colonne_2) or (somme_colonne_1 != somme_colonne_3) or (somme_colonne_2 != somme_colonne_3):
return False
return True
Explanation: 3.8) Exercise 9 - the magic square function
We write a function that determines whether an input list corresponds to a magic square of size 3, i.e. a list of lists of the same length, whose sums over the rows and over the columns are equal and whose entries are all different.
Courage: there are magic squares all over Paris
Magic Square at Porte d'Auteuil, and sometimes they are much more complicated
End of explanation
print( carre_magique([[2,9,4],
[7,5,3],
[6,1,8]]))
print( carre_magique([[2,9,4],
[7,5,10],
[6,1,8]]))
print( carre_magique([[1,1,1],
[1,1,1],
[1,1,1]]))
Explanation: The following cell should display True, False, False
End of explanation |
3,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Search OOI ERDDAP for Pioneer Glider Data
Use ERDDAP's RESTful advanced search to try to find OOI Pioneer glider water temperatures from the OOI ERDDAP. Use case from Stace Beaulieu ([email protected])
Step1: First try just searching for "glider"
Step2: Now search for all temperature data in specified bounding box and temporal extent
Step3: Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. "ctdpf_ckl_wfp_instrument_ctdpf_ckl_seawater_temperature") are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets.
Since we didn't actually find any glider data, we just request the last temperature value from each dataset, using the ERDDAP orderByMax("time") constraint. This way we can see when the data ends, and if the mooring locations look correct | Python Code:
import pandas as pd
Explanation: Search OOI ERDDAP for Pioneer Glider Data
Use ERDDAP's RESTful advanced search to try to find OOI Pioneer glider water temperatures from the OOI ERDDAP. Use case from Stace Beaulieu ([email protected])
End of explanation
url = 'http://ooi-data.marine.rutgers.edu/erddap/search/advanced.csv?page=1&itemsPerPage=1000&searchFor=glider'
dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution', 'Dataset ID'])
dft.head()
Explanation: First try just searching for "glider"
End of explanation
start = '2000-01-01T00:00:00Z'
stop = '2017-02-22T00:00:00Z'
lat_min = 39.
lat_max = 41.5
lon_min = -72.
lon_max = -69.
standard_name = 'sea_water_temperature'
endpoint = 'http://ooi-data.marine.rutgers.edu/erddap/search/advanced.csv'
import pandas as pd
base = (
'{}'
'?page=1'
'&itemsPerPage=1000'
'&searchFor='
'&protocol=(ANY)'
'&cdm_data_type=(ANY)'
'&institution=(ANY)'
'&ioos_category=(ANY)'
'&keywords=(ANY)'
'&long_name=(ANY)'
'&standard_name={}'
'&variableName=(ANY)'
'&maxLat={}'
'&minLon={}'
'&maxLon={}'
'&minLat={}'
'&minTime={}'
'&maxTime={}').format
url = base(
endpoint,
standard_name,
lat_max,
lon_min,
lon_max,
lat_min,
start,
stop
)
print(url)
dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution','Dataset ID'])
print('Datasets Found = {}'.format(len(dft)))
print(url)
dft
Explanation: Now search for all temperature data in specified bounding box and temporal extent
End of explanation
def download_df(glider_id):
from pandas import DataFrame, read_csv
# from urllib.error import HTTPError
uri = ('http://ooi-data.marine.rutgers.edu/erddap/tabledap/{}.csv'
'?trajectory,'
'time,latitude,longitude,'
'ctdpf_ckl_wfp_instrument_ctdpf_ckl_seawater_temperature'
'&orderByMax("time")'
'&time>={}'
'&time<={}'
'&latitude>={}'
'&latitude<={}'
'&longitude>={}'
'&longitude<={}').format
url = uri(glider_id,start,stop,lat_min,lat_max,lon_min,lon_max)
print(url)
# Not sure if returning an empty df is the best idea.
try:
df = read_csv(url, index_col='time', parse_dates=True, skiprows=[1])
except:
df = pd.DataFrame()
return df
df = pd.concat(list(map(download_df, dft['Dataset ID'].values)))
print('Total Data Values Found: {}'.format(len(df)))
df
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.feature import NaturalEarthFeature
bathym_1000 = NaturalEarthFeature(name='bathymetry_J_1000',
scale='10m', category='physical')
fig, ax = plt.subplots(
figsize=(9, 9),
subplot_kw=dict(projection=ccrs.PlateCarree())
)
ax.coastlines(resolution='10m')
ax.add_feature(bathym_1000, facecolor=[0.9, 0.9, 0.9], edgecolor='none')
dx = dy = 0.5
ax.set_extent([lon_min-dx, lon_max+dx, lat_min-dy, lat_max+dy])
g = df.groupby('trajectory')
for glider in g.groups:
traj = df[df['trajectory'] == glider]
ax.plot(traj['longitude'], traj['latitude'], 'o', label=glider)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
ax.legend();
Explanation: Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. "ctdpf_ckl_wfp_instrument_ctdpf_ckl_seawater_temperature") are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets.
Since we didn't actually find any glider data, we just request the last temperature value from each dataset, using the ERDDAP orderByMax("time") constraint. This way we can see when the data ends, and if the mooring locations look correct
End of explanation |
3,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background on projectors and projections
This tutorial provides background information on projectors and Signal Space
Projection (SSP), and covers loading and saving projectors, adding and removing
projectors from Raw objects, the difference between "applied" and "unapplied"
projectors, and at what stages MNE-Python applies projectors automatically.
Step1: What is a projection?
^^^^^^^^^^^^^^^^^^^^^
In the most basic terms, a projection is an operation that converts one set
of points into another set of points, where repeating the projection
operation on the resulting points has no effect. To give a simple geometric
example, imagine the point $(3, 2, 5)$ in 3-dimensional space. A
projection of that point onto the $x, y$ plane looks a lot like a
shadow cast by that point if the sun were directly above it
Step2: <div class="alert alert-info"><h4>Note</h4><p>The ``@`` symbol indicates matrix multiplication on NumPy arrays, and was
introduced in Python 3.5 / NumPy 1.10. The notation ``plot(*point)`` uses
Python `argument expansion`_ to "unpack" the elements of ``point`` into
separate positional arguments to the function. In other words,
``plot(*point)`` expands to ``plot(3, 2, 5)``.</p></div>
Notice that we used matrix multiplication to compute the projection of our
point $(3, 2, 5)$ onto the $x, y$ plane
Step3: Knowing that, we can compute a plane that is orthogonal to the effect of the
trigger (using the fact that a plane through the origin has equation
$Ax + By + Cz = 0$ given a normal vector $(A, B, C)$), and
project our real measurements onto that plane.
Step4: Computing the projection matrix from the trigger_effect vector is done
using singular value decomposition <svd_>_ (SVD); interested readers may
consult the internet or a linear algebra textbook for details on this method.
With the projection matrix in place, we can project our original vector
$(3, 2, 5)$ to remove the effect of the trigger, and then plot it
Step5: Just as before, the projection matrix will map any point in $x, y, z$
space onto that plane, and once a point has been projected onto that plane,
applying the projection again will have no effect. For that reason, it should
be clear that although the projected points vary in all three $x$,
$y$, and $z$ directions, the set of projected points have only
two effective dimensions (i.e., they are constrained to a plane).
.. sidebar
Step6: In MNE-Python, the environmental noise vectors are computed using principal
component analysis <pca_>, usually abbreviated "PCA", which is why the SSP
projectors usually have names like "PCA-v1". (Incidentally, since the process
of performing PCA uses singular value decomposition <svd_> under the hood,
it is also common to see phrases like "projectors were computed using SVD" in
published papers.) The projectors are stored in the projs field of
raw.info
Step7: raw.info['projs'] is an ordinary Python
Step8: The
Step9: Computing projectors
~~~~~~~~~~~~~~~~~~~~
Step10: Additional ways of visualizing projectors are covered in the tutorial
tut-artifact-ssp.
Loading and saving projectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SSP can be used for other types of signal cleaning besides just reduction of
environmental noise. You probably noticed two large deflections in the
magnetometer signals in the previous plot that were not removed by the
empty-room projectors โ those are artifacts of the subject's heartbeat. SSP
can be used to remove those artifacts as well. The sample data includes
projectors for heartbeat noise reduction that were saved in a separate file
from the raw data, which can be loaded with the
Step11: There is a corresponding
Step12: To remove projectors, there is a corresponding method | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
from scipy.linalg import svd
import mne
def setup_3d_axes():
ax = plt.axes(projection='3d')
ax.view_init(azim=-105, elev=20)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_xlim(-1, 5)
ax.set_ylim(-1, 5)
ax.set_zlim(0, 5)
return ax
Explanation: Background on projectors and projections
This tutorial provides background information on projectors and Signal Space
Projection (SSP), and covers loading and saving projectors, adding and removing
projectors from Raw objects, the difference between "applied" and "unapplied"
projectors, and at what stages MNE-Python applies projectors automatically.
We'll start by importing the Python modules we need; we'll also define a short
function to make it easier to make several plots that look similar:
End of explanation
ax = setup_3d_axes()
# plot the vector (3, 2, 5)
origin = np.zeros((3, 1))
point = np.array([[3, 2, 5]]).T
vector = np.hstack([origin, point])
ax.plot(*vector, color='k')
ax.plot(*point, color='k', marker='o')
# project the vector onto the x,y plane and plot it
xy_projection_matrix = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
projected_point = xy_projection_matrix @ point
projected_vector = xy_projection_matrix @ vector
ax.plot(*projected_vector, color='C0')
ax.plot(*projected_point, color='C0', marker='o')
# add dashed arrow showing projection
arrow_coords = np.concatenate([point, projected_point - point]).flatten()
ax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1, color='C1',
linewidth=1, linestyle='dashed')
Explanation: What is a projection?
^^^^^^^^^^^^^^^^^^^^^
In the most basic terms, a projection is an operation that converts one set
of points into another set of points, where repeating the projection
operation on the resulting points has no effect. To give a simple geometric
example, imagine the point $(3, 2, 5)$ in 3-dimensional space. A
projection of that point onto the $x, y$ plane looks a lot like a
shadow cast by that point if the sun were directly above it:
End of explanation
trigger_effect = np.array([[3, -1, 1]]).T
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The ``@`` symbol indicates matrix multiplication on NumPy arrays, and was
introduced in Python 3.5 / NumPy 1.10. The notation ``plot(*point)`` uses
Python `argument expansion`_ to "unpack" the elements of ``point`` into
separate positional arguments to the function. In other words,
``plot(*point)`` expands to ``plot(3, 2, 5)``.</p></div>
Notice that we used matrix multiplication to compute the projection of our
point $(3, 2, 5)$ onto the $x, y$ plane:
\begin{align}\left[
\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix}
\right]
\left[ \begin{matrix} 3 \\ 2 \\ 5 \end{matrix} \right] =
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right]\end{align}
...and that applying the projection again to the result just gives back the
result again:
\begin{align}\left[
\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix}
\right]
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right] =
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right]\end{align}
From an information perspective, this projection has taken the point
$x, y, z$ and removed the information about how far in the $z$
direction our point was located; all we know now is its position in the
$x, y$ plane. Moreover, applying our projection matrix to any point
in $x, y, z$ space will reduce it to a corresponding point on the
$x, y$ plane. The term for this is a subspace: the projection matrix
projects points in the original space into a subspace of lower dimension
than the original. The reason our subspace is the $x,y$ plane (instead
of, say, the $y,z$ plane) is a direct result of the particular values
in our projection matrix.
Example: projection as noise reduction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another way to describe this "loss of information" or "projection into a
subspace" is to say that projection reduces the rank (or "degrees of
freedom") of the measurement โ here, from 3 dimensions down to 2. On the
other hand, if you know that measurement component in the $z$ direction
is just noise due to your measurement method, and all you care about are the
$x$ and $y$ components, then projecting your 3-dimensional
measurement into the $x, y$ plane could be seen as a form of noise
reduction.
Of course, it would be very lucky indeed if all the measurement noise were
concentrated in the $z$ direction; you could just discard the $z$
component without bothering to construct a projection matrix or do the matrix
multiplication. Suppose instead that in order to take that measurement you
had to pull a trigger on a measurement device, and the act of pulling the
trigger causes the device to move a little. If you measure how
trigger-pulling affects measurement device position, you could then "correct"
your real measurements to "project out" the effect of the trigger pulling.
Here we'll suppose that the average effect of the trigger is to move the
measurement device by $(3, -1, 1)$:
End of explanation
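# A small numerical sketch (reusing xy_projection_matrix from above) of the two
# properties discussed here: projecting twice changes nothing, and the projected
# points live in a 2-dimensional subspace.
P = xy_projection_matrix
print(np.allclose(P @ P, P))        # True: the projection is idempotent
print(np.linalg.matrix_rank(P))     # 2: rank of the x,y-plane projector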
# compute the plane orthogonal to trigger_effect
x, y = np.meshgrid(np.linspace(-1, 5, 61), np.linspace(-1, 5, 61))
A, B, C = trigger_effect
z = (-A * x - B * y) / C
# cut off the plane below z=0 (just to make the plot nicer)
mask = np.where(z >= 0)
x = x[mask]
y = y[mask]
z = z[mask]
Explanation: Knowing that, we can compute a plane that is orthogonal to the effect of the
trigger (using the fact that a plane through the origin has equation
$Ax + By + Cz = 0$ given a normal vector $(A, B, C)$), and
project our real measurements onto that plane.
End of explanation
# compute the projection matrix
U, S, V = svd(trigger_effect, full_matrices=False)
trigger_projection_matrix = np.eye(3) - U @ U.T
# project the vector onto the orthogonal plane
projected_point = trigger_projection_matrix @ point
projected_vector = trigger_projection_matrix @ vector
# plot the trigger effect and its orthogonal plane
ax = setup_3d_axes()
ax.plot_trisurf(x, y, z, color='C2', shade=False, alpha=0.25)
ax.quiver3D(*np.concatenate([origin, trigger_effect]).flatten(),
arrow_length_ratio=0.1, color='C2', alpha=0.5)
# plot the original vector
ax.plot(*vector, color='k')
ax.plot(*point, color='k', marker='o')
offset = np.full((3, 1), 0.1)
ax.text(*(point + offset).flat, '({}, {}, {})'.format(*point.flat), color='k')
# plot the projected vector
ax.plot(*projected_vector, color='C0')
ax.plot(*projected_point, color='C0', marker='o')
offset = np.full((3, 1), -0.2)
ax.text(*(projected_point + offset).flat,
'({}, {}, {})'.format(*np.round(projected_point.flat, 2)),
color='C0', horizontalalignment='right')
# add dashed arrow showing projection
arrow_coords = np.concatenate([point, projected_point - point]).flatten()
ax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1,
color='C1', linewidth=1, linestyle='dashed')
Explanation: Computing the projection matrix from the trigger_effect vector is done
using singular value decomposition <svd_>_ (SVD); interested readers may
consult the internet or a linear algebra textbook for details on this method.
With the projection matrix in place, we can project our original vector
$(3, 2, 5)$ to remove the effect of the trigger, and then plot it:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Just as before, the projection matrix will map any point in $x, y, z$
space onto that plane, and once a point has been projected onto that plane,
applying the projection again will have no effect. For that reason, it should
be clear that although the projected points vary in all three $x$,
$y$, and $z$ directions, the set of projected points have only
two effective dimensions (i.e., they are constrained to a plane).
.. sidebar:: Terminology
In MNE-Python, the matrix used to project a raw signal into a subspace is
usually called a :term:`projector <projector>` or a *projection
operator* โ these terms are interchangeable with the term *projection
matrix* used above.
Projections of EEG or MEG signals work in very much the same way: the point
$x, y, z$ corresponds to the value of each sensor at a single time
point, and the projection matrix varies depending on what aspects of the
signal (i.e., what kind of noise) you are trying to project out. The only
real difference is that instead of a single 3-dimensional point $(x, y,
z)$ you're dealing with a time series of $N$-dimensional "points" (one
at each sampling time), where $N$ is usually in the tens or hundreds
(depending on how many sensors your EEG/MEG system has). Fortunately, because
projection is a matrix operation, it can be done very quickly even on signals
with hundreds of dimensions and tens of thousands of time points.
Signal-space projection (SSP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We mentioned above that the projection matrix will vary depending on what
kind of noise you are trying to project away. Signal-space projection (SSP)
[1]_ is a way of estimating what that projection matrix should be, by
comparing measurements with and without the signal of interest. For example,
you can take additional "empty room" measurements that record activity at the
sensors when no subject is present. By looking at the spatial pattern of
activity across MEG sensors in an empty room measurement, you can create one
or more $N$-dimensional vector(s) giving the "direction(s)" of
environmental noise in sensor space (analogous to the vector for "effect of
the trigger" in our example above). SSP is also often used for removing
heartbeat and eye movement artifacts โ in those cases, instead of empty room
recordings the direction of the noise is estimated by detecting the
artifacts, extracting epochs around them, and averaging. See
tut-artifact-ssp for examples.
Once you know the noise vectors, you can create a hyperplane that is
orthogonal
to them, and construct a projection matrix to project your experimental
recordings onto that hyperplane. In that way, the component of your
measurements associated with environmental noise can be removed. Again, it
should be clear that the projection reduces the dimensionality of your data โ
you'll still have the same number of sensor signals, but they won't all be
linearly independent โ but typically there are tens or hundreds of sensors
and the noise subspace that you are eliminating has only 3-5 dimensions, so
the loss of degrees of freedom is usually not problematic.
Projectors in MNE-Python
^^^^^^^^^^^^^^^^^^^^^^^^
In our example data, SSP <ssp-tutorial> has already been performed
using empty room recordings, but the :term:projectors <projector> are
stored alongside the raw data and have not been applied yet (or,
synonymously, the projectors are not active yet). Here we'll load
the sample data <sample-dataset> and crop it to 60 seconds; you can
see the projectors in the output of :func:~mne.io.read_raw_fif below:
End of explanation
print(raw.info['projs'])
Explanation: In MNE-Python, the environmental noise vectors are computed using principal
component analysis <pca_>, usually abbreviated "PCA", which is why the SSP
projectors usually have names like "PCA-v1". (Incidentally, since the process
of performing PCA uses singular value decomposition <svd_> under the hood,
it is also common to see phrases like "projectors were computed using SVD" in
published papers.) The projectors are stored in the projs field of
raw.info:
End of explanation
first_projector = raw.info['projs'][0]
print(first_projector)
print(first_projector.keys())
Explanation: raw.info['projs'] is an ordinary Python :class:list of
:class:~mne.Projection objects, so you can access individual projectors by
indexing into it. The :class:~mne.Projection object itself is similar to a
Python :class:dict, so you can use its .keys() method to see what
fields it contains (normally you don't need to access its properties
directly, but you can if necessary):
End of explanation
print(raw.proj)
print(first_projector['active'])
Explanation: The :class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked
objects all have a boolean :attr:~mne.io.Raw.proj attribute that indicates
whether there are any unapplied / inactive projectors stored in the object.
In other words, the :attr:~mne.io.Raw.proj attribute is True if at
least one :term:projector is present and all of them are active. In
addition, each individual projector also has a boolean active field:
End of explanation
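# To make projectors permanent rather than merely stored, MNE provides an
# apply_proj method; a hedged sketch on a copy so the original raw is untouched.
raw_applied = raw.copy().apply_proj()   # applies all projectors to the data
print(raw_applied.proj)                 # True: the projectors are now active/applied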
# In MNE-Python, SSP vectors can be computed using general purpose functions
# :func:`mne.compute_proj_raw`, :func:`mne.compute_proj_epochs`, and
# :func:`mne.compute_proj_evoked`. The general assumption these functions make
# is that the data passed contains raw data, epochs or averages of the artifact
# you want to repair via projection. In practice this typically involves
# continuous raw data of empty room recordings or averaged ECG or EOG
# artifacts. A second set of high-level convenience functions is provided to
# compute projection vectors for typical use cases. This includes
# :func:`mne.preprocessing.compute_proj_ecg` and
# :func:`mne.preprocessing.compute_proj_eog` for computing the ECG and EOG
# related artifact components, respectively; see :ref:`tut-artifact-ssp` for
# examples of these uses. For computing the EEG reference signal as a
# projector, the function :func:`mne.set_eeg_reference` can be used; see
# :ref:`tut-set-eeg-ref` for more information.
#
# .. warning:: It is best to compute projectors only on channels that will be
# used (e.g., excluding bad channels). This ensures that
# projection vectors will remain ortho-normalized and that they
# properly capture the activity of interest.
#
#
# Visualizing the effect of projectors
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# You can see the effect the projectors are having on the measured signal by
# comparing plots with and without the projectors applied. By default,
# ``raw.plot()`` will apply the projectors in the background before plotting
# (without modifying the :class:`~mne.io.Raw` object); you can control this
# with the boolean ``proj`` parameter as shown below, or you can turn them on
# and off interactively with the projectors interface, accessed via the
# :kbd:`Proj` button in the lower right corner of the plot window. Here we'll
# look at just the magnetometers, and a 2-second sample from the beginning of
# the file.
mags = raw.copy().crop(tmax=2).pick_types(meg='mag')
for proj in (False, True):
fig = mags.plot(butterfly=True, proj=proj)
fig.subplots_adjust(top=0.9)
fig.suptitle('proj={}'.format(proj), size='xx-large', weight='bold')
Explanation: Computing projectors
~~~~~~~~~~~~~~~~~~~~
End of explanation
ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_ecg-proj.fif')
ecg_projs = mne.read_proj(ecg_proj_file)
print(ecg_projs)
Explanation: Additional ways of visualizing projectors are covered in the tutorial
tut-artifact-ssp.
Loading and saving projectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SSP can be used for other types of signal cleaning besides just reduction of
environmental noise. You probably noticed two large deflections in the
magnetometer signals in the previous plot that were not removed by the
empty-room projectors โ those are artifacts of the subject's heartbeat. SSP
can be used to remove those artifacts as well. The sample data includes
projectors for heartbeat noise reduction that were saved in a separate file
from the raw data, which can be loaded with the :func:mne.read_proj
function:
End of explanation
raw.add_proj(ecg_projs)
Explanation: There is a corresponding :func:mne.write_proj function that can be used to
save projectors to disk in .fif format:
.. code-block:: python3
mne.write_proj('heartbeat-proj.fif', ecg_projs)
<div class="alert alert-info"><h4>Note</h4><p>By convention, MNE-Python expects projectors to be saved with a filename
ending in ``-proj.fif`` (or ``-proj.fif.gz``), and will issue a warning
if you forgo this recommendation.</p></div>
Adding and removing projectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Above, when we printed the ecg_projs list that we loaded from a file, it
showed two projectors for gradiometers (the first two, marked "planar"), two
for magnetometers (the middle two, marked "axial"), and two for EEG sensors
(the last two, marked "eeg"). We can add them to the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.add_proj method:
End of explanation
mags_ecg = raw.copy().crop(tmax=2).pick_types(meg='mag')
for data, title in zip([mags, mags_ecg], ['Without', 'With']):
fig = data.plot(butterfly=True, proj=True)
fig.subplots_adjust(top=0.9)
fig.suptitle('{} ECG projector'.format(title), size='xx-large',
weight='bold')
Explanation: To remove projectors, there is a corresponding method
:meth:~mne.io.Raw.del_proj that will remove projectors based on their index
within the raw.info['projs'] list. For the special case of replacing the
existing projectors with new ones, use
raw.add_proj(ecg_projs, remove_existing=True).
To see how the ECG projectors affect the measured signal, we can once again
plot the data with and without the projectors applied (though remember that
the :meth:~mne.io.Raw.plot method only temporarily applies the projectors
for visualization, and does not permanently change the underlying data).
We'll compare the mags variable we created above, which had only the
empty room SSP projectors, to the data with both empty room and ECG
projectors:
End of explanation |
3,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing quality metrics for binary classification
Programming Assignment
In this assignment we will work out how different quality metrics differ from one another. We stick to the binary classification problem (with labels 0 and 1), but treat it as the problem of predicting the probability that an object belongs to class 1. In other words, we will work with a real-valued rather than a binary target variable.
The assignment is laid out as a demonstration with Programming Assignment elements. You need to run the code that is already written and study the plots it produces, and also implement several functions of your own. For grading, write the results of these functions on the specified input data to separate files; this can be done with the write_answer_N functions provided in the tasks (N is the problem number). Upload those files to the grading system.
To draw the plots you need to import the corresponding modules.
The seaborn library makes the plots look nicer. If you do not want to use it, comment out the third line.
Moreover, the matplotlib and seaborn modules are not required for the Programming Assignment itself (you may skip the plotting cells and just look at the pre-rendered figures).
Step1: What algorithms predict
To compute quality metrics in supervised learning you only need to know two vectors
Step2: The ideal situation
Step3: The probability intervals of the two classes are perfectly separated by the threshold T = 0.5.
Most often the intervals overlap - then the threshold has to be tuned carefully.
The most wrong-headed algorithm does everything the other way around
Step4: An algorithm can be cautious and try not to push the probabilities far away from 0.5, or it can take risks and make predictions close to zero or one.
Step5: The intervals can also be shifted. If an algorithm is afraid of false positive errors, it will more often make predictions close to zero.
Likewise, to avoid false negative errors it makes sense to predict large probabilities more often.
Step6: We have described several kinds of probability vectors. Next we will look at how the metrics score different prediction vectors, so be sure to run the cells that create the vectors used in the visualizations.
Metrics that score binary prediction vectors
There are two typical situations in which machine learning practitioners start studying the properties of quality metrics
Step7: All three metrics easily tell the simple cases of good and bad algorithms apart. Note that the metrics take values in [0, 1], which makes them easy to interpret.
The metrics do not care about the magnitudes of the probabilities, only about how many objects end up on the wrong side of the chosen boundary (here T = 0.5).
The accuracy metric gives equal weight to false positive and false negative errors, whereas the precision/recall pair identifies this difference unambiguously. That is, in fact, what these two are used for: to control the FP and FN errors.
We measured the metrics with the threshold fixed at T = 0.5, because it looks optimal for almost all of the pictures. Let us look, on the last (and most interesting for these metrics) group of vectors, at how precision and recall change as the threshold grows.
Step8: As the threshold grows we make fewer FP errors and more FN errors, so one of the curves rises while the other falls. From such a plot you can pick an optimal threshold at which precision and recall are both acceptable. If no such threshold exists, a different algorithm has to be trained.
Note that what counts as acceptable precision and recall is defined by the application domain. For example, in the problem of deciding whether a patient has a certain disease (0 - healthy, 1 - sick), false negative errors are to be avoided, so a recall of about 0.9 is required. You can tell a person they are sick and catch the mistake during further diagnostics; missing the disease is much worse.
<font color="green" size=5>Programming assignment
Step9: F1-score
The obvious drawback of the precision-recall pair is that there are two of them
Step10: In the last two cases, where one of the paired metrics equals 1, the F1 metric is noticeably smaller than in the first, balanced case.
<font color="green" size=5>Programming assignment
Step11: Metrics that score the vectors of class-1 probabilities
The metrics considered so far are easy to interpret, but when using them we ignore most of the information returned by the algorithm. In some problems the probabilities are needed as such, for example when we predict whether a team will win a football match and the probability determines the size of the bet on that team. Even if in the end we binarize the prediction anyway, it is worth keeping an eye on the character of the probability vector.
Log_loss
Log_loss computes the likelihood of the labels in actual under the probabilities in predicted, taken with the opposite sign
Step12: Like the previous metrics, log_loss clearly separates the perfect, typical and awful cases. Note, however, that its magnitude is rather hard to interpret
Step13: Note the difference in weighted_log_loss between the Avoids FP and Avoids FN cases.
ROC and AUC
To build a ROC curve (receiver operating characteristic), the binarization threshold of the probability vector is varied, and quantities that depend on the numbers of FP and FN errors are computed. These quantities are defined so that, whenever a threshold exists that separates the classes perfectly, the ROC curve passes through a particular point - the upper-left corner of the square [0, 1] x [0, 1]. In addition, the curve always passes through the lower-left and upper-right corners. The result is a clear visualization of the algorithm's quality. To characterize this visualization with a single number, the notion of AUC - the area under the ROC curve - was introduced.
There is a simple and efficient algorithm that computes the ROC curve and the AUC in a single pass over the sample, but we will not go into the details here.
Let us build the ROC curves for our problems
Step14: The more objects there are in the sample, the smoother the curve looks (although in reality it is still a step function).
As expected, the curves of all the perfect algorithms pass through the upper-left corner. The first panel also shows a typical ROC curve (in practice the curves usually do not reach the "ideal" corner).
The AUC of the risky algorithm is noticeably smaller than that of the careful one, even though the careful and risky perfect algorithms are indistinguishable by ROC or AUC. So there is no point in trying to widen the gap between the probability intervals of the classes.
The curve becomes skewed when an algorithm is prone to FP or FN errors. The AUC value alone, however, cannot reveal this (the curves can be symmetric about the (0, 1)-(1, 0) diagonal).
Once the curve has been built, it is convenient to pick the binarization threshold at which a compromise between FP and FN is reached. A threshold corresponds to a point on the curve. If we want to avoid FP errors, we should pick a point on the left side of the square (as high as possible); to avoid FN, a point on the top side of the square (as far to the left as possible). All intermediate points correspond to different proportions of FP and FN.
<font color="green" size=5>Programming assignment | Python Code:
import numpy as np
from matplotlib import pyplot as plt
import seaborn
%matplotlib inline
Explanation: Comparing quality metrics for binary classification
Programming Assignment
In this assignment we will work out how different quality metrics differ from one another. We stick to the binary classification problem (with labels 0 and 1), but treat it as the problem of predicting the probability that an object belongs to class 1. In other words, we will work with a real-valued rather than a binary target variable.
The assignment is laid out as a demonstration with Programming Assignment elements. You need to run the code that is already written and study the plots it produces, and also implement several functions of your own. For grading, write the results of these functions on the specified input data to separate files; this can be done with the write_answer_N functions provided in the tasks (N is the problem number). Upload those files to the grading system.
To draw the plots you need to import the corresponding modules.
The seaborn library makes the plots look nicer. If you do not want to use it, comment out the third line.
Moreover, the matplotlib and seaborn modules are not required for the Programming Assignment itself (you may skip the plotting cells and just look at the pre-rendered figures).
End of explanation
# draws a single scatter plot
# a scatter plot is a type of chart
# that shows the data as a collection of points
def scatter(actual, predicted, T):
plt.scatter(actual, predicted)
plt.xlabel("Labels")
plt.ylabel("Predicted probabilities")
plt.plot([-0.2, 1.2], [T, T])
    plt.axis([-0.1, 1.1, -0.1, 1.1])  # axis limits [xmin, xmax, ymin, ymax]
# draws several scatter plots in a grid of the given shape
def many_scatters(actuals, predicteds, Ts, titles, shape):
plt.figure(figsize=(shape[1]*5, shape[0]*5))
i = 1
for actual, predicted, T, title in zip(actuals, predicteds, Ts, titles):
ax = plt.subplot(shape[0], shape[1], i)
ax.set_title(title)
i += 1
scatter(actual, predicted, T)
Explanation: What algorithms predict
To compute quality metrics in supervised learning you only need to know two vectors: the vector of correct answers and the vector of predicted values; we will call them actual and predicted. The actual vector is known from the training sample, while the predicted vector is returned by the prediction algorithm. Today we will not use any actual classification algorithms - we will simply look at different prediction vectors.
In our formulation actual consists of zeros and ones, while predicted consists of values from the interval [0, 1] (the probabilities of class 1). Such vectors are conveniently shown on a scatter plot.
To make the final (already binary) prediction, a threshold T has to be set: every object whose prediction is above the threshold is assigned to class 1, the rest to class 0.
End of explanation
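As a small aside (a sketch, not part of the graded assignment), the binarization step described above is a one-liner in NumPy:
import numpy as np

toy_probabilities = np.array([0.1, 0.4, 0.55, 0.9])   # made-up predicted probabilities
T = 0.5                                               # binarization threshold
hard_labels = (toy_probabilities > T).astype(int)     # gives array([0, 0, 1, 1])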
actual_0 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0 = np.array([ 0.19015288, 0.23872404, 0.42707312, 0.15308362, 0.2951875 ,
0.23475641, 0.17882447, 0.36320878, 0.33505476, 0.202608 ,
0.82044786, 0.69750253, 0.60272784, 0.9032949 , 0.86949819,
0.97368264, 0.97289232, 0.75356512, 0.65189193, 0.95237033,
0.91529693, 0.8458463 ])
plt.figure(figsize=(5, 5))
scatter(actual_0, predicted_0, 0.5)
Explanation: The ideal situation: there is a threshold T that correctly separates the probabilities belonging to the two classes. An example of such a situation:
End of explanation
actual_1 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1.])
predicted_1 = np.array([ 0.41310733, 0.43739138, 0.22346525, 0.46746017, 0.58251177,
0.38989541, 0.43634826, 0.32329726, 0.01114812, 0.41623557,
0.54875741, 0.48526472, 0.21747683, 0.05069586, 0.16438548,
0.68721238, 0.72062154, 0.90268312, 0.46486043, 0.99656541,
0.59919345, 0.53818659, 0.8037637 , 0.272277 , 0.87428626,
0.79721372, 0.62506539, 0.63010277, 0.35276217, 0.56775664])
actual_2 = np.array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
predicted_2 = np.array([ 0.07058193, 0.57877375, 0.42453249, 0.56562439, 0.13372737,
0.18696826, 0.09037209, 0.12609756, 0.14047683, 0.06210359,
0.36812596, 0.22277266, 0.79974381, 0.94843878, 0.4742684 ,
0.80825366, 0.83569563, 0.45621915, 0.79364286, 0.82181152,
0.44531285, 0.65245348, 0.69884206, 0.69455127])
many_scatters([actual_0, actual_1, actual_2], [predicted_0, predicted_1, predicted_2],
[0.5, 0.5, 0.5], ["Perfect", "Typical", "Awful algorithm"], (1, 3))
Explanation: The probability intervals of the two classes are perfectly separated by the threshold T = 0.5.
Most often the intervals overlap - then the threshold has to be tuned carefully.
The most wrong-headed algorithm does everything the other way around: it pushes the probabilities of class 0 above the probabilities of class 1. If that happens, it is worth checking whether the labels 0 and 1 got swapped when the target vector was built from the raw data.
Examples:
End of explanation
# risky perfect algorithm
actual_0r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0r = np.array([ 0.23563765, 0.16685597, 0.13718058, 0.35905335, 0.18498365,
0.20730027, 0.14833803, 0.18841647, 0.01205882, 0.0101424 ,
0.10170538, 0.94552901, 0.72007506, 0.75186747, 0.85893269,
0.90517219, 0.97667347, 0.86346504, 0.72267683, 0.9130444 ,
0.8319242 , 0.9578879 , 0.89448939, 0.76379055])
# risky good algorithm
actual_1r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_1r = np.array([ 0.13832748, 0.0814398 , 0.16136633, 0.11766141, 0.31784942,
0.14886991, 0.22664977, 0.07735617, 0.07071879, 0.92146468,
0.87579938, 0.97561838, 0.75638872, 0.89900957, 0.93760969,
0.92708013, 0.82003675, 0.85833438, 0.67371118, 0.82115125,
0.87560984, 0.77832734, 0.7593189, 0.81615662, 0.11906964,
0.18857729])
many_scatters([actual_0, actual_1, actual_0r, actual_1r],
[predicted_0, predicted_1, predicted_0r, predicted_1r],
[0.5, 0.5, 0.5, 0.5],
["Perfect careful", "Typical careful", "Perfect risky", "Typical risky"],
(2, 2))
Explanation: An algorithm can be cautious and try not to push the probabilities far away from 0.5, or it can take risks and make predictions close to zero or one.
End of explanation
actual_10 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1.])
predicted_10 = np.array([ 0.29340574, 0.47340035, 0.1580356 , 0.29996772, 0.24115457, 0.16177793,
0.35552878, 0.18867804, 0.38141962, 0.20367392, 0.26418924, 0.16289102,
0.27774892, 0.32013135, 0.13453541, 0.39478755, 0.96625033, 0.47683139,
0.51221325, 0.48938235, 0.57092593, 0.21856972, 0.62773859, 0.90454639, 0.19406537,
0.32063043, 0.4545493 , 0.57574841, 0.55847795 ])
actual_11 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_11 = np.array([ 0.35929566, 0.61562123, 0.71974688, 0.24893298, 0.19056711, 0.89308488,
0.71155538, 0.00903258, 0.51950535, 0.72153302, 0.45936068, 0.20197229, 0.67092724,
0.81111343, 0.65359427, 0.70044585, 0.61983513, 0.84716577, 0.8512387 ,
0.86023125, 0.7659328 , 0.70362246, 0.70127618, 0.8578749 , 0.83641841,
0.62959491, 0.90445368])
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
[0.5, 0.5, 0.5], ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
Explanation: The intervals can also be shifted. If an algorithm is afraid of false positive errors, it will more often make predictions close to zero.
Likewise, to avoid false negative errors it makes sense to predict large probabilities more often.
End of explanation
from sklearn.metrics import precision_score, recall_score, accuracy_score
T = 0.5
print "ะะปะณะพัะธัะผั, ัะฐะทะฝัะต ะฟะพ ะบะฐัะตััะฒั:"
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0 > T, predicted_1 > T, predicted_2 > T],
["Perfect:", "Typical:", "Awful:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
print
print "ะััะพัะพะถะฝัะน ะธ ัะธัะบัััะธะน ะฐะปะณะพัะธัะผั:"
for actual, predicted, descr in zip([actual_1, actual_1r],
[predicted_1 > T, predicted_1r > T],
["Typical careful:", "Typical risky:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
print
print "ะ ะฐะทะฝัะต ัะบะปะพะฝะฝะพััะธ ะฐะปะณะพัะธัะผะพะฒ ะบ ะพัะธะฑะบะฐะผ FP ะธ FN:"
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10 > T, predicted_11 > T],
["Avoids FP:", "Avoids FN:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
Explanation: We have described several kinds of probability vectors. Next we will look at how the metrics score different prediction vectors, so be sure to run the cells that create the vectors used in the visualizations.
Metrics that score binary prediction vectors
There are two typical situations in which machine learning practitioners start studying the properties of quality metrics:
1. when taking part in a competition or solving an applied problem, where the prediction vector is scored by a specific metric and an algorithm maximizing that metric has to be built.
1. when formalizing a machine learning problem, where there are requirements coming from the application domain and a mathematical metric matching those requirements has to be proposed.
Below we briefly look at each metric from these two viewpoints.
Precision and recall; accuracy
Let us start with the metrics that evaluate quality after binarization by the threshold T, i.e. the ones that compare two binary vectors: actual and predicted.
Two popular metrics are precision and recall. The first shows how often the algorithm predicts class 1 and turns out to be right; the second shows how many of the class-1 objects the algorithm has found.
We also consider the simplest and best-known metric - accuracy; it shows the fraction of correct answers.
Let us work out the strengths and weaknesses of these metrics by trying them on different probability vectors.
End of explanation
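To make the definitions concrete, here is a sketch (illustrative only; the sklearn calls above are the intended tools) that recomputes the same three metrics from the raw confusion-matrix counts:
import numpy as np

def metrics_from_counts(actual, predicted_labels):
    actual = np.asarray(actual)
    predicted_labels = np.asarray(predicted_labels)
    tp = np.sum((actual == 1) & (predicted_labels == 1))
    fp = np.sum((actual == 0) & (predicted_labels == 1))
    fn = np.sum((actual == 1) & (predicted_labels == 0))
    tn = np.sum((actual == 0) & (predicted_labels == 0))
    precision = tp / float(tp + fp)            # float() keeps the division exact under Python 2
    recall = tp / float(tp + fn)
    accuracy = (tp + tn) / float(len(actual))
    return precision, recall, accuracy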
from sklearn.metrics import precision_recall_curve
precs = []
recs = []
threshs = []
labels = ["Typical", "Avoids FP", "Avoids FN"]
for actual, predicted in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11]):
prec, rec, thresh = precision_recall_curve(actual, predicted)
precs.append(prec)
recs.append(rec)
threshs.append(thresh)
plt.figure(figsize=(15, 5))
for i in range(3):
ax = plt.subplot(1, 3, i+1)
plt.plot(threshs[i], precs[i][:-1], label="precision")
plt.plot(threshs[i], recs[i][:-1], label="recall")
plt.xlabel("threshold")
ax.set_title(labels[i])
plt.legend()
Explanation: All three metrics easily tell the simple cases of good and bad algorithms apart. Note that the metrics take values in [0, 1], which makes them easy to interpret.
The metrics do not care about the magnitudes of the probabilities, only about how many objects end up on the wrong side of the chosen boundary (here T = 0.5).
The accuracy metric gives equal weight to false positive and false negative errors, whereas the precision/recall pair identifies this difference unambiguously. That is, in fact, what these two are used for: to control the FP and FN errors.
We measured the metrics with the threshold fixed at T = 0.5, because it looks optimal for almost all of the pictures. Let us look, on the last (and most interesting for these metrics) group of vectors, at how precision and recall change as the threshold grows.
End of explanation
############### Programming assignment: problem 1 ###############
T = 0.65
ans =[]
print "ะะตััะธะบะธ ะฝะฐ ััะตั
ะฒัะฑัะฐะฝะฝัั
ะฟะฐัะฐั
ะฒะตะบัะพัะพะฒ:"
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1 > T, predicted_10 > T, predicted_11 > T],
["Typical:", "Avoids FP:", "Avoids FN:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
ans.append(precision_score(actual, predicted))
ans.append(recall_score(actual, predicted))
print
print ans
def write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11):
answers = [precision_1, recall_1, precision_10, recall_10, precision_11, recall_11]
with open("pa_metrics_problem1.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_1(ans[0],ans[1],ans[2],ans[3],ans[4],ans[5])
Explanation: As the threshold grows we make fewer FP errors and more FN errors, so one of the curves rises while the other falls. From such a plot you can pick an optimal threshold value at which precision and recall are both acceptable. If no such threshold exists, a different algorithm has to be trained.
Note that what counts as acceptable precision and recall is defined by the application domain. For example, in the problem of deciding whether a patient has a certain disease (0 - healthy, 1 - sick), false negative errors are to be avoided, so a recall of about 0.9 is required. You can tell a person they are sick and catch the mistake during further diagnostics; missing the disease is much worse.
<font color="green" size=5>Programming assignment: problem 1. </font> Fix the threshold T = 0.65; from the plots you can roughly see what the metrics equal on the three chosen pairs of vectors (actual, predicted). Compute the exact precision and recall for these three pairs of vectors.
Write the 6 resulting numbers to a text file in the following order:
precision_1 recall_1 precision_10 recall_10 precision_11 recall_11
The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX.
Pass the answer to the write_answer_1 function. Upload the resulting file to the form.
End of explanation
from sklearn.metrics import f1_score
T = 0.5
print "ะ ะฐะทะฝัะต ัะบะปะพะฝะฝะพััะธ ะฐะปะณะพัะธัะผะพะฒ ะบ ะพัะธะฑะบะฐะผ FP ะธ FN:"
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1 > T, predicted_10 > T, predicted_11 > T],
["Typical:", "Avoids FP:", "Avoids FN:"]):
print descr, "f1 =", f1_score(actual, predicted)
Explanation: F1-score
The obvious drawback of the precision-recall pair is that there are two of them: it is unclear how to rank algorithms. To avoid this, the F1 metric is used; it equals the harmonic mean of precision and recall.
The F1 metric equals 1 if and only if precision = 1 and recall = 1 (a perfect algorithm).
F1 is hard to game: if one of the two quantities is small while the other is close to 1 (and the plots show that such a combination is sometimes easy to obtain), F1 will be far from 1. The F1 metric is hard to optimize, because doing so requires achieving high precision and high recall at the same time.
As an example, let us compute F1 for the same set of vectors we plotted above (recall that one of the curves there quickly reaches one).
End of explanation
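A quick sanity check of the harmonic-mean formula (a sketch; f1_score above is the canonical implementation):
def f1_from_pr(precision, recall):
    # harmonic mean of precision and recall; defined as 0 when both are 0
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# e.g. f1_from_pr(1.0, 0.1) is about 0.18, far below the balanced f1_from_pr(0.6, 0.6) = 0.6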
############### Programming assignment: problem 2 ###############
k = 1
print "ะะพะธัะบ ะทะฝะฐัะตะฝะธะน ะบ:"
for i in range(10):
T = 0.1 * k
print 'k = ',k,'T = ',T
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1 > T, predicted_10 > T, predicted_11 > T],
["Typical:", "Avoids FP:", "Avoids FN:"]):
print descr, "f1 =", f1_score(actual, predicted)
k += 1
print
k_1 = 5
k_10 = 3
k_11 =6
ks = [k_1, k_10, k_11]
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
np.array(ks)*0.1, ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
def write_answer_2(k_1, k_10, k_11):
answers = [k_1, k_10, k_11]
with open("pa_metrics_problem2.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_2(k_1, k_10, k_11)
Explanation: In the last two cases, where one of the paired metrics equals 1, the F1 metric is noticeably smaller than in the first, balanced case.
<font color="green" size=5>Programming assignment: problem 2. </font> Both the character of the probability vector and the chosen threshold affect precision and recall.
For the same pairs (actual, predicted) as in the previous problem, find the optimal thresholds that maximize F1_score. Consider only thresholds of the form T = 0.1 * k, with k an integer; accordingly, you need to find three values of k. If f1 is maximized at several values of k, report the smallest of them.
Write down the k values you found in the following order:
k_1, k_10, k_11
The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX.
Pass the answer to the write_answer_2 function. Upload the file to the form.
If you store the list of the three k values, in the same order, in the variable ks, the code below can be used to visualize the thresholds you found:
End of explanation
from sklearn.metrics import log_loss
print "ะะปะณะพัะธัะผั, ัะฐะทะฝัะต ะฟะพ ะบะฐัะตััะฒั:"
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect:", "Typical:", "Awful:"]):
print descr, log_loss(actual, predicted)
print
print "ะััะพัะพะถะฝัะน ะธ ัะธัะบัััะธะน ะฐะปะณะพัะธัะผั:"
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
[predicted_0, predicted_0r, predicted_1, predicted_1r],
["Ideal careful", "Ideal risky", "Typical careful:", "Typical risky:"]):
print descr, log_loss(actual, predicted)
print
print "ะ ะฐะทะฝัะต ัะบะปะพะฝะฝะพััะธ ะฐะปะณะพัะธัะผะพะฒ ะบ ะพัะธะฑะบะฐะผ FP ะธ FN:"
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10, predicted_11],
["Avoids FP:", "Avoids FN:"]):
print descr, log_loss(actual, predicted)
Explanation: Metrics that score the vectors of class-1 probabilities
The metrics considered above are easy to interpret, but when using them we ignore most of the information returned by the algorithm. In some problems the probabilities are needed as such, for example when we predict whether a team will win a football match and the probability determines the size of the bet on that team. Even if in the end we binarize the prediction anyway, it is worth keeping an eye on the character of the probability vector.
Log_loss
Log_loss computes the likelihood of the labels in actual under the probabilities in predicted, taken with the opposite sign:
$log_loss(actual, predicted) = - \frac 1 n \sum_{i=1}^n (actual_i \cdot \log (predicted_i) + (1-actual_i) \cdot \log (1-predicted_i))$, where $n$ is the length of the vectors.
Accordingly, this metric has to be minimized.
Let us compute it on our vectors:
End of explanation
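The formula above translates directly into a few lines of NumPy (a sketch for intuition; sklearn's log_loss is what is actually used below):
import numpy as np

def log_loss_manual(actual, predicted, eps=1e-15):
    actual = np.asarray(actual, dtype=float)
    predicted = np.clip(np.asarray(predicted, dtype=float), eps, 1 - eps)  # avoid log(0)
    return -np.mean(actual * np.log(predicted) + (1 - actual) * np.log(1 - predicted))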
############### Programming assignment: problem 3 ###############
import math
def weighted_log_loss(actual,predicted):
sum = 0.
n = len(actual)
for i in xrange(0,n):
sum += 0.3 * actual[i] * math.log(predicted[i]) + 0.7 * (1 - actual[i]) * math.log(1 - predicted[i])
return round(sum/-n,12)
answer = []
j = 0
print "ะััะธัะปะตะฝะธะต ะผะพะดะธัะธัะธัะพะฒะฐะฝะฝะพะณะพ Log_loss\n"
print "ะะปะณะพัะธัะผั, ัะฐะทะฝัะต ะฟะพ ะบะฐัะตััะฒั:"
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect:", "Typical:", "Awful:"]):
answer.append(weighted_log_loss(actual, predicted))
print descr, answer[j]
j += 1
print "\nะััะพัะพะถะฝัะน ะธ ัะธัะบัััะธะน ะฐะปะณะพัะธัะผั:"
for actual, predicted, descr in zip([actual_0r, actual_1r],
[ predicted_0r, predicted_1r],
["Ideal careful", "Ideal risky", "Typical careful:", "Typical risky:"]):
answer.append(weighted_log_loss(actual, predicted))
print descr, answer[j]
j += 1
print "\nะ ะฐะทะฝัะต ัะบะปะพะฝะฝะพััะธ ะฐะปะณะพัะธัะผะพะฒ ะบ ะพัะธะฑะบะฐะผ FP ะธ FN:"
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10, predicted_11],
["Avoids FP:", "Avoids FN:"]):
answer.append(weighted_log_loss(actual, predicted))
print descr, answer[j]
j += 1
def write_answer_3(wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11):
answers = [wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11]
with open("pa_metrics_problem3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_3(answer[0], answer[1], answer[2], answer[3], answer[4], answer[5], answer[6])
Explanation: Like the previous metrics, log_loss clearly separates the perfect, typical and awful cases. Note, however, that its magnitude is rather hard to interpret: the metric never reaches zero and has no upper bound. So even for a perfect algorithm, looking at a single log_loss value alone, it is impossible to tell that it is perfect.
On the other hand, this metric does distinguish careful and risky algorithms. As we saw above, in the Typical careful and Typical risky cases the number of errors after binarization at T = 0.5 is about the same, and in the Ideal cases there are no errors at all. However, for the classes it guesses wrong, the risky Typical algorithm pays with a much larger increase of log_loss than the careful one. Conversely, for the classes it guesses right, the risky ideal algorithm gets a smaller log_loss than the careful ideal algorithm.
Thus log_loss is sensitive both to probabilities close to 0 and 1 and to probabilities close to 0.5.
Plain Log_loss cannot tell FP and FN errors apart.
It is easy, however, to generalize log_loss to the case where FP or FN should be penalized more: it is enough to attach a convex combination (i.e. two non-negative coefficients summing to one) to the likelihood terms. For example, let us penalize false positives:
$weighted_log_loss(actual, predicted) = -\frac 1 n \sum_{i=1}^n (0.3\, \cdot actual_i \cdot \log (predicted_i) + 0.7\,\cdot (1-actual_i)\cdot \log (1-predicted_i))$
If the algorithm wrongly predicts a large probability of the first class, i.e. the object actually belongs to class 0, then the first term in the parentheses is zero while the second one is counted with the larger weight.
<font color="green" size=5>Programming assignment: problem 3. </font> Write a function that takes the vectors actual and predicted and returns the modified Log-Loss computed by the formula above. Compute its value (call it wll) on the same vectors on which we computed the plain log_loss, and write the results to a file in the following order:
wll_0 wll_1 wll_2 wll_0r wll_1r wll_10 wll_11
The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX.
Pass the answer to the write_answer_3 function. Upload the file to the form.
End of explanation
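For comparison, the same weighted loss can be written without an explicit loop (a NumPy sketch equivalent to the weighted_log_loss function above, with the same 0.3/0.7 weights):
import numpy as np

def weighted_log_loss_vec(actual, predicted, w_pos=0.3, w_neg=0.7):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return -np.mean(w_pos * actual * np.log(predicted)
                    + w_neg * (1 - actual) * np.log(1 - predicted))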
from sklearn.metrics import roc_curve, roc_auc_score
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
aucs = ""
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect", "Typical", "Awful"]):
fpr, tpr, thr = roc_curve(actual, predicted)
plt.plot(fpr, tpr, label=descr)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 2)
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
[predicted_0, predicted_0r, predicted_1, predicted_1r],
["Ideal careful", "Ideal Risky", "Typical careful", "Typical risky"]):
fpr, tpr, thr = roc_curve(actual, predicted)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 3)
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11],
["Typical", "Avoids FP", "Avoids FN"]):
fpr, tpr, thr = roc_curve(actual, predicted)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
print aucs
Explanation: Note the difference in weighted_log_loss between the Avoids FP and Avoids FN cases.
ROC and AUC
To build a ROC curve (receiver operating characteristic), the binarization threshold of the probability vector is varied, and quantities that depend on the numbers of FP and FN errors are computed. These quantities are defined so that, whenever a threshold exists that separates the classes perfectly, the ROC curve passes through a particular point - the upper-left corner of the square [0, 1] x [0, 1]. In addition, the curve always passes through the lower-left and upper-right corners. The result is a clear visualization of the algorithm's quality. To characterize this visualization with a single number, the notion of AUC - the area under the ROC curve - was introduced.
There is a simple and efficient algorithm that computes the ROC curve and the AUC in a single pass over the sample, but we will not go into the details here.
Let us build the ROC curves for our problems:
End of explanation
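Each point of a ROC curve is simply the (FPR, TPR) pair at one threshold; a sketch of that single-threshold computation (roc_curve above sweeps all thresholds for you):
import numpy as np

def fpr_tpr_at_threshold(actual, predicted, T):
    actual = np.asarray(actual)
    labels = np.asarray(predicted) > T
    tpr = np.sum(labels & (actual == 1)) / float(np.sum(actual == 1))  # true positive rate
    fpr = np.sum(labels & (actual == 0)) / float(np.sum(actual == 0))  # false positive rate
    return fpr, tpr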
def Binarization_threshold(fpr,tpr,thr):
ideal = np.array([0,1])
mins = np.linalg.norm(ideal - np.array([fpr[0],tpr[0]]))
ansT = thr[0]#t
resX = fpr[0]#x
resY = tpr[0]#y
for i in xrange(1,len(fpr)):
result = np.linalg.norm(ideal - np.array([fpr[i],tpr[i]]))
if result < mins:
mins = result
ansT = thr[i]#t
resX = fpr[i]#x
resY = tpr[i]#y
plt.scatter(resX,resY)
return ansT
############### Programming assignment: problem 4 ###############
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
aucs = ""
answer4 = []
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect", "Typical", "Awful"]):
fpr, tpr, thr = roc_curve(actual, predicted)
answer4.append(Binarization_threshold(fpr,tpr,thr))#search t
plt.plot(fpr, tpr, label=descr)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 2)
#-------
for actual, predicted, descr in zip([actual_0r,actual_1r],
[predicted_0r,predicted_1r],
["Ideal Risky","Typical risky"]):
fpr, tpr, thr = roc_curve(actual, predicted)
answer4.append(Binarization_threshold(fpr,tpr,thr))#search t
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 3)
#--------
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10, predicted_11],
["Avoids FP", "Avoids FN"]):
fpr, tpr, thr = roc_curve(actual, predicted)
answer4.append(Binarization_threshold(fpr,tpr,thr))#search t
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
print aucs
print "\n",answer4
def write_answer_4(T_0, T_1, T_2, T_0r, T_1r, T_10, T_11):
answers = [T_0, T_1, T_2, T_0r, T_1r, T_10, T_11]
with open("pa_metrics_problem4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_4(answer4[0],answer4[1],answer4[2],answer4[3],answer4[4],answer4[5],answer4[6])
Explanation: The more objects there are in the sample, the smoother the curve looks (although in reality it is still a step function).
As expected, the curves of all the perfect algorithms pass through the upper-left corner. The first panel also shows a typical ROC curve (in practice the curves usually do not reach the "ideal" corner).
The AUC of the risky algorithm is noticeably smaller than that of the careful one, even though the careful and risky perfect algorithms are indistinguishable by ROC or AUC. So there is no point in trying to widen the gap between the probability intervals of the classes.
The curve becomes skewed when an algorithm is prone to FP or FN errors. The AUC value alone, however, cannot reveal this (the curves can be symmetric about the (0, 1)-(1, 0) diagonal).
Once the curve has been built, it is convenient to pick the binarization threshold at which a compromise between FP and FN is reached. A threshold corresponds to a point on the curve. If we want to avoid FP errors, we should pick a point on the left side of the square (as high as possible); to avoid FN, a point on the top side of the square (as far to the left as possible). All intermediate points correspond to different proportions of FP and FN.
<font color="green" size=5>Programming assignment: problem 4. </font> On each curve, find the point that is closest to the upper-left corner (closest in the usual Euclidean sense); that point corresponds to some binarization threshold. Write the thresholds to the output file in the following order:
T_0 T_1 T_2 T_0r T_1r T_10 T_11
The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX.
If several thresholds minimize the distance, choose the largest one.
Pass the answer to the write_answer_4 function. Upload the file to the form.
Clarification: the roc_curve function returns three values: FPR (the array of abscissas of the ROC curve points), TPR (the array of ordinates of the ROC curve points) and thresholds (the array of thresholds corresponding to the points).
We recommend drawing the point you find on the plot with the plt.scatter function.
End of explanation |
3,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #5
This homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it.
The scenario
In the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a cat cafe tycoon. This webpage lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.
I'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired you to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)
Specifically, I want to know the answers to the following questions
Step1: Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable cafe_list. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this
Step2: Great! In the following cell, write some code that creates a list of cats from the <table> tag on the page, storing them as a list of dictionaries in a variable called cat_list. Again, I've written a bit of the code for you. Expected output
Step3: Problem set #2
Step4: Here's a cell you can run if something goes wrong and you need to rollback the current query session
Step8: In the cell below, you're going to create three tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire CREATE TABLE statement for the cafe table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints
Step9: After executing the above cell, issuing a \d command in psql should yield something that looks like the following
Step10: Issuing SELECT * FROM cafe in the psql client should yield something that looks like this
Step11: The dictionary maps the name of the cat cafe to its ID in the database. You'll need these values later when you're adding records to the linking table (cat_cafe).
Now the tricky part. (Yes, believe it or not, this is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the cat_list variable (created in Problem Set #1) into the cat table. The code should also insert the relevant data into the cat_cafe table. Hints
Step12: Issuing a SELECT * FROM cat LIMIT 10 in psql should yield something that looks like this
Step14: In which zip codes can I find a lilac-colored tabby?
Expected output
Step16: What's the average weight of cats currently residing at all locations?
Expected output
Step18: Which location has the most cats with tortoiseshell coats?
Expected output | Python Code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("http://static.decontextualize.com/cats.html").read()
document = BeautifulSoup(html, "html.parser")
Explanation: Homework #5
This homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it.
The scenario
In the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a cat cafe tycoon. This webpage lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.
I'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired you to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)
Specifically, I want to know the answers to the following questions:
What's the name of the youngest cat at any location?
In which zip codes can I find a lilac-colored tabby?
What's the average weight of cats currently residing at any location (grouped by location)?
Which location has the most cats with tortoiseshell coats?
Because I'm not paying you very much, and because I am a merciful person who has considerable experience in these matters, I've decided to write the queries for you. (See below.) Your job is just to scrape the data from the web page, create the appropriate tables in PostgreSQL, and insert the data into those tables.
Before you continue, scroll down to "The Queries" below to examine the queries as I wrote them.
Problem set #1: Scraping the data
Your first goal is to create two data structures, both lists of dictionaries: one for the list of locations and one for the list of cats. You'll get these from scraping two <table> tags in the HTML: the first table has a class of cafe-list, the second has a class of cat-list.
Before you do anything else, though, execute the following cell to import Beautiful Soup and create a BeautifulSoup object with the content of the web page:
End of explanation
# TA-COMMENT: "zip" is a special word in Python! Here, you can see that it appears in green (unlike other variable names)
# This mayyyy yield an error.
cafe_list = list()
cafe_table = document.find('table', {'class': 'cafe-list'})
tbody = cafe_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
name = tr_tag.find('td', {'class':'name'}).string
zip = tr_tag.find('td', {'class':'zip'}).string
cafe_list.append({'name': name, 'zip': zip})
pass # replace "pass" with your code
cafe_list
Explanation: Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable cafe_list. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this:
[{'name': 'Hang In There', 'zip': '11237'},
{'name': 'Independent Claws', 'zip': '11201'},
{'name': 'Paws and Play', 'zip': '11215'},
{'name': 'Tall Tails', 'zip': '11222'},
{'name': 'Cats Meow', 'zip': '11231'}]
End of explanation
cat_list = list()
cat_table = document.find('table', {'class': 'cat-list'})
tbody = cat_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
birthdate = tr_tag.find('td', {'class':'birthdate'}).string
color = tr_tag.find('td', {'class':'color'}).string
locations = tr_tag.find('td', {'class':'locations'}).string.split(', ')
weight = float(tr_tag.find('td', {'class':'weight'}).string)
pattern = tr_tag.find('td', {'class':'pattern'}).string
name = tr_tag.find('td', {'class':'name'}).string
cat_dict = {'birthdate': birthdate,
'color': color,
'locations': locations,
'name': name,
'pattern': pattern,
'weight': weight}
# your code here
cat_list.append(cat_dict)
cat_list
Explanation: Great! In the following cell, write some code that creates a list of cats from the <table> tag on the page, storing them as a list of dictionaries in a variable called cat_list. Again, I've written a bit of the code for you. Expected output:
[{'birthdate': '2015-05-20',
'color': 'black',
'locations': ['Paws and Play', 'Independent Claws*'],
'name': 'Sylvester',
'pattern': 'colorpoint',
'weight': 10.46},
{'birthdate': '2000-01-03',
'color': 'cinnamon',
'locations': ['Independent Claws*'],
'name': 'Jasper',
'pattern': 'solid',
'weight': 8.06},
{'birthdate': '2006-02-27',
'color': 'brown',
'locations': ['Independent Claws*'],
'name': 'Luna',
'pattern': 'tortoiseshell',
'weight': 10.88},
[...many records omitted for brevity...]
{'birthdate': '1999-01-09',
'color': 'white',
'locations': ['Cats Meow*', 'Independent Claws', 'Tall Tails'],
'name': 'Lafayette',
'pattern': 'tortoiseshell',
'weight': 9.3}]
Note: Observe the data types of the values in each dictionary! Make sure to explicitly convert values retrieved from .string attributes of Beautiful Soup tag objects to strs using the str() function.
End of explanation
import pg8000
conn = pg8000.connect(database="catcafes")
Explanation: Problem set #2: Designing the schema
Before you do anything else, use psql to create a new database for this homework assignment using the following command:
CREATE DATABASE catcafes;
In the following cell, connect to the database using pg8000. (You may need to provide additional arguments to the .connect() method, depending on the distribution of PostgreSQL you're using.)
End of explanation
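If your local PostgreSQL setup requires credentials or a non-default host, the call might look like the sketch below; the user, password, host and port values here are placeholders, not values from the assignment.
# hypothetical connection with explicit arguments -- adjust to your own setup
conn = pg8000.connect(database="catcafes", user="postgres",
                      password="secret", host="localhost", port=5432)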
conn.rollback()
Explanation: Here's a cell you can run if something goes wrong and you need to rollback the current query session:
End of explanation
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE cafe (
    id serial,
    name varchar(40),
    zip varchar(5)
)
""")
cursor.execute("""
CREATE TABLE cat (
    id serial,
    name varchar(60),
    birthdate varchar(40),
    color varchar(40),
    pattern varchar(40),
    weight numeric
)
""")
cursor.execute("""
CREATE TABLE cat_cafe (
    cafe_id integer not null,
    cat_id integer not null,
    active boolean
)
""")
conn.commit()
Explanation: In the cell below, you're going to create three tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire CREATE TABLE statement for the cafe table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints:
Many of these fields will be varchars. Don't worry too much about how many characters you needโit's okay just to eyeball it.
Feel free to use a varchar type to store the birthdate field. No need to dig too deep into PostgreSQL's date types for this particular homework assignment.
Cats and locations are in a many-to-many relationship. You'll need to create a linking table to represent this relationship. (That's why there's space for you to create three tables.)
The linking table will need a field to keep track of whether or not a particular cafe is the "current" cafe for a given cat.
End of explanation
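To see why the linking table matters, the joins it enables have the general shape sketched below (illustrative only; the graded queries are the ones already provided in the assignment itself):
# Hypothetical example of a join across the three tables: average cat weight per cafe.
cursor.execute("""
    SELECT cafe.name, avg(cat.weight)
    FROM cat
    JOIN cat_cafe ON cat.id = cat_cafe.cat_id
    JOIN cafe ON cafe.id = cat_cafe.cafe_id
    GROUP BY cafe.name
""")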
cafe_name_id_map = {}
for item in cafe_list:
cursor.execute("INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id",
[str(item['name']), str(item['zip'])])
rowid = cursor.fetchone()[0]
cafe_name_id_map[str(item['name'])] = rowid
conn.commit()
Explanation: After executing the above cell, issuing a \d command in psql should yield something that looks like the following:
List of relations
Schema | Name | Type | Owner
--------+-------------+----------+---------
public | cafe | table | allison
public | cafe_id_seq | sequence | allison
public | cat | table | allison
public | cat_cafe | table | allison
public | cat_id_seq | sequence | allison
(5 rows)
If something doesn't look right, you can always use the DROP TABLE command to drop the tables and start again. (You can also issue a DROP DATABASE catcafes command to drop the database altogether.) Don't worry if it takes a few tries to get it right; it happens to the best and most expert among us. You'll probably have to drop the database and start again from scratch several times while completing this homework.
Note: If you try to issue a DROP TABLE or DROP DATABASE command and psql seems to hang forever, it could be that PostgreSQL is waiting for current connections to close before proceeding with your command. To fix this, create a cell with the code conn.close() in your notebook and execute it. After the DROP commands have completed, make sure to run the cell containing the pg8000.connect() call again.
Problem set #3: Inserting the data
In the cell below, I've written the code to insert the cafes into the cafe table, using data from the cafe_list variable that we made earlier. If the code you wrote to create that table was correct, the following cell should execute without error or incident. Execute it before you continue.
End of explanation
cafe_name_id_map
Explanation: Issuing SELECT * FROM cafe in the psql client should yield something that looks like this:
id | name | zip
----+-------------------+-------
1 | Hang In There | 11237
2 | Independent Claws | 11201
3 | Paws and Play | 11215
4 | Tall Tails | 11222
5 | Cats Meow | 11231
(5 rows)
(The id values may be different depending on how many times you've cleaned the table out with DELETE.)
Note that the code in the cell above created a dictionary called cafe_name_id_map. What's in it? Let's see:
End of explanation
conn.rollback()
cursor = conn.cursor()
cat_name_id_map = {}
def escape_quote(string):
return string.replace("'", "''")
for cat in cat_list:
sql = "INSERT INTO cat \n(birthdate, color, name, pattern, weight) \nVALUES \n"
sql += "('{}', '{}', '{}', '{}', {}) RETURNING id".format(str(cat['birthdate']), str(cat['color']), escape_quote(str(cat['name'])), str(cat['pattern']), cat['weight'])
# print(sql) # for debug
cursor.execute(sql)
rowid = cursor.fetchone()[0]
cat_name_id_map[str(cat['name'])] = rowid
conn.commit()
cat_name_id_map
import re
reg = re.compile(".*\*$")
sql_base = "INSERT INTO cat_cafe \n(cafe_id, cat_id, active) \nVALUES \n"
insert_list = []
# many to one, so not too much trouble... we iterate (1) through the cats then (2) through their locations
for cat in cat_list: # (1) we iterate through the cats
cat_id = cat_name_id_map[cat['name']]
for loc in cat['locations']: # (2) then through the cafes (= cat locations)
if re.match(reg, loc): # is there a star (*)?
active = 't'
cafe_id = cafe_name_id_map[loc[:-1]]
else:
active = 'f'
cafe_id = cafe_name_id_map[loc]
sql_insert = "({}, {}, '{}')".format(cafe_id, cat_id, active)
insert_list.append(sql_insert)
print(sql_base + str.join(",\n", insert_list)) # debug
cursor.execute(sql_base + str.join(",\n", insert_list))
conn.commit()
Explanation: The dictionary maps the name of the cat cafe to its ID in the database. You'll need these values later when you're adding records to the linking table (cat_cafe).
Now the tricky part. (Yes, believe it or not, this is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the cat_list variable (created in Problem Set #1) into the cat table. The code should also insert the relevant data into the cat_cafe table. Hints:
You'll need to get the id of each cat record using the RETURNING clause of the INSERT statement and the .fetchone() method of the cursor object.
How do you know whether or not the current location is the "active" location for a particular cat? The page itself contains some explanatory text that might be helpful here. You might need to use some string checking and manipulation functions in order to make this determination and transform the string as needed.
The linking table stores an ID only for both the cat and the cafe. Use the cafe_name_id_map dictionary to get the id of the cafes inserted earlier.
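If you would rather not build SQL strings by hand, an equivalent approach is to let pg8000 do the quoting via %s parameter substitution; the following is a sketch of that idea (mine, not the assignment's reference solution), using the same cat_list and cafe_name_id_map variables.
# Alternative sketch using parameter substitution instead of string formatting.
for cat in cat_list:
    cursor.execute(
        "INSERT INTO cat (name, birthdate, color, pattern, weight) "
        "VALUES (%s, %s, %s, %s, %s) RETURNING id",
        [str(cat['name']), str(cat['birthdate']), str(cat['color']),
         str(cat['pattern']), cat['weight']])
    cat_id = cursor.fetchone()[0]
    for loc in cat['locations']:
        active = loc.endswith('*')  # the asterisk marks the cat's current cafe
        cursor.execute(
            "INSERT INTO cat_cafe (cafe_id, cat_id, active) VALUES (%s, %s, %s)",
            [cafe_name_id_map[loc.rstrip('*')], cat_id, active])
conn.commit()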
End of explanation
cursor.execute("SELECT max(birthdate) FROM cat")
birthdate = cursor.fetchone()[0]
cursor.execute("SELECT name FROM cat WHERE birthdate = %s", [birthdate])
print(cursor.fetchone()[0])
Explanation: Issuing a SELECT * FROM cat LIMIT 10 in psql should yield something that looks like this:
id | name | birthdate | weight | color | pattern
----+-----------+------------+--------+----------+---------------
1 | Sylvester | 2015-05-20 | 10.46 | black | colorpoint
2 | Jasper | 2000-01-03 | 8.06 | cinnamon | solid
3 | Luna | 2006-02-27 | 10.88 | brown | tortoiseshell
4 | Georges | 2015-08-13 | 9.40 | white | tabby
5 | Millie | 2003-09-13 | 9.27 | red | bicolor
6 | Lisa | 2009-07-30 | 8.84 | cream | colorpoint
7 | Oscar | 2011-12-15 | 8.44 | cream | solid
8 | Scaredy | 2015-12-30 | 8.83 | lilac | tabby
9 | Charlotte | 2013-10-16 | 9.54 | blue | tabby
10 | Whiskers | 2011-02-07 | 9.47 | white | colorpoint
(10 rows)
And a SELECT * FROM cat_cafe LIMIT 10 in psql should look like this:
cat_id | cafe_id | active
--------+---------+--------
1 | 3 | f
1 | 2 | t
2 | 2 | t
3 | 2 | t
4 | 4 | t
4 | 1 | f
5 | 3 | t
6 | 1 | t
7 | 1 | t
7 | 5 | f
(10 rows)
Again, the exact values for the ID columns might be different, depending on how many times you've deleted and dropped the tables.
The Queries
Okay. To verify your work, run the following queries and check their output. If you've correctly scraped the data and imported it into SQL, running the cells should produce exactly the expected output, as indicated. If not, then you performed one of the steps above incorrectly; check your work and try again. (Note: Don't modify these cells, just run them! This homework was about scraping and inserting data, not querying it.)
What's the name of the youngest cat at any location?
Expected output: Scaredy
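(As an aside, and without modifying the verification cell itself, the same answer can be obtained in a single query, since the ISO-formatted birthdate strings sort chronologically:)
cursor.execute("SELECT name FROM cat ORDER BY birthdate DESC LIMIT 1")
print(cursor.fetchone()[0])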
End of explanation
cursor.execute("""SELECT DISTINCT(cafe.zip)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true
""")
print(', '.join([x[0] for x in cursor.fetchall()]))
Explanation: In which zip codes can I find a lilac-colored tabby?
Expected output: 11237, 11215
End of explanation
cursor.execute("""
SELECT cafe.name, avg(cat.weight)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat_cafe.active = true
GROUP BY cafe.name
""")
for rec in cursor.fetchall():
print(rec[0]+":", "%0.2f" % rec[1])
Explanation: What's the average weight of cats currently residing at all locations?
Expected output:
Independent Claws: 9.33
Paws and Play: 9.28
Tall Tails: 9.82
Hang In There: 9.25
Cats Meow: 9.76
End of explanation
cursor.execute("""
SELECT cafe.name
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell'
GROUP BY cafe.name
ORDER BY count(cat.name) DESC
LIMIT 1
""")
print(cursor.fetchone()[0])
Explanation: Which location has the most cats with tortoiseshell coats?
Expected output: Independent Claws
End of explanation |
3,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coregionalized Regression Model (vector-valued regression)
updated
Step1: For this example we will generate an artificial dataset.
Step2: Our two datasets look like this
Step3: We will also define a function that will be used later for plotting our results.
Step4: Covariance kernel
The coregionalized regression model relies on the use of
$\color{firebrick}{\textbf{multiple output kernels}}$ or $\color{firebrick}{\textbf{vector-valued kernels}}$
(Álvarez, Rosasco and Lawrence, 2012), of the following form
Step5: The components of the kernel can be accessed as follows
Step6: We have built a function called $\color{firebrick}{\textbf{ICM}}$ that deals with the steps of defining two kernels and multiplying them together.
Step7: Now we will show how to add different kernels together to model the data in our example.
Using the GPy's Coregionalized Regression Model
Once we have defined an appropriate kernel for our model, its use is straightforward. In the next example we will use a
$\color{firebrick}{\textbf{Matern-3/2 kernel}}$ as $\bf K$.
Step8: Notice that there are two parameters for the $\color{firebrick}{\textbf{noise variance}}$. Each one corresponds to the noise of each output.
But what is the advantage of this model? Well, the fit of a non-coregionalized model (i.e., two independent models) would look like this
Step9: The coregionalized model shares information across outputs, but the independent models cannot do that. In the regions where there is no training data specific to an output the independent models tend to return to the prior assumptions.
In this case, where both outputs have associated patterns, the fit is better with the coregionalized model.
$\color{firebrick}{\textbf{Can we improve the fit in the coregionalization?}}$
Yes, we will have a look at that in the next section.
Kernel Selection
The data from both outputs is not centered on zero. A way of dealing with outputs of different means or magnitudes is using a
$\color{firebrick}{\textbf{bias kernel}}$.
This kernel is just changing the mean (constant) of the Gaussian Process being fitted. There is no need to assume any sort of correlation between both means, so we can define ${\bf W} = {\bf 0}$.
Step10: At the moment, our model is only able to explain the mean of the data. However we can notice that there is a decreasing trend in the first output and an increasing trend in the second one. In this case we can model such a trend with a
$\color{firebrick}{\textbf{linear kernel}}$.
Since the linear kernel only fits a line with constant slope along the output space, there is no need to assume any correlation between outputs.
We could define our new multiple output kernel as follows
Step11: Now we will model the variation along the trend defined by the linear component. We will do this with a Matern-3/2 kernel.
Step12: Prediction at new input values
Behind the scenes, this model is using an extended input space with an additional dimension that points at the output each data point belongs to. To make use of the prediction function of GPy, this model needs the input array to have the extended format.
For example, if we want to make predictions in the region 100 to 110 for the second output, we need to define the new inputs as follows
Step13: $\color{firebrick}{\textbf{Note
Step14: The astype(int) function is to ensure that the values of the dictionary are integers; otherwise Python complains when using them as indices. The prediction command can then be called this way | Python Code:
%pylab inline
import pylab as pb
pylab.ion()
import GPy
Explanation: Coregionalized Regression Model (vector-valued regression)
updated: 17th June 2015
by Ricardo Andrade-Pacheco
This tutorial will focus on the use and kernel selection of the $\color{firebrick}{\textbf{coregionalized regression}}$ model in GPy.
Setup
The first thing to do is to set the plots to be interactive and to import GPy.
End of explanation
#This functions generate data corresponding to two outputs
f_output1 = lambda x: 4. * np.cos(x/5.) - .4*x - 35. + np.random.rand(x.size)[:,None] * 2.
f_output2 = lambda x: 6. * np.cos(x/5.) + .2*x + 35. + np.random.rand(x.size)[:,None] * 8.
#{X,Y} training set for each output
X1 = np.random.rand(100)[:,None]; X1=X1*75
X2 = np.random.rand(100)[:,None]; X2=X2*70 + 30
Y1 = f_output1(X1)
Y2 = f_output2(X2)
#{X,Y} test set for each output
Xt1 = np.random.rand(100)[:,None]*100
Xt2 = np.random.rand(100)[:,None]*100
Yt1 = f_output1(Xt1)
Yt2 = f_output2(Xt2)
Explanation: For this example we will generate an artificial dataset.
End of explanation
xlim = (0,100); ylim = (0,50)
fig = pb.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
ax1.set_xlim(xlim)
ax1.set_title('Output 1')
ax1.plot(X1[:,:1],Y1,'kx',mew=1.5,label='Train set')
ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5,label='Test set')
ax1.legend()
ax2 = fig.add_subplot(212)
ax2.set_xlim(xlim)
ax2.set_title('Output 2')
ax2.plot(X2[:,:1],Y2,'kx',mew=1.5,label='Train set')
ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5,label='Test set')
ax2.legend()
Explanation: Our two datasets look like this:
End of explanation
def plot_2outputs(m,xlim,ylim):
fig = pb.figure(figsize=(12,8))
#Output 1
ax1 = fig.add_subplot(211)
ax1.set_xlim(xlim)
ax1.set_title('Output 1')
m.plot(plot_limits=xlim,fixed_inputs=[(1,0)],which_data_rows=slice(0,100),ax=ax1)
ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5)
#Output 2
ax2 = fig.add_subplot(212)
ax2.set_xlim(xlim)
ax2.set_title('Output 2')
m.plot(plot_limits=xlim,fixed_inputs=[(1,1)],which_data_rows=slice(100,200),ax=ax2)
ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5)
Explanation: We will also define a function that will be used later for plotting our results.
End of explanation
import GPy
K=GPy.kern.RBF(1)
B = GPy.kern.Coregionalize(input_dim=1,output_dim=2)
multkernel = K.prod(B,name='B.K')
print multkernel
Explanation: Covariance kernel
The coregionalized regression model relies on the use of
$\color{firebrick}{\textbf{multiple output kernels}}$ or $\color{firebrick}{\textbf{vector-valued kernels}}$
(Álvarez, Rosasco and Lawrence, 2012), of the following form:
$
\begin{align}
{\bf B}\otimes{\bf K} =
\left(\begin{array}{ccc}
B_{1,1}\times{\bf K}({\bf X}_{1},{\bf X}_{1}) & \ldots & B_{1,D}\times{\bf K}({\bf X}_{1},{\bf X}_{D})\\
\vdots & \ddots & \vdots\\
B_{D,1}\times{\bf K}({\bf X}_{D},{\bf X}_{1}) & \ldots & B_{D,D}\times{\bf K}({\bf X}_{D},{\bf X}_{D})
\end{array}\right)
\end{align}
$.
In the expression above, ${\bf K}$ is a kernel function, ${\bf B}$ is a regarded as the coregionalization matrix, and ${\bf X}_i$ represents the inputs corresponding to the $i$-th output.
Notice that if $B_{i,j} = 0$ for $i \neq j$, then all the outputs are being considered as independent of each other.
To ensure that the multiple output kernel is a valid kernel, we need the $\bf K$ and ${\bf B}$ to be to be valid. If $\bf K$ is already a valid kernel, we just need to ensure that ${\bf B}$ is positive definite. The last is achieved by defining ${\bf B} = {\bf W}{\bf W}^\top + {\boldsymbol \kappa}{\bf I}$, for some matrix $\bf W$ and vector ${\boldsymbol \kappa}$.
In GPy, a multiple output kernel is defined in the following way:
End of explanation
#Components of B
print 'W matrix\n',B.W
print '\nkappa vector\n',B.kappa
print '\nB matrix\n',B.B
Explanation: The components of the kernel can be accessed as follows:
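As a quick numerical aside (not part of the original tutorial), these components can be used to confirm the ${\bf B} = {\bf W}{\bf W}^\top + {\boldsymbol \kappa}{\bf I}$ construction described above and to check that the resulting matrix is positive semi-definite:
#Aside: rebuild B from its components and check it is a valid coregionalization matrix
import numpy as np
W = np.asarray(B.W).reshape(2, -1)
kappa = np.asarray(B.kappa).flatten()
B_manual = W.dot(W.T) + np.diag(kappa)
print np.allclose(B_manual, np.asarray(B.B))   # matches the B matrix printed above
print np.linalg.eigvalsh(B_manual).min()       # smallest eigenvalue is non-negative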
End of explanation
icm = GPy.util.multioutput.ICM(input_dim=1,num_outputs=2,kernel=GPy.kern.RBF(1))
print icm
Explanation: We have built a function called $\color{firebrick}{\textbf{ICM}}$ that deals with the steps of defining two kernels and multiplying them together.
End of explanation
K = GPy.kern.Matern32(1)
icm = GPy.util.multioutput.ICM(input_dim=1,num_outputs=2,kernel=K)
m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=icm)
m['.*Mat32.var'].constrain_fixed(1.) #For this kernel, B.kappa encodes the variance now.
m.optimize()
print m
plot_2outputs(m,xlim=(0,100),ylim=(-20,60))
Explanation: Now we will show how to add different kernels together to model the data in our example.
Using the GPy's Coregionalized Regression Model
Once we have defined an appropriate kernel for our model, its use is straightforward. In the next example we will use a
$\color{firebrick}{\textbf{Matern-3/2 kernel}}$ as $\bf K$.
End of explanation
K = GPy.kern.Matern32(1)
m1 = GPy.models.GPRegression(X1,Y1,kernel=K.copy())
m1.optimize()
m2 = GPy.models.GPRegression(X2,Y2,kernel=K.copy())
m2.optimize()
fig = pb.figure(figsize=(12,8))
#Output 1
ax1 = fig.add_subplot(211)
m1.plot(plot_limits=xlim,ax=ax1)
ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5)
ax1.set_title('Output 1')
#Output 2
ax2 = fig.add_subplot(212)
m2.plot(plot_limits=xlim,ax=ax2)
ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5)
ax2.set_title('Output 2')
Explanation: Notice that there are two parameters for the $\color{firebrick}{\textbf{noise variance}}$. Each one corresponds to the noise of each output.
But what is the advantage of this model? Well, the fit of a non-coregionalized model (i.e., two independent models) would look like this:
End of explanation
kernel = GPy.util.multioutput.ICM(input_dim=1,num_outputs=2,kernel=GPy.kern.Bias(input_dim=1))
m = GPy.models.GPCoregionalizedRegression(X_list=[X1,X2],Y_list=[Y1,Y2],kernel=kernel)
m['.*bias.var'].constrain_fixed(1) #B.kappa now encodes the variance.
m['.*W'].constrain_fixed(0)
m.optimize()
plot_2outputs(m,xlim=(-20,120),ylim=(0,60))
Explanation: The coregionalized model shares information across outputs, but the independent models cannot do that. In the regions where there is no training data specific to an output the independent models tend to return to the prior assumptions.
In this case, where both outputs have associated patterns, the fit is better with the coregionalized model.
$\color{firebrick}{\textbf{Can we improve the fit in the coregionalization?}}$
Yes, we will have a look at that in the next section.
Kernel Selection
The data from both outputs is not centered on zero. A way of dealing with outputs of different means or magnitudes is using a
$\color{firebrick}{\textbf{bias kernel}}$.
This kernel is just changing the mean (constant) of the Gaussian Process being fitted. There is no need to assume any sort of correlation between both means, so we can define ${\bf W} = {\bf 0}$.
End of explanation
K1 = GPy.kern.Bias(1)
K2 = GPy.kern.Linear(1)
lcm = GPy.util.multioutput.LCM(input_dim=1,num_outputs=2,kernels_list=[K1,K2])
m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=lcm)
m['.*bias.var'].constrain_fixed(1.)
m['.*W'].constrain_fixed(0)
m['.*linear.var'].constrain_fixed(1.)
m.optimize()
plot_2outputs(m,xlim=(-20,120),ylim=(0,60))
Explanation: At the moment, our model is only able to explain the mean of the data. However we can notice that there is a decreasing trend in the first output and an increasing trend in the second one. In this case we can model such a trend with a
$\color{firebrick}{\textbf{linear kernel}}$.
Since the linear kernel only fits a line with constant slope along the output space, there is no need to assume any correlation between outputs.
We could define our new multiple output kernel as follows:
${\bf K}_{ICM} = {\bf B} \otimes ( {\bf K}_{Bias} + {\bf K}_{Linear} )$.
However, we can also define a more general kernel of the following form:
${\bf K}_{LCM} = {\bf B}_1 \otimes {\bf K}_{Bias} + {\bf B}_2 \otimes {\bf K}_{Linear}$.
GPy has also a function which saves some steps in the definition of
$\color{firebrick}{\textbf{LCM}}$ kernels.
End of explanation
K1 = GPy.kern.Bias(1)
K2 = GPy.kern.Linear(1)
K3 = GPy.kern.Matern32(1)
lcm = GPy.util.multioutput.LCM(input_dim=1,num_outputs=2,kernels_list=[K1,K2,K3])
m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=lcm)
m['.*ICM.*var'].unconstrain()
m['.*ICM0.*var'].constrain_fixed(1.)
m['.*ICM0.*W'].constrain_fixed(0)
m['.*ICM1.*var'].constrain_fixed(1.)
m['.*ICM1.*W'].constrain_fixed(0)
m.optimize()
plot_2outputs(m,xlim=(0,100),ylim=(-20,60))
Explanation: Now we will model the variation along the trend defined by the linear component. We will do this with a Matern-3/2 kernel.
End of explanation
newX = np.arange(100,110)[:,None]
newX = np.hstack([newX,np.ones_like(newX)])
print newX
Explanation: Prediction at new input values
Behind the scenes, this model is using an extended input space with an additional dimension that points at the output each data point belongs to. To make use of the prediction function of GPy, this model needs the input array to have the extended format.
For example, if we want to make predictions in the region 100 to 110 for the second output, we need to define the new inputs as follows:
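As an aside (not in the original tutorial), the first output would use a column of zeros instead of ones, and both outputs can be stacked into a single prediction array:
#Aside: new inputs for BOTH outputs at once; the extra column holds the output index
Xnew = np.arange(100,110)[:,None]
Xnew_both = np.vstack([np.hstack([Xnew, np.zeros_like(Xnew)]),
                       np.hstack([Xnew, np.ones_like(Xnew)])])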
End of explanation
noise_dict = {'output_index':newX[:,1:].astype(int)}
Explanation: $\color{firebrick}{\textbf{Note:}}$ remember that Python starts counting from zero, so input 1 is actually the second input.
We also need to pass another output to the predict function. This is an array that tells which $\color{firebrick}{\textbf{noise model}}$ is associated to each point to be predicted. This is a dictionary constructed as follows:
End of explanation
m.predict(newX,Y_metadata=noise_dict)
Explanation: The astype(int) function is to ensure that the values of the dictionary are integers; otherwise Python complains when using them as indices. The prediction command can then be called this way:
End of explanation |
3,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experimental Results from a Decision Tree based NER model
Decision Trees, as opposed to other machine learning techniques such as SVMs and Neural Networks, provide a human-interpretable classification model. We will exploit this to both generate pretty pictures and glean information for feature selection in our high-dimensionality datasets.
This report will provide precision, recall, and f-measure values for Decision Trees built on the orthographic; orthographic + morphological; and orthographic + morphological + lexical feature sets for the Adverse Reaction, Indication, Active Ingredient, and Inactive Ingredient entities. A viewable Decision Tree structure will also be generated for each fold.
<hr>
The file 'decisiontree.py' builds a Decision Tree classifier on the sparse format ARFF file passed in as a parameter. This file is saved in the models directory with the format 'decisiontree_[featuresets]_[entity name].pkl' <br>
The file 'evaluate_decisiontree.py' evaluates a given Decision Tree model stored inside a '.pkl' file, outputting appropriate statistics and saving a pdf image of the underlying decision structure associated with the given model.
All ARFF files were cleaned with 'arff_translator.py'. This cleaning consisted of removing a comma from each instance that was mistakenly inserted during file creation.
Step2: Adverse Reaction Feature Set
Orthographic Features
Step3: Average Precision
Step5: The above tree suggests that solely orthographic information may not be enough to train the classifier. Notice that the left subtree of the root node splits on the 'single_character' feature. Clearly an adverse reaction would not be a single character, yet the tree predicts that any instance where 'single_character' holds the value of 1 would in fact be an adverse reaction.
<hr>
Orthographic + Morphological Features
Step6: Average Precision
Step8: The decision tree structure above confirms that the features maximizing split purity are dominantly morphological; that is, orthographic features may just be serving as noise if they are included.
<hr>
Orthographic + Morphological + Lexical Features
Step9: Average Precision | Python Code:
#python3 arff_translator.py [filename]
Explanation: Experimental Results from a Decision Tree based NER model
Decision Trees, as opposed to other machine learning techniques such as SVMs and Neural Networks, provide a human-interpretable classification model. We will exploit this to both generate pretty pictures and glean information for feature selection in our high-dimensionality datasets.
This report will provide precision, recall, and f-measure values for Decision Trees built on the orthographic; orthographic + morphological; and orthographic + morphological + lexical feature sets for the Adverse Reaction, Indication, Active Ingredient, and Inactive Ingredient entities. A viewable Decision Tree structure will also be generated for each fold.
<hr>
The file 'decisiontree.py' builds a Decision Tree classifier on the sparse format ARFF file passed in as a parameter. This file is saved in the models directory with the format 'decisiontree_[featuresets]_[entity name].pkl' <br>
The file 'evaluate_decisiontree.py' evaluates a given Decision Tree model stored inside a '.pkl' file, outputting appropriate statistics and saving a pdf image of the underlying decision structure associated with the given model.
All ARFF files were cleaned with 'arff_translator.py'. This cleaning consisted of removing a comma from each instance that was mistakenly inserted during file creation.
End of explanation
import subprocess
"""Creates models for each fold and runs evaluation with results"""
import os  # needed by the os.system() calls below
featureset = "o"
entity_name = "adversereaction"
for fold in range(1,1): #training has already been done
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
os.system("python3 decisiontree.py -tr %s" % (training_data))
for fold in range(1,11):
testing_data = "../ARFF_Files/%s_ARFF/_%s/_test/%s_test-%i.arff" % (entity_name, featureset, entity_name, fold)
output = subprocess.check_output("python3 evaluate_decisiontree.py -te %s" % (testing_data), shell=True)
print(output.decode('utf-8'))
Explanation: Adverse Reaction Feature Set
Orthographic Features
End of explanation
import graphviz
from sklearn.externals import joblib
from Tools import arff_converter
from sklearn import tree
featureset = "o" #Careful with highly dimensional datasets
entity_name = "adversereaction"
fold = 1 #change this to display a graph of the decision tree structure for a fold
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
dataset = arff_converter.arff2df(training_data)
dtree = joblib.load('../Models/decisiontree/%s_%s/decisiontree_%s_%s_train-%i.arff.pkl' % (entity_name, featureset, featureset, entity_name,fold))
tree.export_graphviz(dtree,
out_file="visual/temptree1.dot",
feature_names=dataset.columns.values[:-1],
class_names=["Entity", "Non-Entity"], label='all',
filled=True, rounded=True, proportion=False, leaves_parallel=True,
special_characters=True,
max_depth=3 #change for more detail, careful with large datasets
)
with open("visual/temptree1.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
#graphviz.Source(dot_graph).view()
#the above line is a fullscreen alternative, also generates a temporary file that requires manual removal
Explanation: Average Precision: 0.6329762
Average Recall : 0.0080615<br>
Average F-Measure: 0.0158644<br>
Rather lackluster performance.
End of explanation
import subprocess
"""Creates models for each fold and runs evaluation with results"""
import os  # needed by the os.system() calls below
featureset = "om"
entity_name = "adversereaction"
for fold in range(1,1): #training has already been done
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
os.system("python3 decisiontree.py -tr %s" % (training_data))
for fold in range(1,11):
testing_data = "../ARFF_Files/%s_ARFF/_%s/_test/%s_test-%i.arff" % (entity_name, featureset, entity_name, fold)
output = subprocess.check_output("python3 evaluate_decisiontree.py -te %s" % (testing_data), shell=True)
print(output.decode('utf-8'))
Explanation: The above tree suggests that solely orthographic information may not be enough to train the classifier. Notice that the left subtree of the root node splits on the 'single_character' feature. Clearly an adverse reaction would not be a single character, yet the tree predicts that any instance where 'single_character' holds the value of 1 would in fact be an adverse reaction.
<hr>
Orthographic + Morphological Features
End of explanation
import graphviz
from sklearn.externals import joblib
from Tools import arff_converter
from sklearn import tree
featureset = "om" #Careful with highly dimensional datasets
entity_name = "adversereaction"
fold = 2 #change this to display a graph of the decision tree structure for a fold
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
dataset = arff_converter.arff2df(training_data)
dtree = joblib.load('../Models/decisiontree/%s_%s/decisiontree_%s_%s_train-%i.arff.pkl' % (entity_name, featureset, featureset, entity_name,fold))
tree.export_graphviz(dtree,
out_file="visual/temptree.dot",
feature_names=dataset.columns.values[:-1],
class_names=["Entity", "Non-Entity"], label='all',
filled=True, rounded=True, proportion=False, leaves_parallel=True,
special_characters=True,
max_depth=3 #change for more detail, careful with large datasets
)
with open("visual/temptree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
#graphviz.Source(dot_graph).view()
#the above line is a fullscreen alternative, also generates a temporary file that requires manual removal
Explanation: Average Precision: 0.6423055
Average Recall : 0.4637322<br>
Average F-Measure: 0.5329495<br>
It appears adding in the morphological features greatly increased classifier performance.<br>
Below, find the underlying decision tree structure representing the classifier.
End of explanation
import subprocess
"""Creates models for each fold and runs evaluation with results"""
import os  # needed by the os.system() calls below
featureset = "omt"
entity_name = "adversereaction"
for fold in range(1,1): #training has already been done
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
os.system("python3 decisiontree.py -tr %s" % (training_data))
for fold in range(1,11):
testing_data = "../ARFF_Files/%s_ARFF/_%s/_test/%s_test-%i.arff" % (entity_name, featureset, entity_name, fold)
output = subprocess.check_output("python3 evaluate_decisiontree.py -te %s" % (testing_data), shell=True)
print(output.decode('utf-8'))
Explanation: The decision tree structure above confirms that the features maximizing split purity are dominantly morphological; that is, orthographic features may just be serving as noise if they are included.
<hr>
Orthographic + Morphological + Lexical Features
End of explanation
import graphviz
from sklearn.externals import joblib
from Tools import arff_converter
from sklearn import tree
featureset = "omt" #Careful with large datasets
entity_name = "adversereaction"
fold = 10 #change this to display a graph of the decision tree structure for a fold
training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
dataset = arff_converter.arff2df(training_data)
dtree = joblib.load('../Models/decisiontree/%s_%s/decisiontree_%s_%s_train-%i.arff.pkl' % (entity_name, featureset, featureset, entity_name,fold))
tree.export_graphviz(dtree,
out_file="visual/temp1.dot",
feature_names=dataset.columns.values[:-1],
class_names=["Entity", "Non-Entity"], label='all',
filled=True, rounded=True, proportion=False, leaves_parallel=True,
special_characters=True,
max_depth=4 #change for more detail, careful with large datasets
)
with open("visual/temp1.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
#graphviz.Source(dot_graph).view()
#the above line is a fullscreen alternative, also generates a temporary file that requires manual removal
Explanation: Average Precision: 0.6639918
Average Recall : 0.6856795<br>
Average F-Measure: 0.6662661<br>
The addition of lexical features clearly assists the classifier's recall of the minority class. It appears, however, that the inclusion of lexical features leads to a lowering of classifier precision relative to recall. This suggests that the inclusion of lexical features introduces noise that skews the decision boundary towards the majority 'Non-entity' class but is still necessary to strengthen the boundary around the minority class, as shown by the higher recall scores.
The undersampling of majority class instances coupled with feature selection may lead to favorable results on this set of combined features.
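A rough sketch of that idea is shown below; the variable names (X, y), the label string 'Non-Entity', and k=500 are placeholders and would need to be adapted to the ARFF-derived arrays used above.
# Rough sketch only: random undersampling of the majority class plus univariate
# feature selection before refitting a tree. Assumes X, y are numpy arrays with
# non-negative feature values (required by chi2).
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier
def undersample(X, y, majority_label):
    maj = np.where(y == majority_label)[0]
    mino = np.where(y != majority_label)[0]
    keep = np.random.choice(maj, size=mino.size, replace=False)
    idx = np.concatenate([keep, mino])
    return X[idx], y[idx]
X_bal, y_bal = undersample(X, y, majority_label='Non-Entity')
X_sel = SelectKBest(chi2, k=500).fit_transform(X_bal, y_bal)
clf = DecisionTreeClassifier().fit(X_sel, y_bal)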
End of explanation |
3,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Regression (in 1-D)
Gaussian processes belong to a class of non-parametric regression models in machine learning. To try and develop an understanding of what is meant by "non-parametric" in this context, let's consider as an example the modelling of a star's photometric variability as shown below.
Step1: Why use GP regression?
Step2: The formalism (i.e. the math)
Okay so we've said a lot about what GP regression is and why it can be useful but let's now turn to some mathematical background.
We said that the GP model is a multivariate Gaussian distribution of dimension $N$ where $N$ is the number of datapoints in the timeseries. Therefore the evaluation of the model at time $t_i$ is a Gaussian distribution which of course has a well-defined mean and variance. In fact, this is the definition of a GP.
Definition
Step3: In addition to the kernels specified above, additional more complicated kernels can be constructed via the linear combination of multiple kernel functions. This is the adopted case for the stellar photometric variability. Namely, we have rotationally modulated jitter arising from surface inhomogeneities on the star such as cool spots and/or bright plages in the star's chromosphere. The data's correlation is therefore expected to have a periodic component at the stellar rotation period ($P_{\text{rot}}$). However, higher order effects such as the varying lifetimes, contrast, and spatial distribution of active regions will cause the correlations to instead be quasi-periodic. This covariance kernel can commonly be constructed via the superposition of the squared exponential kernel
\begin{equation}
k(t,t') = \exp{\left( -\frac{(t-t')^2}{2 \lambda^2} \right)}
\end{equation}
and the periodic covariance kernel
\begin{equation}
k(t,t') = \exp{\left( -\Gamma^2 \sin^2{\left[ \frac{\pi |t-t'|}{P} \right]} \right)}.
\end{equation}
Together the quasi-periodic covariance kernel (with amplitude $a$) is
\begin{equation}
k(t,t') = a^2 \exp{\left( -\frac{(t-t')^2}{2 \lambda^2} -\Gamma^2 \sin^2{\left[ \frac{\pi |t-t'|}{P} \right]} \right)}.
\end{equation}
The quasi-periodic covariance kernel therefore has a set of four hyperparameters $\theta=(a,\lambda,\Gamma,P)$ where $a$ is the amplitude of the correlations, $\lambda$ is the exponentially decaying timescale (i.e. measurements further away from each other in time are less correlated than those that are close together), $\Gamma$ is this weird 'coherence' scale, and $P$ is the periodicity of the periodic component ($P=P_{\text{rot}}$ in the photometric light curve). See http
Step4: We can now compute the covariance matrix $K$ from our kernel $k$ and photometric measurement errors $\sigma_i$ via
\begin{equation}
K_{ij} = \sigma_i^2 \delta_{ij} + k(t_i,t_j),
\end{equation}
where $\delta_{ij}$ is the Kronecker delta function.
Next we can compute the covariance matrix given the independent variable $t$ and measurement errors. We can then visualize our covariance matrix.
Step5: Now that we have specified the covariance matrix, we can draw samples from the GP prior function (using george.GP.sample). Remember that the GP is an $N$ dimensional multivariate Gaussian whose covariance is described by $K$. So despite us randomly sampling from a Gaussian distribution at each $t_i$, the resulting regression model will still be smooth.
Step6: We haven't told the GP about our data yet so the samples from the prior do not trace the observations but they do exhibit the expected covariance properties. Namely, smooth evolution (small $\Gamma$) with a distinct periodicity ($P\sim 122$ days) and a long timescale decay in amplitude ($\lambda \sim 6.5 \times 10^4$ days).
Conditioning the GP on the data
We can compute the posterior function (up to a constant) from which to sample given the input dataset and the likelihood function. Recall from Bayes' theorem that
\begin{equation}
\ln{\text{posterior}} \propto \ln{\text{prior}} + \ln{\text{likelihood}}.
\end{equation}
For a GP with specified mean function $\boldsymbol{\mu}$ and covariance matrix $K$, the logarithmic likelihood function for the data $\mathbf{y}$ can be computed analytically via
\begin{equation}
\ln{\mathcal{L}} = -\frac{1}{2} \left( \mathbf{r}^T K^{-1} \mathbf{r} + \ln{\mathrm{det} K} + N \ln{2 \pi} \right),
\end{equation}
where $\mathbf{r} \equiv \mathbf{y} - \boldsymbol{\mu}$ is the residual vector. For our fiducial light curve in differential magnitudes, the data are already centered at zero so we take $\boldsymbol{\mu}$ to be the zero vector.
With the values of $\theta$ specified, we can compute the predictive posterior mean $M$ and covariance $C$ at epochs $t'$ via
\begin{align}
M &= K(t'_i, t_j) K(t_i,t_j)^{-1} y(t_j), \\
C &= K(t'_i,t'_j) - K(t'_i, t_j) K(t_i,t_j)^{-1} K(t'_i, t_j)^T
\end{align}
and consequently sample from the predictive posterior distribution. However we are often most interested in the predictive mean (maximum a-posteriori model) and its variance (confidence intervals) at previously unseen epochs $t'$.
Step7: Note that the GP predictive model is well-constrained where there exists training data. At epochs far from the epochs in the training set $T=t_1,\cdots,t_N$, the covariance properties still constrain the predictive mean but the model variance becomes understandably large.
Step8: Optimizing the hyperparameters
The first method of GP hyperparameter optimization is just likelihood maximization and is built into george via scipy.optimize. This method is admittedly not very robust. Let's compare our previously adopted hyperparameter values to those obtained from the built-in likelihood maximization routine.
Step9: Run the above cell a few times. The optimal hyperparameters tend to vary sometimes by orders of magnitude (i.e. P).
Another common method is to use MCMC sampling to obtain a maximum a posteriori estimate of the hyperparameter values and their uncertainties. To do so one must evaluate the ln likelihood function at each step in their MCMC chain, which requires a matrix inversion operation. George has a built-in method to compute the inverse of the covariance matrix via Cholesky decomposition. We can therefore fold the lnlikelihood method into our own MCMC code for a given set of sampled hyperparameter values $\theta'$. Note that the dependent variable must already be corrected by the mean function; i.e. feed the residual vector ($\textbf{r} \equiv \textbf{y} - \boldsymbol{\mu}$) to george. | Python Code:
import matplotlib.image as mpimg
import pylab as plt
import numpy as np
%matplotlib inline
# Plot the (binned) photometric light curve
from MEarthphotometry import * # custom class
self = loadpickle('MEarthphot')
t, y, ey = self.bjdtrimbin, self.magtrimbin, self.emagtrimbin
# Plotting
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
t0 = round(min(self.bjd))
ax.plot(self.bjdtrim-t0, self.magtrim, 'k.', alpha=.1)
ax.errorbar(t-t0, y, ey, fmt='o', color='#a50f15', capsize=0, elinewidth=.8)
plt.gca().invert_yaxis()
ax.set_xlabel('BJD - %i'%t0), ax.set_ylabel('Differential Magnitude'), ax.set_xlim((-10,max(self.modelbjd)-t0)), ax.set_ylim((.02,-.02))
Explanation: Gaussian Process Regression (in 1-D)
Gaussian processes belong to a class of non-parametric regression models in machine learning. To try and develop an understanding of what is meant by "non-parametric" in this context, let's consider as an example the modelling of a star's photometric variability as shown below.
End of explanation
# Plotting
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
t0 = round(min(self.bjd))
ax.plot(self.bjdtrim-t0, self.magtrim, 'k.', alpha=.1)
ax.errorbar(t-t0, y, ey, fmt='o', color='#a50f15', capsize=0, elinewidth=.8)
ax.plot(self.modelbjd-t0, self.model, '-', c='#fb6a4a', lw=2)
ax.fill_between(self.modelbjd-t0, self.model-2*self.modelerr, self.model+2*self.modelerr, color='#fb6a4a', alpha=.5)
plt.gca().invert_yaxis()
ax.set_xlabel('BJD - %i'%t0), ax.set_ylabel('Differential Magnitude'), ax.set_xlim((-10,max(self.modelbjd)-t0)), ax.set_ylim((.02,-.02))
Explanation: Why use GP regression?: parametric models aren't always complete
In the frequentist approach, one may assume a functional form of the model to the dataset.
Once the best-fit parameters of that model have been optimized, the model can be used to predict the values of $y_i$ at previously unseen epochs $t_i$. Recall that this is the primary goal of regression modelling in machine learning. In the above example, the brightness variations appear roughly periodic in time so we might be tempted to model the light curve with a purely sinusoidal function of the form
\begin{equation}
\Psi_i = A \sin{\left( \frac{2\pi t_i}{P_{\text{rot}}} + \phi \right)}.
\end{equation}
With this optimized model we can estimate the probability of obtaining certain values of the star's brightness $\Psi_i$ the next time we go back to our telescope to observe the star by evaluating the model at those previously unseen epochs. This is great! We've obtained a model that we can use for predictive purposes! However, we have assumed a functional form of the model which itself can be thought of as a hyperparameter of the model to the dataset. Indeed this model may not be complete, as you may suspect to have been the case in this example given that the stellar photometric variability does not appear to be purely sinusoidal (i.e. the decaying photometric amplitude with time). Effects of the varying lifetimes, contrast, and spatial distribution of active regions on the stellar surface over adjacent rotation cycles may result in brightness variations which are not strictly periodic. In this case a purely sinusoidal function clearly does not completely capture the structure present in the star's light curve and we should pursue a more complete model.
Why use GP regression?: advantages of non-parametricity (is that a real word?)
An alternative approach is to assign a prior probability to every possible function that could potentially model our photometric timeseries. We'll call this the Bayesian approach because it incorporates our prior knowledge of what can reasonably model our (training) dataset and potentially any unseen data. But computational restrictions dictate that we cannot possibly evaluate every possible function. Instead, we can use a Gaussian process (GP) to model the covariance between each individual datapoint and every other.
GP regression is a method of supervised learning wherein the covariance properties of the input dataset are described by a set of hyperparameters which are trained on the data itself. But what is the GP? The GP, with its optimized set of hyperparameters, is itself a multivariate Gaussian distribution of dimension $N$ where $N$ is the number of datapoints in the training set. Therefore the evaluation of the model at each epoch $t_i$ is not deterministic but instead is a 1D Gaussian distribution (also called the predictive distribution) whose mean value gives us the most-likely model prediction $\Psi(t_i)$. Because we do not consider a specific functional form of the model to the data as we did previously by modelling the star's photometric variations with a sinusoidal function, the GP regression model is called non-parametric. It implies that the 'shape' of the model is dictated by the data itself but whose covariance properties are described by some chosen prior regarding how the data are correlated with each other. The advantage of non-parametric modelling is that it allows us to make fewer prior assumptions regarding the functional form of a model to our timseries thus giving much more freedom to the model.
Returning now to our example of the varying stellar brightness, below I've plotted an optimized mean GP regression model and its 95% confidence intervals to illustrate the nature of the model as being non-parametic. Soon we'll discuss how to obtain this GP model in practice.
End of explanation
import george
from george import kernels
# uncomment below to read about some popular built-in covariance kernels
#help(kernels.WhiteKernel)
#help(kernels.ExpSquaredKernel)
#help(kernels.Matern32Kernel)
#help(kernels.ExpSine2Kernel)
Explanation: The formalism (i.e. the math)
Okay so we've said a lot about what GP regression is and why it can be useful but let's now turn to some mathematical background.
We said that the GP model is a multivariate Gaussian distribution of dimension $N$ where $N$ is the number of datapoints in the timeseries. Therefore the evaluation of the model at time $t_i$ is a Gaussian distribution which of course has a well-defined mean and variance. In fact, this is the definition of a GP.
Definition: a GP on any set $T$ is a set of random variables such that $\forall t_1,\cdots,t_N$ in $T$, the vector $M = (M_1,\cdots,M_N)$ is a multivariate Gaussian.
Or,
a GP is any set of random variables for which any finite subset is Gaussian distributed. Being multivariate, a GP can be thought of as the probability of obtaining a model to a dataset or a Gaussian distribution of functions.
There are fancy theorems regarding the existence of GPs but the only important take-away for our purposes is that for any set $T$ (like our array of observation epochs), any real-valued mean function $\mu(t)$, and any real-valued covariance function $k(t,t')$, there exists a GP on $T$ with expectation value $=\mu(t)$ and cov$(M_s,M_t)=k(s,t)$. Basically, we can always construct a valid GP to describe a timeseries with appropriately chosen mean and covariance functions.
Computing our GP model (in python)
I'll quickly introduce the code that I use most often as it's not part of sklearn although sklearn does have (some) capabilities. The python library is called george (http://dan.iel.fm/george/current/). The library contains functions to perform tasks such as GP definition and computation as well as sampling from your GP. We'll see its implementation as we progress towards computing the GP model shown in the figure above.
Mean functions
Recall that a unique GP on the set of observation epochs $T=(t_1,\cdots,t_N)$ is specified by a mean function $\mu(t)$ and covariance function $k(t,t')$ describing how the stellar brightness at time $t$ is correlated with the stellar brightness at time $t'$. The mean function is used to 'zero-out' the timeseries such that the GP only acts to model the correlated residuals. For example, in the case of radial velocity observations of a single-planetary system which is also contaminated with stellar jitter, the mean function would be the keplarian signal of the planet and the GP would be used to model the correlated noise arising from jitter; i.e. star spots, granulation, etc. This specification of the mean function in george is not done explicitly. Instead it is expected that the dataset minus the mean function be specified when initializing a GP with george (the GP object).
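A tiny stand-in example (mine, not the author's) of what "the dataset minus the mean function" means in practice:
# Stand-in illustration: subtract your chosen mean model first, then hand the
# residuals to george. Here the 'mean model' is just a constant offset.
import numpy as np
mean_model = np.median(y) * np.ones(y.size)
resid = y - mean_model   # this residual vector is what the GP would be asked to model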
Covariance functions
Initializing the GP does require that we specify the covariance kernel or the way in which the data in our training set are correlated. There are many valid covarince functions to choose from. Some of the most popular ones are included in the george kernel library (see http://dan.iel.fm/george/current/user/kernels).
End of explanation
# hyperparameters for the GJ 1132 light curve
a, l, G, Prot = 1.439e-5, 6.522e4, 6.883e-1, 1.224e2
my_gp = george.GP(a * (kernels.ExpSquaredKernel(l) * kernels.ExpSine2Kernel(G, Prot)))
Explanation: In addition to the kernels specified above, additional more complicated kernels can be constructed via the linear combination of multiple kernel functions. This is the adopted case for the stellar photometric variability. Namely, we have rotationally modulated jitter arising from surface inhomogeneities on the star such as cool spots and/or bright plages in the star's chromosphere. The data's correlation is therefore expected to have a periodic component at the stellar rotation period ($P_{\text{rot}}$). However, higher order effects such as the varying lifetimes, contrast, and spatial distribution of active regions will cause the correlations to instead be quasi-periodic. This covariance kernel can commonly be constructed via the superposition of the squared exponential kernel
\begin{equation}
k(t,t') = \exp{\left( -\frac{(t-t')^2}{2 \lambda^2} \right)}
\end{equation}
and the periodic covariance kernel
\begin{equation}
k(t,t') = \exp{\left( -\Gamma^2 \sin^2{\left[ \frac{\pi |t-t'|}{P} \right]} \right)}.
\end{equation}
Together the quasi-periodic covariance kernel (with amplitude $a$) is
\begin{equation}
k(t,t') = a^2 \exp{\left( -\frac{(t-t')^2}{2 \lambda^2} -\Gamma^2 \sin^2{\left[ \frac{\pi |t-t'|}{P} \right]} \right)}.
\end{equation}
The quasi-periodic covariance kernel therefore has a set of four hyperparameters $\theta=(a,\lambda,\Gamma,P)$ where $a$ is the amplitude of the correlations, $\lambda$ is the exponentially decaying timescale (i.e. measurements further away from each other in time are less correlated than those that are close together), $\Gamma$ is this weird 'coherence' scale, and $P$ is the periodicity of the periodic component ($P=P_{\text{rot}}$ in the photometric light curve). See http://dan.iel.fm/gp.js/ to play with the effect of varying the hyperparameter values. For the quasi-periodic kernel at the aforementioned link, the order of the hyperparameter sliders is $a,\Gamma,P,\lambda$.
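As an aside, the same kernel written out directly in numpy can help build intuition for how each hyperparameter acts; the helper below (qp_kernel) is just an illustration of the equation above and is not part of george.
#Aside: the quasi-periodic kernel of the equation above, written out in numpy
import numpy as np
def qp_kernel(t1, t2, a, lam, Gamma, P):
    dt = t1 - t2
    return a**2 * np.exp(-dt**2/(2.*lam**2) - Gamma**2*np.sin(np.pi*np.abs(dt)/P)**2)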
Using the optimized hyperparameters from my previous analysis of the above light curve, we can initialize our quasi-periodic GP with george via
End of explanation
my_gp.compute(t, ey) # matrix needs to know the times and values of the diagonal elements
# Plotting
tm = np.linspace(t.min(),t.max(),1e2) - t.min()
plt.pcolormesh(tm, tm, my_gp.get_matrix(tm), cmap=plt.get_cmap('Greys'))
plt.colorbar(), plt.axes().set_aspect('equal'), plt.gca().invert_yaxis(), plt.show()
Explanation: We can now compute the covariance matrix $K$ from our kernel $k$ and photometric measurement errors $\sigma_i$ via
\begin{equation}
K_{ij} = \sigma_i^2 \delta_{ij} + k(t_i,t_j),
\end{equation}
where $\delta_{ij}$ is the Kronecker delta function.
Next we can compute the covariance matrix given the independent variable $t$ and measurement errors. We can then visualize our covariance matrix.
End of explanation
# Plotting prior samples
Nsamples = 5
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
t0 = round(min(self.bjd))
ax.plot(t-t0, y, '.', c='#a50f15', alpha=.7)
tm = np.linspace(t.min(), t.max(), 2e2)
for i in range(Nsamples):
ax.plot(tm-t0, my_gp.sample(tm), '-')
plt.gca().invert_yaxis()
ax.set_xlabel('BJD - %i'%t0), ax.set_ylabel('Differential Magnitude'), ax.set_xlim((-10,max(self.modelbjd)-t0)), ax.set_ylim((.015,-.015))
Explanation: Now that we have specified the covariance matrix, we can draw samples from the GP prior function (using george.GP.sample). Remember that the GP is an $N$ dimensional multivariate Gaussian whose covariance is described by $K$. So despite us randomly sampling from a Gaussian distribution at each $t_i$, the resulting regression model will still be smooth.
End of explanation
tprime = np.linspace(t.min(), t.max(), 2e2)
M, C = my_gp.predict(y, tprime) # give data to evaluate likelihood at prediction epochs tprime
sig = np.sqrt(np.diag(C))
# Plotting posterior mean and one standard deviation
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
t0 = round(min(self.bjd))
ax.errorbar(t-t0, y, ey, fmt='o', color='#a50f15', capsize=0, elinewidth=.8)
ax.plot(tprime-t0, M, '-', c='#fb6a4a', lw=2)
ax.fill_between(tprime-t0, M-sig, M+sig, color='#fb6a4a', alpha=.5)
plt.gca().invert_yaxis()
ax.set_xlabel('BJD - %i'%t0), ax.set_ylabel('Differential Magnitude'), ax.set_xlim((-10,max(self.modelbjd)-t0)), ax.set_ylim((.015,-.015))
Explanation: We haven't told the GP about our data yet so the samples from the prior do not trace the observations but they do exhibit the expected covariance properties. Namely, smooth evolution (small $\Gamma$) with a distinct periodicity ($P\sim 122$ days) and a long timescale decay in amplitude ($\lambda \sim 6.5 \times 10^4$ days).
Conditioning the GP on the data
We can compute the posterior function (up to a constant) from which to sample given the input dataset and the likelihood function. Recall from Bayes' theorem that
\begin{equation}
\ln{\text{posterior}} \propto \ln{\text{prior}} + \ln{\text{likelihood}}.
\end{equation}
For a GP with specified mean function $\boldsymbol{\mu}$ and covariance matrix $K$, the logarithmic likelihood function for the data $\mathbf{y}$ can be computed analytically via
\begin{equation}
\ln{\mathcal{L}} = -\frac{1}{2} \left( \mathbf{r}^T K^{-1} \mathbf{r} + \ln{\mathrm{det} K} + N \ln{2 \pi} \right),
\end{equation}
where $\mathbf{r} \equiv \mathbf{y} - \boldsymbol{\mu}$ is the residual vector. For our fiducial light curve in differential magnitudes, the data are already centered at zero so we take $\boldsymbol{\mu}$ to be the zero vector.
With the values of $\theta$ specified, we can compute the predictive posterior mean $M$ and covariance $C$ at epochs $t'$ via
\begin{align}
M &= K(t'_i, t_j) K(t_i,t_j)^{-1} y(t_j), \\
C &= K(t'_i,t'_j) - K(t'_i, t_j) K(t_i,t_j)^{-1} K(t'_i, t_j)^T
\end{align}
and consequently sample from the predictive posterior distribution. However we are often most interested in the predictive mean (maximum a-posteriori model) and its variance (confidence intervals) at previously unseen epochs $t'$.
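As an illustration of what these equations are doing (roughly what the predict call below computes internally), the predictive mean and covariance can be written out with explicit linear algebra using the qp_kernel sketch from the earlier aside. Because george parameterizes the kernel amplitude slightly differently than the $a^2$ convention used above, the numbers are not expected to match gp.predict exactly; the point is the structure of the calculation.
#Aside: the predictive mean and covariance of the equations above, computed directly
a, lam, Gamma, P = 1.439e-5, 6.522e4, 6.883e-1, 1.224e2
Kxx = qp_kernel(t[:,None], t[None,:], a, lam, Gamma, P) + np.diag(ey**2)   # K(t,t) plus measurement errors
Ksx = qp_kernel(tprime[:,None], t[None,:], a, lam, Gamma, P)               # K(t',t)
Kss = qp_kernel(tprime[:,None], tprime[None,:], a, lam, Gamma, P)          # K(t',t')
M_direct = Ksx.dot(np.linalg.solve(Kxx, y))
C_direct = Kss - Ksx.dot(np.linalg.solve(Kxx, Ksx.T))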
End of explanation
tprime_out = np.linspace(t.min()-700, t.max()+700, 2e2)
M_out, C_out = my_gp.predict(y, tprime_out)
sig_out = np.sqrt(np.diag(C_out))
# Plotting posterior mean and one standard deviation
fig = plt.figure(figsize=(14,7))
ax = fig.add_subplot(111)
t0 = round(min(self.bjd))
ax.errorbar(t-t0, y, ey, fmt='o', color='#a50f15', capsize=0, elinewidth=.8)
ax.plot(tprime_out-t0, M_out, '-', c='#fb6a4a', lw=2)
ax.fill_between(tprime_out-t0, M_out-sig_out, M_out+sig_out, color='#fb6a4a', alpha=.5)
plt.gca().invert_yaxis()
ax.set_xlabel('BJD - %i'%t0), ax.set_ylabel('Differential Magnitude'), ax.set_xlim((tprime_out.min()-t0,tprime_out.max()-t0)), ax.set_ylim((.015,-.015))
Explanation: Note that the GP predictive model is well-constrained where there exists training data. At epochs far from the epochs in the training set $T=t_1,\cdots,t_N$, the covariance properties still constrain the predictive mean but the model variance becomes understandably large.
End of explanation
theta_orig = my_gp.kernel.pars
# create a second GP with non-optimized parameters
theta_nonopt = my_gp.kernel.pars * np.random.uniform(.5, 1.5, 4)
a, l, G, Prot = theta_nonopt
my_2nd_gp = george.GP(a * (kernels.ExpSquaredKernel(l) * kernels.ExpSine2Kernel(G, Prot)))
# Optimize these hyperparameters
success = False
while not success:
theta_opt, results = my_2nd_gp.optimize(t, y, ey)
success = results['success']
labels = ['a','l','G','P']
for i in range(4):
print '\n%s:'%labels[i]
print 'Non-optimized %s = %.3e'%(labels[i], theta_nonopt[i])
print 'Optimized %s = %.3e'%(labels[i], np.exp(theta_opt[i]))
print 'Original %s = %.3e'%(labels[i], theta_orig[i])
Explanation: Optimizing the hyperparameters
The first method of GP hyperparameter optimization is just likelihood maximization and is built into george via scipy.optimize. This method is admittedly not very robust. Let's compare our previously adopted hyperparameter values to those obtained from the built-in likelihood maximization routine.
End of explanation
# example of computing the likelihood
thetaprime = my_gp.kernel.pars * np.random.uniform(.9, 1.1, 4)
a, l, G, Prot = thetaprime
my_3rd_gp = george.GP(a * (kernels.ExpSquaredKernel(l) * kernels.ExpSine2Kernel(G, Prot)))
my_3rd_gp.compute(t, ey)
mean_func = np.zeros(t.size) # our mean function is zero
lnl = my_3rd_gp.lnlikelihood(y - mean_func)
# put the above code into your lnlikelihood function which is called at each step in your MCMC chain
Explanation: Run the above cell a few times. The optimal hyperparameters tend to vary sometimes by orders of magnitude (i.e. P).
Another common method is to use MCMC sampling to obtain a maximum a posteriori estimate of the hyperparameter values and their uncertainties. To do so one must evaluate the ln likelihood function at each step in their MCMC chain, which requires a matrix inversion operation. George has a built-in method to compute the inverse of the covariance matrix via Cholesky decomposition. We can therefore fold the lnlikelihood method into our own MCMC code for a given set of sampled hyperparameter values $\theta'$. Note that the dependent variable must already be corrected by the mean function; i.e. feed the residual vector ($\textbf{r} \equiv \textbf{y} - \boldsymbol{\mu}$) to george.
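For concreteness, a rough sketch of how this might be wired into an MCMC sampler is shown below; the log-parameterization and the prior bound on the period are assumptions of the sketch, not taken from the original analysis.
#Sketch: log-probability wrapper around george's likelihood for use in an MCMC sampler
def lnprob(theta, t, y, ey):
    a, lam, Gamma, P = np.exp(theta)            # sample the hyperparameters in log-space
    if not 0 < P < 1e3:                         # crude prior bound on the period (an assumption)
        return -np.inf
    gp = george.GP(a * (kernels.ExpSquaredKernel(lam) * kernels.ExpSine2Kernel(Gamma, P)))
    try:
        gp.compute(t, ey)
    except (ValueError, np.linalg.LinAlgError):
        return -np.inf
    return gp.lnlikelihood(y)                   # the mean function is zero for this light curve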
End of explanation |
3,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways
Step1: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (i.e. the entities whose relationships we are trying to model) extremely important.
Let's try to answer the question
Step2: Then, let's iterate over the stations DataFrame, and add in the node attributes.
Step3: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time
Step4: Exercise
Flex your memory muscles
Step5: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
Step6: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
Step7: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
Step8: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk | Python Code:
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir('datasets/'):
print('Unzip the divvy_2013.zip file in the datasets folder.')
stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], index_col='id', encoding='utf-8')
stations
trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'],
index_col=['trip_id'])
trips = trips.sort()
trips
Explanation: Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.
Storing the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large, or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes.
The Divvy bike sharing dataset is one such example of a network data set that has been stored as such.
Loading Node Lists and Adjacency Lists
Let's use the Divvy bike sharing data set as a starting point. The Divvy data set is comprised of the following data:
Stations and metadata (like a node list with attributes saved)
Trips and metadata (like an edge list with attributes saved)
The README.txt file in the Divvy directory should help orient you around the data.
End of explanation
G = nx.DiGraph()
Explanation: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (i.e. the entities whose relationships we are trying to model) extremely important.
Let's try to answer the question: "What are the most popular trip paths?" In this case, the bike station is a reasonable "unit of consideration", so we will use the bike stations as the nodes.
To start, let's initialize a directed graph G.
End of explanation
for r, d in stations.iterrows(): # call the pandas DataFrame row-by-row iterator
G.add_node(r, attr_dict=d.to_dict())
Explanation: Then, let's iterate over the stations DataFrame, and add in the node attributes.
End of explanation
# # Run the following code at your own risk :)
# for r, d in trips.iterrows():
# start = d['from_station_id']
# end = d['to_station_id']
# if (start, end) not in G.edges():
# G.add_edge(start, end, count=1)
# else:
# G.edge[start][end]['count'] += 1
for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):
G.add_edge(start, stop, count=len(d))
G.edges(data=True)
len(G.edges())
len(G.nodes())
Explanation: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time :-). Alternatively, I would suggest doing a pandas groupby.
End of explanation
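To make the "which stations are important" question concrete, here is a minimal sketch using degree centrality; betweenness_centrality can be swapped in the same way, but it is slower on this graph.
# rank stations by degree centrality and show the five most connected ones
deg = nx.degree_centrality(G)
top5 = sorted(deg.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top5)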
from collections import Counter
# Count the number of edges that have x trips recorded on them.
trip_count_distr = ______________________________
# Then plot the distribution of these
plt.scatter(_______________, _______________, alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
Explanation: Exercise
Flex your memory muscles: can you make a scatter plot of the distribution of the number of edges that have a certain number of trips?
The key should be the number of trips between two nodes, and the value should be the number of edges that have that number of trips.
End of explanation
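One possible way to fill in the blanks above (a sketch; it assumes the same imports as the exercise cell):
# count how many edges carry each trip count
trip_count_distr = Counter([d['count'] for _, _, d in G.edges(data=True)])
plt.scatter(list(trip_count_distr.keys()), list(trip_count_distr.values()), alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')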
# Filter the edges to just those with more than 100 trips.
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
# Fill in your code here.
len(G_filtered.edges())
Explanation: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
End of explanation
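One possible completion of the filtering loop (a sketch; it simply removes low-traffic edges from the copy):
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
    if d['count'] < 100:
        G_filtered.remove_edge(u, v)
print(len(G_filtered.edges()))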
# Fill in your code here.
Explanation: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
End of explanation
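A minimal way to complete this exercise (a sketch; the default layout is assumed):
nx.draw(G_filtered)
plt.show()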
nodes = sorted(_________________, key=lambda x:_________________)
edges = ___________
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/divvy.png', dpi=300)
Explanation: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
End of explanation
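One plausible way to fill in the two blanks in the CircosPlot cell above (a sketch; it sorts nodes by their degree in the unfiltered graph and plots only the filtered edges):
nodes = sorted(G.nodes(), key=lambda n: G.degree(n))
edges = G_filtered.edges()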
nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
Explanation: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk:
nx.write_gpickle(G, handle)
To load from disk:
G = nx.read_gpickle(handle)
End of explanation |
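As a quick sanity check that the pickle round trip preserved the graph (a minimal sketch using the objects defined above):
print(len(G.nodes()), len(G.edges()))  # should match the counts computed earlier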
3,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download
Step1: Mission
Step2: Testing the Solution | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download: http://github.com/dsacademybr
End of explanation
class Grid(object):
def find_path(self, matrix):
if matrix is None or not matrix:
return None
cache = {}
path = []
if self._find_path(matrix, len(matrix) - 1,
len(matrix[0]) - 1, cache, path):
return path
else:
return None
def _find_path(self, matrix, row, col, cache, path):
if row < 0 or col < 0 or not matrix[row][col]:
return False
cell = (row, col)
if cell in cache:
return cache[cell]
cache[cell] = (row == 0 and col == 0 or
self._find_path(matrix, row, col - 1, cache, path) or
self._find_path(matrix, row - 1, col, cache, path))
if cache[cell]:
path.append(cell)
return cache[cell]
Explanation: Mission: Implement an algorithm to move a robot from the top-left corner to the bottom-right corner of a grid.
Difficulty level: Medium
Assumptions
Are there restrictions on how the robot moves?
    * The robot can only move right and down
Are some cells invalid (out of bounds)?
    * Yes
Can we assume the start and end cells are valid cells?
    * Yes
Is this a rectangular grid, i.e. the grid is not jagged?
    * Yes
Will there always be a valid way for the robot to reach the bottom-right corner?
    * No, return None
Can we assume the inputs are valid?
    * No
Can we assume this fits in memory?
    * Yes
Test Cases
<pre>
o = valid cell
x = invalid cell
0 1 2 3
0 o o o o
1 o x o o
2 o o x o
3 x o o o
4 o o x o
5 o o o x
6 o x o x
7 o x o o
</pre>
General case
Expected output = [(0, 0), (1, 0), (2, 0),
(2, 1), (3, 1), (4, 1),
(5, 1), (5, 2), (6, 2),
(7, 2), (7, 3)]
No valid path, e.g. row 7, col 2 is invalid
No input
Empty matrix
Algorithm
To get to row r and column c [r, c], we will need to have gone:
Right from [r, c-1] if this is a valid cell - [Path 1]
Down from [r-1, c] if this is a valid cell - [Path 2]
If we look at [Path 1], to get to [r, c-1], we will need to have gone:
Right from [r, c-2] if this is a valid cell
Down from [r-1, c-1] if this is a valid cell
Continue this process until we reach the start cell or until we find that there is no path.
Base case:
If the input row or col are < 0, or if [row, col] is not a valid cell
Return False
Recursive case:
We'll memoize the solution to improve performance.
Use the memo to see if we've already processed the current cell
If any of the following is True, append the current cell to the path and set our result to True:
We are at the start cell
We get a True result from a recursive call on:
[row, col-1]
[row-1, col]
Update the memo
Return the result
Complexity:
* Time: O(row * col)
* Space: O(row * col) for the recursion depth
Solution
End of explanation
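As a quick usage example before the formal tests (a sketch; the small 3x3 grid below is an illustrative assumption, not part of the original notebook):
grid = Grid()
small = [[1, 1, 1],
         [0, 1, 0],
         [1, 1, 1]]
print(grid.find_path(small))  # expected: [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]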
%%writefile missao3.py
from nose.tools import assert_equal
class TestGridPath(object):
def test_grid_path(self):
grid = Grid()
assert_equal(grid.find_path(None), None)
assert_equal(grid.find_path([[]]), None)
max_rows = 8
max_cols = 4
matrix = [[1] * max_cols for _ in range(max_rows)]
matrix[1][1] = 0
matrix[2][2] = 0
matrix[3][0] = 0
matrix[4][2] = 0
matrix[5][3] = 0
matrix[6][1] = 0
matrix[6][3] = 0
matrix[7][1] = 0
result = grid.find_path(matrix)
expected = [(0, 0), (1, 0), (2, 0),
(2, 1), (3, 1), (4, 1),
(5, 1), (5, 2), (6, 2),
(7, 2), (7, 3)]
assert_equal(result, expected)
matrix[7][2] = 0
result = grid.find_path(matrix)
assert_equal(result, None)
print('Your solution ran successfully. Congratulations!')
def main():
test = TestGridPath()
test.test_grid_path()
if __name__ == '__main__':
main()
%run -i missao3.py
Explanation: Testing the Solution
End of explanation |
3,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step7: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the unnormalized class score with respect to the image, not the gradient of the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
def compute_saliency_maps(X, y, model):
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
scores, cache = model.forward(X, mode='test')
dscores = np.zeros_like(scores)
for n,idxN in enumerate(y):
dscores[n, idxN ] = 1
dX, grads = model.backward(dscores, cache)
saliency = np.abs(dX).max(axis=1)  # max over channels of the absolute gradient, per the paper
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the unnormalized class score with respect to the image, not the gradient of the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
End of explanation
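As an aside, the per-example loop that builds dscores above can be replaced by a single fancy-indexing assignment (a sketch using the same variable names):
dscores = np.zeros_like(scores)
dscores[np.arange(X.shape[0]), y] = 1  # one 1 per row, at each example's ground-truth class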
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
from cs231n.layers import softmax_loss
def make_fooling_image(X, target_y, model):
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
eps = 0.25
for i in range(1000):
scores, cache = model.forward(X_fooling)
if scores[0].argmax() == target_y:
# You fool!
print 'You fool! Iterations: ',i
break
_, dscores = softmax_loss(scores, target_y)
dX, grads = model.backward(dscores, cache)
# Sign of the gradient (values in {-1, 0, +1})
sign_dX = np.sign(dX)
# adding an imperceptibly small vector whose elements are equal to
# the sign of the elements of the gradient of the cost function with
# respect to the input [https://arxiv.org/pdf/1412.6572v3.pdf]
X_fooling -= eps * sign_dX
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
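A quick way to confirm that the perturbation stays small (a sketch using the arrays computed in the cell above):
print 'max |X - X_fooling|:', np.abs(X - X_fooling).max()
print 'mean |X - X_fooling|:', np.abs(X - X_fooling).mean()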
3,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
+
Word Count Lab
Step2: (1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO
Step3: (1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.
Step4: (1d) Pass a lambda function to map
Let's create the same RDD using a lambda function.
Step5: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
Step6: (1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.
Step7: Part 2
Step8: (2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.
Step9: (2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.
Step10: (2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.
Step11: Part 3
Step12: (3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
Step14: Part 4
Step16: (4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are
Step17: (4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.
Step18: (4d) Words from lines
Before we can use the wordCount() function, we have to address two issues with the format of the RDD
Step19: (4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''.
Step20: (4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts. | Python Code:
wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']
wordsRDD = sc.parallelize(wordsList, 4)
# Print out the type of wordsRDD
print type(wordsRDD)
Explanation: +
Word Count Lab: Building a word count application
This lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet.
During this lab we will cover:
Part 1: Creating a base RDD and pair RDDs
Part 2: Counting with pair RDDs
Part 3: Finding unique words and a mean value
Part 4: Apply word count to a file
Note that, for reference, you can look up the details of the relevant methods in Spark's Python API
Part 1: Creating a base RDD and pair RDDs
In this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words.
(1a) Create a base RDD
We'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def makePlural(word):
Adds an 's' to `word`.
Note:
This is a simple function that only adds an 's'. No attempt is made to follow proper
pluralization rules.
Args:
word (str): A string.
Returns:
str: A string with 's' added to it.
return word + "s"
print makePlural('cat')
# One way of completing the function
def makePlural(word):
return word + 's'
print makePlural('cat')
# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')
Explanation: (1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()
# TEST Apply makePlural to the base RDD(1c)
Test.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralRDD')
Explanation: (1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralLambdaRDD = wordsRDD.map(lambda a: a + "s")
print pluralLambdaRDD.collect()
# TEST Pass a lambda function to map (1d)
Test.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralLambdaRDD (1d)')
Explanation: (1d) Pass a lambda function to map
Let's create the same RDD using a lambda function.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralLengths = (pluralRDD
.map(lambda a: len(a))
.collect())
print pluralLengths
# TEST Length of each word (1e)
Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4],
'incorrect values for pluralLengths')
Explanation: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordPairs = wordsRDD.map(lambda a: (a,1))
print wordPairs.collect()
# TEST Pair RDDs (1f)
Test.assertEquals(wordPairs.collect(),
[('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],
'incorrect value for wordPairs')
Explanation: (1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Note that groupByKey requires no parameters
wordsGrouped = wordPairs.groupByKey()
for key, value in wordsGrouped.collect():
print '{0}: {1}'.format(key, list(value))
# TEST groupByKey() approach (2a)
Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),
[('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],
'incorrect value for wordsGrouped')
Explanation: Part 2: Counting with pair RDDs
Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.
A naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.
(2a) groupByKey() approach
An approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():
The operation requires a lot of data movement to move all the values into the appropriate partitions.
The lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker.
Use groupByKey() to generate a pair RDD of type ('word', iterator).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordCountsGrouped = wordsGrouped.map(lambda (a,b): (a, sum(b)))
print wordCountsGrouped.collect()
# TEST Use groupByKey() to obtain the counts (2b)
Test.assertEquals(sorted(wordCountsGrouped.collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsGrouped')
Explanation: (2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Note that reduceByKey takes in a function that accepts two values and returns a single value
wordCounts = wordPairs.reduceByKey(lambda a,b: a+b)
print wordCounts.collect()
# TEST Counting using reduceByKey (2c)
Test.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCounts')
Explanation: (2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordCountsCollected = (wordsRDD
.map(lambda a: (a,1))
.reduceByKey(lambda a,b: a+b)
.collect())
print wordCountsCollected
# TEST All together (2d)
Test.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsCollected')
Explanation: (2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
uniqueWords = wordsRDD.distinct().count()
print uniqueWords
# TEST Unique words (3a)
Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')
Explanation: Part 3: Finding unique words and a mean value
(3a) Unique words
Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from operator import add
totalCount = (wordCounts
.map(lambda (a,b): b)
.reduce(lambda a,b: a+b))
average = totalCount / float(wordCounts.distinct().count())
print totalCount
print round(average, 2)
# TEST Mean using reduce (3b)
Test.assertEquals(round(average, 2), 1.67, 'incorrect value of average')
Explanation: (3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
End of explanation
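An equivalent, more concise way to get the same mean (a sketch; values() extracts the counts from the pair RDD and mean() is a built-in numeric action):
print wordCounts.values().mean()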
# TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListRDD):
Creates a pair RDD with word counts from an RDD of words.
Args:
wordListRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
return (wordListRDD
.map(lambda a : (a,1))
.reduceByKey(lambda a,b: a+b))
print wordCount(wordsRDD).collect()
# TEST wordCount function (4a)
Test.assertEquals(sorted(wordCount(wordsRDD).collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect definition for wordCount function')
Explanation: Part 4: Apply word count to a file
In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.
(4a) wordCount function
First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
import re
def removePunctuation(text):
Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
text (str): A string.
Returns:
str: The cleaned up string.
return re.sub(r'[^a-z0-9 ]', '', text.lower()).strip()
print removePunctuation('Hi, you!')
print removePunctuation(' No under_score!')
# TEST Capitalization and punctuation (4b)
Test.assertEquals(removePunctuation(" The Elephant's 4 cats. "),
'the elephants 4 cats',
'incorrect definition for removePunctuation function')
Explanation: (4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:
Words should be counted independent of their capitalization (e.g., Spark and spark should be counted as the same word).
All punctuation should be removed.
Any leading or trailing spaces on a line should be removed.
Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful.
End of explanation
# Just run this code
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
print '\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'
.take(15))
Explanation: (4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
shakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(" "))
shakespeareWordCount = shakespeareWordsRDD.count()
print shakespeareWordsRDD.top(5)
print shakespeareWordCount
# TEST Words from lines (4d)
# This test allows for leading spaces to be removed either before or after
# punctuation is removed.
Test.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,
'incorrect value for shakespeareWordCount')
Test.assertEquals(shakespeareWordsRDD.top(5),
[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],
'incorrect value for shakespeareWordsRDD')
Explanation: (4d) Words from lines
Before we can use the wordcount() function, we have to address two issues with the format of the RDD:
The first issue is that that we need to split each line by its spaces.
The second issue is we need to filter out empty lines.
Apply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
shakeWordsRDD = shakespeareWordsRDD.filter(lambda a: a != "")
shakeWordCount = shakeWordsRDD.count()
print shakeWordCount
# TEST Remove empty elements (4e)
Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')
Explanation: (4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
top15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda(a,b): -b)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))
# TEST Count the words (4f)
Test.assertEquals(top15WordsAndCounts,
[(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],
'incorrect value for top15WordsAndCounts')
Explanation: (4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.
End of explanation |
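As a small preview of stopword removal (a sketch; the tiny stopword list below is an illustrative assumption, not the list used in the later lab):
stopwords = set(['the', 'and', 'i', 'to', 'of', 'a', 'you', 'my', 'in', 'that'])
shakeNonStopRDD = shakeWordsRDD.filter(lambda w: w not in stopwords)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c),
                    wordCount(shakeNonStopRDD).takeOrdered(15, lambda (w, c): -c)))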
3,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Business Question
Step1: The business question
Step2: Note On Correlation and Slope
Step3: Radio correlation
Step4: Newspaper correlation
Step5: All media correlation
Step6: Remarks until here
Step7: Necessary checks
Step10: Feature selection
Step11: Issues | Python Code:
# necessary imports
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../lib_plot")
import scatter_boxplot as sbp
%matplotlib inline
import seaborn as sns
from scipy.stats.stats import pearsonr
from scipy import stats
# The dataset is provided by James et al., Introduction to Statistical Learning.
# here: http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv
df = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
df.head()
Explanation: Business Question:
Which marketing media contribute to Sales?
(Inspired by the "Introduction to Statistical Learning" of James et al.)
This notebook deals with linear (simple and multiple) regression
on the Sales dataset (Sales ~ Newspaper adv, Radio adv, TV adv)
and how such an analysis can help us in a marketing plan.
Independent variables (predictors):
* TV
* Radio
* Newspaper
Dependent variable (response):
* Sales
End of explanation
# lets see the linear relation separately
sns.pairplot(df, size=2.5, kind='reg');
# Just visually, we see that Sales seems to have some correlation with TV, some correlation with Radio and
# maybe a weaker correlation with Newspaper.
# We also see that there are other weak correlations, e.g. Radio with Newspaper; we keep this for later.
# Lets now get some numbers of these correlations
print "Sales ~ TV", pearsonr(df['Sales'], df['TV'])
print "Sales ~ Radio", pearsonr(df['Sales'], df['Radio'])
print "Sales ~ Newspaper", pearsonr(df['Sales'], df['Newspaper'])
slope, intercept, r_value, p_value, std_err = stats.linregress(df['Sales'], df['TV'])
print slope, intercept, r_value*r_value, p_value, std_err
# =>
# A first simple approach is to check the p-value of each medium; it has to be 'low' for the medium to contribute.
# Our visual inspection is confirmed here:
# The strongest correlation is with TV, then Radio, and finally Newspaper; see the Pearson correlations and their p-vals.
# Remember
# p-value:
# it is the probability of observing a value >= our calculated t-statistic (= the number of st. deviations our b1
# of y = b1*X + b0 + e is away from 0, assuming b1 = 0)
# as probability is a value in [0, 1]
# if ~0 => our null hypothesis is wrong => there is a relationship between the medium and sales
# if ~1 => our null hypothesis is correct => no relationship between the medium and sales
# low:
# long story, but typically 0.05 indicates strong evidence against the null hypothesis
# !!! Attention
# A significance level of 0.05 does indicate that 5% of studies will reject the null hypothesis while the null hypothesis is indeed true.
# BUT we can not interpret the p-val in the same way.
Explanation: The business question:
Which media contribute to Sales ?
End of explanation
import statsmodels.formula.api as sm
model = sm.ols(formula='Sales ~ TV', data=df)
res = model.fit()
print res.summary()
Explanation: Note On Correlation and Slope:
It can be very instructive at some point to perform standardization on our data.
The output of the Pearson correlation is a value in [-1, 1] which informs us about the correlation.
The Pearson correlation is desirable since it is bounded in [-1, 1] and easily interpretable.
The output of statsmodels gives us the slope.
The slope also informs us about the correlation, but it is not the Pearson coefficient we want.
We need to transform our data.
The two only coincide when the variables x and y are on the same scale, in some sense.
The most common way of achieving this is through standardization.
Correlation VS Slope
* Correlation gives a bounded measurement, independent of the scale of the two variables.
* The regression slope gives a useful quantity interpreted as the estimated change in the expected value of y for a given value of x. It cannot be deduced from the correlation coefficient alone.
TV correlation
End of explanation
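To see the note above in action, standardize the data and refit: with both variables standardized, the simple-regression slope equals the Pearson correlation computed earlier (a minimal sketch using the df and sm already defined):
df_std = (df - df.mean()) / df.std()
res_std = sm.ols(formula='Sales ~ TV', data=df_std).fit()
print res_std.params['TV']  # should match pearsonr(df['Sales'], df['TV'])[0]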
model = sm.ols(formula='Sales ~ Radio', data=df)
res = model.fit()
print res.summary()
Explanation: Radio correlation
End of explanation
model = sm.ols(formula='Sales ~ Newspaper', data=df)
res = model.fit()
print res.summary()
Explanation: Newspaper correlation
End of explanation
model = sm.ols(formula='Sales ~ TV + Radio + Newspaper', data=df)
res = model.fit()
print res.summary()
Explanation: All media correlation
End of explanation
# Comparing the multiple regression coefficients with the individual (simple regression) ones we get:
# | Multiple , Single
# TV : 0.0458, 0.0475
# Radio : 0.1885, 0.2025
# Newspaper : -0.0010, 0.0547
# => The multiple and simple regression coefficients can be quite different.
# => The relationship revealed in the multiple regression is the one to be trusted, because:
# the true correlation comes from Sales ~ Radio,
# but there is also a correlation Radio ~ Newspaper.
# In this way the newspaper 'looks like' it is correlated with Sales (Sales ~ Newspaper) when we are not in a multiple regression context.
# This phenomenon, correlation between features, is called collinearity.
Explanation: Remarks until here
End of explanation
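To make the collinearity remark concrete, a quick check (a sketch using the df already loaded) is the pairwise correlation matrix; note the non-negligible Radio ~ Newspaper correlation:
print df[['TV', 'Radio', 'Newspaper', 'Sales']].corr()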
# Before we continue with selection of features,
# it makes sense to see _if_ there is a relationship between Response and predictors.
# If any of the p-values of the features in the multiple regression is very small, isn't it guaranteed that we have this relationship?
# Do we need to check some other metric?
# Actually, checking the coefficients / p-vals of the multiple regression is not enough!
# This is especially true when the number of predictors is large:
# there is a very high probability that we'll observe at least one p-val < 0.05 just by chance.
# => That's why we need to use the F-statistic, which corrects for the number of predictors.
# A rule of thumb is that an F-stat >> 1 is a good indication;
# more precisely, if n is large an F-stat only slightly above 1 can be enough, whereas if n is small we need F-stat >> 1.
# Nevertheless, since the F-stat follows an F distribution, there are calculations that give us the corresponding
# p-val, which we can use to decide whether or not to reject the null hypothesis.
# => Here Prob (F-statistic): 1.58e-96 ~0 => THERE IS A RELATIONSHIP !
# !!! Attention if p >~ n then F-stat can not be used !!!
Explanation: Necessary checks
End of explanation
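The F-statistic and its p-value discussed above can also be pulled directly from the fitted results object (a sketch; res is the all-media fit from the earlier cell):
print 'F-statistic:', res.fvalue
print 'Prob (F-statistic):', res.f_pvalue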
df_features = df[['TV','Radio', 'Newspaper']]
df_target = df['Sales']
X = df_features
y = df_target
Feature importance evaluation
The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess
the relative importance of that feature with respect to the predictability of the target variable.
Features used at the top of the tree contribute to the final prediction decision of a larger fraction
of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate
of the relative importance of the features.
By averaging those expected activity rates over several randomized trees one can reduce the variance of such an
estimate and use it for feature selection.
from here:http://stackoverflow.com/questions/15810339/how-are-feature-importances-in-randomforestclassifier-determined
There are indeed several ways to get feature "importances". As often, there is no strict consensus about what this word means.
In scikit-learn, we implement the importance as described in [1] (often cited, but unfortunately rarely read...).
It is sometimes called "gini importance" or "mean decrease impurity" and is defined as the total decrease in node
impurity (weighted by the probability of reaching that node (which is approximated by the proportion of samples
reaching that node)) averaged over all trees of the ensemble.
In the literature or in some other packages, you can also find feature importances implemented as the
"mean decrease accuracy". Basically, the idea is to measure the decrease in accuracy on OOB data when you
randomly permute the values for that feature. If the decrease is low, then the feature is not important,
and vice-versa.
(Note that both algorithms are available in the randomForest R package.)
[1]: Breiman, Friedman, "Classification and regression trees", 1984.
# Feature selection with Linear regression
from sklearn import linear_model
names = df_features.columns
lm = linear_model.LinearRegression()
lm.fit(X, y)
# use the magnitude of the fitted coefficients as a (rough) importance score
feature_importances = np.abs(lm.coef_)
print "Features sorted by their score:"
print sorted(zip(map(lambda x: round(x, 4),feature_importances), names), reverse=True)
# Plot the feature importances of the forest
indices = np.argsort(feature_importances)[::-1]
nfeatures = len(feature_importances)
plt.figure()
plt.title("Feature importances")
plt.bar(range(nfeatures), feature_importances[indices], color="r", align="center")
plt.xticks(range(nfeatures), indices)
plt.xlim([-1, nfeatures])
plt.show()
# Feature selection with Tree based method - Random Forest
from sklearn.ensemble import RandomForestRegressor
names = df_features.columns
rf = RandomForestRegressor()
rf.fit(X, y)
feature_importances = rf.feature_importances_
print "Features sorted by their score:"
print sorted(zip(map(lambda x: round(x, 4),feature_importances), names), reverse=True)
# Remark
#feature_importances_ : the higher, the more important the feature, Sum(feature_importances_) = 1.0
# !!! Attention, random forest feature selection cons:
# With correlated features, strong features can end up with low scores and the method can be biased towards
# variables with many categories. As long as the gotchas are kept in mind, there really is no reason not to try them out on your data.
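A tiny synthetic illustration of the correlated-feature caveat above (a hypothetical example, not part of the original analysis):
import pandas as pd
X_corr = pd.DataFrame({'f1': np.random.randn(500)})
X_corr['f2'] = X_corr['f1'] + 0.01 * np.random.randn(500)   # f2 is a near-copy of f1
y_corr = 3 * X_corr['f1'] + np.random.randn(500)
rf_corr = RandomForestRegressor(n_estimators=200).fit(X_corr, y_corr)
print rf_corr.feature_importances_   # the single true signal's importance is split between f1 and f2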
from sklearn import cross_validation
from sklearn import linear_model
estimator = linear_model.LinearRegression()
y = df['Sales']
X_mod1 = df[['TV','Radio', 'Newspaper']]
X_mod2 = df[['TV','Radio']]
X_mod3 = df[['TV','Newspaper']]
X_mod4 = df[['TV']]
X_mod5 = df[['Radio']]
X_mod6 = df[['Newspaper']]
scores = cross_validation.cross_val_score(estimator, X_mod1, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_validation.cross_val_score(estimator, X_mod2, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_validation.cross_val_score(estimator, X_mod3, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_validation.cross_val_score(estimator, X_mod4, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_validation.cross_val_score(estimator, X_mod5, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_validation.cross_val_score(estimator, X_mod6, y, cv=10, scoring='r2')
print("R^2: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
# => We see that TV and Radio together bring R2 to ~0.89, but Newspaper does not really help.
#sklearn doesn't seem to have a forward selection algorithm.
#However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm
#similar to sequential backward selection. See the documentation here:
#http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn import linear_model
# choose one estimator, e.g.
#estimator = SVR(kernel="linear")
estimator = linear_model.LinearRegression()
# !!! We say that we want to keep 2 features; if n_features_to_select is None, half of the initial features will be used
selector = RFE(estimator, n_features_to_select=2, step=1)
selector = selector.fit(X, y)
print X.columns.values
print selector.support_
print selector.ranking_
Explanation: Feature selection
End of explanation
# read here:
# http://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection
Explanation: Issues:
End of explanation |
3,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discussion 11
Step3: Understanding Gradient Descent
In order to better understand gradient descent, let's implement it to solve a familiar problem - least-squares linear regression. While we are able to find the solution to ordinary least-squares linear regression analytically (recall its value as $\theta = (X^TX)^{โ1}X^TY$), we can also find it using gradient descent.
Question 1
Step6: Question 2
Step7: Question 3
Step8: Question 4
Step9: We create some toy data in two dimensions to perform our regressions on
Step10: And plot our loss
Step11: Consider
Step12: Now let's visualize how our regression estimates change as we perform gradient descent
Step13: Question 6
Step14: And then complete the gradient function. You should get a gradient of about $[0.65, 0.61]$ for the given values $\theta$ on this example dataset.
Step15: Now let's see how we can use our gradient descent tools to fit a regression on some real data! First, let's load the breast cancer dataset from lecture, and plot breast mass radius versus category - malignant or benign. As in lecture, we jitter the response variable to avoid overplotting.
Step16: Question 8
Step17: With optimal $\theta$ chosen, we can now plot our logistic curve and our decision boundary, and look at how our model categorizes our data
Step18: And, we can calculate our classification accuracy. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import patches, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
from IPython.display import display, Latex, Markdown
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
Explanation: Discussion 11: Logistic Regression and Gradient Descent
End of explanation
q1_answer = r"""
Put your answer here, replacing this text.
$$\frac{\partial}{\partial \theta_j} Loss(\theta) = \frac{1}{n} \sum_{i=1}^n \dots$$
"""
display(Markdown(q1_answer))
q1_answer = r"""
**SOLUTION:**
$$\frac{\partial}{\partial \theta_j} Loss(\theta) = \frac{2}{n} \sum_{i=1}^n -x_{i,j} \left(y_i - f_\theta(x_i)\right)$$
"""
display(Markdown(q1_answer))
Explanation: Understanding Gradient Descent
In order to better understand gradient descent, let's implement it to solve a familiar problem - least-squares linear regression. While we are able to find the solution to ordinary least-squares linear regression analytically (recall its value as $\theta = (X^TX)^{-1}X^TY$), we can also find it using gradient descent.
Question 1:
First, let's consider the gradient function for ordinary least squares regression. Recall the OLS loss function as
$$Loss(\theta) = \frac{1}{n} \sum_{i=1}^n \left(y_i - f_\theta(x_i)\right)^2$$
And the function $f_\theta(x_i)$, for input data with $p$ dimensions, as
$$f_\theta(x_i) = \sum_{j=1}^p \theta_j x_{i,j} $$
Given these functions, what is the gradient function for OLS regression? First, state it in terms of a single component of $\theta$, $\theta_j$, using a sum over each data point $i$ in $X$.
End of explanation
q2_answer = r"""
Put your answer here, replacing this text.
$$\frac{\partial}{\partial \theta} Loss(X) = \dots$$
"""
display(Markdown(q2_answer))
q2_answer = r"""
**SOLUTION:**
$$\frac{\partial}{\partial \theta} Loss(X) = -\frac{2}{n} X^T \left(y - X \theta\right)$$
"""
display(Markdown(q2_answer))
Explanation: Question 2:
Now, try to write that formula in terms of the matrices $X$, $y$, and $\theta$.
End of explanation
def linear_regression_grad(X, y, theta):
grad = -2/X.shape[0] * X.T @ (y - X @ theta) #SOLUTION
return grad
theta = [1, 4]
simple_X = np.vstack([np.ones(10), np.arange(10)]).T
simple_y = np.arange(10) * 3 + 2
linear_regression_grad(simple_X, simple_y, theta)
Explanation: Question 3:
Using this gradient function, complete the python function below which calculates the gradient for inputs $X$, $y$, and $\theta$. You should get a gradient of $[7, 48]$ on the simple data below.
End of explanation
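As an optional sanity check (not part of the original discussion), the analytic gradient can be compared against a centered finite-difference approximation of the loss:
def numerical_grad(X, y, theta, h=1e-6):
    theta = np.array(theta, dtype=float)
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        theta_plus, theta_minus = theta.copy(), theta.copy()
        theta_plus[j] += h
        theta_minus[j] -= h
        loss_plus = np.mean((y - X @ theta_plus) ** 2)
        loss_minus = np.mean((y - X @ theta_minus) ** 2)
        grad[j] = (loss_plus - loss_minus) / (2 * h)
    return grad

numerical_grad(simple_X, simple_y, theta)  # should be close to [7, 48]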
def plot_surface_3d(X, Y, Z, angle):
highest_Z = max(Z.reshape(-1,1))
lowest_Z = min(Z.reshape(-1,1))
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z,
cmap=cm.coolwarm,
linewidth=0,
antialiased=False,
rstride=5, cstride=5)
ax.zaxis.set_major_locator(LinearLocator(5))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.view_init(45, angle)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title("Regression Loss Function")
plt.xlabel("Theta_0")
plt.ylabel("Theta_1")
plt.show()
Explanation: Question 4:
Before we perform gradient descent, let's visualize the surface we're attempting to descend over. Run the next few cells to plot the loss surface as a function of $\theta_0$ and $\theta_1$, for some toy data.
End of explanation
np.random.seed(100)
X_1 = np.arange(50)/5 + 5
X = np.vstack([np.ones(50), X_1]).T
y = (X_1 * 2 + 3) + np.random.normal(0, 2.5, size=50)
plt.plot(X_1, y, ".")
Explanation: We create some toy data in two dimensions to perform our regressions on:
End of explanation
angle_slider = widgets.FloatSlider(min=0, max=360, step=15, value=45)
def plot_regression_loss(angle):
t0_vals = np.linspace(-10,10,100)
t1_vals = np.linspace(-2,5,100)
theta_0,theta_1 = np.meshgrid(t0_vals, t1_vals)
thetas = np.vstack((theta_0.flatten(), theta_1.flatten()))
loss_vals = 2/X.shape[0] * sum(((y - (X @ thetas).T)**2).T)
loss_vals = loss_vals.reshape(100, 100)
plot_surface_3d(theta_0, theta_1, loss_vals, angle)
interact(plot_regression_loss, angle=angle_slider);
Explanation: And plot our loss:
End of explanation
def gradient_descent(X, y, theta0, gradient_function, learning_rate = 0.001, max_iter=1000000, epsilon=0.001):
theta_hat = theta0 # Initial guess
for t in range(1, max_iter):
grad = gradient_function(X, y, theta_hat)
# Now for the update step
theta_hat = theta_hat - learning_rate * grad #SOLUTION
# When our gradient is small enough, we have converged
if np.linalg.norm(grad) < epsilon:
print("converged after {} steps".format(t))
return theta_hat
# If we hit max_iter iterations
print("Warning - Failed to converge")
return theta_hat
theta_0 = [10, -1]
gradient_descent(X, y, theta_0, linear_regression_grad)
Explanation: Consider:
- What do you notice about the loss surface for this simple regression example?
- Where are the optimal values $(\theta_0, \theta_1)$?
- Do you think that the shape of this surface will make gradient descent a viable solution to find these optimal values?
- What other loss surface shapes could you imagine?
Question 5:
Now, let's implement a general function to perform batch gradient descent. Given data X and y, initial weights $\theta_0$, a learning rate $\rho$, and a function gradient_function that has the same function signature as linear_regression_grad, implement a general gradient descent algorithm for finding optimal $\theta$.
End of explanation
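As a quick cross-check (not in the original notebook), the closed-form OLS solution $\theta = (X^TX)^{-1}X^Ty$ should agree with the value gradient descent converges to:
theta_closed_form = np.linalg.solve(X.T @ X, X.T @ y)
theta_closed_form  # compare with the output of gradient_descent above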
theta_0s = []
theta_1s = []
plot_idx = [1, 5, 20, 100, 500, 2000, 10000]
def plot_gradient_wrapper(X, y, theta):
grad = linear_regression_grad(X, y, theta)
theta_0s.append(theta[0])
theta_1s.append(theta[1])
t = len(theta_0s)
if t in plot_idx:
plt.subplot(121)
plt.xlim([4, 12])
plt.ylim([-2, 3])
plt.plot(theta_0s, theta_1s)
plt.plot(theta[0], theta[1], ".", color="b")
plt.title('theta(s) over time, t={}'.format(t))
plt.subplot(122)
plt.xlim([0, 20])
plt.ylim([-10, 40])
plt.plot(np.arange(50)/2.5, y, ".")
plt.plot(np.arange(50)/2.5, X @ theta)
plt.title('Regression line vs. data, t={}'.format(t))
plt.show()
return grad
gradient_descent(X, y, theta_0, plot_gradient_wrapper)
Explanation: Now let's visualize how our regression estimates change as we perform gradient descent:
End of explanation
def sigmoid(t):
return 1/(1 + np.e**-t)
Explanation: Question 6:
In Prof. Gonzalez's lecture, instead of using a constant learning rate, he used a learning rate that decreased over time, according to a function:
$$\rho(t) = \frac{r}{t}$$
Where $r$ represents some initial learning rate. This has the effect of decreasing the learning rate as we get closer to the optimal solution (a small sketch of this variant follows after this cell).
- Why might this be useful, compared to a constant learning rate?
- What problems might be caused by using too high of a learning rate?
- What about too low?
Extending to Logistic Regression
Question 7
As discussed in lecture, while ordinary least squares has a simple analytical solution, logistic regression must be fitted using gradient descent. Using the tools we've constructed, we can do just that. First, create a new function, logistic_regression_grad, which functions similarly to its counterpart linear_regression_grad. In the case of logistic regression, this should be the gradient of the logistic regression log-likelihood function - you may wish to refer to the lecture slides to find this gradient equation.
First, we define the sigmoid function:
End of explanation
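A minimal sketch of the decaying learning rate from Question 6 (illustrative only; this is not the lecture's implementation):
def gradient_descent_decaying(X, y, theta0, gradient_function, r=0.01, max_iter=100000, epsilon=0.001):
    theta_hat = np.array(theta0, dtype=float)
    for t in range(1, max_iter):
        rho_t = r / t                                    # rho(t) = r / t
        grad = gradient_function(X, y, theta_hat)
        theta_hat = theta_hat - rho_t * grad
        if np.linalg.norm(grad) < epsilon:
            return theta_hat
    return theta_hat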
def logistic_regression_grad(X, y, theta):
grad = (sigmoid(X @ theta) - y) @ X #SOLUTION
return grad
theta = [0, 1]
simple_X_1 = np.hstack([np.arange(10)/10, np.arange(10)/10 + 0.75])
simple_X = np.vstack([np.ones(20), simple_X_1]).T
simple_y = np.hstack([np.zeros(10), np.ones(10)])
logistic_regression_grad(simple_X, simple_y, theta)
Explanation: And then complete the gradient function. You should get a gradient of roughly $[3.73, 0.25]$ for the given values $\theta$ on this example dataset.
End of explanation
import sklearn.datasets
data_dict = sklearn.datasets.load_breast_cancer()
data = pd.DataFrame(data_dict['data'], columns=data_dict['feature_names'])
data['malignant'] = (data_dict['target'] == 0)
data['malignant'] = data['malignant'] + 0.1*np.random.rand(len(data['malignant'])) - 0.05
X_log_1 = data['mean radius']
X_log = np.vstack([np.ones(len(X_log_1)), X_log_1.values]).T
y_log = data['malignant'].values
plt.plot(X_log_1, y_log, ".")
Explanation: Now let's see how we can use our gradient descent tools to fit a regression on some real data! First, let's load the breast cancer dataset from lecture, and plot breast mass radius versus category - malignant or benign. As in lecture, we jitter the response variable to avoid overplotting.
End of explanation
theta_log = gradient_descent(X_log, y_log, [0, 1], logistic_regression_grad, learning_rate=0.0001) #SOLUTION
theta_log
Explanation: Question 8:
Now, using our earlier defined gradient_descent function, find optimal parameters $(\theta_0, \theta_1)$ to fit the breast cancer data. You will have to tune the learning rate beyond the default of the function, and think of what a good initial guess for $\theta$ would be, in both dimensions.
End of explanation
y_lowX = X_log_1[sigmoid(X_log @ theta_log) < 0.5]
y_lowy = y_log[sigmoid(X_log @ theta_log) < 0.5]
y_highX = X_log_1[sigmoid(X_log @ theta_log) > 0.5]
y_highy = y_log[sigmoid(X_log @ theta_log) > 0.5]
sigrange = np.arange(5, 30, 0.05)
sigrange_X = np.vstack([np.ones(500), sigrange]).T
d_boundary = -theta_log[0]/theta_log[1]
plt.plot(sigrange, sigmoid(sigrange_X @ theta_log), ".", color="g")
plt.hlines(0.5, 5, 30, "g")
plt.vlines(d_boundary, -0.2, 1.2, "g")
plt.plot(y_lowX, y_lowy, ".", color="b")
plt.plot(y_highX, y_highy, ".", color="r")
plt.title("Classification (blue=benign, red=malignant), assuming a P=0.5 decision boundary")
Explanation: With optimal $\theta$ chosen, we can now plot our logistic curve and our decision boundary, and look at how our model categorizes our data:
End of explanation
n_errors = sum(y_lowy > 0.5) + sum(y_highy < 0.5)
accuracy = round((len(y_log)-n_errors)/len(y_log) * 1000)/10
print("Classification Accuracy - {}%".format(accuracy))
Explanation: And, we can calculate our classification accuracy.
End of explanation |
3,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NS
$$h(t) \approx_0 \frac{16 \pi^2 G}{c^4 r}I \epsilon (\nu_0 + \dot{\nu}t)^2 cos(2\pi(\nu_0+\dot{\nu}t)t)$$
Step1: chirp | Python Code:
G = 6.67408*1e-11
c = 299792458
r = 2.4377e+20
I = 1e38
epsilon = 1e-4
nu0 = 1
nudot = -5e-10
cost = 16*math.pi**2*G/(c**4*r)*I*epsilon
print(cost)
nmesi = 9
tobs = nmesi*30*24*60*60
print(tobs)
tempi = numpy.linspace(0,10,100000)
leggeOraria = nu0+nudot*tempi
ampiezza = cost*numpy.power(leggeOraria,2)
onda = ampiezza*numpy.cos(2*math.pi*leggeOraria*tempi)
%matplotlib notebook
#pylab.plot(tempi,ampiezza)
pylab.plot(tempi,onda)
pylab.show()
t = numpy.linspace(1,24,100000)
ampiezza = 1e-19
sd = 1e-9
freqIniz = 1
ondaNS = ampiezza*(sd/(freqIniz-sd*t))**(1/2)*numpy.cos(2*(freqIniz-sd*t)*t)
%matplotlib notebook
pylab.plot(t,ondaNS)
pylab.show()
ampPerTempo = ampiezza*(sd/(freqIniz-sd*t))**(1/2)
%matplotlib notebook
pylab.plot(t,ampPerTempo)
pylab.show()
Explanation: NS
$$h(t) \approx_0 \frac{16 \pi^2 G}{c^4 r}I \epsilon (\nu_0 + \dot{\nu}t)^2 cos(2\pi(\nu_0+\dot{\nu}t)t)$$
End of explanation
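For reference on the chirp cell below (an assumption added here, not stated in the original notebook): at leading order an inspiral signal has $f_{GW}(t) \propto (t_c - t)^{-3/8}$ and $h \propto f_{GW}^{2/3} \propto (t_c - t)^{-1/4}$, which is what the exponents $-3/8$ and $-2/8$ in the code approximate.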
t = numpy.linspace(1,2,1000)
ampiezza = 1e-2
tcoal = 2.05
freqIniz = 20
ondaChirp = ampiezza*freqIniz*numpy.power((1-t/tcoal),-2/8)*numpy.cos(freqIniz*numpy.power((1-t/tcoal),-3/8)*t)
%matplotlib notebook
pylab.plot(t,ondaChirp)
pylab.show()
ampPerTempo = ampiezza*freqIniz*numpy.power((1-t/tcoal),-2/8)
%matplotlib notebook
pylab.plot(t,ampPerTempo)
pylab.show()
Explanation: chirp
End of explanation |
3,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Risk Factors
This sub-section models the single risk factors. We start with definition of the risk-neutral discounting object.
Step2: Three risk factors ares modeled
Step3: Assumptions for the geometric_brownian_motion object.
Step4: Assumptions for the jump_diffusion object.
Step5: Assumptions for the stochastic_volatility object.
Step6: Finally, the unifying valuation assumption for the valuation environment.
Step7: These are added to the single market_environment objects of the risk factors.
Step8: Finally, the market model with the risk factors and the correlations between them.
Step9: Derivatives
In this sub-section, we model the single derivatives instruments.
American Put Option
The first derivative instrument is an American put option.
Step10: European Maximum Call on 2 Assets
The second derivative instrument is a European maximum call option on two risk factors.
Step11: American Minimum Put on 2 Assets
The third derivative instrument is an American minimum put on two risk factors.
Step12: Portfolio
To compose a derivatives portfolio, derivatives_position objects are needed.
Step13: These objects are to be collected in dictionary objects.
Step14: All is together to instantiate the derivatives_portfolio class.
Step15: Let us have a look at the major portfolio statistics.
Step16: Finally, a graphical look at two selected, simulated paths of the stochastic volatility risk factor and the jump diffusion risk factor, respectively.
Step17: The resulting plot illustrates the strong negative correlation. | Python Code:
from dx import *
from pylab import plt
plt.style.use('seaborn')
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Multi-Risk Derivatives Portfolios
The step from multi-risk derivatives instruments to multi-risk derivatives portfolios is not a large one. This part of the tutorial shows how to model an economy with three risk factors
End of explanation
# constant short rate
r = constant_short_rate('r', 0.02)
Explanation: Risk Factors
This sub-section models the single risk factors. We start with definition of the risk-neutral discounting object.
End of explanation
# market environments
me_gbm = market_environment('gbm', dt.datetime(2015, 1, 1))
me_jd = market_environment('jd', dt.datetime(2015, 1, 1))
me_sv = market_environment('sv', dt.datetime(2015, 1, 1))
Explanation: Three risk factors are modeled:
geometric Brownian motion
jump diffusion
stochastic volatility process
End of explanation
# geometric Brownian motion
me_gbm.add_constant('initial_value', 36.)
me_gbm.add_constant('volatility', 0.2)
me_gbm.add_constant('currency', 'EUR')
me_gbm.add_constant('model', 'gbm')
Explanation: Assumptions for the geometric_brownian_motion object.
End of explanation
# jump diffusion
me_jd.add_constant('initial_value', 36.)
me_jd.add_constant('volatility', 0.2)
me_jd.add_constant('lambda', 0.5)
# probability for jump p.a.
me_jd.add_constant('mu', -0.75)
# expected jump size [%]
me_jd.add_constant('delta', 0.1)
# volatility of jump
me_jd.add_constant('currency', 'EUR')
me_jd.add_constant('model', 'jd')
Explanation: Assumptions for the jump_diffusion object.
End of explanation
# stochastic volatility model
me_sv.add_constant('initial_value', 36.)
me_sv.add_constant('volatility', 0.2)
me_sv.add_constant('vol_vol', 0.1)
me_sv.add_constant('kappa', 2.5)
me_sv.add_constant('theta', 0.4)
me_sv.add_constant('rho', -0.5)
me_sv.add_constant('currency', 'EUR')
me_sv.add_constant('model', 'sv')
Explanation: Assumptions for the stochastic_volatility object.
End of explanation
# valuation environment
val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('paths', 10000)
val_env.add_constant('frequency', 'W')
val_env.add_curve('discount_curve', r)
val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
Explanation: Finally, the unifying valuation assumption for the valuation environment.
End of explanation
# add valuation environment to market environments
me_gbm.add_environment(val_env)
me_jd.add_environment(val_env)
me_sv.add_environment(val_env)
Explanation: These are added to the single market_environment objects of the risk factors.
End of explanation
risk_factors = {'gbm' : me_gbm, 'jd' : me_jd, 'sv' : me_sv}
correlations = [['gbm', 'jd', 0.66], ['jd', 'sv', -0.75]]
Explanation: Finally, the market model with the risk factors and the correlations between them.
End of explanation
gbm = geometric_brownian_motion('gbm_obj', me_gbm)
me_put = market_environment('put', dt.datetime(2015, 1, 1))
me_put.add_constant('maturity', dt.datetime(2015, 12, 31))
me_put.add_constant('strike', 40.)
me_put.add_constant('currency', 'EUR')
me_put.add_environment(val_env)
am_put = valuation_mcs_american_single('am_put', mar_env=me_put, underlying=gbm,
payoff_func='np.maximum(strike - instrument_values, 0)')
am_put.present_value(fixed_seed=True, bf=5)
Explanation: Derivatives
In this sub-section, we model the single derivatives instruments.
American Put Option
The first derivative instrument is an American put option.
End of explanation
jd = jump_diffusion('jd_obj', me_jd)
me_max_call = market_environment('put', dt.datetime(2015, 1, 1))
me_max_call.add_constant('maturity', dt.datetime(2015, 9, 15))
me_max_call.add_constant('currency', 'EUR')
me_max_call.add_environment(val_env)
payoff_call = "np.maximum(np.maximum(maturity_value['gbm'], maturity_value['jd']) - 34., 0)"
assets = {'gbm' : me_gbm, 'jd' : me_jd}
asset_corr = [correlations[0]]
asset_corr
max_call = valuation_mcs_european_multi('max_call', me_max_call, assets, asset_corr,
payoff_func=payoff_call)
max_call.present_value(fixed_seed=False)
max_call.delta('jd')
max_call.delta('gbm')
Explanation: European Maximum Call on 2 Assets
The second derivative instrument is a European maximum call option on two risk factors.
End of explanation
sv = stochastic_volatility('sv_obj', me_sv)
me_min_put = market_environment('min_put', dt.datetime(2015, 1, 1))
me_min_put.add_constant('maturity', dt.datetime(2015, 6, 17))
me_min_put.add_constant('currency', 'EUR')
me_min_put.add_environment(val_env)
payoff_put = "np.maximum(32. - np.minimum(instrument_values['jd'], instrument_values['sv']), 0)"
assets = {'jd' : me_jd, 'sv' : me_sv}
asset_corr = [correlations[1]]
asset_corr
min_put = valuation_mcs_american_multi(
'min_put', val_env=me_min_put, risk_factors=assets,
correlations=asset_corr, payoff_func=payoff_put)
min_put.present_value(fixed_seed=True)
min_put.delta('jd')
min_put.delta('sv')
Explanation: American Minimum Put on 2 Assets
The third derivative instrument is an American minimum put on two risk factors.
End of explanation
am_put_pos = derivatives_position(
name='am_put_pos',
quantity=2,
underlyings=['gbm'],
mar_env=me_put,
otype='American single',
payoff_func='np.maximum(instrument_values - 36., 0)')
max_call_pos = derivatives_position(
'max_call_pos', 3, ['gbm', 'jd'],
me_max_call, 'European multi',
payoff_call)
min_put_pos = derivatives_position(
'min_put_pos', 5, ['sv', 'jd'],
me_min_put, 'American multi',
payoff_put)
Explanation: Portfolio
To compose a derivatives portfolio, derivatives_position objects are needed.
End of explanation
positions = {'am_put_pos' : am_put_pos, 'max_call_pos' : max_call_pos,
'min_put_pos' : min_put_pos}
Explanation: These objects are to be collected in dictionary objects.
End of explanation
port = derivatives_portfolio(name='portfolio',
positions=positions,
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations)
Explanation: All is together to instantiate the derivatives_portfolio class.
End of explanation
%time stats = port.get_statistics()
stats
stats['pos_value'].sum()
Explanation: Let us have a look at the major portfolio statistics.
End of explanation
path_no = 1
paths1 = port.underlying_objects['sv'].get_instrument_values()[:, path_no]
paths2 = port.underlying_objects['jd'].get_instrument_values()[:, path_no]
paths1
paths2
Explanation: Finally, a graphical look at two selected, simulated paths of the stochastic volatility risk factor and the jump diffusion risk factor, respectively.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.plot(port.time_grid, paths1, 'r', label='sv')
plt.plot(port.time_grid, paths2, 'b', label='jd')
plt.gcf().autofmt_xdate()
plt.legend(loc=0); plt.grid(True)
# negatively correlated underlyings
Explanation: The resulting plot illustrates the strong negative correlation.
End of explanation |
3,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
3,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: TensorFlow 2 quickstart for experts
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Load and prepare the MNIST dataset.
Step3: Use tf.data to batch and shuffle the dataset
Step4: Build the tf.keras model using the Keras model subclassing API
Step5: Choose an optimizer and loss function for training
Step6: Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result.
Step7: Use tf.GradientTape to train the model
Step8: Test the model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
Explanation: TensorFlow 2 quickstart for experts
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This is a Google Colaboratory notebook file. Python programs are run directly in the browser, which is a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
In Colab, connect to a Python runtime: At the top-right of the menu bar, select CONNECT.
Run all the notebook code cells: Select Runtime > Run all.
Download and install TensorFlow 2. Import TensorFlow into your program:
Note: Upgrade pip to install the TensorFlow 2 package. See the install guide for details.
Import TensorFlow into your program:
End of explanation
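# Editor's note: the cell above assumes TensorFlow 2 is already installed.
# In a fresh notebook environment the install step (assumed commands; adjust
# to your setup) would typically be run first:
# !pip install --upgrade pip
# !pip install tensorflow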
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")
Explanation: Load and prepare the MNIST dataset.
End of explanation
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
Explanation: Use tf.data to batch and shuffle the dataset:
End of explanation
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10)
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
Explanation: Build the tf.keras model using the Keras model subclassing API:
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
Explanation: Choose an optimizer and loss function for training:
End of explanation
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
Explanation: Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result.
End of explanation
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(images, training=True)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
Explanation: Use tf.GradientTape to train the model:
End of explanation
@tf.function
def test_step(images, labels):
# training=False is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
EPOCHS = 5
for epoch in range(EPOCHS):
# Reset the metrics at the start of the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
print(
f'Epoch {epoch + 1}, '
f'Loss: {train_loss.result()}, '
f'Accuracy: {train_accuracy.result() * 100}, '
f'Test Loss: {test_loss.result()}, '
f'Test Accuracy: {test_accuracy.result() * 100}'
)
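# Editor's addition (illustrative usage, not part of the original quickstart):
# after training, the model can classify a single held-out image directly.
sample_logits = model(x_test[:1], training=False)
sample_prediction = int(tf.argmax(sample_logits, axis=1)[0])
print(f'Predicted class for the first test image: {sample_prediction}')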
Explanation: Test the model:
End of explanation |
3,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
Step1: Creating
Step2: You can also supply more extensive metadata
Step3: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
Step4: Creating
Step5: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape (n_events, 3) where the first column is the
sample number (time) of the event, the second column indicates the value from
which the transition is made (only used when the new value is bigger
than the old one), and the third column is the new event value.
Step6: More information about the event codes
Step7: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
Step8: Now we can create the
Step9: Creating | Python Code:
import mne
import numpy as np
Explanation: Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
End of explanation
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(n_channels, sampling_rate)
print(info)
Explanation: Creating :class:~mne.Info objects
<div class="alert alert-info"><h4>Note</h4><p>for full documentation on the :class:`~mne.Info` object, see
`tut-info-class`. See also `ex-array-classes`.</p></div>
Normally, :class:mne.Info objects are created by the various
data import functions.
However, if you wish to create one from scratch, you can use the
:func:mne.create_info function to initialize the minimally required
fields. Further fields can be assigned later as one would with a regular
dictionary.
The following creates the absolute minimum info structure:
End of explanation
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types)
info.set_montage(montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
Explanation: You can also supply more extensive metadata:
End of explanation
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
:class:`mne.Info` object, it is important that the
fields are consistent:
- The length of the channel information field ``chs`` must be
``nchan``.
- The length of the ``ch_names`` field must be ``nchan``.
- The ``ch_names`` field should be consistent with the ``name``
field of the channel information contained in ``chs``.</p></div>
Creating :class:~mne.io.Raw objects
To create a :class:mne.io.Raw object from scratch, you can use the
:class:mne.io.RawArray class, which implements raw data that is backed by a
numpy array. The correct units for the data are:
V: eeg, eog, seeg, emg, ecg, bio, ecog
T: mag
T/m: grad
M: hbo, hbr
Am: dipole
AU: misc
The :class:mne.io.RawArray constructor simply takes the data matrix and
:class:mne.Info object:
End of explanation
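# Editor's sketch: a quick check of the consistency rules from the note above,
# applied to the info object just created (these are standard mne.Info fields).
assert info['nchan'] == len(info['ch_names']) == len(info['chs'])
assert all(ch['ch_name'] == name
           for ch, name in zip(info['chs'], info['ch_names']))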
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq)
Explanation: Creating :class:~mne.Epochs objects
To create an :class:mne.Epochs object from scratch, you can use the
:class:mne.EpochsArray class, which uses a numpy array directly without
wrapping a raw object. The array must be of shape
(n_epochs, n_chans, n_times). The proper units of measure are listed
above.
End of explanation
# Create an event matrix: 10 events with alternating event codes
events = np.array([
[0, 0, 1],
[1, 0, 2],
[2, 0, 1],
[3, 0, 2],
[4, 0, 1],
[5, 0, 2],
[6, 0, 1],
[7, 0, 2],
[8, 0, 1],
[9, 0, 2],
])
Explanation: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape (n_events, 3) where the first column is the
sample number (time) of the event, the second column indicates the value from
which the transition is made (only used when the new value is bigger
than the old one), and the third column is the new event value.
End of explanation
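# Editor's sketch: the same (n_events, 3) matrix can be built programmatically,
# which makes the column layout (sample, previous value, new value) explicit.
alt_events = np.column_stack((np.arange(10),           # sample numbers
                              np.zeros(10, dtype=int),  # "from" values
                              np.tile([1, 2], 5)))      # alternating new codes
assert np.array_equal(alt_events, events)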
event_id = dict(smiling=1, frowning=2)
Explanation: More information about the event codes: subject was either smiling or
frowning
End of explanation
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
End of explanation
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot(time_unit='s')
Explanation: Now we can create the :class:mne.EpochsArray object
End of explanation
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe the evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot(time_unit='s')
Explanation: Creating :class:~mne.Evoked Objects
If you already have data that is collapsed across trials, you may also
directly create an evoked array. Its constructor accepts an array of shape
(n_chans, n_times) in addition to some bookkeeping parameters.
The proper units of measure for the data are listed above.
End of explanation |
3,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UFC Chicago Twitter Stream Topic Modeling
By Chris Tufts
This notebook uses Latent Dirichlet Allocation (LDA) to perform topic modeling on a stream of tweets collected during the UFC Chicago event held on July 25th 2015. The data was collected using Python, Tweepy, and MySQL as outlined here. The tweets were parsed using a version of Christopher Pott's happy fun tokenizer provided by the World Well Being Project at UPenn.
Step1: Additional pre-processing of the tweets is carried out below. All retweets and user mentions are identified and stored in separate lists. A distinct list of tweets is also created which has all mentions and retweets removed.
Step2: Stopwords and links are removed from the distinct tweets followed by lemmatization of the tweets.
Step3: To aid in identification of the fighters, the previously defined <i>fighters</i> dictionary is used to convert the individual first and last name of the fighters into a combined first and last name with no whitespace present. Some exceptions had to be made with the fighters. For example JoeLauzon only looks for Lauzon, since one of the primary announcers is also named Joe (Rogan).
Step4: Below is commented code for performing LDA with a single core with the gensim package. However since I had 4 cores at my disposal I used the <i>ldamulticore.LdaMulticore</i> function. There were 8 fights on the card while I was recording, so I decided I'd look for a slightly higher number of topics, 10.
Step5: The <i>pyLDAvis</i> package is used to plot the results of the topic model. If you take a look at model topic 8, you'll see that Cole Hamels' name is present as one of the most used names. At the time Cole Hamels was still pitching for the Philadelphia Phillies and threw a no-hitter that night in Chicago while the fights were airing. It was strange seeing this considering the search term used in tweet collection was <i>UFC</i>, but it was a significant event that fight fans seemed to react to as well.
Step6: The remainder of the notebook calculates the number of tweets per minute and displays them as a function of time using the seaborn package. | Python Code:
import pandas as pd
from pandas.tseries.resample import TimeGrouper
from pandas.tseries.offsets import DateOffset
import numpy as np
from langdetect import detect, lang_detect_exception
from nltk import FreqDist,WordNetLemmatizer
from nltk.corpus import stopwords
from gensim import corpora, models
import pyLDAvis.gensim
import csv
import pickle
import re
import warnings
import seaborn as sns
# ignore user warning for seaborn
warnings.simplefilter('ignore', UserWarning )
warnings.simplefilter('ignore', DeprecationWarning)
# list of fights from the UFC Chicago main card
fighters = {'tjdillashaw':['tj', 'dillashaw'],
'renanbarao':['renan','barao','baroa'],
'mieshatate':['meisha','miesha','tate'],
'jessicaeye':['jessica'],# eye is a common word, don't include
'edsonbarboza':['edson','barboza'],
'paulfelder':['paul','felder'],
'joelauzon':['lauzon'], # can't include joe because of announcers
'takanorigomi':['takanori','gomi'],
'gianvillante':['gian', 'villante', 'vilante'],
'tomlawlor':['tom','lawlor'],
'jimmiller':['jim', 'miller'],
'dannycastillo':['danny', 'castillo'],
'kennyrobertson':['kenny','robertson'],
'bensaunders':['ben', 'saunders'],
'eddiewineland':['eddie', 'wineland'],
'bryancaraway':['bryan', 'caraway'] }
# import tweet dataset
ds = pd.read_csv("data/ufcChicago_tweets.csv", low_memory=False)
print(ds.head())
# import tokenized tweets
fileObject = open('data/parsedTweets','rb')
# load the tokenized tweets from the file into the tweets variable
tweets = pickle.load(fileObject)
Explanation: UFC Chicago Twitter Stream Topic Modeling
By Chris Tufts
This notebook uses Latent Dirichlet Allocation (LDA) to perform topic modeling on a stream of tweets collected during the UFC Chicago event held on July 25th 2015. The data was collected using Python, Tweepy, and MySQL as outlined here. The tweets were parsed using a version of Christopher Pott's happy fun tokenizer provided by the World Well Being Project at UPenn.
End of explanation
mentions = []
retweets = []
distinctTweet = []
import re
# filter out retweets and save results in
# distinctTweet
# filter out mentions
for tw in tweets:
rt = False
for t in tw:
if re.match('@\\w',t):
mentions.append(t)
if re.match('rt',t):
rt = True
if rt:
retweets.append(tw)
else:
distinctTweet.append(tw)
Explanation: Additional pre-processing of the tweets is carried out below. All retweets and user mentions are identified and stored in separate lists. A distinct list of tweets is also created which has all mentions and retweets removed.
End of explanation
# remove mentions
# remove remaining punctuation
stopwords = stopwords.words('english')
additional_stopwords = ['ufc','http','https', 'ufcchicago', 'ufconfox','ufcfightnight', '...']
for sword in additional_stopwords:
stopwords.append(sword)
lem = WordNetLemmatizer()
# remove stop words and lemmatize the remaining words
filtered_tweets = []
for tw in distinctTweet:
# remove stop words
filtered_tweets.append([lem.lemmatize(w) for w in tw if w.lower() not in stopwords and
len(w.lower()) > 2 and
not (re.match('@\\w',w)) and
not (re.match('#\\w',w)) and
not (re.match('htt\\w',w))])
Explanation: Stopwords and links are removed from the distinct tweets followed by lemmatization of the tweets.
End of explanation
# check for fighter names and convert to the full name
for i in range(0, len(filtered_tweets)):
for k in range(0, len(filtered_tweets[i])):
for key,value in fighters.items():
for v in value:
filtered_tweets[i][k]=re.sub(r"\b%s\b" % v, key, filtered_tweets[i][k])
Explanation: To aid in identification of the fighters, the previously defined <i>fighters</i> dictionary is used to convert the individual first and last name of the fighters into a combined first and last name with no whitespace present. Some exceptions had to be made with the fighters. For example JoeLauzon only looks for Lauzon, since one of the primary announcers is also named Joe (Rogan).
End of explanation
# # LDA using single core
# dictionary = corpora.Dictionary(filtered_tweets)
# corpus = [dictionary.doc2bow(text) for text in filtered_tweets]
# # model the topics
# ldamodel = models.ldamodel.LdaModel(corpus, num_topics=8, id2word = dictionary, passes=50)
#LDA using 4 cores
dictionary = corpora.Dictionary(filtered_tweets)
corpus = [dictionary.doc2bow(text) for text in filtered_tweets]
ldamodel = models.ldamulticore.LdaMulticore(corpus, num_topics=10,
id2word = dictionary, passes=50, workers=3)
Explanation: Below is commented code for performing LDA with a single core with the gensim package. However since I had 4 cores at my disposal I used the <i>ldamulticore.LdaMulticore</i> function. There were 8 fights on the card while I was recording, so I decided I'd look for a slightly higher number of topics, 10.
End of explanation
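# Editor's addition: a quick text summary of the fitted topics (alongside the
# interactive pyLDAvis view below) can be printed straight from gensim.
for topic in ldamodel.print_topics(num_topics=10, num_words=5):
    print(topic)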
ufc_vis_data = pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary)
pyLDAvis.display(ufc_vis_data)
# cole hamels threw a no-hitter against Chicago, game ended around 6:40 (4pm start time, 2:38 duration)
Explanation: The <i>pyLDAvis</i> package is used to plot the results of the topic model. If you take a look at model topic 8, you'll see that Cole Hamels' name is present as one of the most used names. At the time Cole Hamels was still pitching for the Philadelphia Phillies and threw a no-hitter that night in Chicago while the fights were airing. It was strange seeing this considering the search term used in tweet collection was <i>UFC</i>, but it was a significant event that fight fans seemed to react to as well.
End of explanation
# convert to time series
ds['created_at'] = pd.to_datetime(pd.Series(ds['created_at']))
ds.set_index('created_at', drop=False, inplace=True)
ds.index = ds.index.tz_localize('GMT').tz_convert('EST')
ds.index = ds.index - DateOffset(hours = 12)
start_time = ds.index.min()
end_time = ds.index.max()
# minute = Minute()
date_bin = pd.date_range(start_time, end_time, freq='T')
tweet_frequency = pd.Series(np.zeros(len(date_bin)), index=date_bin)
for date_entry in range(1,len(date_bin)):
temp = ds[ds.index > date_bin[(date_entry-1)]]
temp = temp[temp.index < date_bin[date_entry]]
tweet_frequency[date_entry] = len(temp)
tweet_frequency.head()
%matplotlib inline
sns.tsplot(data=tweet_frequency)
Explanation: The remainder of the notebook calculates the number of tweets per minute and displays them as a function of time using the seaborn package.
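For reference, the same per-minute counts can be computed in one line with pandas resampling (a sketch, assuming ds keeps the datetime index set above):
```python
# Vectorized alternative to the per-minute loop above (sketch)
tweet_frequency_alt = ds.resample('T').size()
tweet_frequency_alt.head()
```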
End of explanation |
3,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inferring Cluster Model Parameters from an X-ray Image
Forward modeling is always instructive
Step1: Spot the difference!
The data, $N_k$, now have a double circle around them, to remind us that even though we are supposing that they have been drawn from the sampling distribution ${\rm Pr}(N_k\;|\;\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H)$, in practice we only get one draw, and the $N_k$ are constants.
The parameters, $\theta$, are now allowed to vary, and are not asserted to have fixed values - so they come with a probability distribution represented by a circular node
Step2: Good. Here's the code that is being run, inside the "XrayData" class
Step3: To normalize this, we need to take care not to try and exponentiate any very large or small numbers...
Step4: Let's plot this as a 2D probability density map.
Step5: The above figure captures and illustrates our uncertainty about the position of the cluster center, given our model (and its assumptions), plus our assertions about the values of the other parameters. We say it is "conditioned on" the values of $\beta$, $r_c$, $S_0$ and $b$.
To fully account for our uncertainty we should allow all model parameters to vary, and compute the 6D posterior PDF. As you can see, this will be time consuming! In sessions 3 and 4 we will look at ways to do these calculations far more efficiently.
Summarizing Posterior PDFs
The Bayesian solution to an inference problem is the posterior PDF for the parameters given the data. However, it is often helpful to compress that information into a few numbers (to put in the abstract of a paper, for example).
Most useful summaries of posterior PDFs are integrals. For example | Python Code:
# import cluster_pgm
# cluster_pgm.inverse()
from IPython.display import Image
Image(filename="cluster_pgm_inverse.png")
Explanation: Inferring Cluster Model Parameters from an X-ray Image
Forward modeling is always instructive: we got a good sense of the parameters of our cluster + background model simply by generating mock data and visualizing it.
The "inverse problem", also known as "inference," is to learn the parameters of an assumed model from a set of data. Intuitively we can see how it is going to work: try a lot of possible parameter combinations, and see which ones "match" the data.
The sampling distribution ${\rm Pr}(d|\theta,H)$ encodes uncertainty about what might have been, given a model (or hypothesis) $H$ with parameters $\theta$. It allows us to generate mock datasets that are similar to the data that we do observe.
Our inability to guess parameter values accurately first time shows that we are uncertain about them. In Bayesian inference, we use probability distributions to describe this uncertainty mathematically.
Probability
The idea of using probability distributions to quantify the uncertainty in our model parameters (and indeed in the models themselves) is due to Pierre Simon Laplace (1774), who rediscovered Thomas Bayes' earlier results on the probability of future events given their past history.
Let's remind ourselves how probabilities work
Laplace and Bayes' key result is the following, usually referred to as "Bayes' Theorem:"
${\rm Pr}(\theta|d,H) = \frac{1}{{\rm Pr}(d|H)}\;{\rm Pr}(d|\theta,H)\;{\rm Pr}(\theta|H)$
What you know about your model parameters given the data is what you knew about them before $\left[ {\rm Pr}(\theta|H) \right]$, combined with what the data are telling you $\left[ {\rm Pr}(d|\theta,H) \right]$.
${\rm Pr}(\theta|d,H)$ is called the posterior probability distribution for the parameters given the data and the model, and is the general solution to the inverse problem.
Before we take any data, our uncertainty about our model parameter values is encoded in the prior PDF for the parameters given the model, ${\rm Pr}(\theta|H)$.
Both the posterior and prior PDFs are functions of the model parameters. The sampling distribution ${\rm Pr}(d|\theta,H)$ is a function of the data given the parameters - when written as a function of $\theta$ it is called the likelihood of the parameters given the model.
The likelihood captures the information that is in the data, and so lies at the center of data analysis.
${\rm Pr}(d|\theta,H)$ has the form of a prior over datasets. This makes sense: we can imagine defining this PDF and using it to generate mock data without us ever having seen any real data at all!
PGMs for Inverse Problems
Here's the probabilistic graphical model for the inverse X-ray cluster model problem:
End of explanation
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import numpy as np
import cluster
lets = cluster.XrayData()
lets.read_in_data()
lets.set_up_maps()
x0,y0 = 328,328 # The center of the image is 328,328
S0,b = 0.001,1e-6 # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0 # Canonical value is beta = 2/3
rc = 12 # Core radius, in pixels
logprob = lets.evaluate_unnormalised_log_posterior(x0,y0,S0,rc,beta,b)
print (logprob)
Explanation: Spot the difference!
The data, $N_k$, now have a double circle around them, to remind us that even though we are supposing that they have been drawn from the sampling distribution ${\rm Pr}(N_k\;|\;\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H)$, in practice we only get one draw, and the $N_k$ are constants.
The parameters, $\theta$, are now allowed to vary, and are not asserted to have fixed values - so they come with a probability distribution represented by a circular node: the prior.
This PGM illustrates the joint PDF for the parameters and the data, which can be factorised as:
$\prod_k \; {\rm Pr}(N_k\;|\;\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H) \; {\rm Pr}(\,\theta\,|H)$
Note the huge product over pixel values!
It can also be factorised to:
${\rm Pr}(\,\theta\,|{N_k}\,H) \; {\rm Pr}({N_k}\,|H)$
which is, up to the normalizing constant, the posterior PDF for the model parameters, given all the data ${N_k}$. This is just Bayes Theorem rearranged, with the normalizing denominator appearing on the left hand side instead.
PGMs can be used to design inferences
Calculating Posterior PDFs
Notice that the prior PDF ${\rm Pr}(\theta|H)$ and the likelihood function ${\rm Pr}(d|\theta,H)$ can be evaluated at any point in the parameter space.
This means that we can always simply evaluate the posterior PDF on a grid (or at least attempt to), and normalize it by numerical integration.
Let's do this for a simplified version of our X-ray cluster model.
End of explanation
npix = 15
# Initial guess at the interesting range of cluster position parameters:
#xmin,xmax = 310,350
#ymin,ymax = 310,350
# Refinement, found by fiddling around a bit:
xmin,xmax = 327.7,328.3
ymin,ymax = 346.4,347.0
x0grid = np.linspace(xmin,xmax,npix)
y0grid = np.linspace(ymin,ymax,npix)
logprob = np.zeros([npix,npix])
for i,x0 in enumerate(x0grid):
for j,y0 in enumerate(y0grid):
logprob[j,i] = lets.evaluate_unnormalised_log_posterior(x0,y0,S0,rc,beta,b)
print ("Done column",i)
print (logprob[0:5,0])
Explanation: Good. Here's the code that is being run, inside the "XrayData" class:
```python
def evaluate_log_prior(self):
# Uniform in all parameters...
return 0.0 # HACK
def evaluate_log_likelihood(self):
self.make_mean_image()
# Return un-normalized Poisson sampling distribution:
# log (\mu^N e^{-\mu} / N!) = N log \mu - \mu + constant
return np.sum(self.im * np.log(self.mu) - self.mu)
def evaluate_unnormalised_log_posterior(self,x0,y0,S0,rc,beta,b):
self.set_pars(x0,y0,S0,rc,beta,b)
return self.evaluate_log_likelihood() + self.evaluate_log_prior()
```
It's worth staring at, and thinking about, this code for a few minutes.
Recall from the PGM discussion above that we have
${\rm Pr}(\,\theta\,|{N_k}\,H) = \frac{1}{Z} \prod_k \; {\rm Pr}(N_k\;|\;\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H) \; {\rm Pr}(\,\theta\,|H)$
where $Z = {\rm Pr}({N_k}\,|H)$
The product over (assumed) independent pixel values' Poisson sampling distribution terms becomes a sum in the log likelihood.
If the prior PDF for all parameters is uniform, then the log prior (and the prior) is just a constant (whose actual value is unimportant). In other problems we will need to be more careful than this!
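If we later wanted an informative prior, only evaluate_log_prior would need to change. A hypothetical sketch (it assumes set_pars stores the core radius as self.rc; the width below is purely illustrative):
```python
def evaluate_log_prior(self):
    # Hypothetical Gaussian prior on the core radius, uniform in everything else (sketch)
    rc_mean, rc_sigma = 12.0, 5.0   # illustrative values, not derived from the data
    return -0.5 * ((self.rc - rc_mean) / rc_sigma)**2
```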
Now let's try evaluating the 2D posterior PDF for cluster position, conditioned on reasonable values of the cluster and background flux, cluster size and beta:
End of explanation
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
print (prob[0:5,0])
Explanation: To normalize this, we need to take care not to try and exponentiate any very large or small numbers...
End of explanation
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[xmin,xmax,ymin,ymax])
plt.xlabel('x / pixels')
plt.ylabel('y / pixels')
Explanation: Let's plot this as a 2D probability density map.
End of explanation
# First, double-check that the posterior PDF sums
# (i.e. approximately integrates) to 1:
print (np.sum(prob))
# Now, sort the pixel value of the PDF map, and
# find the cumulative distribution:
sorted = np.sort(prob.flatten())
C = sorted.cumsum()
# Find the pixel values that lie at the levels that contain
# 68% and 95% of the probability:
lvl68 = np.min(sorted[C > (1.0 - 0.68)])
lvl95 = np.min(sorted[C > (1.0 - 0.95)])
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[xmin,xmax,ymin,ymax])
plt.contour(prob.T,[lvl95,lvl68],colors='black',extent=[xmin,xmax,ymin,ymax])
plt.xlabel('x / pixels')
plt.ylabel('y / pixels')
plt.savefig("figures/cluster_2D-inferred-xy.png")
Explanation: The above figure captures and illustrates our uncertainty about the position of the cluster center, given our model (and its assumptions), plus our assertions about the values of the other parameters. We say it is "conditioned on" the values of $\beta$, $r_c$, $S_0$ and $b$.
To fully account for our uncertainty we should allow all model parameters to vary, and compute the 6D posterior PDF. As you can see, this will be time consuming! In sessions 3 and 4 we will look at ways to do these calculations far more efficiently.
Summarizing Posterior PDFs
The Bayesian solution to an inference problem is the posterior PDF for the parameters given the data. However, it is often helpful to compress that information into a few numbers (to put in the abstract of a paper, for example).
Most useful summaries of posterior PDFs are integrals. For example:
The posterior mean, $\langle x \rangle = \int\;x\,{\rm Pr}(x|d,H)\;dx$
The posterior median, $x_{50}$ s.t. $\int_{x_{50}}^{\infty}\;{\rm Pr}(x|d,H)\;dx = 0.5$
The $N^{th}$ percentile, $x_{N}$ s.t. $\int_{x_{N}}^{\infty}\;{\rm Pr}(x|d,H)\;dx = N\%$
The posterior probability that $x > a$, $\int_{a}^{\infty}\;{\rm Pr}(x|d,H)\;dx$
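For instance, a minimal numerical sketch of these summaries for one parameter, using the normalized grid computed above (assuming prob and x0grid are still in scope):
```python
# Marginalize over y, then form integral summaries of x0 (sketch)
p_x = prob.sum(axis=0)                       # marginal posterior for x0 (sums to ~1)
mean_x = np.sum(x0grid * p_x)                # posterior mean
cdf_x = np.cumsum(p_x)
median_x = x0grid[np.searchsorted(cdf_x, 0.5)]
lo_x, hi_x = x0grid[np.searchsorted(cdf_x, [0.16, 0.84])]   # ~68% credible interval
```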
The reason why integral quantities are useful is that they quantify probability mass rather than probability density: they are insensitive to sharp features that have low associated integrated probability.
For a 2D PDF like ours, the equivalent quantity to a percentile is a "confidence contour," that encloses a specified percentage of the integrated posterior probability. Standard choices are the contours that enclose 68% and 95% of the probability. (Do you know why?)
Let's add these contours to our plot.
End of explanation |
3,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CryoBLM BLEDP
2017-03-26
Display, extraction and analysis of the binary BLEDP format.
Also provides the possibility of directly writing out the analysis results.
Decoder class based on BLEDPy
Imports
Step1: Constants definition
Step4: Retriever class
Step5: Prepare data function
Just an additional abstraction for retrieving the data.
By default I have set it up to be only for the channels that matter (2015 CryoIRRAD) and with resampling at 1ms.
Step6: Get & Display a single file
An example case for retrieving and displaying a single spill.
Step7: CSV extraction
Extract a single file
Example case for extracting a single file to a CSV format.
Variables like the path, folder, filename for input and output as well as parameters like the channels to be extracted and the resampling need to be specified.
Step8: Extract a whole directory
This part of the notebook will go through the files of a directory and extract all to CSV.
It will take a lot of time due to slow I/O on EOS and most likely create (too) many files.
Step9: Per file processing
The whole point is to open a file, do an operation on the data, retrieve a value (or several values) and store these in an output file. So this is an example of a simple sum (charge) operation which can be extended for anything needed.
Step10: Checking out the result
Step11: CI15 file transformation | Python Code:
import numpy as np
import logging
from time import time
from datetime import datetime
from os.path import join
from os import walk
%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from itertools import cycle
Explanation: CryoBLM BLEDP
2017-03-26
Display, extraction and analysis of the binary BLEDP format.
Also provides the possibility of directly writing out the analysis results.
Decoder class based on BLEDPy
Imports
End of explanation
# Do not change
BLEDP_SAMPLE_RATE = 2e-6 # seconds
BLEDP_CURRENT_MULT = 0.0625 # bits to uA
BLEDP_CHANNELS = 8
FRAME_VALUE_MASK = 1048575
FRAME_CH_SHIFT = 26
FRAME_CH_MASK = 7
# Change as needed
LBL_DET = ['Si 300um', 'K6430', 'Si 300um ?', 'Si 100um ?', 'HS', 'CVD1', 'CVD2', 'Trg']
markers = cycle(['1', '2', '3', '4'])
Explanation: Constants definition
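The masks mirror how the retriever class below unpacks each raw 32-bit frame (value in the low 20 bits, channel id in bits 26-28); a minimal decoding sketch for a single frame:
```python
# Decode one raw frame into (channel, current) - a sketch of what the class below does
def decode_frame(frame):
    value = frame & FRAME_VALUE_MASK                       # ADC counts, low 20 bits
    channel = (frame >> FRAME_CH_SHIFT) & FRAME_CH_MASK    # channel id, bits 26-28
    return channel, value * BLEDP_CURRENT_MULT             # counts -> uA
```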
End of explanation
# Do not modify
class BledpOfflineRetriever(object):
"""Manage BLEDPy offline data retrieval."""
# retrieved = pyqtSignal() not needed in this context
def __init__(self):
super(BledpOfflineRetriever, self).__init__()
self.frames = np.array([])
self.channel = None # 0=multi, ch from 1
def load_data(self, filename, **kwargs):
"""Get multi channel data from file."""
offset = kwargs.get('offset', 0)
limit = kwargs.get('limit', 0)
limit -= offset
if limit <= 0:
limit = -1
tick = time()
with open(filename, 'rb') as fp:
self.frames = np.fromfile(fp, count=2, dtype='>u4')
ch = self.get_channels()
if ch[0] == ch[1]:
self.channel = ch[0] + 1
offset <<= 2
else:
self.channel = 0
offset <<= 5
limit <<= 3
fp.seek(offset)
self.frames = np.fromfile(fp, count=limit, dtype='>u4')
def get_values(self, y_curr=False):
res = np.bitwise_and(FRAME_VALUE_MASK, self.frames)
if y_curr:
res = res.astype(np.float)
res *= BLEDP_CURRENT_MULT
if not self.channel:
res = res.reshape(res.size >> 3, BLEDP_CHANNELS).transpose()
return res
def get_datapack(self, x_time=False, y_curr=False):
y = self.get_values(y_curr=y_curr)
if y.ndim == 2:
# Multi channel
a = np.arange(0, y[0].size, 1.0)
if x_time:
a *= BLEDP_SAMPLE_RATE
x = np.repeat(a, BLEDP_CHANNELS)
x = x.reshape(y[0].size, BLEDP_CHANNELS).transpose()
return np.array([(xi, yi) for xi, yi in zip(x, y)])
# Single channel
x = np.arange(y.size)
if x_time:
x *= BLEDP_SAMPLE_RATE
return np.array([(x, y)])
def get_channels(self, sl=slice(None, None)):
res = np.right_shift(self.frames[sl], FRAME_CH_SHIFT)
res = np.bitwise_and(FRAME_CH_MASK, res)
return res
def average(arr, n):
end = n * int(len(arr)/n)
return np.mean(arr[:end].reshape(-1, n), 1)
Explanation: Retriever class
End of explanation
def prepare_data(datapack, channels=[0, 2, 3, 5, 6], resample='1ms'):
resample_dict = {'no': 0, '10us': 5, '100us': 50, '1ms': 500}
resample_sel = resample_dict[resample]
x, y = [], []
if resample_sel:
for ch in channels:
y.append(average(datapack[ch][1], resample_sel))
x.append(datapack[ch][0][::resample_sel][:y[-1].size])
else:
for ch in channels:
y.append(datapack[ch][1])
x.append(datapack[ch][0])
return x, y
Explanation: Prepare data function
Just an additional abstraction for retrieving the data.
By default I have set it up to be only for the channels that matter (2015 CryoIRRAD) and with resampling at 1ms.
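The factors in resample_dict are simply the number of raw 2 us samples averaged per output bin, e.g.:
```python
# '1ms' -> average 500 consecutive raw samples; '100us' -> 50 (sketch)
print(round(1e-3 / BLEDP_SAMPLE_RATE))    # 500
print(round(100e-6 / BLEDP_SAMPLE_RATE))  # 50
```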
End of explanation
retriever = BledpOfflineRetriever()
main_path = '/eos/project/c/cryoblm/2015_CryoIRRAD/data/'
folder = 'bledp/voltage_scans/VS_2015-11-01_16h'
filename = '2015-11-01_17h55m44s'
fpath = join(main_path, folder, filename)
print('File path: %s' % fpath)
off, lim = int(0.3/BLEDP_SAMPLE_RATE), int(0.9/BLEDP_SAMPLE_RATE)
retriever.load_data(fpath, offset=off, limit=lim)
dp = retriever.get_datapack(x_time=True, y_curr=True)
selected_channels = [0, 2, 3, 5, 6]
# selected_channels = [0, 5]
x, y = prepare_data(dp, channels=selected_channels, resample='1ms')
fig = plt.figure('BLEDP Plotter', figsize=(8, 6))
ax = fig.add_subplot(111)
cmap = plt.get_cmap('jet')
colors = cmap(np.linspace(0, 1.0, 8))
for i, c in zip(range(len(x)), selected_channels):
ax.plot(x[i], y[i], c=colors[c], marker=next(markers), ls='-', label=LBL_DET[c])
ax.grid(True, ls='--')
ax.set_xlabel('Time, (s)')
ax.set_ylabel('Current, (uA)')
ax.legend(loc=1, numpoints=1, framealpha=.7, fancybox=True)
plt.show()
Explanation: Get & Display a single file
An example case for retrieving and displaying a single spill.
End of explanation
# Specify input path & file name
main_path = '/eos/project/c/cryoblm/2015_CryoIRRAD/data/'
folder = 'bledp/voltage_scans/VS_2015-11-01_16h'
filename = '2015-11-01_17h55m44s'
folder = 'bledp/2015-10-29/2015-10-29_03h'
filename = "2015-10-29_03h00m17s"
# Specify output path & file name
fn_in = join(main_path, folder, filename)
out_dir = join(main_path, 'bledp_extracted')
# Specify channels and resampling
selected_channels = [0, 5]
resample = '1ms' # can be also 10us, 100us
retriever = BledpOfflineRetriever()
off, lim = int(0.3/BLEDP_SAMPLE_RATE), int(0.9/BLEDP_SAMPLE_RATE)
retriever.load_data(fn_in, offset=off, limit=lim)
dp = retriever.get_datapack(x_time=True, y_curr=True)
x, y = prepare_data(dp, channels=selected_channels, resample='1ms')
arry = np.array([np.array(yi) for yi in y])
arr = np.vstack((x[0], arry)).transpose()
fn_out = join(out_dir, filename+'.csv')
header = '{}\n{}\n{}'
hdr = header.format(filename, 'Time, '+','.join([LBL_DET[i] for i in selected_channels]), '(s), '+','.join(['uA, ']*len(selected_channels)))
np.savetxt(fn_out, arr, fmt='%.5f', delimiter=',', header=hdr)
Explanation: CSV extraction
Extract a single file
Example case for extracting a single file to a CSV format.
Variables like the path, folder, filename for input and output as well as parameters like the channels to be extracted and the resampling need to be specified.
End of explanation
# Specify input path
main_path = '/eos/project/c/cryoblm/2015_CryoIRRAD/data/'
folder = 'bledp/voltage_scans/VS_2015-11-01_16h'
path = join(main_path, folder)
folder = 'bledp/2015-11-16/2015-11-16_08h'
path = join(main_path, folder)
# Specify output path
# CAUTION : it is ***much*** faster if you run it locally.
# Write out the files, and then let CERNBox upload them to the server.
out_dir = join(main_path, 'bledp_extracted')
print(path)
print(out_dir)
# %%script false # Comment this line out if you want to run the extraction of directories
# Specify channels and resampling.
# Also a file number limit if you want to process e.g. only 10 files
selected_channels = [0, 5]
resample = '1ms' # can be also 10us, 100us
flim = 10
retriever = BledpOfflineRetriever()
off, lim = int(0.3/BLEDP_SAMPLE_RATE), int(0.9/BLEDP_SAMPLE_RATE)
header = '{}\n{}\n{}'
cnt = 0
tick = time()
for root, dirs, files in walk(path):
for filename in files:
# Check also filename to be as date etc
if filename.startswith('.'):
continue
fn_in = join(root, filename)
retriever.load_data(fn_in, offset=off, limit=lim)
dp = retriever.get_datapack(x_time=True, y_curr=True)
x, y = prepare_data(dp, channels=selected_channels, resample='1ms')
arry = np.array([np.array(yi) for yi in y])
arr = np.vstack((x[0], arry)).transpose()
fn_out = join(out_dir, filename+'.csv')
hdr = header.format(filename, 'Time, '+','.join([LBL_DET[i] for i in selected_channels]), '(s), '+','.join(['uA, ']*len(selected_channels)))
np.savetxt(fn_out, arr, fmt='%.5f', delimiter=',', header=hdr)
cnt += 1
if cnt >= flim:
break
print('Time Elapsed : %.3f sec' % (time()-tick))
print('Files extracted : %d' % cnt)
Explanation: Extract a whole directory
This part of the notebook will go through the files of a directory and extract all to CSV.
It will take a lot of time due to slow I/O on EOS and most likely create (too) many files.
End of explanation
# Specify input path
main_path = '/eos/project/c/cryoblm/2015_CryoIRRAD/data/'
folder = 'bledp/voltage_scans/VS_2015-11-01_16h'
path = join(main_path, folder)
# Specify output path and filename
# CAUTION : it is ***much*** faster if you run it locally.
# Write out the files, and then let CERNBox upload them to the server.
out_dir = join(main_path, 'bledp_extracted')
fn_out = join(out_dir, '%s_out_sum.csv' % datetime.now().strftime('%Y%m%d_%H%M%S'))
print(fn_out)
# Set up a function that returns the results that you want from each file.
# TODO similarly !!!
def sum_analysis(y_values):
res = []
for y in y_values:
res.append(np.sum(y))
return res
with open(fn_out, 'w') as fp:
header = "Analysis on {}\n{}\n{}\n"
hdr = header.format(path, 'Timestamp, '+', '.join([LBL_DET[i] for i in selected_channels]), '(s), '+', '.join(['sum']*len(selected_channels)))
fp.write(hdr)
out_line = "{}, {}\n"
cnt = 0
tick = time()
for root, dirs, files in walk(path):
for filename in files:
# Check also filename to be as date etc
if filename.startswith('.'):
continue
cnt += 1
fn_in = join(root, filename)
retriever.load_data(fn_in, offset=off, limit=lim)
dp = retriever.get_datapack(x_time=True, y_curr=True)
x, y = prepare_data(dp, channels=selected_channels, resample='1ms')
res = sum_analysis(y)
res_str = map(str, res)
# ts = datetime.strptime(filename, '%Y-%m-%d_%Hh%Mm%Ss').strftime('%Y-%m-%d %H:%M:%S')
fp.write(out_line.format(filename, ", ".join(res_str)))
print('Time elapsed : %.3f sec' % (time()-tick))
print('Files processed: %d' % cnt)
Explanation: Per file processing
The whole point is to open a file, do an operation on the data, retrieve a value (or several values) and store these in an output file. So this is an example of a simple sum (charge) operation which can be extended for anything needed.
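Any other reduction with the same signature can be dropped in; for example, a sketch of a peak-current analysis:
```python
# Alternative per-file reduction in the same style as sum_analysis (sketch)
def peak_analysis(y_values):
    return [np.max(y) for y in y_values]   # peak value per selected channel
```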
End of explanation
# Specify file - If you want you can leave that to take the file that was created earlier.
fn = fn_out
# fn = join(out_dir, '20180328_110158_out_sum.csv') # or choose specifically
dt_fmt = "%Y-%m-%d %H:%M:%S"
with open(fn, 'r') as fp:
hdr = [next(fp) for x in range(3)]
data = np.loadtxt(fp, dtype=str, delimiter=',', unpack=True)
ts = [datetime.strptime(item, dt_fmt) for item in data[0]]
y = data[1:].astype(np.float)
lbls = [item.strip() for item in hdr[1][:-1].split(',')[1:]]
fig = plt.figure("Sum results", figsize=(8, 6))
ax = fig.add_subplot(111)
for i, yi in enumerate(y):
ax.plot_date(ts, yi, ls='None', marker=next(markers), label=lbls[i])
myFmt = mdates.DateFormatter('%H:%M:%S')
ax.xaxis.set_major_formatter(myFmt)
ax.set_xlabel('Time')
ax.set_ylabel('Sum')
ax.grid(True, ls='--')
ax.legend(loc=1, numpoints=1, framealpha=.7, fancybox=True)
fig.autofmt_xdate()
plt.show()
Explanation: Checking out the result
End of explanation
fin = '/eos/project/c/cryoblm/2015_CryoIRRAD/analysis/CI15_sync_bledp_vs_beam.csv'
fout = '/eos/project/c/cryoblm/2015_CryoIRRAD/analysis/CI15_sync_bledp_vs_beam_mod.csv'
print('FIN: %s' % fin)
tick = time()
with open(fin, 'r') as fp:
hdr = next(fp)
data = np.loadtxt(fp, dtype=str, delimiter=',', unpack=True)
new_ts = [datetime.strptime(item, '%Y-%m-%d %H:%M:%S').strftime('%Y-%m-%d_%Hh%Mm%Ss') for item in data[0]]
new_data = np.copy(data[1:])
new_data = np.vstack([new_ts, new_data])
np.savetxt(fout, new_data.transpose(), fmt='%s', delimiter=',', header=hdr)
print('FOUT: %s' % fout)
print('Time Elapsed: %.3f sec' % (time()-tick))
Explanation: CI15 file transformation
End of explanation |
3,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lambert Scattering (irrad_method='horvat')
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
For parameters that affect reflection and heating (irrad_frac_*) see the tutorial on reflection and heating.
The 'irrad_method' compute option dictates whether irradiation is handled according to the new Horvat scheme which includes Lambert Scattering, Wilson's original reflection scheme, or ignored entirely.
Step3: Influence on Light Curves (fluxes)
Let's (roughly) reproduce Figure 8 from Prsa et al. 2016 which shows the difference between Wilson and Horvat schemes for various inclinations.
<img src="prsa+2016_fig8.png" alt="Figure 8" width="600px"/>
First we'll roughly create an A0-K0 binary and set reasonable albedos.
Step4: We'll also disable any eclipsing effects.
Step5: Now we'll compute the light curves with wilson and horvat irradiation, and plot the relative differences between the two as a function of phase, for several different values of the inclination. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Lambert Scattering (irrad_method='horvat')
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
print(b['irrad_method'])
Explanation: Relevant Parameters
For parameters that affect reflection and heating (irrad_frac_*) see the tutorial on reflection and heating.
The 'irrad_method' compute option dictates whether irradiation is handled according to the new Horvat scheme which includes Lambert Scattering, Wilson's original reflection scheme, or ignored entirely.
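For example, the option can also be fixed on the bundle instead of being passed to every run_compute call (a sketch; 'none' is assumed here to be the value that disables irradiation entirely):
```python
# Choose the irradiation scheme up front (sketch)
b['irrad_method'] = 'horvat'   # alternatives: 'wilson', or 'none' (assumed) to ignore irradiation
print(b['irrad_method'])
```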
End of explanation
b['teff@primary'] = 11000
b['requiv@primary'] = 2.5
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5000
b['requiv@secondary'] = 0.85
b['q@binary'] = 0.8/3.0
b.flip_constraint('mass@primary', solve_for='sma@binary')
b['mass@primary'] = 3.0
print(b.filter(qualifier=['mass', 'requiv', 'teff'], context='component'))
b['irrad_frac_refl_bol@primary'] = 1.0
b['irrad_frac_refl_bol@secondary'] = 0.6
Explanation: Influence on Light Curves (fluxes)
Let's (roughly) reproduce Figure 8 from Prsa et al. 2016 which shows the difference between Wilson and Horvat schemes for various inclinations.
<img src="prsa+2016_fig8.png" alt="Figure 8" width="600px"/>
First we'll roughly create an A0-K0 binary and set reasonable albedos.
End of explanation
b['eclipse_method'] = 'only_horizon'
Explanation: We'll also disable any eclipsing effects.
End of explanation
phases = phoebe.linspace(0,1,101)
b.add_dataset('lc', times=b.to_time(phases))
for incl in [0,30,60,90]:
b.set_value('incl@binary', incl)
b.run_compute(irrad_method='wilson')
fluxes_wilson = b.get_value('fluxes', context='model')
b.run_compute(irrad_method='horvat')
fluxes_horvat = b.get_value('fluxes', context='model')
plt.plot(phases, (fluxes_wilson-fluxes_horvat)/fluxes_wilson, label='i={}'.format(incl))
plt.xlabel('phase')
plt.ylabel('[F(wilson) - F(horvat)] / F(wilson)')
plt.legend(loc='upper center')
plt.show()
Explanation: Now we'll compute the light curves with wilson and horvat irradiation, and plot the relative differences between the two as a function of phase, for several different values of the inclination.
End of explanation |
3,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explain PyTextRank
Step1: Create some text to use....
Step2: Then add PyTextRank into the spaCy pipeline...
Step3: Examine the results
Step4: Construct a list of the sentence boundaries with a phrase vector (initialized to empty set) for each...
Step5: Iterate through the top-ranked phrases, adding them to the phrase vector for each sentence...
Step6: Let's take a look at the results...
Step7: We also construct a unit_vector for all of the phrases, up to the limit requested...
Step8: Iterate through each sentence, calculating its euclidean distance from the unit vector...
Step9: Sort the sentence indexes in ascending order of distance (most relevant sentences first)
Step10: Extract the sentences with the lowest distance, up to the limit requested... | Python Code:
import warnings
warnings.filterwarnings("ignore")
import spacy
nlp = spacy.load("en_core_web_sm")
Explanation: Explain PyTextRank: extractive summarization
How does PyTextRank perform extractive summarization on a text document?
First we perform some basic housekeeping for Jupyter, then load spaCy with a language model for English ...
End of explanation
text = "Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types."
Explanation: Create some text to use....
End of explanation
import pytextrank
tr = pytextrank.TextRank()
nlp.add_pipe(tr.PipelineComponent, name="textrank", last=True)
doc = nlp(text)
Explanation: Then add PyTextRank into the spaCy pipeline...
End of explanation
for p in doc._.phrases:
print("{:.4f} {:5d} {}".format(p.rank, p.count, p.text))
print(p.chunks)
Explanation: Examine the results: a list of top-ranked phrases in the document
End of explanation
sent_bounds = [ [s.start, s.end, set([])] for s in doc.sents ]
sent_bounds
Explanation: Construct a list of the sentence boundaries with a phrase vector (initialized to empty set) for each...
End of explanation
limit_phrases = 4
phrase_id = 0
unit_vector = []
for p in doc._.phrases:
print(phrase_id, p.text, p.rank)
unit_vector.append(p.rank)
for chunk in p.chunks:
print(" ", chunk.start, chunk.end)
for sent_start, sent_end, sent_vector in sent_bounds:
if chunk.start >= sent_start and chunk.start <= sent_end:
print(" ", sent_start, chunk.start, chunk.end, sent_end)
sent_vector.add(phrase_id)
break
phrase_id += 1
if phrase_id == limit_phrases:
break
Explanation: Iterate through the top-ranked phrases, adding them to the phrase vector for each sentence...
End of explanation
sent_bounds
for sent in doc.sents:
print(sent)
Explanation: Let's take a look at the results...
End of explanation
unit_vector
sum_ranks = sum(unit_vector)
unit_vector = [ rank/sum_ranks for rank in unit_vector ]
unit_vector
Explanation: We also construct a unit_vector for all of the phrases, up to the limit requested...
End of explanation
from math import sqrt
sent_rank = {}
sent_id = 0
for sent_start, sent_end, sent_vector in sent_bounds:
print(sent_vector)
sum_sq = 0.0
for phrase_id in range(len(unit_vector)):
print(phrase_id, unit_vector[phrase_id])
if phrase_id not in sent_vector:
sum_sq += unit_vector[phrase_id]**2.0
sent_rank[sent_id] = sqrt(sum_sq)
sent_id += 1
print(sent_rank)
Explanation: Iterate through each sentence, calculating its euclidean distance from the unit vector...
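Equivalently, each sentence is penalized by the ranks of the top phrases it does not contain; a compact sketch of the same measure:
```python
# distance_j = sqrt(sum of rank_i^2 over top phrases i missing from sentence j) - sketch
def sentence_distance(sent_vector, unit_vector):
    return sum(r * r for i, r in enumerate(unit_vector) if i not in sent_vector) ** 0.5
```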
End of explanation
from operator import itemgetter
sorted(sent_rank.items(), key=itemgetter(1))
Explanation: Sort the sentence indexes in ascending order of distance (most relevant sentences first)
End of explanation
limit_sentences = 2
sent_text = {}
sent_id = 0
for sent in doc.sents:
sent_text[sent_id] = sent.text
sent_id += 1
num_sent = 0
for sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)):
print(sent_id, sent_text[sent_id])
num_sent += 1
if num_sent == limit_sentences:
break
Explanation: Extract the sentences with the lowest distance, up to the limit requested...
End of explanation |
3,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the default setting. This setting is identical to Example 6 except that this setting adds a filter in order to remove the error message.
Step1: Compare the results between standard AM1.5d and the SMARTS-generated spectrum | Python Code:
file = build_smarts_file(
**astmg_173_03_m
)
data = send_to_smarts(file)
plt.plot(data.iloc[:,0],data.iloc[:,1],hold=True)
plt.plot(data.iloc[:,0],data.iloc[:,4])
plt.show()
data.columns
Explanation: Load the default setting. This setting is identical to Example 6 except that this setting adds a filter in order to remove the error message.
End of explanation
ill=Illumination("AM1.5d")
x,y=ill.get_spectrum("nm")
plt.plot(x,y,hold=True,label="AM1.5d")
plt.plot(data['WVLGTH'],data['BEAM_NORM+'],hold=True,label="SMART")
ill_g=Illumination("AM1.5g")
x,y=ill_g.get_spectrum("nm")
plt.plot(x,y,hold=True,label="AM1.5g")
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label="SMART")
data=get_astm_airmass(1.5)
plt.plot(x,y,hold=True,label="AM1.5g")
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label="SMART")
amass=np.linspace(0.5,3,num=10)
for a in amass:
data=get_astm_airmass(a)
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label='%s'%a)
amass=np.linspace(0.5,3,num=10)
for a in amass:
data=get_astm_airmass(a)
plt.plot(data['WVLGTH'],data['GLOB_HORIZ'],hold=True,label='%s'%a)
data.columns
Explanation: Compare the results between standard AM1.5d and the SMARTS-generated spectrum
End of explanation |
3,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style='float
Step1: Connect to server
Step2: <hr> Random points with default styling
Step3: <hr> Random small red points
Step4: <hr> Random points with all styling options
Step5: <hr> Fun with colors | Python Code:
from lightning import Lightning
from numpy import random, asarray, amin, concatenate, column_stack
from seaborn import color_palette
from sklearn import datasets
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> 3D scatter plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
Explanation: Connect to server
End of explanation
n = 100
x = random.rand(n)*100
y = random.rand(n)*100
z = random.rand(n)*100
lgn.scatter3(x,y,z)
Explanation: <hr> Random points with default styling
End of explanation
n = 100
x = random.rand(n)*100
y = random.rand(n)*100
z = random.rand(n)*100
c = [240,117,145]
lgn.scatter3(x,y,z,size=4,color=c)
Explanation: <hr> Random small red points
End of explanation
n = 100
x = random.rand(n)*100
y = random.rand(n)*100
z = random.rand(n)*100
c = [asarray(color_palette('Blues', 100)[random.choice(range(100))])*255 for i in range(n)]
s = random.rand(n)*8+1
lgn.scatter3(x, y, z, color=c, size=s)
Explanation: <hr> Random points with all styling options
End of explanation
n = 500
x = random.rand(n)*255
y = random.rand(n)*255
z = random.rand(n)*255
c = column_stack((x,y,z))
lgn.scatter3(x,y,z,color=c, size=3)
Explanation: <hr> Fun with colors
End of explanation |
3,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step2: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step3: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step4: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
return (n-1)*np.identity(n, dtype=int)
D = complete_deg(5)
assert D.shape==(5, 5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
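As a quick sanity check of the definition, every row of a graph Laplacian sums to zero; a small self-contained sketch for $K_5$:
```python
# L = D - A for K_5; each row should sum to zero (sketch)
D5 = 4 * np.identity(5, dtype=int)
A5 = np.ones((5, 5), dtype=int) - np.identity(5, dtype=int)
print((D5 - A5).sum(axis=1))   # -> [0 0 0 0 0]
```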
End of explanation
def complete_adj(n):
return np.ones((n,n), dtype=int)-np.identity(n, dtype=int)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
def L(n): return complete_deg(n)-complete_adj(n)
smalleig = np.zeros((100,))
for n in np.arange(2,100):
lap = L(n)
eig = np.linalg.eigvals(lap)
smalleig[n] = np.min(eig)  # store the result (np.append was discarding its return value)
plt.plot(np.arange(100), smalleig)
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
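For reference, the standard result is that the Laplacian spectrum of $K_n$ is the eigenvalue 0 once and the eigenvalue $n$ with multiplicity $n-1$; a quick numerical check:
```python
# Check the conjecture for n = 5 (sketch)
vals = np.sort(np.linalg.eigvals(L(5)).real)
print(vals)   # expect approximately [0., 5., 5., 5., 5.]
```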
End of explanation |
3,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pumping test analysis of a constant-rate pumping test in an anisotropic unconfined aquifer
The description and data for this example are taken from the aqtesolve website.
Lohman (1972) presented data from a constant-rate pumping test performed in an unconfined aquifer with delayed gravity response near Ione, Colorado. The thickness of the unconfined alluvium was 39.4 ft. The fully penetrating test well pumped at a rate of 1170 gallons-per-minute (gpm) for 4270 minutes. The drawdown data were recorded in an observation well located 63 ft from the test well at a depth of 19.7 ft below the static water surface.
Step1: Flow is simulated with a quasi three-dimensional model consisting of one aquifer which is divided into nlay model layers. The top and bottom of the aquifer are impermeable. The horizontal hydraulic conductivity $k$, phreatic storage $S_y$, elastic storage $S_s$, and vertical anisotropy $k_v/k_h$ are unknown. The variable p contains all unknown parameters. The well is modeled with the Well element. TTim divides the discharge along the layers such that the head is the same at the well in all screened layers.
Step2: This model is similar to the first model except for the Well function. Here, a DischargeWell is used and the discharge is evenly divided over all the layers. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import fmin
from ttim import *
# problem definition
H = 39.4 * 0.3048 # thickness [meters]
xw, yw = 0, 0 # location well
xp, yp = 63 * 0.3048, 0 # Location piezometer [meter]
Qw = 1170 * 5.45 # discharge well in [m3/d]
z_obswell = -19.7 * 0.3048 # elevation of observation well
# loading data
data = np.loadtxt('pumptest_neuman.txt') # time and drawdown
time, dd = data[:,0], data[:,1]
td = time / 60 / 24 # t in [days]
ho = -dd * 0.3048 # observed head [meter]
print('minimum and maximum time:', td.min(), td.max())
# layer definition
nlay = 12 # number of layers
zlayers = np.linspace(0, -H, nlay + 1)
zcenter = 0.5 * (zlayers[:-1] + zlayers[1:])
layer_obswell = np.argmin(np.abs(z_obswell - zcenter))
Explanation: Pumping test analysis of a constant-rate pumping test in an anisotropic unconfined aquifer
The description and data for this example are taken from the aqtesolve website.
Lohman (1972) presented data from a constant-rate pumping test performed in an unconfined aquifer with delayed gravity response near Ione, Colorado. The thickness of the unconfined alluvium was 39.4 ft. The fully penetrating test well pumped at a rate of 1170 gallons-per-minute (gpm) for 4270 minutes. The drawdown data were recorded in an observation well located 63 ft from the test well at a depth of 19.7 ft below the static water surface.
End of explanation
Saq = 1e-4 * np.ones(nlay)
Saq[0] = 0.2
ml = Model3D(kaq=10, z=zlayers, Saq=Saq, kzoverkh=0.2,
phreatictop=True, tmin=1e-4, tmax=10)
w = Well(ml, xw=xw, yw=yw, rw=0.3, tsandQ=[(0, Qw)],
layers=range(nlay))
ml.solve()
cal = Calibrate(ml)
cal.set_parameter(name='kaq0_11', initial=100, pmin=10, pmax=400)
cal.set_parameter(name='Saq0', initial=0.1, pmin=0.01, pmax=1)
cal.set_parameter(name='Saq1_11', initial=1e-4, pmin=1e-5, pmax=1e-3)
cal.set_parameter_by_reference(name='kzoverkh', parameter=ml.aq.kzoverkh[:], initial=0.2, pmin=0.01, pmax=1)
cal.series(name='obs1', x=xp, y=yp, layer=layer_obswell, t=td, h=ho)
cal.fit()
cal.parameters
k, Sy, Ss, kzoverkh = cal.parameters['optimal'].values
hm1 = ml.head(xp, yp, td, layers=layer_obswell)
plt.figure(figsize=(14,6))
plt.subplot(121)
plt.plot(time, ho, 'ko', label='Observed')
plt.plot(time, hm1[0], 'b', label='TTim')
plt.xlabel('time [min]')
plt.ylabel('Drawdown (m)')
plt.legend(loc='best');
plt.subplot(122)
plt.loglog(time, -ho, 'ko', label='Observed')
plt.loglog(time, -hm1[0], 'b', label='TTim')
plt.ylim(10, 0.01)
plt.xlabel('time [min]')
plt.ylabel('Drawdown (m)')
plt.legend(loc='best')
plt.suptitle('TTim Aquifer Test Analysis in Unconfined Aquifer');
r = pd.DataFrame(columns=['$T$ [ft$^2$/day]', '$S_y$', '$S$','$k_h/k_r$'],
index=['Lohman (1972)', 'AQTESOLV', 'TTim'])
r.loc['Lohman (1972)'] = [22000, 0.2, 0, 0.3]
r.loc['AQTESOLV'] = [22980, 0.15, 0.008166, 0.25]
r.loc['TTim'] = [k * H / 0.0929, Sy, Ss * H, kzoverkh]
r
Explanation: Flow is simulated with a quasi three-dimensional model consisting of one aquifer which is divided into nlay model layers. The top and bottom of the aquifer are impermeable. The horizontal hydraulic conductivity $k$, phreatic storage $S_y$, elastic storage $S_s$, and vertical anisotropy $k_v/k_h$ are unknown. The variable p contains all unknown parameters. The well is modeled with the Well element. TTim divides the discharge along the layers such that the head is the same at the well in all screened layers.
End of explanation
ml = Model3D(kaq=10, z=zlayers, Saq=Saq, kzoverkh=0.2,
phreatictop=True, tmin=1e-4, tmax=10)
Qp = Qw / nlay # dividing Qw equally over the layers
w = DischargeWell(ml, xw=xw, yw=yw, rw=0.3, tsandQ=[(0, Qp)],
layers=range(nlay))
ml.solve()
cal = Calibrate(ml)
cal.set_parameter(name='kaq0_11', initial=100, pmin=10, pmax=400)
cal.set_parameter(name='Saq0', initial=0.1, pmin=0.01, pmax=1)
cal.set_parameter(name='Saq1_11', initial=1e-4, pmin=1e-5, pmax=1e-3)
cal.set_parameter_by_reference(name='kzoverkh', parameter=ml.aq.kzoverkh[:], initial=0.2, pmin=0.01, pmax=1)
cal.series(name='obs1', x=xp, y=yp, layer=layer_obswell, t=td, h=ho)
cal.fit()
cal.parameters
k, Sy, Ss, kzoverkh = cal.parameters['optimal'].values
r.loc['TTim uniform discharge well'] = [k * H / 0.0929, Sy, Ss * H, kzoverkh]
r
Explanation: This model is similar to the first model except for the Well function. Here, a DischargeWell is used and the discharge is evenly divided over all the layers.
End of explanation |
3,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Circular area-sink
Circular area-sink with radius 100 m, located at the origin.
Step1: Circular area-sink and well
Discharge of well is the same as total infiltration rate of the circular area-sink.
Well and center of area-sink are located at equal distances from the $y$-axis, so that the head remains
zero along the $y$-axis. Solution approaches steady-state solution.
Step2: Two layers
Discharge of well is the same as total infiltration rate of the circular area-sink. Center of area-sink and well are at the origin. Circular area-sink in layer 0, well in layer 1. | Python Code:
N = 0.001
R = 100
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=1e-3, tmax=1e4)
ca = CircAreaSink(ml, 0, 0, 100, tsandN=[(0, 0.001)])
ml.solve()
ml.xsection(-200, 200, 0, 0, t=[0.1, 1, 10], figsize=(12, 4), sstart=-200)
x = np.linspace(-200, 200, 200)
qx = np.zeros_like(x)
for t in [0.1, 1, 10]:
for i in range(len(x)):
qx[i], qy = ml.disvec(x[i], 1e-6, t)
plt.plot(x, qx, label='time is ' + str(t))
qxb = N * np.pi * R ** 2 / (2 * np.pi * R)
plt.axhline(qxb, color='r', ls='--')
plt.axhline(-qxb, color='r', ls='--')
plt.xlabel('x (m)')
plt.ylabel('Qx (m^2/d)')
plt.legend(loc='best');
Explanation: Circular area-sink
Circular area-sink with radius 100 m, located at the origin.
End of explanation
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=1e-3, tmax=1e4, M=10)
ca = CircAreaSink(ml, -200, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 200, 0, rw=0.1, tsandQ=[(0, Q)])
ml.solve()
ml.xsection(-400, 300, 0, 0, t=[0.1, 1, 10, 100, 1000], figsize=(12, 4), sstart=-400)
t = np.logspace(-3, 4, 100)
h = ml.head(-200, 0, t)
plt.semilogx(t, h[0])
plt.xlabel('time')
plt.ylabel('head')
plt.title('head at center of area-sink');
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=10, tmax=100, M=10)
ca = CircAreaSink(ml, -200, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 200, 0, rw=0.1, tsandQ=[(0, Q)])
ml.solve()
ml.contour([-300, 300, -200, 200], ngr=40, t=20)
Explanation: Circular area-sink and well
Discharge of well is the same as total infiltration rate of the circular area-sink.
Well and center of area-sink are located at equal distances from the $y$-axis, so that the head remains
zero along the $y$-axis. Solution approaches steady-state solution.
End of explanation
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=[5, 20], z=[20, 12, 10, 0], c=[1000], Saq=[2e-4, 1e-4], tmin=1e-3, tmax=1e4, M=10)
ca = CircAreaSink(ml, 0, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 0, 0, rw=0.1, tsandQ=[(0, Q)], layers=1)
ml.solve()
ml.xsection(-200, 200, 0, 0, t=[0.1, 100], layers=[0, 1], sstart=-200)
ml.xsection(-500, 500, 0, 0, t=[0.1, 100, 1000], layers=[0, 1], sstart=-500)
Explanation: Two layers
Discharge of well is the same as total infiltration rate of the circular area-sink. Center of area-sink and well are at the origin. Circular area-sink in layer 0, well in layer 1.
End of explanation |
3,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Multi-Armed Bandits
Imagine this scenario
Step3: Algorithm 1 - Epsilon Greedy
At each round $t = 1, 2, ...$ the Epsilon Greedy algorithm will
Step4: The decrease_const parameter in the function above may look unfamiliar.
For the Epsilon Greedy algorithm, setting the $\epsilon$ can be a bit tricky. If it's too small, exploration will be slow at the beginning, and you will be slow to react to changes. If we happen to sample, say, the second-best arm the first few times, it may take a long time to discover that another arm is actually better. If $\epsilon$ is too big, you'll waste many trials pulling random arms without gaining much.
To accommodate this situation, we will set the $\epsilon$ value at a higher value in the beginning and anneal (gradually lower) it over time. Intuitively, this simply means that after exploring around for a while, we become more certain about each arm's empirical mean. After that, it's better to exploit.
In the function call above, the $\epsilon$ at turn $t$ will become
Step6: Algorithm 2 - Boltzmann Exploration (Softmax)
The Softmax algorithm picks each arm with a probability that is proportional to its average reward.
$$ p_i(t+1)= \frac{ e^{u_i(t) / \tau} }{ \sum_{j=1}^K e^{u_j(t) / \tau} }$$
Where $\tau$ is a temperature parameter, controlling the randomness of the choice. When $\tau$ = 0, the algorithm acts like pure greedy. As $\tau$ grows to infinity, the algorithm will pick arms uniformly at random.
Step8: Algorithm 3 - Upper Confidence Bounds (UCB)
In the world of statistics, whenever you estimate some unknown parameter (such as the mean of a distribution) using random samples, there is a way to quantify the uncertainty inherent in your estimate. For example, the true mean of a fair six-sided die is 3.5. But if you only roll it once and get a 2, your best estimate of the mean is just 2. Obviously that estimate is not very good, and we can quantify the confidence we have for our estimate. There are confidence bounds which can be written, for example, as
Step11: Experimenting With Bandit Algorithms
In this section, we'll use our simulated data to experiment with our algorithms. To do this we'll also need a metric to calculate how well we are doing. Recall the absolute best we can do is to always pick the webpage (arm) with the largest click through rate (ctr). Denote this best arm's probability of $w_{opt}$. Our score should be relative to how well we would have done had we chosen the best arm from the beginning. This motivates the total regret of a strategy, defined as
Step12: Section Conclusion
Step18: There are two important things to note about the Beta distribution
Step20: In our simulation, we gave the Bayesian bandit two webpages (arms) - one had a CTR of 0.25, the other had a CTR of 0.35. To start with, both webpages were displayed to the user with roughly equal probability. Over time, evidence accumulated that arm 2 was considerably better than arm 1. At this point the algorithm switched to displaying primarily webpage 2, and the overall CTR of the experiment converged to 0.35 (the optimal CTR).
We can also visualize our Beta distribution for each arm at different turns. | Python Code:
def generate_bernoulli_bandit_data( n_simulations, K ):
"""generate simulated data that represents success / trial data
Parameters
----------
n_simulations : int
the total number of turns in a simulation
K : int
the total number of arms
Returns
-------
ctr : float 1d-array, shape(K, )
the randomly generated empirical click through rate for each arm
rewards : boolean 2d-array, shape(n_simulations, K)
given the empirical ctr, simulate in each turn of the simulation,
whether the arm gets pulled will obtain the
reward or not (whether the webpage gets clicked)
"""
ctr = np.random.rand(K)
rewards = np.random.rand(n_simulations, K) < np.tile( ctr, (n_simulations, 1) )
return ctr, rewards
K = 2
n_simulations = 5
ctr, rewards = generate_bernoulli_bandit_data(n_simulations, K)
print(ctr)
print(rewards)
Explanation: Multi-Armed Bandits
Imagine this scenario: You're in a casino. There are many different slot machines (known as "one-armed bandits", as they're known for robbing you), each with a lever (an arm, if you will). You think that some slot machines pay out more frequently than others do, and your goal is to walk out of the casino with the most money.
The question is, how do you learn which slot machine is the best and get the most money in the shortest amount of time? You could try all the slot machines out to get a sense of the expected return from playing each machine. But remember, each time you play a poor performing machine, you lower your take that you walk out of the casino with that night. In order to maximize how much money you walk out of the casino with, you will have to be efficient with how you collect your data.
Rewriting the scenario above in a business context: each time a shopper comes to a webpage, we show them one of the $K$ variations of the webpage. They either click on it or do not, and we log this information about the (binary) reward for each of the $K$ variations. Next, we proceed to the next shopper and have to choose one of the $K$ webpage variations again.
Differences Between A/B Testing and Bandit Testing
In both scenarios above, we would normally determine our "winner" (the slot machine that pays out the most, or the webpage variation that gets the most clicks) using the well-known A/B testing approach. The A/B testing approach consists of a period of pure exploration, where you randomly assign equal numbers of users to one of the $K$ variations and run the test until it's valid. After that, it jumps into pure exploitation, where you send 100% of your users to the more successful version of your site.
Two possible problems with the classical A/B testing approach are that:
It jumps discretely from exploration to exploitation, when you might be able to transition more smoothly.
During the exploratory phase (the test), it wastes resources exploring inferior options in order to gather as much data as possible.
Given the exploration - exploitation dilemma stated above, the bandit testing approach tries to account for this. The following graph depicts the difference between the two types of testing methods:
<img src=compare_testing.png width=700 height=700>
If we have three variations that we wish to test, then with the A/B testing approach we try out each of the three variations with equal proportions until we run our test at week 5, and then select the variation with the highest value.
As for bandit testing, it attempts to use what it knows about each variation from the very beginning, and it continuously updates the probabilities that it will select each variation throughout the optimization process. In the above chart we can see that with each new week, the bandit testing reduces how often it selects the lower performing options and increases how often it selects the highest performing option.
You need to explore in order to figure out what works and what doesn't. On the other hand, if you exploit you take advantage of what you have learned. The bandit testing approach highlights the fact that collecting data also has its cost.
To be specific, bandit testing algorithms will try to minimize what's known as regret, which is the difference between your actual payoff and the payoff you would have collected had you played the optimal (best) options at every opportunity. There are tons of different bandit methods; in the next section we'll look at some of the more common ones.
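One standard way to write this total regret over $T$ rounds, using $w_{opt}$ for the best arm's expected reward and $w_{I(t)}$ for the expected reward of the arm picked at round $t$, is:
$$ R_T = \sum_{t=1}^{T} \big( w_{opt} - w_{I(t)} \big) $$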
Bandit Algorithms
Before introducing the algorithms and trying them out through simulations, we'll denote some notations and terminologies to formally define the problem:
arms are simply the variations that we're testing (the webpages that we're testing), and there will be $K$ of them in total.
In a simulation of t turns (the number of samples in a simulation), we'll maintain the empirical mean of the reward for each arm (e.g. if after trying out arm A for 10 turns it got 3 clicks, its empirical mean is simply 0.3); these are updated at every turn t.
$u_i(t)$ is the empirical mean of arm $i$ after $t$ turns.
$p_i(t)$ is the probability of picking arm $i$ at turn $t$.
Let's look at our simulated data before diving into each algorithm (hopefully the docstrings are self-explanatory).
End of explanation
def epsilon_greedy(counts, epsilon = 0.5, decrease_const = 1000):
    """adaptive epsilon greedy
Parameters
----------
counts : int 2d-array, shape(K, 2), where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
epsilon : float
the initial probability of choosing a random arm;
1 - epsilon is the probability of choosing the current best arm
decrease_const : int
parameter for the adaptive (annealing) epsilon
Returns
-------
    (int) the chosen arm
    """
# calculate the empirical means and the total number of simulations that were ran
totals = counts.sum(axis = 1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
epsilon /= (1 + total_counts / decrease_const)
if np.random.rand() > epsilon:
return np.argmax(empirical_means)
else:
return np.random.randint(0, empirical_means.shape[0])
# counts : stores the counts of success and failures for each arm
# where column 0 represents success, 1 represents failure.
# each arm's count is initialized as 1 to ensure that each arm is
# played at least once, to prevent "cold start" problem and
# 0 division in the beginning
counts = np.ones((K, 2))
print(counts)
epsilon_greedy(counts)
Explanation: Algorithm 1 - Epsilon Greedy
At each round $t = 1, 2, ...$ the Epsilon Greedy algorithm will:
Choose a random arm with the probability of $\epsilon$.
Choose the arm with the current best empirical mean with probability of $1-\epsilon$.
In mathematical notations:
$$
p_i(t+1)=
\begin{cases}
1 - \epsilon + \epsilon \big/ K & \quad \text{if i = } argmax_{j = 1, ..., K} \ u_j(t) \\
\epsilon \big/ K & \quad otherwise
\end{cases}
$$
Or more intuitively:
When a new visitor comes to the site, the algorithm flips a coin that comes up tails with probability $\epsilon$. When it does in fact come up tails, the algorithm will explore. The exploration phase simply chooses randomly between all possible arms with equal (uniform) probability and shows that arm to the visitor.
On the other hand, the algorithm will exploit the best known solution with probability $1- \epsilon$. To exploit, the algorithm simply looks up the current empirical means and shows the best one to the visitor.
The image below sums up the algorithm pretty well.
<img src=epsilon_greedy.png width = 700 height = 700>
End of explanation
# show adaptive learning rate
epsilon = 0.5
decrease_const = 1000
# the epsilon value after 10 turns
total_counts = 10
print( epsilon / (1 + total_counts / decrease_const) )
# after 10000 turns
total_counts = 10000
print( epsilon / (1 + total_counts / decrease_const) )
Explanation: The decrease_const parameter in the function above may look unfamiliar.
For the Epsilon Greedy algorithm, setting the $\epsilon$ can be a bit tricky. If it's too small, exploration will be slow at the beginning, and you will be slow to react to changes. If we happen to sample, say, the second-best arm the first few times, it may take a long time to discover that another arm is actually better. If $\epsilon$ is too big, you'll waste many trials pulling random arms without gaining much.
To accommodate this situation, we will set the $\epsilon$ value higher in the beginning and anneal (gradually lower) it over time. Intuitively, this simply means that after exploring around for a while, we become more certain about each arm's empirical mean. After that, it's better to exploit.
In the function call above, the $\epsilon$ at turn $t$ will become:
$$\epsilon(t) = \epsilon(0) \Big/ (1 + t/T)$$
Where $T$ is a new parameter that represents a decreasing constant.
Note that there are different ways of annealing a parameter, but the spirit is the same.
End of explanation
def softmax(counts):
    """adaptive softmax
Parameters
----------
counts : int 2d-array, shape( K, 2 ), where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
Returns
-------
    (int) the chosen arm
    """
# calculate the empirical means and the total number of simulations that were ran
totals = counts.sum(axis = 1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
# annealing (adaptive learning rate)
tau = 1 / np.log(total_counts + 0.000001)
probs_n = np.exp(empirical_means / tau)
probs_d = probs_n.sum()
probs = probs_n / probs_d
cum_prob = 0.
z = np.random.rand()
for idx, prob in enumerate(probs):
cum_prob += prob
if cum_prob > z:
return(idx)
counts = np.ones((K, 2))
softmax(counts)
Explanation: Algorithm 2 - Boltzmann Exploration (Softmax)
The Softmax algorithm picks each arm with a probability that is proportional to its average reward.
$$ p_i(t+1)= \frac{ e^{u_i(t) / \tau} }{ \sum_{j=1}^K e^{u_j(t) / \tau} }$$
Where $\tau$ is a temperature parameter, controlling the randomness of the choice. When $\tau$ = 0, the algorithm acts like pure greedy. As $\tau$ grows to infinity, the algorithm will pick arms uniformly at random.
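As a quick, hypothetical illustration of the temperature's effect (the empirical means below are made up):
import numpy as np

means = np.array([0.1, 0.2, 0.3])  # hypothetical empirical means for 3 arms
for tau in (0.01, 1.0, 100.0):
    probs = np.exp(means / tau)
    probs /= probs.sum()
    # small tau -> almost all probability on the best arm; large tau -> nearly uniform
    print(tau, np.round(probs, 3))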
End of explanation
def ucb(counts):
    """upper confidence bound (UCB)
Parameters
----------
counts : int 2d-array, shape( K, 2 ), where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
Returns
-------
    (int) the chosen arm
    """
# calculate the empirical means and the total number of simulations that were ran
totals = counts.sum(axis = 1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
bonus = np.sqrt(2 * np.log(total_counts) / totals)
return np.argmax(empirical_means + bonus)
counts = np.ones((K, 2))
ucb(counts)
Explanation: Algorithm 3 - Upper Confidence Bounds (UCB)
In the world of statistics, whenever you estimate some unknown parameter (such as the mean of a distribution) using random samples, there is a way to quantify the uncertainty inherent in your estimate. For example, the true mean of a fair six-sided die is 3.5. But if you only roll it once and get a 2, your best estimate of the mean is just 2. Obviously that estimate is not very good, and we can quantify the confidence we have for our estimate. There are confidence bounds which can be written, for example, as: "The mean of this die is 2, with a 95-th percentile lower bound of 1.4 and a 95-th percentile upper bound of 5.2."
The upper confidence bound (UCB) family of algorithms, as its name suggests, simply selects the arm with the largest upper confidence bound at each turn. The intuition is this: the more times you roll the die, the tighter the confidence bounds become; if you roll the die an infinite number of times, the width of the confidence bound shrinks to zero. In short, as the number of rolls increases, the uncertainty decreases, and so does the width of the confidence bound.
Thus, unlike the Epsilon Greedy and Softmax algorithms that only keep track of the empirical means, the UCB algorithm also maintains the number of times that each arm has been played, denoted by $n_i(t)$. Initially, each arm is played once. Afterwards, at round t, the algorithm greedily picks the arm $j(t)$ as follows:
$$j(t) = argmax_{i = 1, ..., K} \left( u_i + \sqrt{\frac{2 \ln t}{n_i}} \right)$$
We can see that the UCB algorithm will try to learn about arms that we don't know enough about. The main advantages of these types of algorithms are:
Take uncertainty of sample mean estimate into account in a smart way.
No parameters (e.g. epsilon, annealing) to validate.
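As a small sketch of that intuition, here is how the exploration bonus in the formula above shrinks as an individual arm accumulates pulls (the totals are arbitrary):
import numpy as np

total_counts = 1000  # hypothetical total number of turns played so far
for n_i in (1, 10, 100, 1000):  # number of pulls of one particular arm
    print(n_i, np.sqrt(2 * np.log(total_counts) / n_i))  # the bonus decays as n_i grows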
End of explanation
def run_bandit_algo(rewards, ctr, algo, **kwargs):
    """Run different types of bandit algorithms
Parameters
----------
rewards, ctr :
Return value of the `generate_bernoulli_bandit_data` function
algo : bandit function
[ epsilon_greedy, softmax, ucb ]
**kwargs :
additional parameters to pass in to the algo
Returns
-------
cum_regret : 1d-array, shape( n_simulations, )
The total regret accumulated over the experiment, where the regret
is measured by the maximum ctr - the chosen arm's ctr
opt_arm_percentage : float
        The percentage of plays in which the optimal arm is pulled
    """
n_simulations, K = rewards.shape
# counts : success and failures for each arm where column 0 represents
    # success, 1 represents failure. Each arm's count is initialized as 1
# to ensure that each arm is played at least once, to prevent "cold start"
# problem and 0 division in the beginning
counts = np.ones( (K, 2), dtype = int )
regret = np.zeros(n_simulations)
max_ctr_count = 0
max_ctr = np.max(ctr)
max_ctr_idx = np.argmax(ctr)
for i in range(n_simulations):
# 1. run the algorithm to obtain the arm that got pulled
# 2. update the success / failure according to the generated rewards
# 3. update the expected regret for each turn of the simulation
# 4. if the arm that got pulled is the one with the opt ctr, increment this count
arm = algo( counts, **kwargs )
if rewards[i, arm] == 1:
counts[arm, 0] += 1
else:
counts[arm, 1] += 1
regret[i] = max_ctr - ctr[arm]
if arm == max_ctr_idx:
max_ctr_count += 1
cum_regret = np.cumsum(regret)
opt_arm_percentage = max_ctr_count / n_simulations
return cum_regret, opt_arm_percentage
K = 5
n_simulations = 10000
algorithms = [epsilon_greedy, softmax, ucb]
def run_experiment(K, n_simulations, algorithms):
    """Run the bandit algorithm's simulation by the
specified number of samples for simulation, the number of arms
and the different version of algorithm
Parameters
----------
n_simulations : int
the total number of turns in a simulation
K : int
the total number of arms
algorithms : list of functions
the list of bandit algorithms to simulate
Returns
-------
ctr : float 1d-array, shape( K, )
the randomly generated empirical click through rate for each arm
algo_opt_arm_percentage : float list
the percentage of simulations that chose the best arm
algo_cum_regret : float 2d-array, shape( n_simulations, length of the algorithm )
each column stores the cumulative regret for one algorithm
fig : matplotlib figure
        the cumulative regret for each bandit algorithm
    """
algo_opt_arm_percentage = []
algo_cum_regret = np.zeros( (n_simulations, len(algorithms)) )
fig = plt.figure( figsize = (10, 7) )
ctr, rewards = generate_bernoulli_bandit_data(n_simulations, K)
for idx, algo in enumerate(algorithms):
cum_regret, opt_arm_percentage = run_bandit_algo(rewards, ctr, algo = algo)
algo_cum_regret[:, idx] = cum_regret
algo_opt_arm_percentage.append(opt_arm_percentage)
plt.semilogy(cum_regret, label = algo.__name__)
plt.title( 'Simulated Bandit Performance for K = {}'.format(K) )
plt.ylabel('Cumulative Expected Regret')
plt.xlabel('Round Index')
plt.legend(loc = 'lower right')
return ctr, algo_opt_arm_percentage, algo_cum_regret, fig
# change default figure size and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
ctr, algo_opt_arm_percentage, algo_cum_regret, fig = run_experiment(K, n_simulations, algorithms)
fig.show()
print(ctr)
print(algo_opt_arm_percentage)
Explanation: Experimenting With Bandit Algorithms
In this section, we'll use our simulated data to experiment with our algorithms. To do this we'll also need a metric to calculate how well we are doing. Recall the absolute best we can do is to always pick the webpage (arm) with the largest click through rate (ctr). Denote this best arm's probability by $w_{opt}$. Our score should be relative to how well we would have done had we chosen the best arm from the beginning. This motivates the total regret of a strategy, defined as:
$$
\begin{align}
R_T & = \sum_{t=1}^{T} \left( w_{opt} - w_{I(t)} \right) \nonumber \\
& = Tw_{opt} - \sum_{t=1}^{T} \; w_{I(t)} \nonumber
\end{align}
$$
Where $T$ is the total number of samples in the experiment and $w_{I(t)}$ is the probability of obtaining the reward (getting clicked) of the chosen arm in the $t^{th}$ turn. A total regret of 0 means the strategy is attaining the best possible score. This is likely not possible, as initially our algorithm will often make the wrong choice. Ideally, a strategy's total regret should flatten as it learns the best bandit (mathematically, we achieve $w_{I(t)} = w_{opt}$ often).
We'll run the experiment and plot the cumulative regret of the three algorithms below:
End of explanation
plt.figure( figsize = (12, 5) )
x = np.linspace(0.01, .99, 100)
params = [ (2, 5), (1, 1), (5, 5), (20, 4) ]
for a, b in params:
y = beta.pdf(x, a, b)
lines = plt.plot( x, y, label = "(%.1f,%.1f)" % (a, b), lw = 2 )
plt.fill_between( x, 0, y, alpha = 0.2, color = lines[0].get_color() )
plt.autoscale(tight = True)
plt.legend(loc = 'upper left', title = "(a,b)-parameters")
plt.show()
Explanation: Section Conclusion: The plot of the cumulative expected regret of the experiment above shows that all three algorithms have converged (the growth of the cumulative expected regret gradually flattens to a steady level). The UCB algorithm also seems to be doing better than the other two algorithms over this limited horizon.
Bayesian Bandits
All of that was great, so what's next? Well, it turns out that none of the algorithms we used in the last section are really suitable in real world applications. Why?
Recall that in the experiment above, we tested different kinds of bandit algorithms with the assumption that there is no delay between pulling an arm and observing the result. Or, more precisely, if there is a delay, it should happen before the next time we need to pull an arm. This means the following timeline is impossible: 12:00 Visitor A sees Variation 1. 12:01 Visitor B sees Variation 2. 12:02 Visitor A converts.
You may also have limited computational resources, which means that you can only update your observed data in batches, say every 2 hours. For these kinds of delayed, batched cases, the algorithms described in the last section will pull the same arm every time for those 2 hours, because they are deterministic in the absence of immediate updates. To sum up, the algorithms we just described need the distributions to be updated every single round to work properly, which may not be applicable in a lot of practical cases.
Having that caveat in mind, we'll introduce a Bayesian method that is more "immune" to this delayed feedback problem, namely Thompson Sampling.
Recall that the problem we want to solve is the following. You have come up with $K$ different variations of the webpage (e.g. different layouts), and now you wish to find the one with the best click through rate (CTR), e.g. clicking to sign up for the newsletter. Let's represent each CTR by $\theta_i$ - i.e., $\theta_i$ is the true probability that an individual user will click when they are shown the $i_{th}$ webpage. It is important to note that we don't actually know what $\theta_i$ is - if we did, we could simply choose the $i$ for which $\theta_i$ was largest and move on. We're simply pretending that we know it in order to simulate the performance of the algorithm.
Using the Bayesian approach we will construct a prior probability distribution which represents our original belief about the actual value of $\theta_i$, the ctr for the $i_{th}$ webpage. The prior we'll use is the Beta distribution. Here's a quick recap of the distribution:
Beta Distribution
The Beta distribution is very useful in Bayesian statistics. A random variable $X$ has a Beta distribution, with parameters $(\alpha, \beta)$, if its density function is:
$$f_X(x | \; \alpha, \beta ) = \frac{ x^{(\alpha - 1)}(1-x)^{ (\beta - 1) } }{B(\alpha, \beta) }$$
where $B$ is the Beta function (hence the name). The random variable $X$ is only allowed in [0,1], making the Beta distribution a popular distribution for decimal values, probabilities and proportions. The values of $\alpha$ and $\beta$, both positive values, provide great flexibility in the shape of the distribution. Below we plot some Beta distributions with different $\alpha$ and $\beta$ values:
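(As a quick numerical aside, here is a minimal sketch using scipy, which the plotting code above already relies on: the density integrates to 1, and the mean of a Beta(a, b) is a / (a + b).)
from scipy.stats import beta
from scipy.integrate import quad

print(quad(beta(2, 5).pdf, 0, 1)[0])  # ~1.0
print(beta(2, 5).mean())              # 2 / 7 ~ 0.286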
End of explanation
class BayesianBandit:
    """Thompson Sampling
Parameters
----------
K : int
total number of arms
prior_params : list of float length 2 tuple, default None, (optional)
each element of the list is a tuple, where each tuple
contains the alpha and beta parameter that represents the prior
beta distribution for each arm. If not supplied
it will assume that all arms's prior starts with an uniform distribution
Attributes
----------
trials, success : int 1d-array, shape( K, )
stores the trials and success for each arm,
e.g. trial = [ 1, 1 ] and success = [ 0, 1 ] means
that both arm has been pulled once and arm 1 has generated
        the reward (clicked)
    """
def __init__(self, K, prior_params = None):
if prior_params:
priors = namedtuple( "priors", ["alpha", "beta"] )
prior = [priors(*p) for p in prior_params]
self.alphas = np.array([p.alpha for p in prior])
self.betas = np.array([p.beta for p in prior])
else:
self.alphas = np.ones(K)
self.betas = np.ones(K)
self.trials = np.zeros(K, dtype = int)
self.success = np.zeros(K, dtype = int)
def get_recommendation(self):
        """for all arms, construct their beta distribution and
draw a random sample from it, then return the arm
        with the maximum value random sample
        """
theta = np.random.beta(self.alphas + self.success,
self.betas + self.trials - self.success)
return np.argmax(theta)
def update_result(self, arm, converted):
        """override the trials and success array, the success array
        will only be updated if it has generated a reward
        """
self.trials[arm] += 1
if converted:
self.success[arm] += 1
return self
def experiment(T, K = None, ctr = None, prior_params = None):
    """run the experiment for Thompson Sampling,
pass in ctr, the fixed ctr for each arm
or K, the total number of arms to run the experiment,
if K is supplied then it will be randomly generated
Parameters
----------
T : int
number of simulation in an experiment
K : int, , default = None, (optional)
total number of arms
ctr : float sequence, len = K, default = None, (optional)
the empirical click through rate for each arm
prior_params : list of float length 2 tuple, default None, (optional)
each element of the list is a tuple, where each tuple
contains the alpha and beta parameter that represents the prior
beta distribution for each arm. If not supplied
it will assume that all arms's prior starts with an uniform distribution
Returns
-------
ctr : float sequence, len = K
the supplied or the randomly generated ctr
trials, success : 2d-array, shape( T, K )
trials and success recorded for each turn of the experiment
alphas, betas : float 1d-array, shape( K, )
        the alpha and beta parameters for each arm
    """
if ctr:
K = len(ctr)
else:
ctr = np.random.rand(K)
trials = np.zeros( (T, K), dtype = int )
success = np.zeros( (T, K), dtype = int )
bb = BayesianBandit(K, prior_params)
for t in range(T):
arm = bb.get_recommendation()
converted = np.random.rand() < ctr[arm]
bb.update_result(arm, converted)
trials[t] = bb.trials
success[t] = bb.success
return ctr, trials, success, bb.alphas, bb.betas
def experiment_plot(ctr, trials, success):
    """Pass in the ctr, trials and success returned
by the `experiment` function and plot
the Cumulative Number of Turns For Each Arm and
    the CTR's Convergence Plot side by side
    """
T, K = trials.shape
n = np.arange(T) + 1
fig = plt.figure( figsize = (14, 7) )
plt.subplot(121)
for i in range(K):
plt.loglog( n, trials[:, i], label = "arm {}".format(i + 1) )
plt.legend(loc = "upper left")
plt.xlabel("Number of turns")
plt.ylabel("Number of turns/arm")
plt.title("Cumulative Number of Turns For Each Arm")
plt.subplot(122)
for i in range(K):
plt.semilogx( n, np.zeros(T) + ctr[i], label = "arm {}'s CTR".format(i + 1) )
plt.semilogx( n, ( success[:, 0] + success[:, 1] ) / n, label = "CTR at turn t" )
plt.axis([0, T, 0, 1])
plt.legend(loc = "upper left")
plt.xlabel("Number of turns")
plt.ylabel("CTR")
plt.title("CTR's Convergence Plot")
return fig
# number of simulation in an experiment
T = 10000
# the empirical click through rate for each arm
ctr = 0.25, 0.35
ctr, trials, success, alphas, betas = experiment(T = T, ctr = ctr)
fig = experiment_plot(ctr, trials, success)
fig.show()
Explanation: There are two important things to note about the Beta distribution:
The first is the presence of the flat distribution above, specified by parameters $(1,1)$. This is the Uniform distribution. Hence the Beta distribution is a generalization of the Uniform distribution.
The second is that there is an interesting connection between the Beta distribution and the Binomial distribution. Suppose we are interested in some unknown proportion or probability $p$. We assign a $\text{Beta}(\alpha, \beta)$ prior to $p$. We observe some data generated by a Binomial process, say $X \sim \text{Binomial}(N, p)$, with $p$ still unknown. Then our posterior is again a Beta distribution, i.e. $p | X \sim \text{Beta}( \alpha + X, \beta + N -X )$. Succinctly, one can relate the two by "a Beta prior with Binomial observations creates a Beta posterior".
In light of the above two paragraphs, if we start with a $\text{Beta}(1,1)$ prior on $p$ (which is a Uniform), observe data $X \sim \text{Binomial}(N, p)$, then our posterior is $\text{Beta}(1 + X, 1 + N - X)$.
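Here is a tiny worked example of that update (the click numbers are made up): starting from a flat Beta(1, 1) prior and observing X = 3 clicks out of N = 10 impressions gives a Beta(4, 8) posterior.
from scipy.stats import beta

a, b = 1, 1   # flat prior
X, N = 3, 10  # hypothetical observations
posterior = beta(a + X, b + N - X)  # Beta(4, 8)
print(posterior.mean())  # 4 / (4 + 8) = 0.333...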
Thompson Sampling
We start by assuming priors on the ctr of each webpage. To be explicit about the phrase "assuming the priors", we will assume that we're completely ignorant of these probabilities. So a very natural prior is the flat prior over 0 to 1, $\text{Beta}(\alpha=1,\beta=1)$. The algorithm then proceeds as follows:
For each turn:
Sample a random variable $X_i$ from the prior of arm $i$, for all $i$ ($K$ in total).
Select the arm with largest sample, i.e. select $i = \text{argmax}\; X_i$.
Observe the result of pulled arm $i$, and update your prior with that arm $i$.
Return to 1.
Like all the algorithms we've introduced before, Thompson Sampling suggests that we should not discard losers, but we should pick them at a decreasing rate as we gather confidence that there exist better webpages (arms). This follows because there is always a non-zero chance that a webpage with a lower ctr will get chosen, but the probability of this event decreases as we play more rounds.
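A minimal Monte Carlo sketch of that claim (the ctrs and counts below are made up): the chance that the worse arm wins a Thompson draw shrinks as observations accumulate.
import numpy as np

rng = np.random.RandomState(0)
for n in (10, 100, 1000):  # hypothetical number of observations per arm so far
    worse = rng.beta(1 + 0.25 * n, 1 + 0.75 * n, size=100000)   # ~25% ctr arm
    better = rng.beta(1 + 0.35 * n, 1 + 0.65 * n, size=100000)  # ~35% ctr arm
    print(n, np.mean(worse > better))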
End of explanation
def plot_beta_dist(ctr, trials, success, alphas, betas, turns):
    """Pass in the ctr, trials and success, alphas, betas returned
by the `experiment` function and the number of turns
    and plot the beta distribution for all the arms in that turn
    """
    subplot_num = len(turns) // 2  # integer division so plt.subplot receives an int
x = np.linspace(0.001, .999, 200)
fig = plt.figure( figsize = (14, 7) )
for idx, turn in enumerate(turns):
plt.subplot(subplot_num, 2, idx + 1)
for i in range( len(ctr) ):
y = beta( alphas[i] + success[turn, i],
betas[i] + trials[turn, i] - success[ turn, i ] ).pdf(x)
line = plt.plot( x, y, lw = 2, label = "arm {}".format( i + 1 ) )
color = line[0].get_color()
plt.fill_between(x, 0, y, alpha = 0.2, color = color)
plt.axvline(x = ctr[i], color = color, linestyle = "--", lw = 2)
plt.title("Posteriors After {} turns".format(turn) )
plt.legend(loc = "upper right")
return fig
turns = [1, 100, 1000, 9999]
posterior_fig = plot_beta_dist(ctr, trials, success, alphas, betas, turns)
posterior_fig.show()
Explanation: In our simulation, we gave the Bayesian bandit two webpages (arms) - one had a CTR of 0.25, the other had a CTR of 0.35. To start with, both webpages were displayed to the user with roughly equal probability. Over time, evidence accumulated that arm 2 was considerably better than arm 1. At this point the algorithm switched to displaying primarily webpage 2, and the overall CTR of the experiment converged to 0.35 (the optimal CTR).
We can also visualize the Beta distribution for each arm at different turns.
End of explanation |
3,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Classification with spaCy
This walkthrough is based on this spaCy tutorial.
Train a convolutional neural network text classifier on the
IMDB dataset, using the TextCategorizer component. The dataset will be loaded
automatically via Thinc's built-in dataset loader. The model is added to
spacy.pipeline, and predictions are available via doc.cats.
Set Up Environment
This notebook has been tested with the following package versions
Step1: Set Up Verta
Step2: Imports
Step5: Helper Functions
Step6: Train Model
Step7: Log for Deployment
Create Wrapper Class
Verta deployment expects a particular interface for its models.
They must expose a predict() function, so we'll create a thin wrapper class around our spaCy pipeline.
Step8: Create Deployment Artifacts
Verta deployment also needs a couple more details about the model.
What do its inputs and outputs look like?
Step9: What PyPI-installable packages (with version numbers) are required to deserialize and run the model?
Step10: Log Model
Step11: Deployment
Step12: Click the link above to view your Experiment Run in the Verta Web App, and deploy it.
Once it's ready, you can make predictions against the deployed model. | Python Code:
# Python >3.5
!pip install verta
!pip install spacy==2.1.6
!python -m spacy download en
Explanation: Text Classification with spaCy
This walkthrough is based on this spaCy tutorial.
Train a convolutional neural network text classifier on the
IMDB dataset, using the TextCategorizer component. The dataset will be loaded
automatically via Thinc's built-in dataset loader. The model is added to
spacy.pipeline, and predictions are available via doc.cats.
Set Up Environment
This notebook has been tested with the following package versions:
(you may need to change pip to pip3, depending on your own Python environment)
End of explanation
HOST = 'app.verta.ai'
PROJECT_NAME = 'Film Review Classification'
EXPERIMENT_NAME = 'spaCy CNN'
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST, use_git=False)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
Explanation: Set Up Verta
End of explanation
from __future__ import print_function
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import random
import six
import numpy as np
import thinc.extra.datasets
import spacy
from spacy.util import minibatch, compounding
Explanation: Imports
End of explanation
def load_data(limit=0, split=0.8):
    """Load data from the IMDB dataset."""
# Partition off part of the dataset to train and test
train_data, _ = thinc.extra.datasets.imdb()
random.shuffle(train_data)
train_data = train_data[-limit:]
texts, labels = zip(*train_data)
cats = [{"POSITIVE": bool(y), "NEGATIVE": not bool(y)} for y in labels]
split = int(len(train_data) * split)
return (texts[:split], cats[:split]), (texts[split:], cats[split:])
def evaluate(tokenizer, textcat, texts, cats):
    """Evaluate with text data; calculates precision, recall and F score."""
docs = (tokenizer(text) for text in texts)
tp = 0.0 # True positives
fp = 1e-8 # False positives
fn = 1e-8 # False negatives
tn = 0.0 # True negatives
for i, doc in enumerate(textcat.pipe(docs)):
gold = cats[i]
for label, score in doc.cats.items():
if label not in gold:
continue
if label == "NEGATIVE":
continue
if score >= 0.5 and gold[label] >= 0.5:
tp += 1.0
elif score >= 0.5 and gold[label] < 0.5:
fp += 1.0
elif score < 0.5 and gold[label] < 0.5:
tn += 1
elif score < 0.5 and gold[label] >= 0.5:
fn += 1
precision = tp / (tp + fp)
recall = tp / (tp + fn)
if (precision + recall) == 0:
f_score = 0.0
else:
f_score = 2 * (precision * recall) / (precision + recall)
return {"textcat_p": precision, "textcat_r": recall, "textcat_f": f_score}
Explanation: Helper Functions
End of explanation
hyperparams = {
'model':'en',
'n_iter': 2, # epochs
'n_texts': 500, # num of training samples
'architecture': 'simple_cnn',
'num_samples': 1000,
'train_test_split': 0.8,
'dropout': 0.2
}
run.log_hyperparameters(hyperparams)
# using the basic en model
try:
nlp = spacy.load(hyperparams['model']) # load existing spaCy model
except OSError:
nlp = spacy.blank(hyperparams['model']) # create blank Language class
print("Created blank '{}' model".format(hyperparams['model']))
else:
print("Loaded model '{}'".format(nlp))
# add the text classifier to the pipeline if it doesn't exist
if "textcat" not in nlp.pipe_names:
textcat = nlp.create_pipe(
"textcat",
config={
"exclusive_classes": True,
"architecture": hyperparams['architecture'],
}
)
nlp.add_pipe(textcat, last=True)
# otherwise, get it, so we can add labels to it
else:
textcat = nlp.get_pipe("textcat")
# add label to text classifier
_= textcat.add_label("POSITIVE")
_= textcat.add_label("NEGATIVE")
# load the IMDB dataset
print("Loading IMDB data...")
(train_texts, train_cats), (dev_texts, dev_cats) = load_data(limit=hyperparams['num_samples'],
split=hyperparams['train_test_split'])
print(
"Using {} examples ({} training, {} evaluation)".format(
hyperparams['num_samples'], len(train_texts), len(dev_texts)
)
)
train_data = list(zip(train_texts, [{"cats": cats} for cats in train_cats]))
# sample train data
train_data[:1]
# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "textcat"]
print("other pipes:", other_pipes)
with nlp.disable_pipes(*other_pipes): # only train textcat
optimizer = nlp.begin_training()
print("Training the model...")
print("{:^5}\t{:^5}\t{:^5}\t{:^5}".format("LOSS", "P", "R", "F"))
batch_sizes = compounding(4.0, 32.0, 1.001)
for i in range(hyperparams['n_iter']):
losses = {}
# batch up the examples using spaCy's minibatch
random.shuffle(train_data)
batches = minibatch(train_data, size=batch_sizes)
for batch in batches:
texts, annotations = zip(*batch)
nlp.update(texts, annotations, sgd=optimizer, drop=hyperparams['dropout'], losses=losses)
with textcat.model.use_params(optimizer.averages):
# evaluate on the dev data split off in load_data()
scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)
print(
"{0:.3f}\t{1:.3f}\t{2:.3f}\t{3:.3f}".format( # print a simple table
losses["textcat"],
scores["textcat_p"],
scores["textcat_r"],
scores["textcat_f"],
)
)
run.log_observation('loss', losses['textcat'])
run.log_observation('precision', scores['textcat_p'])
run.log_observation('recall', scores['textcat_r'])
run.log_observation('f_score', scores['textcat_f'])
Explanation: Train Model
End of explanation
class TextClassifier:
def __init__(self, nlp):
self.nlp = nlp
def predict(self, input_list): # param must be a list/batch of inputs
predictions = []
for text in input_list:
scores = self.nlp(text).cats
if scores['POSITIVE'] > scores['NEGATIVE']:
predictions.append("POSITIVE")
else:
predictions.append("NEGATIVE")
return np.array(predictions) # response currently must be a NumPy array
input_list = [
"This movie was subpar at best.",
"Plot didn't make sense."
]
model = TextClassifier(nlp)
model.predict(input_list)
Explanation: Log for Deployment
Create Wrapper Class
Verta deployment expects a particular interface for its models.
They must expose a predict() function, so we'll create a thin wrapper class around our spaCy pipeline.
End of explanation
from verta.utils import ModelAPI # Verta-provided utility class
model_api = ModelAPI(
input_list, # example inputs
model.predict(input_list), # example outputs
)
Explanation: Create Deployment Artifacts
Verta deployment also needs a couple more details about the model.
What do its inputs and outputs look like?
End of explanation
requirements = ["numpy", "spacy", "thinc"]
# this could also have been a path to a requirements.txt file on disk
run.log_requirements(requirements)
Explanation: What PyPI-installable packages (with version numbers) are required to deserialize and run the model?
End of explanation
# test the trained model
test_text = 'The Lion King was very entertaining. The movie was visually spectacular.'
doc = nlp(test_text)
print(test_text)
print(doc.cats)
run.log_model(
model,
model_api=model_api,
)
Explanation: Log Model
End of explanation
run
Explanation: Deployment
End of explanation
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, run.id)
deployed_model.predict(["I would definitely watch this again!"])
Explanation: Click the link above to view your Experiment Run in the Verta Web App, and deploy it.
Once it's ready, you can make predictions against the deployed model.
End of explanation |
3,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"Detection of anomalous tweets using supervising outlier techniques"
Importing the Dependencies and Loading the Data
Step1: Data Preparation
Data prepration with the available data. I made the combination such that the classes are highly imbalanced making it apt for anomaly detection problem
Step2: Data pre-processing - text analytics to create a corpus
1) Converting text to matrix of token counts [Bag of words]
Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to its lexical roots
2) Stemmer, tokenizer(removes non-letters) are created by ourselves.These are passed as parameters to CountVectorizer of sklearn.
3) Extracting important words and using them as input to the classifier
Feature Engineering
Step3: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Note
Step4: Fit_Transform
Step5: Train-Test Split
Step6: A text polarity depends on what words appear in that text, discarding any grammar or word order but keeping multiplicity.
1) All the above text processing for features ended up with the same entries in our dataset
2) Instead of having them defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.
3) These vectors are used as features to train a classifier.
Training the model | Python Code:
import nltk
import pandas as pd
import numpy as np
data = pd.read_csv("original_train_data.csv", header = None,delimiter = "\t", quoting=3,names = ["Polarity","TextFeed"])
#Data Visualization
data.head()
Explanation: "Detection of anomalous tweets using supervised outlier techniques"
Importing the Dependencies and Loading the Data
End of explanation
data_positive = data.loc[data["Polarity"]==1]
data_negative = data.loc[data["Polarity"]==0]
anomaly_data = pd.concat([data_negative.sample(n=10),data_positive,data_negative.sample(n=10)])
anomaly_data.Polarity.value_counts()
#Number of words per sentence
print ("No of words for sentence in train data",np.mean([len(s.split(" ")) for s in anomaly_data.TextFeed]))
Explanation: Data Preparation
Data preparation with the available data. I made the combination such that the classes are highly imbalanced, making it apt for an anomaly detection problem.
End of explanation
import re
from sklearn.feature_extraction.text import CountVectorizer
nltk.download()
from nltk.stem.porter import PorterStemmer
''' this code is taken from
http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
'''
# a stemmer widely used
stemmer = PorterStemmer()
def stem_tokens(tokens, stemmer):
stemmed = []
for item in tokens:
stemmed.append(stemmer.stem(item))
return stemmed
def tokenize(text):
# remove non letters
text = re.sub("[^a-zA-Z]", " ", text)
# tokenize
tokens = nltk.word_tokenize(text)
# stem
stems = stem_tokens(tokens, stemmer)
return stems
Explanation: Data pre-processing - text analytics to create a corpus
1) Converting text to matrix of token counts [Bag of words]
Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to their lexical roots
2) The stemmer and tokenizer (which removes non-letters) are created by ourselves. These are passed as parameters to CountVectorizer of sklearn, as sketched in the quick example below.
3) Extracting important words and using them as input to the classifier
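As a quick sanity check of the stemmer/tokenizer defined above (the sample sentence is made up, and the exact output depends on the NLTK version):
# the stems are lowercased and reduced to their roots,
# typically yielding something like ['love', 'the', 'new', 'design']
print(tokenize("Loving the new design!!!"))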
Feature Engineering
End of explanation
#Max_Features selected as 80 - can be changed for the better trade-off
vector_data = CountVectorizer(
analyzer = 'word',
tokenizer = tokenize,
lowercase = True,
stop_words = 'english',
max_features = 90
)
Explanation: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Note: I am not using frequencies (TfidfTransformer, apt for longer documents) because the text size is small and can be dealt with using occurrences (CountVectorizer).
End of explanation
#using only the "Text Feed" column to build the features
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())
#converting the data into the array
features = features.toarray()
features.shape
#printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print (vocab)
# Sum up the counts of each vocabulary word
dist = np.sum(features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the data set
a = zip(vocab,dist)
print (list(a))
Explanation: Fit_Transform:
1) Fits the model and learns the vocabulary
2) transforms the data into feature vectors
End of explanation
from sklearn.cross_validation import train_test_split
#80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(
features,
anomaly_data.Polarity,
train_size=0.80,
random_state=1234)
print ("Training data - positive and negative values")
print (pd.value_counts(pd.Series(y_train)))
print ("Testing data - positive and negative values")
print (pd.value_counts(pd.Series(y_test)))
Explanation: Train-Test Split
End of explanation
from sklearn.svm import SVC
clf = SVC()
clf.fit(X=X_train,y=y_train)
wclf = SVC(class_weight={0: 20})
wclf.fit(X=X_train,y=y_train)
y_pred = clf.predict(X_test)
y_pred_weighted = wclf.predict(X_test)
from sklearn.metrics import classification_report
print ("Basic SVM metrics")
print(classification_report(y_test, y_pred))
print ("Weighted SVM metrics")
print(classification_report(y_test, y_pred_weighted))
from sklearn.metrics import confusion_matrix
print ("Basic SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred))
print ("Weighted SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred_weighted))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_weighted).ravel()
(tn, fp, fn, tp)
Explanation: A text's polarity depends on what words appear in that text, discarding any grammar or word order but keeping multiplicity.
1) All the above text processing for features ended up with the same entries in our dataset
2) Instead of having them defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.
3) These vectors are used as features to train a classifier.
Training the model
End of explanation |
3,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Modeling
2017-04-28, Josh Montague
This is Part 2 of the "Time Series Modeling in Python" series.
In Part 1, we looked at data structures within the pandas library that make working with time series particularly convenient.
Here, in Part 2, we'll look at some of the simplest methods for modeling a time series, making forecasts, and evaluating the accuracy of our models. We'll take advantage of both pandas and numpy functionality since they're both great for these sorts of tasks.
Step1: < optional web data acquisition >
If you want to experiment with other easily accessible data...
<!-- data acquisition -->
Step2: Simple models
Let's start with the simplest set of things we can possibly do in our forecasting model[1]. These models are computationally simple, require only a small amount of data to make predictions, and generally provide a baseline over which to compare more complicated models.
Generally, we will want some form of a forecast() method.
But we'll get to abstractions in a moment. First, let's start by just doing it directly.
Naive method ( + .shift())
Let's start with the most straightforward and obvious forecasting method possible
Step3: Note that the shift() method effectively slides the index up (relative to the data), keeping the same index range but clipping or extending the data column as needed.
What if you want the data column to have the same range, but shift the index? For this case, the shift() method has a freq kwarg to use instead of the (implicit) periods kwarg.
Step4: Note that the data column stayed fixed, but we adjusted each value in the index according to the freq kwarg while maintaining the period e.g. one month.
Step5: Ok, great, now that we can simply shift the existing data column; let's attach that our original frame as our first "forecast" series.
Step6: We made a forecast!
Step7: Now how do we assess our model?
It's always a good idea to do a visual inspection of the data. Both in the original of the data (time series), and also in the space of your truth vs predictions.
Step8: Not so bad! Certainly better than a random number generator.
Another common way to inspect a prediction (in any supervised learning task) is to plot the true data against the predicted data and fit a line through it. In the case where prediction = truth, this will be a beautiful, straight line, with zero residual error across the entire data set.
If you do see a beautiful line like this, you most likely made a mistake somewhere; the real world is messy. Double check that you didn't accidentally train on your hold-out data, or evaluate on your training data, or something similar.
In the real world (if your model is good), your plot should appear roughly linear, and should have some variance around the line you would draw through the data.
Step10: Looks pretty good! We'll come back to a more quantitative assessment of this prediction in a bit.
Before we move to other models, let's convert the naive model to a functional form. This way, as we develop more forecasting models, we can use the same API for consistency.
numpy > pandas
While we typically have our data in DataFrames, we'll see that most (all?) of our forecasting methods don't make use of the extra metadata in a DataFrame or Series. As a result, we'll define our forecasting models with the expectation of numpy arrays as input (which are more general), and we'll know that we can always use the .values attribute to get the respective numpy array data out of a pandas object.
Here's an implementation of the naive model in functional form. It's a lot of typing for a simple model, but the pattern will prove useful.
Step11: We can use this function to create a similar DataFrame to the one we used earlier. We'll loop over the observed data, and use the past data as our input.
Step13: NOW FOR MOAR MODELS!
Seasonal naive method
Instead of just forecasting based on the previous point, we might know there is a consistent cycle within the data. Often, the term "seasonal" is used to describe any sort of cyclic behavior. Seems sloppy, but oh well. In this case, we can prescribe how far back to look in the series, and forecast the corresponding value.
Step15: Note that we aren't paying any attention to the observed resolution of the data, only the relative position of any cyclic behavior.
Since we want to create a new forecast array (as before), let's make a function that encapsulates the iteration over an observed array and returns the corresponding forecast array.
Step17: Mean method
Another straightforward model consists of averaging the past n points and forecasting using that mean value.
Step19: Drift method
The last simple model we'll look at involves a linear extrapolation from recently observed data. In this case, the prediction is an adjustment from the most recent observed point, according to the slope extrapolated from two points
Step20: Model accuracy metrics
At this point, we've looked at four different simple models
Step21: Another view on these charts to quantify the quality of a model is to look at the distribution of residuals.
Step22: They all look pretty good, but we can be more specific.
$R^2$ (coefficient of determination) score
The $R^2$ score is a common regression scoring method. The user guide (and wikipedia) have nice explanations of both the defintion, and the domain; in short, the maximum value is 1.0, scoring a constant prediction that is the mean value of the observed data will give 0.0, and it can be arbitraily negative). The $R^2$ metric is basically what your eyeballs are assessing when you look at a scatter plot of the truth data vs. the predicted data.
Let's look at the $R^2$ value for the models that we've introduced so far.
Step23: Feel free to experiment (or BYO GridSearchCV), but I think you'll typically find these are all >0.95, and the naive model frequently has the highest value.
Mean squared error
The mean squared error (MSE) is a simpler calculation that is the expected value of the quadratic error.
$$ MSE(y,\hat{y}) = \frac{1}{n_{samples}} \sum_i (y_i - \hat{y}_i)^2 $$
Step24: Since MSE isn't normalized, it's a little hard to eyeball (big numbers), and to compare different models. There are also mean absolute, and median absolute errors, if you feel like you want to "penalize" outliers in any particular fashion.
I like $R^2$ because it's relatively straightforward, but it's good to know there are alternatives.
Wrap-up
While none of these models are particularly fancy, they provide great baselines for any other model you can dream up. They are generally very fast and straightforward to evaluate, and provide a surprisingly high accuracy on many forecasting tasks.
For some broader context on forecasting (if, sadly, written in R), check out Ref [1].
Hopefully, there are still two classes of time series modeling that we'll look at in the future
Step25: Window functions
The official docs include many examples, but a common pattern is creating a Rolling object - which has the notion of a window size, and a window type (square, triangular, etc.) - and then applying functions like you would after a groupby or a resample.
Step26: Exponentially weighted windows
Finally, if you should want them, there are windows that have an exponentially decaying weight. These are relatively new to pandas, and havequestionable documentation. But, they're configured by either a span (much like the rolling window), the decay constant alpha, or a couple of related. | Python Code:
import copy
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
from sklearn.metrics import r2_score, mean_squared_error
%matplotlib inline
plt.rcParams["figure.figsize"] = (8,6)
# `unemploy.csv` is included in the repo - it's a small csv
! head misc/unemploy.csv
# get data from local dir
data = pd.read_csv('misc/unemploy.csv', parse_dates=True, index_col=0)
data.head()
# this is for the data we have locally
data_name = 'UNEMPLOY'
Explanation: Time Series Modeling
2017-04-28, Josh Montague
This is Part 2 of the "Time Series Modeling in Python" series.
In Part 1, we looked at data structures within the pandas library that make working with time series particularly convenient.
Here, in Part 2, we'll look at some of the simplest methods for modeling a time series, making forecasts, and evaluating the accuracy of our models. We'll take advantage of both pandas and numpy functionality since they're both great for these sorts of tasks.
End of explanation
# what does this data look like?
data.plot()
Explanation: < optional web data acquisition >
If you want to experiment with other easily accessible data...
<!-- data acquisition -->
End of explanation
# check the top of the dataset
data.head()
# what does the shift() method do?
# (remember that pandas methods return a new df)
data.shift().head()
# what happens at the end of the series?
data.tail()
# 'periods=1' is the default first arg
data.shift(periods=1).tail()
Explanation: Simple models
Let's start with the simplest set of things we can possibly do in our forecasting model[1]. These models are computationally simple, require only a small amount of data to make predictions, and generally provide a baseline over which to compare more complicated models.
Generally, we will want some form of a forecast() method.
But we'll get to abstractions in a moment. First, let's start by just doing it directly.
Naive method ( + .shift())
Let's start with the most straightforward and obvious forecasting method possible: the next data point will be the same as the previous data point. This approach (reasonably) assumes that in many real world systems there is some form of inertia or momentum in the underlying processes that is greater than the associated random fluctuations.
In form, what we'd like is a DataFrame with a column of observed data and a column of forecasted data.
To start, our forecasted column will be a shifted version of the existing values column. We can do that with pandas' .shift() method.
End of explanation
data.head()
# a timely reminder of bit.ly/pd-offsets
data.shift(freq='2D').head()
Explanation: Note that the shift() method effectively slides the index up (relative to the data), keeping the same index range but clipping or extending the data column as needed.
What if you want the data column to have the same range, but shift the index? For this case, the shift() method has a freq kwarg to use instead of the (implicit) periods kwarg.
End of explanation
data.tail()
data.shift(freq='1M').tail()
Explanation: Note that the data column stayed fixed, but we adjusted each value in the index according to the freq kwarg while maintaining the period e.g. one month.
End of explanation
# use a copy for our forecast frame
d = copy.deepcopy(data)
# assign the "forecast" to be the 1-period shifted version of the "observed" column
d['forecast'] = data.shift(1)[data_name]
d.head()
Explanation: Ok, great, now that we can simply shift the existing data column; let's attach that our original frame as our first "forecast" series.
End of explanation
Image(filename='misc/kermit.png')
Explanation: We made a forecast!
End of explanation
# how does our forecast look by eye?
d.plot()
plt.title('naive')
Explanation: Now how do we assess our model?
It's always a good idea to do a visual inspection of the data. Both in the original of the data (time series), and also in the space of your truth vs predictions.
End of explanation
plt.scatter(d[data_name], d[data_name])
plt.xlabel('truth')
plt.ylabel('also truth')
plt.title('this will never happen');
plt.scatter(d[data_name], d['forecast'])
plt.xlabel('truth')
plt.ylabel('forecast')
plt.title("variance is a sign that you're alive");
Explanation: Not so bad! Certainly better than a random number generator.
Another common way to inspect a prediction (in any supervised learning task) is to plot the true data against the predicted data and fit a line through it. In the case where prediction = truth, this will be a beautiful, straight line, with zero residual error across the entire data set.
If you do see a beautiful line like this, you most likely made a mistake somewhere; the real world is messy. Double check that you didn't accidentally train on your hold-out data, or evaluate on your training data, or something similar.
In the real world (if your model is good), your plot should appear roughly linear, and should have some variance around the line you would draw through the data.
End of explanation
def fc_naive(data, **kwargs):
    """The 'naive' forecast of the next point in `data` (presumed to be
ordered in time) is equal to the last point observed in the series.
`data` should be a 1-D numpy array
    Returns a single-valued forecast for the next value in the series.
    """
forecast = data[-1]
return forecast
Explanation: Looks pretty good! We'll come back to a more quantitative assessment of this prediction in a bit.
Before we move to other models, let's convert the naive model to a functional form. This way, as we develop more forecasting models, we can use the same API for consistency.
numpy > pandas
While we typically have our data in DataFrames, we'll see that most (all?) of our forecasting methods don't make use of the extra metadata in a DataFrame or Series. As a result, we'll define our forecasting models with the expectation of numpy arrays as input (which are more general), and we'll know that we can always use the .values attribute to get the respective numpy array data out of a pandas object.
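For instance, a quick look at what .values hands back for our observed series:
observed = data[data_name].values
print(type(observed), observed[:3])  # a plain numpy.ndarray of the observed values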
Here's an implementation of the naive model in functional form. It's a lot of typing for a simple model, but the pattern will prove useful.
End of explanation
# container for the forecast
forecasts = []
# loop over positions in the array
for idx in range(len(data[data_name])):
# subset the series from beginning to position idx
array_slice = data[data_name].iloc[:idx].values
if idx < 10:
# a view behind the scenes...
print('iteration {}, array_slice: {}'.format(idx, array_slice))
# make a forecast using that series
try:
forecasts.append( fc_naive(array_slice) )
except IndexError:
# the first position won't have a forecast value
forecasts.append(np.nan)
d = copy.deepcopy(data)
d['forecast'] = forecasts
d.head()
d.plot()
plt.title('same naive graph');
Explanation: We can use this function to create a similar DataFrame to the one we used earlier. We'll loop over the observed data, and use the past data as our input.
End of explanation
def fc_snaive(data, n=7, **kwargs):
    """The 'seasonal naive' forecast of the next point in `data` (presumed to be
ordered in time) is equal to the point observed `n` points prior in the series.
`data` should be a 1-D numpy array
`n` should be an integer
    Returns a single-valued forecast for the next value in the series.
    """
forecast = data[-n]
return forecast
Explanation: NOW FOR MOAR MODELS!
Seasonal naive method
Instead of just forecasting based on the previous point, we might know there is a consistent cycle within the data. Often, the term "seasonal" is used to describe any sort of cyclic behavior. Seems sloppy, but oh well. In this case, we can prescribe how far back to look in the series, and forecast the corresponding value.
End of explanation
def forecast_series(observed, fc_func, **kwargs):
    """Returns an array of forecasted values (using `fc_func` and any `kwargs` like a window `n`)
    for each value in the input np.array `observed`.
    """
# container for the forecast
forecasts = []
for idx in range(len(observed)):
# subset the series from beginning to position idx
array_slice = observed[:idx]
# make a forecast using that series
try:
forecasts.append( fc_func(array_slice, **kwargs) )
except IndexError:
# the first position won't have a forecast value
forecasts.append(np.nan)
return forecasts
d = copy.deepcopy(data)
# our data is monthly, and i have a hunch about quarterly cycles, so let's use n=3 (3 months in a quarter)
forecasts = forecast_series(d[data_name].values, fc_snaive, n=3)
d['forecast'] = forecasts
d.head()
d.plot()
plt.title('seasonal naive (n=3)')
plt.scatter(d[data_name], d['forecast'])
plt.xlabel('truth')
plt.ylabel('forecast')
plt.title('seasonal naive method')
Explanation: Note that we aren't paying any attention to the observed resolution of the data, only the relative position of any cyclic behavior.
Since we want to create a new forecast array (as before), let's make a function that encapsulates the iteration over an observed array and returns the corresponding forecast array.
End of explanation
def fc_mean(data, n=3, **kwargs):
    """The 'mean' forecast of the next point in `data` (presumed to be
ordered in time) is equal to the mean value of the most recent `n` observed points.
`data` should be a 1-D numpy array
`n` should be an integer
    Returns a single-valued forecast for the next value in the series.
    """
# don't start averaging until we've seen n points
if len(data[-n:]) < n:
forecast = np.nan
else:
# nb: we'll keep the forecast as a float
forecast = np.mean(data[-n:])
return forecast
d = copy.deepcopy(data)
# let's try a 4-point rolling mean
forecasts = forecast_series(d[data_name].values, fc_mean, n=4)
d['forecast'] = forecasts
d.head()
d.plot()
plt.title('mean forecast (n=3)');
plt.scatter(d[data_name], d['forecast'])
plt.xlabel('truth')
plt.ylabel('forecast')
plt.title('mean method')
Explanation: Mean method
Another straightforward model consists of averaging the past n points and forecasting using that mean value.
End of explanation
def fc_drift(data, n=3, **kwargs):
    """The 'drift' forecast of the next point in `data` (presumed to be
    ordered in time) is a linear extrapolation from `n` points ago, through the
most recent point.
`data` should be a 1-D numpy array
`n` should be an integer
Returns a single-valued forecast for the next value in the series.
yi = data[-n]
yf = data[-1]
slope = (yf - yi) / (n-1)
forecast = yf + slope
return forecast
d = copy.deepcopy(data)
# let's try a 5-point drift method
forecasts = forecast_series(d[data_name].values, fc_drift, n=5)
d['forecast'] = forecasts
d.head()
d.plot()
plt.title('drift method');
plt.scatter(d[data_name], d['forecast'])
plt.xlabel('truth')
plt.ylabel('forecast')
Explanation: Drift method
The last simple model we'll look at involves a linear extrapolation from recently observed data. In this case, the prediction is an adjustment from the most recent observed point, according to the slope extrapolated from two points: one at n points prior, and the most recent point.
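In symbols (reading straight off the code above, where yi = data[-n] and yf = data[-1]), the forecast is:
$$ \hat{y}_{T+1} = y_T + \frac{y_T - y_{T-n+1}}{n-1} $$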
End of explanation
d = copy.deepcopy(data)
# feel free to tweak the 'n' args
model_list = [('naive', fc_naive, 0),
('seasonal_naive', fc_snaive, 3),
('mean', fc_mean, 3),
('drift', fc_drift, 5)]
# create new cols for each model
for name, model, nn in model_list:
d[name] = forecast_series(d[data_name].values, model, n=nn)
d.head()
d.plot()
plt.title('ALL THE FORECASTS!');
for name, series_data in d.items():
plt.plot(d[data_name], series_data, 'o', alpha=0.6, label=name)
plt.xlabel('truth')
plt.ylabel('pred')
plt.title('another view')
plt.legend()
Explanation: Model accuracy metrics
At this point, we've looked at four different simple models: naive, seasonal naive, (rolling) mean, and drift. A next reasonable question is: which one best reflects our data? To answer that, we'll look to some metrics for model accuracy measurements.
As is often the case, scikit-learn has a module that supplies a handful of these out of the box. The first thing you'll note is that there are more metrics for classification accuracy than for regression accuracy evaluation. Still, at least we don't have to reinvent the wheel!
First, let's make some forecasts...
End of explanation
comparison = 'naive'
(d[data_name] - d[comparison]).hist(bins=30)
plt.xlabel('residuals')
plt.title('residual distribution for method: {}'.format(comparison));
Explanation: Another view on these charts to quantify the quality of a model is to look at the distribution of residuals.
End of explanation
print('* R2 scores (bigger = better) *\n')
# calculate R2 for each model (against the observed data)
for name, series_data in d.items():
# strip rows with nans
subdf = d[[data_name, name]].dropna()
truth = subdf[data_name].values
pred = subdf[name].values
# calculate metric
r2 = r2_score(truth, pred)
print('{} - {:.4f}'.format(name, r2))
Explanation: They all look pretty good, but we can be more specific.
$R^2$ (coefficient of determination) score
The $R^2$ score is a common regression scoring method. The user guide (and wikipedia) have nice explanations of both the definition and the domain; in short, the maximum value is 1.0, a constant prediction equal to the mean of the observed data scores 0.0, and the score can be arbitrarily negative. The $R^2$ metric is basically what your eyeballs are assessing when you look at a scatter plot of the truth data vs. the predicted data.
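For reference, the standard definition (the one scikit-learn's r2_score implements) is
$$ R^2(y, \hat{y}) = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$
where $\bar{y}$ is the mean of the observed values.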
Let's look at the $R^2$ value for the models that we've introduced so far.
End of explanation
print('* MSE scores (smaller = better) *\n')
# calculate MSE for each model (against the observed data)
for name, series_data in d.items():
# strip rows with nans
subdf = d[[data_name, name]].dropna()
truth = subdf[data_name].values
pred = subdf[name].values
# calculate metric
    mse = mean_squared_error(truth, pred)
    print('{} - {:.4f}'.format(name, mse))
Explanation: Feel free to experiment (or BYO GridSearchCV), but I think you'll typically find these are all >0.95, and the naive model frequently has the highest value.
Mean squared error
The mean squared error (MSE) is a simpler calculation that is the expected value of the quadratic error.
$$ MSE(y,\hat{y}) = \frac{1}{n_{samples}} \sum_i (y_i - \hat{y}_i)^2 $$
End of explanation
# recall our original 'data' dataframe
data.head()
Explanation: Since MSE isn't normalized, it's a little hard to eyeball (big numbers), and to compare different models. There are also mean absolute, and median absolute errors, if you feel like you want to "penalize" outliers in any particular fashion.
I like $R^2$ because it's relatively straightforward, but it's good to know there are alternatives.
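If you want to try those alternatives, here's a quick sketch (not run above) that reuses the truth/pred arrays from the last pass through the loops earlier; both metrics are standard scikit-learn functions:
from sklearn.metrics import mean_absolute_error, median_absolute_error
# mean absolute error penalizes every miss linearly; median absolute error is robust to outliers
print('MAE: {:.4f}'.format(mean_absolute_error(truth, pred)))
print('MedAE: {:.4f}'.format(median_absolute_error(truth, pred)))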
Wrap-up
While none of these models are particularly fancy, they provide great baselines for any other model you can dream up. They are generally very fast and straightforward to evaluate, and provide a surprisingly high accuracy on many forecasting tasks.
For some broader context on forecasting (if, sadly, written in R), check out Ref [1].
Hopefully, there are still two classes of time series modeling that we'll look at in the future: so-called "state space models" (of which the most famous is the ARIMA family), and more recent, neural network-based models. When we do get to these models, now we can baseline them against these simple methods!
pandas Appendix
There are a handful of pandas methods that are time-series related, but not necessarily about forecasting and this seemed like a good place to highlight them. In particular, if you'd like to transform a time series according to a rolling window average, or exponential smoothing, there are efficient built-in methods for that!
End of explanation
# make a "rolling" object that we can use for calculations
r = data.rolling(window=5)
# this object can be treated much like a GroupBy object
r
# we can apply a number of methods to the Rolling object, like standard numerical calcs
r.mean().head(10)
plt.plot(data, 'o--', label=data_name)
plt.plot(r.mean(), '.-', label='rolling mean')
plt.legend()
plt.plot(data, 'o--', label=data_name)
plt.plot(r.max(), '.-', label='rolling max')
plt.legend()
# calculate stdev on the rolling object within window size
stds = r.std()
# add the stdevs as error bars on each point
data.plot(style='o', yerr=stds)
plt.title('observed data points + windowed stdev')
plt.legend();
Explanation: Window functions
The official docs include many examples, but a common pattern is creating a Rolling object - which has the notion of a window size, and a window type (square, triangular, etc.) - and then applying functions like you would after a groupby or a resample.
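For example, the window type mentioned above is selected with the win_type argument; a small sketch (the non-default window shapes need scipy installed):
# triangular weights instead of the default flat window
data.rolling(window=5, win_type='triang').mean().head(10)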
End of explanation
plt.plot(data, 'o-', label=data_name)
plt.plot(data.ewm(span=5).mean(), '.--', label='EMW')
plt.legend();
Explanation: Exponentially weighted windows
Finally, if you should want them, there are windows that have an exponentially decaying weight. These are relatively new to pandas, and have questionable documentation. But, they're configured by either a span (much like the rolling window), the decay constant alpha, or a couple of related parameters.
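For example (a small sketch), the span and alpha parameterizations are related by alpha = 2 / (span + 1), so these two calls should produce the same smoothing:
# span=5 is equivalent to alpha = 2/(5+1)
smoothed_by_span = data.ewm(span=5).mean()
smoothed_by_alpha = data.ewm(alpha=2.0/(5+1)).mean()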
End of explanation |
3,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Image Processing tutorial
Basic image data manipulation is introduced, using OpenCV library.
The sample image is obtained from PEXELS.
OpenCV is an image processing library which supports
- loading image in numpy.ndarray format, save image
- converting image color format (RGB, YUV, Gray scale etc)
- resize
and other useful image processing functionality.
To install opencv, execute
$conda install -c https://conda.binstar.org/menpo -y opencv3
Step1: Loading and save image
cv2.imread for loading image.
cv2.imwrite for save image.
plt.imshow for plotting, and plt.savefig for save plot image.
OpenCV image format is usually 3 dimension (or 2 dimension if the image is gray scale).
1st dimension is for height,
2nd dimension is for width,
3rd dimension is for channel (RGB, YUV etc).
To convert color format cv2.cvtColor can be used.
Details are written in next section.
Step2: Change color format
cv2.cvtColor for converting color format.
Note that openCV version 3 reads the image color in the order B, G, R.
However, matplotlib deals with the image color in the order R, G, B.
So you need to convert the color order; refer to the readRGBImage function.
If the image is gray scale, the image is a 2 dimensional array
1st dimension is for height,
2nd dimension is for width.
Step4: Resize
cv2.resize for resizing.
Note that size should be specified in the order width, height.
Step5: Crop
numpy slicing can be used for cropping image
Step6: Image processing with channels
RGB channel manipulation.
Understanding the meaning of "channel" is important in deep learning.
The code below provides some insight into what each channel represents.
import os
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
def readRGBImage(imagepath):
image = cv2.imread(imagepath) # Height, Width, Channel
(major, minor, _) = cv2.__version__.split(".")
if major == '3':
# version 3 is used, need to convert
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else:
# Version 2 is used, not necessary to convert
pass
return image
Explanation: Basic Image Processing tutorial
Basic image data manipulation is introduced, using OpenCV library.
The sample image is obtained from PEXELS.
OpenCV is an image processing library which supports
- loading image in numpy.ndarray format, save image
- converting image color format (RGB, YUV, Gray scale etc)
- resize
and other useful image processing functionality.
To install opencv, execute
$conda install -c https://conda.binstar.org/menpo -y opencv3
End of explanation
# Read image from file, save image with matplotlib using `imshow` function
basedir = './src/cnn/images'
imagepath = os.path.join(basedir, 'sample.jpeg')
#image = cv2.imread(imagepath, cv2.IMREAD_GRAYSCALE)
image = readRGBImage(imagepath)
# Width and Height shows pixel size of this image
# Channel=3 indicates the RGB channel
print('image.shape (Height, Width, Channel) = ', image.shape)
# Save image with openCV
# This may produce a blue-looking image because cv2.imwrite expects BGR order, while `image` is RGB.
cv2.imwrite('./src/cnn/images/out.jpg', image)
# bgr_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
# cv2.imwrite('./src/cnn/images/out.jpg', bgr_image)
# Plotting
plt.imshow(image)
plt.savefig('./src/cnn/images/out_plt.jpg')
Explanation: Loading and save image
cv2.imread for loading image.
cv2.imwrite for save image.
plt.imshow for plotting, and plt.savefig for save plot image.
OpenCV image format is usually 3 dimension (or 2 dimension if the image is gray scale).
1st dimension is for height,
2nd dimension is for width,
3rd dimension is for channel (RGB, YUV etc).
To convert color format cv2.cvtColor can be used.
Details are written in next section.
End of explanation
gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Gray scale image is 2 dimension, No channel dimension.
print('gray_image.shape (Height, Width) = ', gray_image.shape)
cv2.imwrite('./src/cnn/images/out_gray.jpg', gray_image)
Explanation: Change color format
cv2.cvtColor for converting color format.
Note that openCV version 3 reads the image color in the order B, G, R.
However, matplotlib deals with the image color in the order R, G, B.
So you need to convert the color order; refer to the readRGBImage function.
If the image is gray scale, the image is a 2 dimensional array
1st dimension is for height,
2nd dimension is for width.
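To also view the grayscale result inline, matplotlib needs an explicit colormap (a small sketch; without cmap='gray' a default false-color map is applied):
plt.imshow(gray_image, cmap='gray')
plt.title('Grayscale image')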
End of explanation
%matplotlib inline
print('image.shape (Height, Width, Channel) = ', image.shape)
# Resize image to half size
height, width = image.shape[:2]
half_image = cv2.resize(image, (width//2, height//2)) # size must be int
print('half_image.shape (Height, Width, Channel) = ', half_image.shape)
plt.imshow(half_image)
plt.savefig('./src/cnn/images/out_half.jpg')
# Resize image by specifying longer side size
def resize_longedge(image, pixel):
    """
    Resize the input image.
    Longer edge size will be `pixel`, and the aspect ratio doesn't change.
    """
height, width = image.shape[:2]
longer_side = max(height, width)
ratio = float(pixel) / longer_side
    return cv2.resize(image, None, fx=ratio, fy=ratio)  # output size is computed from the fx/fy scale factors
resized128_image = resize_longedge(image, 128)
print('resized128_image.shape (Height, Width, Channel) = ', resized128_image.shape)
plt.imshow(resized128_image)
plt.savefig('./src/cnn/images/out_resized128.jpg')
Explanation: Resize
cv2.resize for resizing.
Note that size should be specified in the order width, height.
End of explanation
%matplotlib inline
# Crop center of half_image
height, width = half_image.shape[:2]
crop_length = min(height, width)
height_start = (height - crop_length) // 2
width_start = (width - crop_length) // 2
cropped_image = half_image[
height_start:height_start+crop_length,
width_start:width_start+crop_length,
:]
print('cropped_image.shape (Height, Width, Channel) = ', cropped_image.shape)
plt.imshow(cropped_image)
plt.savefig('./src/cnn/images/out_cropped.jpg')
Explanation: Crop
numpy slicing can be used for cropping image
End of explanation
%matplotlib inline
# Show RGB channel separately in gray scale
fig, axes = plt.subplots(1, 3)
# image[:, :, 0] is R channel.
axes[0].set_title('R channel')
axes[0].imshow(image[:, :, 0], cmap='gray')
# image[:, :, 1] is G channel.
axes[1].set_title('G channel')
axes[1].imshow(image[:, :, 1], cmap='gray')
# image[:, :, 2] is B channel.
axes[2].set_title('B channel')
axes[2].imshow(image[:, :, 2], cmap='gray')
plt.savefig(os.path.join(basedir, 'RGB_gray.jpg'))
# Show RGB channel separately in color
fig, axes = plt.subplots(1, 3)
# image[:, :, 0] is R channel, replace the rest by 0.
imageR = image.copy()
imageR[:, :, 1:3] = 0
axes[0].set_title('R channel')
axes[0].imshow(imageR)
# image[:, :, 1] is G channel, replace the rest by 0.
imageG = image.copy()
imageG[:, :, [0, 2]] = 0
axes[1].set_title('G channel')
axes[1].imshow(imageG)
# image[:, :, 2] is B channel, replace the rest by 0.
imageB = image.copy()
imageB[:, :, 0:2] = 0
axes[2].set_title('B channel')
axes[2].imshow(imageB)
plt.savefig(os.path.join(basedir, 'RGB_color.jpg'))
Explanation: Image processing with channels
RGB channel manipulation.
Understanding the meaning of "channel" is important in deep learning.
The code below provides some insight into what each channel represents.
End of explanation |
3,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading Pre-Trained Models
In this tutorial, we'll use the squeezenet model to identify objects in images. The image location can be a URL to a photo or the path of a local file. The codes variable points to a list of AlexNet object codes, like "985", which equates to "daisy".
If you came from the Image Pre-Processing Tutorial, you will see that we're using rescale and crop functions to prep the image, as well as reformatting the image to be CHW, BGR, and finally NCHW. We also correct the image mean, by either using the calculated mean from a provided npy file or statically removing 128 as a placeholder average.
You'll find that loading a pre-trained model is really simple and is accomplished in just a few lines of code. Here we show those few lines
Step1: In the block below we're loading the mean file (if it exists) and the image and then pre-processing the image for ingestion into a Caffe2 convolutional neural network!
Step2: Now that the image is ready to be ingested by the CNN, let's open the protobufs, load them into the workspace, and run the net.
Step3: See that we have 1000 results there in the middle? If we had submitted more than one image in our batch then the array would be larger, but still have 1000 units there in the middle. It is holding the probability for each category in the pre-trained model. So when you look at the results, it's like saying, "Computer, what's the probability that this is a Beryllium sphere?" Or gila monster, or any of the other 998 groups of things in there.
Let's see what you have! Run the next block for the result. | Python Code:
%matplotlib inline
from caffe2.proto import caffe2_pb2
import numpy as np
import skimage.io
import skimage.transform
from matplotlib import pyplot
import os
from caffe2.python import core, workspace, models
import urllib2
print("Required modules imported.")
# Configuration --- Change to your setup and preferences!
CAFFE_MODELS = "/usr/local/caffe2/python/models"
# sample images you can try, or use any URL to a regular image.
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Whole-Lemon.jpg/1235px-Whole-Lemon.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/7/7b/Orange-Whole-%26-Split.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/a/ac/Pretzel.jpg"
# IMAGE_LOCATION = "https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg"
# IMAGE_LOCATION = "images/cat.jpg"
# IMAGE_LOCATION = "images/cowboy-hat.jpg"
# IMAGE_LOCATION = "images/cell-tower.jpg"
# IMAGE_LOCATION = "images/Ducreux.jpg"
# IMAGE_LOCATION = "images/pretzel.jpg"
# IMAGE_LOCATION = "images/orangutan.jpg"
# IMAGE_LOCATION = "images/aircraft-carrier.jpg"
IMAGE_LOCATION = "images/flower.jpg"
# What model are we using? You should have already converted or downloaded one.
# format below is the model's:
# folder, INIT_NET, predict_net, mean, input image size
# you can switch squeezenet out with 'bvlc_alexnet', 'bvlc_googlenet' or others that you have downloaded
# if you have a mean file, place it in the same dir as the model
MODEL = 'squeezenet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 227
# codes - these help decypher the output and source from a list from AlexNet's object codes to provide an result like "tabby cat" or "lemon" depending on what's in the picture you submit to the neural network.
# The list of output codes for the AlexNet models (squeezenet)
codes = "https://gist.githubusercontent.com/aaronmarkham/cd3a6b6ac071eca6f7b4a6e40e6038aa/raw/9edb4038a37da6b5a44c3b5bc52e448ff09bfe5b/alexnet_codes"
print "Config set!"
Explanation: Loading Pre-Trained Models
In this tutorial, we'll use the squeezenet model to identify objects in images. The image location can be a URL to a photo or the path of a local file. The codes variable points to a list of AlexNet object codes, like "985", which equates to "daisy".
If you came from the Image Pre-Processing Tutorial, you will see that we're using rescale and crop functions to prep the image, as well as reformatting the image to be CHW, BGR, and finally NCHW. We also correct the image mean, by either using the calculated mean from a provided npy file or statically removing 128 as a placeholder average.
You'll find that loading a pre-trained model is really simple and is accomplished in just a few lines of code. Here we show those few lines:
read the protobuf files
with open("init_net.pb") as f:
init_net = f.read()
with open("predict_net.pb") as f:
predict_net = f.read()
use the Predictor function in your workspace to load the blobs from the protobufs
p = workspace.Predictor(init_net, predict_net)
run the net and get the results!
results = p.run([img])
The results come back as a multidimensional array of probabilities. Essentially each row is a percentage chance that the object matches something that the neural net recognizes. When you run the flower it should give you over 95% rating that the flower is a daisy.
Configuration
Settings are in a code block below. Most of the time these should run out of the box.
Model Download Options
Check out the Model Zoo for pre-trained models, or you can also use Caffe2's models.download module to acquire pre-trained models from Github caffe2/models
caffe2.python.models.download takes in an argument for the name of the model. Check the repo for the available models' names and swap out the squeezenet argument if you want to try something different. Example:
python -m caffe2.python.models.download -i squeezenet
If the above download worked then you should have a copy of squeezenet in your model folder or if you used the -i flag it will have installed the model locally in the /caffe2/python/models folder.
Alternatively, you can clone the entire repo of models: git clone https://github.com/caffe2/models.
End of explanation
def crop_center(img,cropx,cropy):
y,x,c = img.shape
startx = x//2-(cropx//2)
starty = y//2-(cropy//2)
return img[starty:starty+cropy,startx:startx+cropx]
def rescale(img, input_height, input_width):
print("Original image shape:" + str(img.shape) + " and remember it should be in H, W, C!")
print("Model's input shape is %dx%d") % (input_height, input_width)
aspect = img.shape[1]/float(img.shape[0])
print("Orginal aspect ratio: " + str(aspect))
if(aspect>1):
# landscape orientation - wide image
res = int(aspect * input_height)
imgScaled = skimage.transform.resize(img, (input_width, res))
if(aspect<1):
# portrait orientation - tall image
res = int(input_width/aspect)
imgScaled = skimage.transform.resize(img, (res, input_height))
if(aspect == 1):
imgScaled = skimage.transform.resize(img, (input_width, input_height))
pyplot.figure()
pyplot.imshow(imgScaled)
pyplot.axis('on')
pyplot.title('Rescaled image')
print("New image shape:" + str(imgScaled.shape) + " in HWC")
return imgScaled
print "Functions set."
# set paths and variables from model choice and prep image
CAFFE_MODELS = os.path.expanduser(CAFFE_MODELS)
# mean can be 128 or custom based on the model
# gives better results to remove the colors found in all of the training images
MEAN_FILE = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[3])
if not os.path.exists(MEAN_FILE):
mean = 128
else:
mean = np.load(MEAN_FILE).mean(1).mean(1)
mean = mean[:, np.newaxis, np.newaxis]
print "mean was set to: ", mean
# some models were trained with different image sizes, this helps you calibrate your image
INPUT_IMAGE_SIZE = MODEL[4]
# make sure all of the files are around...
#if not os.path.exists(CAFFE2_ROOT):
# print("Houston, you may have a problem.")
INIT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])
print 'INIT_NET = ', INIT_NET
PREDICT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[2])
print 'PREDICT_NET = ', PREDICT_NET
if not os.path.exists(INIT_NET):
print(INIT_NET + " not found!")
else:
print "Found ", INIT_NET, "...Now looking for", PREDICT_NET
if not os.path.exists(PREDICT_NET):
print "Caffe model file, " + PREDICT_NET + " was not found!"
else:
print "All needed files found! Loading the model in the next block."
# load and transform image
img = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)
img = rescale(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
img = crop_center(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
print "After crop: " , img.shape
pyplot.figure()
pyplot.imshow(img)
pyplot.axis('on')
pyplot.title('Cropped')
# switch to CHW
img = img.swapaxes(1, 2).swapaxes(0, 1)
pyplot.figure()
for i in range(3):
# For some reason, pyplot subplot follows Matlab's indexing
# convention (starting with 1). Well, we'll just follow it...
pyplot.subplot(1, 3, i+1)
pyplot.imshow(img[i])
pyplot.axis('off')
pyplot.title('RGB channel %d' % (i+1))
# switch to BGR
img = img[(2, 1, 0), :, :]
# remove mean for better results
img = img * 255 - mean
# add batch size
img = img[np.newaxis, :, :, :].astype(np.float32)
print "NCHW: ", img.shape
Explanation: In the block below we're loading the mean file (if it exists) and the image and then pre-processing the image for ingestion into a Caffe2 convolutional neural network!
End of explanation
# initialize the neural net
with open(INIT_NET) as f:
init_net = f.read()
with open(PREDICT_NET) as f:
predict_net = f.read()
p = workspace.Predictor(init_net, predict_net)
# run the net and return prediction
results = p.run([img])
# turn it into something we can play with and examine which is in a multi-dimensional array
results = np.asarray(results)
print "results shape: ", results.shape
Explanation: Now that the image is ready to be ingested by the CNN, let's open the protobufs, load them into the workspace, and run the net.
End of explanation
# the rest of this is digging through the results
results = np.delete(results, 1)
index = 0
highest = 0
arr = np.empty((0,2), dtype=object)
arr[:,0] = int(10)
arr[:,1:] = float(10)
for i, r in enumerate(results):
# imagenet index begins with 1!
i=i+1
arr = np.append(arr, np.array([[i,r]]), axis=0)
if (r > highest):
highest = r
index = i
# top 3 results
print "Raw top 3 results:", sorted(arr, key=lambda x: x[1], reverse=True)[:3]
# now we can grab the code list
response = urllib2.urlopen(codes)
# and lookup our result from the list
for line in response:
code, result = line.partition(":")[::2]
if (code.strip() == str(index)):
print MODEL[0], "infers that the image contains ", result.strip()[1:-2], "with a ", highest*100, "% probability"
Explanation: See that we have 1000 results there in the middle? If we had submitted more than one image in our batch then the array would be larger, but still have 1000 units there in the middle. It is holding the probability for each category in the pre-trained model. So when you look at the results, it's like saying, "Computer, what's the probability that this is a Beryllium sphere?" Or gila monster, or any of the other 998 groups of things in there.
Let's see what you have! Run the next block for the result.
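As a compact aside, and assuming you grab the raw output right after p.run([img]) (before the np.delete call in the block above), numpy can pull out the largest entries directly; the loop above is still what maps positions to the 1-based imagenet codes:
flat = np.asarray(results).flatten()
top5_positions = np.argsort(flat)[-5:][::-1]  # largest probability first
print "Top 5 positions: ", top5_positions
print "Top 5 probabilities: ", flat[top5_positions]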
End of explanation |
3,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: Model Architecture
Step5: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step6: Step 3
Step7: Predict the Sign Type for Each Image
Step8: Analyze Performance
Step9: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability
Step10: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note | Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
data_dir = "data/"
training_file = data_dir + "train.p"
validation_file = data_dir + "valid.p"
testing_file = data_dir + "test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n",
"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import mpl_toolkits.axes_grid1.inset_locator as insetLoc
from mpl_toolkits.axes_grid1 import ImageGrid
import scipy.stats as stats
import pandas as pd
import csv
# Visualizations will be shown in the notebook.
%matplotlib inline
y_all = np.concatenate([y_train, y_valid, y_test])
# Prepare a figure to display data summary
fig, axes = plt.subplots(1, 5, figsize=(15, 20), sharey = 'all')
plot_data = [X_test, y_all, y_train, y_valid, y_test]
titles = ['Examples', 'All', 'Training', 'Validation', 'Test']
classes, class_indices = np.unique(y_train, return_inverse = True)
# Prepare grid for display of class examples in first subplot
class_labels = [];
with open('signnames.csv', newline='\n') as csvfile:
nameReader = csv.reader(csvfile, delimiter=',')
for row in nameReader:
class_labels.append(row[1])
class_labels = class_labels[1:]
axes[0].set_xticks([])
axes[0].set_title(titles[0])
n_examples = 5
grid = ImageGrid(fig, 151,
nrows_ncols=(n_classes, n_examples),
axes_pad=0.025)
# Get indices of class examples
class_examples = []
for i in range(n_classes):
example_indices = np.where(y_test == classes[i])
class_examples.extend(example_indices[0][0:n_examples])
class_examples.reverse()
# Display class examples
for i in range(n_classes*n_examples):
grid[i].imshow(plot_data[0][class_examples[i]])
grid[i].axis('off')
grid[i].set_xticks([])
grid[i].set_yticks([])
# Display histogram for each data set and compare distributions
for i in range(1, len(axes)):
arr = axes[i].hist(plot_data[i], bins = range(n_classes+1), normed = 1,
orientation = 'horizontal', rwidth = 0.95,
)
axes[i].set_title(titles[i])
for j in range(len(arr[0])):
axes[i].text(arr[0][j],arr[1][j]+0.5,"{:.0f}".format(arr[0][j]*len(plot_data[i])))
if i > 1:
observed = np.bincount(plot_data[i])/len(plot_data[i])
expected = np.bincount(y_all)/len(y_all)
chisq, p = stats.chisquare(f_obs= observed, f_exp= expected)
axes[i].set_xlabel('Proportion of Set\n(chisq = ' + str(chisq) + ',\np = ' + str(p) + ')')
axes[i].set_ylim((0, 42))
axes[i].set_yticks([])
axes[1].set_xlabel('Proportion of Samples')
axes[0].set_ylim(0, 43)
axes[0].set_yticks(np.arange(0, n_classes)+0.5)
axes[0].set_yticklabels(class_labels)
print(' ') # for some reason this suppresses output from somewhere else
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
End of explanation
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
def preprocess_images(images):
# Convert image color to Lab
images = [cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) for image in images]
# Image correction
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
#for i in range(len(images)):
# l, a, b = cv2.split(images[i])
# l = clahe.apply(l)
# images[i] = cv2.merge((l, a, b))
images = [clahe.apply(image) for image in images]
# Convert image color back to RGB
#images = [cv2.cvtColor(image, cv2.COLOR_LAB2RGB) for image in images]
# Add third (singleton) dimension
images = [np.expand_dims(image, axis = 2) for image in images]
# Normalize values for input to range [-1, 1]
images = np.divide(np.subtract(np.array(images, dtype = 'float'), 128), 128)
return images
def show_examples(images, labels, n_examples):
# Select n_examples examples from each class
classes = np.unique(labels)
class_examples_indices = []
for i in range(len(classes)):
example_indices = np.where(labels == classes[i])
random_select = np.random.randint(0, len(example_indices[0]))
class_examples_indices.append(example_indices[0][random_select])
class_examples = [images[index] for index in class_examples_indices]
# Display class examples
fig, axes = plt.subplots(1, n_examples, figsize=(4, 4*n_examples))
for i in range(0, n_examples):
axes[i].imshow(class_examples[i])
axes[i].axis('off')
axes[i].set_xticks([])
axes[i].set_yticks([])
# Apply image preprocessing but reverse final normalization for viewing images
class_examples = preprocess_images(class_examples)
    class_examples = np.add(np.multiply(np.array(class_examples, dtype = 'float'), 128), 128)
# Display preprocessed class examples
fig, axes = plt.subplots(1, n_examples, figsize=(4, 4*n_examples))
for i in range(0, n_examples):
axes[i].imshow(np.squeeze(class_examples[i], axis = 2), cmap = "gray")
axes[i].axis('off')
axes[i].set_xticks([])
axes[i].set_yticks([])
return
# Preprocess data
X_train_preprocessed = preprocess_images(X_train)
X_valid_preprocessed = preprocess_images(X_valid)
X_test_preprocessed = preprocess_images(X_test)
# Display some examples
show_examples(X_test, y_test, 5)
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data (a small augmentation sketch follows this section).
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
End of explanation
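As one illustration of the "generate fake data" bullet above, here is a hedged sketch (nothing later in the notebook depends on it, and the helper name is hypothetical) that uses the already-imported cv2 to jitter an existing training image with a small random rotation; repeated calls are a simple way to grow the training set:
import random

def augment_image(image, max_angle=10):
    # hypothetical helper: rotate the 32x32 image by a small random angle, keeping its size
    rows, cols = image.shape[:2]
    angle = random.uniform(-max_angle, max_angle)
    rotation = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1.0)
    return cv2.warpAffine(image, rotation, (cols, rows))

# e.g. one jittered copy of the first training image
augmented_example = augment_image(X_train[0])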
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def CNN(x, keep_prob):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x10.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 10), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(10))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Dropout
conv1 = tf.nn.dropout(conv1, keep_prob)
# Pooling. Input = 28x28x12. Output = 14x14x12.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 10, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Dropout
conv2 = tf.nn.dropout(conv2, keep_prob)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = n_classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: Model Architecture
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from sklearn.utils import shuffle
EPOCHS = 10
BATCH_SIZE = 128
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32, (None))
one_hot_y = tf.one_hot(y, n_classes)
rate = 0.001
logits = CNN(x, keep_prob)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_preprocessed)
print("Training...")
print()
for i in range(EPOCHS):
print("EPOCH {} ...".format(i+1))
X_train_preprocessed, y_train = shuffle(X_train_preprocessed, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_preprocessed[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.6})
validation_accuracy = evaluate(X_valid_preprocessed, y_valid)
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './cnn')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test_preprocessed, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import glob
import math
# Read images
web_images = []
for file in glob.glob("web_images\*.jpg"):
image = cv2.cvtColor(cv2.imread(file), cv2.COLOR_BGR2RGB)
web_images.append(np.array(image))
# Show raw images
fig, axes = plt.subplots(1, len(web_images), figsize=(8, 48), dpi = 80)
for i in range(len(web_images)):
axes[i].imshow(web_images[i])
axes[i].set_xlabel(str(web_images[i].shape))
axes[i].set_xticks([])
axes[i].set_yticks([])
# Resize/pad images for input
desired_size = (32, 32)
for i in range(len(web_images)):
image = web_images[i]
scale = min(desired_size[0]/image.shape[0], desired_size[1]/image.shape[1])
image = cv2.resize(image, None, image, scale, scale, interpolation = cv2.INTER_AREA)
im_shape = image.shape
x_pad = (desired_size[0] - im_shape[0])/2
y_pad = (desired_size[1] - im_shape[1])/2
left_pad = math.floor(x_pad)
right_pad = math.ceil(x_pad)
top_pad = math.floor(y_pad)
bottom_pad = math.ceil(y_pad)
image = image.transpose(2, 0, 1)
new_image = np.zeros((3,)+desired_size, dtype = "uint8")
for j in range(3):
new_image[j] = np.pad(image[j], ((left_pad,right_pad),(top_pad,bottom_pad)), mode = "edge")
web_images[i] = new_image.transpose(1, 2, 0)
# Show images after resizing/padding
fig, axes = plt.subplots(1, len(web_images), figsize=(8, 48), dpi = 80)
for i in range(len(web_images)):
axes[i].imshow(web_images[i])
axes[i].set_xlabel(str(web_images[i].shape))
axes[i].set_xticks([])
axes[i].set_yticks([])
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
# Create input images (preprocessed) and expected labels
X_web = preprocess_images(web_images)
labels_web = ["General caution", "No entry", "Road work", "Stop", "Yield"]
y_web = []
for i in range(len(labels_web)):
y_web.append(class_labels.index(labels_web[i]))
expected = [class_labels[index] for index in y_web]
print("Input classes: ", y_web)
print("Expected labels: ", expected)
# Predict image labels
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
output = sess.run(tf.argmax(logits, 1), feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
predictions = [class_labels[index] for index in output]
print("Output classes: ", output)
print("Predicted labels: ", predictions)
sess.close()
Explanation: Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
accuracy = sess.run(accuracy_operation, feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
print("Web test accuracy = {:.3f}".format(accuracy))
sess.close()
Explanation: Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
top_probs = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5), feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
print("Top softmax probabilities:\n", top_probs.values)
print("Top classes: ", top_probs.indices)
sess.close()
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
print(activation.shape)
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
outputFeatureMap(X_web, conv2)
sess.close()
Explanation: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation |
3,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
title
Step1: 0. Open dataset and load package
This dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.
Step2: 1. Familiarize yourself with the data
Quick review from data cleaning
Step3: 2. Selecting relevant variables
Sometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.
Step4: 3. Basic Descriptives
Summarizing data
Let's learn how to make simple tables of summary statistics.
First, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.
Step5: 3. Grouping data
Next, we will learn how to group data based on certain variables of interest.
We will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.
Step6: We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking
Step7: Then we'll check how evenly split these groups are by using groupby() the size() functions
Step8: Then we'll calculate some summary info about these groups
Step9: A note on piping / stringing commands together
In R, we often use the pipe %>% to string a series of steps together. We can do the same in python with many functions in a row
This is how we're able to take the output of df.groupby(["groupStatus","adhdF"]) and then send that output into the mean() function
5. Extra
Step10: How many trials were there per subject?
Step11: Combine summary statistics with the full data frame
For some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. We can do this by summarizing the data in a new data frame and then merging it with the full data. | Python Code:
# load packages we will be using for this lesson
import pandas as pd
Explanation: title: "Data Manipulation in Python"
subtitle: "CU Psych Scientific Computing Workshop"
weight: 1301
tags: ["core", "python"]
Goals of this lesson
Students will learn:
How to group and categorize data in Python
How to generative descriptive statistics in Python
Links to Files
The files for all tutorials can be downloaded from the Columbia Psychology Scientific Computing GitHub page. This particular file is located here: /content/tutorials/python/3-data-manipulation/index.ipynb.
End of explanation
# use pd.read_csv to open data into python
df = pd.read_csv("uncapher_2016_repeated_measures_dataset.csv")
Explanation: 0. Open dataset and load package
This dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.
End of explanation
df.head()
df.shape
df.columns
Explanation: 1. Familiarize yourself with the data
Quick review from data cleaning: take a look at the basic data structure, number of rows and columns.
End of explanation
df = df[["subjNum", "groupStatus", "adhd", "hitRate", "faRate", "dprime"]]
df.head()
Explanation: 2. Selecting relevant variables
Sometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.
End of explanation
df.describe()
Explanation: 3. Basic Descriptives
Summarizing data
Let's learn how to make simple tables of summary statistics.
First, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.
End of explanation
df.groupby(["groupStatus"]).mean()
Explanation: 3. Grouping data
Next, we will learn how to group data based on certain variables of interest.
We will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.
End of explanation
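As an illustrative aside (not part of the original lesson, and assuming the df and columns selected above), agg() lets you request several summary statistics per group at once:
# Illustrative sketch: several statistics per group in one call
df.groupby(["groupStatus"])[["hitRate", "faRate", "dprime"]].agg(["mean", "std", "count"])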
df["adhdF"] = pd.qcut(df["adhd"],q=2,labels=["Low","High"])
Explanation: We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low).
We will first make a factor for ADHD (median-split), and add it as a grouping variable using the qcut() function in pandas:
End of explanation
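To make the median split concrete, here is a tiny illustrative example (the toy values below are made up and are not from the dataset):
# qcut with q=2 splits a variable at its median
toy = pd.Series([1, 2, 3, 4, 5, 6])
pd.qcut(toy, q=2, labels=["Low", "High"])   # values below the median -> Low, above -> High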
df.groupby(["groupStatus","adhdF"]).size()
Explanation: Then we'll check how evenly split these groups are by using the groupby() and size() functions:
End of explanation
df.groupby(["groupStatus","adhdF"]).mean()
Explanation: Then we'll calculate some summary info about these groups:
End of explanation
subList = df["subjNum"].unique()
nSubs = len(subList)
nSubs
Explanation: A note on piping / stringing commands together
In R, we often use the pipe %>% to string a series of steps together. We can do the same in Python by calling several methods in a row.
This is how we're able to take the output of df.groupby(["groupStatus","adhdF"]) and then send that output into the mean() function.
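For example, a longer chain might look like the sketch below (illustrative only; it assumes the df, groupStatus and adhdF columns created above):
# Each method operates on the result of the previous one, read left to right
(df.groupby(["groupStatus", "adhdF"])["hitRate"]
   .mean()
   .reset_index()
   .sort_values("hitRate"))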
5. Extra: Working with a long dataset
This is a repeated measures ("long") dataset, with multiple rows per subject. This makes things a bit trickier, but we are going to show you some tools for how to work with "long" datasets.
How many unique subjects are in the data?
End of explanation
nTrialsPerSubj = df.groupby(["subjNum"]).size().reset_index(name="nTrials")
nTrialsPerSubj.head()
Explanation: How many trials were there per subject?
End of explanation
avgHR = df.groupby(["subjNum"])["hitRate"].mean().reset_index(name="avgHR")
avgHR.head()
df = df.merge(avgHR,on="subjNum")
df.head()
Explanation: Combine summary statistics with the full data frame
For some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. We can do this by summarizing the data in a new data frame and then merging it with the full data.
End of explanation |
3,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Similarity-based Learning
Similarity-based approaches in machine learning come from the idea that the best way to make predictions is simply to look at what has worked in the past and predict the same thing again. The fundamental concepts required to build a system based on this idea are feature spaces and measures of similarity.
Reading
Chapter 5 of Fundamentals of Machine Learning for Predictive Data Analytics
Chapter 5 Slides 'A' (internal)
Chapter 5 Slides 'B' (internal)
Slides are posted on the internal server http
Step1: More general Minkowski distance
In a d-dimensional vector space, the Minkowski distance of order $p$ is defined as
Step2: Scaling the Axes
The Euclidean distance can be written in (suggestive) vector notation as
Step3: Nearest-neighbor clustering
Step4: K-Nearest Neighbors Classifier
The NearestNeighbors function helps us recover the neighbors that are closest to the desired data point; but if we're interested in using k-nearest neighbors for classification, we can use KNeighborsClassifier. | Python Code:
import numpy as np
import math as ma
import matplotlib.pyplot as plt
%matplotlib inline
X = np.array([3.3, 1.2])
Y = np.array([2.1, -1.8])
plt.arrow(0,0,*X, head_width=0.2);
plt.arrow(0,0,*Y, head_width=0.2);
plt.xlim([0, 4]);
plt.ylim([-2,2]);
plt.show();
# Euclidean distance manually:
ma.sqrt(np.sum((X-Y)**2))
# numpy norm:
np.linalg.norm(X-Y)
Explanation: Similarity-based Learning
Similarity-based approaches in machine learning come from the idea that the best way to make predictions is simply to look at what has worked in the past and predict the same thing again. The fundamental concepts required to build a system based on this idea are feature spaces and measures of similarity.
Reading
Chapter 5 of Fundamentals of Machine Learning for Predictive Data Analytics
Chapter 5 Slides 'A' (internal)
Chapter 5 Slides 'B' (internal)
Slides are posted on the internal server http://131.96.197.204/~pmolnar/mlbook
What is a metric?
The "distance" $d$ between two points in a vector space must satisfy the following requirements:
* It is non-negative: $d(x,y) \geq 0$ for all $x$, $y$, with $d(x,y) = 0$ if and only if $x = y$
* It is symmetric: $d(x,y) = d(y,x)$
* It satisfies the triangle inequality: $d(x,y) \leq d(x,z) + d(z,y)$
Some common measures of distance:
Euclidean distance
This is perhaps the most commonly used distance metric: $d(X,Y) = \sqrt{(X_0-Y_0)^2 + (X_1-Y_1)^2}$.
End of explanation
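As a quick, illustrative check (not part of the original notebook), you can verify the three metric properties numerically for the Euclidean distance on the vectors defined above:
# Sanity-check the metric axioms on X and Y (Z is an arbitrary third point)
d_xy = np.linalg.norm(X - Y)
d_yx = np.linalg.norm(Y - X)
Z = np.array([0.5, 0.5])
print(d_xy >= 0)                                                # non-negativity
print(np.isclose(d_xy, d_yx))                                   # symmetry
print(d_xy <= np.linalg.norm(X - Z) + np.linalg.norm(Z - Y))    # triangle inequality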
import scipy.spatial.distance as dst
# Manhattan distance
dst.cdist(np.expand_dims(X, axis=0),np.expand_dims(Y, axis=0),'cityblock')
# Chebyshev distance
dst.cdist(np.expand_dims(X, axis=0),np.expand_dims(Y, axis=0),'chebyshev')
Explanation: More general Minkowski distance
In a d-dimensional vector space, the Minkowski distance of order $p$ is defined as:
$d_p(X,Y) = \left(\sum_{i=1}^{d} \left|X_i - Y_i\right|^p \right)^{1/p}$
The Euclidean distance is a special case of the Minkowski distance with $p=2$.
Some other common cases include:
The Manhattan distance: $p = 1$
The Chebyshev distance: $p = \infty$, where $d_\infty(X,Y) = \max_{i = 1,\ldots,d}\left| X_i - Y_i \right|$
End of explanation
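As an illustrative sketch (not in the original notebook), scipy also exposes the general Minkowski form directly, so you can check that p=1, p=2 and larger p line up with the distances computed above:
# Minkowski distance for a few orders p, using the X and Y defined earlier
for p in [1, 2, 3]:
    print(p, dst.minkowski(X, Y, p))
# p=2 should match the Euclidean distance
print(np.isclose(dst.minkowski(X, Y, 2), np.linalg.norm(X - Y)))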
import pandas as pd
from sklearn.neighbors import NearestNeighbors
# read in the basketball draft dataset
df = pd.read_csv('./Table5-2.csv', names=['ID','Speed','Agility','Draft'], skiprows=[0])
df.head()
fig, ax = plt.subplots()
ax.margins(0.05)
groups = df.groupby('Draft')
for name, group in groups:
ax.plot(group.Speed, group.Agility, marker='o', linestyle='', ms=12, label=name);
ax.legend(numpoints=1, loc='lower right');
Explanation: Scaling the Axes
The Euclidean distance can be written in (suggestive) vector notation as:
$d^2(X,Y) = (X-Y)^T I_{n \times n} (X-Y)$
Instead of the $n \times n$ identity matrix, we could use any positive definite matrix.
A positive definite matrix is defined as a matrix $M$ for which $z^T M z \geq 0$ for all real vectors $z$, with equality only if $z$ is the vector of all zeros.
We can use this matrix to appropriately rescale the axes, for example to correct for high variance along a given dimension in our feature space: this gives us the Mahalanobis metric,
$d_M^2(X,Y) = (X-Y)^T \Sigma^{-1} (X-Y)$,
where $\Sigma$ is the covariance matrix of your data points.
Additional reading:
Distances between words (taking into account the context):
http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/
Let's try some clustering
End of explanation
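A minimal sketch of the Mahalanobis distance (illustrative, not part of the original notebook; it uses the Speed/Agility columns loaded above to estimate the covariance, and note that scipy returns the square root of the quadratic form above):
# Mahalanobis distance between the first two players, using the sample covariance
VI = np.linalg.inv(np.cov(df[['Speed', 'Agility']].values.T))   # inverse covariance matrix
a = df[['Speed', 'Agility']].values[0]
b = df[['Speed', 'Agility']].values[1]
print(dst.mahalanobis(a, b, VI))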
# Let's fit a nearest-neighbor model to our data, using Euclidean distance...
nearest_neighbor_model = NearestNeighbors(n_neighbors=1, metric='euclidean').fit(df[['Speed','Agility']])
# OK, now let's find the nearest neighbors for some new data points!
samples = np.array([[7,7],[5,5]]) # samples to classify, in [speed, agility] format
fig, ax = plt.subplots()
ax.margins(0.05)
groups = df.groupby('Draft')
for name, group in groups:
ax.plot(group.Speed, group.Agility, marker='o', linestyle='', ms=12, label=name);
ax.legend(numpoints=1, loc='lower right');
ax.plot(samples[:,0],samples[:,1], marker='o', linestyle='', ms=12, c='red');
nearest_neighbor_model.kneighbors(samples, return_distance=True)
df.Draft.iloc[nearest_neighbor_model.kneighbors(samples, return_distance=False).ravel()] # the kneighbors method returns the index of the
# nearest neighbors....
nearest_neighbor_model.kneighbors([[7,7],[5,4]], return_distance=False).ravel()
Explanation: Nearest-neighbor clustering
End of explanation
from sklearn.neighbors import KNeighborsClassifier
# define model and train it on the input data
knn_model = KNeighborsClassifier(n_neighbors=5, metric='euclidean').fit(df[['Speed','Agility']], df['Draft'])
# predict classes for "samples", using k nearest neighbors
knn_model.predict(samples)
from sklearn.cluster import KMeans
kmeans_model = KMeans(2).fit(df[['Speed','Agility']])
df['Clust'] = kmeans_model.predict(df[['Speed','Agility']])
fig, ax = plt.subplots()
ax.margins(0.05)
groups = df.groupby('Clust')
for name, group in groups:
ax.plot(group.Speed, group.Agility, marker='o', linestyle='', ms=12, label=name);
ax.legend(numpoints=1, loc='lower right');
ax.plot(samples[:,0],samples[:,1], marker='o', linestyle='', ms=12, c='red');
help(KMeans)
Explanation: K-Nearest Neighbors Classifier
The NearestNeighbors function helps us recover the neighbors that are closest to the desired data point; but if we're interested in using k-nearest neighbors for classification, we can use KNeighborsClassifier.
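As an illustrative extension (not in the original notebook), the fitted classifier can also report class probabilities, i.e. the fraction of the k neighbors voting for each class:
# Neighbor vote fractions for the two query points in `samples`
print(knn_model.classes_)
print(knn_model.predict_proba(samples))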
End of explanation |
3,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
If you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
# Normalize RGB for each image data
norm_image_data = np.ndarray(x.shape, np.float32)
for i, image_data in enumerate(x):
temp_image_data = np.ndarray(x.shape, np.float32)
temp_image_data = image_data
temp_image_data[:,:,0] = abs((temp_image_data[:,:,0] - 128)/128)
temp_image_data[:,:,1] = abs((temp_image_data[:,:,1] - 128)/128)
temp_image_data[:,:,2] = abs((temp_image_data[:,:,2] - 128)/128)
norm_image_data[i] = temp_image_data
#print('Sample normalized image data: {}'.format(norm_image_data[2].max()))
return norm_image_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
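As a side note (illustrative only, and not the graded implementation above): because the raw CIFAR-10 pixel values lie in [0, 255], a common minimal normalization is a straight division:
# Minimal alternative sketch: scale raw pixel values from [0, 255] into [0, 1]
def normalize_simple(x):
    return np.array(x, dtype=np.float32) / 255.0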
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
one_hot = np.zeros(shape=[len(x), 10])
for i, label_id in enumerate(x):
one_hot[i, label_id] = True
return one_hot
one_hot_encoding_map = {0:'airplane', 1:'automobile', 2:'bird', 3:'cat', 4:'deer', 5:'dog', 6:'frog', 7:'horse',\
8:'ship', 9:'truck'}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
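A compact alternative sketch (illustrative, not the graded implementation above) uses an identity matrix as a lookup table:
# Row i of the 10x10 identity matrix is the one-hot vector for label i
def one_hot_encode_eye(x):
    return np.eye(10)[np.array(x)]
print(one_hot_encode_eye([3, 0, 9]))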
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
image_input = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
return image_input
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
label_input = tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
return label_input
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob_input = tf.placeholder(tf.float32, name='keep_prob')
return keep_prob_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
If you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
print('Conv_ksize: ', conv_ksize, ' Conv_strides: ', conv_strides, ' Conv output depth:', conv_num_outputs, \
x_tensor.get_shape().as_list(), ' Pool ksize: ', pool_ksize, ' Pool strides: ', pool_strides)
#Convolution and max pool Parameters
input_depth = x_tensor.get_shape().as_list()[3]
output_depth = conv_num_outputs
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_depth, output_depth], mean=0.0, stddev=0.1))
    biases = tf.Variable(tf.truncated_normal([output_depth]))  # shape must be passed as a 1-D list
strides = [1, conv_strides[0], conv_strides[1], 1]
pool_strides = [1, pool_strides[0], pool_strides[1], 1]
#Convolution & Max pool
conv2d_1 = tf.nn.conv2d(x_tensor, weight, strides, padding='SAME')
conv2d_1 = tf.nn.bias_add(conv2d_1, biases)
conv2d_1 = tf.nn.relu(conv2d_1)
conv2d_1 = tf.nn.max_pool(conv2d_1, [1, pool_ksize[0], pool_ksize[1], 1], pool_strides, padding='SAME')
return conv2d_1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
End of explanation
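A small illustrative helper (not part of the project template) for reasoning about the spatial sizes these layers produce; with 'SAME' padding the output size depends only on the stride:
import math
def conv_output_size(in_size, filter_size, stride, padding='SAME'):
    # 'SAME' pads so only the stride shrinks the output; 'VALID' uses no padding
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    return math.ceil((in_size - filter_size + 1) / stride)

# e.g. a 32x32 CIFAR-10 image through a 2x2 max pool with stride 2 -> 16x16
print(conv_output_size(32, 2, 2))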
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
#print(x_tensor.get_shape().as_list()[3])
#print(x_tensor.get_shape().as_list())
h = x_tensor.get_shape().as_list()[1]
w = x_tensor.get_shape().as_list()[2]
d = x_tensor.get_shape().as_list()[3]
flattened_tensor = tf.reshape(x_tensor, [-1, h*w*d])
#print(flattened_tensor.get_shape().as_list())
return flattened_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
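The idea, sketched in plain NumPy (illustrative only): keep the batch axis and collapse everything else into one dimension:
# A batch of 4 images of shape (8, 8, 3) flattens to (4, 192)
batch = np.zeros((4, 8, 8, 3))
flat = batch.reshape(batch.shape[0], -1)
print(flat.shape)   # (4, 192)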
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weight_rows = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([weight_rows, num_outputs], mean=0.0, stddev=0.1))
biases = tf.Variable(tf.truncated_normal([num_outputs]))
fc1 = tf.add(tf.matmul(x_tensor, weight), biases)
fc1 = tf.nn.relu(fc1)
return fc1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
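Conceptually a fully connected layer is just a matrix multiply plus a bias followed by a nonlinearity; a plain NumPy sketch with made-up sizes (illustrative only):
# x: (batch, features), W: (features, num_outputs), b: (num_outputs,)
x_demo = np.random.randn(4, 192)
W_demo = np.random.randn(192, 64) * 0.1
b_demo = np.zeros(64)
fc_demo = np.maximum(0, x_demo @ W_demo + b_demo)   # ReLU(xW + b)
print(fc_demo.shape)   # (4, 64)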
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weight = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1))
biases = tf.Variable(tf.zeros([num_outputs]))
out = tf.add(tf.matmul(x_tensor, weight), biases)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note: Activation, softmax, or cross entropy shouldn't be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv2d_1 = conv2d_maxpool(x, 10, (5, 5), (1, 1), (2, 2), (2, 2))
conv2d_2 = conv2d_maxpool(conv2d_1, 32, (5, 5), (1, 1), (2, 2), (2, 2))
conv2d_3 = conv2d_maxpool(conv2d_2, 64, (5, 5), (1, 1), (2, 2), (2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flattened_tensor = flatten(conv2d_3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(flattened_tensor, 64)
fc1 = tf.nn.dropout(fc1, keep_prob)
fc2 = fully_conn(fc1, 32)
fc2 = tf.nn.dropout(fc2, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
logits = output(fc2, 10)
# TODO: return output
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={ x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Cost: ', loss)
print('Accuracy: ', valid_accuracy)
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 64
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
3,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4 - Convolutional Neural Network with MNIST
This lab corresponds to Module 4 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 to download the MNIST data.
We will train a Convolutional Neural Network (CNN) on MNIST data.
Introduction
A convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network made up of neurons that have learnable weights and biases, very similar to ordinary multi-layer perceptron (MLP) networks introduced in Module 3. The CNNs take advantage of the spatial nature of the data.
In nature, we perceive different objects by their shapes, size and colors. For example, objects in a natural scene are typically composed of edges, corners/vertices (defined by two of more edges), color patches etc. These primitives are often identified using different detectors (e.g., edge detection, color detector) or combination of detectors interacting to facilitate image interpretation (object classification, region of interest detection, scene description etc.) in real world vision related tasks. These detectors are also known as filters.
Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing say edges, corners, colors etc in the input image). Historically, these filters are a set of weights that were often hand crafted or modeled with mathematical functions (e.g., Gaussian / Laplacian / Canny filter). The filter outputs are mapped through non-linear activation functions mimicking human brain cells called neurons.
Convolutional networks provide a machinery to learn these filters from the data directly instead of explicit mathematical models and have been found to be superior (in real world tasks) compared to historically crafted filters. With convolutional networks, the focus is on learning the filter weights instead of learning individually fully connected pair-wise (between inputs and outputs) weights. In this way, the number of weights to learn is reduced when compared with the traditional MLP networks from the previous tutorials. In a convolutional network, one learns several filters ranging in number from single digits to thousands depending on the network complexity.
Many of the CNN primitives have been shown to have a conceptually parallel components in brain's visual cortex. A neuron in the visual cortex will emit responses when a certain region of its input cells are stimulated. This region is known as the receptive field (RF) of the neuron.
Equivalently, in a convolution layer, the input region corresponding to the filter dimensions at certain locations in the input layer can be considered as the receptive field of the nodes in the convolutional layer. Popular deep CNNs or ConvNets (such as AlexNet, VGG, Inception, ResNet) that are used for various computer vision tasks have many of these architectural primitives (inspired from biology).
We will introduce the convolution operation and gain familiarity with the different parameters in CNNs.
Problem
Step1: Goal
Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step3: Data reading
In this section, we will read the data generated in Lab1_MNIST_DataLoader.
We are using the MNIST data that you have downloaded using the Lab1_MNIST_DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In previous labs, as shown below, we have always flattened the input image into a vector. With convolutional networks, we do not flatten the image - we preserve its 2D shape.
Input Dimensions
Step4: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in distributed readers that can scale to terabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form
Step5: <a id='#Model Creation'></a>
CNN Model Creation
CNN is a feedforward network made up of bunch of layers in such a way that the output of one layer becomes the input to the next layer (similar to MLP). In MLP, all possible pairs of input pixels are connected to the output nodes with each pair having a weight, thus leading to a combinatorial explosion of parameters to be learnt and also increasing the possibility of overfitting (details). Convolution layers take advantage of the spatial arrangement of the pixels and learn multiple filters that significantly reduce the amount of parameters in the network (details). The size of the filter is a parameter of the convolution layer.
In this section, we introduce the basics of convolution operations. We show the illustrations in the context of RGB images (3 channels), even though the MNIST data we are using is a grayscale image (single channel).
Convolution Layer
A convolution layer is a set of filters. Each filter is defined by a weight (W) matrix, and bias ($b$).
These filters are scanned across the image performing the dot product between the weights and corresponding input value (${x}$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function. This process is illustrated in the following animation.
Step6: Convolution layers incorporate the following key features
Step7: Building our CNN models
We define two containers. One for the input MNIST image and the second one being the labels corresponding to the 10 digits. When reading the data, the reader automatically maps the 784 pixels per image to a shape defined by the input_dim_model tuple (in this example it is set to (1, 28, 28)).
Step8: The first model we build is a simple convolution only network. Here we have two convolutional layers. Since, our task is to detect the 10 digits in the MNIST database, the output of the network should be a vector of length 10, 1 element corresponding to each output class. This is achieved by projecting the output of the last convolutional layer using a dense layer with the output being num_output_classes. We have seen this before with Logistic Regression and MLP where features were mapped to the number of classes in the final layer. Also, note that since we will be using the softmax operation that is combined with the cross entropy loss function during training (see a few cells below), the final dense layer has no activation function associated with it.
The following figure illustrates the model we are going to build. Note the parameters in the model below are to be experimented with. These are often called network hyperparameters. Increasing the filter shape leads to an increase in the number of model parameters, increases the compute time and helps the model better fit to the data. However, one runs the risk of overfitting. Typically, the number of filters in the deeper layers are more than the number of filters in the layers before them. We have chosen 8, 16 for the first and second layers, respectively. These hyperparameters should be experimented with during model building.
Step9: Let us create an instance of the model and inspect the different components of the model. z will be used to represent the output of a network. In this model, we use the relu activation function. Note
Step10: Understanding the number of learnable parameters in a model is key to deep learning since there is a dependency between the number of parameters and the amount of data one needs to have to train the model.
You need more data for a model that has a larger number of parameters to prevent overfitting. In other words, with a fixed amount of data, one has to constrain the number of parameters. There is no golden rule between the amount of data one needs for a model. However, there are ways one can boost performance of model training with data augmentation.
Step11: Understanding Parameters
Step12: Next we will need a helper function to perform the model training. First let us create additional helper functions that will be needed to visualize different functions associated with training.
Step13: Configure training
Previously we have described the concepts of loss function, the optimizers or learners and the associated machinery needed to train a model. Please refer to earlier labs for gaining familiarility with these concepts. Here we combine model training and testing in a helper function below.
Step14: <a id='#Run the trainer'></a>
Run the trainer and test model
We are now ready to train our convolutional neural net.
Step15: Note, the average test error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid overfitting.
Let us check what is the value of some of the network parameters. We will check the bias value of the output dense layer. Previously, it was all 0. Now you see there are non-zero values, indicating that a model parameters were updated during training.
Step16: Evaluation / Prediction
We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
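A minimal illustrative sketch of that mapping (not part of the lab code): softmax turns a vector of activations into values that are positive and sum to 1:
import numpy as np
def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()
print(softmax(np.array([2.0, 1.0, 0.1])))   # probabilities summing to 1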
Step17: Let us test a small minibatch sample from the test data.
Step18: Let us visualize some of the results
Step19: Pooling Layer
Often times, one needs to control the number of parameters especially when having deep networks. For every layer of the convolution layer output (each layer, corresponds to the output of a filter), one can have a pooling layer. Pooling layers are typically introduced to
Step20: Typical convolution network
A typical CNN contains a set of alternating convolution and pooling layers followed by a dense output layer for classification. You will find variants of this structure in many classical deep networks (VGG, AlexNet etc). This is in contrast to the MLP network we used in Lab 3, which consisted of 2 dense layers followed by a dense output layer.
The illustrations are presented in the context of 2-dimensional (2D) images, but the concept and the CNTK components can operate on any dimensional data. The above schematic shows 2 convolution layer and 2 pooling layers. A typical strategy is to increase the number of filters in the deeper layers while reducing the spatial size of each intermediate layers.
Task
Step21: Knowledge Check | Python Code:
# Figure 1
Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200)
Explanation: Lab 4 - Convolutional Neural Network with MNIST
This lab corresponds to Module 4 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 to download the MNIST data.
We will train a Convolutional Neural Network (CNN) on MNIST data.
Introduction
A convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network made up of neurons that have learnable weights and biases, very similar to ordinary multi-layer perceptron (MLP) networks introduced in Module 3. The CNNs take advantage of the spatial nature of the data.
In nature, we perceive different objects by their shapes, size and colors. For example, objects in a natural scene are typically composed of edges, corners/vertices (defined by two of more edges), color patches etc. These primitives are often identified using different detectors (e.g., edge detection, color detector) or combination of detectors interacting to facilitate image interpretation (object classification, region of interest detection, scene description etc.) in real world vision related tasks. These detectors are also known as filters.
Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing say edges, corners, colors etc in the input image). Historically, these filters are a set of weights that were often hand crafted or modeled with mathematical functions (e.g., Gaussian / Laplacian / Canny filter). The filter outputs are mapped through non-linear activation functions mimicking human brain cells called neurons.
Convolutional networks provide a machinery to learn these filters from the data directly instead of explicit mathematical models and have been found to be superior (in real world tasks) compared to historically crafted filters. With convolutional networks, the focus is on learning the filter weights instead of learning individually fully connected pair-wise (between inputs and outputs) weights. In this way, the number of weights to learn is reduced when compared with the traditional MLP networks from the previous tutorials. In a convolutional network, one learns several filters ranging in number from single digits to thousands depending on the network complexity.
Many of the CNN primitives have been shown to have a conceptually parallel components in brain's visual cortex. A neuron in the visual cortex will emit responses when a certain region of its input cells are stimulated. This region is known as the receptive field (RF) of the neuron.
Equivalently, in a convolution layer, the input region corresponding to the filter dimensions at certain locations in the input layer can be considered as the receptive field of the nodes in the convolutional layer. Popular deep CNNs or ConvNets (such as AlexNet, VGG, Inception, ResNet) that are used for various computer vision tasks have many of these architectural primitives (inspired from biology).
We will introduce the convolution operation and gain familiarity with the different parameters in CNNs.
Problem:
We will continue to work on the same problem of recognizing digits in MNIST data. The MNIST data consists of hand-written digits with little background noise.
End of explanation
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import time
import cntk as C
%matplotlib inline
Explanation: Goal:
Our goal is to train a classifier that will identify the digits in the MNIST dataset.
Approach:
The same 5 stages we have used in the previous labs are applicable: Data reading, Data preprocessing, Creating a model, Learning the model parameters and Evaluating (a.k.a. testing/prediction) the model.
We will experiment with two models with different architechtural components.
End of explanation
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this notebook is designed to work with 2.0. Current Version: " + C.__version__)
Explanation: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
End of explanation
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
# Define the data dimensions
input_dim_model = (1, 28, 28) # images are 28 x 28 with 1 channel of color (gray)
input_dim = 28*28 # used by readers to treat input data as a vector
num_output_classes = 10
Explanation: Data reading
In this section, we will read the data generated in Lab1_MNIST_DataLoader.
We are using the MNIST data that you have downloaded using the Lab1_MNIST_DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In previous labs, as shown below, we have always flattened the input image into a vector. With convolutional networks, we do not flatten the image - we preserve its 2D shape.
Input Dimensions:
In convolutional networks for images, the input data is often shaped as a 3D matrix (number of channels, image width, image height), which preserves the spatial relationship between the pixels. In the MNIST data, the image is single-channel (grayscale) data, so the input dimension is specified as a (1, image width, image height) tuple.
Natural-scene color images are often presented as Red-Green-Blue (RGB) color channels. The input dimensions of such images are specified as a (3, image width, image height) tuple. If one has RGB input data as a volumetric scan with volume width, volume height and volume depth representing the 3 axes, the input data format would be specified by a tuple of 4 values (3, volume width, volume height, volume depth). In this way, CNTK enables the specification of inputs in arbitrary higher-dimensional spaces.
End of explanation
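# Illustrative input-dimension tuples for other data types (a sketch only; these
# hypothetical variables are not used anywhere below). They follow the
# (channels, width, height[, depth]) convention described in the explanation above.
rgb_input_dim_model = (3, 224, 224)            # e.g. a color photo of 224 x 224 pixels
volumetric_input_dim_model = (3, 64, 64, 64)   # e.g. an RGB volumetric scan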
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
ctf = C.io.CTFDeserializer(path, C.io.StreamDefs(
labels=C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False),
features=C.io.StreamDef(field='features', shape=input_dim, is_sparse=False)))
return C.io.MinibatchSource(ctf,
randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1)
# Ensure the training and test data is available
# We search in two locations in the toolkit for the cached MNIST data set.
data_found=False # A flag to indicate if train/test data found in local cache
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file=os.path.join(data_dir, "Train-28x28_cntk_text.txt")
test_file=os.path.join(data_dir, "Test-28x28_cntk_text.txt")
if os.path.isfile(train_file) and os.path.isfile(test_file):
data_found=True
break
if not data_found:
raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader")
print("Data directory is {0}".format(data_dir))
Explanation: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in distributed readers that can scale to terabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form:
|labels 0 0 0 1 0 0 0 0 0 0 |features 0 0 0 0 ...
(784 integers each representing a pixel)
We are going to use the image pixels corresponding the integer stream named "features". We define a create_reader function to read the training and test data using the CTF deserializer. The labels are 1-hot encoded. Refer to Lab 1 for data format visualizations.
End of explanation
Image(url="https://www.cntk.ai/jup/cntk103d_conv2d_final.gif", width= 300)
Explanation: CNN Model Creation
A CNN is a feedforward network made up of a stack of layers in such a way that the output of one layer becomes the input to the next layer (similar to an MLP). In an MLP, every input pixel is connected to every output node, with each connection having its own weight, leading to a very large number of parameters to be learnt and also increasing the possibility of overfitting (details). Convolution layers take advantage of the spatial arrangement of the pixels and learn multiple filters that significantly reduce the number of parameters in the network (details). The size of the filter is a parameter of the convolution layer.
In this section, we introduce the basics of convolution operations. We show the illustrations in the context of RGB images (3 channels), even though the MNIST data we are using is grayscale (single channel).
Convolution Layer
A convolution layer is a set of filters. Each filter is defined by a weight (W) matrix, and bias ($b$).
These filters are scanned across the image performing the dot product between the weights and corresponding input value (${x}$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function. This process is illustrated in the following animation.
End of explanation
# Plot images with strides of 2 and 1 with padding turned on
images = [("https://www.cntk.ai/jup/cntk103d_padding_strides.gif" , 'With stride = 2'),
("https://www.cntk.ai/jup/cntk103d_same_padding_no_strides.gif", 'With stride = 1')]
for im in images:
print(im[1])
display(Image(url=im[0], width=200, height=200))
Explanation: Convolution layers incorporate the following key features:
Instead of being fully-connected to all input nodes, each convolution node is locally-connected to a subset of input nodes localized to a smaller input region, also referred to as the receptive field (RF). The figure above illustrates a small 3 x 3 region in the image as the RF region. In the case of an RGB image, there would be 3 such 3 x 3 regions, one for each of the 3 color channels.
Instead of having a single set of weights (as in a Dense layer), convolutional layers have multiple sets (shown in the figure with multiple colors), called filters. Each filter detects features within each possible RF in the input image. The output of the convolution is a set of n sub-layers (shown in the animation below) where n is the number of filters (refer to the above figure).
Within a sublayer, instead of each node having its own set of weights, a single set of shared weights is used by all nodes in that sublayer. This reduces the number of parameters to be learnt and helps reduce the risk of overfitting. This also opens the door to several aspects of deep learning which have enabled very practical solutions to be built:
Handling larger images (say 512 x 512)
Trying larger filter sizes (corresponding to a larger RF) say 11 x 11
Learning more filters (say 128)
Explore deeper architectures (100+ layers)
Achieve translation invariance (the ability to recognize a feature independent of where it appears in the image).
Strides and Pad parameters
How are filters positioned? In general, the filters are arranged in overlapping tiles, from left to right and top to bottom. Each convolution layer has a filter_shape parameter specifying the width and height of the filter. There is a parameter (strides) that controls how far to step to the right when moving the filter across a row, and how far to step down when moving to the next row. The boolean parameter pad controls whether the input should be padded around the edges to allow a complete tiling of the RFs near the borders.
The animation above shows the results with filter_shape = (3, 3), strides = (2, 2) and pad = False. The two animations below show the results when pad is set to True: first with a stride of 2, and second with a stride of 1.
Note: the shape of the output (the teal layer) differs between the two stride settings. In many problems, the stride and pad values are chosen to control the size of the output layer.
End of explanation
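# A small helper (a sketch based on standard convolution arithmetic, not part of the lab)
# showing how filter_shape, strides and pad determine the output width/height of a
# convolution or pooling layer.
def conv_output_size(input_size, filter_size, stride, pad):
    if pad:
        # With padding, the input is tiled completely: output is ceil(input / stride).
        return -(-input_size // stride)
    # Without padding, only fully covered positions count.
    return (input_size - filter_size) // stride + 1

# A 28 x 28 image with a 5 x 5 filter, stride 2 and pad=True gives a 14 x 14 output.
print(conv_output_size(28, 5, 2, True))    # 14
print(conv_output_size(28, 5, 2, False))   # 12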
x = C.input_variable(input_dim_model)
y = C.input_variable(num_output_classes)
Explanation: Building our CNN models
We define two containers. One for the input MNIST image and the second one being the labels corresponding to the 10 digits. When reading the data, the reader automatically maps the 784 pixels per image to a shape defined by the input_dim_model tuple (in this example it is set to (1, 28, 28)).
End of explanation
# function to build model
# relu error:1.37
# sigmoid error: 2.69
# leaky relu error: 1.26
# tanh error: 1.4
# def create_model(features):
# with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.sigmoid):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.tanh):
# h = features
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=8,
# strides=(2,2),
# pad=True, name='first_conv')(h)
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=16,
# strides=(2,2),
# pad=True, name='second_conv')(h)
# r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
# return r
# average pool
# relu error:1.25
# leak relu error: 1.17
# sigmoid error:3.89
# tanh erro:1.47
# def create_model(features):
# with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
# h = features
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=8,
# strides=(1,1),
# pad=True, name='first_conv')(h)
# h = C.layers.AveragePooling(filter_shape=(5,5),
# strides=(2,2), name = 'first_avg_pool')(h)
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=16,
# strides=(1,1),
# pad=True, name='second_conv')(h)
# h = C.layers.AveragePooling(filter_shape=(5,5),
# strides=(2,2), name = 'second_avg_pool')(h)
# r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
# return r
# max pool
# leaky relu 2*2 2*2 error:1.12
# leaky relu 5*5 5*5 error:1.37
def create_model(features):
with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
h = features
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=8,
strides=(1,1),
pad=True, name='first_conv')(h)
h = C.layers.MaxPooling(filter_shape=(5,5),
strides=(2,2), name = 'first_max_pool')(h)
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=16,
strides=(1,1),
pad=True, name='second_conv')(h)
h = C.layers.MaxPooling(filter_shape=(5,5),
strides=(2,2), name = 'second_max_pool')(h)
r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
return r
Explanation: The first model we build is a simple convolution only network. Here we have two convolutional layers. Since, our task is to detect the 10 digits in the MNIST database, the output of the network should be a vector of length 10, 1 element corresponding to each output class. This is achieved by projecting the output of the last convolutional layer using a dense layer with the output being num_output_classes. We have seen this before with Logistic Regression and MLP where features were mapped to the number of classes in the final layer. Also, note that since we will be using the softmax operation that is combined with the cross entropy loss function during training (see a few cells below), the final dense layer has no activation function associated with it.
The following figure illustrates the model we are going to build. Note the parameters in the model below are to be experimented with. These are often called network hyperparameters. Increasing the filter shape leads to an increase in the number of model parameters, increases the compute time and helps the model better fit the data. However, one runs the risk of overfitting. Typically, the number of filters in the deeper layers is greater than the number of filters in the layers before them. We have chosen 8 and 16 filters for the first and second layers, respectively. These hyperparameters should be experimented with during model building.
End of explanation
# Create the model
z = create_model(x)
# Print the output shapes / parameters of different components
print("Output Shape of the first convolution layer:", z.first_conv.shape)
print("Bias value of the last dense layer:", z.classify.b.value)
print("Output Shape of the second convolution layer:", z.second_conv.shape)
Explanation: Let us create an instance of the model and inspect the different components of the model. z will be used to represent the output of a network. In this model, we use the leaky ReLU activation function (the commented-out experiments above record the errors obtained with other choices). Note: using the C.layers.default_options is an elegant and concise way to build models. This is key to minimizing modeling errors, saving precious debugging time.
End of explanation
# Number of parameters in the network
C.logging.log_number_of_parameters(z)
Explanation: Understanding the number of learnable parameters in a model is key to deep learning since there is a dependency between the number of parameters and the amount of data one needs to have to train the model.
You need more data for a model that has a larger number of parameters to prevent overfitting. In other words, with a fixed amount of data, one has to constrain the number of parameters. There is no golden rule for the amount of data one needs for a given model. However, there are ways one can boost the performance of model training with data augmentation.
End of explanation
def create_criterion_function(model, labels):
loss = C.cross_entropy_with_softmax(model, labels)
errs = C.classification_error(model, labels)
return loss, errs # (model, labels) -> (loss, error metric)
Explanation: Understanding Parameters:
Our model has 2 convolution layers, each having a weight and a bias tensor. This adds up to 4 parameter tensors. Additionally, the dense layer has weight and bias tensors. Thus, the model has 6 parameter tensors in total.
Remember that in a convolutional layer, the number of parameters is not dependent on the number of nodes, only on the shared weights and bias of each filter.
Let us now count the number of parameters:
- First convolution layer: There are 8 filters each of size (1 x 5 x 5) where 1 is the number of channels in the input image. This adds up to 200 values in the weight matrix and 8 bias values.
- Second convolution layer: There are 16 filters each of size (8 x 5 x 5) where 8 is the number of channels in the input to the second layer (= output of the first layer). This adds up to 3200 values in the weight matrix and 16 bias values.
- Last dense layer: There are 16 x 7 x 7 input values and it produces 10 output values corresponding to the 10 digits in the MNIST dataset. This corresponds to (16 x 7 x 7) x 10 weight values and 10 bias values.
Adding these up gives the 11274 parameters in the model.
Knowledge Check: Does the dense layer shape align with the task (MNIST digit classification)?
Suggested Explorations
- Try printing shapes and parameters of different network layers,
- Record the training error you get with relu as the activation function,
- Now change to sigmoid as the activation function and see if you can improve your training error.
- Different supported activation functions can be found here. Which activation function gives the least training error?
Learning model parameters
We use the softmax function to map the accumulated evidence or activations to a probability distribution over the classes (Details of the softmax function and other activation functions).
Training
We minimize the cross-entropy between the label and predicted probability by the network. Since we are going to build more than one model, we will create a few helper functions.
End of explanation
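# Quick sanity check (a sketch) of the parameter count walked through in the explanation above.
# Note that the 11274 figure refers to the original convolution-only model with stride (2,2)
# (the commented-out variant further up); the active max-pooling variant has a different
# dense-layer input shape and therefore a different total.
conv1_params = 8 * (1 * 5 * 5) + 8          # 200 weights + 8 biases = 208
conv2_params = 16 * (8 * 5 * 5) + 16        # 3200 weights + 16 biases = 3216
dense_params = (16 * 7 * 7) * 10 + 10       # 7840 weights + 10 biases = 7850
print(conv1_params + conv2_params + dense_params)   # 11274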
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with the np.cumsum() function (a sketch follows below)
def moving_average(a, w=5):
if len(a) < w:
return a[:] # Need to send a copy of the array
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
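# Sketch of the np.cumsum-based alternative mentioned in the comment above. It computes the
# same w-window average as moving_average() and is shown purely for illustration.
def moving_average_cumsum(a, w=5):
    a = np.asarray(a, dtype=float)
    if len(a) < w:
        return list(a)
    c = np.cumsum(np.insert(a, 0, 0.0))   # c[i] == sum(a[:i])
    return list(a[:w]) + [(c[i] - c[i - w]) / w for i in range(w, len(a))]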
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss = "NA"
eval_error = "NA"
if mb%frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100))
return mb, training_loss, eval_error
Explanation: Next we will need a helper function to perform the model training. First let us create additional helper functions that will be needed to visualize different functions associated with training.
End of explanation
def train_test(train_reader, test_reader, model_func, num_sweeps_to_train_with=10):
# Instantiate the model function; x is the input (feature) variable
    # We will scale the input image pixels to the 0-1 range by dividing all input values by 255.
model = model_func(x/255)
# Instantiate the loss and error function
loss, label_error = create_criterion_function(model, y)
# Instantiate the trainer object to drive the model training
learning_rate = 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
# Initialize the parameters for the trainer
minibatch_size = 64
num_samples_per_sweep = 60000
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
# Map the data streams to the input and labels.
input_map={
y : train_reader.streams.labels,
x : train_reader.streams.features
}
    # Frequency (in minibatches) at which training progress is printed; lower it for more detailed logging
training_progress_output_freq = 500
# Start a timer
start = time.time()
for i in range(0, int(num_minibatches_to_train)):
# Read a mini batch from the training data file
data=train_reader.next_minibatch(minibatch_size, input_map=input_map)
trainer.train_minibatch(data)
print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
# Print training time
print("Training took {:.1f} sec".format(time.time() - start))
# Test the model
test_input_map = {
y : test_reader.streams.labels,
x : test_reader.streams.features
}
# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_samples // test_minibatch_size
test_result = 0.0
for i in range(num_minibatches_to_test):
# We are loading test data in batches specified by test_minibatch_size
# Each data point in the minibatch is a MNIST digit image of 784 dimensions
# with one pixel per dimension that we will encode / decode with the
# trained model.
data = test_reader.next_minibatch(test_minibatch_size, input_map=test_input_map)
eval_error = trainer.test_minibatch(data)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test))
Explanation: Configure training
Previously, we described the concepts of the loss function, the optimizers (learners) and the associated machinery needed to train a model. Please refer to the earlier labs to gain familiarity with these concepts. Here we combine model training and testing in the helper function below.
End of explanation
def do_train_test():
global z
z = create_model(x)
reader_train = create_reader(train_file, True, input_dim, num_output_classes)
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
train_test(reader_train, reader_test, z)
do_train_test()
Explanation: Run the trainer and test model
We are now ready to train our convolutional neural net.
End of explanation
print("Bias value of the last dense layer:", z.classify.b.value)
Explanation: Note that the average test error is comparable to our training error, indicating that our model has good "out of sample" error, a.k.a. generalization error. This implies that our model can effectively deal with observations it did not see during training. This is key to avoiding overfitting.
Let us check the values of some of the network parameters. We will check the bias values of the output dense layer. Previously, they were all 0. Now you see non-zero values, indicating that the model parameters were updated during training.
End of explanation
out = C.softmax(z)
Explanation: Evaluation / Prediction
We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
End of explanation
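# Tiny numeric illustration (a sketch) of what the softmax does to raw network outputs:
# it maps arbitrary activations to positive values that sum to 1.
raw_activations = np.array([2.0, 1.0, 0.1])
probabilities = np.exp(raw_activations) / np.sum(np.exp(raw_activations))
print(probabilities, probabilities.sum())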
# Read the data for evaluation
reader_eval=create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {x: reader_eval.streams.features, y:reader_eval.streams.labels}
data = reader_eval.next_minibatch(eval_minibatch_size, input_map=eval_input_map)
img_label = data[y].asarray()
img_data = data[x].asarray()
# reshape img_data to: M x 1 x 28 x 28 to be compatible with model
img_data = np.reshape(img_data, (eval_minibatch_size, 1, 28, 28))
predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))]
# Find the index with the maximum value for both predicted as well as the ground truth
pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))]
gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))]
print("Label :", gtlabel[:25])
print("Predicted:", pred)
Explanation: Let us test a small minibatch sample from the test data.
End of explanation
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r")
plt.axis('off')
img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_pred)
Explanation: Let us visualize some of the results
End of explanation
# Plot images with strides of 2 and 1 with padding turned on
images = [("https://www.cntk.ai/jup/c103d_max_pooling.gif" , 'Max pooling'),
("https://www.cntk.ai/jup/c103d_average_pooling.gif", 'Average pooling')]
for im in images:
print(im[1])
display(Image(url=im[0], width=200, height=200))
Explanation: Pooling Layer
Oftentimes, one needs to control the number of parameters, especially in deep networks. For every layer of the convolution output (each sub-layer corresponds to the output of a filter), one can have a pooling layer. Pooling layers are typically introduced to:
- Reduce the shape of the current layer (speeding up the network),
- Make the model more tolerant to changes in object location in the image. For example, even when a digit is shifted to one side of the image instead of being in the middle, the classifier would still perform the classification task well.
The calculation on a pooling node is much simpler than that of a normal feedforward node. It has no weight, bias, or activation function. It uses a simple aggregation function (like max or average) to compute its output. The most commonly used function is "max" - a max pooling node simply outputs the maximum of the input values corresponding to the filter position of the input. The figure below shows the input values in a 4 x 4 region. The max pooling window size is 2 x 2, starts from the top left corner, and uses a stride of 2 x 2. The maximum value within the window becomes the output of the region. The window is then shifted by the amount specified by the stride parameter (as shown in the animation below) and the max pooling operation is repeated at each position.
Another alternative is average pooling, which emits the average value instead of the maximum value. The two different pooling operations are summarized in the animations below.
End of explanation
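# NumPy sketch (not CNTK) of 2 x 2 pooling with stride 2 on a 4 x 4 input, mirroring the
# animations above: each non-overlapping 2 x 2 block is reduced to a single value.
patch = np.array([[1, 3, 2, 0],
                  [4, 6, 5, 1],
                  [7, 2, 9, 8],
                  [0, 1, 3, 4]], dtype=float)
blocks = patch.reshape(2, 2, 2, 2)
print(blocks.max(axis=(1, 3)))    # max pooling     -> [[6. 5.] [7. 9.]]
print(blocks.mean(axis=(1, 3)))   # average pooling -> [[3.5 2. ] [2.5 6. ]]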
# Modify this model
def create_model(features):
with C.layers.default_options(init = C.glorot_uniform(), activation = C.relu):
h = features
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=8,
strides=(2,2),
pad=True, name='first_conv')(h)
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=16,
strides=(2,2),
pad=True, name='second_conv')(h)
r = C.layers.Dense(num_output_classes, activation = None, name='classify')(h)
return r
# do_train_test()
Explanation: Typical convolution network
A typical CNN contains a set of alternating convolution and pooling layers followed by a dense output layer for classification. You will find variants of this structure in many classical deep networks (VGG, AlexNet etc). This is in contrast to the MLP network we used in Lab 3, which consisted of 2 dense layers followed by a dense output layer.
The illustrations are presented in the context of 2-dimensional (2D) images, but the concept and the CNTK components can operate on data of any dimensionality. The above schematic shows 2 convolution layers and 2 pooling layers. A typical strategy is to increase the number of filters in the deeper layers while reducing the spatial size of the intermediate layers.
Task: Create a network with Average Pooling
Typical convolutional networks have interlacing convolution and pooling layers. The previous model had only convolution layers. In this section, you will create a model with the following architecture.
You will use the CNTK Average Pooling function to achieve this task. You will edit the create_model function below and add the Average Pooling operation.
End of explanation
# Evaluate the trained model on a custom hand-written digit image (a 28 x 28 grayscale bitmap).
from PIL import Image
img = Image.open("MysteryNumberD.bmp").convert("F")   # "F" mode = 32-bit floating point pixels
imgarr = np.asarray(img)
imgarr = np.reshape(imgarr, (1, 28, 28))              # match the model input shape (1, 28, 28)
print("Predicted digit:", np.argmax(out.eval(imgarr)))
print("Class probabilities:", out.eval(imgarr))
# Sanity checks: the custom image has the same type and rank as the evaluation data used above.
print(type(imgarr), imgarr.ndim)
print(type(img_data[1]), img_data[1].ndim)
Explanation: Knowledge Check: How many parameters do we have in this second model (as shown in the figure)?
Suggested Explorations
- Add average pooling layer after each of the two convolution layer. Use the parameters as shown in the figure.
- Does use of LeakyRelu help improve the error rate?
- What percentage of the parameters does the last dense layer contribute to the overall number of parameters for (a) the purely two-convolution-layer model and (b) the model with alternating convolution and average pooling layers? (A sketch for case (a) follows below.)
End of explanation |
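# Back-of-the-envelope sketch for the last exploration above, using the parameter counts
# of the purely two-convolution-layer (stride 2, no pooling) model discussed earlier.
dense_params = (16 * 7 * 7) * 10 + 10            # 7850
total_params = 208 + 3216 + dense_params         # 11274
print("Dense layer share of parameters: {:.1%}".format(dense_params / float(total_params)))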
3,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial On Save & Restore Model
Introduction
This tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.
This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.
Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.
This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: PrettyTensor version
Step4: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step5: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step6: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
Step7: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
Step8: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step11: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step12: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step13: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step14: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
Step15: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Step16: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
Step17: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like
Step18: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step19: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
Step20: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step21: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step22: Saver
In order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the optimize()-function.
Step23: The saved files are often called checkpoints because they may be written at regular intervals during optimization.
This is the directory used for saving and retrieving the data.
Step24: Create the directory if it does not exist.
Step25: This is the path for the checkpoint-file.
Step26: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step27: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
Step28: Execute the function now to initialize the variables.
Step29: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step30: The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
Step31: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
Step32: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step33: Helper-function to plot confusion matrix
Step34: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
Step35: Calculate the predicted class for the test-set.
Step36: Calculate the predicted class for the validation-set.
Step37: Helper-functions for the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4
Step38: Calculate the classification accuracy on the validation-set.
Step39: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Step40: Helper-function for plotting convolutional weights
Step41: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
Step42: The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
Step43: Perform 10,000 optimization iterations
We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.
An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
Step44: The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.
But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.
The mean and standard deviation has also changed slightly, so the optimized weights must be different.
Step45: Initialize Variables Again
Re-initialize all the variables of the neural network with random values.
Step46: This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
Step47: The convolutional weights should now be different from the weights shown above.
Step48: Restore Best Variables
Re-load all the variables that were saved to file during optimization.
Step49: The classification accuracy is high again when using the variables that were previously saved.
Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
Step50: The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
Step51: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Explanation: TensorFlow Tutorial On Save & Restore Model
Introduction
This tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.
This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.
Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.
This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
Explanation: Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
pt.__version__
Explanation: PrettyTensor version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
End of explanation
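# Tiny sketch of the One-Hot convention described above: the position of the single
# 1-element is the class-number.
one_hot_example = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
print(np.argmax(one_hot_example))   # 3 -> this label encodes the digit 3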
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the convolutional network.
A loss measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
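# Quick shape sanity check (a sketch): the reshape turns flat 784-pixel vectors into
# 4-dim image tensors of shape [num_images, 28, 28, 1].
print(x.get_shape())        # (?, 784)
print(x_image.get_shape())  # (?, 28, 28, 1)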
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.argmax(y_true, dimension=1)
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
x_pretty = pt.wrap(x_image)
Explanation: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
End of explanation
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
Explanation: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
End of explanation
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
Explanation: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
End of explanation
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
Explanation: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Explanation: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
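# Plain-NumPy illustration (a sketch) of the accuracy calculation above: booleans are cast
# to 0/1 floats and averaged.
example_correct = np.array([True, True, False, True])
print(example_correct.astype(np.float32).mean())   # 0.75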
saver = tf.train.Saver()
Explanation: Saver
In order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the optimize()-function.
End of explanation
save_dir = 'checkpoints/'
Explanation: The saved files are often called checkpoints because they may be written at regular intervals during optimization.
This is the directory used for saving and retrieving the data.
End of explanation
if not os.path.exists(save_dir):
os.makedirs(save_dir)
Explanation: Create the directory if it does not exist.
End of explanation
save_path = os.path.join(save_dir, 'best_validation')
Explanation: This is the path for the checkpoint-file.
End of explanation
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
def init_variables():
session.run(tf.global_variables_initializer())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
End of explanation
init_variables()
Explanation: Execute the function now to initialize the variables.
End of explanation
train_batch_size = 64
Explanation: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
Explanation: The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
End of explanation
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function to plot confusion matrix
End of explanation
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
Explanation: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
Explanation: Calculate the predicted class for the test-set.
End of explanation
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
Explanation: Calculate the predicted class for the validation-set.
End of explanation
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
Explanation: Helper-functions for the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4
End of explanation
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
Explanation: Calculate the classification accuracy on the validation-set.
End of explanation
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
End of explanation
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting convolutional weights
End of explanation
print_test_accuracy()
Explanation: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
End of explanation
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Perform 10,000 optimization iterations
We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.
An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.
But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.
The mean and standard deviation have also changed slightly, so the optimized weights must be different.
End of explanation
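If you prefer a numerical check over the visual comparison, a small sketch like the following can be used; it assumes you saved a copy of the weights before training, for example w_before = session.run(weights_conv1) right after initializing the variables.
# Sketch: quantify how much the first-layer filters changed during optimization.
w_after = session.run(weights_conv1)
print("Max absolute change:", np.abs(w_after - w_before).max())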
init_variables()
Explanation: Initialize Variables Again
Re-initialize all the variables of the neural network with random values.
End of explanation
print_test_accuracy()
Explanation: This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights should now be different from the weights shown above.
End of explanation
saver.restore(sess=session, save_path=save_path)
Explanation: Restore Best Variables
Re-load all the variables that were saved to file during optimization.
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: The classification accuracy is high again when using the variables that were previously saved.
Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
3,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Dataset API
Learning Objectives
1. Learn how to use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how to use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section
Step2: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches the dataset (batch_size)
and returns a tf.data.Dataset
Step3: Let's test our function by iterating twice over our dataset in batches of 3 datapoints
Step4: Loss function and gradients
The loss function and the function that computes the gradients are the same as before
Step5: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
Step6: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare dataset files have been saved into ../toy_data.
Check that this is the case in the cell below and, if not, regenerate the taxifare dataset.
Step7: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
Step8: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located
Step9: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Step10: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary
Step11: Let's iterate over 2 examples from our tempds dataset and apply our features_and_labels
function to each of the examples to make sure it's working
Step12: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented for our dataset to produce tuples of features and labels.
Step13: Let's test that our batches are of the right size
Step14: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == "train") from when it shouldn't (mode == "eval").
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speed-up training
Step15: Let's check that our function works well in both modes | Python Code:
# The json module is mainly used to convert the python dictionary above into a JSON string that can be written into a file
import json
# The math module in python provides some mathematical functions
import math
# The OS module in python provides functions for interacting with the operating system
import os
# The pprint module provides a capability to `pretty-print` arbitrary Python data structures in a form which can be used as input to the interpreter
from pprint import pprint
# Here we'll import data processing libraries like numpy and tensorflow
import numpy as np
import tensorflow as tf
# Here we'll show the currently installed version of TensorFlow
print(tf.version.VERSION)
Explanation: TensorFlow Dataset API
Learning Objectives
1. Learn how to use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how to use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
N_POINTS = 10
# The tf.constant() method creates a constant tensor from a tensor-like object.
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
Explanation: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section:
End of explanation
# Let's define create_dataset() procedure
# TODO 1
def create_dataset(X, Y, epochs, batch_size):
# Using the tf.data.Dataset.from_tensor_slices() method we are able to get the slices of a list or an array
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)
return dataset
Explanation: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches the dataset (batch_size)
and returns a tf.data.Dataset:
Remark: Note that the last batch may not contain the exact number of elements you specified because the dataset was exhausted.
If you want batches with the exact same number of elements per batch, we will have to discard the last batch by
setting:
python
dataset = dataset.batch(batch_size, drop_remainder=True)
We will do that here.
End of explanation
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
# You can convert a native TF tensor to a NumPy array using .numpy() method
# Let's output the value of `x` and `y`
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
Explanation: Let's test our function by iterating twice over our dataset in batches of 3 datapoints:
End of explanation
# Let's define the loss_mse() procedure, which returns the mean squared error between the predictions w0 * X + w1 and Y
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
# Let's define the compute_gradients() procedure, which uses a gradient tape to record operations and return the gradients of the loss with respect to w0 and w1
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
Explanation: Loss function and gradients
The loss function and the function that computes the gradients are the same as before:
End of explanation
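As a quick sanity check (not part of the original lab), the loss should be essentially zero at the true parameters of the generating function y = 2x + 10:
# Sketch: the MSE loss vanishes at w0 = 2, w1 = 10.
print(loss_mse(X, Y, tf.constant(2.0), tf.constant(10.0)))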
# Here we will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
# TODO 2
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = create_dataset(X, Y, epochs=EPOCHS, batch_size=BATCH_SIZE)
for step, (X_batch, Y_batch) in enumerate(dataset):
dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X_batch, Y_batch, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
Explanation: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
End of explanation
# ls shows the working directory's contents.
# Using -l parameter will lists the files with assigned permissions
!ls -l ../toy_data/taxi*.csv
Explanation: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare dataset files have been saved into ../toy_data.
Check that this is the case in the cell below and, if not, regenerate the taxifare dataset.
End of explanation
# Defining the feature names into a list `CSV_COLUMNS`
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
# Defining the default values into a list `DEFAULTS`
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
Explanation: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
End of explanation
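As an aside, if the same data were stored as TFRecords rather than CSV, the make_batched_features_dataset helper mentioned above could be used along these lines; the feature spec and file pattern below are purely illustrative assumptions, not part of this lab.
# Illustrative sketch only -- this lab reads CSV files, not TFRecords.
feature_spec = {
    'fare_amount': tf.io.FixedLenFeature([], tf.float32),
    'passenger_count': tf.io.FixedLenFeature([], tf.float32),
}
tfrecord_ds = tf.data.experimental.make_batched_features_dataset(
    file_pattern='../toy_data/taxi-train*.tfrecord',  # hypothetical files
    batch_size=32,
    features=feature_spec,
    label_key='fare_amount')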
# TODO 3
def create_dataset(pattern):
# The tf.data.experimental.make_csv_dataset() method reads CSV files into a dataset
return tf.data.experimental.make_csv_dataset(
pattern, 1, CSV_COLUMNS, DEFAULTS)
tempds = create_dataset('../toy_data/taxi-train*')
# Let's output the value of `tempds`
print(tempds)
Explanation: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located:
End of explanation
# Let's iterate over the first two element of this dataset using `dataset.take(2)`.
# Then convert them ordinary Python dictionary with numpy array as values for more readability:
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
Explanation: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
End of explanation
UNWANTED_COLS = ['pickup_datetime', 'key']
# Let's define the features_and_labels() method
# TODO 4a
def features_and_labels(row_data):
# The .pop() method returns the value for the given key and removes it from the dict.
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
Explanation: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary:
Remove the unwanted column "key"
Keep the label separate from the features
Let's first implement a function that takes as input a row (represented as an OrderedDict in our tf.data.Dataset as above) and then returns a tuple with two elements:
The first element being the same OrderedDict with the label dropped
The second element being the label itself (fare_amount)
Note that we will need to also remove the key and pickup_datetime column, which we won't use.
End of explanation
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
Explanation: Let's iterate over 2 examples from our tempds dataset and apply our features_and_labels
function to each of the examples to make sure it's working:
End of explanation
# Let's define the create_dataset() method
# TODO 4b
def create_dataset(pattern, batch_size):
# The tf.data.experimental.make_csv_dataset() method reads CSV files into a dataset
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
return dataset.map(features_and_labels)
Explanation: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented for our dataset to produce tuples of features and labels.
End of explanation
BATCH_SIZE = 2
tempds = create_dataset('../toy_data/taxi-train*', batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
Explanation: Let's test that our batches are of the right size:
End of explanation
# TODO 4c
def create_dataset(pattern, batch_size=1, mode="eval"):
# The tf.data.experimental.make_csv_dataset() method reads CSV files into a dataset
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
# The map() function executes a specified function for each item in an iterable.
# The item is sent to the function as a parameter.
dataset = dataset.map(features_and_labels).cache()
if mode == "train":
dataset = dataset.shuffle(1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == "train") from when it shouldn't (mode == "eval").
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speed-up training:
End of explanation
tempds = create_dataset('../toy_data/taxi-train*', 2, "train")
print(list(tempds.take(1)))
tempds = create_dataset('../toy_data/taxi-valid*', 2, "eval")
print(list(tempds.take(1)))
Explanation: Let's check that our function works well in both modes:
End of explanation |
3,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Synonym reconstruction - problem statement
This notebook is more of a game. We first retrieve synonyms from the WOLF database, keeping only synonyms made up of a single word. We then take an arbitrary text and split it into sentences. For each sentence encountered, we replace randomly chosen words with their synonyms. Each sentence appears once unchanged and several times with different synonyms. The goal is then to propose a method to reconstruct the synonym database.
Step1: Building the synonym database
Step2: We go through the whole database (about 120,000 lines) and stop after 10,000 synonyms, because otherwise it takes far too long.
Step3: Display the first groups
Step4: Generating a base of modified sentences
We use Zadig.
Step5: Split the text into words.
Step6: Generate the modified sentences
Step7: Process all the words
Step8: Exercise | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Synonym reconstruction - problem statement
This notebook is more of a game. We first retrieve synonyms from the WOLF database, keeping only synonyms made up of a single word. We then take an arbitrary text and split it into sentences. For each sentence encountered, we replace randomly chosen words with their synonyms. Each sentence appears once unchanged and several times with different synonyms. The goal is then to propose a method to reconstruct the synonym database.
End of explanation
from actuariat_python.data import wolf_xml
wolf_xml()
import os
if not os.path.exists("wolf-1.0b4.xml"):
raise FileNotFoundError("wolf-1.0b4.xml")
if os.stat("wolf-1.0b4.xml").st_size < 3000000:
raise FileNotFoundError("Size of 'wolf-1.0b4.xml' is very small: {0}".format(os.stat("wolf-1.0b4.xml").st_size))
from actuariat_python.data import enumerate_wolf_synonyms
for syn in enumerate_wolf_synonyms("wolf-1.0b4.xml", errors="ignore"):
print(syn)
break
Explanation: Building the synonym database
End of explanation
allsyn = {}
for line, syn in enumerate(enumerate_wolf_synonyms("wolf-1.0b4.xml")):
if line % 10000 == 0: print("line", line, "allsyn", len(allsyn))
clean = [_.lower() for _ in syn if " " not in _]
if len(clean) > 1:
for word in clean:
if word not in allsyn:
allsyn[word] = set(clean)
continue
else:
for cl in clean:
allsyn[word].add(cl)
if len(allsyn) > 10000:
break
len(allsyn)
Explanation: We go through the whole database (about 120,000 lines) and stop after 10,000 synonyms, because otherwise it takes far too long.
End of explanation
i = 0
for k, v in allsyn.items():
print(k,v)
i += 1
if i > 10:
break
Explanation: Display the first groups:
End of explanation
import urllib.request
with urllib.request.urlopen("http://www.gutenberg.org/cache/epub/4647/pg4647.txt") as u:
content = u.read()
char = content.decode(encoding="utf-8")
Explanation: Generating a base of modified sentences
We use Zadig.
End of explanation
import re
reg = re.compile("([- a-zA-Zâàäéèëêîïôöùûü']+)")
phrases = [_.lower() for _ in reg.findall(char)]
for i, phrase in enumerate(phrases):
if i >= 990:
print(phrase)
if i >= 1000:
break
Explanation: Split the text into words.
End of explanation
import random
def modification(phrase, allsyn, nmax=10):
mots = phrase.split()
options = []
nb = 1
for mot in mots:
if mot in allsyn:
options.append(list(set([mot] + list(allsyn[mot]))))
else:
options.append([mot])
nb *= len(options[-1])
if nb == 1:
return [phrase]
else:
res = []
for i in range(0, min(nmax, nb//2+1, nb)):
sol = []
for mot in options:
h = random.randint(0, len(mot)-1)
sol.append(mot[h])
res.append(sol)
return res
modification("chatouiller le cérébral", allsyn)
Explanation: Generate the modified sentences:
End of explanation
len(phrases)
with open("zadig_augmente.txt", "w", encoding="utf-8") as f:
total = 0
init = 0
for i, phrase in enumerate(phrases):
augm = modification(phrase, allsyn)
init += 1
for au in augm:
f.write(" ".join(au) + "\n")
total += 1
"total", total, "initial", init
Explanation: Process all the words:
End of explanation
from pyensae.datasource import download_data
download_data("zadig_augmente.zip")
Explanation: Exercise: recover part of the synonym database from the last file created.
The file used can be generated with the preceding code, or you can use this version: zadig_augmente.zip.
End of explanation |
3,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
Survived: Outcome of survival (0 = No; 1 = Yes)
Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
Name: Name of passenger
Sex: Sex of the passenger
Age
Step3: This example shows how to remove the Survived data from the DataFrame. Note that data (the passenger data) and outcomes (the survival outcomes) are now matched up, meaning that for any passenger, data.loc[i] has the corresponding survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score them. Since what interests us is how accurate our predictions are, that is, the proportion of passengers whose survival we predict correctly, run the code below to create our accuracy_score function and test it on the predictions for the first five passengers.
Think about it: starting from the sixth passenger, if we predicted that they all survived, what do you think the accuracy of that prediction would be?
Step5: Hint: if you save the iPython Notebook, the output of running the code will also be saved. However, once you reopen the project, your workspace will be reset. Make sure to run the code from where you last left off to regenerate variables and functions.
Making predictions
If we had to predict whether the passengers on the Titanic survived, but knew nothing about them, then the best prediction we could make is that no one on the ship survived. This is because we can assume that when the ship sank, most of the passengers perished. The predictions_0 function below predicts that every passenger on the ship died.
Step6: Question 1
Compared against the actual Titanic data, how accurate do you think a prediction that none of the passengers survived would be?
Hint: run the code below to see the accuracy of this prediction.
Step7: Answer
Step9: Observing the survival statistics of the Titanic passengers, we can see that most male passengers died when the ship sank, while most female passengers survived. Let's build on this observation: if a passenger was male, we predict that they did not survive; if a passenger was female, we predict that they survived the disaster.
Fill in the code below so that the function makes this prediction correctly.
Hint: you can access the value of each feature of a passenger with dictionary-style access. For example, passenger['Sex'] returns the sex of the passenger.
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Step11: Answer
Step13: Looking more closely at the Titanic survival statistics, we see that in the sinking most boys younger than 10 survived, whereas most males aged 10 or older died. Let's continue building on our previous prediction: if a passenger was female, we predict that they survived; if a passenger was male and younger than 10, we also predict that they survived; everyone else, we predict, did not survive.
Fill in the missing code below so that our function implements this prediction.
Hint
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: run the code below to see the accuracy of this prediction.
Step15: Answer
Step17: After looking at and exploring the graphed Titanic passenger data, fill in the missing portions of the code below so that the function returns your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Conclusion
Describe the steps you took to reach a prediction model with at least 80% accuracy. Which features did you look at? Were some features more helpful than others? What conditions did you use to predict survival? What is the accuracy of your final predictions?
Hint | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Predicting Titanic Passenger Survival
In 1912, the Titanic struck an iceberg and sank on its maiden voyage, killing most of its passengers and crew. In this introductory project, we will explore a subset of the Titanic passenger manifest to determine which features best predict whether a person survived. To complete this project, you will need to implement several condition-based predictions and answer the questions below. Your submission will be evaluated on the completeness of the code and the answers to the questions.
Hint: text like this will guide you through completing the project with the iPython Notebook.
Click here to see the English version of this file.
Getting Started
To begin working with the Titanic passenger data, we first import the modules we need and load the data into a pandas DataFrame. Run the code cell below to load the data and display the first few passenger entries with the .head() function.
Hint: you can run a code cell by clicking on it and using the keyboard shortcut Shift+Enter or Shift+Return, or by pressing the play (run cell) button after selecting it. Markdown text like this can be edited by double-clicking and saved with the same shortcuts. Markdown lets you write readable plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
Survived: Outcome of survival (0 = No; 1 = Yes)
Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
Name: Name of passenger
Sex: Sex of the passenger
Age: Age of the passenger (some entries contain NaN)
SibSp: Number of siblings and spouses of the passenger aboard
Parch: Number of parents and children of the passenger aboard
Ticket: Ticket number of the passenger
Fare: Fare paid by the passenger
Cabin: Cabin number of the passenger (some entries contain NaN)
Embarked: Port of embarkation of the passenger (C = boarded at Cherbourg; Q = boarded at Queenstown; S = boarded at Southampton)
Since we are interested in whether each passenger or crew member survived the disaster, we can remove the Survived feature from this dataset and store it in a separate variable, outcomes. It is also the target we want to predict.
Run the code to remove Survived from the dataset and store it in the variable outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: This example shows how to remove the Survived data from the DataFrame. Note that data (the passenger data) and outcomes (the survival outcomes) are now matched up, meaning that for any passenger, data.loc[i] has the corresponding survival outcome outcome[i].
To validate our predictions, we need a metric to score them. Since what interests us is the accuracy of our predictions, that is, the proportion of passengers whose survival we predict correctly, run the code below to create our accuracy_score function and test it on the predictions for the first five passengers.
Think about it: starting from the sixth passenger, if we predicted that they all survived, what do you think the accuracy of that prediction would be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Hint: if you save the iPython Notebook, the output of running the code will also be saved. However, once you reopen the project, your workspace will be reset. Make sure to run the code from where you last left off to regenerate variables and functions.
Making predictions
If we had to predict whether the passengers on the Titanic survived, but knew nothing about them, then the best prediction we could make is that no one on the ship survived. This is because we can assume that when the ship sank, most of the passengers perished. The predictions_0 function below predicts that every passenger on the ship died.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Compared against the actual Titanic data, how accurate do you think a prediction that none of the passengers survived would be?
Hint: run the code below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions have an accuracy of 61.62%.
We can use the survival_stats function to see how strongly the Sex feature affects a passenger's survival. This function is defined in the titanic_visualizations.py Python script provided with the project. The first two parameters passed to the function are the Titanic passenger data and the passengers' survival outcomes. The third parameter indicates which feature to plot the survival statistics across.
Run the code below to plot the survival statistics of passengers by sex.
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Observing the survival statistics of the Titanic passengers, we can see that most male passengers died when the ship sank, while most female passengers survived. Let's build on this observation: if a passenger was male, we predict that they did not survive; if a passenger was female, we predict that they survived the disaster.
Fill in the code below so that the function makes this prediction correctly.
Hint: you can access the value of each feature of a passenger with dictionary-style access. For example, passenger['Sex'] returns the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: run the code below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Parch', ["Sex == 'female'"])
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using only the Sex feature of each passenger, we noticeably improved the accuracy of our predictions. Now let's see whether an additional feature can improve them further. For example, considering all the male passengers aboard the Titanic, can we find a subset of those passengers with a higher survival rate? Let's use the survival_stats function again to look at the Age of each male passenger. This time we will use the fourth parameter to restrict the bar chart to male passengers only.
Run the code below to plot the survival statistics of male passengers by age.
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Age'] > 10:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Looking more closely at the Titanic survival statistics, we see that in the sinking most boys younger than 10 survived, whereas most males aged 10 or older died. Let's continue building on our previous prediction: if a passenger was female, we predict that they survived; if a passenger was male and younger than 10, we also predict that they survived; everyone else, we predict, did not survive.
Fill in the missing code below so that our function implements this prediction.
Hint: you can start from your previous predictions_1 code and modify it to implement the new prediction function.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: run the code below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Age < 30"])
Explanation: Answer: Predictions have an accuracy of 68.91%.
Adding the Age feature in combination with Sex improved the accuracy over using Sex alone. Now it is your turn: find a series of features and conditions to partition the data and raise the prediction accuracy above 80%. This may require multiple features and multiple levels of conditional statements to succeed, and you can use the same feature several times under different conditions. Pclass, Sex, Age, SibSp and Parch are suggested features to try.
Use the survival_stats function to observe the survival statistics of the Titanic passengers.
Hint: to use multiple filter conditions, put each condition in a list and pass it as the last parameter. For example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Pclass'] > 2:
predictions.append(0)
elif passenger['Age'] > 10:
predictions.append(0)
elif passenger['Parch'] < 1:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Parch'] > 3:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After looking at and exploring the graphed Titanic passenger data, fill in the missing portions of the code below so that the function returns your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: you can start from your previous predictions_2 code and modify it to implement the new prediction function.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Conclusion
Describe the steps you took to reach a prediction model with at least 80% accuracy. Which features did you look at? Were some features more helpful than others? What conditions did you use to predict survival? What is the accuracy of your final predictions?
Hint: run the code below to see the accuracy of your predictions.
End of explanation |
3,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 2
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Process the item embeddings data
You run the sp_ExractEmbeddings stored procedure to process the item embeddings data and write the results to the item_embeddings table.
This stored procedure works as follows
Step5: Get a count of the records in the item_embeddings table
Step6: See a sample of the data in the item_embeddings table
Step7: Export the item embedding vector data
Export the item embedding data to Cloud Storage by using a Dataflow pipeline. This pipeline does the following
Step8: Run the pipeline
It takes about 5 minutes to run the pipeline. You can see the graph for the running pipeline in the Dataflow Console.
Step9: List the CSV files that were written to Cloud Storage | Python Code:
!pip install -U -q apache-beam[gcp]
Explanation: Part 2: Process the item embedding data in BigQuery and export it to Cloud Storage
This notebook is the second of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Process the song embeddings data in BigQuery to generate a single embedding vector for each song.
Use a Dataflow pipeline to write the embedding vector data to CSV files and export the files to a Cloud Storage bucket.
Before starting this notebook, you must run the 01_train_bqml_mf_pmi notebook to calculate item PMI data and then train a matrix factorization model with it.
After completing this notebook, run the 03_create_embedding_lookup_model notebook to create a model to serve the item embedding data.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
End of explanation
import os
import numpy as np
import tensorflow.io as tf_io
import apache_beam as beam
from datetime import datetime
Explanation: Import libraries
End of explanation
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourDataflowRegion' # Change to your Dataflow region.
BQ_DATASET_NAME = 'recommendations'
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
REGION: The region to use for the Dataflow job.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
%%bigquery --project $PROJECT_ID
CALL recommendations.sp_ExractEmbeddings()
Explanation: Process the item embeddings data
You run the sp_ExractEmbeddings stored procedure to process the item embeddings data and write the results to the item_embeddings table.
This stored procedure works as follows:
Uses the ML.WEIGHTS function to extract the item embedding matrices from the item_matching_model model.
Aggregates these matrices to generate a single embedding vector for each item.
Because BigQuery ML matrix factorization models are designed for user-item recommendation use cases, they generate two embedding matrices, one for users and the other for items. However, in this use case, both embedding matrices represent items, but in different axes of the feedback matrix. For more information about how the feedback matrix is calculated, see Understanding item embeddings.
Run the sp_ExractEmbeddings stored procedure
End of explanation
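If you want to peek at the raw factor weights that the stored procedure aggregates, you can query the model weights directly; this exploratory query is only an illustration and is not part of the stored procedure itself.
%%bigquery --project $PROJECT_ID
SELECT *
FROM ML.WEIGHTS(MODEL `recommendations.item_matching_model`)
LIMIT 5;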
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) embedding_count
FROM recommendations.item_embeddings;
Explanation: Get a count of the records in the item_embeddings table:
End of explanation
%%bigquery --project $PROJECT_ID
SELECT *
FROM recommendations.item_embeddings
LIMIT 5;
Explanation: See a sample of the data in the item_embeddings table:
End of explanation
runner = 'DataflowRunner'
timestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')
job_name = f'ks-bqml-export-embeddings-{timestamp}'
bq_dataset_name = BQ_DATASET_NAME
embeddings_table_name = 'item_embeddings'
output_dir = f'gs://{BUCKET}/bqml/item_embeddings'
project = PROJECT_ID
temp_location = os.path.join(output_dir, 'tmp')
region = REGION
print(f'runner: {runner}')
print(f'job_name: {job_name}')
print(f'bq_dataset_name: {bq_dataset_name}')
print(f'embeddings_table_name: {embeddings_table_name}')
print(f'output_dir: {output_dir}')
print(f'project: {project}')
print(f'temp_location: {temp_location}')
print(f'region: {region}')
try: os.chdir(os.path.join(os.getcwd(), 'embeddings_exporter'))
except: pass
Explanation: Export the item embedding vector data
Export the item embedding data to Cloud Storage by using a Dataflow pipeline. This pipeline does the following:
Reads the item embedding records from the item_embeddings table in BigQuery.
Writes each item embedding record to a CSV file.
Writes the item embedding CSV files to a Cloud Storage bucket.
The pipeline is implemented in the embeddings_exporter/pipeline.py module.
Configure the pipeline variables
Configure the variables needed by the pipeline:
End of explanation
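The real implementation lives in embeddings_exporter/pipeline.py; the fragment below is only a rough sketch of the shape such a pipeline takes, and the column names item_Id and embedding are assumptions for illustration.
# Illustrative sketch only -- see embeddings_exporter/pipeline.py for the actual pipeline.
def sketch_pipeline(pipeline, query, output_prefix):
    return (pipeline
            | 'ReadEmbeddings' >> beam.io.ReadFromBigQuery(query=query, use_standard_sql=True)
            | 'ToCsvLine' >> beam.Map(lambda row: ','.join(
                [str(row['item_Id'])] + [str(v) for v in row['embedding']]))
            | 'WriteCsv' >> beam.io.WriteToText(output_prefix, file_name_suffix='.csv'))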
if tf_io.gfile.exists(output_dir):
print("Removing {} contents...".format(output_dir))
tf_io.gfile.rmtree(output_dir)
print("Creating output: {}".format(output_dir))
tf_io.gfile.makedirs(output_dir)
!python runner.py \
--runner={runner} \
--job_name={job_name} \
--bq_dataset_name={bq_dataset_name} \
--embeddings_table_name={embeddings_table_name} \
--output_dir={output_dir} \
--project={project} \
--temp_location={temp_location} \
--region={region}
Explanation: Run the pipeline
It takes about 5 minutes to run the pipeline. You can see the graph for the running pipeline in the Dataflow Console.
End of explanation
!gsutil ls {output_dir}/embeddings-*.csv
Explanation: List the CSV files that were written to Cloud Storage
End of explanation |
3,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 DeepMind Technologies Limited. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
Full license text
Step1: A (very) basic GAN for MNIST in JAX/Haiku
Based on a TensorFlow tutorial written by Mihaela Rosca.
Original GAN paper
Step2: Define the dataset
Step13: Define the model
Step14: Train the model
Step15: Visualize the losses
Unlike losses for classifiers or VAEs, GAN losses do not decrease steadily, instead going up and down depending on the training dynamics.
Step16: Visualize samples | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 DeepMind Technologies Limited. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
Full license text
End of explanation
# Uncomment the line below if running on colab.research.google.com.
# !pip install dm-haiku
import functools
from typing import Any, NamedTuple
import haiku as hk
import jax
import optax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: A (very) basic GAN for MNIST in JAX/Haiku
Based on a TensorFlow tutorial written by Mihaela Rosca.
Original GAN paper: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
Imports
End of explanation
# Download the data once.
mnist = tfds.load("mnist")
def make_dataset(batch_size, seed=1):
def _preprocess(sample):
# Convert to floats in [0, 1].
image = tf.image.convert_image_dtype(sample["image"], tf.float32)
# Scale the data to [-1, 1] to stabilize training.
return 2.0 * image - 1.0
ds = mnist["train"]
ds = ds.map(map_func=_preprocess,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.cache()
ds = ds.shuffle(10 * batch_size, seed=seed).repeat().batch(batch_size)
return iter(tfds.as_numpy(ds))
Explanation: Define the dataset
End of explanation
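As a quick sanity check (not in the original notebook), you can pull a single batch from the iterator and confirm its shape and value range:
# Sketch: inspect one batch; images are 28x28x1 and scaled to [-1, 1].
sample_ds = make_dataset(batch_size=4)
sample_batch = next(sample_ds)
print(sample_batch.shape, sample_batch.min(), sample_batch.max())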
class Generator(hk.Module):
Generator network.
def __init__(self, output_channels=(32, 1), name=None):
super().__init__(name=name)
self.output_channels = output_channels
def __call__(self, x):
Maps noise latents to images.
x = hk.Linear(7 * 7 * 64)(x)
x = jnp.reshape(x, x.shape[:1] + (7, 7, 64))
for output_channels in self.output_channels:
x = jax.nn.relu(x)
x = hk.Conv2DTranspose(output_channels=output_channels,
kernel_shape=[5, 5],
stride=2,
padding="SAME")(x)
# We use a tanh to ensure that the generated samples are in the same
# range as the data.
return jnp.tanh(x)
class Discriminator(hk.Module):
Discriminator network.
def __init__(self,
output_channels=(8, 16, 32, 64, 128),
strides=(2, 1, 2, 1, 2),
name=None):
super().__init__(name=name)
self.output_channels = output_channels
self.strides = strides
def __call__(self, x):
Classifies images as real or fake.
for output_channels, stride in zip(self.output_channels, self.strides):
x = hk.Conv2D(output_channels=output_channels,
kernel_shape=[5, 5],
stride=stride,
padding="SAME")(x)
x = jax.nn.leaky_relu(x, negative_slope=0.2)
x = hk.Flatten()(x)
# We have two classes: 0 = input is fake, 1 = input is real.
logits = hk.Linear(2)(x)
return logits
def tree_shape(xs):
return jax.tree_map(lambda x: x.shape, xs)
def sparse_softmax_cross_entropy(logits, labels):
one_hot_labels = jax.nn.one_hot(labels, logits.shape[-1])
return -jnp.sum(one_hot_labels * jax.nn.log_softmax(logits), axis=-1)
class GANTuple(NamedTuple):
gen: Any
disc: Any
class GANState(NamedTuple):
params: GANTuple
opt_state: GANTuple
class GAN:
A basic GAN.
def __init__(self, num_latents):
self.num_latents = num_latents
# Define the Haiku network transforms.
# We don't use BatchNorm so we don't use `with_state`.
self.gen_transform = hk.without_apply_rng(
hk.transform(lambda *args: Generator()(*args)))
self.disc_transform = hk.without_apply_rng(
hk.transform(lambda *args: Discriminator()(*args)))
# Build the optimizers.
self.optimizers = GANTuple(gen=optax.adam(1e-4, b1=0.5, b2=0.9),
disc=optax.adam(1e-4, b1=0.5, b2=0.9))
@functools.partial(jax.jit, static_argnums=0)
def initial_state(self, rng, batch):
    """Returns the initial parameters and optimizer states."""
# Generate dummy latents for the generator.
dummy_latents = jnp.zeros((batch.shape[0], self.num_latents))
# Get initial network parameters.
rng_gen, rng_disc = jax.random.split(rng)
params = GANTuple(gen=self.gen_transform.init(rng_gen, dummy_latents),
disc=self.disc_transform.init(rng_disc, batch))
print("Generator: \n\n{}\n".format(tree_shape(params.gen)))
print("Discriminator: \n\n{}\n".format(tree_shape(params.disc)))
# Initialize the optimizers.
opt_state = GANTuple(gen=self.optimizers.gen.init(params.gen),
disc=self.optimizers.disc.init(params.disc))
return GANState(params=params, opt_state=opt_state)
def sample(self, rng, gen_params, num_samples):
    """Generates images from noise latents."""
latents = jax.random.normal(rng, shape=(num_samples, self.num_latents))
return self.gen_transform.apply(gen_params, latents)
def gen_loss(self, gen_params, rng, disc_params, batch):
    """Generator loss."""
# Sample from the generator.
fake_batch = self.sample(rng, gen_params, num_samples=batch.shape[0])
# Evaluate using the discriminator. Recall class 1 is real.
fake_logits = self.disc_transform.apply(disc_params, fake_batch)
fake_probs = jax.nn.softmax(fake_logits)[:, 1]
loss = -jnp.log(fake_probs)
return jnp.mean(loss)
def disc_loss(self, disc_params, rng, gen_params, batch):
    """Discriminator loss."""
# Sample from the generator.
fake_batch = self.sample(rng, gen_params, num_samples=batch.shape[0])
# For efficiency we process both the real and fake data in one pass.
real_and_fake_batch = jnp.concatenate([batch, fake_batch], axis=0)
real_and_fake_logits = self.disc_transform.apply(disc_params,
real_and_fake_batch)
real_logits, fake_logits = jnp.split(real_and_fake_logits, 2, axis=0)
# Class 1 is real.
real_labels = jnp.ones((batch.shape[0],), dtype=jnp.int32)
real_loss = sparse_softmax_cross_entropy(real_logits, real_labels)
# Class 0 is fake.
fake_labels = jnp.zeros((batch.shape[0],), dtype=jnp.int32)
fake_loss = sparse_softmax_cross_entropy(fake_logits, fake_labels)
return jnp.mean(real_loss + fake_loss)
@functools.partial(jax.jit, static_argnums=0)
def update(self, rng, gan_state, batch):
    """Performs a parameter update."""
rng, rng_gen, rng_disc = jax.random.split(rng, 3)
# Update the discriminator.
disc_loss, disc_grads = jax.value_and_grad(self.disc_loss)(
gan_state.params.disc,
rng_disc,
gan_state.params.gen,
batch)
disc_update, disc_opt_state = self.optimizers.disc.update(
disc_grads, gan_state.opt_state.disc)
disc_params = optax.apply_updates(gan_state.params.disc, disc_update)
# Update the generator.
gen_loss, gen_grads = jax.value_and_grad(self.gen_loss)(
gan_state.params.gen,
rng_gen,
gan_state.params.disc,
batch)
gen_update, gen_opt_state = self.optimizers.gen.update(
gen_grads, gan_state.opt_state.gen)
gen_params = optax.apply_updates(gan_state.params.gen, gen_update)
params = GANTuple(gen=gen_params, disc=disc_params)
opt_state = GANTuple(gen=gen_opt_state, disc=disc_opt_state)
gan_state = GANState(params=params, opt_state=opt_state)
log = {
"gen_loss": gen_loss,
"disc_loss": disc_loss,
}
return rng, gan_state, log
Explanation: Define the model
End of explanation
#@title {vertical-output: true}
num_steps = 20001
log_every = num_steps // 100
# Let's see what hardware we're working with. The training takes a few
# minutes on a GPU, a bit longer on CPU.
print(f"Number of devices: {jax.device_count()}")
print("Device:", jax.devices()[0].device_kind)
print("")
# Make the dataset.
dataset = make_dataset(batch_size=64)
# The model.
gan = GAN(num_latents=20)
# Top-level RNG.
rng = jax.random.PRNGKey(1729)
# Initialize the network and optimizer.
rng, rng1 = jax.random.split(rng)
gan_state = gan.initial_state(rng1, next(dataset))
steps = []
gen_losses = []
disc_losses = []
for step in range(num_steps):
rng, gan_state, log = gan.update(rng, gan_state, next(dataset))
# Log the losses.
if step % log_every == 0:
# It's important to call `device_get` here so we don't take up device
# memory by saving the losses.
log = jax.device_get(log)
gen_loss = log["gen_loss"]
disc_loss = log["disc_loss"]
print(f"Step {step}: "
f"gen_loss = {gen_loss:.3f}, disc_loss = {disc_loss:.3f}")
steps.append(step)
gen_losses.append(gen_loss)
disc_losses.append(disc_loss)
Explanation: Train the model
End of explanation
sns.set_style("whitegrid")
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
# Plot the discriminator loss.
axes[0].plot(steps, disc_losses, "-")
axes[0].plot(steps, np.log(2) * np.ones_like(steps), "r--",
label="Discriminator is being fooled")
axes[0].legend(fontsize=20)
axes[0].set_title("Discriminator loss", fontsize=20)
# Plot the generator loss.
axes[1].plot(steps, gen_losses, '-')
axes[1].set_title("Generator loss", fontsize=20);
Explanation: Visualize the losses
Unlike losses for classifiers or VAEs, GAN losses do not decrease steadily, instead going up and down depending on the training dynamics.
End of explanation
#@title {vertical-output: true}
def make_grid(samples, num_cols=8, rescale=True):
batch_size, height, width = samples.shape
assert batch_size % num_cols == 0
num_rows = batch_size // num_cols
# We want samples.shape == (height * num_rows, width * num_cols).
samples = samples.reshape(num_rows, num_cols, height, width)
samples = samples.swapaxes(1, 2)
samples = samples.reshape(height * num_rows, width * num_cols)
return samples
# Generate samples from the trained generator.
rng = jax.random.PRNGKey(12)
samples = gan.sample(rng, gan_state.params.gen, num_samples=64)
samples = jax.device_get(samples)
samples = samples.squeeze(axis=-1)
# Our model outputs values in [-1, 1] so scale it back to [0, 1].
samples = (samples + 1.0) / 2.0
plt.gray()
plt.axis("off")
samples_grid = make_grid(samples)
plt.imshow(samples_grid);
Explanation: Visualize samples
End of explanation |
3,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing In Context
Social Sciences Track
Lecture 3--text mining for real
Matthew L. Jones
like, with code and stuff
Step1: IMPORTANT
Step2: Let's get some text
Let's use the remarkable narratives available from Documenting the American South (http
Step3: list comprehensions!
most python thing evah!
how many words in each text within our_texts? can you make a list?
Sure, you could do this as a for loop
for text in our texts
Step4: How to process text
Python Libraries
Python has an embarrasment of riches when it comes to working with texts. Some libraries are higher level with simpler, well thought out defaults, namely pattern and TextBlob. Most general, of long development, and foundational is the Natural Language Tool Kit--NLTK. The ideas we'll learn to today are key--they have slightly different instantiations in the different tools. Not everything is yet in Python 3, alas!!
nltk
Step5: Our first tool
Step6: for the documentation of sklearn's text data functionality, see http
Step7: so document_term_matrix is a matrix with 294 rows--the documents--and 1658 columns--the vocabulary or terms or features
Step8: right now stored super efficiently as a sparse matrix
almost all zeros--good for our computers' limited memory
easier for us to see as a dense matrix
Step9: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Now we can throw wide variety of mining algorithms at our data!
Similarity and dissimilarity
We reduced our text to a vector of term-weights.
What can we do once we've committed this real violence on the text?
We can measure distance and similarity
I know. Crazy talk.
Right now our text is just a series of numbers, indexed to words. We can treat it like any collection of vectors more or less.
And the key way to distinguish two vectors is by measuring their distance or computing their similiarity (1-distance).
You already know how, though you may have buried it along with memories of high school.
Many distance metrics to choose from
key one in textual analysis
Step10: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
Step11: HOMEWORK EXERCISE
Step12: OMG U...G...L...Y!
WHAT THE? This is nonsense
what's the problem?
we just tried to plot a bunch o' features!
we need only the most significant words!
way to do this | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import textmining_blackboxes as tm
Explanation: Computing In Context
Social Sciences Track
Lecture 3--text mining for real
Matthew L. Jones
like, with code and stuff
End of explanation
#see if package imported correctly
tm.icantbelieve("butter")
Explanation: IMPORTANT: tm is our temporary helper, not a standard python package!!
download it from my github:
https://github.com/matthewljones/computingincontext
End of explanation
title_info=pd.read_csv('data/na-slave-narratives/data/toc.csv')
#this is the "metadata" of these files--we didn't use today
#why does data appear twice?
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note if you want the following notebook will work on any directory of text files.
len(our_texts)
our_texts[100][:300] # first 300 characters of the 100th text
Explanation: Let's get some text
Let's use the remarkable narratives available from Documenting the American South (http://docsouth.unc.edu/docsouthdata/)
Assuming that you are storing your data in a directory in the same place as your iPython notebook.
Put the slave narratives texts within a data directory in the same place as this notebook
End of explanation
lengths=[len(text) for text in our_texts]
Explanation: list comprehensions!
most python thing evah!
how many words in each text within our_texts? can you make a list?
Sure, you could do this as a for loop:
for text in our_texts:
blah.blah.blah(text) #not real code
or
for i in range(len(our_texts)):
But super easy in python:
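For comparison, here is a sketch of the explicit loop that builds the same list as the comprehension above (note that len() of a string counts characters; len(text.split()) would count words):
lengths = []
for text in our_texts:
    lengths.append(len(text))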
End of explanation
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
Explanation: How to process text
Python Libraries
Python has an embarrassment of riches when it comes to working with texts. Some libraries are higher level with simpler, well thought out defaults, namely pattern and TextBlob. Most general, of long development, and foundational is the Natural Language Tool Kit--NLTK. The ideas we'll learn today are key--they have slightly different instantiations in the different tools. Not everything is yet in Python 3, alas!!
nltk : grandparent of text analysis packages, cross-platform, complex
crucial for moving beyond bag of words: tagging & other grammatical analysis
pattern : higher level and easier to use than nltk, but Python 2.7 only. (wah!)
textblob : even higher level range of natural language processing (3.4 but not yet in conda?)
scikit learn (sklearn): toolkit for scientists, faster, better (use for processing/memory intensive stuff) (Our choice!)
Things we might do to clean up text
tokenization
making .split much better
Examples??
stemming:
converting inflected forms into some normalized forms
e.g. "chefs" --> "chef"
"goes" --> "go"
"children" --> "child"
stopwords
they are the words you don't want to be included:
"from" "to" "a" "they" "she" "he"
If you need to do lots of such things, you'll want to use nltk, pattern or TextBlob.
For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
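Just to make the ideas above concrete, here is a minimal sketch in plain Python (the stopword set is the toy list from above, not a real one; stemming such as "chefs" --> "chef" would need a library like nltk):
stopwords = {"from", "to", "a", "they", "she", "he"}
tokens = "They rowed from the landing to the island".lower().split()  # crude tokenization
content_words = [t for t in tokens if t not in stopwords]
print(content_words)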
Our Zero-ith tool: cleaning up the text
I've included a little utility function in tm that takes a list of strings and cleans it up a bit
check out the code on your own time later
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
Explanation: Our first tool: vectorizer from scikit learn
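Roughly speaking (sklearn's implementation adds smoothing and normalization, so the exact numbers differ), the weight this vectorizer assigns to term $t$ in document $d$ is
$$\text{tf-idf}(t,d) = \text{tf}(t,d)\cdot\log\frac{N}{\text{df}(t)}$$
where $N$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$: terms frequent in a document but rare across the corpus get the highest weights.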
End of explanation
# now let's get our vocabulary--the names corresponding to the rows
# "feature" is the general term in machine learning and data mining
# we seek to characterize data by picking out features that will enable discovery
vocab=vectorizer.get_feature_names()
len(vocab)
document_term_matrix.shape
Explanation: for the documentation of sklearn's text data functionality, see http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
while this works, mini-lecture on crashes
see kernel above. Therein is the secret to eliminating the dreaded *.
End of explanation
vocab[1000:1100]
Explanation: so document_term_matrix is a matrix with 294 rows--the documents--and 1658 columns--the vocabulary or terms or features
End of explanation
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
Explanation: right now stored super efficiently as a sparse matrix
almost all zeros--good for our computers' limited memory
easier for us to see as a dense matrix
End of explanation
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similiary` can take
#an entire matrix as its argument
#what'd we get?
similarity
similarity.shape
Explanation: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Now we can throw wide variety of mining algorithms at our data!
Similarity and dissimilarity
We reduced our text to a vector of term-weights.
What can we do once we've committed this real violence on the text?
We can measure distance and similarity
I know. Crazy talk.
Right now our text is just a series of numbers, indexed to words. We can treat it like any collection of vectors more or less.
And the key way to distinguish two vectors is by measuring their distance or computing their similarity (1-distance).
You already know how, though you may have buried it along with memories of high school.
Many distance metrics to choose from
key one in textual analysis:
cosine similarity
If $\mathbf{a}$ and $\mathbf{b}$ are vectors, then
$\mathbf{a}\cdot\mathbf{b}=\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta$
Or
$\text{similarity} = \cos(\theta) = {A \cdot B \over \|A\| \|B\|} = \frac{ \sum\limits_{i=1}^{n}{A_i \times B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{(A_i)^2}} \times \sqrt{\sum\limits_{i=1}^{n}{(B_i)^2}} }$
(h/t wikipedia)
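To make the formula concrete, here is a quick hand computation with numpy for two tiny vectors (a sketch; it should agree with sklearn's cosine_similarity):
import numpy as np
a = np.array([1.0, 0.0, 2.0])
b = np.array([2.0, 1.0, 0.0])
print(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))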
End of explanation
similarity[100]
#this gives the similarity of row 100 to each of the other rows
Explanation: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
End of explanation
term_document_matrix=document_term_matrix.T
# .T is the easy transposition method for a
# matrix in python's matrix packages.
# import a bunch of packages we need
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import ward, dendrogram
#distance is 1-similarity, so:
dist=1-cosine_similarity(term_document_matrix)
# ward is an algorithm for hierarchical clustering
linkage_matrix=ward(dist)
#plot dendrogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
Explanation: HOMEWORK EXERCISE:
for a given document, find the most similar documents and give their titles from the csv file (a sketch of one approach follows below)
you'll see!
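A minimal sketch of one way to start on this exercise (an assumption-laden outline: it reuses the similarity matrix and title_info loaded above and simply shows the metadata rows of the five most similar documents):
doc = 100
closest = similarity[doc].argsort()[::-1][1:6]  # drop the document itself
title_info.iloc[closest]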
supervised vs. unsupervised learning
slides from class omitted
first example of unsupervised learning
hierarchical clustering
This time we're interested in relations among the words not the texts.
In other words, we're interested in the similarities between one column and another--one term and another term
So we'll work with the transposed matrix--the term-document matrix, rather than the document-term matrix.
For a description of hierarchical clustering, look at the example at https://en.wikipedia.org/wiki/Hierarchical_clustering
End of explanation
vectorizer=TfidfVectorizer(min_df=.96, stop_words='english', use_idf=True)
#try a very high min_df
#rerun the model
document_term_matrix=vectorizer.fit_transform(our_texts)
vocab=vectorizer.get_feature_names()
#check the length of the vocab
len(vocab)
#switch again to the term_document_matrix
term_document_matrix=document_term_matrix.T
dist=1-cosine_similarity(term_document_matrix)
linkage_matrix=ward(dist)
#plot dendogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
Explanation: OMG U...G...L...Y!
WHAT THE? This is nonsense
what's the problem?
we just tried to plot a bunch o' features!
we need only the most significant words!
way to do this: change the min_df parameter in vectorizer (as a float, min_df is the minimum fraction of documents a term must appear in to be kept)
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
more an art than a science
End of explanation |
3,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at a
desired location(s) in a
Step1: First, we set some parameters.
Step2: Load the MEG data
Step3: Estimate the background noise covariance from the baseline period
Step4: Generate sinusoids in two spatially distant labels
Step5: Find the center vertices in source space of each label
We want the known signal in each label to only be active at the center. We
create a mask for each label that is 1 at the center vertex and 0 at all
other vertices in the label. This mask is then used when simulating
source-space data.
Step6: Create source-space data with known signals
Put known signals onto surface vertices using the array of signals and
the label masks (stored in labels[i].values).
Step7: Plot original signals
Note that the original signals are highly concentrated (point) sources.
Step8: Simulate sensor-space signals
Use the forward solution and add Gaussian noise to simulate sensor-space
(evoked) data from the known source-space signals. The amount of noise is
controlled by nave (higher values imply less noise).
Step9: Plot the point-spread of corrupted signal
Notice that after applying the forward- and inverse-operators to the known
point sources that the point sources have spread across the source-space.
This spread is due to the minimum norm solution so that the signal leaks to
nearby vertices with similar orientations so that signal ends up crossing the
sulci and gyri. | Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.simulation import simulate_stc, simulate_evoked
Explanation: Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at a
desired location(s) in a :class:mne.SourceEstimate and then corrupt the
signal with point-spread by applying a forward and inverse solution.
End of explanation
seed = 42
# parameters for inverse method
method = 'sLORETA'
snr = 3.
lambda2 = 1.0 / snr ** 2
# signal simulation parameters
# do not add extra noise to the known signals
nave = np.inf
T = 100
times = np.linspace(0, 1, T)
dt = times[1] - times[0]
# Paths to MEG data
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_fwd = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-fwd.fif')
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
fname_evoked = op.join(data_path, 'MEG', 'sample',
'sample_audvis-ave.fif')
Explanation: First, we set some parameters.
End of explanation
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True,
use_cps=False)
fwd['info']['bads'] = []
inv_op = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw.fif'))
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True)
epochs.info['bads'] = []
evoked = epochs.average()
labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir)
label_names = [l.name for l in labels]
n_labels = len(labels)
Explanation: Load the MEG data
End of explanation
cov = mne.compute_covariance(epochs, tmin=None, tmax=0.)
Explanation: Estimate the background noise covariance from the baseline period
End of explanation
# The known signal is all zero-s off of the two labels of interest
signal = np.zeros((n_labels, T))
idx = label_names.index('inferiorparietal-lh')
signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times)
idx = label_names.index('rostralmiddlefrontal-rh')
signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times)
Explanation: Generate sinusoids in two spatially distant labels
End of explanation
hemi_to_ind = {'lh': 0, 'rh': 1}
for i, label in enumerate(labels):
# The `center_of_mass` function needs labels to have values.
labels[i].values.fill(1.)
# Restrict the eligible vertices to be those on the surface under
# consideration and within the label.
surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno']
restrict_verts = np.intersect1d(surf_vertices, label.vertices)
com = labels[i].center_of_mass(subject='sample',
subjects_dir=subjects_dir,
restrict_vertices=restrict_verts,
surf='white')
# Convert the center of vertex index from surface vertex list to Label's
# vertex list.
cent_idx = np.where(label.vertices == com)[0][0]
# Create a mask with 1 at center vertex and zeros elsewhere.
labels[i].values.fill(0.)
labels[i].values[cent_idx] = 1.
Explanation: Find the center vertices in source space of each label
We want the known signal in each label to only be active at the center. We
create a mask for each label that is 1 at the center vertex and 0 at all
other vertices in the label. This mask is then used when simulating
source-space data.
End of explanation
stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt,
value_fun=lambda x: x)
Explanation: Create source-space data with known signals
Put known signals onto surface vertices using the array of signals and
the label masks (stored in labels[i].values).
End of explanation
kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4,
time_unit='s', initial_time=0.05, size=1200,
views=['lat', 'med'])
clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7])
brain_gen = stc_gen.plot(clim=clim, **kwargs)
Explanation: Plot original signals
Note that the original signals are highly concentrated (point) sources.
End of explanation
evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave,
random_state=seed)
# Map the simulated sensor-space data to source-space using the inverse
# operator.
stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method)
Explanation: Simulate sensor-space signals
Use the forward solution and add Gaussian noise to simulate sensor-space
(evoked) data from the known source-space signals. The amount of noise is
controlled by nave (higher values imply less noise).
End of explanation
brain_inv = stc_inv.plot(**kwargs)
Explanation: Plot the point-spread of corrupted signal
Notice that after applying the forward- and inverse-operators to the known
point sources that the point sources have spread across the source-space.
This spread is due to the minimum norm solution so that the signal leaks to
nearby vertices with similar orientations so that signal ends up crossing the
sulci and gyri.
End of explanation |
3,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train policy
The training of the policy will take around 1h on GPU.
Step1: Generate and review frames from policy | Python Code:
iteration_num=300
hparams = trainer_lib.create_hparams("ppo_atari_base", "epochs_num={}".format(iteration_num+1))
ppo_dir = tempfile.mkdtemp(dir=data_dir, prefix="ppo_")
rl_trainer_lib.train(hparams, "stacked_pong", ppo_dir)
agent_policy_path = os.path.join(ppo_dir, "model{}.ckpt.index".format(iteration_num))[:-6]
Explanation: Train policy
The training of the policy will take around 1h on GPU.
End of explanation
sys.argv = [sys.argv[0], "--agent_policy_path", agent_policy_path]
tf.reset_default_graph()
# This step is also time consuming - takes around 30 minutes.
gym_problem = problems.problem("gym_pong_trajectories_from_policy")
gym_problem.generate_data(data_dir, tmp_dir)
dataset = gym_problem.dataset(Modes.TRAIN, data_dir)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
fig=plt.figure(figsize=(20, 80))
columns = 10
rows = 40
with tf.Session() as sess:
for inx in range(100):
value = sess.run(next_element)
for i in range(10): # skipping surplus frames.
value = sess.run(next_element)
fig.add_subplot(rows, columns, inx+1)
image = value["inputs"].reshape([210,160,3])
plt.imshow(image[:, :, 0].astype(np.float32), cmap=plt.get_cmap('gray'))
plt.show()
Explanation: Generate and review frames from policy
End of explanation |
3,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Clustering with Scikit-learn
(C) 2017-2019 by Damir Cavar
Download
Step1: To use the K-means clustering algorithm from Scikit-learn, we import it and specify the number of clusters (that is the k), and the random state to initialize the centroid centers of the clusters. We assume that the data can be grouped into two clusters
Step2: We can now apply the clustering algorithm to the datapoints in $X$
Step3: The output above shows the assignment of datapoints to clusters.
We can use the model now to make predictions about other datapoints
Step4: We can also output the centroids of the two clusters
Step5: If we want to use K-Nearest Neighbor in Scikit Learn, we need to import the KNeighborsClassifier from the neighbors submodule
Step6: We instantiate a KNN-classifier
Step7: We use the following dataset X and class-vector y
Step8: We train the classifier
Step9: We ask the classifier to suggest a class for an unseen vector
Step10: It can also give us the likelihoods for the probability of a data-point being in any of the classes
Step11: We might not have a class assignment for a sample set. If we want to use a sample set to find the closest data-point, we can use scikit-learn's NearestNeighbors. Here is a sample.
Step12: We can now train for K=1, the nearest neighbor model
Step13: We could ask for the nearest neighbor of a concrete data-point
Step14: The returned result [[0.5]] and [[2]] means that the nearest neighbor is the third sample in samples and that the distance between the two is 0.5. One can also query for the distance of multiple data-points. In this case the output of the distance is suppressed | Python Code:
import numpy
X = numpy.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
Explanation: Python Clustering with Scikit-learn
(C) 2017-2019 by Damir Cavar
Download: This and various other Jupyter notebooks are available from my GitHub repo.
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a tutorial related to the discussion of clustering in the textbook Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach.
This tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the Computational Linguistics Program of the Department of Linguistics at Indiana University.
K-means Clustering
We will use the array objects from the Python module numpy:
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, random_state=0)
Explanation: To use the K-means clustering algorithm from Scikit-learn, we import it and specify the number of clusters (that is the k), and the random state to initialize the centroid centers of the clusters. We assume that the data can be grouped into two clusters:
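As background (not stated in this notebook), k-means chooses the cluster assignments $C_j$ and centroids $\mu_j$ that minimize the within-cluster sum of squared distances
$$\sum_{j=1}^{k}\;\sum_{x_i\in C_j}\lVert x_i-\mu_j\rVert^2 .$$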
End of explanation
kmeans.fit(X)
print(kmeans.labels_)
Explanation: We can now apply the clustering algorithm to the datapoints in $X$:
End of explanation
print(kmeans.predict([[0, 0], [4, 4]]))
Explanation: The output above shows the assignment of datapoints to clusters.
We can use the model now to make predictions about other datapoints:
End of explanation
print(kmeans.cluster_centers_)
Explanation: We can also output the centroids of the two clusters:
End of explanation
from sklearn.neighbors import KNeighborsClassifier
Explanation: If we want to use K-Nearest Neighbor in Scikit Learn, we need to import the KNeighborsClassifier from the neighbors submodule:
End of explanation
KNNClassifier = KNeighborsClassifier(n_neighbors = 3)
Explanation: We instantiate a KNN-classifier:
End of explanation
X = [[0, 1], [1, 1], [2, 4], [3, 4]]
y = [0, 0, 1, 1]
Explanation: We use the following dataset X and class-vector y:
End of explanation
KNNClassifier.fit(X, y)
Explanation: We train the classifier:
End of explanation
print(KNNClassifier.predict([[1.1, 0.9]]))
Explanation: We ask the classifier to suggest a class for an unseen vector:
End of explanation
print(KNNClassifier.predict_proba([[2.9, 3.1]]))
Explanation: It can also give us the likelihoods for the probability of a data-point being in any of the classes:
End of explanation
samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
Explanation: We might not have a class assignment for a sample set. If we want to use a sample set to find the closest data-point, we can use scikit-learn's NearestNeighbors. Here is a sample.
End of explanation
from sklearn.neighbors import NearestNeighbors
KNNClassifier = NearestNeighbors(n_neighbors=1)
KNNClassifier.fit(samples)
Explanation: We can now train for K=1, the nearest neighbor model:
End of explanation
print(KNNClassifier.kneighbors([[1., 1., 1.]]))
Explanation: We could ask for the nearest neighbor of a concrete data-point:
End of explanation
X = [[0., 1., 0.], [1., 0., 1.]]
KNNClassifier.kneighbors(X, return_distance=False)
Explanation: The returned result [[0.5]] and [[2]] means that the nearest neighbor is the third sample in samples and that the distance between the two is 0.5. One can also query for the distance of multiple data-points. In this case the output of the distance is suppressed:
End of explanation |
3,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APS 8
Due
Step1: b) From the cdf calculation, we know that the probability of a student passing this course is 67.98%.
Dataset for the programming questions
We will work with the House Sales in King County problem, from Kaggle
https
Step2: Q3
The dataset provided is a sample drawn from a larger dataset. Could the sample have come from a population with a mean sqft_lot of 8250?
Steps
a) Verify that sqft_lot is not normal.
Hint
Step3: From the way the points spread along the line, we can state that this is not a normal distribution.
If the distribution were normal, then a hypothesis test could be carried out to analyze it.
The value of sigma would not be known, although an approximate value could be found; on the other hand, the value of mu would be known and could readily be used to evaluate the normal distribution.
Step4: Q4
Run a regression that uses sqft_living as the explanatory variable and price as the outcome
Step5: From the table we know the values of beta0 and beta1, and therefore
Step6: b). Interpret the R squared, the P>|t| values and also Prob (F-statistic).
Step7: d) Make a scatter plot of price as a function of sqft_living, and plot the equation found in the previous item on the same figure
Step8: Q5
Now run a multiple regression that tries to predict price (explained variable) as a function of the variables sqft_lot and sqft_basement (explanatory).
Hint | Python Code:
from scipy import stats
Prob = 1-(stats.norm.cdf(5,loc=5.5,scale=1.07))
Prob
Explanation: APS 8
Due: 28/11 at the end of office hours (17:15)
Question 1
Assume that $X$ is a continuous random variable describing the price of a digital multimeter in any Brazilian store. Also assume that the mean price is $R\$ 45.00$ and the standard deviation is $R\$ 25.00$.
In each of the items below, make clear how the new price was constructed as a function of $X$.
Leave all calculations clearly indicated.
a) Suppose every Brazilian store that sells multimeters decided to add 15 reais to the price of each of its multimeters. In that case, what are the new mean price of the multimeter and the corresponding standard deviation?
b) Suppose every Brazilian store that sells multimeters decided to raise the price of each of its multimeters by $150\%$. In that case, what are the new mean price of the multimeter and the corresponding standard deviation?
c) Suppose you need to buy ten multimeters and want to buy them all identical and from the same store. In that case, what are the expected total expense and the corresponding standard deviation?
Answers
a) In this case only the mean price changes, also increasing by 15.00 reais, following E(X+d) = 45 + 15 = 60; with X the previous price and d the increase, the new mean price is 60.00 reais. The standard deviation does not change in this case.
b) Since here the change is "multiplicative", the logic is: E(cX+d) = (45 x 1.5) + 45 = 112.50 reais, while the new standard deviation is the square root of the new variance, i.e. 2.5 x 25 = 62.50 reais.
c) Since the same price is repeated (the choices are all identical), the expected total expense is the mean multiplied by the number of products bought, i.e. 10 x 45 = 450 reais, and likewise the standard deviation is 10 x 25 = 250 reais.
Question 2
Assume that, in the Data Science course, the final grade is a function of the mean exam grade $X$ and the mean project grade $Y$. Assume further that both grades have the same weight in the final course grade.
Consider that the students' mean exam grade follows a normal distribution with mean 4 and standard deviation 1.5, i.e. $X \sim N(4 ; 2.25)$, and that the students' mean project grade follows a normal distribution with mean 7 and standard deviation 1, i.e. $Y \sim N(7 ; 1)$.
In practice, it is reasonable to believe there is a positive association between these grades. Here, consider that the covariance between these mean grades is $0.675$.
In each of the items below, leave all calculations clearly indicated.
a) Compute the final mean grade and the corresponding standard deviation.
b) Consider that the final grade in the Data Science course follows a distribution with the mean and variance obtained in the previous item. Compute the probability that a student passes the course.
Answers
a) Using the properties of expectation and variance, the final mean grade is E(aX+bY) = aE(X) + bE(Y) = 5.5, since both grades have the same weight (50%), with X having mean 4 and Y mean 7. For the standard deviation, compute the variance and take its square root: Var(aX+bY) = (0.5^2)x2.25 + (0.5^2)x1 + (2x0.5x0.5)x0.675 = 1.15, so the standard deviation equals 1.07.
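As a quick numerical check of the expectation and variance above (a sketch added for verification, not part of the original submission):
import numpy as np
mean = 0.5 * 4 + 0.5 * 7
var = 0.5**2 * 2.25 + 0.5**2 * 1 + 2 * 0.5 * 0.5 * 0.675
print(mean, np.sqrt(var))  # roughly 5.5 and 1.07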
End of explanation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats
%matplotlib inline
houses = pd.read_csv("houses.csv")
houses.head()
Explanation: b) From the cdf calculation above, the probability that a student passes this course is 67.98%.
Dataset for the programming questions
We will work with the House Sales in King County problem, from Kaggle
https://www.kaggle.com/harlfoxem/housesalesprediction
End of explanation
tamanho = houses.sqft_lot
stats.probplot(tamanho, dist='norm', plot=plt)
plt.show()
Explanation: Q3
The dataset provided is a sample drawn from a larger dataset. Could the sample have come from a population with a mean sqft_lot of 8250?
Steps:
a) Verify that sqft_lot is not normal.
Hint: a normality test was done in APS 7. Repeat the same process
b) If sqft_lot were normal, what kind of test could we have used?
c) Would we know $\mu$ to build the t statistic? Would we know $\sigma$?
Bootstrap:
d) Build a 95% percentile bootstrap interval.
e) Report the lower and upper values
f) Conclude the test
Answers
End of explanation
am = []
for i in houses.sqft_lot:
am.append(i)
al = 0.05
n = len(am)
s = np.std(am, ddof=1)
m_ = np.mean(am)
l = []
for i in range(10000):
#mean of a random resample (with replacement) for the bootstrap
a = np.random.choice(am,size = n, replace = True).mean()
l.append(a)
#Lower and upper values of the bootstrap percentile interval
print("Minimum value equal to: ")
print(pd.Series(l).quantile(al/2))
print("Maximum value equal to: ")
print(pd.Series(l).quantile(1-(al/2)))
Explanation: From the way the points spread along the line, we can state that this is not a normal distribution.
If the distribution were normal, then a hypothesis test could be carried out to analyze it.
The value of $\sigma$ would not be known, although an approximate value could be found; on the other hand, the value of $\mu$ would be known and could readily be used to evaluate the normal distribution.
End of explanation
hp = houses.price
hl = houses["sqft_living"]
hl = sm.add_constant(hl, prepend=True)
res = sm.OLS(hp,hl).fit()
res.summary()
Explanation: Q4
Run a regression that uses sqft_living as the explanatory variable and price as the outcome:
a). Following the notation used in class, which value is $\hat{\beta_0}$ and which is $\hat{\beta_1}$?
End of explanation
B0 = res.params.const
B1 = res.params.sqft_living
print("B0 รฉ igual a: ")
print(B0)
print("B1 รฉ igual a: ")
print(B1)
Explanation: From the table we know the values of beta0 and beta1, and therefore:
End of explanation
print("House.price = {0} + {1} hl, sendo hp o preรงo e hl o sqft_living, ou seja tamanho do interior habitรกvel da casa".format(B0,B1))
Explanation: b). Interpret the $R^2$, the $P > |t|$ values and also Prob (F-statistic).
The $R^2$ value is essentially a numerical measure of how much of the variation in the data the model is able to explain, i.e. a kind of goodness of fit. The $P>|t|$ value, in turn, is the probability of observing such a coefficient if there were no relationship or influence between the variables in question.
c) Write the equation that expresses price as a function of sqft_living.
End of explanation
houses.plot.scatter("sqft_living","price")
hp = 49281.45802 + 247.080994*houses["sqft_living"]
plt.plot(houses['sqft_living'],hp,"green")
Explanation: d) Make a scatter plot of price as a function of sqft_living, and plot the equation found in the previous item on the same figure
End of explanation
a2 = houses['price']
hloc = houses.loc[:, ['sqft_lot', 'sqft_basement']]
hloc = sm.add_constant(hloc, prepend=True)
resultados = sm.OLS(a2,hloc).fit()
resultados.summary()
Explanation: Q5
Now run a multiple regression that tries to predict price (explained variable) as a function of the variables sqft_lot and sqft_basement (explanatory).
Hint: put the two columns inside a single variable X; the rest does not change relative to the simple regression.
python
Y = houses['price']
X = houses.loc[:, ['sqft_lot', 'sqft_basement']]
You are asked to:
a) Repeat the interpretation of the P>|t| values and Prob (F-statistic), but now with two variables.
What do the values found imply for the regression?
These values, P>|t| and Prob (F-statistic), indicate how much evidence there is of a relationship between each of the two explanatory variables and price.
b) Write the function that was found by the regression (price as a function of sqft_lot and sqft_basement)
End of explanation |
3,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
================================================================
Compute sparse inverse solution with mixed norm
Step1: Run solver
Step2: Plot dipole activations
Step3: Plot residual
Step4: Generate stc from dipoles
Step5: View in 2D and 3D ("glass" brain like 3D plot)
Step6: Morph onto fsaverage brain and view | Python Code:
# Author: Alexandre Gramfort <[email protected]>
# Daniel Strohmeier <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.inverse_sparse import mixed_norm, make_stc_from_dipoles
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left Auditory'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0, tmax=0.3)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: ================================================================
Compute sparse inverse solution with mixed norm: MxNE and irMxNE
================================================================
Runs an (ir)MxNE (L1/L2 [1] or L0.5/L2 [2] mixed norm) inverse solver.
L0.5/L2 is done with irMxNE which allows for sparser
source estimates with less amplitude bias due to the non-convexity
of the L0.5/L2 mixed norm penalty.
References
.. [1] Gramfort A., Kowalski M. and Hamalainen, M.
"Mixed-norm estimates for the M/EEG inverse problem using accelerated
gradient methods", Physics in Medicine and Biology, 2012.
https://doi.org/10.1088/0031-9155/57/7/1937.
.. [2] Strohmeier D., Haueisen J., and Gramfort A.
"Improved MEG/EEG source localization with reweighted mixed-norms",
4th International Workshop on Pattern Recognition in Neuroimaging,
Tuebingen, 2014. 10.1109/PRNI.2014.6858545
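For orientation (background, not text from the example itself): the L1/L2 mixed norm referred to above penalizes the source matrix $X$ (sources $\times$ time) roughly as
$$\lVert X\rVert_{2,1}=\sum_i\sqrt{\sum_t X_{i,t}^2},$$
an L2 norm over each source's time course followed by an L1 sum over sources, which is what drives entire sources to exactly zero and yields the sparse, dipole-like solutions computed below.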
End of explanation
alpha = 55 # regularization parameter between 0 and 100 (100 is high)
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver
# if n_mxne_iter > 1 dSPM weighting can be avoided.
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
depth=depth, fixed=True,
use_cps=True)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute (ir)MxNE inverse solution with dipole output
dipoles, residual = mixed_norm(
evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000,
tol=1e-4, active_set_size=10, debias=True, weights=stc_dspm,
weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True,
return_as_dipoles=True)
Explanation: Run solver
End of explanation
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
Explanation: Plot dipole activations
End of explanation
ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600])
evoked.pick_types(meg=True, eeg=True, exclude='bads')
evoked.plot(ylim=ylim, proj=True)
residual.pick_types(meg=True, eeg=True, exclude='bads')
residual.plot(ylim=ylim, proj=True)
Explanation: Plot residual
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
fig_name="MxNE (cond %s)" % condition,
opacity=0.1)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
stc_fsaverage = stc.morph(subject_from='sample', subject_to='fsaverage',
grade=None, sparse=True, subjects_dir=subjects_dir)
src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
src_fsaverage = mne.read_source_spaces(src_fsaverage_fname)
plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1),
fig_name="Morphed MxNE (cond %s)" % condition,
opacity=0.1)
Explanation: Morph onto fsaverage brain and view
End of explanation |
3,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Basic tour of the Bayesian Optimization package
This is a constrained global optimization package built upon bayesian inference and gaussian process, that attempts to find the maximum value of an unknown function in as few iterations as possible. This technique is particularly suited for optimization of high cost functions, situations where the balance between exploration and exploitation is important.
Bayesian optimization works by constructing a posterior distribution of functions (gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.
As you iterate over and over, the algorithm balances its needs of exploration and exploitation taking into account what it knows about the target function. At each step a Gaussian Process is fitted to the known samples (points previously explored), and the posterior distribution, combined with a exploration strategy (such as UCB (Upper Confidence Bound), or EI (Expected Improvement)), are used to determine the next point that should be explored (see the gif below).
This process is designed to minimize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and common tools can be employed. Therefore Bayesian Optimization is most adequate for situations where sampling the function to be optimized is a very expensive endeavor. See the references for a proper discussion of this method.
1. Specifying the function to be optimized
This is a function optimization package, therefore the first and most important ingreedient is, of course, the function to be optimized.
DISCLAIMER
Step2: 2. Getting Started
All we need to get started is to instanciate a BayesianOptimization object specifying a function to be optimized f, and its parameters with their corresponding bounds, pbounds. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work
Step3: The BayesianOptimization object will work out of the box without much tuning needed. The main method you should be aware of is maximize, which does exactly what you think it does.
There are many parameters you can pass to maximize, nonetheless, the most important ones are
Step4: The best combination of parameters and target value found can be accessed via the property bo.max.
Step5: While the list of all parameters probed and their corresponding target values is available via the property bo.res.
Step6: 2.1 Changing bounds
During the optimization process you may realize the bounds chosen for some parameters are not adequate. For these situations you can invoke the method set_bounds to alter them. You can pass any combination of existing parameters and their associated new bounds.
Step7: 3. Guiding the optimization
It is often the case that we have an idea of regions of the parameter space where the maximum of our function might lie. For these situations the BayesianOptimization object allows the user to specify specific points to be probed. By default these will be explored lazily (lazy=True), meaning these points will be evaluated only the next time you call maximize. This probing process happens before the gaussian process takes over.
Parameters can be passed as dictionaries such as below
Step8: Or as an iterable. Beware that the order has to be alphabetical. You can usee optimizer.space.keys for guidance
Step9: 4. Saving, loading and restarting
By default you can follow the progress of your optimization by setting verbose>0 when instanciating the BayesianOptimization object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native JSONLogger object to save to and load progress from files.
4.1 Saving progress
Step10: The observer paradigm works by
Step11: 4.2 Loading progress
Naturally, if you stored progress you will be able to load that onto a new instance of BayesianOptimization. The easiest way to do it is by invoking the load_logs function, from the util submodule. | Python Code:
def black_box_function(x, y):
    """Function with unknown internals we wish to maximize.

    This is just serving as an example, for all intents and
    purposes think of the internals of this function, i.e.: the process
    which generates its output values, as unknown.
    """
return -x ** 2 - (y - 1) ** 2 + 1
Explanation: Basic tour of the Bayesian Optimization package
This is a constrained global optimization package built upon bayesian inference and gaussian process, that attempts to find the maximum value of an unknown function in as few iterations as possible. This technique is particularly suited for optimization of high cost functions, situations where the balance between exploration and exploitation is important.
Bayesian optimization works by constructing a posterior distribution of functions (gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.
As you iterate over and over, the algorithm balances its needs of exploration and exploitation taking into account what it knows about the target function. At each step a Gaussian Process is fitted to the known samples (points previously explored), and the posterior distribution, combined with a exploration strategy (such as UCB (Upper Confidence Bound), or EI (Expected Improvement)), are used to determine the next point that should be explored (see the gif below).
This process is designed to minimize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and common tools can be employed. Therefore Bayesian Optimization is most adequate for situations where sampling the function to be optimized is a very expensive endeavor. See the references for a proper discussion of this method.
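As a rough illustration of one acquisition rule mentioned above (a sketch, not this package's internal code): UCB scores a candidate point by the GP posterior mean plus a multiple of the posterior standard deviation.
def upper_confidence_bound(mean, std, kappa=2.576):
    # larger kappa -> more exploration, smaller kappa -> more exploitation
    return mean + kappa * std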
1. Specifying the function to be optimized
This is a function optimization package, therefore the first and most important ingredient is, of course, the function to be optimized.
DISCLAIMER: We know exactly how the output of the function below depends on its parameter. Obviously this is just an example, and you shouldn't expect to know it in a real scenario. However, it should be clear that you don't need to. All you need in order to use this package (and more generally, this technique) is a function f that takes a known set of parameters and outputs a real number.
End of explanation
from bayes_opt import BayesianOptimization
# Bounded region of parameter space
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
Explanation: 2. Getting Started
All we need to get started is to instantiate a BayesianOptimization object specifying a function to be optimized f, and its parameters with their corresponding bounds, pbounds. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work
End of explanation
optimizer.maximize(
init_points=2,
n_iter=3,
)
Explanation: The BayesianOptimization object will work out of the box without much tuning needed. The main method you should be aware of is maximize, which does exactly what you think it does.
There are many parameters you can pass to maximize, nonetheless, the most important ones are:
- n_iter: How many steps of bayesian optimization you want to perform. The more steps the more likely to find a good maximum you are.
- init_points: How many steps of random exploration you want to perform. Random exploration can help by diversifying the exploration space.
End of explanation
print(optimizer.max)
Explanation: The best combination of parameters and target value found can be accessed via the property bo.max.
End of explanation
for i, res in enumerate(optimizer.res):
print("Iteration {}: \n\t{}".format(i, res))
Explanation: While the list of all parameters probed and their corresponding target values is available via the property bo.res.
End of explanation
optimizer.set_bounds(new_bounds={"x": (-2, 3)})
optimizer.maximize(
init_points=0,
n_iter=5,
)
Explanation: 2.1 Changing bounds
During the optimization process you may realize the bounds chosen for some parameters are not adequate. For these situations you can invoke the method set_bounds to alter them. You can pass any combination of existing parameters and their associated new bounds.
End of explanation
optimizer.probe(
params={"x": 0.5, "y": 0.7},
lazy=True,
)
Explanation: 3. Guiding the optimization
It is often the case that we have an idea of regions of the parameter space where the maximum of our function might lie. For these situations the BayesianOptimization object allows the user to specify specific points to be probed. By default these will be explored lazily (lazy=True), meaning these points will be evaluated only the next time you call maximize. This probing process happens before the gaussian process takes over.
Parameters can be passed as dictionaries such as below:
End of explanation
print(optimizer.space.keys)
optimizer.probe(
params=[-0.3, 0.1],
lazy=True,
)
optimizer.maximize(init_points=0, n_iter=0)
Explanation: Or as an iterable. Beware that the order has to be alphabetical. You can use optimizer.space.keys for guidance
End of explanation
from bayes_opt.observer import JSONLogger
from bayes_opt.event import Events
Explanation: 4. Saving, loading and restarting
By default you can follow the progress of your optimization by setting verbose>0 when instanciating the BayesianOptimization object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native JSONLogger object to save to and load progress from files.
4.1 Saving progress
End of explanation
logger = JSONLogger(path="./logs.json")
optimizer.subscribe(Events.OPTMIZATION_STEP, logger)
optimizer.maximize(
init_points=2,
n_iter=3,
)
Explanation: The observer paradigm works by:
1. Instantiating an observer object.
2. Tying the observer object to a particular event fired by an optimizer.
The BayesianOptimization object fires a number of internal events during optimization; in particular, every time it probes the function and obtains a new parameter-target combination it will fire an Events.OPTIMIZATION_STEP event, which our logger will listen to.
Caveat: The logger will not look back at previously probed points.
End of explanation
from bayes_opt.util import load_logs
new_optimizer = BayesianOptimization(
f=black_box_function,
pbounds={"x": (-2, 2), "y": (-2, 2)},
verbose=2,
random_state=7,
)
print(len(new_optimizer.space))
load_logs(new_optimizer, logs=["./logs.json"]);
print("New optimizer is now aware of {} points.".format(len(new_optimizer.space)))
new_optimizer.maximize(
init_points=0,
n_iter=10,
)
Explanation: 4.2 Loading progress
Naturally, if you stored progress you will be able to load that onto a new instance of BayesianOptimization. The easiest way to do it is by invoking the load_logs function, from the util submodule.
End of explanation |
3,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graded =11/11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Graded =11/11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
numbers = [int(i) for i in numbers_str.split(",")]
max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
sorted(numbers)[-10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
sorted([number for number in numbers if number%3 == 0])
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
[sqrt(number) for number in numbers if number < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
[planet['name'] for planet in planets if planet['diameter']/2 > (planets[2]['diameter'] / 2 * 4)]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
sum([planet['mass'] for planet in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[planet['name'] for planet in planets if 'giant' in planet['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
[planet['name'] for planet in sorted(planets, key = lambda planet: planet['moons'])]
# Useful reads:
# http://stackoverflow.com/questions/8966538/syntax-behind-sortedkey-lambda
# https://docs.python.org/3.5/howto/sorting.html#sortinghowto
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[line for line in poem_lines if re.search(r"\b\w{4}\b\s\b\w{4}\b", line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
#character class that matches non-alphanumeric characters = \W
#in analogy to \s and \S
[line for line in poem_lines if re.search(r"\b\w{5}(?:$|\W$)", line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
[line[2:] for line in re.findall(r"\bI\b\s\b\w{1,}\b", all_lines)]
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
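The hint about grouping suggests a slightly different solution; a sketch of that approach, which should produce the same list without slicing:
# With one capture group, re.findall returns only the captured word after each standalone "I"
re.findall(r"\bI\b\s(\w+)", all_lines)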
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
#test cell to try code bits
[item.split("$") for item in entrees if re.search(r"(?:\d\d|\d).\d\d", item)]
#TA-Stephan: Careful - price should be int.
menu = []
for item in entrees:
dish = {}
dish['name'] = re.search(r"(.*)\s\$", item).group(1)
dish['price'] = float(re.search(r"\d{1,2}\.\d{2}", item).group())
dish['vegetarian'] = bool(re.search(r"\s-\sv", item))
menu.append(dish)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
3,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General pandas Concepts
Step1: Now we've covered numpy, the basis for pandas. We've covered some of the more advanced python concepts like list comprehensions and lambda functions. Let's jump back to our roadmap.
We've covered the general ecosystem. We've covered a lot of numpy, so now let's get our hands dirty with some real data and actually use pandas. I hope you've watched the numpy videos that we covered earlier; they may seem academic, but they're really going to provide a fantastic foundation for what we're going to learn now.
Now I'm going to breeze through a couple of subjects right now. Don't feel the need to take notes or even try this code yourself. You can if you like, but it's mainly to introduce you to the power of pandas, not for you to copy.
Pandas is made up of a couple of core types.
We've got the index. The index is a way of querying the data in a Series or DataFrame.
Step2: We've got the Series. The Series is like a 1-dimensional array in numpy. It has some helper functions and an index that allows for querying of the data in simple ways.
We can make a simple Series from a numpy array.
Step3: Now that we've created it, we can see it has an index, which we just talked about, as well as values. When we print these out, they should look similar - just like numpy arrays. Now here is where the Series gets powerful.
Step4: We can replace the index with our own index. In this example I'll use the lowercase ASCII characters.
Step5: Now we can query just like we would with an array. You can think of the Series like an extremely powerful array.
We can query either sections or specific values.
Step6: Now don't worry about the functions that I'm using. We're going to go over those in detail - I just wanted to introduce the concept.
We've got the DataFrame, which is like a matrix or a series of Series. It also has an index (or multiple indexes).
Step7: Let's go ahead and create one. We'll make it from the lowercase, uppercase, and a number range.
Step8: Just like a numpy array, we can transpose it.
Step9: But now that we have columns as well as an index, we can rename the columns to better describe and query the data.
Step10: We can even set up a date range to associate each letter with a date. Now obviously this isn't too helpful for the alphabet, but this allows you to do some amazing things once you are analyzing real data. | Python Code:
import sys
print(sys.version)
import numpy as np
print(np.__version__)
import pandas as pd
print(pd.__version__)
Explanation: General pandas Concepts
End of explanation
pd.Index
Explanation: Now we've covered numpy, the basis for pandas. We've covered some of the more advanced python concepts like list comprehensions and lambda functions. Let's jump back to our roadmap.
We've covered the general ecosystem. We've covered a lot of numpy, so now let's get our hands dirty with some real data and actually use pandas. I hope you've watched the numpy videos that we covered earlier; they may seem academic, but they're really going to provide a fantastic foundation for what we're going to learn now.
Now I'm going to breeze through a couple of subjects right now. Don't feel the need to take notes or even try this code yourself. You can if you like, but it's mainly to introduce you to the power of pandas, not for you to copy.
Pandas is made up of a couple of core types.
We've got the index. The index is a way of querying the data in a Series or DataFrame.
End of explanation
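As a quick illustration (not part of the original lecture), an Index can also be built directly; it is simply the set of labels that a Series or DataFrame uses for lookups.
# A standalone Index object holding three labels
idx = pd.Index(['a', 'b', 'c'])
idx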
pd.Series
series_ex = pd.Series(np.arange(26))
series_ex
Explanation: We've got the Series. The Series is like a 1-dimensional array in numpy. It has some helper functions and an index that allows for querying of the data in simple ways.
We can make a simple Series from a numpy array.
End of explanation
series_ex.index
Explanation: Now that we've created it, we can see it has an index, which we just talked about, as well as values. When we print these out, they should look similar - just like numpy arrays. Now here is where the Series gets powerful.
End of explanation
import string
lcase = string.ascii_lowercase
ucase = string.ascii_uppercase
print(lcase, ucase)
lcase = list(lcase)
ucase = list(ucase)
print(lcase)
print(ucase)
series_ex.index = lcase
series_ex.index
series_ex
Explanation: We can replace the index with our own index. In this example I'll use the lowercase ASCII characters.
End of explanation
series_ex.loc['d':'k']
series_ex.loc['f']
Explanation: Now we can query just like we would with an array. You can think of the Series like an extremely powerful array.
We can query either sections or specific values.
End of explanation
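A small aside, added for clarity: pandas separates label-based and position-based lookups into two accessors, and both work on this Series.
series_ex.loc['f']   # select by index label
series_ex.iloc[5]    # select by integer position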
pd.DataFrame
Explanation: Now don't worry about the functions that I'm using. We're going to go over those in detail - I just wanted to introduce the concept.
We've got the DataFrame, which is like a matrix or a series of Series. It also has an index (or multiple indexes).
End of explanation
letters = pd.DataFrame([lcase, ucase, list(range(26))])
letters
Explanation: Let's go ahead and create one. We'll make it from the lowercase, uppercase, and a number range.
End of explanation
letters = letters.transpose()
letters.head()
letters.columns
letters.index
Explanation: Just like a numpy array we can transpose it.
End of explanation
letters.columns = ['lowercase','uppercase','number']
letters.lowercase
letters['lowercase']
Explanation: But now that we have columns as well as an index, we can rename the columns to better describe and query the data.
End of explanation
letters.index = pd.date_range('9/1/2012',periods=26)
letters
letters['9-10-2012':'9-15-2012']
Explanation: We can even set up a date range to associate each letter with a date. Now obviously this isn't too helpful for the alphabet, but this allows you to do some amazing things once you are analyzing real data.
End of explanation |
3,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating and Transforming NumPy Arrays
NumPy data types
Every element of a NumPy ndarray must share the same data type, and the available types are much more finely grained than those of plain Python.
A NumPy data type is specified with the dtype argument. The value given for dtype is a string that starts with one of the type-code prefixes in the table below, optionally followed by a number indicating the size in bits or bytes.
| dtype prefix | Description | Example |
|-|-|-|
| t | bit field | t4 (4 bits) |
| b | boolean | b (True or False) |
| i | integer | i8 (64 bits) |
| u | unsigned integer | u8 (64 bits) |
| f | floating point | f8 (64 bits) |
| c | complex floating point | c16 (128 bits) |
| O | object | O (pointer to an object) |
| S, a | string | S24 (24 characters) |
| U | Unicode string | U24 (24 Unicode characters) |
| V | other | V12 (12-byte block of raw data) |
The dtype attribute of an ndarray object tells you its data type.
Step1: When working with floating-point values, you can also use np.inf to represent infinity and np.nan to represent an undefined number (not a number).
Step2: The irrational number e is also known as Euler's number. It is approximately 2.718281, and is the base of the natural logarithm.
Step3: Array creation
Step4: Earlier we created an ndarray by converting an ordinary Python list with the array command. Usually, though, you create an ndarray directly, without such a starting object, using commands like the following:
zeros, ones
zeros_like, ones_like
empty
arange
linspace, logspace
rand, randn
To create an array of a given size with every value equal to 0, use the zeros command. If the dtype argument is omitted, the default floating-point type is used.
Step5: If you specify the dtype argument explicitly, the array is created with elements of that type.
Step6: String arrays are also possible; where practical, all elements should have the same string length. If you assign a string longer than the declared size, it may be truncated.
Step7: To create an array initialized with 1 instead of 0, use the ones command.
Step8: If you want an array with the same shape as an existing array or list, without spelling the size out as a tuple, use the ones_like or zeros_like commands.
Step9: As arrays grow, initializing them also takes time. To save that time you can use the empty command, which only allocates the array without initializing it. There is no telling what values an array created with empty will contain.
Step10: The arange command is the NumPy version of the range command: it generates the sequence of numbers in the given range.
Step11: The linspace and logspace commands divide a linear or logarithmic interval into the specified number of points.
Step12: To generate random numbers, use the rand or randn commands from the random subpackage. rand draws numbers from a uniform distribution, while randn draws from a Gaussian (standard normal) distribution. To fix the seed used for generation, use the seed command.
Step13: Changing the shape of an array
To change only the shape of an existing array while keeping its data, use the reshape command or method. For example, a 1-dimensional array with 12 elements can be turned into a 3x4 2-dimensional array.
Step14: Because the total number of elements is fixed, one entry of the shape tuple passed to reshape can be replaced by -1. When -1 is given, that dimension is computed automatically from the others.
Step15: To flatten a multi-dimensional array into one dimension unconditionally, use the flatten command or method.
Step16: A 1-dimensional array of length 5 and a 2-dimensional array with shape (5, 1) hold the same data but are, of course, different objects.
Step17: To increase the number of dimensions of an array like this without changing its data, you can also use the newaxis command.
Step18: Joining arrays
To concatenate two or more arrays with a matching number of rows or columns into a larger array, use the following commands:
hstack
vstack
dstack
stack
r_
tile
The hstack command joins two or more arrays with the same number of rows side by side, producing an array with more columns. The arrays to join must be passed inside a single list.
Step19: The vstack command stacks two or more arrays with the same number of columns on top of one another, producing an array with more rows. Again, the arrays to join must be wrapped in a single list.
Step20: The dstack command combines arrays along a third axis, that is, in the depth direction rather than by rows or columns.
Step21: The stack command joins arrays along a new dimension (axis); naturally, the arrays being joined must all have exactly the same shape.
The axis argument (default 0) selects where the new axis is placed, i.e. the direction along which the arrays are combined.
Step22: The r_ method is similar to the hstack command, except that, despite being a method, it is used with square brackets (like indexing) rather than parentheses.
Step23: The tile command repeats the same array a given number of times and joins the copies.
Step24: Creating a grid
To draw the graph of a function of two variables, or to build a table of its values, you need to generate many coordinate pairs at once and evaluate the function at each of them.
For example, for a function of two variables x and y, if you want to see how it varies over the rectangular region where x runs from 0 to 2 and y runs from 0 to 4, you have to evaluate the function at (x, y) points such as
$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \cdots (2,4) $$
NumPy's meshgrid command automates this process. meshgrid takes two vectors as arguments, the points along the horizontal axis and the points along the vertical axis of the rectangular region, and outputs the combinations (x, y) that make up the region, split into two matrices, one holding only the x values and one holding only the y values. | Python Code:
x = np.array([1, 2, 3])
x.dtype
x = np.array([1, 2, 3])
x.dtype # Is this a difference between Python 2.7 and 3?
Explanation: Creating and Transforming NumPy Arrays
NumPy data types
Every element of a NumPy ndarray must share the same data type, and the available types are much more finely grained than those of plain Python.
A NumPy data type is specified with the dtype argument. The value given for dtype is a string that starts with one of the type-code prefixes in the table below, optionally followed by a number indicating the size in bits or bytes.
| dtype prefix | Description | Example |
|-|-|-|
| t | bit field | t4 (4 bits) |
| b | boolean | b (True or False) |
| i | integer | i8 (64 bits) |
| u | unsigned integer | u8 (64 bits) |
| f | floating point | f8 (64 bits) |
| c | complex floating point | c16 (128 bits) |
| O | object | O (pointer to an object) |
| S, a | string | S24 (24 characters) |
| U | Unicode string | U24 (24 Unicode characters) |
| V | other | V12 (12-byte block of raw data) |
The dtype attribute of an ndarray object tells you its data type.
End of explanation
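A brief illustration of the prefix-plus-size convention from the table above (the particular values are arbitrary):
# 'f8' is a 64-bit float, 'i4' a 32-bit integer
np.array([1, 2, 3], dtype='f8').dtype
np.array([1, 2, 3], dtype='i4').dtype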
np.exp(-np.inf)
-np.inf
Explanation: When working with floating-point values, you can also use np.inf to represent infinity and np.nan to represent an undefined number (not a number).
End of explanation
np.exp(1)
np.array([1, 0]) / np.array([0, 0])
np.array([1, 0]) / np.array([0, 0])
Explanation: The irrational number e is also known as Euler's number. It is approximately 2.718281, and is the base of the natural logarithm.
End of explanation
x = np.array([1, 2, 3])
x
Explanation: Array creation
End of explanation
a = np.zeros(5)
a
Explanation: Earlier we created an ndarray by converting an ordinary Python list with the array command. Usually, though, you create an ndarray directly, without such a starting object, using commands like the following:
zeros, ones
zeros_like, ones_like
empty
arange
linspace, logspace
rand, randn
To create an array of a given size with every value equal to 0, use the zeros command. If the dtype argument is omitted, the default floating-point type is used.
End of explanation
b = np.zeros((5, 2), dtype="f8")
b
Explanation: If you specify the dtype argument explicitly, the array is created with elements of that type.
End of explanation
c = np.zeros(5, dtype='S4')
c
c = np.zeros(5, dtype="S4")
c[0] = 'abcd'
c[1] = 'ABCDE'
c
Explanation: String arrays are also possible; where practical, all elements should have the same string length. If you assign a string longer than the declared size, it may be truncated.
End of explanation
d = np.ones((2,3,2,4), dtype='i8')
d
Explanation: To create an array initialized with 1 instead of 0, use the ones command.
End of explanation
e = range(10)
print(e)
f=np.ones_like(e, dtype="f")
f
Explanation: If you want an array with the same shape as an existing array or list, without spelling the size out as a tuple, use the ones_like or zeros_like commands.
End of explanation
g = np.empty((3,6))
g
Explanation: As arrays grow, initializing them also takes time. To save that time you can use the empty command, which only allocates the array without initializing it. There is no telling what values an array created with empty will contain.
End of explanation
np.arange(10) # 0 . . . n-1
np.arange(3, 21, 2) # start, end (exclusive), step
Explanation: The arange command is the NumPy version of the range command: it generates the sequence of numbers in the given range.
End of explanation
np.linspace(0, 100, 5) # start, end, num-points
np.logspace(0, 4, 4, endpoint=False)
Explanation: The linspace and logspace commands divide a linear or logarithmic interval into the specified number of points.
End of explanation
np.random.seed(0)
np.random.rand(4)
np.random.randn(3,5)
Explanation: To generate random numbers, use the rand or randn commands from the random subpackage. rand draws numbers from a uniform distribution, while randn draws from a Gaussian (standard normal) distribution. To fix the seed used for generation, use the seed command.
End of explanation
a = np.arange(12)
a
b = a.reshape(3, 4)
b
Explanation: Changing the shape of an array
To change only the shape of an existing array while keeping its data, use the reshape command or method. For example, a 1-dimensional array with 12 elements can be turned into a 3x4 2-dimensional array.
End of explanation
a.reshape(2,2,-1)
a.reshape(2,-1,2)
Explanation: Because the total number of elements is fixed, one entry of the shape tuple passed to reshape can be replaced by -1. When -1 is given, that dimension is computed automatically from the others.
End of explanation
a.flatten()
Explanation: To flatten a multi-dimensional array into one dimension unconditionally, use the flatten command or method.
End of explanation
x = np.arange(5)
x
y = x.reshape(5, 1)
y
Explanation: A 1-dimensional array of length 5 and a 2-dimensional array with shape (5, 1) hold the same data but are, of course, different objects.
End of explanation
z = x[:, np.newaxis]
z
Explanation: To increase the number of dimensions of an array like this without changing its data, you can also use the newaxis command.
End of explanation
a1 = np.ones((2, 3))
a1
a2 = np.zeros((2, 2))
a2
np.hstack([a1, a2])
Explanation: Joining arrays
To concatenate two or more arrays with a matching number of rows or columns into a larger array, use the following commands:
hstack
vstack
dstack
stack
r_
tile
The hstack command joins two or more arrays with the same number of rows side by side, producing an array with more columns. The arrays to join must be passed inside a single list.
End of explanation
b1 = np.ones((2, 3))
b1
b2 = np.zeros((3, 3))
b2
np.vstack([b1, b2])
Explanation: The vstack command stacks two or more arrays with the same number of columns on top of one another, producing an array with more rows. Again, the arrays to join must be wrapped in a single list.
End of explanation
c1 = np.ones((2,3))
c1
c2 = np.zeros((2,3))
c2
np.dstack([c1, c2])
Explanation: The dstack command combines arrays along a third axis, that is, in the depth direction rather than by rows or columns.
End of explanation
np.stack([c1, c2])
np.stack([c1, c2], axis=0)
np.stack([c1, c2], axis=1)
np.stack([c1, c2], axis=2)
Explanation: The stack command joins arrays along a new dimension (axis); naturally, the arrays being joined must all have exactly the same shape.
The axis argument (default 0) selects where the new axis is placed, i.e. the direction along which the arrays are combined.
End of explanation
np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]
Explanation: The r_ method is similar to the hstack command, except that, despite being a method, it is used with square brackets (like indexing) rather than parentheses.
End of explanation
a = np.array([0, 1, 2])
np.tile(a, 2)
np.tile(a, [2,3])
b = np.array([2,3])
np.tile(a,b)
np.tile(a, (3, 2))
Explanation: The tile command repeats the same array a given number of times and joins the copies.
End of explanation
x = np.arange(3)
x
y = np.arange(5)
y
X, Y = np.meshgrid(x, y)
X
Y
[zip(x, y)]
[zip(X, Y)]
for x, y in zip(X, Y):
print (x, y)
for x, y in zip(X,Y):
print (x, y)
[zip(x, y) for x, y in zip(X, Y)]
X
Y
plt.scatter(X, Y, linewidths=10);
Explanation: Creating a grid
To draw the graph of a function of two variables, or to build a table of its values, you need to generate many coordinate pairs at once and evaluate the function at each of them.
For example, for a function of two variables x and y, if you want to see how it varies over the rectangular region where x runs from 0 to 2 and y runs from 0 to 4, you have to evaluate the function at (x, y) points such as
$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \cdots (2,4) $$
NumPy's meshgrid command automates this process. meshgrid takes two vectors as arguments, the points along the horizontal axis and the points along the vertical axis of the rectangular region, and outputs the combinations (x, y) that make up the region, split into two matrices, one holding only the x values and one holding only the y values.
End of explanation |
3,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use different extra sources such as CCSN neutrino-driven winds
Prepared by Christian Ritter
Step1: AGB and massive star tables used
Step2: Setup
Step3: Default setup
Step4: Setup with different extra sources
Here we use yields from two (extra source) yield tables, which we apply in the mass ranges from 8 Msun to 12 Msun and from 12 Msun to 30 Msun, respectively. We apply a factor of 0.5 to the extra yields of the first yield table and 1.0 to the second yield table.
Step5: SYGMA
Step6: OMEGA | Python Code:
%matplotlib nbagg
import matplotlib.pyplot as plt
import sys
import matplotlib
import numpy as np
from NuPyCEE import sygma as s
from NuPyCEE import omega as o
from NuPyCEE import read_yields as ry
Explanation: How to use different extra sources such as CCSN neutrino-driven winds
Prepared by Christian Ritter
End of explanation
table='yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
Explanation: AGB and massive star tables used
End of explanation
# OMEGA parameters for MW
mass_loading = 0.0
nb_1a_per_m = 3.0e-3
sfe=0.04
SF_law=True
DM_evolution=False
imf_yields_range=[1.0,30.0]
special_timesteps=30
Z_trans=0.0
iniZ=0.0001
Explanation: Setup
End of explanation
o0=o.omega(iniZ=iniZ,galaxy='milky_way',Z_trans=Z_trans, table=table,sfe=sfe, DM_evolution=DM_evolution,\
mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,
imf_yields_range=imf_yields_range,
SF_law=SF_law)
Explanation: Default setup
End of explanation
extra_source_table=['yield_tables/r_process_arnould_2007.txt',
'yield_tables/r_process_arnould_2007.txt']
#Apply yields only in specific mass ranges;
extra_source_mass_range = [[8,12],[12,30]]
#percentage of stars to which the yields are added. First entry for first yield table etc.
f_extra_source = [0.5,1.]
#metallicity to exclude (in this case none)
extra_source_exclude_Z = [[], []]
#you can look at the yields directly with the y1 and y2 parameter below.
y1=ry.read_yields_Z("./NuPyCEE/"+extra_source_table[0])
y2=ry.read_yields_Z("./NuPyCEE/"+extra_source_table[1])
Explanation: Setup with different extra sources
Here we use yields from two (extra source) yield tables, which we apply in the mass ranges from 8 Msun to 12 Msun and from 12 Msun to 30 Msun, respectively. We apply a factor of 0.5 to the extra yields of the first yield table and 1.0 to the second yield table.
End of explanation
s0 = s.sygma(iniZ=0.0001,extra_source_on=False) #default False
s0p1 = s.sygma(iniZ=0.0001,extra_source_on=True,
extra_source_table=extra_source_table,extra_source_mass_range=extra_source_mass_range,
f_extra_source=f_extra_source, extra_source_exclude_Z=extra_source_exclude_Z)
Explanation: SYGMA
End of explanation
o0p1=o.omega(iniZ=iniZ,galaxy='milky_way',Z_trans=Z_trans, table=table,sfe=sfe, DM_evolution=DM_evolution,\
mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,
imf_yields_range=imf_yields_range,SF_law=SF_law,extra_source_on=True,
extra_source_table=extra_source_table,extra_source_mass_range=extra_source_mass_range,
f_extra_source=f_extra_source, extra_source_exclude_Z=extra_source_exclude_Z)
Explanation: OMEGA
End of explanation |
3,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There are many specialized packages for dealing with data analysis and statistical programming. One very important code that you will see in MATH1024, Introduction to Probability and Statistics, is R. A Python package for performing similar analysis of large data sets is pandas. However, simple statistical tasks on simple data sets can be tackled using numpy and scipy.
Getting data in
A data file containing the monthly rainfall for Southampton, taken from the Met Office data, can be downloaded from this link. We will save that file locally, and then look at the data.
The first few lines of the file are
Step1: We can use numpy to load this data into a variable, where we can manipulate it. This is not ideal
Step2: We see that the first column - the year - has been converted to a floating point number, which is not helpful. However, we can now split the data using standard numpy operations
Step3: We can now plot, for example, the rainfall in January for all years
Step4: Basic statistical functions
numpy contains a number of basic statistical functions, such as min, max and mean. These will act on entire arrays to give the "all time" minimum, maximum, and average rainfall
Step5: Of more interest would be either
the mean (min/max) rainfall in a given month for all years, or
the mean (min/max) rainfall in a given year for all months.
So the mean rainfall in the first year, 1855, would be
Step6: Whilst the mean rainfall in January, averaging over all years, would be
Step7: If we wanted to plot the mean rainfall per year, across all years, this would be tedious - there are 145 years of data in the file. Even computing the mean rainfall in each month, across all years, would be awkward with 12 months. We could write a loop. However, numpy allows us to apply a function along an axis of the array, which does this in one operation
Step8: The axis argument gives the direction along which the operation is applied - that is, the axis that gets collapsed. For this data set, each row contains a year and each column a month. To find the mean rainfall in a given month we average over the years, collapsing the rows (axis 0) and keeping the columns. To find the mean rainfall in a given year we average over the months, collapsing the columns (axis 1) and keeping the rows.
We can now plot how the mean varies with each year.
Step9: We can also compute the standard deviation
Step10: We can then add confidence intervals to the plot
Step11: This isn't particularly pretty or clear
Step12: Categorical data
Looking at the means by month, it would be better to give them names rather than numbers. We will also summarize the available information using a boxplot
Step13: Much better ways of working with categorical data are available through more specialized packages.
Regression
We can go beyond the basic statistical functions in numpy and look at other standard tasks. For example, we can look for simple trends in our data with a linear regression. There is a function to compute the linear regression in scipy we can use. We will use this to see if there is a trend in the mean yearly rainfall
Step14: It looks like there's a good chance that the slight decrease in mean rainfall with time is a real effect.
Random numbers
Random processes and random variables may be at the heart of probability and statistics, but computers cannot generate anything "truly" random. Instead they can generate pseudo-random numbers using random number generators (RNGs). Constructing a random number generator is a hard problem and wherever possible you should use a well-tested RNG rather than attempting to write your own.
Python has many ways of generating random numbers. Perhaps the most useful are given by the numpy.random module, which can generate a numpy array filled with random numbers from various distributions. For example
Step15: More distributions
Whilst the standard distributions are given by the convenience functions above, the full documentation of numpy.random shows many other distributions available. For example, we can draw $10,000$ samples from the Beta distribution using the parameters $\alpha = 1/2 = \beta$ as
Step16: We can do this $5,000$ times and compute the mean of each set of samples | Python Code:
!head southampton_precip.txt
Explanation: There are many specialized packages for dealing with data analysis and statistical programming. One very important code that you will see in MATH1024, Introduction to Probability and Statistics, is R. A Python package for performing similar analysis of large data sets is pandas. However, simple statistical tasks on simple data sets can be tackled using numpy and scipy.
Getting data in
A data file containing the monthly rainfall for Southampton, taken from the Met Office data can be downloaded from this link. We will save that file locally, and then look at the data.
The first few lines of the file are:
End of explanation
import numpy
data = numpy.loadtxt('southampton_precip.txt')
data
Explanation: We can use numpy to load this data into a variable, where we can manipulate it. This is not ideal: it will lose the information in the header, and that the first column corresponds to years. However, it is simple to use.
End of explanation
years = data[:, 0]
rainfall = data[:, 1:]
Explanation: We see that the first column - the year - has been converted to a floating point number, which is not helpful. However, we can now split the data using standard numpy operations:
End of explanation
%matplotlib inline
from matplotlib import pyplot
pyplot.plot(years, rainfall[:,0])
pyplot.xlabel('Year')
pyplot.ylabel('Rainfall in January');
Explanation: We can now plot, for example, the rainfall in January for all years:
End of explanation
print("Minimum rainfall: {}".format(rainfall.min()))
print("Maximum rainfall: {}".format(rainfall.max()))
print("Mean rainfall: {}".format(rainfall.mean()))
Explanation: Basic statistical functions
numpy contains a number of basic statistical functions, such as min, max and mean. These will act on entire arrays to give the "all time" minimum, maximum, and average rainfall:
End of explanation
print ("Mean rainfall in 1855: {}".format(rainfall[0, :].mean()))
Explanation: Of more interest would be either
the mean (min/max) rainfall in a given month for all years, or
the mean (min/max) rainfall in a given year for all months.
So the mean rainfall in the first year, 1855, would be
End of explanation
print ("Mean rainfall in January: {}".format(rainfall[:, 0].mean()))
Explanation: Whilst the mean rainfall in January, averaging over all years, would be
End of explanation
mean_rainfall_in_month = rainfall.mean(axis=0)
mean_rainfall_per_year = rainfall.mean(axis=1)
Explanation: If we wanted to plot the mean rainfall per year, across all years, this would be tedious - there are 145 years of data in the file. Even computing the mean rainfall in each month, across all years, would be awkward with 12 months. We could write a loop. However, numpy allows us to apply a function along an axis of the array, which does this in one operation:
End of explanation
pyplot.plot(years, mean_rainfall_per_year)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: The axis argument gives the direction along which the operation is applied - that is, the axis that gets collapsed. For this data set, each row contains a year and each column a month. To find the mean rainfall in a given month we average over the years, collapsing the rows (axis 0) and keeping the columns. To find the mean rainfall in a given year we average over the months, collapsing the columns (axis 1) and keeping the rows.
We can now plot how the mean varies with each year.
End of explanation
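A quick shape check, added for clarity, makes the axis convention concrete:
# rainfall has one row per year and one column per month
rainfall.shape                 # (number of years, 12)
rainfall.mean(axis=0).shape    # (12,) - one mean per month
rainfall.mean(axis=1).shape    # (number of years,) - one mean per year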
std_rainfall_per_year = rainfall.std(axis=1)
Explanation: We can also compute the standard deviation:
End of explanation
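One detail worth noting as an aside: numpy's std computes the population standard deviation by default; pass ddof=1 if the sample standard deviation is wanted instead.
# ddof=1 divides by N-1 rather than N
sample_std_per_year = rainfall.std(axis=1, ddof=1)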
pyplot.errorbar(years, mean_rainfall_per_year, yerr = std_rainfall_per_year)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: We can then add confidence intervals to the plot:
End of explanation
pyplot.plot(years, mean_rainfall_per_year)
pyplot.fill_between(years, mean_rainfall_per_year - std_rainfall_per_year,
mean_rainfall_per_year + std_rainfall_per_year,
alpha=0.25, color=None)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: This isn't particularly pretty or clear: a nicer example would use better packages, but a quick fix uses an alternative matplotlib approach:
End of explanation
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
pyplot.boxplot(rainfall, labels=months)
pyplot.xlabel('Month')
pyplot.ylabel('Mean rainfall');
Explanation: Categorical data
Looking at the means by month, it would be better to give them names rather than numbers. We will also summarize the available information using a boxplot:
End of explanation
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(years, mean_rainfall_per_year)
pyplot.plot(years, mean_rainfall_per_year, 'b-', label='Data')
pyplot.plot(years, intercept + slope*years, 'k-', label='Linear Regression')
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall')
pyplot.legend();
print("The change in rainfall (the slope) is {}.".format(slope))
print("However, the error estimate is {}.".format(std_err))
print("The correlation coefficient between rainfall and year"
" is {}.".format(r_value))
print("The probability that the slope is zero is {}.".format(p_value))
Explanation: Much better ways of working with categorical data are available through more specialized packages.
Regression
We can go beyond the basic statistical functions in numpy and look at other standard tasks. For example, we can look for simple trends in our data with a linear regression. There is a function to compute the linear regression in scipy we can use. We will use this to see if there is a trend in the mean yearly rainfall:
End of explanation
from numpy import random
uniform = random.rand(10000)
normal = random.randn(10000)
fig = pyplot.figure()
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
ax1.hist(uniform, 20)
ax1.set_title('Uniform data')
ax2.hist(normal, 20)
ax2.set_title('Normal data')
fig.tight_layout()
fig.show();
Explanation: It looks like there's a good chance that the slight decrease in mean rainfall with time is a real effect.
Random numbers
Random processes and random variables may be at the heart of probability and statistics, but computers cannot generate anything "truly" random. Instead they can generate pseudo-random numbers using random number generators (RNGs). Constructing a random number generator is a hard problem and wherever possible you should use a well-tested RNG rather than attempting to write your own.
Python has many ways of generating random numbers. Perhaps the most useful are given by the numpy.random module, which can generate a numpy array filled with random numbers from various distributions. For example:
End of explanation
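Because these are pseudo-random draws, fixing a seed makes them reproducible; a small illustrative addition (the seed value is arbitrary):
# With the same seed, the same "random" numbers are generated every time
random.seed(42)
random.rand(5)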
beta_samples = random.beta(0.5, 0.5, 10000)
pyplot.hist(beta_samples, 20)
pyplot.title('Beta data')
pyplot.show();
Explanation: More distributions
Whilst the standard distributions are given by the convenience functions above, the full documentation of numpy.random shows many other distributions available. For example, we can draw $10,000$ samples from the Beta distribution using the parameters $\alpha = 1/2 = \beta$ as
End of explanation
n_trials = 5000
beta_means = numpy.zeros((n_trials,))
for trial in range(n_trials):
beta_samples = random.beta(0.5, 0.5, 10000)
beta_means[trial] = numpy.mean(beta_samples)
pyplot.hist(beta_means, 20)
pyplot.title('Mean of Beta trials')
pyplot.show();
Explanation: We can do this $5,000$ times and compute the mean of each set of samples:
End of explanation |
3,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overfitting
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.
Part of the Quantopian Lecture Series
Step1: When working with real data, there is unlikely to ever be a situation where a ninth-degree polynomial is appropriate
Step2: However, when we use the same estimated parameters to model a different time period, we find that the single-variable model fits worse, while the multiple-variable model is entirely useless. It seems that the relationships we found are not consistent and are particular to the original sample period.
Step3: If we wanted, we could scan our universe for variables that were correlated with the dependent variable, and construct an extremely overfitted model. However, in most cases the correlation will be spurious, and the relationship will not continue into the future.
Example
Step4: If we pick the length based on which seems best - say, on how well our model or algorithm performs - we are overfitting. Below we have a simple trading algorithm which bets on the stock price reverting to the rolling mean (for more details, check out the mean reversion notebook). We use the performance of this algorithm to score window lengths and find the best one. However, when we consider a different timeframe, this window length is far from optimal. This is because our original choice was overfitted to the sample data.
Step5: Clearly fitting to our sample data doesn't always give good results in the future. Just for fun, let's plot the length scores computed from the two different timeframes | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
from statsmodels import regression
from scipy import poly1d
x = np.arange(10)
y = 2*np.random.randn(10) + x**2
xs = np.linspace(-0.25, 9.25, 200)
lin = np.polyfit(x, y, 1)
quad = np.polyfit(x, y, 2)
many = np.polyfit(x, y, 9)
plt.scatter(x, y)
plt.plot(xs, poly1d(lin)(xs))
plt.plot(xs, poly1d(quad)(xs))
plt.plot(xs, poly1d(many)(xs))
plt.ylabel('Y')
plt.xlabel('X')
plt.legend(['Underfit', 'Good fit', 'Overfit']);
Explanation: Overfitting
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
What is overfitting?
When constructing a model, we tune both the parameters and the model by fitting to sample data. We then use the model and parameters to predict data we have not yet observed. We say a model is overfit when it is overly sensitive to noise and idiosyncrasies in the sample data, and therefore does not reflect the underlying data-generating process.
To understand why this happens, one has to consider the amount of noise present in any dataset. One can consider a set of data as $D_{T}$, the true underlying data that came from whatever process we are trying to model, and $\epsilon$, some random noise. Because what we see is $D = D_{T} + \epsilon$, we might fit our model to very perfectly predict for the given $\epsilon$, but not for $D_{T}$.
This is problematic because we only care about fitting to the sample insofar as that gives an accurate fit to future data. The two broad causes of overfitting are:
* small sample size, so that noise and trend are not distinguishable
* choosing an overly complex model, so that it ends up contorting to fit the noise in the sample
Verbal Example: Too Many Rules (Complexity)
Let's say you have the following dataset:
| TV Channel | Room Lighting Intensity | Enjoyment |
|------------|-------------------------|-----------|
| 1 | 2 | 1 |
| 2 | 3 | 2 |
| 3 | 1 | 3 |
You are trying to predict enjoyment, so you create the following rules:
If TV Channel is 1 and Lighting Intensity is 2, then Enjoyment will be 1.
If TV Channel is 2 and Lighting Intensity is 3, then Enjoyment will be 2.
If TV Channel is 3 and Lighting Intensity is 1, then Enjoyment will be 3.
In all other cases predict an average enjoyment of 2.
This is a well-defined model for future data; however, in this case let's say your enjoyment is purely dependent on the TV channel and not on the lighting. Because we have a rule for each row in our dataset, our model is perfectly predictive in our historical data, but would perform poorly in real trials because we are overfitting to random noise in the lighting intensity data.
Generalizing this to stocks, if your model starts developing many specific rules based on specific past events, it is almost definitely overfitting. This is why black-box machine learning (neural networks, etc.) is so dangerous when not done correctly.
Example: Curve fitting
Overfitting is most easily seen when we look at polynomial regression. Below we construct a dataset which noisily follows a quadratic. The linear model is underfit: simple linear models aren't suitable for all situations, especially when we have reason to believe that the data is nonlinear. The quadratic curve has some error but fits the data well.
When we fit a ninth-degree polynomial to the data, the error is zero - a ninth-degree polynomial can be constructed to go through any 10 points - but, looking at the tails of the curve, we know that we can't expect it to accurately predict other samples from the same distribution. It fits the data perfectly, but that is because it also fits the noise perfectly, and the noise is not what we want to model. In this case we have selected a model that is too complex.
End of explanation
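As a small, hypothetical illustration of the 'too many rules' example above (the numbers mirror the table; this code is not from the original lecture), a lookup model that memorizes each training row is perfect in sample but falls back to guessing on anything new:
# One rule per observed (channel, lighting) pair, exactly as described above
train = [(1, 2, 1), (2, 3, 2), (3, 1, 3)]   # (TV channel, lighting, enjoyment)
rules = {(c, l): e for (c, l, e) in train}
def rule_model(channel, lighting):
    # Unseen combinations fall back to the average enjoyment of 2
    return rules.get((channel, lighting), 2)
[rule_model(c, l) == e for (c, l, e) in train]   # perfect in sample: [True, True, True]
rule_model(1, 3)   # if enjoyment really depends only on the channel, the true value is 1, but the model says 2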
# Load one year's worth of pricing data for five different assets
start = '2013-01-01'
end = '2014-01-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)
y = get_pricing('PG', fields='price', start_date=start, end_date=end)
# Build a linear model using only x1 to explain y
slr = regression.linear_model.OLS(y, sm.add_constant(x1)).fit()
slr_prediction = slr.params[0] + slr.params[1]*x1
# Run multiple linear regression using x1, x2, x3, x4 to explain y
mlr = regression.linear_model.OLS(y, sm.add_constant(np.column_stack((x1,x2,x3,x4)))).fit()
mlr_prediction = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Compute adjusted R-squared for the two different models
print 'SLR R-squared:', slr.rsquared_adj
print 'SLR p-value:', slr.f_pvalue
print 'MLR R-squared:', mlr.rsquared_adj
print 'MLR p-value:', mlr.f_pvalue
# Plot y along with the two different predictions
y.plot()
slr_prediction.plot()
mlr_prediction.plot()
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['PG', 'SLR', 'MLR']);
Explanation: When working with real data, there is unlikely to ever be a situation where a ninth-degree polynomial is appropriate: our choice of function should reflect a belief about the underlying process, and real-world processes generally do not follow high-degree polynomial curves. This example is contrived, but it can be tempting to use a quadratic or cubic model just to decrease sample error.
Note: Model/Parameter Parsimony
Just as the most elegant physics models describe a tremendous amount of our world through a few equations, a good trading model should explain most of the data through a few rules. Any time you start to have a number of rules even close to the number of points in your data set, you can be sure you are overfitting. Since parameters act as rules (they constrain a model in an equivalent way), the same holds for parameters. Fewer parameters are better, and it is better to explain 60% of the data with 2-3 parameters than 90% with 10.
Beware of the perfect fit
Because there is almost always noise present in real data, a perfect fit is almost always indicative of overfitting. It is almost impossible to know the percentage noise/signal in a given data set while you are developing the model, but use your common sense. Are the predictions surprisingly good? Then you're probably overfitting.
Example: Regression parameters
How do we know which variables to include in a model? If we're afraid of omitting something important, we might try different ones and include all the variables we can find that improve the fit. Below we regress one asset that is in the same sector as the asset whose price we're trying to predict, and three other unrelated ones. In our initial timeframe, we are able to fit the model more closely to the data when using multiple variables than when using just one.
End of explanation
# Load the next year of pricing data
start = '2014-01-01'
end = '2015-01-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)
y = get_pricing('PG', fields='price', start_date=start, end_date=end)
# Extend our model from before to the new time period
slr_prediction2 = slr.params[0] + slr.params[1]*x1
mlr_prediction2 = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Manually compute adjusted R-squared over the new time period
# Adjustment 1 is for the SLR model
p = 1
N = len(y)
adj1 = float(N - 1)/(N - p - 1)
# Now for MLR
p = 4
N = len(y)
adj2 = float(N - 1)/(N - p - 1)
SST = sum((y - np.mean(y))**2)
SSRs = sum((slr_prediction2 - y)**2)
print 'SLR R-squared:', 1 - adj1*SSRs/SST
SSRm = sum((mlr_prediction2 - y)**2)
print 'MLR R-squared:', 1 - adj2*SSRm/SST
# Plot y along with the two different predictions
y.plot()
slr_prediction2.plot()
mlr_prediction2.plot()
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['PG', 'SLR', 'MLR']);
Explanation: However, when we use the same estimated parameters to model a different time period, we find that the single-variable model fits worse, while the multiple-variable model is entirely useless. It seems that the relationships we found are not consistent and are particular to the original sample period.
End of explanation
# Load the pricing data for a stock
start = '2011-01-01'
end = '2013-01-01'
pricing = get_pricing('MCD', fields='price', start_date=start, end_date=end)
# Compute rolling averages for various window lengths
mu_30d = pricing.rolling(window=30).mean()
mu_60d = pricing.rolling(window=60).mean()
mu_100d = pricing.rolling(window=100).mean()
# Plot asset pricing data with rolling means from the 100th day, when all the means become available
plt.plot(pricing[100:], label='Asset')
plt.plot(mu_30d[100:], label='30d MA')
plt.plot(mu_60d[100:], label='60d MA')
plt.plot(mu_100d[100:], label='100d MA')
plt.xlabel('Day')
plt.ylabel('Price')
plt.legend();
Explanation: If we wanted, we could scan our universe for variables that were correlated with the dependent variable, and construct an extremely overfitted model. However, in most cases the correlation will be spurious, and the relationship will not continue into the future.
Example: Rolling windows
One of the challenges in building a model that uses rolling parameter estimates, such as rolling mean or rolling beta, is choosing a window length. A longer window will take into account long-term trends and be less volatile, but it will also lag more when taking into account new observations. The choice of window length strongly affects the rolling parameter estimate and can change how we see and treat the data. Below we calculate the rolling averages of a stock price for different window lengths:
End of explanation
# Trade using a simple mean-reversion strategy
def trade(stock, length):
# If window length is 0, algorithm doesn't make sense, so exit
if length == 0:
return 0
# Compute rolling mean and rolling standard deviation
rolling_window = stock.rolling(window=length)
mu = rolling_window.mean()
std = rolling_window.std()
# Compute the z-scores for each day using the historical data up to that day
zscores = (stock - mu)/std
# Simulate trading
# Start with no money and no positions
money = 0
count = 0
for i in range(len(stock)):
# Sell short if the z-score is > 1
if zscores[i] > 1:
money += stock[i]
count -= 1
# Buy long if the z-score is < -1
elif zscores[i] < -1:
money -= stock[i]
count += 1
# Clear positions if the z-score is between -0.5 and 0.5
elif abs(zscores[i]) < 0.5:
money += count*stock[i]
count = 0
return money
# Find the window length 0-254 that gives the highest returns using this strategy
length_scores = [trade(pricing, l) for l in range(255)]
best_length = np.argmax(length_scores)
print 'Best window length:', best_length
# Get pricing data for a different timeframe
start2 = '2013-01-01'
end2 = '2015-01-01'
pricing2 = get_pricing('MCD', fields='price', start_date=start2, end_date=end2)
# Find the returns during this period using what we think is the best window length
length_scores2 = [trade(pricing2, l) for l in range(255)]
print best_length, 'day window:', length_scores2[best_length]
# Find the best window length based on this dataset, and the returns using this window length
best_length2 = np.argmax(length_scores2)
print best_length2, 'day window:', length_scores2[best_length2]
Explanation: If we pick the length based on which seems best - say, on how well our model or algorithm performs - we are overfitting. Below we have a simple trading algorithm which bets on the stock price reverting to the rolling mean (for more details, check out the mean reversion notebook). We use the performance of this algorithm to score window lengths and find the best one. However, when we consider a different timeframe, this window length is far from optimal. This is because our original choice was overfitted to the sample data.
End of explanation
plt.plot(length_scores)
plt.plot(length_scores2)
plt.xlabel('Window length')
plt.ylabel('Score')
plt.legend(['2011-2013', '2013-2015']);
Explanation: Clearly fitting to our sample data doesn't always give good results in the future. Just for fun, let's plot the length scores computed from the two different timeframes:
End of explanation |
3,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
let us evaluate a function of 3 variables on a relatively large mesh
Step1: the isolevel can be changed from the Python side
Step2: to avoid recentering one can disable camera auto fit
Step3: one can add other plots to the same scene | Python Code:
import numpy as np
import k3d
plot = k3d.plot()  # scene object (implied by the `plot +=` and `plot.display()` calls below)
T = 1.618033988749895
from numpy import sin,cos,pi
r = 4.77
zmin,zmax = -r,r
xmin,xmax = -r,r
ymin,ymax = -r,r
Nx,Ny,Nz = 80,80,80
x = np.linspace(xmin,xmax,Nx)
y = np.linspace(ymin,ymax,Ny)
z = np.linspace(zmin,zmax,Nz)
x,y,z = np.meshgrid(x,y,z,indexing='ij')
%time p = 2 - (cos(x + T*y) + cos(x - T*y) + cos(y + T*z) \
+ cos(y - T*z) + cos(z - T*x) + cos(z + T*x))
p3d_1 = k3d.marching_cubes(p,xmin=xmin,xmax=xmax,ymin=ymin,ymax=ymax, zmin=zmin, zmax=zmax,\
level=0.0)
plot += p3d_1
plot.display()
p3d_1.color = 0x114455
Explanation: let us evaluate a function of 3 variables on a relatively large mesh
End of explanation
p3d_1.level=-0.1
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
@interact(l=widgets.FloatSlider(value=-.1,min=-3,max=1.1))
def g(l):
p3d_1.level=-l
Explanation: the isolevel can be changed from the Python side:
End of explanation
plot.camera_auto_fit = False
plot.grid_auto_fit = False
Explanation: to avoid recentering one can disable camera auto fit:
End of explanation
%%time
p =(x**2+y**2+z**2+2*y-1)*((x**2+y**2+z**2-2*y-1)**2-8*z**2)+16*x*z*(x**2+y**2+z**2-2*y-1)
plot += k3d.marching_cubes(p,xmin=xmin,xmax=xmax,ymin=ymin,ymax=ymax, zmin=zmin, zmax=zmax, level=0.0,color=0xff0000)
%%time
p = x**2 + y**2 - z**2 -0.
plot += k3d.marching_cubes(p,xmin=xmin,xmax=xmax,ymin=ymin,ymax=ymax, zmin=zmin, zmax=zmax, level=0.0,color=0x00ff00)
Explanation: one can add other plots to the same scene:
End of explanation |
3,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dask Tutorial Notebook
<hr>
Notebook Description
This notebook serves as an introduction to Dask, which is a software library that allows us to scale analyses to large datasets. This notebook covers both what it is and how to use it, particularly in the context of the Open Data Cube. This notebook is strongly inspired by the Digital Earth Africa Dask tutorial notebook.
<hr>
<a id="Dask_Tutorial_Notebook_index"></a>Index
What is Dask?
Import Dependencies and Connect to the Data Cube
Define the Extents of the Analysis
Standard Load
Creating a Dask Client and Cluster
View the Dask Dashboard
Lazy Load with Dask
Viewing Dask Chunks
Viewing Task Graphs
Running a Task Graph (Loading)
Adding More Tasks
Persisting Data
Dask Resilience and Recovery
Chunking Strategies
Machine Learning with Dask
<span id="Dask_Tutorial_Notebook_what_is_dask">What is Dask? ▴</span>
Dask is a useful tool when working with large analyses - whether in space, time, or data density (amount of data per pixel) - as it breaks data into manageable chunks that can be easily stored in memory.
It can also use multiple computing cores to speed up computations.
It does all of this by creating a task graph. When applying operations to data loaded from the Data Cube normally, operations occur immediately. However, when using Dask, operations instead become plans to perform them based on how the data is chunked. How the data is chunked is specified in the arguments to datacube.Datacube.load() calls.
You can read the Dask documentation here.
You can read more about how xarray integrates with Dask here.
<span id="Dask_Tutorial_Notebook_import">Import Dependencies and Connect to the Data Cube ▴</span>
Step1: <span id="Dask_Tutorial_Notebook_define_extents">Define the Extents of the Analysis ▴</span>
Here we define an area of interest and examine it on a map.
Step2: <span id="Dask_Tutorial_Notebook_standard_load">Standard Load ▴</span>
By default, the datacube library will not use Dask when loading data.
This means that when dc.load() is used, all data relating to the load query will be requested and loaded into memory.
For very large areas or long time spans, this can cause the Jupyter notebook to crash.
Below, we show a standard load example.
Step3: Take note of how long the load took ("Wall time" in the output of the cell above).
<span id="Dask_Tutorial_Notebook_create_client_cluster">Creating a Dask Client and Cluster ▴</span>
To use Dask, you need to first set up a Dask client and cluster. The client allows access to the cluster. There are 2 main methods for doing this
Step4: A printout should appear, displaying information about the Client and the Cluster.
Enabling Dask - Advanced Method (JupyterLabs)
On the left-hand menu bar, click the Dask icon, as shown below
Step5: The function should return much faster than the standard load because it is not loading any data.
<span id="Dask_Tutorial_Notebook_view_chunks">Viewing Dask Chunks ▴</span>
To get a visual intuition for how the data has been broken into chunks, we can use the .data attribute provided by the xarray.Dataset from datacube.Datacube.load().
This attribute can be used on individual measurements from the lazy-loaded data.
When used in a Jupyter Notebook, it provides a table summarising the size of individual chunks and the number of chunks needed.
An example is shown below, using the red measurement from the lazy-loaded data
Step6: From the Chunk column of the table, we can see that the data has been broken into 5 chunks, with each chunk having a shape of (1 time, 782 pixels, 751 pixels) and taking up 1.17 MB of memory.
Comparing this with the Array column, using Dask means that we can load 5 chunks of 1.17 MB rather than one chunk of 5.87 MB.
Breaking large datasets into chunks and loading chunks one at a time means that you can do computations over large areas and time spans without crashing the Python kernel by running out of memory.
<span id="Dask_Tutorial_Notebook_view_task_graph">Viewing Task Graphs ▴</span>
When using lazy-loading, Dask breaks up the loading operation into a series of steps.
A useful way to visualise the steps is the task graph, which can be accessed by adding the .visualize() method to a .data call
Step7: The task graph is read from bottom to top.
The 5 rectangles at the bottom of the graph are the database entries describing the files that need to be read to load the data.
Above the rectangles are individual load commands that will do the reading.
There is one for each chunk.
The arrows describe which files need to be read for each operation.
At the very top are the indexes of the chunks that will make up the final array.
<span id="Dask_Tutorial_Notebook_run_task_graph">Running a Task Graph (Loading) ▴</span>
When working with lazy-loaded data, you have to specifically ask Dask to run a task graph to obtain the results.
Until you do this, the task graph only knows what to do, not the results.
To obtain a task graph result and return it as an xarray.Dataset with real values (data being held in numpy.ndarray objects, not dask.array objects), call .compute() on the DataArray or Dataset.
If you opened the Dask progress or graph window, you should see the computation proceed there.
Step8: The Dask arrays constructed by the lazy load looked like this
Step9: <span id="Dask_Tutorial_Notebook_persist">Persisting Data ▴</span>
As discussed previously, calling compute() on Dask data will cause the task graph to be processed and the result to be brought into memory.
However, there are times that temporary, intermediate results need to be kept in memory to be reused in a task graph rather than recomputed.
First, we need to load more data with larger chunks to see a significant performance difference.
Step10: Consider the following 2 task graphs
Step11: See how long these 2 graphs take to run.
Step12: In contrast to compute() which stores results in the calling context, persist() stores results in Dask to be reused.
Note that persist() is "non-blocking", or "asynchronous", so although it does process the task graph up to the temporary result referenced by the object persist() is called on, it does so in the background, so the persist() call terminates very quickly even for large task graphs.
Calling persist() on lazy_data_large will load the data into Dask's memory.
Step13: Wait a few seconds after the above cell has finished, and then run the following cell to see how long it takes to compute the 2 task graphs we did previously. You can track the progress in the Dask dashboard.
Step14: In this case, computing these 2 task graphs is several times faster when the data has already been loaded into memory by calling persist() on it than when it is loaded separately for each task graph. Most of the time taken in the first example (without persist()) was in loading the data 2 times.
WARNING
Step15: Xarray objects containing Dask arrays constitute cancellable objects for a Dask client. So we can simply do the following to terminate the processing for this load | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.dask import create_local_dask_cluster
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube(app="Dask_Tutorial_Notebook")
Explanation: Dask Tutorial Notebook
<hr>
Notebook Description
This notebook serves as an introduction to Dask, which is a software library that allows us to scale analyses to large datasets. This notebook covers both what it is and how to use it, particularly in the context of the Open Data Cube. This notebook is strongly inspired by the Digital Earth Africa Dask tutorial notebook.
<hr>
<a id="Dask_Tutorial_Notebook_index"></a>Index
What is Dask?
Import Dependencies and Connect to the Data Cube
Define the Extents of the Analysis
Standard Load
Creating a Dask Client and Cluster
View the Dask Dashboard
Lazy Load with Dask
Viewing Dask Chunks
Viewing Task Graphs
Running a Task Graph (Loading)
Adding More Tasks
Persisting Data
Dask Resilience and Recovery
Chunking Strategies
Machine Learning with Dask
<span id="Dask_Tutorial_Notebook_what_is_dask">What is Dask? ▴</span>
Dask is a useful tool when working with large analyses - whether in space, time, or data density (amount of data per pixel) - as it breaks data into manageable chunks that can be easily stored in memory.
It can also use multiple computing cores to speed up computations.
It does all of this by creating a task graph. When applying operations to data loaded from the Data Cube normally, operations occur immediately. However, when using Dask, operations instead become plans to perform them based on how the data is chunked. How the data is chunked is specified in the arguments to datacube.Datacube.load() calls.
You can read the Dask documentation here.
You can read more about how xarray integrates with Dask here.
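As a toy illustration of this lazy, chunked evaluation outside the Data Cube context (a short sketch using dask.array directly; it is not part of this notebook's workflow):
import dask.array as da
a = da.ones((10000, 10000), chunks=(1000, 1000))  # 100 chunks; nothing is computed yet
total = a.sum()                                   # only extends the task graph
print(total.compute())                            # runs the graph, chunk by chunk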
<span id="Dask_Tutorial_Notebook_import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
# Freetown, Sierra Leone (small)
x=(-13.3422, -13.1396)
y=(8.2968, 8.5076)
display_map(y,x)
Explanation: <span id="Dask_Tutorial_Notebook_define_extents">Define the Extents of the Analysis ▴</span>
Here we define an area of interest and examine it on a map.
End of explanation
common_load_params = dict(product='ls8_usgs_sr_scene',
measurements=['red', 'green', 'blue'],
x=x, y=y, time=("2017-01-01", "2017-12-31"),
output_crs='EPSG:4326', resolution=(-0.00027, 0.00027))
%%time
data = dc.load(**common_load_params)
data
Explanation: <span id="Dask_Tutorial_Notebook_standard_load">Standard Load ▴</span>
By default, the datacube library will not use Dask when loading data.
This means that when dc.load() is used, all data relating to the load query will be requested and loaded into memory.
For very large areas or long time spans, this can cause the Jupyter notebook to crash.
Below, we show a standard load example.
End of explanation
client = create_local_dask_cluster()
client
Explanation: Take note of how long the load took ("Wall time" in the output of the cell above).
<span id="Dask_Tutorial_Notebook_create_client_cluster">Creating a Dask Client and Cluster ▴</span>
To use Dask, you need to first set up a Dask client and cluster. The client allows access to the cluster. There are 2 main methods for doing this: the preferred method and an advanced method that should be used in a JupyterLabs environment when either the preferred method does not work or both the JupyterLabs Dask menu allows creation of a distributed cluster (not a local cluster, which is created by default unless the JupyterLabs environment is configured otherwise) AND scalability is desired.
Enabling Dask - Preferred, Convenient Method
The preferred way to create a Dask cluster which works in and outside Jupyter environments is by using the create_local_dask_cluster() function (see imports above for utils path), as used in the cell below. This function creates a Dask cluster on your Jupyter server, so it will share compute resources like CPU, memory, and storage with your Jupyter server. This cluster cannot scale up in resources beyond your Jupyter server's resources.
This is preferred for these reasons:
It works anywhere Dask is available - not just in Jupyter environments.
It allows notebooks that use Dask to be easily run by tools that run notebooks automatically (good for testing).
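For reference, a local cluster of this kind boils down to roughly the following sketch; the utility function may pass different arguments, so treat this as an approximation of what it does rather than its exact implementation.
from dask.distributed import Client, LocalCluster
cluster = LocalCluster()    # workers share the Jupyter server's CPU and memory
client = Client(cluster)
client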
End of explanation
%%time
lazy_data = dc.load(**common_load_params,
dask_chunks={'time': 1, 'longitude': 3000, 'latitude': 3000})
lazy_data
Explanation: A printout should appear, displaying information about the Client and the Cluster.
Enabling Dask - Advanced Method (JupyterLabs)
On the left-hand menu bar, click the Dask icon, as shown below:
Click the "+ New" button to create a new Dask cluster.
You should now see a new Dask Dashboard URL.
Clicking on a cluster in the clusters list sets the dashboard URL to the one for that cluster. This changes what cluster is being monitored by the JupyterLabs Dask plugin, but note that the cluster used by code depends on what cluster the Dask client is connected to in that code's context (e.g. a Jupyter notebook).
To create a client for this cluster, click and drag the cluster into a notebook. This will create a cell that creates a Dask client called client.
Here is what dragging the cluster entry into a notebook looks like:
This is the resulting cell that creates the Dask client for the selected cluster:
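The auto-generated cell typically looks something like the sketch below (the scheduler address is hypothetical and will differ on your system):
from dask.distributed import Client

client = Client("tcp://127.0.0.1:37279")  # hypothetical address filled in by the plugin
client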
Note that in this example, the cluster is local. On JupyterLabs servers that only create local Dask clusters like this, this method is only to be used if the create_local_dask_cluster() function is not available.
Enabling Dask - Final Fallback
If none of the above methods of creating a Dask client and cluster work, then continue without them. The code should still work, but you will not be able to view the Dask dashboard (to monitor the Dask cluster) or perform actions that require a Dask client object.
<span id="Dask_Tutorial_Notebook_view_dashboard">View the Dask Dashboard ▴</span>
For now, we're mostly interested in the hyperlink after the Dashboard heading, which should look something like /user/\<username>/proxy/\<port>/status, where \<username> is your username on the Jupyter server. The full URL to the dashboard is that URL prefixed with the domain. For example, if your JupyterLabs server URL is odc.sandbox/user/me/lab, and your dashboard URL is /user/me/proxy/8787/status or something like http://127.0.0.1:8787/status, then your full dashboard URL is odc.sandbox/user/me/proxy/8787/status.
The Dashboard link for the client creation cell in a notebook will open a new tab in your browser to the Dask dashboard for that cluster, which allows you to monitor that Dask cluster - including real-time task progress.
There are 2 ways to view the dashboard:
1. Navigate to the dashboard in your browser, such as by clicking the dashboard URL in the printout for the client in the notebook.
2. Set up the dashboard inside the JupyterLabs Environment. This does not require a different browser tab for the dashboard.
We'll now cover how to do the second option.
Dask dashboard inside JupyterLabs (no extra browser tabs)
On the left-hand menu bar, click the Dask icon, as shown below:
Copy and paste the Dashboard link from the Client print out into the DASK DASHBOARD URL text box (may need to account for a format like 127.0.0.1, as mentioned previously):
If the URL is valid, the buttons should change color from grey to orange.
Click the orange PROGRESS button on the Dask panel, which will open a new tab inside the JupyterLabs environment.
To view the Dask window and your active notebook at the same time, drag the new Dask Progress tab to the bottom of the screen.
Now, when you do computations with Dask, you'll see the progress of the computations in this new Dask window.
<span id="Dask_Tutorial_Notebook_lazy_load">Lazy Load with Dask ▴</span>
To use Dask with ODC data, specify a dask_chunks argument to datacube.Datacube.load() calls. This argument is a dictionary specifying what chunk size to use for each dimension (x, y, time). If no chunk size is specified for a dimension, or if it is set to -1, or if the specified chunk size exceeds the length of that dimension of the data, then the chunk size is the length of that dimension of the data.
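To make those defaults concrete, here is a sketch of an alternative call (not used in this notebook) that chunks only along time and keeps each spatial dimension as a single chunk:
lazy_alt = dc.load(**common_load_params,
                   dask_chunks={'time': 5, 'longitude': -1, 'latitude': -1})
lazy_alt.red.data.chunks   # one chunk spatially, several along time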
Here, we load data with Dask - specifically with a chunk size of {'time': 1, 'longitude': 3000, 'latitude': 3000}.
When using Dask, the datacube.Datacube.load() function will switch from immediately loading the data to "lazy-loading" the data. This means the data is only loaded when the task graph is run (when the data is needed).
Using Dask changes the data structure returned from datacube.Datacube.load(): the returned xarray.Dataset will be comprised of dask.array objects instead of numpy.ndarray objects.
The following is an example of lazy loading data with Dask:
End of explanation
lazy_data.red.data
Explanation: The function should return much faster than the standard load because it is not loading any data.
<span id="Dask_Tutorial_Notebook_view_chunks">Viewing Dask Chunks ▴</span>
To get a visual intuition for how the data has been broken into chunks, we can use the .data attribute provided by the xarray.Dataset from datacube.Datacube.load().
This attribute can be used on individual measurements from the lazy-loaded data.
When used in a Jupyter Notebook, it provides a table summarising the size of individual chunks and the number of chunks needed.
An example is shown below, using the red measurement from the lazy-loaded data:
End of explanation
lazy_data.red.data.visualize()
Explanation: From the Chunk column of the table, we can see that the data has been broken into 5 chunks, with each chunk having a shape of (1 time, 782 pixels, 751 pixels) and taking up 1.17 MB of memory.
Comparing this with the Array column, using Dask means that we can load 5 chunks of 1.17 MB rather than one chunk of 5.87 MB.
Breaking large datasets into chunks and loading chunks one at a time means that you can do computations over large areas and time spans without crashing the Python kernel by running out of memory.
<span id="Dask_Tutorial_Notebook_view_task_graph">Viewing Task Graphs ▴</span>
When using lazy-loading, Dask breaks up the loading operation into a series of steps.
A useful way to visualise the steps is the task graph, which can be accessed by adding the .visualize() method to a .data call:
End of explanation
loaded_data = lazy_data.compute()
loaded_data
Explanation: The task graph is read from bottom to top.
The 5 rectangles at the bottom of the graph are the database entries describing the files that need to be read to load the data.
Above the rectangles are individual load commands that will do the reading.
There is one for each chunk.
The arrows describe which files need to be read for each operation.
At the very top are the indexes of the chunks that will make up the final array.
<span id="Dask_Tutorial_Notebook_run_task_graph">Running a Task Graph (Loading) ▴</span>
When working with lazy-loaded data, you have to specifically ask Dask to run a task graph to obtain the results.
Until you do this, the task graph only knows what to do, not the results.
To obtain a task graph result and return it as an xarray.Dataset with real values (data being held in numpy.ndarray objects, not dask.array objects), call .compute() on the DataArray or Dataset.
If you opened the Dask progress or graph window, you should see the computation proceed there.
End of explanation
lazy_data.red.mean('time').data.visualize()
Explanation: The Dask arrays constructed by the lazy load looked like this:
red (time, latitude, longitude) int16 dask.array<chunksize=(1, 782, 751), meta=np.ndarray>
But the result of calling compute() on the lazy_data xarray object is:
red (time, latitude, longitude) int16 7280 7297 7307 ... 233 233
So the result is the same as the data loaded from the first query (data).
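As a quick check (a small sketch, not in the original notebook), we can also confirm that the Dask-backed result matches the eagerly loaded data from the standard load:
print(loaded_data.red.equals(data.red))   # should be True: same query, same values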
<span id="Dask_Tutorial_Notebook_add_tasks">Adding More Tasks ▴</span>
Now lets add more tasks to the task graph than just loading the data.
We will calculate the mean of the red band across time and then view the new task graph:
End of explanation
# Freetown, Sierra Leone (large)
x=(-13.3422, -12.8890)
y=(8.0670, 8.5076)
%%time
lazy_data_large = dc.load(product='ls8_usgs_sr_scene',
measurements=['red', 'blue'],
x=x, y=y, time=("2015-01-01", "2017-12-31"),
output_crs='EPSG:4326', resolution=(-0.00027, 0.00027),
dask_chunks={'time': 20, 'longitude': 3000, 'latitude': 3000})
lazy_data_large
Explanation: <span id="Dask_Tutorial_Notebook_persist">Persisting Data ▴</span>
As discussed previously, calling compute() on Dask data will cause the task graph to be processed and the result to be brought into memory.
However, there are times that temporary, intermediate results need to be kept in memory to be reused in a task graph rather than recomputed.
First, we need to load more data with larger chunks to see a significant performance difference.
End of explanation
lazy_red_minus_blue = lazy_data_large.red - lazy_data_large.blue
lazy_blue_minus_red = lazy_data_large.blue - lazy_data_large.red
Explanation: Consider the following 2 task graphs: 1 computes the difference of the red and blue bands and the other computes the difference of the blue and red bands.
End of explanation
%%time
lazy_red_minus_blue.compute()
lazy_blue_minus_red.compute()
print()
Explanation: See how long these 2 graphs take to run.
End of explanation
%%time
lazy_data_large = lazy_data_large.persist()
print()
Explanation: In contrast to compute() which stores results in the calling context, persist() stores results in Dask to be reused.
Note that persist() is "non-blocking", or "asynchronous", so although it does process the task graph up to the temporary result referenced by the object persist() is called on, it does so in the background, so the persist() call terminates very quickly even for large task graphs.
Calling persist() on lazy_data_large will load the data into Dask's memory.
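Rather than waiting an arbitrary few seconds, you can also watch the persist itself; the line below is a sketch using the distributed progress helper on the object persisted above:
from dask.distributed import progress
progress(lazy_data_large)   # shows a progress bar while the persisted chunks load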
End of explanation
lazy_red_minus_blue = lazy_data_large.red - lazy_data_large.blue
lazy_blue_minus_red = lazy_data_large.blue - lazy_data_large.red
%%time
lazy_red_minus_blue.compute()
lazy_blue_minus_red.compute()
print()
Explanation: Wait a few seconds after the above cell has finished, and then run the following cell to see how long it takes to compute the 2 task graphs we did previously. You can track the progress in the Dask dashboard.
End of explanation
x=(-13.4699, -11.9100)
y=(7.3398, 8.5804)
%%time
lazy_data_large = dc.load(product='ls8_usgs_sr_scene',
measurements=['red', 'blue'],
x=x, y=y, time=("2015-01-01", "2017-12-31"),
output_crs='EPSG:4326', resolution=(-0.00027, 0.00027),
dask_chunks={'time': 40, 'longitude': 3000, 'latitude': 3000})
if client is not None:
lazy_data_large.persist()
Explanation: In this case, computing these 2 task graphs is several times faster when the data has already been loaded into memory by calling persist() on it than when it is loaded separately for each task graph. Most of the time taken in the first example (without persist()) was in loading the data 2 times.
WARNING: Generally, persist() should not be run on a task graph that will only load data, because there may not be enough memory to hold all of this data. Either scale up the resources of your Dask cluster temporarily, or if that is not possible, examine your full notebook algorithm (how the data is processed) to see if there is a temporary result that is smaller than the full loaded dataset (e.g. composites).
You can read more about managing memory with Dask here.
<span id="Dask_Tutorial_Notebook_resilience_recovery">Dask Resilience and Recovery ▴</span>
Dask is very fault-tolerant. If a worker becomes unresponsive, it will be replaced by another worker. When this happens, the original worker's data is lost and must be reobtained by another worker.
WARNING: If the total amount of data required for a persist() or compute() surpasses the amount of free memory across the cluster, then the cluster can get stuck in an infinite loop of trying to obtain the results and having all workers fail due to memory exhaustion. This will continue to waste resources until the task is cancelled or the workers are stopped.
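If the cluster does end up stuck in such a loop, one blunt way to recover (besides cancelling the offending object, as shown below) is to restart the workers, which clears their memory; a one-line sketch assuming the client object exists:
client.restart()   # kill and relaunch all workers, dropping everything they held in memory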
Dask tasks can be stopped by the client. For a client object called client and a cancellable object obj, client.cancel(obj) terminates processing of the task graph associated with obj. So this requires a Dask client object. If you do not have a Dask client object, then be very careful what you tell Dask to do because it cannot be cancelled.
The following example attempts to load a lot of data and persist it.
End of explanation
if client is not None:
client.cancel(lazy_data_large)
Explanation: Xarray objects containing Dask arrays constitute cancellable objects for a Dask client. So we can simply do the following to terminate the processing for this load:
End of explanation |
3,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143-156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
Step1: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
Step2: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare best fit line from each of these models to Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
Step3: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that
Step4: Second plot
The dotted black lines form 95% point-wise confidence band around 10 quantile regression estimates (solid black line). The red lines represent OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
data = sm.datasets.engel.load_pandas().data
data.head()
Explanation: Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143-156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
End of explanation
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
Explanation: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
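For reference, the q-th quantile fit minimizes the asymmetric absolute ('check', or pinball) loss rather than squared error; a minimal sketch of that loss (not part of the statsmodels example):
def pinball_loss(y_true, y_pred, q):
    # rho_q(u) = u * (q - 1{u < 0}), summed over observations
    u = np.asarray(y_true) - np.asarray(y_pred)
    return np.sum(u * (q - (u < 0)))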
End of explanation
quantiles = np.arange(.05, .96, .1)
def fit_model(q):
res = mod.fit(q=q)
return [q, res.params['Intercept'], res.params['income']] + \
res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b', 'lb', 'ub'])
ols = smf.ols('foodexp ~ income', data).fit()
ols_ci = ols.conf_int().loc['income'].tolist()
ols = dict(a = ols.params['Intercept'],
b = ols.params['income'],
lb = ols_ci[0],
ub = ols_ci[1])
print(models)
print(ols)
Explanation: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare best fit line from each of these models to Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
End of explanation
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in range(models.shape[0]):
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', color='grey')
y = get_y(ols['a'], ols['b'])
ax.plot(x, y, color='red', label='OLS')
ax.scatter(data.income, data.foodexp, alpha=.2)
ax.set_xlim((240, 3000))
ax.set_ylim((240, 2000))
legend = ax.legend()
ax.set_xlabel('Income', fontsize=16)
ax.set_ylabel('Food expenditure', fontsize=16);
Explanation: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that:
Food expenditure increases with income
The dispersion of food expenditure increases with income
The least squares estimates fit low income observations quite poorly (i.e. the OLS line passes over most low income households)
End of explanation
n = models.shape[0]
p1 = plt.plot(models.q, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.q, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.q, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.q, [ols['b']] * n, color='red', label='OLS')
p5 = plt.plot(models.q, [ols['lb']] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.q, [ols['ub']] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
Explanation: Second plot
The dotted black lines form 95% point-wise confidence band around 10 quantile regression estimates (solid black line). The red lines represent OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution.
End of explanation |
3,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
7 - Sample Data Inheritance - The Microstructure Class
This sixth Notebook will introduce you to
Step2: This file is zipped in the package to reduce its size. We will have to unzip it to use it and learn how to reduce its size with the SampleData methods. If you are just reading the documentation and not executing it, you may just skip this cell and the next one.
Step3: Let us now open the dataset, using the Microstructure class constructor this time
Step4: The dataset is now open, we can now look at its content
Step5: We can also print the description attribute of the dataset to read information written by its creators
Step6: As you can see, this dataset is a digital twin of a real polycristalline microstructure of a grade 2 Titanium sample, gathering both experimental and numerical data obtained through Diffraction Contrast Tomography imaging, and FFT-based mechanical simulation.
This dataset will serve as an illustration of the Microstructure class data model. In the dataset content printed above, we indeed recognize the 4 groups defined by the minimal_data_model. They all contain essential data to represent the microstructure, and are presented in details in the next subsections.
Phases
Obviously, the first information that must be provided when describing a polycrystalline material sample, is its composition. Polycrystalline samples may be composed of multiple phases, having all distinct crystallographic and physical properties. The PhaseData group in the Microstructure data model is designed to store this essential information on the various material phases composing the sample, and their properties.
We can see in the dataset content above that this Group only contains another group phase_01. Their content is
Step7: As you may observe, these groups only contain data in the form of attributes, i.e. metadata, on the crystalline phase that compose the dataset. The PhaseData group has as many children groups as there are phases in the dataset. Each one of these groups has a predefined name, following the pattern phase_+ phase_number. They provide the name, number, and crystallographic (symmetry, lattice parameters), chemical (formula) and physical information (elasticity constants) of each phase.
<div class="alert alert-warning">
**Warning**
Working with multiple phases in `Microstructure` datasets is still a feature under development, not yet stable. It is possible to have multiple phase groups within a dataset, but some class methods working with Phases may not yet be compatible with multiple phase datasets.
</div>
Getting Phase objects from a dataset
The metadata content of a phase_XX Group can be retrieved as a Pymicro Phase object, which is a container object, with the get_phase method
Step8: The class has a _phase private attribute, that is a list of Pymicro CrystallinePhase objects, synchronized with the content of the PhaseData group. Calling the sync_phases method ensures this synchronization. It also ensured by the after_file_open method at each time that the file is opened.
The CellData Group
Group aim and content
We will move now to the description of the CellData group. As you can see, it is an Image Group (3D image in the case of this example file). The CellData group is aimed at storing descriptions of the polycristalline microstructure of the sample, in the form of images. They may describe the geometry and orientation of the grains, the presence of damage or the micromechanical state of the sample.
These images can be 2D or 3D, and be
Step9: Group minimal content
The minimal_data_model of the class contains three fields data items that are attached to the CellData group. These scalar fields of integers allow to completely describe the geometry and microstructure of the polycrystalline material sample associated to the Microstructure dataset. They are
Step10: As you can see, the phase map is not defined here, which is not a problem when the sample only has one crystalline phase (which is the case here). The mask also contains only 1 values, which means that the dataset represents a full cube of the sample microstructure. The grain map contains 111 grain IDs values.
The active grain map class attribute
It may be relevant, for some datasets, to store several fields for the same information. For instance, when reconstructing a microstructure from an imaging experiment, the microstructure in the reconstruction process raw output may be incomplete, and require some image processing. In that case, storing both versions of the array (raw reconstruction output and image after processing) is relevant.
In the specific case of the grain_map field, if you have several arrays for this information in your dataset, you may use the active_grain_map class attribute to specify which one is to be considered as the sample main grain map by setting the Microstructure.active_grain_map attribute to a name of this field data item. This implies that the get_grain_map method will return specifically this field, and that all Microstructure methods working with the sample grain map will use this field as well.
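For instance, switching between them could look like the sketch below, where micro stands for an open Microstructure object and the field name is hypothetical:
micro.set_active_grain_map('grain_map_raw')   # hypothetical name of the alternate field
grain_map = micro.get_grain_map()             # now returns the newly activated field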
Our example dataset has two versions of the grain map information
Step11: We can now check the return of get_grain_map
Step12: <div class="alert alert-info">
**Note**
Note that the same mechanism will be implemented soon for the phase map field.
</div>
Visualize sample microstructure image
Step13: When no argument is provided, the method plots the middle (X,Y)-wise slice of the sample grain map, with a random color map for the grain ids, and shows the mask on the foreground in transparency mode. Here the mask cannot be actually seen as it is uniform on the slice.
Let us plot a slice of an other example dataset that allows to see the mask
Step14: You should get a figure like this one
Step15: You should get
Step16: You should get
Step17: You should get
Step19: Other methods to get specific grain data are also available in the class interface
Step20: Getting grain objects
Pymicro also has Grain objects that are specific containers equivalent to a row of the dataset GrainDataTable. You can get them with the following methods
Step21: The grains class attribute
The Microstructure.grains attribute is, as mentioned earlier, an alias for the Pytables node associated to the GrainDataTable data item. As such, it allows to manipulate and interact with the GrainDataTable content directly in the dataset.
You can use this attribute to access grain data just like you would manipulate a Numpy structured array
Step22: This attribute can also be iterated
Step23: The MeshData Group
The MeshData group is aimed at storing descriptions of the polycristalline microstructure of the sample, in the form of a mesh. In the present dataset, this group is used as a container group for a Mesh Group, grains_mesh. Mesh support has not yet been developped for Pymicro Microstructure class, and hence the data model of this group is for now empty.
Additional data items
As you can see above, the example dataset also contains data items that are not defined in the data model of the Microstructure class (the Amitex_Results Group, several fields of the CellData image...). Obviously, as a SampleData children class, with Microstructure class, you may add any additional data item to your datasets, as you would with SampleData datasets.
This concludes the presentation of the Microstructure class data model and data access.
Creating and setting Microstructures
Now that the class data model has been presented, we will now introduce how to create and fill Microstructure objects.
There are three ways to create a Microstructure dataset
Step24: Crop an existing Microstructure
Sometimes, it may be desirable not to copy a complete microstructure, but only a specific region of it to build a dataset dedicated to this region. For that, you may use the crop method of the Microstructure class, that return a new microstructure object.
The method
Step25: You should get
Step26: The new microstructure has been created, with the complete class data model filled with empty data items. We will now fill it with data corresponding to the cropped region in the previous subsection.
Setting CellData items
As an Image Group, you can add fields to the CellData with the SampleData method add_field. In the specific case of the Microstructure class, you can use specific methods to set the value of the fields that are part of the class data model
Step27: Now that we have created our arrays, we can set the CellData fields. Note that when you add the first CellData field, you have to specify a pixel/voxel size to set the scale of the image.
Step28: You should get
Step29: Setting the phase
Likewise, we can take advantage of the get_phase and set_phase methods to transfer the phase data from the original dataset to our new one. The set_phase method takes as argument a pymicro CrystallinePhase object. These object contain an identification number for the phase. If a phase with same number already exists in the dataset, it is overwritten by the inputted one.
Let us add phase data to our dataset
Step30: Note that phases can also be added with the set_phases method, that takes as input a list of pymicro CrystallinePhase objects. You can also use the add_phase method, that adds a CrystallinePhase object to the dataset with the next available phase identification number of the microstructure.
Setting the GrainDataTable
The Microstructure class offers several ways to fill the GrainDataTable, that are successively reviewed in this subsection.
From the grain map
As detailed earlier, the GrainDataTable contains data describing the grains position and morphology. These values can be computed from the grain map, that provides the full geometry of each grain. Specific methods of the Microstucture class allow to compute those values and automatically fill the GrainDataTable with them. They are
Step31: The table has been updated with the geometric data of 9 grains, the grain identity number. You can also see that the method has added a random orientation to each grain. If you want to avoid this, you may use instead the compute_grains_geometry method, that do not generates random orientations for the grains but fills orientation with zeros.
From data arrays
So, at this point, our GrainDataTable has its geometric values in accordance with the grain map, but wrong grain orientations, that have been randomly generated. We can however get the correct grain orientations from the original example microstructure dataset, with the get_grain_rodrigues method
Step32: Now, we can use the set_orientations method to add this information to our data table
Step33: Similar methods exist for the rest of the data in the table
Step34: But you can also iterate the grains attribute of the class. By doing this, you will get at each iteration a Pytables Row object representing a row of data in the table, i.e. a grain. You can access its values exactly as if it was a Numpy structured array
Step35: You can also use this process to set specifically some values for one grain. You can iterate the table to find your grain object, set one of its values as if it was a Numpy structured array. Then you have to use the specific update method of the Pytables Row class to set the value in the dataset, as follows
Step36: Obviously, you can use the same method to get/set other columns of the table (centers, bounding boxes...)
We have now completed this introduction to the Microstructure class. More advanced features of the class are already implemented in the code, and many more will be in the next years. Specific tutorial Notebooks about this features will be released in the future, as well as examples presented in the documentation Cookbook.
We can now close our datasets, and remove the original unarchived file | Python Code:
from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path
import os
dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure') # test dataset file path
tar_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure.tar.gz') # dataset archive path
Explanation: 7 - Sample Data Inheritance - The Microstructure Class
This sixth Notebook will introduce you to:
the SampleData class inheritance mechanisms
the Microstructure class of the pymicro package
the Microstructure class data model and how to browse through its content
the different ways that exist to create a Microstructure object
the basic methods to get and set the data that compose a Microstructure object
<div class="alert alert-info">
**Note**
Throughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [first notebook of this User Guide](./SampleData_Introduction.ipynb) of this User Guide.
</div>
SampleData Inheritance
The SampleData class implements a generic interface between users, numeric tools, and HDF5/XDMF multimodal datasets for material science and mechanics. It allows to create empty datasets or open existing ones, and leaves entirely to the user the definition of the dataset internal content and organization.
For specific and repeated applications, that always involve the same type of datasets, it may be convenient to standardize and predefine the internal organization of the dataset. For instance, to manage the data of a serie of material samples that are studied through SEM, EBSD imaging, and numerical simulation of the imaging digital twins, you will to define for each the same Image Group to store the imaging outputs, and a Mesh or Image group at least to store the simulation output.
For such cases, it becomes convenient to define a more specific interface, for which all the internal organization of datasets (their data mode), is already defined. For that purpose, the SampleData class offers the possibility to create inherited classes with a predefined data model through two particular and simple mechanisms, that are the subject of the present section.
Custom Data Model
The SampleData class defines a minimal data model for all the datasets created with the class. This data model is a collection of data item indexnames, pathes and types, in the form of two dictionaries. The keys of these two dictionaries must be identical, and define all the indexnames of the items in the data model. There values are:
minimal_content_index_dic: the path of each data item in the data model
minimal_content_type_dic: the type of each data item in the data model
The content index dictionary
Each item of this dictionary will define a data item of the model. Its key will be the indexname given to the data item in the dataset, and the item value must be a string giving a valid path for the data item in the dataset. For a path to be valid, the different levels of depth in it must have been declared within the dictionary.
This dictionary should hence look like this:
python
minimal_content_index_dic = {'item1': '/path_to_item1',
'item2': '/path_to_item1/path_to_item2',
'item3': '/path_to_item3',
'...': '...',}
An item of the form 'wrongitem': '/undeclared_item/path_to_wrong_item' would have been a non valid path.
The dictionary example just above would lead to the creation of at least 3 data items, with names item1, item2 and item3, with items 1 and 3 being directly attached to the dataset Root Group, and the item 2 being a children of item 1.
The content type dictionary
The second dictionary that has to be declared must have the same keys as the minimal_content_index_dic. Its values must be valid SampleData data item types. This dictionary will determine the type of data item that will be automatically created at the dataset creation, by the subclass.
Possible values and associated data types are (see previous tutorials for description of these data types):
'Group': creates a HDF5 group data item
'2DImage', '3DImage', or 'Image': creates an empty Image group
'2DMesh', '3DMesh', 'Mesh': creates an empty Mesh group
'data_array': creates an empty Data Array
'field_array': creates an empty Field Array (its path must be a children of a an Image or Mesh group)
'string_array': creates an empty String Array
a numpy.dtype or a tables.IsDescription class (see here and tutorial 3):
This dictionary should look like this (assuming that it corresponds to the content dictionary of the subsection above):
python
minimal_content_index_dic = {'item1': '3DMesh',
'item2': 'field_array',
'item3': 'data_array',
'...': '...',}
In this case, the first item would be created as a Mesh Group, the second will be created as a field data item stored in this mesh, and the last as a data array attached to the Root Group.
These two dictionaries are returned by the minimal_data_model method of the SampleData class. They are used during the dataset object initialization, to create the prescribed data model, and populate it with empty objects, with the right names and organization. This allows to prepend a set of names and pathes that form a particular data model that all objects created by the class should have.
It is labelled as a minimal data model, as it only prescribes the data items and organization that will be present in each dataset of the subclass. The user is free to enrich the datasets created with this class with any additional data item that he would want to add.
In the SampleData code, they are returned empty, so that no actual data model is created within a new SampleData dataset. This method is actually designed to create subclasses of SampleData associated to a specific data model. To achieve this, you have to:
Create a new class, inherited from SampleData
Override the minimal_data_model method and write your data model in the two dictionaries returned by the class
You will then get a class derived from SampleData (hence with all its methods and features), that creates datasets with this prescribed data model. You will see an example of it in the next section dedicated to the Microstructure class, which is designed this way.
Custom initialization
The other mechanisms that is important to design subclasses of SampleData, is the specification of all initialization commands that must be run each time the dataset files are closed and opened again (this happens for instance when repacking the dataset, or calling the pause_for_visualization method). These operations can include, for instance, the definition of class attributes that points toward a specific node in the dataset, the loading of data from the dataset files in some class attributes, some sanity checks on the data etc.....
All these operations must be implemented in the _after_file_open method of the subclass. Again, the Microstructure class described in the next section will provide an example.
The Microstructure Class
The Microstructure class has been designed to handle multimodal datasets representing polycrystalline material samples. These materials have a specific microstructure composed of crystalline grains, that are characterized by a specific geometry and a crystalline orientation. The microstructure of polycrystalline materials strongly determines their physical and mechanical properties, and is thus extensively studied by material scientists.
The Microstructure class offers methods to easily manipulate multiomdal 4D data of granular material samples, in particular regarding geometrical and crystallographic aspects of data management and processing. As this type of data is a particular case of the of datasets for which the SampleData class has been designed, the Microstructure has been derived from the SampleData class.
A SampleData children class
Indeend, the Microstructure class is a subclass of the SampleData class:
python
class Microstructure(SampleData):
As a children of SampleData, it inherits all of its features: a Microstructure object is associated with a HDF5/XDMF file pair, and allows to create/get/remove/compress all types of data items handeled by the SampleData class, presented in the previous tutorials. As a children of SampleData, the Microstructure class benefits of the two mechanisms presented in the first section of this tutorial. We will see now how they are implemented for this class.
The minimal data model
This subsection will present the Microstructure class data model, and will also serve as a demonstrator of the data model mechanism described in the first section of this tutorial.
The code of the minimal_data_model method of the Microstructure class contains the following declaration of the data model dictionaries:
python
minimal_content_index_dic = {'Image_data': '/CellData',
'grain_map': '/CellData/grain_map',
'phase_map': '/CellData/phase_map',
'mask': '/CellData/mask',
'Mesh_data': '/MeshData',
'Grain_data': '/GrainData',
'GrainDataTable': '/GrainData/GrainDataTable',
'Phase_data': '/PhaseData'}
minimal_content_type_dic = {'Image_data': '3DImage',
'grain_map': 'field_array',
'phase_map': 'field_array',
'mask': 'field_array',
'Mesh_data': 'Mesh',
'Grain_data': 'Group',
'GrainDataTable': GrainData,
'Phase_data': 'Group'}
You can see that this data model contains a GrainData data item type. This is a tables.IsDescription object, inducing hence the creation of a Structured Array data item. The definition of this description in the Microstructure class code will be provided further in this tutorial, in the subsection dedicated to the Grain Data Table data item of the class.
As you can see, the data model contains one Image Group with three declared fields, one Mesh Group, and two Groups, one of which contains a Structured Array data item. It will be detailed in the next section of this tutorial.
The after file open operations
The _after_file_open method of the Microstructure is composed of the following lines of code:
python
def _after_file_open(self):
    """Initialization code to run after opening a Sample Data file."""
self.grains = self.get_node('GrainDataTable')
if self._file_exist:
self.active_grain_map = self.get_attribute('active_grain_map',
'CellData')
if self.active_grain_map is None:
self.set_active_grain_map()
self._init_phase(phase)
if not hasattr(self, 'active_phase_id'):
self.active_phase_id = 1
else:
self.set_active_grain_map()
self._init_phase(phase)
self.active_phase_id = 1
return
This method creates a class attribute grains that is associated with the Structured Array node GrainDataTable. Therefore, this attribute is an alias for the Pytable Node object associated to this array (see here how to handle these objects).
This grains attribute is used by many of the class methods, and hence must always be properly associated to the GrainDataTable. To ensure that this is the case, it is initialized in the _after_file_open method. Hence, this attribute is initialized at dataset opening, but also within the methods that close and re-open the dataset (like pause_for_visualization or repack_h5file).
This method also ensures that the class _phases and active_grain_map attributes are synchronized with the dataset content each time the file is opened. Those two attributes are discussed later on in this tutorial.
Microstructure Data Model & Getting Microstructure data
What data defines a Microstructure ?
To define the microstructure and geometry of a polycrystalline sample, the following information are needed:
the description of the geometry of the sample
inside the sample, the description of the crystalline phases that compose the sample
within each phase, the description of the grains that compose the phase
To easily identify these elements in a Microstructure dataset, each phase and grain in the microstructure has an identification number. These numbers are used to gather phase-wise or grain-wise data in data arrays, but are also used in fields to describe the geometry of these phases/grains. These features are detailed in the following.
We will now review the various elements of the Microstructure class data model. To illustrate them, we will use a dataset from the example data base of the pymicro package.
Opening a Microstructure file
Opening an already existing microstructure dataset is done exactly like opening a SampleData dataset. We will look at a material science dataset that is part of the Pymicro example datasets, which was already used in the previous tutorial on data compression.
End of explanation
# Save current directory
cwd = os.getcwd()
# move to example data directory
os.chdir(PYMICRO_EXAMPLES_DATA_DIR)
# unarchive the dataset
os.system(f'tar -xvf {tar_file}')
# get back to UserGuide directory
os.chdir(cwd)
Explanation: This file is zipped in the package to reduce its size. We will have to unzip it to use it and learn how to reduce its size with the SampleData methods. If you are just reading the documentation and not executing it, you may just skip this cell and the next one.
End of explanation
# import SampleData class
from pymicro.crystal.microstructure import Microstructure
# import Numpy
import numpy as np
# Open Microstructure dataset
micro = Microstructure(filename=dataset_file)
Explanation: Let us now open the dataset, using the Microstructure class constructor this time:
End of explanation
print(micro)
Explanation: The dataset is now open, we can now look at its content:
End of explanation
print(micro.get_description())
Explanation: We can also print the description attribute of the dataset to read information written by its creators:
End of explanation
micro.print_node_info('PhaseData')
micro.print_node_info('phase_01')
Explanation: As you can see, this dataset is a digital twin of a real polycrystalline microstructure of a grade 2 Titanium sample, gathering both experimental and numerical data obtained through Diffraction Contrast Tomography imaging, and FFT-based mechanical simulation.
This dataset will serve as an illustration of the Microstructure class data model. In the dataset content printed above, we indeed recognize the 4 groups defined by the minimal_data_model. They all contain essential data to represent the microstructure, and are presented in detail in the next subsections.
Phases
Obviously, the first information that must be provided when describing a polycrystalline material sample, is its composition. Polycrystalline samples may be composed of multiple phases, having all distinct crystallographic and physical properties. The PhaseData group in the Microstructure data model is designed to store this essential information on the various material phases composing the sample, and their properties.
We can see in the dataset content above that this Group only contains another group phase_01. Their content is:
End of explanation
phase_01 = micro.get_phase(phase_id=1)
print(phase_01)
Explanation: As you may observe, these groups only contain data in the form of attributes, i.e. metadata, on the crystalline phases that compose the dataset. The PhaseData group has as many children groups as there are phases in the dataset. Each one of these groups has a predefined name, following the pattern phase_ + phase_number. They provide the name, number, and crystallographic (symmetry, lattice parameters), chemical (formula) and physical (elasticity constants) information of each phase.
<div class="alert alert-warning">
**Warning**
Working with multiple phases in `Microstructure` datasets is still a feature under development, not yet stable. It is possible to have multiple phase groups within a dataset, but some class methods working with Phases may not yet be compatible with multiple phase datasets.
</div>
Getting Phase objects from a dataset
The metadata content of a phase_XX Group can be retrieved as a Pymicro Phase object, which is a container object, with the get_phase method:
End of explanation
micro.print_node_info('CellData')
Explanation: The class has a _phases private attribute, which is a list of Pymicro CrystallinePhase objects, synchronized with the content of the PhaseData group. Calling the sync_phases method ensures this synchronization. It is also ensured by the _after_file_open method each time the file is opened.
The CellData Group
Group aim and content
We will now move to the description of the CellData group. As you can see, it is an Image Group (a 3D image in the case of this example file). The CellData group is aimed at storing descriptions of the polycrystalline microstructure of the sample, in the form of images. They may describe the geometry and orientation of the grains, the presence of damage or the micromechanical state of the sample.
These images can be 2D or 3D, and be:
outputs of real imaging experiments (DCT, CT, EBSD ...)
digitally generated microstructure images (Voronoi tessellations, outputs from software like Neper or DREAM3D...)
outputs of numerical simulations techniques that provide images (such as FFT-based solvers)
This group is a classical Image Group, with the usual set of attributes (see dedicated tutorial):
End of explanation
mask = micro.get_mask()
phase_map = micro.get_phase_map()
grain_map = micro.get_grain_map()
print(f'List of values in the mask:\n {np.unique(mask)}\n')
print(f'List of values in the phase map:\n {np.unique(phase_map)}\n')
print(f'List of values in the grain map:\n {np.unique(grain_map)}\n')
print(f'Number of grains in the grain map: {len(np.unique(grain_map))}')
Explanation: Group minimal content
The minimal_data_model of the class contains three field data items that are attached to the CellData group. These scalar fields of integers allow to completely describe the geometry and microstructure of the polycrystalline material sample associated to the Microstructure dataset. They are:
mask: a field describing the geometry of the sample. It has a 1 value inside the sample, and a 0 value outside the sample (typically, segmented X-ray Contrast Tomography outputs can be used as Microstructure mask).
phase_map: a field indicating for each pixel/voxel the ID of the phase in which it is located. In areas of the Image that are not part of the material sample, the phase_map field takes a zero or negative value.
grain_map: a field indicating for each pixel/voxel the ID of the grain in which it is located. In areas of the Image that are not part of the material sample, the grain_map field takes a zero or negative value.
You can retrieve the value of these fields as shown in the previous tutorials (get_field, attribute or dict-like access). You can also use the dedicated Microstructure class methods, which always return arrays of dimension 3 (with a third dimension of size 1 when getting 2D image fields), used just below:
End of explanation
# print active grain map
print(f'The active grain map is {micro.active_grain_map}')
# change active grain map
micro.set_active_grain_map('grain_map_raw')
print(f'The active grain map is {micro.active_grain_map}')
Explanation: As you can see, the phase map is not defined here, which is not a problem when the sample only has one crystalline phase (which is the case here). The mask also contains only 1 values, which means that the dataset represents a full cube of the sample microstructure. The grain map contains 111 grain ID values.
The active grain map class attribute
It may be relevant, for some datasets, to store several fields for the same information. For instance, when reconstructing a microstructure from an imaging experiment, the microstructure in the reconstruction process raw output may be incomplete, and require some image processing. In that case, storing both versions of the array (raw reconstruction output and image after processing) is relevant.
In the specific case of the grain_map field, if you have several arrays for this information in your dataset, you may use the active_grain_map class attribute to specify which one is to be considered as the sample main grain map by setting the Microstructure.active_grain_map attribute to a name of this field data item. This implies that the get_grain_map method will return specifically this field, and that all Microstructure methods working with the sample grain map will use this field as well.
Our example dataset has two versions of the grain map information: the grain_map and grain_map_raw fields. They represent the processed and raw versions of the DCT reconstruction algorithm output data. We will use them to illustrate the role of the active_grain_map attribute.
You can set its value with the set_active_grain_map method:
End of explanation
# get both grain maps arrays
grain_map = micro['grain_map']
grain_map_raw = micro['grain_map_raw']
# check class method return grain map
print(f'Is the active grain map equal to "grain_map" array ? {np.all(grain_map == micro.get_grain_map())}')
print(f'Is the active grain map equal to "grain_map_raw" array ? {np.all(grain_map_raw == micro.get_grain_map())}\n')
# let us change again the grain map and redo the test
micro.set_active_grain_map('grain_map')
print(f'Is the active grain map equal to "grain_map" array ? {np.all(grain_map == micro.get_grain_map())}')
print(f'Is the active grain map equal to "grain_map_raw" array ? {np.all(grain_map_raw == micro.get_grain_map())}')
Explanation: We can now check the return of get_grain_map:
End of explanation
micro.view_slice(display=False)
Explanation: <div class="alert alert-info">
**Note**
Note that the same mechanism will be implemented soon for the phase map field.
</div>
Visualize sample microstructure image: View Slice method
These three arrays, as field data items of the CellData group, can be visualized by opening the HDF5 dataset file with the Paraview software, as the Microstructure class inherits this feature from SampleData. In addition, the Microstructure class provides a visualization method to plot a slice of the microstructure (hence the whole image if it is a 2D microstructure) using the grain_map and mask arrays.
This method is the view_slice method. It has many input arguments, but can be called without any. In that case, you should obtain an output like this:
<img src="./Images/Tutorial_7/Slice1.png" width="40%">
If you are executing interactively this Notebook, you may try to reproduce this figure by setting the display option to True in the following line:
End of explanation
micro2 = Microstructure(filename=os.path.join(PYMICRO_EXAMPLES_DATA_DIR,'t5_dct_slice_data.h5'))
micro2.view_slice(display=False) # change display to True to try to reproduce the figure below
del micro2
Explanation: When no argument is provided, the method plots the middle (X,Y)-wise slice of the sample grain map, with a random color map for the grain ids, and shows the mask on the foreground in transparency mode. Here the mask cannot be actually seen as it is uniform on the slice.
Let us plot a slice of another example dataset that allows us to see the mask:
End of explanation
import matplotlib.pyplot as plt
# set display to True to try reproducing the figure below !
micro.view_slice(slice=15, color='grain_ids', show_grain_ids=True, highlight_ids=[98,953,1019,1335,1534,1560,1905],
display=False)
Explanation: You should get a figure like this one:
<img src="./Images/Tutorial_7/Slice2.png" width="40%">
The mask shows two regions, a black region representing the outside of the sample, and a red region, representing the sample geometry. The grain map appear below this transparent mask color layer.
The view_slice method has many optional arguments. The most important are:
slice: this argument allows you to choose the index of the slice that is plotted. The default value is the middle slice.
color: allows you to choose the color map used to plot the grains. random is the default value; alternatives are grain_ids, ipf (inverse pole figure coloring) and schmid (plots the intensity of the grain maximal Schmid factor for the load direction specified by the axis argument, and for the slip system object provided in the slip_system argument)
show_mask: set to False if you do not want to plot the mask
show_grain_ids: set to True to annotate the grains with their ID number
show_slip_traces: set to True to annotate the grains with the trace of a slip plane (provided as a slip system object to the hkl_planes argument)
display: if set to False, the image is not plotted. The matplotlib figure and axis created are returned by the method in any case.
You will find below two examples. If you are reading through the Notebook version of the documentation, you may try to change the value of these arguments to experiment with the various possibilities offered by the view_slice method.
End of explanation
# get one basal slip system to compute its schmid factors and plot them on the slice
from pymicro.crystal.lattice import SlipSystem
lattice = micro.get_phase().get_lattice()
slip_system = lattice.get_slip_systems('basal')[1]
# plot slice with schmid factor colormap
# set display to True to try reproducing the figure below !
micro.view_slice(slice=15, color='schmid', slip_system=slip_system, display=False)
Explanation: You should get:
<img src="./Images/Tutorial_7/Slice3.png" width="40%">
End of explanation
# Get two slip planes (basal and one prismatic) to plot slip plane traces on the selected grains,
# coloured with inverse pole figure color map
plane_list = []
slip_system = lattice.get_slip_systems('basal')[0]
plane_list.append(slip_system.get_slip_plane())
slip_system = lattice.get_slip_systems('prism')[1]
plane_list.append(slip_system.get_slip_plane())
# set display to True to try reproducing the figure below !
micro.view_slice(slice=15, color='ipf', slip_system=slip_system, show_slip_traces=True, display=False,
hkl_planes=plane_list, highlight_ids=[98,953,1019,1335,1534,1560,1905])
print(plane_list)
Explanation: You should get:
<img src="./Images/Tutorial_7/Slice4.png" width="40%">
End of explanation
# retrieve table as numpy structured array with dictionary like access
GrainDataTable = micro['GrainDataTable']
# get table columns from class methods and compare to numpy array
grain_ids = micro.get_grain_ids()
print(f'grain ids equal ? {np.all(grain_ids == GrainDataTable["idnumber"])}')
grain_centers = micro.get_grain_centers()
print(f'grain centers equal ? {np.all(grain_centers == GrainDataTable["center"])}')
grain_volumes = micro.get_grain_volumes()
print(f'grain volumes equal ? {np.all(grain_volumes == GrainDataTable["volume"])}')
grain_bboxes = micro.get_grain_bounding_boxes()
print(f'grain bounding boxes equal ? {np.all(grain_bboxes == GrainDataTable["bounding_box"])}')
grain_rodrigues = micro.get_grain_rodrigues()
print(f'grain orientations equal ? {np.all(grain_rodrigues == GrainDataTable["orientation"])}')
Explanation: You should get:
<img src="./Images/Tutorial_7/Slice5.png" width="40%">
The GrainData Group
Group aim and content
We will now move to the description of the GrainData group. As you can see, it is a classical HDF5 Group. The GrainData group is aimed at storing statistical data describing the sample grains. In the data model, as well as in the example dataset, this Group contains only one data item, the GrainDataTable.
The GrainDataTable is a structured array that contains the statistical data describing the grains. Its description in the Pymicro code is:
python
class GrainData(tables.IsDescription):
    """Description class specifying structured storage of grain data in
    the Microstructure class, in HDF5 node /GrainData/GrainDataTable."""
    # grain identity number
    idnumber = tables.Int32Col()  # Signed 32-bit integer
    # grain volume
    volume = tables.Float32Col()  # 32-bit float
    # grain center of mass coordinates
    center = tables.Float32Col(shape=(3,))  # 32-bit float
    # Rodrigues vector defining grain orientation
    orientation = tables.Float32Col(shape=(3,))  # 32-bit float
    # Grain Bounding box
    bounding_box = tables.Int32Col(shape=(3, 2))  # Signed 32-bit integer
As you can see, each row contains:
the identity number of the grain
two columns describing the grain geometry: grain volume, position of grain center of mass
the orientation of the grain provided as a Rodrigues vector
the indices of the grain bounding box in the CellData image field arrays
Getting information on grains from Grain Data Table
As for the CellData group items, you can retrieve the GrainDataTable column values as shown in the previous tutorials (get_node, attribute or dict-like access). You can also use the dedicated Microstructure class methods, used just below:
End of explanation
centers = micro.get_grain_positions()
print(f'The position of the 10 first grain centers of mass are:\n {centers[:10]}\n')
volume_fractions = micro.get_grain_volume_fractions()
print(f'The 10 first grain volume fractions are:\n {volume_fractions[:10]}\n')
volume_fr = micro.get_grain_volume_fraction(1335)
print(f'Volume fraction of grain 1335 is {volume_fr*100}%')
Explanation: Other methods to get specific grain data are also available in the class interface:
End of explanation
# get the grain object of a specific grain
grain = micro.get_grain(1335)
print(f'Grain 1335 grain object:\n {grain}')
Schmid = grain.schmid_factor(lattice.get_slip_systems('basal')[0])
print(f'Schmid factor of grain {grain.id} for first basal slip system is {Schmid}')
# get a list of all grain objects in the microstructure
grains_list = micro.get_all_grains()
print(f'First 2 grain objects of the microstructure:\n {grains_list[:2]}')
Explanation: Getting grain objects
Pymicro also has Grain objects that are specific containers equivalent to a row of the dataset GrainDataTable. You can get them with the following methods:
End of explanation
print(micro.grains[0]['center'],'\n')
print(micro.grains[4:10]['orientation'])
Explanation: The grains class attribute
The Microstructure.grains attribute is, as mentioned earlier, an alias for the Pytables node associated to the GrainDataTable data item. As such, it allows to manipulate and interact with the GrainDataTable content directly in the dataset.
You can use this attribute to access grain data just like you would manipulate a Numpy structured array:
End of explanation
# iterate through grains with ID number below 100
for g in micro.grains:
if g["idnumber"] > 100:
break
print(f'Grain {g["idnumber"]} center of mass is located at {g["center"]}')
Explanation: This attribute can also be iterated:
End of explanation
original_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR,'t5_dct_slice_data.h5')
micro_copy = Microstructure.copy_sample(src_micro_file=original_file, dst_micro_file='micro_copy',
get_object=True, autodelete=True, overwrite=True)
print(micro_copy)
del micro_copy
Explanation: The MeshData Group
The MeshData group is aimed at storing descriptions of the polycrystalline microstructure of the sample, in the form of a mesh. In the present dataset, this group is used as a container group for a Mesh Group, grains_mesh. Mesh support has not yet been developed for the Pymicro Microstructure class, and hence the data model of this group is for now empty.
Additional data items
As you can see above, the example dataset also contains data items that are not defined in the data model of the Microstructure class (the Amitex_Results Group, several fields of the CellData image...). Obviously, as Microstructure is a SampleData child class, you may add any additional data item to your datasets, as you would with SampleData datasets.
This concludes the presentation of the Microstructure class data model and data access.
Creating and setting Microstructures
Now that the class data model has been presented, we will now introduce how to create and fill Microstructure objects.
There are three ways to create a Microstructure dataset:
Creating an empty microstructure dataset
Copying an existing microstructure dataset, or a croped version of this microstructure
Create a Microstructure object from a compatible data file containing microstructure data. These files can be outputs of imaging techniques (reconstruction of DCT or EBSD scans for instance) or microstructure generation tools (such as Neper)
The third point will be the subject of a detailed and specific Notebook. The first will be presented further in this tutorial to build a full microstructure dataset. Hence, we will start by presenting how a Microstructure object/dataset can be created from an already existing one.
From an existing Microstructure
Copy an existing Microstructure
As for SampleData datasets, Microstructure datasets can be copied from an already existing one, using the copy_sample method:
End of explanation
micro_crop = micro.crop(x_start=30, x_end=70, y_start=30, y_end=70, z_start=30, z_end=70, crop_name='test_crop',
autodelete=True)
micro_crop.print_dataset_content(short=True)
micro_crop.view_slice(display=False) # set to True to try reproducing the figure below !
Explanation: Crop an existing Microstructure
Sometimes, it may be desirable not to copy a complete microstructure, but only a specific region of it, to build a dataset dedicated to this region. For that, you may use the crop method of the Microstructure class, which returns a new microstructure object.
The method:
creates a new Microstructure dataset, with the same name plus the suffix _crop, or the name specified by the optional argument crop_name
crops all fields of the CellData group of the original Microstructure, by extracting the subregion indicated by the x_start, x_end, y_start, y_end, z_start, z_end arguments (bounds indices of the cropped region). Then, it adds them to the CellData group of the new Microstructure.
fills the GrainDataTable of the new microstructure with only the grains contained in the cropped region, and recomputes the grains geometric data for the new grain map, unless argument recompute_geometry is set to False.
Like the copy_sample method, the crop method also has an autodelete optional argument that sets the autodelete mode of the cropped microstructure instance.
Let us try to crop a small region of our example microstructure dataset:
End of explanation
micro2 = Microstructure(filename='Crop_remake.h5', overwrite_hdf5=True, autodelete=True)
print(micro2)
print(micro2.GrainDataTable)
Explanation: You should get:
<img src="./Images/Tutorial_7/Slice6.png" width="40%">
You can observe that the data that was not in the CellData group nor in the class data model in the original file has not been added to the cropped Microstructure (the AmitexResults Group for instance).
<div class="alert alert-warning">
**Warning**
Cropping a microstructure can be long if the original microstructure is heavy and has a lot of fields in the `CellData` group. If you only want to crop some of these fields, you may want to create a new microstructure, add to its `CellData` group only the fields you want to crop, and then create your crop from this new instance, as sketched below.
</div>
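A possible workflow for that case, sketched below with the objects of this tutorial (hypothetical file name; only the grain map is kept before cropping):
python
# Sketch: build a lightweight copy holding only the grain map, then crop it
micro_light = Microstructure(filename='Light_micro.h5', overwrite_hdf5=True, autodelete=True)
micro_light.set_grain_map(micro.get_grain_map(), micro.get_voxel_size())
light_crop = micro_light.crop(x_start=30, x_end=70, y_start=30, y_end=70,
                              z_start=30, z_end=70, autodelete=True)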
Creating and filling an Empty Microstructure
We will now see how to create from scratch a complete Microstructure dataset. As an exercise for this tutorial, we will attempt to recreate the cropped microstructure that has been created in the cell just above.
For that, we first need to create an empty Microstructure object. The Microstructure class can be used as the SampleData class constructor (see here):
End of explanation
# Crop manually the original grain map
grain_map_original = micro.get_grain_map()
grain_map_crop = grain_map_original[30:70,30:70,30:70]
# Create a mask array full of ones with appropriate shape
mask_crop = np.ones_like(grain_map_crop)
Explanation: The new microstructure has been created, with the complete class data model filled with empty data items. We will now fill it with data corresponding to the cropped region in the previous subsection.
Setting CellData items
As an Image Group, you can add fields to the CellData with the SampleData method add_field. In the specific case of the Microstructure class, you can use specific methods to set the value of the fields that are part of the class data model: set_mask , set_grain_map and set_phase_map.
We are trying to recreate a cropped microstructure with a uniform mask and no phase map. We need to create the appropriate grain map and mask arrays to set their value in the dataset:
End of explanation
# retrieve voxel size of original microstructure
voxel_size = micro.get_voxel_size()
# set the grain map of our new microstructure
micro2.set_grain_map(grain_map_crop, voxel_size)
# set the mask
micro2.set_mask(mask_crop)
# visualize slice of added arrays
micro2.view_slice(display=False) # set to True to try reproducing the figure below !
Explanation: Now that we have created our arrays, we can set the CellData fields. Note that when you add the first CellData field, you have to specify a pixel/voxel size to set the scale of the image.
End of explanation
uncertainty_map_crop = micro['uncertainty_map'][30:70,30:70,30:70]
micro2.add_field(gridname='CellData', fieldname='uncertainty_map', array=uncertainty_map_crop)
grain_map_raw_crop = micro['grain_map_raw'][30:70,30:70,30:70]
micro2.add_field(gridname='CellData', fieldname='grain_map_raw', array=grain_map_raw_crop)
micro2.print_group_content('CellData', short=True)
Explanation: You should get:
<img src="./Images/Tutorial_7/Slice6.png" width="40%">
To add the other fields that were part of the original microstructure Image Group, we have to go back to SampleData methods:
End of explanation
phase = micro.get_phase(1)
print(phase)
micro2.set_phase(phase)
print(micro2.get_phase_ids_list())
print(micro2.get_phase(1))
Explanation: Setting the phase
Likewise, we can take advantage of the get_phase and set_phase methods to transfer the phase data from the original dataset to our new one. The set_phase method takes as argument a pymicro CrystallinePhase object. These objects contain an identification number for the phase. If a phase with the same number already exists in the dataset, it is overwritten by the input one.
Let us add phase data to our dataset:
End of explanation
micro2.build_grain_table_from_grain_map()
print(micro2.GrainDataTable)
Explanation: Note that phases can also be added with the set_phases method, that takes as input a list of pymicro CrystallinePhase objects. You can also use the add_phase method, that adds a CrystallinePhase object to the dataset with the next available phase identification number of the microstructure.
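For instance (a minimal sketch reusing the phase object retrieved above):
python
micro2.set_phases([phase])  # takes a list of CrystallinePhase objects
# or: register the phase under the next available phase identification number
micro2.add_phase(phase)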
Setting the GrainDataTable
The Microstructure class offers several ways to fill the GrainDataTable, that are successively reviewed in this subsection.
From the grain map
As detailed earlier, the GrainDataTable contains data describing the grains' position and morphology. These values can be computed from the grain map, which provides the full geometry of each grain. Specific methods of the Microstructure class allow to compute those values and automatically fill the GrainDataTable with them. They are:
recompute_grain_centers: computes and fills the center column of the GrainDataTable from the grains geometry in grain map
recompute_grain_volumes: computes and fills the volume column of the GrainDataTable from the grains geometry in grain map
recompute_grain_bounding_boxes: computes and fills the bounding_box column of the GrainDataTable from the grains geometry in grain map
If you need to call them all, you can do it at once with the build_grain_table_from_grain_map method, which will first synchronize the grain ids that are in the grain map and the GrainDataTable, and then call the 3 previous methods to fill the geometric grain data in the table. We will use it for our tutorial exercise:
End of explanation
# get the list of grain ids of the new microstructure
new_ids = micro2.get_grain_ids().tolist()
# get the orientations of this list of grains
orientations = micro.get_grain_rodrigues(new_ids)
print(orientations)
Explanation: The table has been updated with the identity numbers and geometric data of the 9 grains. You can also see that the method has added a random orientation to each grain. If you want to avoid this, you may use instead the compute_grains_geometry method, which does not generate random orientations for the grains but fills the orientation column with zeros.
From data arrays
So, at this point, our GrainDataTable has its geometric values in accordance with the grain map, but wrong grain orientations, that have been randomly generated. We can however get the correct grain orientations from the original example microstructure dataset, with the get_grain_rodrigues method:
End of explanation
micro2.set_orientations(orientations)
print(micro2.GrainDataTable['orientation'])
Explanation: Now, we can use the set_orientations method to add this information to our data table:
End of explanation
grain_index = np.where(micro2.GrainDataTable['idnumber'] == 485)
print(f'Grain 485 center is {micro2.GrainDataTable[grain_index]["center"]}')
print(f'Grain 485 volume is {micro2.GrainDataTable[grain_index]["volume"]}')
Explanation: Similar methods exist for the rest of the data in the table:
set_centers
set_bounding_boxes
set_volumes
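As a sketch of the call pattern for these three setters (assuming they take a data array as input, like set_orientations does):
python
# Sketch: set the geometric grain data columns from precomputed arrays
# (here the arrays already stored in the table are simply written back)
micro2.set_centers(micro2.get_grain_centers())
micro2.set_volumes(micro2.get_grain_volumes())
micro2.set_bounding_boxes(micro2.get_grain_bounding_boxes())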
Getting/Setting data for/from a specific grain
To get information from a specific grain, you use the GrainDataTable as a standard Numpy structured array:
End of explanation
for g in micro2.grains:
if g['idnumber'] == 485:
print(f'Grain 485 center is {g["center"]}')
print(f'Grain 485 volume is {g["volume"]}')
Explanation: But you can also iterate the grains attribute of the class. By doing this, you will get at each iteration a Pytables Row object representing a row of data in the table, i.e. a grain. You can access its values exactly as if it was a Numpy structured array:
End of explanation
# get old orientation value
grain_orientation = micro2.GrainDataTable[grain_index]['orientation']
print(f'The orientation of the grain is {micro2.GrainDataTable[grain_index]["orientation"]}')
# iterate to find the grain and set its orientation to a random value
for g in micro2.grains:
if g['idnumber'] == 485:
g['orientation'] = np.random.rand(3)
g.update()
print(f'The new orientation of the grain is {micro2.GrainDataTable[grain_index]["orientation"]}')
# Set back the original value of the orientation
for g in micro2.grains:
if g['idnumber'] == 485:
g['orientation'] = grain_orientation
g.update()
print(f'The orientation of the grain is back at {micro2.GrainDataTable[grain_index]["orientation"]}')
Explanation: You can also use this process to set specifically some values for one grain. You can iterate the table to find your grain object, set one of its values as if it was a Numpy structured array. Then you have to use the specific update method of the Pytables Row class to set the value in the dataset, as follows:
End of explanation
# remove SampleData instance
del micro2
del micro
os.remove(dataset_file+'.h5')
os.remove(dataset_file+'.xdmf')
Explanation: Obviously, you can use the same method to get/set other columns of the table (centers, bounding boxes...)
We have now completed this introduction to the Microstructure class. More advanced features of the class are already implemented in the code, and many more will be added in the coming years. Specific tutorial Notebooks about these features will be released in the future, as well as examples presented in the documentation Cookbook.
We can now close our datasets, and remove the original unarchived file:
End of explanation |
3,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Python Tour of Data Science
Step1: 3.1 Facebook
There is two ways to scrape data from Facebook, you can choose one or combine them.
1. The low-level approach, sending HTTP requests and receiving JSON responses to / from their Graph API. That can be achieved with the json and requests packages (altough you can use urllib or urllib2, requests has a better API). The knowledge you'll acquire using that method will be useful to query other web APIs than FB. This method is also more flexible.
2. The high-level approach, using a Python SDK. The code you'll have to write for this method is gonna be shorter, but specific to the FB Graph API.
You will need an access token, which can be created with the help of the Graph Explorer. That tool may prove useful to test queries. Once you have your token, you may create a credentials.ini file with the following content
Step2: 3.1.1 Scrap with HTTP requests
3.1.1.1 Get the number of likes
The process is three-way
Step3: 3.1.1.2 Get posts
The process is similar here, except that the query and extraction are more complicated (because we work with more data). As you may have found out, FB returns at most 100 posts at a time. To get more posts, they provide paging, which we use to requests the next posts.
Step4: 3.1.2 Scrap with Facebook SDK
That method is much slower because it should retrieve the comments and likes, not only their number, for each post. The API is not expressive enough to do otherwise.
Step5: 3.2 Twitter
There exists a bunch of Python-based clients for Twitter. Tweepy is a popular choice.
You will need to create a Twitter app and copy the four tokens and secrets in the credentials.ini file
Step6: The code is much simpler for Twitter than Facebook because Tweepy handles much of the dirty work, like paging.
Step7: 4 Prepare and save data
To facilitate our analysis, we first prepare the data.
1. Convert floating point numbers to integers.
1. Convert Facebook post time from string to datetime.
That is not necessary, but it'll allow to e.g. compare posting dates with standard comparison operators like > and <.
Step8: Now that we collected everything, let's save it in two SQLite databases.
Step9: 5 Data analysis
Answer the questions using pandas, statsmodels, scipy.stats, bokeh.
Step10: 5.1 Number of posts
Step11: 5.2 Most liked
Looks like we're really into rankings !!
Step12: 5.3 Engagement
Step13: 5.4 Text length
There is a striking difference here
Step14: 5.5 Posting time
We can clearly observe the office hours.
Step15: Let's look if the time of posting influence the number of likes. Do you see a peak at 5am ? Do you really think we should post at 5am ? What's going on here ? | Python Code:
# Number of posts / tweets to retrieve.
# Small value for development, then increase to collect final data.
n = 4000 # 20
Explanation: A Python Tour of Data Science: Data Acquisition & Exploration
Michaël Defferrard, PhD student, EPFL LTS2
1 Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
As you probably don't have a company (yet?), you can either use your own social network profile as if it were the company's one or choose an established entity, e.g. EPFL. You will need to be registered in FB or Twitter to generate access tokens. If you're not, either ask a classmate to create a token for you or create a fake / temporary account for yourself (no need to follow other people, we can fetch public data).
At the end of the exercise, you should have two datasets (Facebook & Twitter) and have used them to answer the following questions, for both Facebook and Twitter.
1. How many followers / friends / likes has your chosen profile ?
2. How many posts / tweets in the last year ?
3. What were the 5 most liked posts / tweets ?
4. Plot histograms of number of likes and comments / retweets.
5. Plot basic statistics and an histogram of text lenght.
6. Is there any correlation between the lenght of the text and the number of likes ?
7. Be curious and explore your data. Did you find something interesting or surprising ?
1. Create at least one interactive plot (with bokeh) to explore an intuition (e.g. does the posting time plays a role).
2 Ressources
Here are some links you may find useful to complete that exercise.
Web APIs: these are the references.
* Facebook Graph API
* Twitter REST API
Tutorials:
* Mining the Social Web
* Mining Twitter data with Python
* Simple Python Facebook Scraper
3 Web scraping
Tasks:
1. Download the relevant information from Facebook and Twitter. Try to minimize the quantity of collected data to the minimum required to answer the questions.
2. Build two SQLite databases, one for Facebook and the other for Twitter, using pandas and SQLAlchemy.
1. For FB, each row is a post, and the columns are at least (you can include more if you want): the post id, the message (i.e. the text), the time when it was posted, the number of likes and the number of comments.
2. For Twitter, each row is a tweet, and the columns are at least: the tweet id, the text, the creation time, the number of likes (was called favorite before) and the number of retweets.
Note that some data cleaning is already necessary. E.g. there are some FB posts without message, i.e. without text. Some tweets are also just retweets without any more information. Should they be collected ?
End of explanation
import configparser
# Read the confidential token.
credentials = configparser.ConfigParser()
credentials.read('credentials.ini')
token = credentials.get('facebook', 'token')
# Or token = 'YOUR-FB-ACCESS-TOKEN'
import requests # pip install requests
import facebook # pip install facebook-sdk
import pandas as pd
page = 'EPFL.ch'
Explanation: 3.1 Facebook
There are two ways to scrape data from Facebook; you can choose one or combine them.
1. The low-level approach, sending HTTP requests and receiving JSON responses to / from their Graph API. That can be achieved with the json and requests packages (although you can use urllib or urllib2, requests has a better API). The knowledge you'll acquire using that method will be useful to query other web APIs than FB. This method is also more flexible.
2. The high-level approach, using a Python SDK. The code you'll have to write for this method is going to be shorter, but specific to the FB Graph API.
You will need an access token, which can be created with the help of the Graph Explorer. That tool may prove useful to test queries. Once you have your token, you may create a credentials.ini file with the following content:
[facebook]
token = YOUR-FB-ACCESS-TOKEN
End of explanation
# 1. Form URL.
url = 'https://graph.facebook.com/{}?fields=likes&access_token={}'.format(page, token)
#print(url)
# 2. Get data.
data = requests.get(url).json()
print('data:', data)
# Optionally, check for errors. Most probably the session has expired.
if 'error' in data.keys():
raise Exception(data)
# 3. Extract data.
print('{} has {} likes'.format(page, data['likes']))
Explanation: 3.1.1 Scrap with HTTP requests
3.1.1.1 Get the number of likes
The process is three-way:
1. Assemble an URL to query. The documentation of the FB Graph API is useful there. You can click on the URL to let your browser make the query and return the result.
2. Send an HTTP GET request, receive the results and interpret it as JSON (because Facebook sends data in JSON).
3. Explore the received data and extract what interests us, here the number of likes. If we don't get what we want (or if we get too much), we can modify the query url. Note that the hierarchical JSON format is exposed as a dictionary.
End of explanation
# 1. Form URL. You can click that url and see the returned JSON in your browser.
fields = 'id,created_time,message,likes.limit(0).summary(1),comments.limit(0).summary(1)'
url = 'https://graph.facebook.com/{}/posts?fields={}&access_token={}'.format(page, fields, token)
#print(url)
# Create the pandas DataFrame, a table which columns are post id, message, created time, #likes and #comments.
fb = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'comments'])
# The outer loop is to query FB multiple times, as FB sends at most 100 posts at a time.
while len(fb) < n:
# 2. Get the data from FB. At most 100 posts.
posts = requests.get(url).json()
# 3. Here we extract information for each of the received post.
for post in posts['data']:
# The information is stored in a dictionary.
serie = dict(id=post['id'], time=post['created_time'])
try:
serie['text'] = post['message']
except KeyError:
# Let's say we are not interested in posts without text.
continue
serie['likes'] = post['likes']['summary']['total_count']
serie['comments'] = post['comments']['summary']['total_count']
# Add the dictionary as a new line to our pandas DataFrame.
fb = fb.append(serie, ignore_index=True)
try:
# That URL is returned by FB to access the next 'page', i.e. the next 100 posts.
url = posts['paging']['next']
except KeyError:
# No more posts.
break
fb[:5]
Explanation: 3.1.1.2 Get posts
The process is similar here, except that the query and extraction are more complicated (because we work with more data). As you may have found out, FB returns at most 100 posts at a time. To get more posts, they provide paging, which we use to requests the next posts.
End of explanation
g = facebook.GraphAPI(token, version='2.7')
# We limit to 10 because it's slow.
posts = g.get_connections(page, 'posts', limit=10)
if 'error' in posts.keys():
# Most probably the session has expired.
raise Exception(data)
for post in posts['data']:
pid = post['id']
try:
text = post['message']
except KeyError:
continue
time = post['created_time']
likes = g.get_connections(pid, 'likes', summary=True, limit=0)
nlikes = likes['summary']['total_count']
comments = g.get_connections(pid, 'comments', summary=True, limit=0)
ncomments = comments['summary']['total_count']
print('{:6d} {:6d} {} {}'.format(nlikes, ncomments, time, text[:50]))
Explanation: 3.1.2 Scrap with Facebook SDK
That method is much slower because it should retrieve the comments and likes, not only their number, for each post. The API is not expressive enough to do otherwise.
End of explanation
import tweepy # pip install tweepy
auth = tweepy.OAuthHandler(credentials.get('twitter', 'consumer_key'), credentials.get('twitter', 'consumer_secret'))
auth.set_access_token(credentials.get('twitter', 'access_token'), credentials.get('twitter', 'access_secret'))
api = tweepy.API(auth)
user = 'EPFL_en'
followers = api.get_user(user).followers_count
print('{} has {} followers'.format(user, followers))
Explanation: 3.2 Twitter
There exists a bunch of Python-based clients for Twitter. Tweepy is a popular choice.
You will need to create a Twitter app and copy the four tokens and secrets in the credentials.ini file:
[twitter]
consumer_key = YOUR-CONSUMER-KEY
consumer_secret = YOUR-CONSUMER-SECRET
access_token = YOUR-ACCESS-TOKEN
access_secret = YOUR-ACCESS-SECRET
End of explanation
tw = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'shares'])
for tweet in tweepy.Cursor(api.user_timeline, screen_name=user).items(n):
serie = dict(id=tweet.id, text=tweet.text, time=tweet.created_at)
serie.update(dict(likes=tweet.favorite_count, shares=tweet.retweet_count))
tw = tw.append(serie, ignore_index=True)
Explanation: The code is much simpler for Twitter than Facebook because Tweepy handles much of the dirty work, like paging.
End of explanation
#fb.id = fb.id.astype(int)
fb.likes = fb.likes.astype(int)
fb.comments = fb.comments.astype(int)
tw.id = tw.id.astype(int)
tw.likes = tw.likes.astype(int)
tw.shares = tw.shares.astype(int)
from datetime import datetime
def convert_time(row):
return datetime.strptime(row['time'], '%Y-%m-%dT%H:%M:%S+0000')
fb['time'] = fb.apply(convert_time, axis=1)
from IPython.display import display
display(fb[:5])
display(tw[:5])
Explanation: 4 Prepare and save data
To facilitate our analysis, we first prepare the data.
1. Convert floating point numbers to integers.
1. Convert Facebook post time from string to datetime.
That is not necessary, but it'll allow to e.g. compare posting dates with standard comparison operators like > and <.
End of explanation
import os
folder = os.path.join('..', 'data', 'social_media')
try:
os.makedirs(folder)
except FileExistsError:
pass
filename = os.path.join(folder, 'facebook.sqlite')
fb.to_sql('facebook', 'sqlite:///' + filename, if_exists='replace')
filename = os.path.join(folder, 'twitter.sqlite')
tw.to_sql('twitter', 'sqlite:///' + filename, if_exists='replace')
Explanation: Now that we collected everything, let's save it in two SQLite databases.
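As a quick sanity check (a sketch, not part of the original exercise), each table can be read back with pandas, e.g.:
fb_check = pd.read_sql_table('facebook', 'sqlite:///' + os.path.join(folder, 'facebook.sqlite'))
print('Reloaded {} Facebook posts'.format(len(fb_check)))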
End of explanation
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
Explanation: 5 Data analysis
Answer the questions using pandas, statsmodels, scipy.stats, bokeh.
End of explanation
date = datetime(2016, 9, 4)
datestr = date.strftime('%Y-%m-%d')
print('Number of posts after {}: {}'.format(datestr, sum(fb.time > date)))
print('Number of tweets after {}: {}'.format(datestr, sum(tw.time > date)))
Explanation: 5.1 Number of posts
End of explanation
display(fb.sort_values(by='likes', ascending=False)[:5])
display(tw.sort_values(by='likes', ascending=False)[:5])
Explanation: 5.2 Most liked
Looks like we're really into rankings !!
End of explanation
pd.concat([fb.describe(), tw.loc[:,'likes':'shares'].describe()], axis=1)
fig, axs = plt.subplots(1, 4, figsize=(15, 5))
fb.likes.plot(kind='box', ax=axs[0]);
fb.comments.plot(kind='box', ax=axs[1]);
tw.likes.plot(kind='box', ax=axs[2]);
tw.shares.plot(kind='box', ax=axs[3]);
fb.hist(bins=20, log=True, figsize=(15, 5));
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
tw.loc[:,'likes'].hist(bins=20, log=True, ax=axs[0]);
tw.loc[tw.shares < 200, 'shares'].hist(bins=20, log=True, ax=axs[1]);
Explanation: 5.3 Engagement: likes, comments, shares
End of explanation
def text_length(texts):
lengths = np.empty(len(texts), dtype=int)
for i, text in enumerate(texts):
lengths[i] = len(text)
plt.figure(figsize=(15, 5))
prop = lengths.min(), '{:.2f}'.format(lengths.mean()), lengths.max()
plt.title('min = {}, mean={}, max = {}'.format(*prop))
plt.hist(lengths, bins=20)
text_length(tw.text)
text_length(fb.text)
Explanation: 5.4 Text length
There is a stricking difference here:
1. On Twitter, almost all tweets reach the 140 characters limit.
2. The distribution is more Gaussian on Facebook.
End of explanation
fb.id.groupby(fb.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='y', figsize=(15,5));
tw.id.groupby(tw.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='g', figsize=(15,5));
Explanation: 5.5 Posting time
We can clearly observe the office hours.
End of explanation
fb.likes.groupby(fb.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
plt.figure()
tw.likes.groupby(tw.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
Explanation: Let's look if the time of posting influences the number of likes. Do you see a peak at 5am ? Do you really think we should post at 5am ? What's going on here ?
End of explanation |
3,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Salvar y cargar en archivos de disco
NumPy puede salvar y recuperar datos desde archivos de disco en varios formatos
Step1: Salvar los datos en un archivo de texto
Step2: Recuperar los datos desde un archivo de texto
Step3: Formato binario de NumPy
Step4: Recuperar los datos desde el archivo | Python Code:
import numpy as np
a = np.random.randn(10,4)
a
Explanation: Saving to and loading from files on disk
NumPy can save and retrieve data from disk files in several formats
End of explanation
# As text, this example structure takes up 1022 bytes on disk
np.savetxt('datosRandom.txt',a)
Explanation: Save the data to a text file
End of explanation
b = np.loadtxt('datosRandom.txt')
np.array_equal(a,b)
Explanation: Retrieve the data from a text file
End of explanation
# In the binary format this structure takes up only 400 bytes
np.save('datos.npy',a)
Explanation: NumPy binary format
End of explanation
c = np.load('datos.npy')
np.array_equal(c,a)
Explanation: Retrieve the data from the file
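Beyond the text and .npy formats used above, NumPy can also store several arrays in a single .npz archive; a minimal sketch (not part of the original example):
np.savez('datos.npz', a=a, c=c)
datos = np.load('datos.npz')
np.array_equal(datos['a'], datos['c'])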
End of explanation |
3,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="AW&H2015.tiff" style="float
Step1: We want to explore drawdown as a function of time
So, set up an array of times to evaluate, and loop over them. Also, we can specify a distance from each well at which to calculate the curve of drawdown over time.
Step2: We will want to normalize our plots
Let's figure out the maximum drawdown to use for setting our colorbar on the plots.
Step3: Clobber the PNG output files
We will make a sequence of PNG files from which to render the movie. It's a good idea to delete the existing ones from our folder.
Do this kind of thing with caution!
You are deleting files and will not be asked to confirm anything!
Step4: Loop over time and make figures
We will make a figure with the drawdown contours over the whole grid.
Step5: Let's make an animation!
Step6: Finally, we can embed the drawdown movie into the notebook
We could also just look at it outside of the notebook by looking in the working subdirectory P3-4_Theis. Note too that you can manually move through time by grabbing the time slider in the playbar. | Python Code:
# Problem 3.4, page 107 Anderson, Woessner and Hunt (2015)
# import Python libraries/functionality for use in this notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.special
import sys, os
from mpl_toolkits.axes_grid1 import make_axes_locatable
# return current working directory
os.getcwd()
# Set the name of the path to the model working directory
dirname = "P3-4_Theis"
datapath = os.getcwd()
modelpath = os.path.join(datapath, dirname)
print 'Name of model path: ', modelpath
# Now let's check if this directory exists. If not, then we will create it.
if os.path.exists(modelpath):
print 'Model working directory already exists.'
else:
print 'Creating model working directory.'
os.mkdir(modelpath)
os.chdir(modelpath)
os.getcwd()
#Define an function, class, and object for Theis Well analysis
def well_function(u):
return scipy.special.exp1(u)
def theis(Q, T, S, r, t):
u = r ** 2 * S / 4. / T / t
s = Q / 4. / np.pi / T * well_function(u)
return s
class Well(object):
def __init__(self, x, y, rate, name):
self.x = float(x)
self.y = float(y)
self.rate = rate
self.name = name
self.swell = None
return
# Parameters needed to solve Theis
r = 500 # m
T = 2000 # m^2/d (100 m/d Kh x 20 m thick)
S = 0.00002 # unitless
t = 30. # days
#Q = pumping rate # m^3/d - but we'll enter it below in the well info
# Well information
well_list =[]
well_obj = Well(250, 250, 5760, "Well A") # 4000 L/min = 5760 m^3/d
well_list.append(well_obj)
well_list.append(Well(550, 250, 17280, "Well B")) # 12000 L/min = 17280 m^3/d
# Grid information as requested in problem
x = np.linspace(0, 800., 50) # x-direction 0 to 800 m, 50 m increments
y = np.linspace(0, 500., 50) # y-direction 0 to 500 m, 50 m increments
xgrid, ygrid = np.meshgrid(x, y) # make a grid with these coordinates
Explanation: <img src="AW&H2015.tiff" style="float: left">
<br />
<br />
<br />
Problem P3.4 Theis solution with two pumping wells
In this notebook, we will work through one tutorial based the Theis solution for transient pumping, and investigate the superposition of drawdown from two interfering pumping wells. Two wells fully penetrate a 20-m-thick confined aquifer that is isotropic and homogeneous (Fig. P3.1). Storativity is estimated to be 2 x 10-5. The hydraulic conductivity is 100 m/d. The confining unit is composed of very low permeability material and is approximated as impermeable. Both wells have a radius of 0.5 m and are pumped continuously at a constant rate for 30 days; well A is pumped at 4000 L/min and well B is pumped at 12,000 L/min. Before pumping, the head is 100 m everywhere in the problem domain. The 800 m by 500 m problem domain in Fig. P3.1 is the near-field region of a problem domain that extends over many tens of square kilometers so that the aquifer effectively is of infinite extent and the composite cone of depression does not reach the boundaries after 30 days of pumping.
We simplified it to look like this:
<img src="P3.4_figure.tiff">
Below is an iPython Notebook that builds a Theis function and plots results.
[Acknowledgements: This tutorial was created by Randy Hunt and all failings are mine. The exercise here is modeled after example iPython Notebooks developed by Chris Langevin and Joe Hughes for the USGS Spring 2015 Python Training course GW1774]<br />
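As a quick sanity check of the functions defined above, one can evaluate the drawdown from well A alone at the requested distance and time (a sketch using the problem parameters; well B would be handled the same way):
# Drawdown from well A alone (Q = 5760 m^3/d) at r = 500 m after t = 30 days
s_A = theis(5760., 2000., 0.00002, 500., 30.)
print 'Drawdown from well A alone at r = 500 m, t = 30 d: {0:.2f} m'.format(s_A)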
End of explanation
times = np.linspace(0.,30.,31) # times from 0 to 30 days in 1-day increments (31 values)
rdist = 25 # this sets the distance to plot drawdown over time
print times
Explanation: We want to explore drawdown as a function of time
So, set up an array of times to evaluate, and loop over them. Also, we can specify a distance from each well at which to calculate the curve of drawdown over time.
End of explanation
#let's find the maximum drawdown
drawdown_grid_max = np.zeros(xgrid.shape, dtype=float)
for well_obj in well_list:
r = ((well_obj.x - xgrid)**2 + (well_obj.y - ygrid) ** 2) ** 0.5
s_max = theis(well_obj.rate, T, S, r, times[-1])
drawdown_grid_max += s_max
max_drawdown = np.max(drawdown_grid_max)
print max_drawdown
Explanation: We will want to normalize our plots
Let's figure out the maximum drawdown to use for setting our colorbar on the plots.
End of explanation
for cf in os.listdir(os.getcwd()):
if cf.endswith('.png'):
os.remove(cf)
Explanation: Clobber the PNG output files
We will make a sequence of PNG files from which to render the movie. It's a good idea to delete the existing ones from our folder.
Do this kind of thing with caution!
You are deleting files and will not be asked to confirm anything!
End of explanation
# Note that this section of code is saving figures for animation - not plotting them!
from IPython.display import clear_output
# to make our plots of drawdown over time at one point, we can
# predefine the response as np.nan. That way, when we plot incrementally
# as we calculate through time, only the times for which calculations
# have been made will appear using plt.plot()
for well_obj in well_list:
well_obj.swell = np.ones_like(times)*np.nan
# using "enumerate" we get both the iterant (t) and a counter (i)
for i,t in enumerate(times):
# the following stuff just writes out a status message to the screen
clear_output()
perc_done = (i/float(len(times)-1)) * 100
sys.stdout.write('working on time {0}: {1:2.2f}% complete'.format(t,
perc_done))
if i < len(times):
sys.stdout.flush()
# here's the end of the silly shenanigans of plotting out status to the screen
# now we calculate the drawdown for each time.
drawdown_grid = np.zeros(xgrid.shape, dtype=float)
for well_obj in well_list:
r = ((well_obj.x - xgrid)**2 + (well_obj.y - ygrid) ** 2) ** 0.5
s = theis(well_obj.rate, T, S, r, t)
well_obj.swell[i] = (theis(well_obj.rate, T, S, rdist, t))
drawdown_grid += s
# drawdown contour map (map view)
plt.subplot(1, 3, 1, aspect='equal')
im = plt.contourf(xgrid,
ygrid,
drawdown_grid,
np.linspace(0,max_drawdown,10))
# optional color bar configuration
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", "5%", pad="3%")
plt.colorbar(im, cax=cax).ax.invert_yaxis()
for well_obj in well_list:
plt.text(well_obj.x, well_obj.y, well_obj.name)
plt.title('Drawdown at time = {0:.0f}'.format(t))
# Let's finish with a drawdown-only plot --> make a second set of figures with only
# the drawdown contour map (these are the frames that get saved for the movie)
plt.subplot(1, 1, 1, aspect='equal')
im = plt.contourf(xgrid,
ygrid,
drawdown_grid,
np.linspace(0,max_drawdown,10))
plt.colorbar().ax.invert_yaxis()
for well_obj in well_list:
plt.text(well_obj.x, well_obj.y, well_obj.name)
plt.title('Drawdown at time = {0:.0f}'.format(t))
plt.savefig('s_only{0}.png'.format(i))
plt.clf() # clear the figure so contours, labels, and colorbars don't accumulate across time steps
Explanation: Loop over time and make figures
We will make a figure with the drawdown contours over the whole grid.
End of explanation
# for execution robustness, we need to determine where ffmpeg lives
# in general, you probably won't need to bother
import platform
from subprocess import check_output
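# note: binpath is assumed to have been defined earlier in the notebook; it is not set in this cell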
if 'Windows' in platform.platform():
if '64bit' in platform.architecture()[0]:
ffmpeg_path = os.path.join(binpath, 'ffmpeg.exe')
else:
ffmpeg_path = os.path.join(binpath, 'win32', 'ffmpeg.exe')
else:
#Assume it is in path on macos
ffmpeg_path = 'ffmpeg'
print 'ffmpeg_path is: ', ffmpeg_path
figfiles = ['s_only%d.png']
anmfiles = ['Theis_movie1.mp4']
# note the tricky way we can iterate over the elements of
# two lists in pairs using zip (if you wanted to add more plots)
for figfile,anmfile in zip(figfiles,anmfiles):
try:
os.remove(anmfile)
print 'Deleted the existing animation: ', anmfile
except:
pass
# now we do a system call, making the movie using command line arguments
# for ffmpeg
output = check_output([ffmpeg_path,
'-f', 'image2',
'-i', figfile,
'-vcodec', 'libx264',
'-pix_fmt', 'yuv420p',
anmfile])
Explanation: Let's make an animation!
End of explanation
from IPython.display import HTML
from base64 import b64encode
video = open(anmfiles[0], "rb").read()
video_encoded = b64encode(video)
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
Explanation: Finally, we can embed the drawdown movie into the notebook
We could also just look at it outside of the notebook by looking in the working subdirectory P3-4_Theis. Note too that you can manually move through time by grabbing the time slider in the playbar.
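As an aside (my addition, not part of the original notebook): on newer IPython versions the same embedding can be done in one line with the Video display class, for example
from IPython.display import Video
Video(anmfiles[0], embed=True)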
End of explanation |
3,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, youโre going to take a peek into the realm of neural network machine translation. Youโll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step22: Encoding
Implement encoding_layer() to create an Encoder RNN layer
Step25: Decoding - Training
Create a training decoding layer
Step28: Decoding - Inference
Create inference decoder
Step31: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step34: Build the Neural Network
Apply the functions you implemented above to
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step41: Batch and pad the source and target sequences
Step44: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step46: Save Parameters
Save the batch_size and save_path parameters for inference.
Step48: Checkpoint
Step51: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step53: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, youโre going to take a peek into the realm of neural network machine translation. Youโll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (30, 40)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# Split text into sentences
source_sentences, target_sentences = source_text.split('\n'), target_text.split('\n')
# Split sentences into words and convert to integers
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]
# Append <EOS> to all target sentences
for sentence in target_id_text:
sentence.append(target_vocab_to_int['<EOS>'])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
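As a toy illustration of what the function returns (the vocabularies here are made up, not the real ones built by helper):
src_vocab = {'new': 4, 'jersey': 5}
tgt_vocab = {'<EOS>': 1, 'new': 6, 'jersey': 7}
print(text_to_ids('new jersey', 'new jersey', src_vocab, tgt_vocab))
# ([[4, 5]], [[6, 7, 1]])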
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input_ = tf.placeholder(tf.int32, [None, None], name='input')
target = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return input_, target, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# Code taken from https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb
# Slice out the last column (will remove the last word ids in each batch)
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# Fill a new vector column with <GO> tags
go_fill = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])
# Concatenate the <GO> fill vector to the beginning of the batches
preprocessed = tf.concat([go_fill, ending], 1)
return preprocessed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
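Conceptually, a target batch such as [[10, 11, EOS], [12, 13, EOS]] becomes [[GO, 10, 11], [GO, 12, 13]]. A small standalone check (the ids and vocabulary below are made up) might look like:
demo_targets = tf.constant([[10, 11, 3], [12, 13, 3]])  # pretend 3 is the <EOS> id
demo_vocab = {'<GO>': 1}
with tf.Session() as sess:
    print(sess.run(process_decoder_input(demo_targets, demo_vocab, 2)))
# [[ 1 10 11]
#  [ 1 12 13]]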
End of explanation
from imp import reload
reload(tests)
def build_cell(lstm_size, keep_prob):
Build a basic LSTM cell with dropout
:param lstm_size: Number of LSTM units
:param keep_prob: Dropout keep value
# initializer = tf.random_uniform_initializer(-0.1, 0.1, seed=100)
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# Create the embedding layer and LSTM layer(s)
embed = tf.contrib.layers.embed_sequence(
rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size)
cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size, keep_prob) for _ in range(num_layers)])
# Join the embedding/LSTM layers together
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, sequence_length=source_sequence_length, dtype=tf.float32)
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
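For orientation, the tensor shapes involved (my annotation, with B = batch size and Ls = the longest source sequence in the batch) are roughly:
# embedded input:  [B, Ls, encoding_embedding_size]
# RNN output:      [B, Ls, rnn_size]
# RNN final state: a num_layers-long tuple of LSTMStateTuple(c=[B, rnn_size], h=[B, rnn_size])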
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# The training decoding layer takes training examples as input
helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer=output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(
decoder, impute_finished=True, maximum_iterations=max_summary_length)
return final_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# Broadcast the start sequence id to a list of start tokens, one for each batch
start_tokens = tf.tile(tf.constant([start_of_sequence_id]), [batch_size])
# The inference decoding layer feeds its own output back in as the next input
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer=output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(
decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
return final_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# Create the embedding layer; note that an embedding lookup is passed to the training decoder,
# while the whole embedding is passed to the inference decoder (pass the output back into itself)
dec_embedding = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size], -1, 1))
dec_embed = tf.nn.embedding_lookup(dec_embedding, dec_input)
dec_cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size, keep_prob) for _ in range(num_layers)])
# Define a dense layer for both training and inference decoders, outputs vocabulary
kernel_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1)
output_layer = Dense(target_vocab_size, kernel_initializer=kernel_initializer)
with tf.variable_scope('decode') as decoding_scope:
# Training decoder
train_output = decoding_layer_train(
encoder_state, dec_cell, dec_embed, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
# Reuse the same components from the training decoder
decoding_scope.reuse_variables()
# Inference decoder
infer_output = decoding_layer_infer(
encoder_state, dec_cell, dec_embedding, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# Pass input through the encoder, retrieving the state (discard output)
_, encoder_state = encoding_layer(
input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
# Preprocess the target input for the training decoder
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# Decode encoded input
train_output, infer_output = decoding_layer(
dec_input, encoder_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.7
display_step = 50
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence_lower = [word.lower() for word in sentence.split()]
sentence_int = [
vocab_to_int[word] if word in vocab_to_int
else vocab_to_int['<UNK>']
for word in sentence_lower
]
return sentence_int
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
# translate_sentence = 'he saw a old yellow truck .'
translate_sentence = sentences[10]
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
3,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outputting a movie
In this brief example, we show how to create a movie of a seismic shot with SeisCL
Step1: We first create a constant velocity model, with one source in the middle
Step2: To output a movie, we have to set the input 'movout' to a number greater than zero. For movout=10, the movie will contain every 10 time steps.
Step3: SeisCL python wrapper contains a method to read the movie file.
Step4: This last variable contains a list of movies for all the ouput variables given by seisout. In our case, seisout=1, so the outputs are vx and vz. We can visualize the movie with the following code. | Python Code:
%matplotlib inline
from SeisCL import SeisCL
import matplotlib.pyplot as plt
import numpy as np
Explanation: Outputting a movie
In this brief example, we show how to create a movie of a seismic shot with SeisCL
End of explanation
seis = SeisCL()
# Constants for the modeling
seis.ND = 2
N = 200
seis.N = np.array([N, 2*N])
seis.dt = dt = 0.25e-03
seis.dh = dh = 2
seis.NT = NT = 1000
seis.seisout = 1
seis.f0 = 20
# Source and receiver positions
sx = seis.N[1]//2 * dh
sy = 0
sz = seis.N[0]//2 * dh
gx = np.arange(N//4 * dh, (N - N//4)*dh, dh)
gy = gx * 0
gz = gx * 0 + N//4*dh
gsid = gz * 0
gid = np.arange(0, len(gz))
seis.src_pos_all = np.stack([[sx], [sy], [sz], [0], [100]], axis=0)
seis.rec_pos_all = np.stack([gx, gy, gz, gsid, gid, gx * 0, gx * 0, gx * 0], axis=0)
# We start with a simple model
vp_a = np.zeros(seis.N) + 3500
vs_a = np.zeros(seis.N) + 2000
rho_a = np.zeros(seis.N) + 2000
Explanation: We first create a constant velocity model, with one source in the middle
End of explanation
seis.movout = 20
seis.set_forward(seis.src_pos_all[3, :], {"vp": vp_a, "rho": rho_a, "vs": vs_a}, withgrad=False)
stdout = seis.execute()
Explanation: To output a movie, we have to set the input 'movout' to a number greater than zero. For movout=10, the movie will contain every 10 time steps.
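A quick sanity check on how many snapshots to expect (my arithmetic, based on the settings above):
print(seis.NT // seis.movout)  # 1000 // 20 -> roughly 50 snapshots per movie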
End of explanation
movs = seis.read_movie()
Explanation: The SeisCL python wrapper contains a method to read the movie file.
End of explanation
from matplotlib import animation
from IPython.display import HTML
toplot = movs[0][:,:,:,0]
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(toplot[:,:,0], animated=True, vmin=np.min(toplot) / 10, vmax=np.max(toplot) / 10)
def init():
im.set_array(toplot[:,:,0])
return im,
def animate(t):
im.set_array(toplot[:,:,t])
return [im]
plt.close()
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=movs[0].shape[2]-1, interval=100, blit=True, repeat=True)
HTML(anim.to_html5_video())
Explanation: This last variable contains a list of movies for all the output variables given by seisout. In our case, seisout=1, so the outputs are vx and vz. We can visualize the movie with the following code.
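If you would rather write the animation to disk than embed it (assuming ffmpeg is installed; the file name is my choice), matplotlib's writer can be used:
anim.save('shot_movie.mp4', writer='ffmpeg', fps=10)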
End of explanation |
3,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading an NCEP BUFR data set
NCEP BUFR (Binary Universal Form for the Representation of meteorological data) can be read two ways
Step1: For the purposes of this demo I've made a local copy of the Argo data tank on WCOSS
located at
/dcom/us007003/201808/b031/xx005
Begin by opening the file
Step2: Movement and data access within the BUFR file is through these methods
Step3: You can print the subset and determine the parameter names. BUFR dumps can be very verbose, so I'll just copy in the header and the first subset replication from a bufr.dump_subset() command.
I've highlighted in red the parameters I want to plot.
<pre style="font-size
Step4: Set up the plotting figure. But this time, just for fun, let's put both the temperature and salinity profiles on the same axes. This trick uses both the top and bottom axis for different parameters.
As these are depth profiles, we need twin x-axes and a shared y-axis for the depth. | Python Code:
import matplotlib.pyplot as plt # graphics library
import numpy as np
import ncepbufr # python wrappers around BUFRLIB
Explanation: Reading an NCEP BUFR data set
NCEP BUFR (Binary Universal Form for the Representation of meteorological data) can be read two ways:
Fortran code with BUFRLIB
py-ncepbufr, which is basically Python wrappers around BUFRLIB
In this example we'll use py-ncepbufr to read a snapshot of the Argo data tank from WCOSS, show how to navigate the BUFR structure, and how to extract and plot a profile.
The py-ncepbufr library and installation instructions can be found at
https://github.com/JCSDA/py-ncepbufr
We begin by importing the required libraries.
End of explanation
bufr = ncepbufr.open('data/xx005')
Explanation: For the purposes of this demo I've made a local copy of the Argo data tank on WCOSS
located at
/dcom/us007003/201808/b031/xx005
Begin by opening the file
End of explanation
# move down to first message - a return code of 0 indicates success
bufr.advance()
# load the message subset -- a return code of 0 indicates success
bufr.load_subset()
Explanation: Movement and data access within the BUFR file is through these methods:
bufr.advance()
bufr.load_subset()
bufr.read_subset()
bufr.rewind()
bufr.close()
There is a lot more functionality to ncepbufr, such as searching on multiple mnemonics, printing or saving the BUFR table included in the file, printing or saving the inventory and subsets, setting and using checkpoints in the file. See the ncepbufr help for more details.
Important Note: py-ncepbufr is unforgiving of mistakes. A BUFRLIB fortran error will result in an immediate exit from the Python interpreter.
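Because of that, it pays to check the return codes. A minimal sketch (illustrative only, not executed as part of this notebook) for scanning every message and subset in a file:
scan = ncepbufr.open('data/xx005')
nsubsets = 0
while scan.advance() == 0:           # 0 means another BUFR message was found
    while scan.load_subset() == 0:   # 0 means another subset was loaded from that message
        nsubsets += 1
scan.close()
print(nsubsets, 'subsets found')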
End of explanation
temp = bufr.read_subset('SSTH').squeeze()-273.15 # convert from Kelvin to Celsius
sal = bufr.read_subset('SALNH').squeeze()
depth = bufr.read_subset('WPRES').squeeze()/10000. # convert from Pa to depth in meters
# observation location, date, and receipt time
lon = bufr.read_subset('CLONH')[0][0]
lat = bufr.read_subset('CLATH')[0][0]
date = bufr.msg_date
receipt = bufr.receipt_time
bufr.close()
Explanation: You can print the subset and determine the parameter names. BUFR dumps can be very verbose, so I'll just copy in the header and the first subset replication from a bufr.dump_subset() command.
I've highlighted in red the parameters I want to plot.
<pre style="font-size: x-small">
MESSAGE TYPE NC031005
004001 YEAR 2018.0 YEAR YEAR
004002 MNTH 8.0 MONTH MONTH
004003 DAYS 1.0 DAY DAY
004004 HOUR 0.0 HOUR HOUR
004005 MINU 16.0 MINUTE MINUTE
035195 SEQNUM 317 ( 4)CCITT IA5 CHANNEL SEQUENCE NUMBER
035021 BUHD IOPX01 ( 6)CCITT IA5 BULLETIN BEING MONITORED (TTAAii)
035023 BORG KWBC ( 4)CCITT IA5 BULLETIN BEING MONITORED (CCCC)
035022 BULTIM 010029 ( 6)CCITT IA5 BULLETIN BEING MONITORED (YYGGgg)
035194 BBB MISSING ( 6)CCITT IA5 BULLETIN BEING MONITORED (BBB)
008202 RCTS 0.0 CODE TABLE RECEIPT TIME SIGNIFICANCE
004200 RCYR 2018.0 YEAR YEAR - TIME OF RECEIPT
004201 RCMO 8.0 MONTH MONTH - TIME OF RECEIPT
004202 RCDY 1.0 DAY DAY - TIME OF RECEIPT
004203 RCHR 0.0 HOUR HOUR - TIME OF RECEIPT
004204 RCMI 31.0 MINUTE MINUTE - TIME OF RECEIPT
033215 CORN 0.0 CODE TABLE CORRECTED REPORT INDICATOR
001087 WMOP 6903327.0 NUMERIC WMO marine observing platform extended identifie
001085 OPMM S2-X (20)CCITT IA5 Observing platform manufacturer's model
001086 OPMS 10151 ( 32)CCITT IA5 Observing platform manufacturer's serial number
002036 BUYTS 2.0 CODE TABLE Buoy type
002148 DCLS 8.0 CODE TABLE Data collection and/or location system
002149 BUYT 14.0 CODE TABLE Type of data buoy
022055 FCYN 28.0 NUMERIC Float cycle number
022056 DIPR 0.0 CODE TABLE Direction of profile
022067 IWTEMP 846.0 CODE TABLE INSTRUMENT TYPE FOR WATER TEMPERATURE PROFILE ME
005001 CLATH 59.34223 DEGREES LATITUDE (HIGH ACCURACY)
006001 CLONH -9.45180 DEGREES LONGITUDE (HIGH ACCURACY)
008080 QFQF 20.0 CODE TABLE Qualifier for GTSPP quality flag
033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag
(GLPFDATA) 636 REPLICATIONS
++++++ GLPFDATA REPLICATION # 1 ++++++
<span style="color: red">007065 WPRES 10000.0 PA Water pressure</span>
008080 QFQF 10.0 CODE TABLE Qualifier for GTSPP quality flag
033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag
<span style="color: red">022045 SSTH 285.683 K Sea/water temperature</span>
008080 QFQF 11.0 CODE TABLE Qualifier for GTSPP quality flag
033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag
<span style="color: red">022064 SALNH 35.164 PART PER THOUSAND Salinity</span>
008080 QFQF 12.0 CODE TABLE Qualifier for GTSPP quality flag
033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag
</pre>
Now we can load the data for plotting
End of explanation
fig = plt.figure(figsize = (5,4))
ax1 = plt.axes()
ax1.plot(temp, depth,'r-')
ax1.grid(axis = 'y')
ax1.invert_yaxis() # flip the y-axis for ocean depths
ax2 = ax1.twiny() # here's the second x-axis definition
ax2.plot(np.nan, 'r-', label = 'Temperature')
ax2.plot(sal, depth, 'b-', label = 'Salinity')
ax2.legend()
ax1.set_xlabel('Temperature (C)', color = 'red')
ax1.set_ylabel('Depth (m)')
ax2.set_xlabel('Salinity (PSU)', color = 'blue')
ttl='ARGO T,S Profiles at lon:{:6.2f}, lat:{:6.2f}\ntimestamp: {} received: {}\n'.format(lon,lat,date,receipt)
fig.suptitle(ttl,x = 0.5,y = 1.1,fontsize = 'large');
Explanation: Set up the plotting figure. But this time, just for fun, let's put both the temperature and salinity profiles on the same axes. This trick uses both the top and bottom axis for different parameters.
As these are depth profiles, we need twin x-axes and a shared y-axis for the depth.
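If you want to keep the figure, one extra line (my addition) will save it alongside the notebook:
fig.savefig('argo_profile.png', dpi=150, bbox_inches='tight')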
End of explanation |