Unnamed: 0 (int64, 0-16k) | text_prompt (string, 110-62.1k chars) | code_prompt (string, 37-152k chars) |
---|---|---|
3,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OT for domain adaptation on empirical distributions
This example introduces domain adaptation in a 2D setting. It makes the
problem of domain adaptation explicit and introduces some optimal transport
approaches to solve it.
Quantities such as optimal couplings, the main coupling coefficients and
transported samples are plotted in order to give a visual understanding
of what the transport methods are doing.
Step1: generate data
Step2: Instantiate the different transport algorithms and fit them
Step3: Fig 1
Step4: Fig 2
Step5: Fig 3 | Python Code:
# Authors: Remi Flamary <[email protected]>
# Stanislas Chambon <[email protected]>
#
# License: MIT License
import matplotlib.pylab as pl
import ot
import ot.plot
Explanation: OT for domain adaptation on empirical distributions
This example introduces domain adaptation in a 2D setting. It makes the
problem of domain adaptation explicit and introduces some optimal transport
approaches to solve it.
Quantities such as optimal couplings, the main coupling coefficients and
transported samples are plotted in order to give a visual understanding
of what the transport methods are doing.
End of explanation
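As a reminder (a rough restatement, not part of the original example text), the couplings visualised below are, approximately, solutions of
\begin{eqnarray}
\gamma^* = \arg\min_{\gamma \in \Pi(a, b)} \; \langle \gamma, M \rangle_F + \mathrm{reg} \cdot \Omega(\gamma)
\end{eqnarray}
where $M$ is the pairwise cost matrix computed below, $\Pi(a, b)$ is the set of couplings with the empirical marginals, $\Omega$ is the regularization term (entropic, or entropic plus group lasso), and the regularization is dropped for plain EMD.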
n_samples_source = 150
n_samples_target = 150
Xs, ys = ot.datasets.make_data_classif('3gauss', n_samples_source)
Xt, yt = ot.datasets.make_data_classif('3gauss2', n_samples_target)
# Cost matrix
M = ot.dist(Xs, Xt, metric='sqeuclidean')
Explanation: generate data
End of explanation
# EMD Transport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization
ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0)
ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt)
# transport source samples onto target samples
transp_Xs_emd = ot_emd.transform(Xs=Xs)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs)
transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs)
Explanation: Instantiate the different transport algorithms and fit them
End of explanation
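A possible follow-up (an illustrative sketch only, assuming scikit-learn is available and not part of the original example): one way to quantify the adaptation shown in the figures is to train a simple classifier on the transported source samples and score it on the labelled target samples.
```python
from sklearn.neighbors import KNeighborsClassifier

# Fit on transported source samples, evaluate on target samples
clf = KNeighborsClassifier(n_neighbors=1).fit(transp_Xs_emd, ys)
print("EMD transport, 1-NN accuracy on target:", clf.score(Xt, yt))
```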
pl.figure(1, figsize=(10, 10))
pl.subplot(2, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')
pl.subplot(2, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Target samples')
pl.subplot(2, 2, 3)
pl.imshow(M, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Matrix of pairwise distances')
pl.tight_layout()
Explanation: Fig 1 : plots source and target samples + matrix of pairwise distance
End of explanation
pl.figure(2, figsize=(10, 6))
pl.subplot(2, 3, 1)
pl.imshow(ot_emd.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nEMDTransport')
pl.subplot(2, 3, 2)
pl.imshow(ot_sinkhorn.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornTransport')
pl.subplot(2, 3, 3)
pl.imshow(ot_lpl1.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornLpl1Transport')
pl.subplot(2, 3, 4)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_emd.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nEMDTransport')
pl.subplot(2, 3, 5)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_sinkhorn.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nSinkhornTransport')
pl.subplot(2, 3, 6)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_lpl1.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nSinkhornLpl1Transport')
pl.tight_layout()
Explanation: Fig 2 : plots optimal couplings for the different methods
End of explanation
# display transported samples
pl.figure(4, figsize=(10, 4))
pl.subplot(1, 3, 1)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nEmdTransport')
pl.legend(loc=0)
pl.xticks([])
pl.yticks([])
pl.subplot(1, 3, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornTransport')
pl.xticks([])
pl.yticks([])
pl.subplot(1, 3, 3)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornLpl1Transport')
pl.xticks([])
pl.yticks([])
pl.tight_layout()
pl.show()
Explanation: Fig 3 : plot transported samples
End of explanation |
3,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exact solution used in MES runs
We would like to verify, using the Method of Exact Solutions (MES), the operation
\begin{eqnarray}
\frac{\int_0^{2\pi} f \rho d\theta}{\int_0^{2\pi} \rho d\theta}
= \frac{\int_0^{2\pi} f d\theta}{\int_0^{2\pi} d\theta}
= \frac{\int_0^{2\pi} f d\theta}{2\pi}
\end{eqnarray}
Using cylindrical geometry.
Step1: Initialize
Step2: Define the variables
Step3: Define the function to operate on
NOTE
Step4: Calculating the solution
Step5: Plot
Step6: Print the variables in BOUT++ format | Python Code:
%matplotlib notebook
from sympy import init_printing
from sympy import S
from sympy import sin, cos, tanh, exp, pi, sqrt
from sympy import integrate
import numpy as np
from boutdata.mms import x, y, z, t
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import get_metric, make_plot, BOUT_print
init_printing()
Explanation: Exact solution used in MES runs
We would like to verify, using the Method of Exact Solutions (MES), the operation
\begin{eqnarray}
\frac{\int_0^{2\pi} f \rho d\theta}{\int_0^{2\pi} \rho d\theta}
= \frac{\int_0^{2\pi} f d\theta}{\int_0^{2\pi} d\theta}
= \frac{\int_0^{2\pi} f d\theta}{2\pi}
\end{eqnarray}
Using cylindrical geometry.
End of explanation
folder = '../zHat/'
metric = get_metric()
Explanation: Initialize
End of explanation
# Initialization
the_vars = {}
Explanation: Define the variables
End of explanation
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
# Z hat function
# NOTE: The function is not continuous over origo
s = 2
c = pi
w = pi/2
the_vars['f'] = ((1/2)*(tanh(s*(z-(c-w/2)))-tanh(s*(z-(c+w/2)))))*sin(3*2*pi*x/Lx)
Explanation: Define the function to operate on
NOTE:
These do not need to be fulfilled in order to get convergence:
z must be periodic
The field $f(\rho, \theta)$ must be of class $C^\infty$ at $z=0$ and $z=2\pi$
The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$
But this needs to be fulfilled:
1. The field $f(\rho, \theta)$ must be single valued when $\rho\to0$
2. Any boundary conditions in $\rho$ must be satisfied
End of explanation
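An optional sanity check (a hedged sketch, assuming x plays the role of $\rho$ here, and not part of the original notebook): since f is proportional to sin(3*2*pi*x/Lx), it vanishes identically at x = 0, so the single-valuedness requirement as $\rho\to0$ holds trivially.
```python
# Expected output: 0 (f is independent of z at x = 0)
print(the_vars['f'].subs(x, 0))
```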
the_vars['S'] = (integrate(the_vars['f'], (z, 0, 2*np.pi))/(2*np.pi)).evalf()
Explanation: Calculating the solution
End of explanation
make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)
Explanation: Plot
End of explanation
BOUT_print(the_vars, rational=False)
Explanation: Print the variables in BOUT++ format
End of explanation |
3,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Triplet Loss for Implicit Feedback Neural Recommender Systems
The goal of this notebook is first to demonstrate how it is possible to build a bi-linear recommender system only using positive feedback data.
In a later section we show that it is possible to train deeper architectures following the same design principles.
This notebook is inspired by Maciej Kula's Recommendations in Keras using triplet loss. Contrary to Maciej we won't use the BPR loss but instead will introduce the more common margin-based comparator.
Loading the movielens-100k dataset
For the sake of computation time, we will only use the smallest variant of the movielens reviews dataset. Beware that the architectural choices and hyperparameters that work well on such a toy dataset will not necessarily be representative of the behavior when run on a more realistic dataset such as Movielens 10M or the Yahoo Songs dataset with 700M ratings.
Step1: Implicit feedback data
Consider ratings >= 4 as positive feedback and ignore the rest
Step2: Because the median rating is around 3.5, this cut will remove approximately half of the ratings from the datasets
Step4: The Triplet Loss
The following section demonstrates how to build a low-rank quadratic interaction model between users and items. The similarity score between a user and an item is defined by the unnormalized dot product of their respective embeddings.
The matching scores can be used to rank items to recommend to a specific user.
Training of the model parameters is achieved by randomly sampling negative items not seen by a pre-selected anchor user. We want the model embedding matrices to be such that the similarity between the user vector and the negative item vector is smaller than the similarity between the user vector and the positive item vector. Furthermore we use a margin to further move apart the negative item from the anchor user.
Here is the architecture of such a triplet model. The triplet name comes from the fact that the loss to optimize is defined for a triple (anchor_user, positive_item, negative_item)
Step5: Here is the actual code that builds the model(s) with shared weights. Note that here we use the cosine similarity instead of unormalized dot products (both seems to yield comparable results).
The triplet model is used to train the weights of the companion
similarity model. The triplet model takes 1 user, 1 positive item
(relative to the selected user) and one negative item and is
trained with comparator loss.
The similarity model takes one user and one item as input and return
compatibility score (aka the match score).
Step7: Note that triplet_model and match_model have as much parameters, they share both user and item embeddings. Their only difference is that the latter doesn't compute the negative similarity.
Quality of Ranked Recommendations
Now that we have a randomly initialized model we can start computing random recommendations. To assess their quality we do the following for each user
Step8: By default the model should make predictions that rank the items in random order. The ROC AUC score is a ranking score that represents the expected value of correctly ordering uniformly sampled pairs of recommendations.
A random (untrained) model should yield 0.50 ROC AUC on average.
Step10: Training the Triplet Model
Let's now fit the parameters of the model by sampling triplets
Step11: Let's train the triplet model
Step12: Exercise
Step13: Training a Deep Matching Model on Implicit Feedback
Instead of using hard-coded cosine similarities to predict the match of a (user_id, item_id) pair, we can instead specify a deep neural network based parametrisation of the similarity. The parameters of that matching model are also trained with the margin comparator loss
Step14: Exercise | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os.path as op
from zipfile import ZipFile
try:
from urllib.request import urlretrieve
except ImportError: # Python 2 compat
from urllib import urlretrieve
ML_100K_URL = "http://files.grouplens.org/datasets/movielens/ml-100k.zip"
ML_100K_FILENAME = ML_100K_URL.rsplit('/', 1)[1]
ML_100K_FOLDER = 'ml-100k'
if not op.exists(ML_100K_FILENAME):
print('Downloading %s to %s...' % (ML_100K_URL, ML_100K_FILENAME))
urlretrieve(ML_100K_URL, ML_100K_FILENAME)
if not op.exists(ML_100K_FOLDER):
print('Extracting %s to %s...' % (ML_100K_FILENAME, ML_100K_FOLDER))
ZipFile(ML_100K_FILENAME).extractall('.')
data_train = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.base'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
data_test = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.test'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
data_train.describe()
def extract_year(release_date):
if hasattr(release_date, 'split'):
components = release_date.split('-')
if len(components) == 3:
return int(components[2])
# Missing value marker
return 1920
m_cols = ['item_id', 'title', 'release_date', 'video_release_date', 'imdb_url']
items = pd.read_csv(op.join(ML_100K_FOLDER, 'u.item'), sep='|',
names=m_cols, usecols=range(5), encoding='latin-1')
items['release_year'] = items['release_date'].map(extract_year)
data_train = pd.merge(data_train, items)
data_test = pd.merge(data_test, items)
data_train.head()
# data_test.describe()
max_user_id = max(data_train['user_id'].max(), data_test['user_id'].max())
max_item_id = max(data_train['item_id'].max(), data_test['item_id'].max())
n_users = max_user_id + 1
n_items = max_item_id + 1
print('n_users=%d, n_items=%d' % (n_users, n_items))
Explanation: Triplet Loss for Implicit Feedback Neural Recommender Systems
The goal of this notebook is first to demonstrate how it is possible to build a bi-linear recommender system only using positive feedback data.
In a later section we show that it is possible to train deeper architectures following the same design principles.
This notebook is inspired by Maciej Kula's Recommendations in Keras using triplet loss. Contrary to Maciej we won't use the BPR loss but instead will introduce the more common margin-based comparator.
Loading the movielens-100k dataset
For the sake of computation time, we will only use the smallest variant of the movielens reviews dataset. Beware that the architectural choices and hyperparameters that work well on such a toy dataset will not necessarily be representative of the behavior when run on a more realistic dataset such as Movielens 10M or the Yahoo Songs dataset with 700M ratings.
End of explanation
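A quick, purely illustrative peek at the rating distribution (not part of the original notebook) helps motivate the "ratings >= 4 are positive" cut used next.
```python
# Summary statistics of the training ratings; the median sits around 3.5
data_train['rating'].describe()
```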
pos_data_train = data_train.query("rating >= 4")
pos_data_test = data_test.query("rating >= 4")
Explanation: Implicit feedback data
Consider ratings >= 4 as positive feedback and ignore the rest:
End of explanation
pos_data_train['rating'].count()
pos_data_test['rating'].count()
Explanation: Because the median rating is around 3.5, this cut will remove approximately half of the ratings from the datasets:
End of explanation
import tensorflow as tf
from tensorflow.keras import layers
def identity_loss(y_true, y_pred):
Ignore y_true and return the mean of y_pred
This is a hack to work-around the design of the Keras API that is
not really suited to train networks with a triplet loss by default.
return tf.reduce_mean(y_pred)
class MarginLoss(layers.Layer):
def __init__(self, margin=1.):
super().__init__()
self.margin = margin
def call(self, inputs):
pos_pair_similarity = inputs[0]
neg_pair_similarity = inputs[1]
diff = neg_pair_similarity - pos_pair_similarity
return tf.maximum(diff + self.margin, 0.)
Explanation: The Triplet Loss
The following section demonstrates how to build a low-rank quadratic interaction model between users and items. The similarity score between a user and an item is defined by the unnormalized dot product of their respective embeddings.
The matching scores can be used to rank items to recommend to a specific user.
Training of the model parameters is achieved by randomly sampling negative items not seen by a pre-selected anchor user. We want the model embedding matrices to be such that the similarity between the user vector and the negative item vector is smaller than the similarity between the user vector and the positive item vector. Furthermore we use a margin to further move apart the negative item from the anchor user.
Here is the architecture of such a triplet model. The triplet name comes from the fact that the loss to optimize is defined for a triple (anchor_user, positive_item, negative_item):
<img src="images/rec_archi_implicit_2.svg" style="width: 600px;" />
We call this model a triplet model with bi-linear interactions because the similarity between a user and an item is captured by a dot product of the first level embedding vectors. This is therefore not a deep architecture.
End of explanation
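For reference (a restatement of the MarginLoss layer defined above, not an addition to the method), the margin comparator loss for a triplet $(u, i^+, i^-)$ can be written as
\begin{eqnarray}
\ell(u, i^+, i^-) = \max\big(0, \; m + \mathrm{sim}(u, i^-) - \mathrm{sim}(u, i^+)\big)
\end{eqnarray}
where $m$ is the margin and $\mathrm{sim}$ is the similarity score produced from the embeddings.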
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Embedding, Flatten, Input, Dense
from tensorflow.keras.layers import Lambda, Dot
from tensorflow.keras.regularizers import l2
class TripletModel(Model):
def __init__(self, n_users, n_items, latent_dim=64,
l2_reg=None, margin=1.):
super().__init__(name="TripletModel")
self.margin = margin
l2_reg = None if l2_reg == 0 else l2(l2_reg)
self.user_layer = Embedding(n_users, latent_dim,
input_length=1,
input_shape=(1,),
name='user_embedding',
embeddings_regularizer=l2_reg)
# The following embedding parameters will be shared to
# encode both the positive and negative items.
self.item_layer = Embedding(n_items, latent_dim,
input_length=1,
name="item_embedding",
embeddings_regularizer=l2_reg)
# The 2 following layers are without parameters, and can
# therefore be used for both positive and negative items.
self.flatten = Flatten()
self.dot = Dot(axes=1, normalize=True)
self.margin_loss = MarginLoss(margin)
def call(self, inputs, training=False):
user_input = inputs[0]
pos_item_input = inputs[1]
neg_item_input = inputs[2]
user_embedding = self.user_layer(user_input)
user_embedding = self.flatten(user_embedding)
pos_item_embedding = self.item_layer(pos_item_input)
pos_item_embedding = self.flatten(pos_item_embedding)
neg_item_embedding = self.item_layer(neg_item_input)
neg_item_embedding = self.flatten(neg_item_embedding)
# Similarity computation between embeddings
pos_similarity = self.dot([user_embedding, pos_item_embedding])
neg_similarity = self.dot([user_embedding, neg_item_embedding])
return self.margin_loss([pos_similarity, neg_similarity])
triplet_model = TripletModel(n_users, n_items,
latent_dim=64, l2_reg=1e-6)
class MatchModel(Model):
def __init__(self, user_layer, item_layer):
super().__init__(name="MatchModel")
# Reuse shared weights for those layers:
self.user_layer = user_layer
self.item_layer = item_layer
self.flatten = Flatten()
self.dot = Dot(axes=1, normalize=True)
def call(self, inputs):
user_input = inputs[0]
pos_item_input = inputs[1]
user_embedding = self.user_layer(user_input)
user_embedding = self.flatten(user_embedding)
pos_item_embedding = self.item_layer(pos_item_input)
pos_item_embedding = self.flatten(pos_item_embedding)
pos_similarity = self.dot([user_embedding,
pos_item_embedding])
return pos_similarity
match_model = MatchModel(triplet_model.user_layer,
triplet_model.item_layer)
Explanation: Here is the actual code that builds the model(s) with shared weights. Note that here we use the cosine similarity instead of unnormalized dot products (both seem to yield comparable results).
The triplet model is used to train the weights of the companion
similarity model. The triplet model takes 1 user, 1 positive item
(relative to the selected user) and one negative item and is
trained with comparator loss.
The similarity model takes one user and one item as input and return
compatibility score (aka the match score).
End of explanation
from sklearn.metrics import roc_auc_score
def average_roc_auc(model, data_train, data_test):
Compute the ROC AUC for each user and average over users
max_user_id = max(data_train['user_id'].max(),
data_test['user_id'].max())
max_item_id = max(data_train['item_id'].max(),
data_test['item_id'].max())
user_auc_scores = []
for user_id in range(1, max_user_id + 1):
pos_item_train = data_train[data_train['user_id'] == user_id]
pos_item_test = data_test[data_test['user_id'] == user_id]
# Consider all the items already seen in the training set
all_item_ids = np.arange(1, max_item_id + 1)
items_to_rank = np.setdiff1d(
all_item_ids, pos_item_train['item_id'].values)
# Ground truth: return 1 for each item positively present in
# the test set and 0 otherwise.
expected = np.in1d(
items_to_rank, pos_item_test['item_id'].values)
if np.sum(expected) >= 1:
# At least one positive test value to rank
repeated_user_id = np.empty_like(items_to_rank)
repeated_user_id.fill(user_id)
predicted = model.predict(
[repeated_user_id, items_to_rank], batch_size=4096)
user_auc_scores.append(roc_auc_score(expected, predicted))
return sum(user_auc_scores) / len(user_auc_scores)
Explanation: Note that triplet_model and match_model have exactly as many parameters: they share both user and item embeddings. Their only difference is that the latter doesn't compute the negative similarity.
Quality of Ranked Recommendations
Now that we have a randomly initialized model we can start computing random recommendations. To assess their quality we do the following for each user:
compute matching scores for items (except the movies that the user has already seen in the training set),
compare to the positive feedback actually collected on the test set using the ROC AUC ranking metric,
average ROC AUC scores across users to get the average performance of the recommender model on the test set.
End of explanation
average_roc_auc(match_model, pos_data_train, pos_data_test)
Explanation: By default the model should make predictions that rank the items in random order. The ROC AUC score is a ranking score that represents the expected value of correctly ordering uniformly sampled pairs of recommendations.
A random (untrained) model should yield 0.50 ROC AUC on average.
End of explanation
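A quick, purely illustrative sanity check of that claim (not part of the original notebook): scoring random predictions against random binary labels gives an AUC close to 0.5.
```python
rng = np.random.RandomState(0)
random_labels = rng.randint(0, 2, size=10000)
random_scores = rng.rand(10000)
# Expected value close to 0.5 for random scores
print(roc_auc_score(random_labels, random_scores))
```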
def sample_triplets(pos_data, max_item_id, random_seed=0):
Sample negatives at random
rng = np.random.RandomState(random_seed)
user_ids = pos_data['user_id'].values
pos_item_ids = pos_data['item_id'].values
neg_item_ids = rng.randint(low=1, high=max_item_id + 1,
size=len(user_ids))
return [user_ids, pos_item_ids, neg_item_ids]
Explanation: Training the Triplet Model
Let's now fit the parameters of the model by sampling triplets: for each user, select a movie in the positive feedback set of that user and randomly sample another movie to serve as negative item.
Note that this sampling scheme could be improved by removing items that are marked as positive in the data to remove some label noise. In practice this does not seem to be a problem though.
End of explanation
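A hedged sketch of the "cleaner" sampling variant mentioned above (a hypothetical helper, not used in the rest of the notebook): resample any negative that collides with one of the user's known positive items.
```python
def sample_triplets_clean(pos_data, max_item_id, random_seed=0):
    # Like sample_triplets, but rejects negatives already rated positively
    rng = np.random.RandomState(random_seed)
    user_ids = pos_data['user_id'].values
    pos_item_ids = pos_data['item_id'].values
    seen = pos_data.groupby('user_id')['item_id'].apply(set).to_dict()
    neg_item_ids = rng.randint(low=1, high=max_item_id + 1, size=len(user_ids))
    for idx, (u, neg) in enumerate(zip(user_ids, neg_item_ids)):
        # Resample until the negative is not a known positive for this user
        while neg in seen[u]:
            neg = rng.randint(low=1, high=max_item_id + 1)
        neg_item_ids[idx] = neg
    return [user_ids, pos_item_ids, neg_item_ids]
```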
# we plug the identity loss and the a fake target variable ignored by
# the model to be able to use the Keras API to train the triplet model
fake_y = np.ones_like(pos_data_train["user_id"])
triplet_model.compile(loss=identity_loss, optimizer="adam")
n_epochs = 10
batch_size = 64
for i in range(n_epochs):
# Sample new negatives to build different triplets at each epoch
triplet_inputs = sample_triplets(pos_data_train, max_item_id,
random_seed=i)
# Fit the model incrementally by doing a single pass over the
# sampled triplets.
triplet_model.fit(x=triplet_inputs, y=fake_y, shuffle=True,
batch_size=64, epochs=1)
# Evaluate the convergence of the model. Ideally we should prepare a
# validation set and compute this at each epoch but this is too slow.
test_auc = average_roc_auc(match_model, pos_data_train, pos_data_test)
print("Epoch %d/%d: test ROC AUC: %0.4f"
% (i + 1, n_epochs, test_auc))
Explanation: Let's train the triplet model:
End of explanation
print(match_model.summary())
print(triplet_model.summary())
# %load solutions/triplet_parameter_count.py
# Analysis:
#
# Both models have exactly the same number of parameters,
# namely the parameters of the 2 embeddings:
#
# - user embedding: n_users x embedding_dim
# - item embedding: n_items x embedding_dim
#
# The triplet model uses the same item embedding twice,
# once to compute the positive similarity and the other
# time to compute the negative similarity. However because
# those two nodes in the computation graph share the same
# instance of the item embedding layer, the item embedding
# weight matrix is shared by the two branches of the
# graph and therefore the total number of parameters for
# each model is in both cases:
#
# (n_users x embedding_dim) + (n_items x embedding_dim)
Explanation: Exercise:
Count the number of parameters in match_model and triplet_model. Which model has the largest number of parameters?
End of explanation
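One quick way to check (a suggestion, not the provided solution): Keras models expose count_params(), and since both models have already been built during training above, the two counts can be compared directly.
```python
print("triplet_model parameters:", triplet_model.count_params())
print("match_model parameters:", match_model.count_params())
```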
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.layers import Concatenate, Dropout
from tensorflow.keras.regularizers import l2
class MLP(layers.Layer):
def __init__(self, n_hidden=1, hidden_size=64, dropout=0.,
l2_reg=None):
super().__init__()
# TODO
class DeepTripletModel(Model):
def __init__(self, n_users, n_items, user_dim=32, item_dim=64,
margin=1., n_hidden=1, hidden_size=64, dropout=0,
l2_reg=None):
super().__init__()
# TODO
class DeepMatchModel(Model):
def __init__(self, user_layer, item_layer, mlp):
super().__init__(name="MatchModel")
# TODO
# %load solutions/deep_implicit_feedback_recsys.py
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.layers import Concatenate, Dropout
from tensorflow.keras.regularizers import l2
class MLP(layers.Layer):
def __init__(self, n_hidden=1, hidden_size=64, dropout=0.,
l2_reg=None):
super().__init__()
self.layers = [Dropout(dropout)]
for _ in range(n_hidden):
self.layers.append(Dense(hidden_size, activation="relu",
kernel_regularizer=l2_reg))
self.layers.append(Dropout(dropout))
self.layers.append(Dense(1, activation="relu",
kernel_regularizer=l2_reg))
def call(self, x, training=False):
for layer in self.layers:
if isinstance(layer, Dropout):
x = layer(x, training=training)
else:
x = layer(x)
return x
class DeepTripletModel(Model):
def __init__(self, n_users, n_items, user_dim=32, item_dim=64, margin=1.,
n_hidden=1, hidden_size=64, dropout=0, l2_reg=None):
super().__init__()
l2_reg = None if l2_reg == 0 else l2(l2_reg)
self.user_layer = Embedding(n_users, user_dim,
input_length=1,
input_shape=(1,),
name='user_embedding',
embeddings_regularizer=l2_reg)
self.item_layer = Embedding(n_items, item_dim,
input_length=1,
name="item_embedding",
embeddings_regularizer=l2_reg)
self.flatten = Flatten()
self.concat = Concatenate()
self.mlp = MLP(n_hidden, hidden_size, dropout, l2_reg)
self.margin_loss = MarginLoss(margin)
def call(self, inputs, training=False):
user_input = inputs[0]
pos_item_input = inputs[1]
neg_item_input = inputs[2]
user_embedding = self.user_layer(user_input)
user_embedding = self.flatten(user_embedding)
pos_item_embedding = self.item_layer(pos_item_input)
pos_item_embedding = self.flatten(pos_item_embedding)
neg_item_embedding = self.item_layer(neg_item_input)
neg_item_embedding = self.flatten(neg_item_embedding)
# Similarity computation between embeddings
pos_embeddings_pair = self.concat([user_embedding,
pos_item_embedding])
neg_embeddings_pair = self.concat([user_embedding,
neg_item_embedding])
pos_similarity = self.mlp(pos_embeddings_pair)
neg_similarity = self.mlp(neg_embeddings_pair)
return self.margin_loss([pos_similarity, neg_similarity])
class DeepMatchModel(Model):
def __init__(self, user_layer, item_layer, mlp):
super().__init__(name="MatchModel")
self.user_layer = user_layer
self.item_layer = item_layer
self.mlp = mlp
self.flatten = Flatten()
self.concat = Concatenate()
def call(self, inputs):
user_input = inputs[0]
pos_item_input = inputs[1]
user_embedding = self.flatten(self.user_layer(user_input))
pos_item_embedding = self.flatten(self.item_layer(pos_item_input))
pos_embeddings_pair = self.concat([user_embedding, pos_item_embedding])
pos_similarity = self.mlp(pos_embeddings_pair)
return pos_similarity
hyper_parameters = dict(
user_dim=32,
item_dim=64,
n_hidden=1,
hidden_size=128,
dropout=0.1,
l2_reg=0.,
)
deep_triplet_model = DeepTripletModel(n_users, n_items,
**hyper_parameters)
deep_match_model = DeepMatchModel(deep_triplet_model.user_layer,
deep_triplet_model.item_layer,
deep_triplet_model.mlp)
deep_triplet_model.compile(loss=identity_loss, optimizer='adam')
fake_y = np.ones_like(pos_data_train['user_id'])
n_epochs = 20
for i in range(n_epochs):
# Sample new negatives to build different triplets at each epoch
triplet_inputs = sample_triplets(pos_data_train, max_item_id,
random_seed=i)
# Fit the model incrementally by doing a single pass over the
# sampled triplets.
deep_triplet_model.fit(triplet_inputs, fake_y, shuffle=True,
batch_size=64, epochs=1)
# Monitor the convergence of the model
test_auc = average_roc_auc(
deep_match_model, pos_data_train, pos_data_test)
print("Epoch %d/%d: test ROC AUC: %0.4f"
% (i + 1, n_epochs, test_auc))
Explanation: Training a Deep Matching Model on Implicit Feedback
Instead of using hard-coded cosine similarities to predict the match of a (user_id, item_id) pair, we can specify a deep neural network based parametrisation of the similarity. The parameters of that matching model are also trained with the margin comparator loss:
<img src="images/rec_archi_implicit_1.svg" style="width: 600px;" />
Exercise to complete as a home assignment:
Implement a deep_match_model, deep_triplet_model pair of models
for the architecture described in the schema. The last layer of
the embedded Multi Layer Perceptron outputs a single scalar that
encodes the similarity between a user and a candidate item.
Evaluate the resulting model by computing the per-user average
ROC AUC score on the test feedback data.
Check that the AUC ROC score is close to 0.50 for a randomly
initialized model.
Check that you can reach at least 0.91 ROC AUC with this deep
model (you might need to adjust the hyperparameters).
Hints:
it is possible to reuse the code to create embeddings from the previous model
definition;
the concatenation between user and the positive item embedding can be
obtained with the Concatenate layer:
```py
concat = Concatenate()
positive_embeddings_pair = concat([user_embedding,
positive_item_embedding])
negative_embeddings_pair = concat([user_embedding,
negative_item_embedding])
```
those embedding pairs should be fed to a shared MLP instance to compute the similarity scores.
End of explanation
print(deep_match_model.summary())
print(deep_triplet_model.summary())
# %load solutions/deep_triplet_parameter_count.py
# Analysis:
#
# Both models have again exactly the same number of parameters,
# namely the parameters of the 2 embeddings:
#
# - user embedding: n_users x user_dim
# - item embedding: n_items x item_dim
#
# and the parameters of the MLP model used to compute the
# similarity score of an (user, item) pair:
#
# - first hidden layer weights: (user_dim + item_dim) * hidden_size
# - first hidden biases: hidden_size
# - extra hidden layers weights: hidden_size * hidden_size
# - extra hidden layers biases: hidden_size
# - output layer weights: hidden_size * 1
# - output layer biases: 1
#
# The triplet model uses the same item embedding layer
# twice and the same MLP instance twice:
# once to compute the positive similarity and the other
# time to compute the negative similarity. However because
# those two lanes in the computation graph share the same
# instances for the item embedding layer and for the MLP,
# their parameters are shared.
#
# Reminder: MLP stands for multi-layer perceptron, which is a
# common short-hand for Feed Forward Fully Connected Neural
# Network.
Explanation: Exercise:
Count the number of parameters in deep_match_model and deep_triplet_model. Which model has the largest number of parameters?
End of explanation |
3,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the frame context in the TIMIT MLP model
This notebook is an extension of the MLP_TIMIT demo which takes a context of many frames at input to model the same output. So if we have a phoneme, say 'a', instead of just using one vector of 26 features to recognize it, we provide several frames of 26 features before and after the one we are looking at, in order to capture its context.
This technique greatly improves the quality of the solution, but isn't as scalable as some other solutions. First of all, the greater the context, the more parameters we need to determine. The bigger the model, the more data is required to accurately estimate all the parameters.
One solution would be to use tied weights, rather than a classical dense layer, in such a way that different frames (within the context) use the same set of weights, so the number of weights is kept constant even though we use a larger context.
Furthermore, the model assumes a context of a specific size. It would be nice if the size were unlimited. Again, this would probably make the model impractical if we use a standard dense layer, but could work with the tied weights technique.
Another way of looking at the tied weights solution with an unlimited context is simply as an RNN. In fact, most implementations of BPTT (used to train RNNs) simply unroll the training loop in time and treat the model as a simple MLP with tied weights. This works quite well, but has other issues that are solved using more advanced topologies (LSTM, GRU) which will be discussed in other notebooks.
In this notebook, we will take an MLP which has an input context of 10 frames on the left and the right side of the analyzed frame. This is done in order to reproduce the results from the same paper and thesis as in the MLP_TIMIT notebook.
We begin with the same introductory code as in the previous notebook
Step1: Loading the data
Step2: Global training parameters
Step3: 1-hot output
Step4: Adding frame context
Here we add the frame context. The number 10 is taken from the paper thesis as
Step5: Model definition
Since we have an input as a 3D shape, we use a Reshape layer at the start of the model to convert the input frames into a flat vector. Again, this is to save a little memory at the cost of the time it takes to reshape the input. Not sure if it's worth it or if it even works as intended (i.e. saving memory).
Everything else here is the same as with the standard MLP except for the learning rate, which has to be lower in order to reproduce the same results as in the thesis.
Step6: Training
Step7: Plotting progress
These can be handy for debugging. If you draw the graph using different hyperparameters you can establish if it underfits (i.e. the values are still decreasing at the end of the training) or overfits (the minimum is reached earlier and dev/test values begin increasing as train continues to decrease). In this case, you can see how the graph changes with different learning rate values. It's impossible to achieve a single optimal value, but this one seems to be fairly good.
Step8: Final result
Here we reached the value from the thesis just fine, but we used a different learning rate. For some reason, the value from the thesis underfits by a great margin. Not sure if it's a mistake in the thesis or a consequence of some other difference in the setup.
Step9: Just as before we can check what epoch we reached the optimum.
Step10: Checking the accuracy calculation
When computing the final loss value, we simply measure the mean of the consecutive batch loss values, because we assume that weight updates are performed once per batch and the mean loss of the whole batch is used in the cross entropy to assess the model (just like in MSE).
With accuracy, however, it's not as simple as using the mean of all the batch accuracies. What we use instead is a weighted average where the weights are determined by the length of each batch/utterance. To make sure this is correct, I do a simple experiment here where I manually count the errors and sample amounts using the predict method. We can see that the values are identical, so using the weighted average is fine. | Python Code:
import os
os.environ['CUDA_VISIBLE_DEVICES']='1'
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Reshape
from keras.optimizers import Adam, SGD
from IPython.display import clear_output
from tqdm import *
Explanation: Using the frame context in the TIMIT MLP model
This notebook is an extension of the MLP_TIMIT demo which takes a context of many frames at input to model the same output. So if we have a phoneme, say 'a', instead of just using one vector of 26 features to recognize it, we provide several frames of 26 features before and after the one we are looking at, in order to capture its context.
This technique greatly improves the quality of the solution, but isn't as scalable as some other solutions. First of all, the greater the context, the more parameters we need to determine. The bigger the model, the more data is required to accurately estimate all the parameters.
One solution would be to use tied weights, rather than a classical dense layer, in such a way that different frames (within the context) use the same set of weights, so the number of weights is kept constant even though we use a larger context.
Furthermore, the model assumes a context of a specific size. It would be nice if the size were unlimited. Again, this would probably make the model impractical if we use a standard dense layer, but could work with the tied weights technique.
Another way of looking at the tied weights solution with an unlimited context is simply as an RNN. In fact, most implementations of BPTT (used to train RNNs) simply unroll the training loop in time and treat the model as a simple MLP with tied weights. This works quite well, but has other issues that are solved using more advanced topologies (LSTM, GRU) which will be discussed in other notebooks.
In this notebook, we will take an MLP which has an input context of 10 frames on the left and the right side of the analyzed frame. This is done in order to reproduce the results from the same paper and thesis as in the MLP_TIMIT notebook.
We begin with the same introductory code as in the previous notebook:
End of explanation
import sys
sys.path.append('../python')
from data import Corpus, History
train=Corpus('../data/TIMIT_train.hdf5',load_normalized=True,merge_utts=False)
dev=Corpus('../data/TIMIT_dev.hdf5',load_normalized=True,merge_utts=False)
test=Corpus('../data/TIMIT_test.hdf5',load_normalized=True,merge_utts=False)
tr_in,tr_out_dec=train.get()
dev_in,dev_out_dec=dev.get()
tst_in,tst_out_dec=test.get()
for u in range(tr_in.shape[0]):
tr_in[u]=tr_in[u][:,:26]
for u in range(dev_in.shape[0]):
dev_in[u]=dev_in[u][:,:26]
for u in range(tst_in.shape[0]):
tst_in[u]=tst_in[u][:,:26]
Explanation: Loading the data
End of explanation
input_dim=tr_in[0].shape[1]
output_dim=61
hidden_num=250
epoch_num=1000
Explanation: Global training parameters
End of explanation
def dec2onehot(dec):
ret=[]
for u in dec:
assert np.all(u<output_dim)
num=u.shape[0]
r=np.zeros((num,output_dim))
r[range(0,num),u]=1
ret.append(r)
return np.array(ret)
tr_out=dec2onehot(tr_out_dec)
dev_out=dec2onehot(dev_out_dec)
tst_out=dec2onehot(tst_out_dec)
Explanation: 1-hot output
End of explanation
#adds context to data
ctx_fr=10
ctx_size=2*ctx_fr+1
def ctx(data):
ret=[]
for utt in data:
l=utt.shape[0]
ur=[]
for t in range(l):
f=[]
for s in range(t-ctx_fr,t+ctx_fr+1):
if(s<0):
s=0
if(s>=l):
s=l-1
f.append(utt[s,:])
ur.append(f)
ret.append(np.array(ur))
return np.array(ret)
tr_in=ctx(tr_in)
dev_in=ctx(dev_in)
tst_in=ctx(tst_in)
print tr_in.shape
print tr_in[0].shape
Explanation: Adding frame context
Here we add the frame context. The number 10 is taken from the paper thesis as: symmetrical
time-windows from 0 to 10 frames. Now I'm not 100% sure (and it's not explained anywhere), but I assume this means 10 frames on the left and 10 on the right (i.e. symmetrical), which gives 21 frames altogether. It's written elsewhere that 0 means no context and uses one frame.
In Keras/Python we implement this in a slightly roundabout way: instead of duplicating the data explicitly, we merely make a 3D array that contains the references to the same data ranges in different cells. In other words, if we make an array where each utterance has a shape $(time_steps, context*frame_size)$, I think it would take more memory than by using the shape $(time_steps,context,frame_size)$, because in the latter case the same vector (located somewhere in the memory) can be reused in different contexts and time steps.
End of explanation
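A small sanity check (illustrative only, assuming the 26-feature slicing done earlier): after adding the context, each utterance should have shape (n_frames, ctx_size, input_dim) = (n_frames, 21, 26).
```python
# Verify the per-utterance shape produced by ctx()
assert tr_in[0].shape[1:] == (ctx_size, input_dim)
```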
model = Sequential()
model.add(Reshape(input_shape=(ctx_size,input_dim),target_shape=(ctx_size*input_dim,)))
model.add(Dense(output_dim=hidden_num))
model.add(Activation('sigmoid'))
model.add(Dense(output_dim=output_dim))
model.add(Activation('softmax'))
optimizer= SGD(lr=1e-3,momentum=0.9,nesterov=False)
loss='categorical_crossentropy'
metrics=['accuracy']
model.compile(loss=loss, optimizer=optimizer,metrics=['accuracy'])
Explanation: Model definition
Since we have an input as a 3D shape, we use a Reshape layer at the start of the model to convert the input frames into a flat vector. Again, this is to save a little memory at the cost of the time it takes to reshape the input. Not sure if it's worth it or if it even works as intended (i.e. saving memory).
Everything else here is the same as with the standard MLP except for the learning rate, which has to be lower in order to reproduce the same results as in the thesis.
End of explanation
from random import shuffle
tr_hist=History('Train')
dev_hist=History('Dev')
tst_hist=History('Test')
tr_it=range(tr_in.shape[0])
for e in range(epoch_num):
print 'Epoch #{}/{}'.format(e+1,epoch_num)
sys.stdout.flush()
shuffle(tr_it)
for u in tqdm(tr_it):
l,a=model.train_on_batch(tr_in[u],tr_out[u])
tr_hist.r.addLA(l,a,tr_out[u].shape[0])
clear_output()
tr_hist.log()
for u in range(dev_in.shape[0]):
l,a=model.test_on_batch(dev_in[u],dev_out[u])
dev_hist.r.addLA(l,a,dev_out[u].shape[0])
dev_hist.log()
for u in range(tst_in.shape[0]):
l,a=model.test_on_batch(tst_in[u],tst_out[u])
tst_hist.r.addLA(l,a,tst_out[u].shape[0])
tst_hist.log()
print 'Done!'
Explanation: Training
End of explanation
import matplotlib.pyplot as P
%matplotlib inline
fig,ax=P.subplots(2,sharex=True,figsize=(12,10))
ax[0].set_title('Loss')
ax[0].plot(tr_hist.loss,label='Train')
ax[0].plot(dev_hist.loss,label='Dev')
ax[0].plot(tst_hist.loss,label='Test')
ax[0].legend()
ax[0].set_ylim((0.8,2))
ax[1].set_title('PER %')
ax[1].plot(100*(1-np.array(tr_hist.acc)),label='Train')
ax[1].plot(100*(1-np.array(dev_hist.acc)),label='Dev')
ax[1].plot(100*(1-np.array(tst_hist.acc)),label='Test')
ax[1].legend()
ax[1].set_ylim((32,42))
Explanation: Plotting progress
These can be handy for debugging. If you draw the graph using different hyperparameters you can establish if it underfits (i.e. the values are still decreasing at the end of the training) or overfits (the minimum is reached earlier and dev/test values begin increasing as train continues to decrease). In this case, you can see how the graph changes with different learning rate values. It's impossible to achieve a single optimal value, but this one seems to be fairly good.
End of explanation
print 'Min test PER: {:%}'.format(1-np.max(tst_hist.acc))
print 'Min dev PER epoch: #{}'.format((np.argmax(dev_hist.acc)+1))
print 'Test PER on min dev: {:%}'.format(1-tst_hist.acc[np.argmax(dev_hist.acc)])
Explanation: Final result
Here we reached the value from the thesis just fine, but we used a different learning rate. For some reason, the value from the thesis underfits by a great margin. Not sure if it's a mistake in the thesis or a consequence of some other difference in the setup.
End of explanation
wer=0.36999999
print 'Epoch where PER reached {:%}: #{}'.format(wer,np.where((1-np.array(tst_hist.acc))<wer)[0][0])
Explanation: Just as before we can check what epoch we reached the optimum.
End of explanation
err=0
cnt=0
for u in range(tst_in.shape[0]):
p=model.predict_on_batch(tst_in[u])
c=np.argmax(p,axis=-1)
err+=np.sum(c!=tst_out_dec[u])
cnt+=tst_out[u].shape[0]
print 'Manual PER: {:%}'.format(err/float(cnt))
print 'PER using average: {:%}'.format(1-tst_hist.acc[-1])
Explanation: Checking the accuracy calculation
When computing the final loss value, we simply measure the mean of the consecutive batch loss values, because we assume that weight updates are performed once per batch and the mean loss of the whole batch is used in the cross entropy to assess the model (just like in MSE).
With accuracy, however, it's not as simple as using the mean of all the batch accuracies. What we use instead is a weighted average where the weights are determined by the length of each batch/utterance. To make sure this is correct, I do a simple experiment here where I manually count the errors and sample amounts using the predict method. We can see that the values are identical, so using the weighted average is fine.
End of explanation |
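In other words (a restatement of the argument above, not additional analysis), the reported accuracy is the length-weighted mean of the per-utterance accuracies,
\begin{eqnarray}
\mathrm{acc} = \frac{\sum_u n_u \, \mathrm{acc}_u}{\sum_u n_u},
\end{eqnarray}
which is exactly what the manual error count reproduces.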
3,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise from which we expect values around
0 with less than 2 standard deviations. Covariance estimation and diagnostic
plots are based on [1].
References
[1] Engemann D. and Gramfort A. (2015) Automated model selection in covariance
estimation and spatial whitening of MEG and EEG signals, vol. 108,
328-342, NeuroImage.
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show whitening | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise from which we expect values around
0 with less than 2 standard deviations. Covariance estimation and diagnostic
plots are based on [1].
References
[1] Engemann D. and Gramfort A. (2015) Automated model selection in covariance
estimation and spatial whitening of MEG and EEG signals, vol. 108,
328-342, NeuroImage.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, method='iir', n_jobs=1)
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
Explanation: Set parameters
End of explanation
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
Explanation: Compute covariance using automated regularization
End of explanation
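Since the returned list is sorted from best to worst log-likelihood, the top estimate can be pulled out directly if only a single covariance is needed (an illustrative aside; the plotting below simply passes the whole list).
```python
# Best covariance estimate according to log-likelihood
best_cov = noise_covs[0]
print('Best estimator: %s' % best_cov['method'])
```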
evoked = epochs.average()
evoked.plot() # plot evoked response
# plot the whitened evoked data to see if baseline signals match the
# assumption of Gaussian white noise from which we expect values around
# 0 with less than 2 standard deviations. For the Global field power we expect
# a value of 1.
evoked.plot_white(noise_covs)
Explanation: Show whitening
End of explanation |
3,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
파이썬 기본 자료형 2부
수정 사항
문자열 메소드 사용 예제
Step1: 실제로 확인하면 웹사이트의 내용 전체가 하나의 문자열로 저장되어 있다.
주의
Step2: 소스코드에서 줄바꾸기, 띄어쓰기, 인용부호 등 특수 기호를 적절하게 해석하여 출력하고자 하면 print 명령어를 사용한다.
Step3: 문자열 자료형
Step4: 유니코드(unicode)
웹 상에서 정보를 추출할 경우 text_str의 경우처럼 유니코드(Unicode)로
문자열을 변환하는 방식을 사용하는 것을 권장한다.
예를 들어 아래와 같이 text_bytes을 선언하면 다른 형식으로 저장된다.
Step5: 유니코드 대 문자열
두 자료형은 거의 동일하며, 영어와 같은 라틴어 계열 이외에
한국어, 일어 등의 언어를 처리하기 위해서 유니코드가 표준으로 사용된다.
하지만 파이썬3은 문자열(str)로 통일해서 사용한다.
여기서는, 텍스트에 한국어, 일어, 중국어 등이 사용되었을 경우 unicode 방식으로
처리해 주어야 한다는 정도로만 기억하고 넘어간다.
웹사이트 소스코드 확인 방법
실제로 해당 웹사이트의 소스코드를 확인해보면 동일한 결과를 확인할 수 있다.
주의
Step6: 위 문자열에서 원하는 정보인 커피콩의 가격을 어떻게 추출할 것인가?
커피콩의 가격은 실시간으로 변한다. 하지만 다섯째 줄 끝부분에 위치하고
달러기호($)로 시작하며 x.xx 형식의 소수로 표현된 부분이 커피콩의 가격 정보이다.
따라서, 예를 들어 문자열인 ">$"의 위치를 알면 커피콩 가격정보를 얻을 수 있다.
그런데 특정 문자열 또는 문자의 위치를 어떻게 알 수 있을까?
바로 인덱스 정보와 슬라이싱 기능을 활용하면 된다.
인덱스
문자열에 사용되는 모든 문자의 위치는 인덱스(index)라는 고유한 번호를 갖는다.
인덱스는 0부터 시작하며 오른쪽으로 한 문자씩 이동할 때마다 1씩 증가한다.
주의
Step7: 특정 인덱스에 위치한 문자의 정보는 다음과 같이 확인한다.
0번 인덱스 값, 즉 첫째 문자
Step8: 1번 인덱스 값, 즉, 둘째 문자
Step9: 2번 인덱스 값, 즉, 셋째 문자
Step10: 등등.
-1번 인덱스
문자열이 길 경우 맨 오른편에 위치한 문자의 인덱스 번호를 확인하기가 어렵다.
그래서 파이썬에서는 -1을 마지막 문자의 인덱스로 사용한다.
즉, 맨 오른편의 인덱스는 -1이고, 그 왼편은 -2, 등등으로 진행한다.
Step11: 등등.
문자열의 길이와 인덱스
문자열의 길이보다 같거나 큰 인덱스를 사용하면 오류가 발생한다.
문자열의 길이는 len() 함수를 이용하여 확인할 수 있다.
Step12: 슬라이싱
문자열의 하나의 문자가 아닌 특정 구간 및 부분을 추출하고자 할 경우 슬라이싱을 사용한다.
슬라이싱은 다음과 같이 실행한다.
문자열변수[시작인덱스
Step13: kebap에서 ke 부분을 추출하고 싶다면 다음과 같이 하면 된다
Step14: 즉, 문자열 처음부터 2번 인덱스 전까지, 즉 두 번째 문자까지 모두 추출하는 것이다.
반면에 하나씩 건너서 추출하려면 다음처럼 하면 된다
Step15: 시작인덱스, 끝인덱스, 계단 각각의 인자가 경우에 따라 생략될 수도 있다.
그럴 때는 각각의 위치에 기본값(default)이 들어 있는 것으로 처리되며, 각 자리의 기본값은 다음과 같다.
시작인덱스의 기본값 = 0
끝인덱스의 기본값 = 문자열의 길이
계단의 기본값 = 1
Step16: 양수와 음수를 인덱스로 섞어서 사용할 수도 있다.
Step17: 주의
Step18: 아래와 같이 아무 것도 입력하지 않으면 해당 문자열 전체를 추출한다.
Step19: 시작인덱스 값이 끝 인덱스 값보다 같거나 작아야 제대로 추출한다.
그렇지 않으면 공문자열이 추출된다.
Step20: 이유는 슬라이싱은 기본적으로 작은 인덱스에 큰 인덱스 방향으로 확인하기 때문이다.
역순으로 추출하고자 한다면 계단을 음수로 사용하면 된다.
Step21: find() 메소드 활용하기
인덱스와 슬라이싱의 기능을 이해하였다면 이제 text 변수에 할당된 문자열에서 ">$"라는
문자열의 시작위치를 알아내기만 하면 된다.
아주 간단한 방법이 있다. 0번부터 시작해서 주욱 세어가면서 ">$"의 시작 문자인
">"의 인덱스를 확인하면 된다.
하지만 이런 방식은 아래와 같은 이유로 매우 위험하다.
셈이 틀릴 수 있다.
문자열이 길 경우 셈 자체가 불가능할 수 있다.
문자열이 조금만 변경되어도 새로 처음부터 세어야 하기 때문에 경우에 따라 재활용이 불가능하다.
이런 문제를 해결하는 좋은 방법이 있다.
바로 find()라는 문자열 메소드를 활용하면 된다.
Step22: 이제, 찾고자 하는 ">$" 문자열이 232번 인덱스에서 시작한다는 것을 알았다.
따라서 커피콩의 가격정보는 인덱스가 2보다 큰 234번이고 거기서부터 길이가 4인
부분문자열에 담겨 있게 된다.
Step23: 하지만, 여기서 234를 사용하기 보다는 find() 메소드를
직접 활용하는 것이 더욱 좋다.
Step24: 주의
Step25: 그래서 예를 들어 커피콩 가격이 6달러 이상이면 커피숍의 아메리카노 가격을 올리고,
그렇지 않으면 가격을 그대로 유지하는 것을 실행하도록 하는
코드를 작성할 수가 없다.
이유는, 문자열은 숫자가 아니라서 문자열과 숫자를 직접 비교할 수 없기 때문이다.
하지만 숫자로만 이루어닌 문자열을 진짜 숫자로 형변환시킬 수 있다.
예를 들어 int() 또는 float() 함수를 이용한다.
Step26: float() 함수를 이용하면 부동소수점 모양의 문자열을 부동소수점으로 형변환시킬 수 있다.
Step27: 주의
Step28: 주의
Step29: 부동소수점 모양의 문자열이 아니면 float() 함수도 오류를 발생시킨다.
Step30: 커피콩 가격 정보 활용 코드 예제
지금까지 배운 내용을 이용하여 커피콩 가격이 6.0달러 이상이면 커피숍의 아메리카노 가격을 올리고, 그렇지 않으면 가격을 그대로 유지하는 것을 실행하도록 하는 코드를 작성하면 다음과 같다.
가격 확인은 1초에 한 번 하는 것으로 한다.
시차를 두고 코드를 실행하기 위해 time 모듈의 sleep() 함수를 활용할 수 있다.
주의
Step31: 문자열 관련 메소드
find() 메소드처럼 문자열 자료형에만 사용하는 함수들이 있다.
이와같이 특정 자료형에만 사용하는 함수들을 __메소드__라 부른다.
보다 자세한 설명은 여기서는 하지 않는다.
다만 find() 메소드의 활용을 통해 보았듯이 특정 자료형을 잘 다루기 위해서는
어떤 경우에 어떤 메소드를 유용하게 활용할 수 있는지를 잘 파악해두는 것이 매우
중요하다는 점만 강조한다.
메소드 호출 방법
앞서 find() 메소드를 호출하는 방법을 기억해야 한다.
text.find("<$")
메소드는 일반적인 함수들과는 달리, 특정 자료형의 값이 먼저 언급된 다음에
호출된다.
주의
Step32: strip() 메소드는 문자열의 양 끝을 지정한 문자열 기준으로 삭제하는 방식으로 정리한다.
예를 들어, 문자열 양끝에 있는 스페이스를 삭제하고자 할 경우 아래와 같이 실행한다.
Step33: strip() 메소드를 인자 없이 호출하는 경우와 동일하다.
Step34: split() 메소드는 지정된 부분문자열을 기준으로 문자열을 쪼개어 문자열들의 리스트로 반환한다.
리스트 자료형은 이후에 자세히 다룬다. 여기서는 기본적으로 알고 있는 내용으로 이해하면 된다.
아래 예제는 ", ", 즉 콤마와 스페이스를 기준으로 문자열을 쪼갠다.
Step35: 두 개 이상의 메소드를 조합해서 활용할 수도 있다.
예를 들어, strip() 메소드를 먼저 실행한 다음에 그 결과에 split() 메소드를 실행하면
좀 더 산뜻한 결과를 얻을 수 있다.
Step36: replace() 메소드는 하나의 문자열을 다른 문자열로 대체한다.
예를 들어, " Mon"을 "Mon"으로 대체할 경우 아래와 같이 실행한다.
Step37: upper() 메소드는 모든 문자를 대문자로 변환시킨다.
Step38: lower() 메소드는 모든 문자를 소문자로 변환시킨다.
Step39: capitalize() 메소드는 제일 첫 문자를 대문자로 변환시킨다.
아래 예제는 변화가 없어 보인다. 이유는 첫 문자가 스페이스이기 때문이다.
Step40: title() 메소드는 각각의 단어의 첫 문자를 대문자로 변환시킨다.
참조
Step41: startswith() 메소드는 문자열이 특정 문자열로 시작하는지 여부를 판단해준다.
Step42: endswith() 메소드는 문자열이 특정 문자열로 끝나는지 여부를 판단해준다.
Step43: 불변 자료형
파이썬의 문자열 자료형의 값들은 변경이 불가능하다.
앞서 week_days에 할당된 문자열에 다양한 메소드를 적용하여 새로운 문자열을 생성하였지만
week_days에 할당된 문자열 자체는 전혀 변하지 않았음을 아래와 같이 확인할 수 있다.
Step44: 이와 같이 한 번 정해지면 절대 변경이 불가능한 자료형을 불변(immutable) 자료형이라 부른다.
주어진 문자열을 이용하여 새로운 문자열을 생성하고 활용하려면 새로운 변수에 저장하여 활용해야 한다.
Step45: 연습문제
애완동물의 목록을 할당받는 pets 변수가 아래와 같이 선언되어 있다.
Step46: 연습
애완동물의 종류를 의미하는 단어의 첫알파벳을 대문자로 바꾸려면 어떻게 해야 하는가?
단, 특정 메소드를 사용하여 한 줄 코드로 작성해야 한다.
견본답안
Step47: 연습
pets으로부터 대문자 C 문자 하나를 추출하라.
견본답안
Step48: 연습
hedgehog을 추출하려면?
견본답안
Step49: 연습 (이전 문제 이어서)
hdeo을 추출하려면?
견본답안
Step50: 연습
gohegdeh을 추출하려면?
견본답안
Step51: 연습
dogs와 cats 두 개의 변수가 다음과 같이 선언되었다.
Step52: 강아지와 고양이를 몇 마리씩 갖고 있는지 확인하는 방법은?
강아지가 고양이보다 몇 마리 더 많은지 확인하는 방법은?
견본답안
Step53: 연습
입력받은 문자열이 dog라는 부분문자열을 갖고 있는지 여부를 판별하는 함수
find_dog를 구현하라.
find_dog('Bull dog')
True
find_dog('강아지')
False
힌트
Step54: 견본답안
Step55: 연습
아래 코드는 커피콩의 현재 가격을 알아내어 일정 가격 이상이면
커피숍의 아메리카노 가격을 인상할 것을 권유하는 프로그램이다.
```python
import urllib.request
import time # 시간과 관련된 함수들의 모듈
price = 5.0
while price < 6.0
Step56: 예를 들어, 현재 커피콩의 가격이 5.7달러이고, 커피콩의 실시간 가격이
5.2달러 이하이면 아메리카노의 가격을 50센트 내리고
6.2달러 이상이면 50센트 올리라고 권유하고자 한다면 아래와 같이
price_setter() 함수를 호출하면 된다.
Step57: 연습
기상청에서 날씨 정보를 확인하는 프로그램을 작성하고자 한다.
먼저 기상청 정보를 담고 있는 아래 사이트의 소스코드를 읽어 온다.
http
Step58: 읽어 온 소소크드 내용의 앞 부분을 확인하면 다음과 같다.
Step59: 이제 비가 올지 여부를 설명하는 부분을 찾아서 비라는 단어의 포함여부에 따라 우산을 가져가야 하는지 여부를 결정하는 코드를 아래와 같이 작성할 수 있다. | Python Code:
import urllib.request
page = urllib.request.urlopen("http://beans-r-us.appspot.com/prices.html")
text = page.read().decode("utf8")
Explanation: 파이썬 기본 자료형 2부
수정 사항
문자열 메소드 사용 예제: 보다 실용적인 예제였으면 함.
요약
문자열 자료형 다루기
문자열 메소드 활용
응용: 웹 상에 있는 데이터를 가져와서 정보 활용하기
준비 사항
문자열의 정의화 기초적인 활용법에 대한 자세한 설명은
여기를
참조한다.
최종 목표
아래 사이트에서 커피콩의 가격정보 자동으로 확인하여 응용하기
http://beans-r-us.appspot.com/prices.html
위 사이트를 방문하면 실시간으로 변하는 커피콩의 시세를 아래와 같은 내용으로 확인할 수 있다.
참조: Head First Programming(한빛미디어) 2장
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/coffee-beans01.jpg" style="width:600">
</td>
</tr>
</table>
</p>
참조: http://beans-r-us.appspot.com/prices.html
이번 장에서는 언급된 웹사이트를 직접 방문하지 않으면서 실시간으로 변하는
커피콩 가격(위 그림에서는 5달러 27센트)을 확인하는 방법을 배운다.
기본적으로 두 가지 기술이 필요하다.
웹사이트 내용 읽어 들이기
문자열로 저장된 데이터에서 필요한 정보 확인하기
웹사이트 내용 읽어 들이기
웹사이트 주소를 이용하여 해당 사이트의 내용 전체를 읽어 들일 수 있다.
예를 들어 앞서 언급된 사이트의 소스코드 전체를 아래 방식으로 가져올 수 있다.
End of explanation
text
Explanation: 실제로 확인하면 웹사이트의 내용 전체가 하나의 문자열로 저장되어 있다.
주의: html 관련 이해할 수 없는 기호들은 여기서는 일단 무시하고 넘어가는 게 좋다.
또한, 위 코드를 자세히 이해하지 못해도 상관 없다.
특정 웹사이트의 소스크드를 가져오기 위해 위 코드 형식을 사용한다는 것만 기억해 두면 된다.
End of explanation
print(text)
Explanation: 소스코드에서 줄바꾸기, 띄어쓰기, 인용부호 등 특수 기호를 적절하게 해석하여 출력하고자 하면 print 명령어를 사용한다.
End of explanation
type(text)
Explanation: 문자열 자료형: str 과 unicode
문자열(str)
text에 저장된 값의 자료형은 문자열이다.
End of explanation
page = urllib.request.urlopen("http://beans-r-us.appspot.com/prices.html")
text_bytes = page.read()
type(text_bytes)
text_bytes
Explanation: 유니코드(unicode)
웹 상에서 정보를 추출할 경우 text_str의 경우처럼 유니코드(Unicode)로
문자열을 변환하는 방식을 사용하는 것을 권장한다.
예를 들어 아래와 같이 text_bytes을 선언하면 다른 형식으로 저장된다.
End of explanation
print(text)
Explanation: 유니코드 대 문자열
두 자료형은 거의 동일하며, 영어와 같은 라틴어 계열 이외에
한국어, 일어 등의 언어를 처리하기 위해서 유니코드가 표준으로 사용된다.
하지만 파이썬3은 문자열(str)로 통일해서 사용한다.
여기서는, 텍스트에 한국어, 일어, 중국어 등이 사용되었을 경우 unicode 방식으로
처리해 주어야 한다는 정도로만 기억하고 넘어간다.
웹사이트 소스코드 확인 방법
실제로 해당 웹사이트의 소스코드를 확인해보면 동일한 결과를 확인할 수 있다.
주의: 커피콩의 가격은 실시간으로 변한다. 하지만 가격 이외의 문장은 변하지 않는다.
윈도우 크롬
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/coffee-beans04.jpg" style="width:600">
</td>
</tr>
</table>
</p>
맥 크롬
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/coffee-beans02.png" style="width:600">
</td>
</tr>
</table>
</p>
크롬에서 소스코드 확인하는 법
윈도우 크롬
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/coffee-beans05.jpg" style="width:600">
</td>
</tr>
</table>
</p>
맥 크롬
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/coffee-beans03.png" style="width:600">
</td>
</tr>
</table>
</p>
문자열로 저장된 데이터에서 필요한 정보 확인하기
text에 저장된 문자열을 다시 확인해보자.
End of explanation
a_food = "kebap"
Explanation: 위 문자열에서 원하는 정보인 커피콩의 가격을 어떻게 추출할 것인가?
커피콩의 가격은 실시간으로 변한다. 하지만 다섯째 줄 끝부분에 위치하고
달러기호($)로 시작하며 x.xx 형식의 소수로 표현된 부분이 커피콩의 가격 정보이다.
따라서, 예를 들어 문자열인 ">$"의 위치를 알면 커피콩 가격정보를 얻을 수 있다.
그런데 특정 문자열 또는 문자의 위치를 어떻게 알 수 있을까?
바로 인덱스 정보와 슬라이싱 기능을 활용하면 된다.
인덱스
문자열에 사용되는 모든 문자의 위치는 인덱스(index)라는 고유한 번호를 갖는다.
인덱스는 0부터 시작하며 오른쪽으로 한 문자씩 이동할 때마다 1씩 증가한다.
주의: 파이썬을 포함해서 대부분의 프로그래밍 언어에서 인덱싱은 0부터 시작한다.
따라서 첫 째 문자를 확인하고자 할 때는 1이 아닌 0을 인덱스로 사용해야 한다.
예제를 통해 인덱스와 친숙해질 필요가 있다.
End of explanation
a_food[0]
Explanation: 특정 인덱스에 위치한 문자의 정보는 다음과 같이 확인한다.
0번 인덱스 값, 즉 첫째 문자
End of explanation
a_food[1]
Explanation: 1번 인덱스 값, 즉, 둘째 문자
End of explanation
a_food[2]
Explanation: 2번 인덱스 값, 즉, 셋째 문자
End of explanation
a_food[-1]
a_food[-2]
Explanation: 등등.
-1번 인덱스
문자열이 길 경우 맨 오른편에 위치한 문자의 인덱스 번호를 확인하기가 어렵다.
그래서 파이썬에서는 -1을 마지막 문자의 인덱스로 사용한다.
즉, 맨 오른편의 인덱스는 -1이고, 그 왼편은 -2, 등등으로 진행한다.
End of explanation
a_food[5]
len(a_food)
Explanation: 등등.
문자열의 길이와 인덱스
문자열의 길이보다 같거나 큰 인덱스를 사용하면 오류가 발생한다.
문자열의 길이는 len() 함수를 이용하여 확인할 수 있다.
End of explanation
a_food
Explanation: 슬라이싱
문자열의 하나의 문자가 아닌 특정 구간 및 부분을 추출하고자 할 경우 슬라이싱을 사용한다.
슬라이싱은 다음과 같이 실행한다.
문자열변수[시작인덱스 : 끝인덱스 : 계단(step)]
시작인덱스: 해당 인덱스부터 문자를 추출한다.
끝인덱스: 해당 인덱스 전까지 문자를 추출한다.
계단: 시작인덱스부터 몇 계단씩 건너뛰며 문자를 추출할지 결정한다. 예를 들어 계단값이 2라면 하나 건너 추출한다.
End of explanation
a_food[0 : 2 : 1]
Explanation: kebap에서 ke 부분을 추출하고 싶다면 다음과 같이 하면 된다:
End of explanation
a_food[0 : 4 : 2]
Explanation: 즉, 문자열 처음부터 2번 인덱스 전까지, 즉 두 번째 문자까지 모두 추출하는 것이다.
반면에 하나씩 건너서 추출하려면 다음처럼 하면 된다:
End of explanation
a_food[0 : 2]
a_food[: 2]
a_food[: 4 : 2]
a_food[ : : 2]
Explanation: 시작인덱스, 끝인덱스, 계단 각각의 인자가 경우에 따라 생략될 수도 있다.
그럴 때는 각각의 위치에 기본값(default)이 들어 있는 것으로 처리되며, 각 자리의 기본값은 다음과 같다.
시작인덱스의 기본값 = 0
끝인덱스의 기본값 = 문자열의 길이
계단의 기본값 = 1
End of explanation
a_food[ : -1 : 2]
Explanation: 양수와 음수를 인덱스로 섞어서 사용할 수도 있다.
End of explanation
a_food[: 10]
Explanation: 주의: -1은 문자열의 끝인덱스를 의미한다.
끝인덱스가 문자열의 길이보다 클 수도 있다.
다만 문자열의 길이 만큼만 문자를 확인한다.
End of explanation
a_food[:]
Explanation: 아래와 같이 아무 것도 입력하지 않으면 해당 문자열 전체를 추출한다.
End of explanation
a_food[3 : 1]
Explanation: 시작인덱스 값이 끝 인덱스 값보다 같거나 작아야 제대로 추출한다.
그렇지 않으면 공문자열이 추출된다.
End of explanation
a_food[3 : 1 : -1]
a_food[-1 : : -1]
Explanation: 이유는 슬라이싱은 기본적으로 작은 인덱스에 큰 인덱스 방향으로 확인하기 때문이다.
역순으로 추출하고자 한다면 계단을 음수로 사용하면 된다.
End of explanation
text.find(">$")
Explanation: find() 메소드 활용하기
인덱스와 슬라이싱의 기능을 이해하였다면 이제 text 변수에 할당된 문자열에서 ">$"라는
문자열의 시작위치를 알아내기만 하면 된다.
아주 간단한 방법이 있다. 0번부터 시작해서 주욱 세어가면서 ">$"의 시작 문자인
">"의 인덱스를 확인하면 된다.
하지만 이런 방식은 아래와 같은 이유로 매우 위험하다.
셈이 틀릴 수 있다.
문자열이 길 경우 셈 자체가 불가능할 수 있다.
문자열이 조금만 변경되어도 새로 처음부터 세어야 하기 때문에 경우에 따라 재활용이 불가능하다.
이런 문제를 해결하는 좋은 방법이 있다.
바로 find()라는 문자열 메소드를 활용하면 된다.
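참고로 find() 메소드는 찾는 부분문자열이 없으면 -1을 반환한다. 아래는 이를 확인하는 간단한 예시이다.
```python
"hello python".find("python")   # 6
"hello python".find("java")     # -1 (부분문자열이 없는 경우)
```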
End of explanation
print(text[234: 238])
Explanation: 이제, 찾고자 하는 ">$" 문자열이 232번 인덱스에서 시작한다는 것을 알았다.
따라서 커피콩의 가격정보는 그보다 인덱스가 2 큰 234번부터 시작하는, 길이가 4인
부분문자열에 담겨 있게 된다.
End of explanation
price_index = text.find(">$") + 2
bean_price_str = text[price_index : price_index + 4]
print(bean_price_str)
Explanation: 하지만, 여기서 234를 사용하기 보다는 find() 메소드를
직접 활용하는 것이 더욱 좋다.
End of explanation
type(bean_price_str)
Explanation: 주의:
bean_price_str 에 저장된 커피콩의 가격정보는 문자열로 저장되어 있다.
End of explanation
a_number = int('4')
print(a_number)
print(type(a_number))
Explanation: 그래서 예를 들어 커피콩 가격이 6달러 이상이면 커피숍의 아메리카노 가격을 올리고,
그렇지 않으면 가격을 그대로 유지하는 것을 실행하도록 하는
코드를 작성할 수가 없다.
이유는, 문자열은 숫자가 아니라서 문자열과 숫자를 직접 비교할 수 없기 때문이다.
하지만 숫자로만 이루어진 문자열은 진짜 숫자로 형변환시킬 수 있다.
예를 들어 int() 또는 float() 함수를 이용한다.
End of explanation
float('4.2') * 2
Explanation: float() 함수를 이용하면 부동소수점 모양의 문자열을 부동소수점으로 형변환시킬 수 있다.
End of explanation
'4.2' * 2
Explanation: 주의: 문자열과 숫자의 곱은 의미가 완전히 다르다.
End of explanation
int('4.2') * 2
Explanation: 주의: int() 함수는 정수모양의 문자열에만 사용할 수 있다.
End of explanation
float('4.5GB')
Explanation: 부동소수점 모양의 문자열이 아니면 float() 함수도 오류를 발생시킨다.
End of explanation
import urllib.request
import time # 시간과 관련된 함수들의 모듈
price = 5.0
while price < 6.0:
time.sleep(1) # 코드 실행을 1초 정지한다.
page = urllib.request.urlopen("http://beans-r-us.appspot.com/prices.html")
text = page.read().decode("utf8")
where = text.find(">$") + 2
price_str = text[where : where + 4] # 가격정보 문자열
price = float(price_str) # 숫자로 형변환
print("커피콩 현재 가격이", price, "입니다. 아메리카노 가격을 인상하세요!")
Explanation: 커피콩 가격 정보 활용 코드 예제
지금까지 배운 내용을 이용하여 커피콩 가격이 6.0달러 이상이면 커피숍의 아메리카노 가격을 올리고, 그렇지 않으면 가격을 그대로 유지하는 것을 실행하도록 하는 코드를 작성하면 다음과 같다.
가격 확인은 1초에 한 번 하는 것으로 한다.
시차를 두고 코드를 실행하기 위해 time 모듈의 sleep() 함수를 활용할 수 있다.
주의: 기준 가격을 높게 책정하면 너무 오랫동안 기다려야 할 수도 있다.
End of explanation
week_days = " Mon, Tue, Wed, Thu, Fri, Sat, Sun "
Explanation: 문자열 관련 메소드
find() 메소드처럼 문자열 자료형에만 사용하는 함수들이 있다.
이와같이 특정 자료형에만 사용하는 함수들을 __메소드__라 부른다.
보다 자세한 설명은 여기서는 하지 않는다.
다만 find() 메소드의 활용을 통해 보았듯이 특정 자료형을 잘 다루기 위해서는
어떤 경우에 어떤 메소드를 유용하게 활용할 수 있는지를 잘 파악해두는 것이 매우
중요하다는 점만 강조한다.
메소드 호출 방법
앞서 find() 메소드를 호출하는 방법을 기억해야 한다.
text.find(">$")
메소드는 일반적인 함수들과는 달리, 특정 자료형의 값이 먼저 언급된 다음에
호출된다.
주의: 메소드의 호출방식은 다른 자료형의 경우에도 동일하다.
문자열 메소드 추가 예제
find() 메소드 이외에 문자열과 관련된 메소드는 매우 많다.
여기서는 가장 많이 사용되는 메소드 몇 개를 소개하고자 한다.
strip()
split()
replace()
upper()
lower()
capitalize()
title()
startswith()
endswith()
예제를 통해 각 메소드의 활용법을 간략하게 확인한다.
먼저 week_days 변수에 요일들을 저장한다.
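참고로 문자열 자료형이 제공하는 메소드 전체 목록은 dir() 함수로 확인할 수 있다(간단한 예시).
```python
[m for m in dir(str) if not m.startswith('_')]
```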
End of explanation
week_days.strip(" ")
Explanation: strip() 메소드는 문자열의 양 끝을 지정한 문자열 기준으로 삭제하는 방식으로 정리한다.
예를 들어, 문자열 양끝에 있는 스페이스를 삭제하고자 할 경우 아래와 같이 실행한다.
End of explanation
week_days.strip()
Explanation: strip() 메소드를 인자 없이 호출하는 경우와 동일하다.
End of explanation
week_days.split(", ")
Explanation: split() 메소드는 지정된 부분문자열을 기준으로 문자열을 쪼개어 문자열들의 리스트로 반환한다.
리스트 자료형은 이후에 자세히 다룬다. 여기서는 기본적으로 알고 있는 내용으로 이해하면 된다.
아래 예제는 ", ", 즉 콤마와 스페이스를 기준으로 문자열을 쪼갠다.
End of explanation
week_days.strip(" ").split(", ")
Explanation: 두 개 이상의 메소드를 조합해서 활용할 수도 있다.
예를 들어, strip() 메소드를 먼저 실행한 다음에 그 결과에 split() 메소드를 실행하면
좀 더 산뜻한 결과를 얻을 수 있다.
End of explanation
week_days.replace(" Mon", "Mon")
Explanation: replace() 메소드는 하나의 문자열을 다른 문자열로 대체한다.
예를 들어, " Mon"을 "Mon"으로 대체할 경우 아래와 같이 실행한다.
End of explanation
week_days.upper()
week_days.strip().upper()
Explanation: upper() 메소드는 모든 문자를 대문자로 변환시킨다.
End of explanation
week_days.lower()
week_days.strip().lower()
week_days.strip().lower().split(", ")
Explanation: lower() 메소드는 모든 문자를 소문자로 변환시킨다.
End of explanation
week_days.capitalize()
week_days.strip().capitalize()
Explanation: capitalize() 메소드는 제일 첫 문자를 대문자로 변환시킨다.
아래 예제는 변화가 없어 보인다. 이유는 첫 문자가 스페이스이기 때문이다.
End of explanation
week_days.title()
week_days.strip().title()
Explanation: title() 메소드는 각각의 단어의 첫 문자를 대문자로 변환시킨다.
참조: 영문 책 제목의 타이틀에서 각 단어의 첫 알파벳이 대문자로 쓰여지는 경우가 많다.
End of explanation
week_days.startswith(" M")
Explanation: startswith() 메소드는 문자열이 특정 문자열로 시작하는지 여부를 판단해준다.
End of explanation
week_days.endswith("n ")
Explanation: endswith() 메소드는 문자열이 특정 문자열로 끝나는지 여부를 판단해준다.
End of explanation
week_days
Explanation: 불변 자료형
파이썬의 문자열 자료형의 값들은 변경이 불가능하다.
앞서 week_days에 할당된 문자열에 다양한 메소드를 적용하여 새로운 문자열을 생성하였지만
week_days에 할당된 문자열 자체는 전혀 변하지 않았음을 아래와 같이 확인할 수 있다.
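예를 들어 문자열의 특정 인덱스에 새로운 문자를 직접 할당하려고 하면 아래와 같이 오류가 발생한다(간단한 예시).
```python
s = "kebap"
s[0] = "K"      # TypeError: 'str' object does not support item assignment
```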
End of explanation
stripped_week_days = week_days.strip()
stripped_week_days
Explanation: 이와 같이 한 번 정해지면 절대 변경이 불가능한 자료형을 불변(immutable) 자료형이라 부른다.
주어진 문자열을 이용하여 새로운 문자열을 생성하고 활용하려면 새로운 변수에 저장하여 활용해야 한다.
End of explanation
pets = 'dog cat hedgehog pig swan fish bird'
Explanation: 연습문제
애완동물의 목록을 할당받는 pets 변수가 아래와 같이 선언되어 있다.
End of explanation
pets.title()
Explanation: 연습
애완동물의 종류를 의미하는 단어의 첫알파벳을 대문자로 바꾸려면 어떻게 해야 하는가?
단, 특정 메소드를 사용하여 한 줄 코드로 작성해야 한다.
견본답안:
End of explanation
pets.title()[4]
Explanation: 연습
pets으로부터 대문자 C 문자 하나를 추출하라.
견본답안:
End of explanation
pets[8 : 16]
Explanation: 연습
hedgehog을 추출하려면?
견본답안:
End of explanation
pets[8 : 16 : 2]
Explanation: 연습 (이전 문제 이어서)
hdeo을 추출하려면?
견본답안:
End of explanation
pets[15: 7 : -1]
Explanation: 연습
gohegdeh을 추출하려면?
견본답안:
End of explanation
dogs, cats = '8', '4'
Explanation: 연습
dogs와 cats 두 개의 변수가 다음과 같이 선언되었다.
End of explanation
print(int(dogs))
print(int(cats))
print(abs(int(dogs) - int(cats)))
Explanation: 강아지와 고양이를 몇 마리씩 갖고 있는지 확인하는 방법은?
강아지가 고양이보다 몇 마리 더 많은지 확인하는 방법은?
견본답안:
End of explanation
'ab' in 'abc'
'cat' in 'casting'
Explanation: 연습
입력받은 문자열이 dog라는 부분문자열을 갖고 있는지 여부를 판별하는 함수
find_dog를 구현하라.
find_dog('Bull dog')
True
find_dog('강아지')
False
힌트: 특정 문자열이 주어진 문자열에 부분문자열로 포함되어 있는지 여부를 판단해 주는 방식을
활용한다. 아래 예제들을 참조하라.
End of explanation
def find_dog(word):
if 'dog' in word:
found_dog = True
else:
found_dog = False
return found_dog
find_dog('Bull dog')
find_dog('강아지')
Explanation: 견본답안:
End of explanation
import urllib.request
import time # 시간과 관련된 함수들의 모듈
def price_setter(b_price, a_price):
price = b_price
    while b_price - 0.5 < price < b_price + 0.5:
time.sleep(1) # 코드 실행을 1초 정지한다.
page = urllib.request.urlopen("http://beans-r-us.appspot.com/prices.html")
text = page.read().decode("utf8")
where = text.find(">$") + 2
price_str = text[where : where + 4] # 가격정보 문자열
price = float(price_str) # 숫자로 형변환
print("현재 커피콩 가격이", price, "달러 입니다.")
    if price <= b_price - 0.5:
print("아메리카노 가격을", a_price, "달러만큼 인하하세요!")
else:
print("아메리카노 가격을", a_price, "달러만큼 인상하세요!")
Explanation: 연습
아래 코드는 커피콩의 현재 가격을 알아내어 일정 가격 이상이면
커피숍의 아메리카노 가격을 인상할 것을 권유하는 프로그램이다.
```python
import urllib.request
import time # 시간과 관련된 함수들의 모듈
price = 5.0
while price < 6.0:
time.sleep(1) # 코드 실행을 1초 정지한다.
page = urllib.request.urlopen("http://beans-r-us.appspot.com/prices.html")
text = page.read().decode("utf8")
where = text.find(">$") + 2
price_str = text[where : where + 4] # 가격정보 문자열
price = float(price_str) # 숫자로 형변환
print("커피콩 현재 가격이", price, "입니다. 아메리카노 가격을 인상하세요!")
```
위 코드를 수정하여, 아래 내용을 수행하는 함수를 작성하라.
함수 이름: price_setter
함수에 사용되는 인자 두 개
첫째 인자(b_price): 기존의 커피콩 가격
둘째 인자(a_price): 아메리카노 인상 또는 인하 가격
price_setter(b_price, a_price)를 실행할 때
b_price는 커피콩의 기존 가격을 의미한다.
서버의 특징 상 5.5와 6.0 사이의 숫자로 주는 게 좋다.
커피콩의 실시간 가격이 b_price 보다 0.5 달러 이하면
아메리카노 가격을 a_price 만큼 내릴 것을 권유
커피콩의 실시간 가격이 b_price 보다 0.5 달러 이상이면
아메리카노 가격을 a_price 만큼 올릴 것을 권유
견본답안:
End of explanation
price_setter(5.7, 0.5)
Explanation: 예를 들어, 현재 커피콩의 가격이 5.7달러이고, 커피콩의 실시간 가격이
5.2달러 이하이면 아메리카노의 가격을 50센트 내리고
6.2달러 이상이면 50센트 올리라고 권유하고자 한다면 아래와 같이
price_setter() 함수를 호출하면 된다.
End of explanation
import urllib.request
page = urllib.request.urlopen("http://www.weather.go.kr/weather/forecast/mid-term-rss3.jsp?stnId-108")
text = page.read().decode("utf8")
Explanation: 연습
기상청에서 날씨 정보를 확인하는 프로그램을 작성하고자 한다.
먼저 기상청 정보를 담고 있는 아래 사이트의 소스코드를 읽어 온다.
http://www.weather.go.kr/weather/forecast/mid-term-rss3.jsp?stnId-108
End of explanation
text[0:1000]
Explanation: 읽어 온 소스코드 내용의 앞 부분을 확인하면 다음과 같다.
End of explanation
where_s = text.find("CDATA[") + 6    # "CDATA[" 바로 다음부터 날씨 설명이 시작된다 (len("CDATA[") == 6)
where_e = text.find("]]></wf>")
text_weather = text[where_s : where_e]
text_weather_clean = text_weather.replace("<br />", " ")
if '비' in text_weather_clean:
print("우산을 가져가세요!")
else:
print("우산이 필요 없습니다!")
Explanation: 이제 비가 올지 여부를 설명하는 부분을 찾아서 비라는 단어의 포함여부에 따라 우산을 가져가야 하는지 여부를 결정하는 코드를 아래와 같이 작성할 수 있다.
End of explanation |
3,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: フェデレーテッドラーニングリサーチの TFF
Step2: TFF が動作していることを確認します。
Step4: 入力データを準備する
このセクションでは、TFF に含まれる EMNIST データセットを読み込んで事前処理します。EMNIST データセットの詳細は、画像分類のフェデレーテッドラーニングチュートリアルをご覧ください。
Step6: モデルを定義する
ここでは、元の FedAvg CNN に基づいて Keras モデルを定義し、それを tff.learning.Model インスタンスにラッピングして TFF が消費できるようにします。
モデルのみを直接生成する代わりに、モデルを生成する関数が必要となることに注意してください。また、その関数は構築済みのモデルをキャプチャするだけでなく、呼び出されるコンテキストで作成する必要があります。これは、TFF がデバイスで利用されるように設計されており、リソースが作られるタイミングを制御することで、キャプチャしてパッケージ化できる必要があるためです。
Step7: モデルのトレーニングとトレーニングメトリックの出力
フェデレーテッドアベレージングアルゴリズムを作成し、定義済みのモデルを EMNIST データセットでトレーニングする準備が整いました。
まず、tff.learning.build_federated_averaging_process API を使用して、フェデレーテッドアベレージングアルゴリズムを構築する必要があります。
Step11: では、フェデレーテッドアベレージングアルゴリズムを実行しましょう。TFF の観点からフェデレーテッドアベレージングアルゴリズムを実行するには、次のようになります。
アルゴリズムを初期化し、サーバーの初期状態を取得します。サーバーの状態には、アルゴリズムを実行するために必要な情報が含まれます。TFF は関数型であるため、この状態には、アルゴリズムが使用するオプティマイザの状態(慣性項)だけでなく、モデルパラメータ自体も含まれることを思い出してください。これらは引数として渡され、TFF 計算の結果として返されます。
ラウンドごとにアルゴリズムを実行します。各ラウンドでは、新しいサーバーの状態が、データでモデルをトレーニングしている各クライアントの結果として返されます。通常、1 つのラウンドでは次のことが発生します。
サーバーはすべての参加クライアントにモデルをブロードキャストします。
各クライアントは、モデルとそのデータに基づいて作業を実施します。
サーバーはすべてのモデルを集約し、新しいモデルを含むサーバーの状態を生成します。
詳細については、カスタムフェデレーテッドアルゴリズム、パート 2
Step12: 上記に示されるルートログディレクトリで TensorBoard を起動すると、トレーニングメトリックが表示されます。データの読み込みには数秒かかることがあります。Loss と Accuracy を除き、ブロードキャストされ集約されたデータの量も出力されます。ブロードキャストされたデータは、各クライアントにサーバーがプッシュしたテンソルで、集約データとは各クライアントがサーバーに返すテンソルを指します。
Step15: カスタムブロードキャストと集約関数を構築する
tensor_encoding API を使用して、ブロードキャストされたデータと集約データに対して非可逆圧縮アルゴリズムを使用する関数を実装しましょう。
まず、2 つの関数を定義します。
broadcast_encoder_fn
Step16: TFF は、エンコーダ関数を tff.learning.build_federated_averaging_process API が消費できる形式に変換する API を提供しています。tff.learning.framework.build_encoded_broadcast_from_model と tff.learning.framework.build_encoded_mean_from_model を使用することで、tff.learning.build_federated_averaging_process の broadcast_process と aggregation_process 引数に渡して、非可逆圧縮アルゴリズムでフェデレーテッドアベレージングアルゴリズムを作成するための関数を 2 つ作成することができます。
Step17: もう一度モデルをトレーニングする
では、新しいフェデレーテッドアベレージングアルゴリズムを実行しましょう。
Step18: もう一度 TensorBoard を起動して、2 つの実行のトレーニングメトリックを比較します。
Tensorboard を見てわかるように、broadcasted_bits と aggregated_bits 図の original と compression の曲線に大きな減少を確認できます。loss と sparse_categorical_accuracy 図では、この 2 つの曲線は非常に似通っていました。
最後に、元のフェデレーテッドアベレージングアルゴリズムに似たパフォーマンスを達成できる圧縮アルゴリズムを実装しながら、通信コストを大幅に削減することができました。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
!pip install --quiet --upgrade tensorflow-model-optimization
%load_ext tensorboard
import functools
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
from tensorflow_model_optimization.python.core.internal import tensor_encoding as te
Explanation: フェデレーテッドラーニングリサーチの TFF: モデルと更新圧縮
注意: この Colab は <a>最新リリースバージョン</a>の <code>tensorflow_federated</code> pip パッケージでの動作が確認されていますが、Tensorflow Federated プロジェクトは現在もプレリリース開発の段階にあるため、master では動作しない可能性があります。
このチュートリアルでは、EMNIST データセットを使用しながら、tff.learning.build_federated_averaging_process API と tensor_encoding API を使用するフェデレーテッドアベレージングアルゴリズムにおける通信コストを削減するために非可逆圧縮アルゴリズムを有効化する方法を実演します。フェデレーテッドアベレージングアルゴリズムの詳細については、論文「Communication-Efficient Learning of Deep Networks from Decentralized Data」をご覧ください。
始める前に
始める前に、次のコードを実行し、環境が正しくセットアップされていることを確認してください。挨拶文が表示されない場合は、インストールガイドで手順を確認してください。
End of explanation
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
Explanation: TFF が動作していることを確認します。
End of explanation
# This value only applies to EMNIST dataset, consider choosing appropriate
# values if switching to other datasets.
MAX_CLIENT_DATASET_SIZE = 418
CLIENT_EPOCHS_PER_ROUND = 1
CLIENT_BATCH_SIZE = 20
TEST_BATCH_SIZE = 500
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(
only_digits=True)
def reshape_emnist_element(element):
return (tf.expand_dims(element['pixels'], axis=-1), element['label'])
def preprocess_train_dataset(dataset):
Preprocessing function for the EMNIST training dataset.
return (dataset
# Shuffle according to the largest client dataset
.shuffle(buffer_size=MAX_CLIENT_DATASET_SIZE)
# Repeat to do multiple local epochs
.repeat(CLIENT_EPOCHS_PER_ROUND)
# Batch to a fixed client batch size
.batch(CLIENT_BATCH_SIZE, drop_remainder=False)
# Preprocessing step
.map(reshape_emnist_element))
emnist_train = emnist_train.preprocess(preprocess_train_dataset)
Explanation: 入力データを準備する
このセクションでは、TFF に含まれる EMNIST データセットを読み込んで事前処理します。EMNIST データセットの詳細は、画像分類のフェデレーテッドラーニングチュートリアルをご覧ください。
End of explanation
def create_original_fedavg_cnn_model(only_digits=True):
The CNN model used in https://arxiv.org/abs/1602.05629.
data_format = 'channels_last'
max_pool = functools.partial(
tf.keras.layers.MaxPooling2D,
pool_size=(2, 2),
padding='same',
data_format=data_format)
conv2d = functools.partial(
tf.keras.layers.Conv2D,
kernel_size=5,
padding='same',
data_format=data_format,
activation=tf.nn.relu)
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
conv2d(filters=32),
max_pool(),
conv2d(filters=64),
max_pool(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10 if only_digits else 62),
tf.keras.layers.Softmax(),
])
return model
# Gets the type information of the input data. TFF is a strongly typed
# functional programming framework, and needs type information about inputs to
# the model.
input_spec = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0]).element_spec
def tff_model_fn():
keras_model = create_original_fedavg_cnn_model()
return tff.learning.from_keras_model(
keras_model=keras_model,
input_spec=input_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
Explanation: モデルを定義する
ここでは、元の FedAvg CNN に基づいて Keras モデルを定義し、それを tff.learning.Model インスタンスにラッピングして TFF が消費できるようにします。
モデルのみを直接生成する代わりに、モデルを生成する関数が必要となることに注意してください。また、その関数は構築済みのモデルをキャプチャするだけでなく、呼び出されるコンテキストで作成する必要があります。これは、TFF がデバイスで利用されるように設計されており、リソースが作られるタイミングを制御することで、キャプチャしてパッケージ化できる必要があるためです。
End of explanation
federated_averaging = tff.learning.build_federated_averaging_process(
model_fn=tff_model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
Explanation: モデルのトレーニングとトレーニングメトリックの出力
フェデレーテッドアベレージングアルゴリズムを作成し、定義済みのモデルを EMNIST データセットでトレーニングする準備が整いました。
まず、tff.learning.build_federated_averaging_process API を使用して、フェデレーテッドアベレージングアルゴリズムを構築する必要があります。
End of explanation
#@title Load utility functions
def format_size(size):
A helper function for creating a human-readable size.
size = float(size)
for unit in ['B','KiB','MiB','GiB']:
if size < 1024.0:
return "{size:3.2f}{unit}".format(size=size, unit=unit)
size /= 1024.0
return "{size:.2f}{unit}".format(size=size, unit='TiB')
def set_sizing_environment():
Creates an environment that contains sizing information.
# Creates a sizing executor factory to output communication cost
# after the training finishes. Note that sizing executor only provides an
# estimate (not exact) of communication cost, and doesn't capture cases like
# compression of over-the-wire representations. However, it's perfect for
# demonstrating the effect of compression in this tutorial.
sizing_factory = tff.framework.sizing_executor_factory()
# TFF has a modular runtime you can configure yourself for various
# environments and purposes, and this example just shows how to configure one
# part of it to report the size of things.
context = tff.framework.ExecutionContext(executor_fn=sizing_factory)
tff.framework.set_default_context(context)
return sizing_factory
def train(federated_averaging_process, num_rounds, num_clients_per_round, summary_writer):
Trains the federated averaging process and output metrics.
# Create a environment to get communication cost.
environment = set_sizing_environment()
# Initialize the Federated Averaging algorithm to get the initial server state.
state = federated_averaging_process.initialize()
with summary_writer.as_default():
for round_num in range(num_rounds):
# Sample the clients parcitipated in this round.
sampled_clients = np.random.choice(
emnist_train.client_ids,
size=num_clients_per_round,
replace=False)
# Create a list of `tf.Dataset` instances from the data of sampled clients.
sampled_train_data = [
emnist_train.create_tf_dataset_for_client(client)
for client in sampled_clients
]
# Round one round of the algorithm based on the server state and client data
# and output the new state and metrics.
state, metrics = federated_averaging_process.next(state, sampled_train_data)
# For more about size_info, please see https://www.tensorflow.org/federated/api_docs/python/tff/framework/SizeInfo
size_info = environment.get_size_info()
broadcasted_bits = size_info.broadcast_bits[-1]
aggregated_bits = size_info.aggregate_bits[-1]
print('round {:2d}, metrics={}, broadcasted_bits={}, aggregated_bits={}'.format(round_num, metrics, format_size(broadcasted_bits), format_size(aggregated_bits)))
# Add metrics to Tensorboard.
for name, value in metrics['train']._asdict().items():
tf.summary.scalar(name, value, step=round_num)
# Add broadcasted and aggregated data size to Tensorboard.
tf.summary.scalar('cumulative_broadcasted_bits', broadcasted_bits, step=round_num)
tf.summary.scalar('cumulative_aggregated_bits', aggregated_bits, step=round_num)
summary_writer.flush()
# Clean the log directory to avoid conflicts.
!rm -R /tmp/logs/scalars/*
# Set up the log directory and writer for Tensorboard.
logdir = "/tmp/logs/scalars/original/"
summary_writer = tf.summary.create_file_writer(logdir)
train(federated_averaging_process=federated_averaging, num_rounds=10,
num_clients_per_round=10, summary_writer=summary_writer)
Explanation: では、フェデレーテッドアベレージングアルゴリズムを実行しましょう。TFF の観点からフェデレーテッドアベレージングアルゴリズムを実行するには、次のようになります。
アルゴリズムを初期化し、サーバーの初期状態を取得します。サーバーの状態には、アルゴリズムを実行するために必要な情報が含まれます。TFF は関数型であるため、この状態には、アルゴリズムが使用するオプティマイザの状態(慣性項)だけでなく、モデルパラメータ自体も含まれることを思い出してください。これらは引数として渡され、TFF 計算の結果として返されます。
ラウンドごとにアルゴリズムを実行します。各ラウンドでは、新しいサーバーの状態が、データでモデルをトレーニングしている各クライアントの結果として返されます。通常、1 つのラウンドでは次のことが発生します。
サーバーはすべての参加クライアントにモデルをブロードキャストします。
各クライアントは、モデルとそのデータに基づいて作業を実施します。
サーバーはすべてのモデルを集約し、新しいモデルを含むサーバーの状態を生成します。
詳細については、カスタムフェデレーテッドアルゴリズム、パート 2: フェデレーテッドアベレージングの実装チュートリアルをご覧ください。
トレーニングメトリックは、トレーニング後に表示できるように、TensorBoard ディレクトリに書き込まれます。
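参考までに、上記の流れを最小限のコードにすると次のようになります(上で定義した train() 関数の骨格と同じスケッチで、sampled_train_data は各ラウンドでサンプリングしたクライアントデータのリストを想定しています)。
```python
state = federated_averaging.initialize()
for round_num in range(10):
    # sampled_train_data: 各クライアントの tf.data.Dataset のリスト(仮)
    state, metrics = federated_averaging.next(state, sampled_train_data)
```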
End of explanation
%tensorboard --logdir /tmp/logs/scalars/ --port=0
Explanation: 上記に示されるルートログディレクトリで TensorBoard を起動すると、トレーニングメトリックが表示されます。データの読み込みには数秒かかることがあります。Loss と Accuracy を除き、ブロードキャストされ集約されたデータの量も出力されます。ブロードキャストされたデータは、各クライアントにサーバーがプッシュしたテンソルで、集約データとは各クライアントがサーバーに返すテンソルを指します。
End of explanation
def broadcast_encoder_fn(value):
Function for building encoded broadcast.
spec = tf.TensorSpec(value.shape, value.dtype)
if value.shape.num_elements() > 10000:
return te.encoders.as_simple_encoder(
te.encoders.uniform_quantization(bits=8), spec)
else:
return te.encoders.as_simple_encoder(te.encoders.identity(), spec)
def mean_encoder_fn(value):
Function for building encoded mean.
spec = tf.TensorSpec(value.shape, value.dtype)
if value.shape.num_elements() > 10000:
return te.encoders.as_gather_encoder(
te.encoders.uniform_quantization(bits=8), spec)
else:
return te.encoders.as_gather_encoder(te.encoders.identity(), spec)
Explanation: カスタムブロードキャストと集約関数を構築する
tensor_encoding API を使用して、ブロードキャストされたデータと集約データに対して非可逆圧縮アルゴリズムを使用する関数を実装しましょう。
まず、2 つの関数を定義します。
broadcast_encoder_fn: サーバーのテンソルまたは変数をクライアント通信にエンコードする te.core.SimpleEncoder のインスタンスを作成します(ブロードキャストデータ)。
mean_encoder_fn: クライアントのテンソルまたは変数をサーバー通信にエンコードする te.core.GatherEncoder インスタンスを作成します(集約データ)。
一度にモデル全体に圧縮メソッドを適用しないことに十分に注意してください。モデルの各変数を圧縮するかどうか、またはどのように圧縮するかは、個別に決定します。これは一般的に、バイアスなどの小さな変数は不正確性により敏感であり、比較的小さいことから、潜在的な通信の節約量も比較的小さくなるためです。そのため、デフォルトでは小さな変数を圧縮しません。この例では、10000 個を超える要素を持つ変数ごとに 8 ビット(256 バケット)の均一量子化を適用し、ほかの変数にのみ ID を適用します。
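参考までに、しきい値の判定部分だけを取り出した小さなスケッチです(形状は説明用の仮のものです)。
```python
spec = tf.TensorSpec((512, 512), tf.float32)
print(spec.shape.num_elements() > 10000)        # True  -> 8 ビット量子化が適用される
spec_small = tf.TensorSpec((10,), tf.float32)
print(spec_small.shape.num_elements() > 10000)  # False -> identity エンコーダのまま
```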
End of explanation
encoded_broadcast_process = (
tff.learning.framework.build_encoded_broadcast_process_from_model(
tff_model_fn, broadcast_encoder_fn))
encoded_mean_process = (
tff.learning.framework.build_encoded_mean_process_from_model(
tff_model_fn, mean_encoder_fn))
federated_averaging_with_compression = tff.learning.build_federated_averaging_process(
tff_model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
broadcast_process=encoded_broadcast_process,
aggregation_process=encoded_mean_process)
Explanation: TFF は、エンコーダ関数を tff.learning.build_federated_averaging_process API が消費できる形式に変換する API を提供しています。tff.learning.framework.build_encoded_broadcast_from_model と tff.learning.framework.build_encoded_mean_from_model を使用することで、tff.learning.build_federated_averaging_process の broadcast_process と aggregation_process 引数に渡して、非可逆圧縮アルゴリズムでフェデレーテッドアベレージングアルゴリズムを作成するための関数を 2 つ作成することができます。
End of explanation
logdir_for_compression = "/tmp/logs/scalars/compression/"
summary_writer_for_compression = tf.summary.create_file_writer(
logdir_for_compression)
train(federated_averaging_process=federated_averaging_with_compression,
num_rounds=10,
num_clients_per_round=10,
summary_writer=summary_writer_for_compression)
Explanation: もう一度モデルをトレーニングする
では、新しいフェデレーテッドアベレージングアルゴリズムを実行しましょう。
End of explanation
%tensorboard --logdir /tmp/logs/scalars/ --port=0
Explanation: もう一度 TensorBoard を起動して、2 つの実行のトレーニングメトリックを比較します。
Tensorboard を見てわかるように、broadcasted_bits と aggregated_bits 図の original と compression の曲線に大きな減少を確認できます。loss と sparse_categorical_accuracy 図では、この 2 つの曲線は非常に似通っていました。
最後に、元のフェデレーテッドアベレージングアルゴリズムに似たパフォーマンスを達成できる圧縮アルゴリズムを実装しながら、通信コストを大幅に削減することができました。
End of explanation |
3,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/arhmm_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Step1: Plot dynamics functions
Step2: Sample data from the ARHMM
Step3: Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state while the solid lines are the actual observations sampled from the HMM.
Step4: Fit an ARHMM | Python Code:
!pip install git+git://github.com/lindermanlab/ssm-jax-refactor.git
import ssm
import copy
import jax.numpy as np
import jax.random as jr
from tensorflow_probability.substrates import jax as tfp
from ssm.distributions.linreg import GaussianLinearRegression
from ssm.arhmm import GaussianARHMM
from ssm.utils import find_permutation, random_rotation
from ssm.plots import gradient_cmap # , white_to_color_cmap
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("white")
sns.set_context("talk")
color_names = ["windows blue", "red", "amber", "faded green", "dusty purple", "orange", "brown", "pink"]
colors = sns.xkcd_palette(color_names)
cmap = gradient_cmap(colors)
# Make a transition matrix
num_states = 5
transition_probs = (np.arange(num_states) ** 10).astype(float)
transition_probs /= transition_probs.sum()
transition_matrix = np.zeros((num_states, num_states))
for k, p in enumerate(transition_probs[::-1]):
transition_matrix += np.roll(p * np.eye(num_states), k, axis=1)
plt.imshow(transition_matrix, vmin=0, vmax=1, cmap="Greys")
plt.xlabel("next state")
plt.ylabel("current state")
plt.title("transition matrix")
plt.colorbar()
plt.savefig("arhmm-transmat.pdf")
# Make observation distributions
data_dim = 2
num_lags = 1
keys = jr.split(jr.PRNGKey(0), num_states)
angles = np.linspace(0, 2 * np.pi, num_states, endpoint=False)
theta = np.pi / 25 # rotational frequency
weights = np.array([0.8 * random_rotation(key, data_dim, theta=theta) for key in keys])
biases = np.column_stack([np.cos(angles), np.sin(angles), np.zeros((num_states, data_dim - 2))])
covariances = np.tile(0.001 * np.eye(data_dim), (num_states, 1, 1))
# Compute the stationary points
stationary_points = np.linalg.solve(np.eye(data_dim) - weights, biases)
print(theta / (2 * np.pi) * 360)
print(360 / 5)
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/arhmm_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Autoregressive (AR) HMM Demo
Modified from
https://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/arhmm-example.ipynb
This notebook illustrates the use of the auto_regression observation model.
Let $x_t$ denote the observation at time $t$. Let $z_t$ denote the corresponding discrete latent state.
The autoregressive hidden Markov model has the following likelihood,
$$
\begin{align}
x_t \mid x_{t-1}, z_t &\sim
\mathcal{N}\left(A_{z_t} x_{t-1} + b_{z_t}, Q_{z_t} \right).
\end{align}
$$
(Technically, higher-order autoregressive processes with extra linear terms from inputs are also implemented.)
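As a minimal sketch of this conditional distribution (using the parameters defined in the cell above, not the ssm API; the state index and previous observation are arbitrary choices for illustration):
```python
# One step of the AR dynamics for a fixed state k (illustrative only)
k = 0
x_prev = np.zeros(data_dim)
cond_mean = weights[k] @ x_prev + biases[k]          # A_k x_{t-1} + b_k
noise = np.linalg.cholesky(covariances[k]) @ jr.normal(jr.PRNGKey(1), (data_dim,))
x_t = cond_mean + noise                              # one draw from N(A_k x_{t-1} + b_k, Q_k)
```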
End of explanation
if data_dim == 2:
lim = 5
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6))
for k in range(num_states):
A, b = weights[k], biases[k]
dxydt_m = xy.dot(A.T) + b - xy
axs[k].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[k % len(colors)])
axs[k].set_xlabel("$y_1$")
# axs[k].set_xticks([])
if k == 0:
axs[k].set_ylabel("$y_2$")
# axs[k].set_yticks([])
axs[k].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices.pdf")
colors
print(stationary_points)
Explanation: Plot dynamics functions
End of explanation
# Make an Autoregressive (AR) HMM
true_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states))
true_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix)
true_arhmm = GaussianARHMM(
num_states,
transition_matrix=transition_matrix,
emission_weights=weights,
emission_biases=biases,
emission_covariances=covariances,
)
time_bins = 10000
true_states, data = true_arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*data[true_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
plt.plot(*data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d.pdf")
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
ndx = true_states == k
data_k = data[ndx]
T = 12
data_k = data_k[:T, :]
plt.plot(data_k[:, 0], data_k[:, 1], "o", color=colors[k], alpha=0.75, markersize=3)
for t in range(T):
plt.text(data_k[t, 0], data_k[t, 1], t, color=colors[k], fontsize=12)
# plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-temporal.pdf")
print(biases)
print(stationary_points)
colors
Explanation: Sample data from the ARHMM
End of explanation
lim
# Plot the data and the smoothed data
plot_slice = (0, 200)
lim = 1.05 * abs(data).max()
plt.figure(figsize=(8, 6))
plt.imshow(
true_states[None, :],
aspect="auto",
cmap=cmap,
vmin=0,
vmax=len(colors) - 1,
extent=(0, time_bins, -lim, (data_dim) * lim),
)
Ey = np.array(stationary_points)[true_states]
for d in range(data_dim):
plt.plot(data[:, d] + lim * d, "-k")
plt.plot(Ey[:, d] + lim * d, ":k")
plt.xlim(plot_slice)
plt.xlabel("time")
# plt.yticks(lim * np.arange(data_dim), ["$y_{{{}}}$".format(d+1) for d in range(data_dim)])
plt.ylabel("observations")
plt.tight_layout()
plt.savefig("arhmm-samples-1d.pdf")
data.shape
data[:10, :]
Explanation: Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state while the solid lines are the actual observations sampled from the HMM.
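As a quick sanity check (a sketch using the arrays defined earlier): a stationary point $x^*$ of state $k$ satisfies $x^* = A_k x^* + b_k$.
```python
k = 0
x_star = stationary_points[k]
print(np.allclose(weights[k] @ x_star + biases[k], x_star))  # expected: True
```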
End of explanation
# Now fit an HMM to the data
key1, key2 = jr.split(jr.PRNGKey(0), 2)
test_num_states = num_states
initial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states))
transition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states)))
emission_distribution = GaussianLinearRegression(
weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)),
bias=0.01 * jr.normal(key2, (test_num_states, data_dim)),
scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1)),
)
arhmm = GaussianARHMM(test_num_states, data_dim, num_lags, seed=jr.PRNGKey(0))
lps, arhmm, posterior = arhmm.fit(data, method="em")
# Plot the log likelihoods against the true likelihood, for comparison
true_lp = true_arhmm.marginal_likelihood(data)
plt.plot(lps, label="EM")
plt.plot(true_lp * np.ones(len(lps)), ":k", label="True")
plt.xlabel("EM Iteration")
plt.ylabel("Log Probability")
plt.legend(loc="lower right")
plt.show()
# # Find a permutation of the states that best matches the true and inferred states
# most_likely_states = posterior.most_likely_states()
# arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states))
# posterior.update()
# most_likely_states = posterior.most_likely_states()
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([true_arhmm, arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[i, j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[i, j].set_xlabel("$x_1$")
axs[i, j].set_xticks([])
if j == 0:
axs[i, j].set_ylabel("$x_2$")
axs[i, j].set_yticks([])
axs[i, j].set_aspect("equal")
plt.tight_layout()
plt.savefig("argmm-flow-matrices-true-and-estimated.pdf")
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[j].set_xlabel("$y_1$")
axs[j].set_xticks([])
if j == 0:
axs[j].set_ylabel("$y_2$")
axs[j].set_yticks([])
axs[j].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices-estimated.pdf")
# Plot the true and inferred discrete states
plot_slice = (0, 1000)
plt.figure(figsize=(8, 4))
plt.subplot(211)
plt.imshow(true_states[None, num_lags:], aspect="auto", interpolation="none", cmap=cmap, vmin=0, vmax=len(colors) - 1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{true}}$")
plt.yticks([])
plt.subplot(212)
# plt.imshow(most_likely_states[None,: :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.imshow(posterior.expected_states[0].T, aspect="auto", interpolation="none", cmap="Greys", vmin=0, vmax=1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{inferred}}$")
plt.yticks([])
plt.xlabel("time")
plt.tight_layout()
plt.savefig("arhmm-state-est.pdf")
# Sample the fitted model
sampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*sampled_data[sampled_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
# plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2)
plt.plot(*sampled_data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-estimated.pdf")
Explanation: Fit an ARHMM
End of explanation |
3,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pipelining estimators
In this section we study how different estimators may be chained.
A simple example
Step1: Previously, we applied the feature extraction manually, like so
Step2: The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
Therefore scikit-learn has a shortcut for this, called pipelines
Step3: As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn.
After the first step is fit, it will use the transform method of the first step to create a new representation.
This will then be fed to the fit of the next step, and so on.
Finally, on the last step, only fit is called.
If we call score, only transform will be called on each step - this could be the test set after all! Then, on the last step, score is called with the new representation. The same goes for predict.
Building pipelines not only simplifies the code, it is also important for model selection.
Say we want to grid-search C to tune our Logistic Regression above.
Let's say we do it like this
Step4: What did we do wrong?
Here, we did grid-search with cross-validation on X_train. However, when applying TfidfVectorizer, it saw all of the X_train,
not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters.
We can fix this with the pipeline, though
Step5: Note that we need to tell the pipeline at which step we want to set the parameter C.
We can do this using the special __ syntax. The name before the __ is simply the name of the class, the part after __ is the parameter we want to set with grid-search.
Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with GridSearchCV | Python Code:
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.cross_validation import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
Explanation: Pipelining estimators
In this section we study how different estimators may be chained.
A simple example: feature extraction and selection before an estimator
Feature extraction: vectorizer
For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features.
To illustrate we load the SMS spam dataset we used earlier.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
Explanation: Previously, we applied the feature extraction manually, like so:
End of explanation
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
Explanation: The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
Therefore scikit-learn has a shortcut for this, called pipelines:
End of explanation
# this illustrates a common mistake. Don't use this code!
from sklearn.grid_search import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
Explanation: As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn.
After the first step is fit, it will use the transform method of the first step to create a new representation.
This will then be fed to the fit of the next step, and so on.
Finally, on the last step, only fit is called.
If we call score, only transform will be called on each step - this could be the test set after all! Then, on the last step, score is called with the new representation. The same goes for predict.
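For reference, make_pipeline is essentially a convenience wrapper around Pipeline that auto-generates step names from the lowercased class names — which is why the steps are addressed as tfidfvectorizer and logisticregression further below (a rough sketch of the equivalent explicit construction):
```python
from sklearn.pipeline import Pipeline
pipeline = Pipeline([("tfidfvectorizer", TfidfVectorizer()),
                     ("logisticregression", LogisticRegression())])
```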
Building pipelines not only simplifies the code, it is also important for model selection.
Say we want to grid-search C to tune our Logistic Regression above.
Let's say we do it like this:
End of explanation
from sklearn.grid_search import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
grid = GridSearchCV(pipeline, param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
Explanation: What did we do wrong?
Here, we did grid-search with cross-validation on X_train. However, when applying TfidfVectorizer, it saw all of the X_train,
not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters.
We can fix this with the pipeline, though:
End of explanation
from sklearn.grid_search import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100], "tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
Explanation: Note that we need to tell the pipeline where at which step we wanted to set the parameter C.
We can do this using the special __ syntax. The name before the __ is simply the name of the class, the part after __ is the parameter we want to set with grid-search.
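If you are unsure of the exact parameter names, they can be listed directly from the pipeline (a quick sketch):
```python
sorted(pipeline.get_params().keys())
```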
Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with GridSearchCV:
End of explanation |
3,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 2: A Simple Charged System, Part 1
Step1: These variables do not change anything in the simulation engine, but
are just standard Python variables. They are used to increase the
readability and flexibility of the script. The box length is not a
parameter of this simulation, it is calculated from the number of
particles and the system density. This allows to change the parameters
later easily, e.g. to simulate a bigger system.
We use dictionaries for all particle related parameters, which is less error-prone and
readable as we will see later when we actually need the values. The parameters here define a purely repulsive,
equally sized, monovalent salt.
The simulation engine itself is modified by changing the
<tt>espressomd.System()</tt> properties. We create an instance <tt>system</tt> and
set the box length, periodicity and time step. The skin depth <tt>skin</tt>
is a parameter for the link--cell system which tunes its
performance, but shall not be discussed here.
Step2: We now fill this simulation box with particles at random positions, using type and charge from our dictionaries.
Using the length of the particle list <tt>system.part</tt> for the id, we make sure that our particles are numbered consecutively.
The particle type is used to link non-bonded interactions to a certain group of particles.
Step3: Before we can really start the simulation, we have to specify the
interactions between our particles. We already defined the Lennard-Jones parameters at the beginning,
what is left is to specify the combination rule and to iterate over particle type pairs. For simplicity,
we implement only the Lorentz-Berthelot rules.
We pass our interaction pair to <tt>system.non_bonded_inter[*,*]</tt> and set the
pre-calculated LJ parameters <tt>epsilon</tt>, <tt>sigma</tt> and <tt>cutoff</tt>. With <tt>shift="auto"</tt>,
we shift the interaction potential to the cutoff so that $U_\mathrm{LJ}(r_\mathrm{cutoff})=0$.
Step4: 3 Equilibration
With randomly positioned particles, we most likely have huge overlap and the strong repulsion will
cause the simulation to crash. The next step in our script therefore is a suitable LJ equilibration.
This is known to be a tricky part of a simulation and several approaches exist to reduce the particle overlap.
Here, we use the steepest descent algorithm and cap the maximal particle displacement per integration step
to 1% of sigma.
We use <tt>system.analysis.min_dist()</tt> to get the minimal distance between all particles pairs. This value
is used to stop the minimization when particles are far away enough from each other. At the end, we activate the
Langevin thermostat for our NVT ensemble with temperature <tt>temp</tt> and friction coefficient <tt>gamma</tt>.
Step5: ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are
avoided, for example simultaneous usage of two electrostatic interactions.
Adding an actor to the system also activates the method and calls necessary
initialization routines. Here, we define a P$^3$M object with parameters Bjerrum
length and rms force error. This automatically starts a
tuning function which tries to find optimal parameters for P$^3$M and prints them
to the screen
Step6: Before the production part of the simulation, we do a quick temperature
equilibration. For the output, we gather all energies with <tt>system.analysis.energy()</tt>, calculate the "current" temperature from the ideal part and print it to the screen along with the total and Coulomb energies. Note that for the ideal gas the temperature is given via $\frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} k_B T$, where $\langle \cdot \rangle$ denotes the ensemble average. Calculating some kind of "current temperature" via $T_\text{cur}=\frac{m}{3 k_B} v^2$ from a single configuration won't produce the temperature in the system. Only by averaging the squared velocities first would one obtain the temperature for the ideal gas. $T$ is a fixed quantity and does not fluctuate in the canonical ensemble.
We integrate for a certain amount of steps with <tt>system.integrator.run(100)</tt>.
Step7: <figure>
<img src='figures/salt.png' alt='missing' style="width: 300px;"/>
Step8: Additionally, we append all particle configurations in the core with <tt>system.analysis.append()</tt> for a very convenient analysis later on.
5 Analysis
Now, we want to calculate the averaged radial distribution functions
$g_{++}(r)$ and $g_{+-}(r)$ with the <tt>rdf()</tt> command from <tt>system.analysis</tt>
Step9: The shown <tt>rdf()</tt> commands return the radial distribution functions for
equally and oppositely charged particles for specified radii and number of bins.
In this case, we calculate the averaged rdf of the stored
configurations, denoted by the chevrons in <tt>rdf_type='$<\mathrm{rdf}>$'</tt>. Using <tt>rdf_type='rdf'</tt> would simply calculate the rdf of the current particle
configuration. The results are two NumPy arrays containing the $r$ and $g(r)$
values. We can then write the data into a file with standard python output routines.
Step10: Finally we can plot the two radial distribution functions using pyplot. | Python Code:
from espressomd import System, electrostatics
import espressomd
import numpy
import matplotlib.pyplot as plt
plt.ion()
# Print enabled features
required_features = ["EXTERNAL_FORCES", "MASS", "ELECTROSTATICS", "LENNARD_JONES"]
espressomd.assert_features(required_features)
print(espressomd.features())
# System Parameters
n_part = 200
n_ionpairs = n_part / 2
density = 0.5
time_step = 0.01
temp = 1.0
gamma = 1.0
l_bjerrum = 7.0
num_steps_equilibration = 1000
num_configs = 500
integ_steps_per_config = 1000
# Particle Parameters
types = {"Anion": 0, "Cation": 1}
numbers = {"Anion": n_ionpairs, "Cation": n_ionpairs}
charges = {"Anion": -1.0, "Cation": 1.0}
lj_sigmas = {"Anion": 1.0, "Cation": 1.0}
lj_epsilons = {"Anion": 1.0, "Cation": 1.0}
WCA_cut = 2.**(1. / 6.)
lj_cuts = {"Anion": WCA_cut * lj_sigmas["Anion"],
"Cation": WCA_cut * lj_sigmas["Cation"]}
Explanation: Tutorial 2: A Simple Charged System, Part 1
1 Introduction
This tutorial introduces some of the basic features of ESPResSo for charged systems by constructing a simulation script for a simple salt crystal. In the subsequent task, we use a more realistic force-field for a NaCl crystal. Finally, we introduce constraints and 2D-Electrostatics to simulate a molten salt in a parallel plate capacitor. We assume that the reader is familiar with the basic concepts of Python and MD simulations. Compile ESPResSo with the following features in your <tt>myconfig.hpp</tt> to be set throughout the whole tutorial:
```c++
define EXTERNAL_FORCES
define MASS
define ELECTROSTATICS
define LENNARD_JONES
```
2 Basic Set Up
The script for the tutorial can be found in your build directory at <tt>/doc/tutorials/02-charged_system/scripts/nacl.py</tt>.
We start by importing numpy, pyplot, espressomd and setting up the simulation parameters:
End of explanation
# Setup System
box_l = (n_part / density)**(1. / 3.)
system = System(box_l=[box_l, box_l, box_l])
system.seed = 42
system.periodicity = [True, True, True]
system.time_step = time_step
system.cell_system.skin = 0.3
Explanation: These variables do not change anything in the simulation engine, but
are just standard Python variables. They are used to increase the
readability and flexibility of the script. The box length is not a
parameter of this simulation, it is calculated from the number of
particles and the system density. This allows to change the parameters
later easily, e.g. to simulate a bigger system.
We use dictionaries for all particle related parameters, which is less error-prone and
readable as we will see later when we actually need the values. The parameters here define a purely repulsive,
equally sized, monovalent salt.
The simulation engine itself is modified by changing the
<tt>espressomd.System()</tt> properties. We create an instance <tt>system</tt> and
set the box length, periodicity and time step. The skin depth <tt>skin</tt>
is a parameter for the link--cell system which tunes its
performance, but shall not be discussed here.
End of explanation
for i in range(int(n_ionpairs)):
system.part.add(id=len(system.part), type=types["Anion"],
pos=numpy.random.random(3) * box_l, q=charges["Anion"])
for i in range(int(n_ionpairs)):
system.part.add(id=len(system.part), type=types["Cation"],
pos=numpy.random.random(3) * box_l, q=charges["Cation"])
Explanation: We now fill this simulation box with particles at random positions, using type and charge from our dictionaries.
Using the length of the particle list <tt>system.part</tt> for the id, we make sure that our particles are numbered consecutively.
The particle type is used to link non-bonded interactions to a certain group of particles.
End of explanation
def combination_rule_epsilon(rule, eps1, eps2):
if rule == "Lorentz":
return (eps1 * eps2)**0.5
else:
return ValueError("No combination rule defined")
def combination_rule_sigma(rule, sig1, sig2):
if rule == "Berthelot":
return (sig1 + sig2) * 0.5
else:
return ValueError("No combination rule defined")
# Lennard-Jones interactions parameters
for s in [["Anion", "Cation"], ["Anion", "Anion"], ["Cation", "Cation"]]:
lj_sig = combination_rule_sigma("Berthelot", lj_sigmas[s[0]], lj_sigmas[s[1]])
lj_cut = combination_rule_sigma("Berthelot", lj_cuts[s[0]], lj_cuts[s[1]])
lj_eps = combination_rule_epsilon("Lorentz", lj_epsilons[s[0]], lj_epsilons[s[1]])
system.non_bonded_inter[types[s[0]], types[s[1]]].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig, cutoff=lj_cut, shift="auto")
Explanation: Before we can really start the simulation, we have to specify the
interactions between our particles. We already defined the Lennard-Jones parameters at the beginning,
what is left is to specify the combination rule and to iterate over particle type pairs. For simplicity,
we implement only the Lorentz-Berthelot rules.
We pass our interaction pair to <tt>system.non_bonded_inter[*,*]</tt> and set the
pre-calculated LJ parameters <tt>epsilon</tt>, <tt>sigma</tt> and <tt>cutoff</tt>. With <tt>shift="auto"</tt>,
we shift the interaction potential to the cutoff so that $U_\mathrm{LJ}(r_\mathrm{cutoff})=0$.
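As a quick plain-Python check of why the WCA cutoff sits at $2^{1/6}\sigma$ (a sketch, independent of ESPResSo): the unshifted LJ potential equals $-\epsilon$ exactly at that distance, so <tt>shift="auto"</tt> adds $+\epsilon$ and the shifted potential vanishes at the cutoff.
```python
sigma, epsilon = 1.0, 1.0
r_cut = 2.0**(1.0 / 6.0) * sigma
u_lj = 4 * epsilon * ((sigma / r_cut)**12 - (sigma / r_cut)**6)
print(u_lj)  # -1.0, i.e. -epsilon; the automatic shift adds +epsilon so U(r_cut) = 0
```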
End of explanation
# Lennard-Jones Equilibration
max_sigma = max(lj_sigmas.values())
min_dist = 0.0
system.minimize_energy.init(f_max=0, gamma=10.0, max_steps=10,
max_displacement=max_sigma * 0.01)
while min_dist < max_sigma:
min_dist = system.analysis.min_dist([types["Anion"], types["Cation"]],
[types["Anion"], types["Cation"]])
system.minimize_energy.minimize()
# Set thermostat
system.thermostat.set_langevin(kT=temp, gamma=gamma, seed=42)
Explanation: 3 Equilibration
With randomly positioned particles, we most likely have huge overlap and the strong repulsion will
cause the simulation to crash. The next step in our script therefore is a suitable LJ equilibration.
This is known to be a tricky part of a simulation and several approaches exist to reduce the particle overlap.
Here, we use the steepest descent algorithm and cap the maximal particle displacement per integration step
to 1% of sigma.
We use <tt>system.analysis.min_dist()</tt> to get the minimal distance between all particles pairs. This value
is used to stop the minimization when particles are far away enough from each other. At the end, we activate the
Langevin thermostat for our NVT ensemble with temperature <tt>temp</tt> and friction coefficient <tt>gamma</tt>.
End of explanation
p3m = electrostatics.P3M(prefactor=l_bjerrum * temp,
accuracy=1e-3)
system.actors.add(p3m)
Explanation: ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are
avoided, for example simultaneous usage of two electrostatic interactions.
Adding an actor to the system also activates the method and calls necessary
initialization routines. Here, we define a P$^3$M object with parameters Bjerrum
length and rms force error. This automatically starts a
tuning function which tries to find optimal parameters for P$^3$M and prints them
to the screen:
End of explanation
# Temperature Equilibration
system.time = 0.0
for i in range(int(num_steps_equilibration / 50)):
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
print("t={0:.1f}, E_total={1:.2f}, E_coulomb={2:.2f},T={3:.4f}"
.format(system.time, energy['total'], energy['coulomb'], temp_measured), end='\r')
system.integrator.run(200)
print()
Explanation: Before the production part of the simulation, we do a quick temperature
equilibration. For the output, we gather all energies with <tt>system.analysis.energy()</tt>, calculate the "current" temperature from the ideal part and print it to the screen along with the total and Coulomb energies. Note that for the ideal gas the temperature is given via $\frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} k_B T$, where $\langle \cdot \rangle$ denotes the ensemble average. Calculating some kind of "current temperature" via $T_\text{cur}=\frac{m}{3 k_B} v^2$ from a single configuration won't produce the temperature in the system. Only by averaging the squared velocities first would one obtain the temperature for the ideal gas. $T$ is a fixed quantity and does not fluctuate in the canonical ensemble.
We integrate for a certain amount of steps with <tt>system.integrator.run(100)</tt>.
End of explanation
# Integration
system.time = 0.0
for i in range(num_configs):
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
print("progress={:.0f}%, t={:.1f}, E_total={:.2f}, E_coulomb={:.2f}, T={:.4f}"
.format((i + 1) * 100. / num_configs, system.time, energy['total'],
energy['coulomb'], temp_measured), end='\r')
system.integrator.run(integ_steps_per_config)
# Internally append particle configuration
system.analysis.append()
print()
Explanation: <figure>
<img src='figures/salt.png' alt='missing' style="width: 300px;"/>
<center>
<figcaption>Figure 1: VMD Snapshot of the Salt System</figcaption>
</center>
</figure>
4 Running the Simulation
Now we can integrate the particle trajectories for a couple of time
steps. Our integration loop basically looks like the equilibration:
End of explanation
# Analysis
# Calculate the averaged rdfs
rdf_bins = 100
r_min = 0.0
r_max = system.box_l[0] / 2.0
r, rdf_00 = system.analysis.rdf(rdf_type='<rdf>',
type_list_a=[types["Anion"]],
type_list_b=[types["Anion"]],
r_min=r_min,
r_max=r_max,
r_bins=rdf_bins)
r, rdf_01 = system.analysis.rdf(rdf_type='<rdf>',
type_list_a=[types["Anion"]],
type_list_b=[types["Cation"]],
r_min=r_min,
r_max=r_max,
r_bins=rdf_bins)
Explanation: Additionally, we append all particle configurations in the core with <tt>system.analysis.append()</tt> for a very convenient analysis later on.
5 Analysis
Now, we want to calculate the averaged radial distribution functions
$g_{++}(r)$ and $g_{+-}(r)$ with the <tt>rdf()</tt> command from <tt>system.analysis</tt>:
End of explanation
with open('rdf.data', 'w') as rdf_fp:
for i in range(rdf_bins):
rdf_fp.write("%1.5e %1.5e %1.5e\n" % (r[i], rdf_00[i], rdf_01[i]))
Explanation: The shown <tt>rdf()</tt> commands return the radial distribution functions for
equally and oppositely charged particles for specified radii and number of bins.
In this case, we calculate the averaged rdf of the stored
configurations, denoted by the chevrons in <tt>rdf_type='$<\mathrm{rdf}>$'</tt>. Using <tt>rdf_type='rdf'</tt> would simply calculate the rdf of the current particle
configuration. The results are two NumPy arrays containing the $r$ and $g(r)$
values. We can then write the data into a file with standard python output routines.
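As a quick sanity check (a sketch, not part of the original script), the file can be read back and inspected with NumPy:
import numpy as np
r_chk, rdf00_chk, rdf01_chk = np.loadtxt('rdf.data', unpack=True)  # columns: r, g++(r), g+-(r)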
End of explanation
# Plot the distribution functions
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(r[:], rdf_00[:], label='Na$-$Na')
plt.plot(r[:], rdf_01[:], label='Na$-$Cl')
plt.xlabel('$r$', fontsize=20)
plt.ylabel('$g(r)$', fontsize=20)
plt.legend(fontsize=20)
plt.show()
Explanation: Finally we can plot the two radial distribution functions using pyplot.
End of explanation |
3,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combine all blast hits into a single dataframe
Step1: Extract the best hits for each cluster from each DB (q_cov > 80 and e_value < 1e-3 ) | Python Code:
all_blast_hits = blast_hits[0]
for search_hits in blast_hits[1:]:
all_blast_hits = all_blast_hits.append(search_hits)
all_blast_hits.head()
all_blast_hits.db.unique()
Explanation: Combine all blast hits into a single dataframe
End of explanation
#all_blast_hits[all_blast_hits.e_value < 0.001].groupby(["cluster","db"])
gb = all_blast_hits[ (all_blast_hits.q_cov > 80) & (all_blast_hits.e_value < 0.001) ].groupby(["cluster","db"])
reliable_fam_hits = pd.DataFrame( hits.loc[hits.bitscore.idxmax()] for _,hits in gb )[["cluster","db","tool","query_id","subject_id","pct_id","q_cov","q_len",
"bitscore","e_value","s_description"]]
sorted_fam_hits = pd.concat( hits.sort_values(by="bitscore",ascending=False) for _,hits in reliable_fam_hits.groupby("cluster") )
sorted_fam_hits.to_csv("1_out/filtered_blast_best_hits.csv",index=False)
sorted_fam_hits.head()
#Export all "valid" hits for each cluster
all_blast_hits[ (all_blast_hits.q_cov > 80) & (all_blast_hits.e_value < 0.001) ].to_csv("1_out/filtered_blast_all_hits.csv",index=False)
Explanation: Extract the best hits for each cluster from each DB (q_cov > 80 and e_value < 1e-3 )
End of explanation |
3,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Downloading genome data from NCBI with Biopython and Entrez
Introduction
In this worksheet, you will use Biopython to download pathogen genome data from NCBI programmatically with Python.
It is possible to obtain the same data by point-and-click from a browser, at the terminal using a program like wget, or by other means, but scripting data downloads in this way has advantages, such as
Step1: 2. Using Bio.Entrez to list available databases
When you send a query or request to NCBI using Bio.Entrez, the remote service will send back data in XML format. This is a file format designed to be easy for computers to read, but is very verbose and difficult to read for humans.
The Bio.Entrez module can read() this data so that you can extract useful information.
In the example below, you will ask NCBI for a list of the databases you can search by using the Entrez.einfo() function. This will return a handle containing the XML response from NCBI. This will be read into a record that you can inspect and manipulate, by the Entrez.read() function.
Step2: The variable record contains a list of the available databases at NCBI, which you can see by executing the cell below
Step3: You may recognise some of the database names, such as pubmed, nuccore, assembly, sra, and taxonomy.
Entrez allows you to query these databases using Entrez.esearch() in much the same way that you just obtained the list of databases with Entrez.einfo().
3. Using Bio.Entrez to find genome assemblies at NCBI
In the cells below, you will use Bio.Entrez to identify assemblies for the bacterial plant pathogen Ralstonia solanacearum. As our interest is genome data, we will query against the assembly database at NCBI. This database contains entries for all genome assemblies, whether complete or draft.
We are interested in Ralstonia solanacearum, so will search against the assembly database with the text "Ralstonia solanacearum" as a query. The function that allows us to do this is Entrez.esearch(). By default, searches are limited to 20 results (as on the NCBI webpage), but we can change this.
Step4: The returned information can be viewed by running the cell below.
The output may look confusing at first, but it simply describes the database identifiers that uniquely identify the assemblies present in the assembly database that correspond to the query we made, and a few other pieces of information (number of returned entries, total number of entries that could have been returned, how the query was processed) that we do not need, right now.
Step5: For now, we are interested in the list of database identifiers, in record['IdList']. We will use these to get information from the assembly database.
We will look at a single record first, and then consider how to get all the Ralstonia genomes at the same time.
4. Downloading a single genome from NCBI
In this section, you will use one of the database identifiers returned from your search at NCBI to identify and download the GenBank records corresponding to a single assembly of Ralstonia solanacearum.
To do this, we will select a single accession from the list in record["IdList"], using the code in the cell below.
<div class="alert alert-danger" role="alert">
Although this is a single assembly, with a single accession ID, we shall see that we need to download more than one sequence to cover the complete genome.
</div>
Step6: Linking across databases
<div class="alert alert-info" role="alert">
There is a complicating factor
Step8: The links variable may contain links to more than one version of the genome (NCBI keep third-party managed genome data in GenBank/INSDC records, and NCBI-'owned' data in RefSeq records).
The function below extracts only the INSDC information from the Elink() query. It is not important that you understand the code.
Step9: You will use the extract_insdc() function to get the accession IDs for the sequences in this Ralstonia solanacearum genome, in the cell below.
Step10: Fetching sequence records from NCBI
Now we have accession UIDs for the nucleotide sequences of the assembly, you will use Entrez.efetch as before to fetch each sequence record from NCBI.
We need to tell NCBI which database we want to use (in this case, nucleotide), and the identifiers for the records (the values in nuc_uids). To get all the data at the same time, we can join the accession ids into a single string, with commas to separate the individual UIDs.
We will also tell NCBI two further pieces of information
Step11: By running the cell below, you can see that each sequence in the Ralstonia solanacearum assembly has been downloaded into a SeqRecord, and that it contains useful metadata, describing the sequence assembly and properties of the annotation.
Step12: Writing sequence data with Biopython
The SeqIO module can be used to write sequence data out to a file on your local hard drive. You will do this in the cells below, using the SeqIO.write() function.
<div class="alert alert-info" role="alert">
The <b>SeqRecord</b>s you downloaded contain sequence and feature annotation data, and can be written in any of several file formats. Some of these formats preserve annotation information, and some do not.
</div>
Firstly, in the cell below, you will write GenBank format files that preserve both sequence and annotation data. For the SeqIO.write() function, we need to specify the list of SeqRecords (records), the output filename to which they will be written, and the format we wish to write (in this case "genbank").
Step13: If you inspect the newly-created ralstonia.gbk file, you should see that it contains complete GenBank records, describing this genome.
GenBank files are detailed and large, and sometimes we only want to consider the genome sequence itself, not its annotation. The FASTA sequence can be written out on its own by specifying the "fasta" format to SeqIO.write() instead. This time, we write the output to ncbi_downloads/ralstonia.fasta. | Python Code:
# This line imports the Bio.Entrez module, and makes it available
# as 'Entrez'.
from Bio import Entrez
# The line below imports the Bio.SeqIO module, which allows reading
# and writing of common bioinformatics sequence formats.
from Bio import SeqIO
# Create a new directory (if needed) for output/downloads
import os
outdir = "ncbi_downloads"
os.makedirs(outdir, exist_ok=True)
# This line sets the variable 'Entrez.email' to the specified
# email address. You should substitute your own address for the
# example address provided below. Please do not provide a
# fake name.
Entrez.email = "[email protected]"
# This line sets the name of the tool that is making the queries
Entrez.tool = "Biopython_NCBI_Entrez_downloads.ipynb"
Explanation: Downloading genome data from NCBI with Biopython and Entrez
Introduction
In this worksheet, you will use Biopython to download pathogen genome data from NCBI programmatically with Python.
It is possible to obtain the same data by point-and-click from a browser, at the terminal using a program like wget, or by other means, but scripting data downloads in this way has advantages, such as:
automation - only one script is required to download many sequences
reproducibility - the same data will be downloaded each time, and copy-paste errors will be avoided
self-documentation - the script itself describes exactly how the data was obtained
future adaptability (and reuse) - only minor changes to the script may be required for the next analysis or project
<div class="alert alert-warning">
<b>Note: large data sets</b>: if you wish to download large datasets, then using <b>wget</b>, <b>ftp</b> or other methods can be better than programmatic access <i>via</i> <b>Entrez</b>. The <b>Entrez</b> interface may give errors partway through large downloads, and is not designed for large data transfers.
</div>
This Jupyter notebook provides some examples of scripting genome downloads from NCBI singly, and in groups. This method of obtaining genome data uses the Entrez interface that NCBI provides for automated querying of its data.
Running cells in this notebook
This is an interactive notebook, which means you are able to run the code that is written in each of the cells.
<div class="alert alert-info" role="alert">
To run the code in a cell, you should:
<ol>
<li>Place your mouse cursor in the cell, and click (this gives the cell <i>focus</i>) to make it active
<li>Hold down the <b>Shift</b> key, and press the <b>Return<b> key.
</ol>
</div>
If this is successful, you should see the input marker to the left of the cell change from
In [ ]:
to (for example)
In [1]:
and you may see output appear below the cell.
Related online documentation
Biopython tutorial for Entrez: http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc109
Biopython technical documentation for Bio.Entrez: http://biopython.org/DIST/docs/api/Bio.Entrez-module.html
Entrez introductory documentation at NCBI: http://www.ncbi.nlm.nih.gov/books/NBK25497/
Entrez help: http://www.ncbi.nlm.nih.gov/books/NBK3837/
Entrez Quick Start Guide: http://www.ncbi.nlm.nih.gov/books/NBK25500/
Requirements
<div class="alert alert-success">
To complete this worksheet, you will need:
<ul>
<li>an active internet connection
<li>the <b>Biopython</b> libraries
</ul>
</div>
Entrez
Entrez is the name NCBI give to the tools they provide as a computational interface to the data they hold across their genomic and other databases (e.g. PubMed). Many scripts and programs that interact with NCBI to download data (e.g. from GenBank or RefSeq) will be using this set of tools.
<div class="alert alert-warning">
<b>Caveats</b>
<br />
There are usage caps for this service, and it is possible to over-use <b>Entrez</b>. If this happens, you or your IP address may be blacklisted. In order to avoid this, you should keep to the following guidelines:
<br />
<ul>
<li> Make no more than three URL requests per second
<li> Make large queries outwith the hours of 0900-1700 EST (1400-2200 GMT)
<li> Provide your email address as an identifier when querying
</ul>
<br />
Programming libraries, such as <b>Biopython</b>'s <b>Bio.Entrez</b> module, will usually help you stay within those guidelines by limiting the frequency of queries, and insisting that you provide an email address.
</div>
Biopython and Bio.Entrez <img src="images/biopython_small.jpg" style="width: 150px; float: right;">
Biopython is a widely-used library, providing bioinformatics tools for the popular Python programming language. Similar libraries exist for other programming languages.
Bio.Entrez is a module of Biopython that provides tools to make queries against the NCBI databases using the Entrez interface.
1. Connecting to NCBI
In order to use the Bio.Entrez module, you need to import it. This is how modules become available for use in Python.
<div class="alert alert-info" role="alert">
It is good practice at this point to specify your email, so that <b>NCBI</b> can contact you in case of problems (or if you are likely to become blacklisted through excessive use).
It is also good practice to specify a '<b>tool</b>' that is the script making the call.
</div>
End of explanation
# The line below uses the Entrez.einfo() function to
# ask NCBI what databases are available. The result is
# 'stored' in a variable called 'handle'
handle = Entrez.einfo()
# In the line below, the response from NCBI is read
# into a record, that organises NCBI's response into
# something you can work with.
record = Entrez.read(handle)
Explanation: 2. Using Bio.Entrez to list available databases
When you send a query or request to NCBI using Bio.Entrez, the remote service will send back data in XML format. This is a file format designed to be easy for computers to read, but is very verbose and difficult to read for humans.
The Bio.Entrez module can read() this data so that you can extract useful information.
In the example below, you will ask NCBI for a list of the databases you can search by using the Entrez.einfo() function. This will return a handle containing the XML response from NCBI. This will be read into a record that you can inspect and manipulate, by the Entrez.read() function.
End of explanation
print(record["DbList"])
Explanation: The variable record contains a list of the available databases at NCBI, which you can see by executing the cell below:
End of explanation
# The line below carries out a search of the `assembly` database at NCBI,
# using the phrase `Ralstonia solanacearum` as the search query,
# and asks NCBI to return up to the first 100 results
handle = Entrez.esearch(db="assembly", term="Ralstonia solanacearum", retmax=100)
# This line converts the returned information from NCBI into a form we
# can use, as before.
record = Entrez.read(handle)
Explanation: You may recognise some of the database names, such as pubmed, nuccore, assembly, sra, and taxonomy.
Entrez allows you to query these databases using Entrez.esearch() in much the same way that you just obtained the list of databases with Entrez.einfo().
3. Using Bio.Entrez to find genome assemblies at NCBI
In the cells below, you will use Bio.Entrez to identify assemblies for the bacterial plant pathogen Ralstonia solanacearum. As our interest is genome data, we will query against the assembly database at NCBI. This database contains entries for all genome assemblies, whether complete or draft.
We are interested in Ralstonia solanacearum, so will search against the assembly database with the text "Ralstonia solanacearum" as a query. The function that allows us to do this is Entrez.esearch(). By default, searches are limited to 20 results (as on the NCBI webpage), but we can change this.
End of explanation
# This line prints the downloaded information from NCBI, so
# we can read it.
print(record)
Explanation: The returned information can be viewed by running the cell below.
The output may look confusing at first, but it simply describes the database identifiers that uniquely identify the assemblies present in the assembly database that correspond to the query we made, and a few other pieces of information (number of returned entries, total number of entries that could have been returned, how the query was processed) that we do not need, right now.
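Two fields are worth pulling out explicitly (a small illustrative check, not part of the original worksheet):
print(record["Count"])        # total number of assemblies matching the query
print(record["IdList"][:5])   # the first few of the returned assembly UIDs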
End of explanation
# The line below takes the first value in the list of
# database accessions record["IdList"], and places it in
# the variable 'accession'
accession = record["IdList"][0]
# Show the contents of the variable 'accession'
print(accession)
Explanation: For now, we are interested in the list of database identifiers, in record['IdList']. We will use these to get information from the assembly database.
We will look at a single record first, and then consider how to get all the Ralstonia genomes at the same time.
4. Downloading a single genome from NCBI
In this section, you will use one of the database identifiers returned from your search at NCBI to identify and download the GenBank records corresponding to a single assembly of Ralstonia solanacearum.
To do this, we will select a single accession from the list in record["IdList"], using the code in the cell below.
<div class="alert alert-danger" role="alert">
Although this is a single assembly, with a single accession ID, we shall see that we need to download more than one sequence to cover the complete genome.
</div>
End of explanation
# The line below requests the identifiers (UIDs) for all
# records in the `nucleotide` database that correspond to the
# assembly UID that is stored in the variable 'accession'
handle = Entrez.elink(dbfrom="assembly", db="nucleotide",
from_uid=accession)
# We place the downloaded information in the variable 'links'
links = Entrez.read(handle)
Explanation: Linking across databases
<div class="alert alert-info" role="alert">
There is a complicating factor: assemblies may not be a single complete sequence, and could comprise several contigs, or a chromosome and several extrachromosomal elements, all annotated independently. These are stored independently in a different database, called <b>nucleotide</b>, and each has an individual accession.
<br/><br />
We need to <i>link</i> the <b>assembly</b> accession to each of the <b>nucleotide</b> accessions.
<br/><br />
This is a common requirement when querying <b>NCBI</b> databases, and is achieved using the <b>Entrez.elink()</b> function.
</div>
We need to specify the database for which we have the accession (or UID), and which database we want to query for related records (in this case, nucleotide).
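If you are curious what the parsed response looks like, links is a list with one entry per submitted UID; a quick, purely illustrative peek at the link sets it contains:
for linkset in links[0]["LinkSetDb"]:
    print(linkset["LinkName"], len(linkset["Link"]))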
End of explanation
# The code below provides a function that extracts nucleotide
# database accessions for INSDC data from the result of an
# Entrez.elink() query.
def extract_insdc(links):
"""Return the link UIDs for INSDC entries from the
passed Elink search results."""
# Work only with INSDC accession UIDs
linkset = [ls for ls in links[0]['LinkSetDb'] if
ls['LinkName'] == 'assembly_nuccore_insdc']
if 0 == len(linkset): # There are no INSDC UIDs
raise ValueError("Elink() output has no assembly_nuccore_insdc data")
# Make a list of the INSDC UIDs
uids = [i['Id'] for i in linkset[0]['Link']]
return uids
Explanation: The links variable may contain links to more than one version of the genome (NCBI keep third-party managed genome data in GenBank/INSDC records, and NCBI-'owned' data in RefSeq records).
The function below extracts only the INSDC information from the Elink() query. It is not important that you understand the code.
End of explanation
# The line below uses the extract_insdc() function to get INSDC/GenBank
# accession UIDs for the components of the genome/assembly referred to
# in the 'links' variable. These will be stored in the variable
# 'nuc_uids'
nuc_uids = extract_insdc(links)
# Show the contents of 'nuc_uids'
print(nuc_uids)
Explanation: You will use the extract_insdc() function to get the accession IDs for the sequences in this Ralstonia solanacearum genome, in the cell below.
End of explanation
# The lines below retrieve (fetch) the GenBank records for
# each database entry specified in `nuc_uids`, in plain text
# format. These are parsed with Biopython's SeqIO module into
# SeqRecords, which structure the data into a usable format.
# The SeqRecords are placed in the variable 'records'.
records = []
for nuc_uid in nuc_uids:
handle = Entrez.efetch(db="nucleotide", rettype="gbwithparts", retmode="text",
id=nuc_uid)
records.append(SeqIO.read(handle, 'genbank'))
Explanation: Fetching sequence records from NCBI
Now we have accession UIDs for the nucleotide sequences of the assembly, you will use Entrez.efetch as before to fetch each sequence record from NCBI.
We need to tell NCBI which database we want to use (in this case, nucleotide), and the identifiers for the records (the values in nuc_uids). The loop above fetches the records one at a time; to get all the data in a single request, the accession ids can instead be joined into one string, with commas separating the individual UIDs, as sketched at the end of this explanation.
We will also tell NCBI two further pieces of information:
The format we want the data returned in. We will ask for GenBank format (gbwithparts) to obtain the genome sequence and feature annotations.
How we want the data returned. We will ask for plain text (text).
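A sketch of that single-request alternative (equivalent to the loop above, just batched):
handle = Entrez.efetch(db="nucleotide", rettype="gbwithparts", retmode="text",
                       id=",".join(nuc_uids))
records = list(SeqIO.parse(handle, "genbank"))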
End of explanation
# Show the contents of each downloaded `SeqRecord`.
for record in records:
print(record, "\n")
Explanation: By running the cell below, you can see that each sequence in the Ralstonia solanacearum assembly has been downloaded into a SeqRecord, and that it contains useful metadata, describing the sequence assembly and properties of the annotation.
End of explanation
# The line below writes the sequence data in 'records' to
# the local file "ncbi_downloads/ralstonia.gbk", in GenBank format.
# The function returns the number of sequences that were written to file
SeqIO.write(records, os.path.join(outdir, "ralstonia.gbk"), "genbank")
Explanation: Writing sequence data with Biopython
The SeqIO module can be used to write sequence data out to a file on your local hard drive. You will do this in the cells below, using the SeqIO.write() function.
<div class="alert alert-info" role="alert">
The <b>SeqRecord</b>s you downloaded contain sequence and feature annotation data, and can be written in any of several file formats. Some of these formats preserve annotation information, and some do not.
</div>
Firstly, in the cell below, you will write GenBank format files that preserve both sequence and annotation data. For the SeqIO.write() function, we need to specify the list of SeqRecords (records), the output filename to which they will be written, and the format we wish to write (in this case "genbank").
End of explanation
# The line below writes the sequence data in 'records' to
# the local file "ncbi_downloads/ralstonia.fasta", in FASTA format.
SeqIO.write(records, os.path.join(outdir, "ralstonia.fasta"), "fasta")
Explanation: If you inspect the newly-created ralstonia.gbk file, you should see that it contains complete GenBank records, describing this genome.
GenBank files are detailed and large, and sometimes we only want to consider the genome sequence itself, not its annotation. The FASTA sequence can be written out on its own by specifying the "fasta" format to SeqIO.write() instead. This time, we write the output to ncbi_downloads/ralstonia.fasta.
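A quick optional check (not part of the original worksheet) that both files hold the same number of sequences:
print(len(list(SeqIO.parse(os.path.join(outdir, "ralstonia.gbk"), "genbank"))))
print(len(list(SeqIO.parse(os.path.join(outdir, "ralstonia.fasta"), "fasta"))))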
End of explanation |
3,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification of movie reviews with Keras and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Download the IMDB dataset
The IMDB dataset is available as imdb reviews in TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the Colab runtime)
Step3: Explore the data
Let's take a moment to understand the format of the data. Each example in this dataset is a sentence representing a movie review, together with a label. The label is an integer, either 0 or 1, where 0 is a negative review and 1 is a positive review.
Let's print the first 10 examples.
Step4: Let's also print the first 10 labels.
Step5: Build the model
A neural network is created by stacking layers. This requires three main architectural decisions
Step6: Let's now build the full model
Step7: The layers are stacked sequentially to build the classifier
Step8: Train the model
Train the model for 10 epochs in mini-batches of 512 samples. This is 10 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.
Step9: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing the error; lower values are better) and accuracy. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tensorflow-hub
!pip install tensorflow-datasets
import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Explanation: Text classification of movie reviews with Keras and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/s?module-type=text-embedding"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary (or two-class) classification, an important and widely applicable kind of machine learning problem.
The tutorial demonstrates the basic application of transfer learning with TensorFlow Hub and Keras.
It uses the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow, and tensorflow_hub, a library for loading trained models from TFHub in a single line of code. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
End of explanation
# Split the training set into 60% and 40% to end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
Explanation: Download the IMDB dataset
The IMDB dataset is available as imdb reviews in TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the Colab runtime):
End of explanation
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
Explanation: Explore the data
Let's take a moment to understand the format of the data. Each example in this dataset is a sentence representing a movie review, together with a label. The label is an integer, either 0 or 1, where 0 is a negative review and 1 is a positive review.
Let's print the first 10 examples.
End of explanation
train_labels_batch
Explanation: Let's also print the first 10 labels.
End of explanation
embedding = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
Explanation: Build the model
A neural network is created by stacking layers. This requires three main architectural decisions:
How to represent the text?
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert the sentences into embedding vectors. Using a pre-trained text embedding as the first layer has three advantages:
You don't have to worry about text preprocessing.
You benefit from transfer learning.
The embedding has a fixed size, so it's simpler to process.
For this example you use a pre-trained text embedding model from TensorFlow Hub called google/nnlm-en-dim50/2.
There are many other pre-trained text embeddings from TFHub that can be used in this tutorial:
google/nnlm-en-dim128/2 - trained with the same NNLM architecture on the same data as google/nnlm-en-dim50/2, but with a larger embedding dimension. Larger-dimensional embeddings can improve on your task, but it may take longer to train your model.
google/nnlm-en-dim128-with-normalization/2 - the same as google/nnlm-en-dim128/2, but with additional text normalization such as removing punctuation. This can help if the text for your task contains extra characters or punctuation.
google/universal-sentence-encoder/4 - a much larger model yielding 512-dimensional embeddings, trained with a deep averaging network (DAN) encoder.
And many more! Find more text embedding models on TFHub.
Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that no matter the length of the input text, the output shape of the embeddings is (num_examples, embedding_dimension).
End of explanation
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
Explanation: Let's now build the full model:
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is a TensorFlow Hub layer. This layer uses a pre-trained saved model to map a sentence into its embedding vector. The pre-trained text embedding model that you are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token and then combines the embeddings. The resulting dimensions are (num_examples, embedding_dimension). For this NNLM model, the embedding_dimension is 50.
This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node; since no activation is applied, it outputs an unnormalized score (a logit).
Let's now compile the model.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), you'll use the binary_crossentropy loss function.
This isn't the only choice for a loss function; you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when you are exploring regression problems (say, to predict the price of a house), you'll see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
End of explanation
history = model.fit(train_data.shuffle(10000).batch(512),
epochs=10,
validation_data=validation_data.batch(512),
verbose=1)
Explanation: Train the model
Train the model for 10 epochs in mini-batches of 512 samples. This is 10 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.
End of explanation
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing the error; lower values are better) and accuracy.
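As an optional follow-up (a sketch; the exact metric key names can vary between TensorFlow versions), the curves recorded in history can be plotted to see how the loss evolved over the epochs:
import matplotlib.pyplot as plt
history_dict = history.history
plt.plot(history_dict['loss'], label='training loss')
plt.plot(history_dict['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.legend()
plt.show()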
End of explanation |
3,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orthogonality in Potapov modes
The modes of a system will not always be orthogonal because some of the signal leaks out of the system. Let's use the Potapov analysis for a specific example to determine when the orthogonality approximation can be made.
The example used here is example 3 in our code, which corresponds to figure 7 in our paper. This example is formed by two inter-linked cavities with two inputs and outputs.
In the notebook "Bi-orthogonality testing" we show how this issue can be avoided using a bi-orthogonal basis.
Step1: Varying r1 and r3 -- the input-output mirrors
Step2: Varying r1 with constant r3=1
Step3: Varying r2-- the internal mirror
Step4: Plot all 3 together | Python Code:
import Potapov_Code.Roots as Roots
import Potapov_Code.Potapov as Potapov
import Potapov_Code.Time_Delay_Network as Time_Delay_Network
import Potapov_Code.Time_Sims as Time_Sims
import Potapov_Code.functions as functions
import Potapov_Code.tests as tests
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt
%pylab inline
def contour_plot(Mat):
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(abs(Mat), interpolation='nearest')
fig.colorbar(cax)
plt.show()
def run_example3(r1 = 0.9,r2=0.4, r3 = 0.9):
Ex = Time_Delay_Network.Example3(r1=r1,r2=r2,r3=r3,max_freq=50.,max_linewidth=35.)
Ex.run_Potapov(commensurate_roots = True)
E = Ex.E
roots = Ex.roots
M1 = Ex.M1
delays = Ex.delays
modes = functions.spatial_modes(roots,M1,E)
Mat = functions.make_normalized_inner_product_matrix(roots,modes,delays)
#contour_plot(Mat)
#for root,mode in zip(roots,modes):
# print root,mode
return Mat
M = np.matrix([[1,1],[2,2]])
def off_diagonal_error(M):
err = 0.
if M.shape[0] != M.shape[1]:
print("Not a square matrix!")
return 0.
length = M.shape[0]
for i in range(length):
for j in range(length):
if i != j:
err += abs(M[i,j])**2
return np.sqrt(err) / (length*(length - 1))
rs = [.01,0.05,.1,.2,.3,.4,.5,.6,.7,.8,.9,.95,.99,.999]
Explanation: Orthogonality in Potapov modes
The modes of a system will not always be orthogonal because some of the signal leaks out of the system. Let's use the Potapov analysis for a specific example to determine when the orthogonality approximation can be made.
The example used here is example 3 in our code, which corresponds to figure 7 in our paper. This example is formed by two inter-linked cavities with two inputs and outputs.
In the notebook "Bi-orthogonality testing" we show how this issue can be avoided using a bi-orthogonal basis.
End of explanation
Ms = {}
for r in rs:
Ms[r] = run_example3(r1=r,r3=r)
for r in rs:
contour_plot(Ms[r])
err = {}
for r in rs:
err[r] = off_diagonal_error(Ms[r])
plt.figure(figsize = (12,6))
plt.plot(rs,[err[r] for r in rs],label='r1 and r3, with fixed r2 = 0.4')
plt.yscale('log')
plt.xlabel('Reflectivities',{'fontsize': 24})
plt.title('Normalized non-orthogonality error',{'fontsize': 24})
plt.ylabel('Error',{'fontsize': 24})
plt.yticks( size=20)
plt.xticks( size=20)
plt.legend(loc='lower left',fontsize=20)
plt.savefig('orth_err.pdf')
Explanation: Varying r1 and r3 -- the input-output mirrors
End of explanation
Ms0 = {}
for r in rs:
Ms0[r] = run_example3(r1=r,r3=1.)
for r in rs:
contour_plot(Ms0[r])
err0 = {}
for r in rs:
err0[r] = off_diagonal_error(Ms0[r])
plt.figure(figsize = (12,6))
plt.plot(rs,[err0[r] for r in rs])
plt.yscale('log')
plt.xlabel('Input-output r1',{'fontsize': 24})
plt.title('Normalized non-orthogonality error',{'fontsize': 24})
plt.ylabel('Error',{'fontsize': 24})
plt.yticks( size=20)
plt.xticks( size=20)
plt.savefig('orth_err.pdf')
Explanation: Varying r1 with constant r3=1
End of explanation
Ms2 = {}
for r in rs:
Ms2[r] = run_example3(r1=.9,r2 = r,r3=.9)
for r in rs:
contour_plot(Ms2[r])
err2 = {}
for r in rs:
err2[r] = off_diagonal_error(Ms2[r])
plt.figure(figsize = (12,6))
plt.plot(rs,[err2[r] for r in rs])
plt.yscale('log')
plt.xlabel('Internal Reflectivity',{'fontsize': 24})
plt.title('Normalized non-orthogonality error',{'fontsize': 24})
plt.ylabel('Error',{'fontsize': 24})
plt.yticks( size=20)
plt.xticks( size=20)
plt.savefig('orth_err.pdf')
Explanation: Varying r2-- the internal mirror
End of explanation
plt.figure(figsize = (12,6))
plt.plot(rs,[err0[r] for r in rs],label='varied r1, with fixed r2 = 0.4 and r3 = 1')
plt.plot(rs,[err[r] for r in rs],label='varied r1 and r3, with fixed r2 = 0.4')
plt.plot(rs,[err2[r] for r in rs],label='varied r2, with fixed r1 = r3 = 0.9')
plt.yscale('log')
plt.xlabel('Reflectivities',{'fontsize': 24})
plt.title('Normalized non-orthogonality error',{'fontsize': 24})
plt.ylabel('Error',{'fontsize': 24})
plt.yticks( size=20)
plt.xticks( size=20)
plt.legend(loc='lower center',fontsize=20)
plt.savefig('orth_err.pdf')
Explanation: Plot all 3 together
End of explanation |
3,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite-Length Capacity of the Binary-Input AWGN (BI-AWGN) Channel
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Calculating the finite-length capacity of the binary input AWGN channel using numerical integration and the normal approximation
* Illustrating the finite-length capacity for different code lengths
Step1: Conditional pdf $f_{Y|X}(y|x)$ for a channel with noise variance (per dimension) $\sigma_n^2$. This is merely the Gaussian pdf with mean $x$ and variance $\sigma_n^2$
Step2: Output pdf $f_Y(y) = \frac12[f_{Y|X}(y|X=+1)+f_{Y|X}(y|X=-1)]$
Step3: This is the function we like to integrate, $f_Y(y)\cdot\log_2(f_Y(y))$. We need to take special care of the case when the input is 0, as we defined $0\cdot\log_2(0)=0$, which is usually treated as "nan"
Step4: Compute the capacity using numerical integration. We have
\begin{equation}
C_{\text{BI-AWGN}} = -\int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\mathrm{d}y - \frac12\log_2(2\pi e\sigma_n^2)
\end{equation}
Step5: Compute the dispersion of the BI-AWGN channel, which is given by (see, e.g., [1]). This is a
\begin{equation}
V = \frac{1}{\pi}\int_{-\infty}^{\infty}e^{-z^2}\left(1-\log_2\left(1+\exp\left(-\frac{2}{\sigma_n^2}+\frac{2\sqrt{2}}{\sigma_n}z\right)\right)-C\right)^2\mathrm{d}z
\end{equation}
where $C$ is the capacity of the BI-AWGN channel. The integral can be computed numerically or using the Gauss-Hermite quadrature (https
Step6: The finite-length capacity for the BI-AWGN channel is given by
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e) + \frac{\log_2(n)}{2n}
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{n(C-r) + \frac{1}{2}\log_2(n)}{\sqrt{Vn}}\right)
\end{equation}
For a given channel (i.e., a given $E_s/N_0$ or its equivalent noise variance $\sigma_n^2$), we can compute the capacity $C$ and the dispersion $V$ and then use it to get an estimate of what error rate an ideal code with an idea decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture, see, e.g., [1] for details)
Step7: Show finite length capacity estimates for some codes of different lengths $n$
Step8: Different representation, for a given channel (and here, we pick $E_s/N_0 = -2.83$ dB), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available. | Python Code:
import numpy as np
import scipy.integrate as integrate
from scipy.stats import norm
import matplotlib.pyplot as plt
Explanation: Finite-Length Capacity of the Binary-Input AWGN (BI-AWGN) Channel
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Calculating the finite-length capacity of the binary input AWGN channel using numerical integration and the normal approximation
* Illustrating the finite-length capacity for different code lengths
End of explanation
def f_YgivenX(y,x,sigman):
return np.exp(-((y-x)**2)/(2*sigman**2))/np.sqrt(2*np.pi)/sigman
Explanation: Conditional pdf $f_{Y|X}(y|x)$ for a channel with noise variance (per dimension) $\sigma_n^2$. This is merely the Gaussian pdf with mean $x$ and variance $\sigma_n^2$
End of explanation
def f_Y(y,sigman):
return 0.5*(f_YgivenX(y,+1,sigman)+f_YgivenX(y,-1,sigman))
Explanation: Output pdf $f_Y(y) = \frac12[f_{Y|X}(y|X=+1)+f_{Y|X}(y|X=-1)]$
End of explanation
def integrand(y, sigman):
value = f_Y(y,sigman)
if value < 1e-20:
return_value = 0
else:
return_value = value * np.log2(value)
return return_value
Explanation: This is the function we like to integrate, $f_Y(y)\cdot\log_2(f_Y(y))$. We need to take special care of the case when the input is 0, as we defined $0\cdot\log_2(0)=0$, which is usually treated as "nan"
End of explanation
def C_BIAWGN(sigman):
# numerical integration of the h(Y) part
integral = integrate.quad(integrand, -np.inf, np.inf, args=(sigman))[0]
# take into account h(Y|X)
return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)
Explanation: Compute the capacity using numerical integration. We have
\begin{equation}
C_{\text{BI-AWGN}} = -\int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\mathrm{d}y - \frac12\log_2(2\pi e\sigma_n^2)
\end{equation}
End of explanation
def V_integrand(z, C, sigman):
sigmanq = np.square(sigman)
m1 = np.square(1 - np.log2(1 + np.exp(-2/sigmanq + 2*np.sqrt(2)*z/sigman)) - C)
m2 = np.exp(-np.square(z))
if np.isinf(m1) or np.isinf(m2):
value = 0
else:
value = m1*m2
return value
# compute the dispersion using numerical integration
def V_BIAWGN(C, sigman):
integral = integrate.quad(V_integrand, -np.inf, np.inf, args=(C,sigman))[0]
return integral/np.sqrt(np.pi)
# Alternative implementation using Gauss-Hermite Quadrature
x_GH, w_GH = np.polynomial.hermite.hermgauss(40)
def V_BIAWGN_GH(C, sigman):
integral = sum(w_GH * [np.square(1-np.log2(1 + np.exp(-2/np.square(sigman) + 2*np.sqrt(2)*xi/sigman)) - C) for xi in x_GH])
return integral / np.sqrt(np.pi)
Explanation: Compute the dispersion of the BI-AWGN channel, which (see, e.g., [1]) is given by
\begin{equation}
V = \frac{1}{\pi}\int_{-\infty}^{\infty}e^{-z^2}\left(1-\log_2\left(1+\exp\left(-\frac{2}{\sigma_n^2}+\frac{2\sqrt{2}}{\sigma_n}z\right)\right)-C\right)^2\mathrm{d}z
\end{equation}
where $C$ is the capacity of the BI-AWGN channel. The integral can be computed numerically or using the Gauss-Hermite quadrature (https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature). Both versions are given below.
[1] M. Coşkun, G. Durisi, T. Jerkovits, G. Liva, W. Ryan, B. Stein, F. Steiner, "Efficient error-correcting codes in the short blocklength regime", Physical Communication, pp. 66-79, 34, 2019, preprint available online https://arxiv.org/abs/1812.08562
End of explanation
def get_Pe_finite_length(n, r, sigman):
# compute capacity
C = C_BIAWGN(sigman)
# compute dispersion
V = V_BIAWGN_GH(C, sigman)
# Q-function is "norm.sf" (survival function)
return norm.sf((n*(C-r) + 0.5*np.log2(n))/np.sqrt(n*V))
Explanation: The finite-length capacity for the BI-AWGN channel is given by
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e) + \frac{\log_2(n)}{2n}
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{n(C-r) + \frac{1}{2}\log_2(n)}{\sqrt{Vn}}\right)
\end{equation}
For a given channel (i.e., a given $E_s/N_0$ or its equivalent noise variance $\sigma_n^2$), we can compute the capacity $C$ and the dispersion $V$ and then use them to get an estimate of what error rate an ideal code with an ideal decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture, see, e.g., [1] for details)
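A short usage sketch of the helper defined above (the numbers are purely illustrative):
sigman_example = np.sqrt(0.5*10**(-0.0/10))   # noise standard deviation at E_s/N_0 = 0 dB
print(get_Pe_finite_length(n=500, r=0.5, sigman=sigman_example))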
End of explanation
esno_dB_range = np.linspace(-4,3,100)
esno_lin_range = [10**(esno_db/10) for esno_db in esno_dB_range]
# compute sigma_n
sigman_range = [np.sqrt(1/2/esno_lin) for esno_lin in esno_lin_range]
capacity_BIAWGN = [C_BIAWGN(sigman) for sigman in sigman_range]
Pe_BIAWGN_r12_n100 = [get_Pe_finite_length(100, 0.5, sigman) for sigman in sigman_range]
Pe_BIAWGN_r12_n500 = [get_Pe_finite_length(500, 0.5, sigman) for sigman in sigman_range]
Pe_BIAWGN_r12_n1000 = [get_Pe_finite_length(1000, 0.5, sigman) for sigman in sigman_range]
Pe_BIAWGN_r12_n5000 = [get_Pe_finite_length(5000, 0.5, sigman) for sigman in sigman_range]
fig = plt.figure(1,figsize=(10,7))
plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n100)
plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n500)
plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n1000)
plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n5000)
plt.axvspan(-4, -2.83, alpha=0.5, color='gray')
plt.axvline(x=-2.83, color='k')
plt.ylim((1e-8,1))
plt.xlim((-4,2))
plt.xlabel('$E_s/N_0$ (dB)', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$'], fontsize=16)
plt.text(-3.2, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BI_AWGN_Pe_R12.pdf',bbox_inches='tight')
Explanation: Show finite length capacity estimates for some codes of different lengths $n$
End of explanation
#specify esno
esno = -2.83
n_range = np.linspace(10,2000,100)
sigman = np.sqrt(0.5*10**(-esno/10))
C = C_BIAWGN(sigman)
V = V_BIAWGN_GH(C, sigman)
r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) + 0.5*np.log2(n)/n for n in n_range]
fig = plt.figure(1,figsize=(10,7))
plt.plot(n_range, r_Pe_1em3)
plt.plot(n_range, r_Pe_1em6)
plt.plot(n_range, r_Pe_1em9)
plt.axhline(y=C, color='k')
plt.ylim((0,0.55))
plt.xlim((0,2000))
plt.xlabel('Length $n$', fontsize=16)
plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16)
plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BI_AWGN_r_esno_m283.pdf',bbox_inches='tight')
Explanation: Different representation, for a given channel (and here, we pick $E_s/N_0 = -2.83$ dB), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
End of explanation |
3,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: OPTIONAL | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
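For example, the standardization of the target can be undone later with the stored factors (a sketch of the inverse transform):
mean, std = scaled_features['cnt']
original_cnt = data['cnt']*std + mean  # back to raw ride counts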
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 1/float(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
output_errors = (targets-final_outputs)
output_grad = 1.0 # the output activation is the identity f(x) = x, so its derivative is 1 and the output error is just the difference between target and prediction
# TODO: Backpropagated error - Replace these values with your calculations.
hidden_errors = np.dot(self.weights_hidden_to_output.T,output_errors)# errors propagated to the hidden layer
hidden_grad = hidden_outputs*(1-hidden_outputs) # hidden layer gradients
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * np.dot(output_errors*output_grad,hidden_outputs.T)# update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad,inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
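Written as equations (this mirrors the train and run methods of the NeuralNetwork class), the forward pass is $h = \sigma(W_{\text{input}\to\text{hidden}} \, x)$ for the hidden layer and $\hat{y} = W_{\text{hidden}\to\text{output}} \, h$ for the output, where $\sigma$ is the sigmoid and the output activation is the identity.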
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
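Concretely, the two derivatives needed in the backward pass are $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$ for the sigmoid used in the hidden layer, and $f'(x) = 1$ for the identity output activation $f(x) = x$.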
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 1600
learning_rate = 0.01
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
My model has successfully passed all the test cases.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
3,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
=========================================
Reading/Writing a noise covariance matrix
=========================================
Plot a noise covariance matrix.
Step1: Show covariance | Python Code:
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
from os import path as op
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_evo = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
cov = mne.read_cov(fname_cov)
print(cov)
evoked = mne.read_evokeds(fname_evo)[0]
Explanation: =========================================
Reading/Writing a noise covariance matrix
=========================================
Plot a noise covariance matrix.
End of explanation
cov.plot(evoked.info, exclude='bads', show_svd=False)
Explanation: Show covariance
End of explanation |
3,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using dstoolbox visualization
Table of contents
Nodes and edges of a pipeline
Visualizing a pipeline
Step1: Nodes and edges of a pipeline
Every sklearn Pipeline and FeatureUnion can be seen as a graph. Sometimes, it's useful to get an explicit graph structure of your sklearn model, that is, its nodes and edges. Those can be obtained by dstoolbox's get_nodes_edges
Step2: Visualizing a pipeline
One application of having the graph structure of your model is that you can use existing libraries to plot that structure. For convenience, dstoolbox implements a function that does this, make_graph. Using this, you get a pydotplus graph, which you can plot to your notebook or save in a file using dstoolbox.visualization.save_graph_to_file.
Note | Python Code:
from pprint import pprint
from IPython.display import Image
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import StandardScaler
from dstoolbox.utils import get_nodes_edges
from dstoolbox.visualization import make_graph
Explanation: Using dstoolbox visualization
Table of contents
Nodes and edges of a pipeline
Visualizing a pipeline
End of explanation
my_pipe = Pipeline([
('step0', FunctionTransformer()),
('step1', FeatureUnion([
('feat0', FunctionTransformer()),
('feat1', FunctionTransformer()),
])),
('step2', FunctionTransformer()),
])
nodes, edges = get_nodes_edges('my_pipe', my_pipe)
pprint(nodes)
pprint(edges)
Explanation: Nodes and edges of a pipeline
Every sklearn Pipeline and FeatureUnion can be seen as a graph. Sometimes, it's useful to get an explicit graph structure of your sklearn model, that is, its nodes and edges. Those can be obtained by dstoolbox's get_nodes_edges:
End of explanation
my_pipe = Pipeline([
('step1', FunctionTransformer()),
('step2', FunctionTransformer()),
('step3', FeatureUnion([
('feat3_1', FunctionTransformer()),
('feat3_2', Pipeline([
('step10', FunctionTransformer()),
('step20', FeatureUnion([
('p', FeatureUnion([
('p0', FunctionTransformer()),
('p1', FunctionTransformer()),
])),
('q', FeatureUnion([
('q0', FunctionTransformer()),
('q1', FunctionTransformer()),
])),
])),
('step30', StandardScaler()),
])),
('feat3_3', FeatureUnion([
('feat10', FunctionTransformer()),
('feat11', FunctionTransformer()),
])),
])),
('step4', StandardScaler()),
('step5', FeatureUnion([
('feat5_1', FunctionTransformer()),
('feat5_2', FunctionTransformer()),
('feat5_3', FunctionTransformer()),
])),
('step6', StandardScaler()),
])
graph = make_graph('my pipe', my_pipe)
Image(graph.create_png())
Explanation: Visualizing a pipeline
One application of having the graph structure of your model is that you can use existing libraries to plot that structure. For convenience, dstoolbox implements a function that does this, make_graph. Using this, you get a pydotplus graph, which you can plot to your notebook or save in a file using dstoolbox.visualization.save_graph_to_file.
Note: Using this requires additional packages not covered by dstoolbox. Specifically, you need to install pydotplus and graphviz.
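For instance, since the returned graph is a pydotplus object, it can also be written straight to disk with pydotplus's write_png (the file name below is arbitrary):
# Save the rendered pipeline graph to a PNG file
graph.write_png('my_pipe.png')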
End of explanation |
3,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a CFSv2 forecast
CFSv2 is a seasonal forecast system, used for analysing past climate and also making seasonal, up to 9-month, forecasts. Here we give a brief example on how to use Planet OS API to merge 9-month forecasts started at different initial times, into a single ensemble forecast.
Ensemble forecasting is a traditional technique in medium range (up to 10 days) weather forecasts, seasonal forecasts and climate modelling. By changing initial conditions or model parameters, a range of forecasts is created, which differ from each other slightly, due to the chaotic nature of fluid dynamics (which weather modelling is a subset of). For weather forecasting, the ensemble is usually created by small changes in initial conditions, but for seasonal forecast, it is much easier to just take real initial conditions every 6-hours. Here we are going to show, first how to merge the different dates into a single plot with the help of python pandas library, and in addition we show that even 6-hour changes in initial conditions can lead to large variability in long range forecasts.
In this example we look into the 2 m temperature for the upcoming winter. We also add climatological averages from the CFS Reanalysis Climatologies to the plot for a better overview.
If you have more interest in Planet OS API, please refer to our official documentation.
Please also note that the API_client Python routine used in this notebook is still experimental and will change in the future, so take it just as guidance for using the API, and not as an official tool.
Note that we store 10 days of history on this dataset, so this notebook will be updated with the latest data as well, which means that the descriptions here might be a little outdated as the data is renewed.
Step1: The API needs a file APIKEY with your API key in the work folder. We initialize a datahub and dataset objects.
Step2: At the moment we are going to look into Tallinn, Innsbruck, Paris, Berlin and Lisbon temperature. In order to the automatic location selection to work, add your custom location to the API_client.python.lib.predef_locations file and after add your location into the list of locations here.
Step3: Here we clean the table just a bit and create time based index.
Step4: Next, we resample the data to 1-month totals.
Step5: Give new indexes to climatology dataframe to have data ordered the same way as cfsv2 forecast.
Step6: Finally, we are visualizing the monthly mean temperature for each different forecast, in a single plot.
Step7: Below are the graphs for the five locations. For Tallinn, November is forecasted to be quite similar to the climatological mean (red line), while December might be much warmer than usual. January is again close to the climatology. After January, all months are forecasted to be colder than average, especially April and May.
The forecast for Innsbruck is interesting: the upcoming seven months are all forecasted to be colder than usual. For example, the January mean temperature might be two degrees lower.
At the same time, Paris might face a slightly colder November, while December and January could be close to average. After that, all forecasted values are rather colder than the climatology.
Berlin might get a fairly average November. However, December could then get much colder. January and February could be close to average, and after that temperatures might again be colder than usual.
Temperatures in Lisbon are forecasted to be a bit colder in winter and spring. However, May is forecasted to be close to average, while the June forecast is much warmer than the climatological mean. | Python Code:
%matplotlib notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import calendar
import datetime
import matplotlib.dates as mdates
from API_client.python.datahub import datahub_main
from API_client.python.lib.dataset import dataset
from API_client.python.lib.variables import variables
import matplotlib
import warnings
warnings.filterwarnings("ignore")
matplotlib.rcParams['font.family'] = 'Avenir Lt Std'
print (matplotlib.__version__)
Explanation: Using a CFSv2 forecast
CFSv2 is a seasonal forecast system, used for analysing past climate and also making seasonal, up to 9-month, forecasts. Here we give a brief example on how to use Planet OS API to merge 9-month forecasts started at different initial times, into a single ensemble forecast.
Ensemble forecasting is a traditional technique in medium range (up to 10 days) weather forecasts, seasonal forecasts and climate modelling. By changing initial conditions or model parameters, a range of forecasts is created, which differ from each other slightly, due to the chaotic nature of fluid dynamics (which weather modelling is a subset of). For weather forecasting, the ensemble is usually created by small changes in initial conditions, but for seasonal forecast, it is much easier to just take real initial conditions every 6-hours. Here we are going to show, first how to merge the different dates into a single plot with the help of python pandas library, and in addition we show that even 6-hour changes in initial conditions can lead to large variability in long range forecasts.
In this example we look into the 2 m temperature for the upcoming winter. We also add climatological averages from the CFS Reanalysis Climatologies to the plot for a better overview.
If you have more interest in Planet OS API, please refer to our official documentation.
Please also note that the API_client Python routine used in this notebook is still experimental and will change in the future, so take it just as guidance for using the API, and not as an official tool.
Note that we store 10 days of history on this dataset, so this notebook will be updated with the latest data as well, which means that the descriptions here might be a little outdated as the data is renewed.
End of explanation
apikey = open('APIKEY').readlines()[0].strip()
dh = datahub_main(apikey)
ds = dataset('ncep_cfsv2', dh, debug=False)
ds2 = dataset('ncep_cfsr_climatologies', dh, debug=False)
ds.variables()
ds.vars = variables(ds.variables(), {'reftimes':ds.reftimes,'timesteps':ds.timesteps},ds)
ds2.vars = variables(ds2.variables(), {},ds2)
Explanation: The API needs a file APIKEY with your API key in the work folder. We initialize a datahub and dataset objects.
End of explanation
start_date = datetime.datetime.now() - datetime.timedelta(days=9)
end_date = datetime.datetime.now() + datetime.timedelta(days=5)
reftime_start = start_date.strftime('%Y-%m-%d') + 'T00:00:00'
reftime_end = end_date.strftime('%Y-%m-%d') + 'T18:00:00'
locations = ['Tallinn','Innsbruck','Paris','Berlin','Lisbon']
for locat in locations:
ds2.vars.TMAX_2maboveground.get_values_analysis(count=1000, location=locat)
ds.vars.Temperature_height_above_ground.get_values(count=1000, location=locat, reftime=reftime_start,
reftime_end=reftime_end)
Explanation: At the moment we are going to look into Tallinn, Innsbruck, Paris, Berlin and Lisbon temperature. In order to the automatic location selection to work, add your custom location to the API_client.python.lib.predef_locations file and after add your location into the list of locations here.
End of explanation
def clean_table(loc):
ddd_clim = ds2.vars.TMAX_2maboveground.values[loc][['time','TMAX_2maboveground']]
ddd_temp = ds.vars.Temperature_height_above_ground.values[loc][['reftime','time','Temperature_height_above_ground']]
dd_temp=ddd_temp.set_index('time')
return ddd_clim,dd_temp
Explanation: Here we clean the table just a bit and create time based index.
End of explanation
def resample_1month_totals(loc):
reft_unique = ds.vars.Temperature_height_above_ground.values[loc]['reftime'].unique()
nf_tmp = []
for reft in reft_unique:
abc = dd_temp[dd_temp.reftime==reft].resample('M').mean()
abc['Temperature_height_above_ground'+'_'+reft.astype(str)] = \
abc['Temperature_height_above_ground'] - 272.15
del abc['Temperature_height_above_ground']
nf_tmp.append(abc)
nf2_tmp = pd.concat(nf_tmp,axis=1)
return nf2_tmp
Explanation: Next, we resample the data to 1-month totals.
End of explanation
def reindex_clim_convert_temp():
i_new = 0
ddd_clim_new_indxes = ddd_clim.copy()
new_indexes = []
converted_temp = []
for i,clim_values in enumerate(ddd_clim['TMAX_2maboveground']):
if i == 0:
i_new = 12 - nf2_tmp.index[0].month + 2
else:
i_new = i_new + 1
if i_new == 13:
i_new = 1
new_indexes.append(i_new)
converted_temp.append(ddd_clim_new_indxes['TMAX_2maboveground'][i] -273.15)
ddd_clim_new_indxes['new_index'] = new_indexes
ddd_clim_new_indxes['tmp_c'] = converted_temp
return ddd_clim_new_indxes
Explanation: Give new indexes to climatology dataframe to have data ordered the same way as cfsv2 forecast.
End of explanation
def make_image(loc):
fig=plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
plt.ylim(np.min(np.min(nf2_tmp))-3,np.max(np.max(nf2_tmp))+3)
plt.boxplot(nf2_tmp,medianprops=dict(color='#1B9AA0'))
dates2 = [n.strftime('%b %Y') for n in nf2_tmp.index]
if len(np.arange(1, len(dates2)+1))== len(ddd_clim_indexed.sort_values(by=['new_index'])['tmp_c'][:-3]):
clim_len = -3
else:
clim_len = -2
plt.plot(np.arange(1, len(dates2)+1),ddd_clim_indexed.sort_values(by=['new_index'])['tmp_c'][:clim_len],"*",color='#EC5840',linestyle='-')
plt.xticks(np.arange(1, len(dates2)+1), dates2, rotation='vertical')
plt.grid(color='#C3C8CE',alpha=1)
plt.ylabel('Monthly Temperature [C]')
ttl = plt.title('Monthly Temperature in ' + loc,fontsize=15,fontweight='bold')
ttl.set_position([.5, 1.05])
fig.autofmt_xdate()
#plt.savefig('Monthly_mean_temp_cfsv2_forecast_{0}.png'.format(loc),dpi=300,bbox_inches='tight')
plt.show()
Explanation: Finally, we are visualizing the monthly mean temperature for each different forecast, in a single plot.
End of explanation
for locat in locations:
ddd_clim,dd_temp = clean_table(locat)
nf2_tmp = resample_1month_totals(locat)
ddd_clim_indexed = reindex_clim_convert_temp()
make_image(locat)
Explanation: Below are the graphs for the five locations. For Tallinn, November is forecasted to be quite similar to the climatological mean (red line), while December might be much warmer than usual. January is again close to the climatology. After January, all months are forecasted to be colder than average, especially April and May.
The forecast for Innsbruck is interesting: the upcoming seven months are all forecasted to be colder than usual. For example, the January mean temperature might be two degrees lower.
At the same time, Paris might face a slightly colder November, while December and January could be close to average. After that, all forecasted values are rather colder than the climatology.
Berlin might get a fairly average November. However, December could then get much colder. January and February could be close to average, and after that temperatures might again be colder than usual.
Temperatures in Lisbon are forecasted to be a bit colder in winter and spring. However, May is forecasted to be close to average, while the June forecast is much warmer than the climatological mean.
End of explanation |
3,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Inference
Step1: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
Step2: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference
Step3: < 40 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same like MCMC)
Step4: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
Step5: Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
Step6: Hey, our neural network did all right!
Let's look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
Step7: Probability surface
Step8: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like
Step9: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI
Step10: While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pool from a database there and not have to keep all the data in RAM.
Let's pass those to advi_minibatch()
Step11: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights. | Python Code:
%matplotlib inline
import theano
theano.config.floatX = 'float64'
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
Explanation: Variational Inference: Bayesian Neural Networks
(c) 2016 by Thomas Wiecki
Original blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/
Current trends in Machine Learning
There are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and "Big Data". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.
Probabilistic Programming at scale
Probabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to construct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms instead fit a distribution (e.g. normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as a new package called Edward which is mainly concerned with Variational Inference.
Unfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees).
Deep Learning
Now in its third renaissance, deep learning has been making headlines repeatedly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point of view, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.
A large part of the innovation in deep learning is the ability to train these extremely complex models. This rests on several pillars:
* Speed: facilitating the GPU allowed for much faster processing.
* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.
* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. Techniques like drop-out avoid overfitting.
* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.
Bridging Deep Learning and Probabilistic Programming
On one hand we have Probabilistic Programming which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.
While this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:
* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.
* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.
* Regularization with priors: Weights are often L2-regularized to avoid overfitting, this very naturally becomes a Gaussian prior for the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).
* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet.
* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition being that all cars from a certain manufactures share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. The hierarchical model would learn all that from the data.
* Other hybrid architectures: We can more freely build all kinds of neural networks. For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.
Bayesian Neural Networks in PyMC3
Generating data
First, let's generate some toy data -- a simple binary classification problem that's not linearly separable.
End of explanation
# Trick: Turn inputs and outputs into shared variables.
# It's still the same thing, but we can later change the values of the shared variable
# (to switch in the test-data later) and pymc3 will just use the new data.
# Kind-of like a pointer we can redirect.
# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
n_hidden = 5
# Initialize random weights between each layer
init_1 = np.random.randn(X.shape[1], n_hidden)
init_2 = np.random.randn(n_hidden, n_hidden)
init_out = np.random.randn(n_hidden)
with pm.Model() as neural_network:
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
# Weights from 1st to 2nd layer
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network using tanh activation function
act_1 = T.tanh(T.dot(ann_input,
weights_in_1))
act_2 = T.tanh(T.dot(act_1,
weights_1_2))
act_out = T.nnet.sigmoid(T.dot(act_2,
weights_2_out))
# Binary classification -> Bernoulli likelihood
out = pm.Bernoulli('out',
act_out,
observed=ann_output)
Explanation: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
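Written out, the model above computes $a_1 = \tanh(X W_1)$, $a_2 = \tanh(a_1 W_2)$ and $p(y{=}1 \mid X) = \sigma(a_2 \, w_{\text{out}})$, with independent Normal(0, 1) priors on all weights and a Bernoulli likelihood on the observed labels.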
End of explanation
%%time
with neural_network:
# Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)
v_params = pm.variational.advi(n=50000)
Explanation: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference: Scaling model complexity
We could now just run a MCMC sampler like NUTS which works pretty well in this case but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.
Instead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note, that this is a mean-field approximation so we ignore correlations in the posterior.
End of explanation
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
Explanation: < 40 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so it is not at all the same as MCMC):
End of explanation
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
Explanation: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
End of explanation
# Replace shared variables with testing set
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
# Use probability of > 0.5 to assume prediction of class 1
pred = ppc['out'].mean(axis=0) > 0.5
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');
print('Accuracy = {}%'.format((Y_test == pred).mean() * 100))
Explanation: Now that we've trained our model, let's predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
End of explanation
grid = np.mgrid[-3:3:100j,-3:3:100j]
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
ann_input.set_value(grid_2d)
ann_output.set_value(dummy_out)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
Explanation: Hey, our neural network did all right!
Let's look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
End of explanation
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
Explanation: Probability surface
End of explanation
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
Explanation: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:
End of explanation
# Set back to original data to retrain
ann_input.set_value(X_train)
ann_output.set_value(Y_train)
# Tensors and RV that will be using mini-batches
minibatch_tensors = [ann_input, ann_output]
minibatch_RVs = [out]
# Generator that returns mini-batches in each iteration
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
# Return random data samples of set size 100 each iteration
ixs = rng.randint(len(data), size=50)
yield data[ixs]
minibatches = zip(
create_minibatch(X_train),
create_minibatch(Y_train),
)
total_size = len(Y_train)
Explanation: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI: Scaling data size
So far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.
Fortunately, ADVI can be run on mini-batches as well. It just requires some setting up:
End of explanation
%%time
with neural_network:
# Run advi_minibatch
v_params = pm.variational.advi_minibatch(
n=50000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
sns.despine()
Explanation: While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pool from a database there and not have to keep all the data in RAM.
Let's pass those to advi_minibatch()
End of explanation
pm.traceplot(trace);
Explanation: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights.
End of explanation |
3,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
Step1: Build the libcoral C++ examples
This Colab provides a convenient way to build the libcoral C++ examples.
Simply run this notebook and it produces the downloadable binaries for your target system (default target is aarch64, which is compatible with the Coral Dev Board and Dev Board Mini).
To start the build, select Runtime > Run all in the Colab toolbar.
<a href="https
Step2: Install Bazel
Step3: Install dependencies to cross-compile
Step4: Build all examples for Coral boards
The following line builds for ARM64 systems (Coral Dev Board and Dev Board Mini). Alternative CPU architectures are k8 and armv7a.
Step5: Download the binaries | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
! git clone https://github.com/google-coral/libcoral.git
%cd libcoral
! git submodule init && git submodule update libedgetpu
Explanation: Build the libcoral C++ examples
This Colab provides a convenient way to build the libcoral C++ examples.
Simply run this notebook and it produces the downloadable binaries for your target system (default target is aarch64, which is compatible with the Coral Dev Board and Dev Board Mini).
To start the build, select Runtime > Run all in the Colab toolbar.
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/build_cpp_examples.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/build_cpp_examples.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
Download examples from GitHub
End of explanation
! sudo apt install curl
! curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
! echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
! sudo apt update && sudo apt install bazel
Explanation: Install Bazel
End of explanation
! bash docker/update_sources.sh
! sudo dpkg --add-architecture arm64 && sudo apt-get update
! sudo apt-get install -y crossbuild-essential-arm64 libpython3-dev:arm64 libusb-1.0-0-dev:arm64 xxd
Explanation: Install dependencies to cross-compile
End of explanation
! make CPU=aarch64 examples
Explanation: Build all examples for Coral boards
The following line builds for ARM64 systems (Coral Dev Board and Dev Board Mini). Alternative CPU architectures are k8 and armv7a.
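For example, to build the same targets for an x86-64 host instead, the CPU value can simply be swapped (armv7a works analogously); this is a sketch of that substitution:
! make CPU=k8 examples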
End of explanation
from google.colab import files
files.download('bazel-out/aarch64-opt/bin/coral/examples/backprop_last_layer')
files.download('bazel-out/aarch64-opt/bin/coral/examples/classify_image')
files.download('bazel-out/aarch64-opt/bin/coral/examples/model_pipelining')
files.download('bazel-out/aarch64-opt/bin/coral/examples/two_models_one_tpu')
files.download('bazel-out/aarch64-opt/bin/coral/examples/two_models_two_tpus_threaded')
Explanation: Download the binaries
End of explanation |
3,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Optimization under constraint (solution)
A little more detail is given in this article
Step1: We recall the optimization problem to solve
Step2: Exercise 2
Step3: The code proposed here was taken and modified so as to wrap it in a function that adapts to any type of differentiable function and constraint
Step4: Extension 1
Step5: Version with the Arrow-Hurwicz algorithm | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Optimization under constraint (solution)
A little more detail can be found in this article: Damped Arrow-Hurwicz algorithm for sphere packing.
End of explanation
from cvxopt import solvers, matrix
import random
def fonction(x=None,z=None) :
if x is None :
x0 = matrix ( [[ random.random(), random.random() ]])
return 0,x0
f = x[0]**2 + x[1]**2 - x[0]*x[1] + x[1]
d = matrix ( [ x[0]*2 - x[1], x[1]*2 - x[0] + 1 ] ).T
if z is None:
return f, d
else :
h = z[0] * matrix ( [ [ 2.0, -1.0], [-1.0, 2.0] ])
return f, d, h
A = matrix([ [ 1.0, 2.0 ] ]).trans()
b = matrix ( [[ 1.0] ] )
sol = solvers.cp ( fonction, A = A, b = b)
print (sol)
print ("solution:",sol['x'].T)
Explanation: We recall the optimization problem to solve:
$\left\{ \begin{array}{l} \min_U J(U) = u_1^2 + u_2^2 - u_1 u_2 + u_2 \\ \text{subject to } \theta(U) = u_1 + 2u_2 - 1 = 0 \text{ and } u_1 \geqslant 0.5 \end{array}\right.$
The Arrow-Hurwicz implementations proposed here are not generic. It is not recommended to reuse them unless you make full use of numpy's matrix computations.
Exercise 1: optimization with cvxopt
The cvxopt module uses a function that returns the value of the function to optimize, its derivative, and its second derivative.
$\begin{array}{rcl} f(x,y) &=& x^2 + y^2 - xy + y \\ \frac{\partial f(x,y)}{\partial x} &=& 2x - y \\ \frac{\partial f(x,y)}{\partial y} &=& 2y - x + 1 \\ \frac{\partial^2 f(x,y)}{\partial x^2} &=& 2 \\ \frac{\partial^2 f(x,y)}{\partial y^2} &=& 2 \\ \frac{\partial^2 f(x,y)}{\partial x\partial y} &=& -1 \end{array}$
The trickiest parameter is the function F, for which you should read the documentation of solvers.cp, which details the three ways the function F is called:
F() or F(None,None): this first case is probably the most confusing, since it must return the number of nonlinear constraints and the initial point $x_0$
F(x) or F(x,None)
F(x,z)
The solver is iterative: it starts from a point $x_0$ that is moved in the directions opposite to the gradients of the function to minimize and of the constraints, until the point $x_t$ no longer changes. That is why the first way of calling the function $F$ is in fact an initialization: the optimization algorithm needs an initial point $x_0$ within the domain of definition of the function $f$.
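As an illustration (the solver performs these calls internally; the assignments below are only a sketch), the three call patterns on the function defined above look like this:
m, x0 = fonction() # F(): number of nonlinear constraints and a starting point
f, Df = fonction(x0) # F(x): objective value and gradient at x
f, Df, H = fonction(x0, matrix([1.0])) # F(x, z): additionally the z-weighted Hessian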
End of explanation
def fonction(X) :
x,y = X
f = x**2 + y**2 - x*y + y
d = [ x*2 - y, y*2 - x + 1 ]
return f, d
def contrainte(X) :
x,y = X
f = x+2*y-1
d = [ 1,2]
return f, d
X0 = [ random.random(),random.random() ]
p0 = random.random()
epsilon = 0.1
rho = 0.1
diff = 1
iter = 0
while diff > 1e-10 :
f,d = fonction( X0 )
th,dt = contrainte( X0 )
Xt = [ X0[i] - epsilon*(d[i] + dt[i] * p0) for i in range(len(X0)) ]
th,dt = contrainte( Xt )
pt = p0 + rho * th
iter += 1
diff = sum ( [ abs(Xt[i] - X0[i]) for i in range(len(X0)) ] )
X0 = Xt
p0 = pt
if iter % 100 == 0 :
print ("i {0} diff {1:0.000}".format(iter,diff),":", f,X0,p0,th)
print (diff,iter,p0)
print("solution:",X0)
Explanation: Exercise 2: the Arrow-Hurwicz algorithm
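For reference, the update rule implemented above is $U_{t+1} = U_t - \varepsilon\,(\nabla J(U_t) + p_t \nabla\theta(U_t))$ followed by $p_{t+1} = p_t + \rho\,\theta(U_{t+1})$, iterated until $U_t$ stops moving.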
End of explanation
def fonction(X,c) :
x,y = X
f = x**2 + y**2 - x*y + y
d = [ x*2 - y, y*2 - x + 1 ]
v = x+2*y-1
v = c/2 * v**2
# the function now also returns dv (which it did not do before)
dv = [ 2*(x+2*y-1), 4*(x+2*y-1) ]
dv = [ c/2 * dv[0], c/2 * dv[1] ]
return f + v, d, dv
def contrainte(X) :
x,y = X
f = x+2*y-1
d = [ 1,2]
return f, d
X0 = [ random.random(),random.random() ]
p0 = random.random()
epsilon = 0.1
rho = 0.1
c = 1
diff = 1
iter = 0
while diff > 1e-10 :
f,d,dv = fonction( X0,c )
th,dt = contrainte( X0 )
# the dv[i] term is new
Xt = [ X0[i] - epsilon*(d[i] + dt[i] * p0 + dv[i]) for i in range(len(X0)) ]
th,dt = contrainte( Xt )
pt = p0 + rho * th
iter += 1
diff = sum ( [ abs(Xt[i] - X0[i]) for i in range(len(X0)) ] )
X0 = Xt
p0 = pt
if iter % 100 == 0 :
print ("i {0} diff {1:0.000}".format(iter,diff),":", f,X0,p0,th)
print (diff,iter,p0)
print("solution:",X0)
Explanation: The code proposed here was taken and modified so as to wrap it in a function that adapts to any type of differentiable function and constraint: Arrow_Hurwicz. One must distinguish the algorithm itself from the proof of its convergence. This algorithm works for a large class of functions, but its convergence is only guaranteed when the functions are quadratic.
Exercise 3: the augmented Lagrangian
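In the code above, the augmented Lagrangian simply replaces $J$ by $J_c(U) = J(U) + \frac{c}{2}\,\theta(U)^2$, so the gradient step gains the extra term $c\,\theta(U)\,\nabla\theta(U)$ (the dv vector) while the multiplier update is unchanged.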
End of explanation
from cvxopt import solvers, matrix
import random
def fonction(x=None,z=None) :
if x is None :
x0 = matrix ( [[ random.random(), random.random() ]])
return 0,x0
f = x[0]**2 + x[1]**2 - x[0]*x[1] + x[1]
d = matrix ( [ x[0]*2 - x[1], x[1]*2 - x[0] + 1 ] ).T
h = matrix ( [ [ 2.0, -1.0], [-1.0, 2.0] ])
if z is None: return f, d
else : return f, d, h
A = matrix([ [ 1.0, 2.0 ] ]).trans()
b = matrix ( [[ 1.0] ] )
G = matrix ( [[0.0, -1.0] ]).trans()
h = matrix ( [[ -0.3] ] )
sol = solvers.cp ( fonction, A = A, b = b, G=G, h=h)
print (sol)
print ("solution:",sol['x'].T)
Explanation: Extension 1: inequality
The problem to solve is the following:
$\left\{ \begin{array}{l} \min_U J(U) = u_1^2 + u_2^2 - u_1 u_2 + u_2 \\ \text{subject to } \theta(U) = u_1 + 2u_2 - 1 = 0 \text{ and } u_1 \geqslant 0.3 \end{array}\right.$
End of explanation
import numpy,random
X0 = numpy.matrix ( [[ random.random(), random.random() ]]).transpose()
P0 = numpy.matrix ( [[ random.random(), random.random() ]]).transpose()
A = numpy.matrix([ [ 1.0, 2.0 ], [ 0.0, -1.0] ])
tA = A.transpose()
b = numpy.matrix ( [[ 1.0], [-0.30] ] )
epsilon = 0.1
rho = 0.1
c = 1
first = True
iter = 0
while first or abs(J - oldJ) > 1e-8 :
if first :
J = X0[0,0]**2 + X0[1,0]**2 - X0[0,0]*X0[1,0] + X0[1,0]
oldJ = J+1
first = False
else :
oldJ = J
J = X0[0,0]**2 + X0[1,0]**2 - X0[0,0]*X0[1,0] + X0[1,0]
dj = numpy.matrix ( [ X0[0,0]*2 - X0[1,0], X0[1,0]*2 - X0[0,0] + 1 ] ).transpose()
Xt = X0 - ( dj + tA * P0 ) * epsilon
Pt = P0 + ( A * Xt - b) * rho
if Pt [1,0] < 0 : Pt[1,0] = 0
X0,P0 = Xt,Pt
iter += 1
if iter % 100 == 0 :
print ("iteration",iter, J)
print (iter)
print ("solution:",Xt.T)
Explanation: Version with the Arrow-Hurwicz algorithm
End of explanation |
3,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decomposition framework of the PySAL segregation module
This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.
First, let's import all the needed libraries.
Step1: In this example, we are going to use census data that the user must download its own copy, following similar guidelines explained in https
Step2: Then, we read the data
Step3: We are going to work with the variable of the nonhispanic black people (nhblk10) and the total population of each unit (pop10). So, let's read the map of all census tracts of US and select some specific columns for the analysis
Step4: In this notebook, we use the Metropolitan Statistical Area (MSA) of US (we're also using the word 'cities' here to refer them). So, let's read the correspondence table that relates the tract id with the corresponding Metropolitan area...
Step5: ..and merge them with the previous data.
Step6: We now build the composition variable (compo) which is the division of the frequency of the chosen group and total population. Let's inspect the first rows of the data.
Step7: Now, we chose two different metropolitan areas to compare the degree of segregation.
Map of the composition of the Metropolitan area of Los Angeles
Step8: Map of the composition of the Metropolitan area of New York
Step9: We first compare the Gini index of both cities. Let's import the Gini_Seg class from segregation, fit both indexes and check the difference in point estimation.
Step10: Let's decompose these difference according to Rey, S. et al "Comparative Spatial Segregation Analytics". Forthcoming. You can check the options available in this decomposition below
Step11: Composition Approach (default)
The difference of -0.10653 fitted previously, can be decomposed into two components. The Spatial component and the attribute component. Let's estimate both, respectively.
Step12: So, the first thing to notice is that attribute component, i.e., given by a difference in the population structure (in this case, the composition) plays a more important role in the difference, since it has a higher absolute value.
The difference in the composition can be inspected in the plotting method with the type cdfs
Step13: If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument plot_type = 'maps'
The first and second contexts are Los Angeles and New York, respectively.
Step14: Note that in all plotting methods, the title presents each component of the decomposition performed.
Share Approach
The share approach takes into consideration the share of each group in each city. Since this approach takes into consideration the focus group and the complementary group share to build the "counterfactual" total population of each unit, it is of interest to inspect all these four cdf's.
ps.: The share is the population frequency of each group in each unit over the total population of that respective group.
Step15: We can see that curve between the contexts are closer to each other which represent a drop in the importance of the population structure (attribute component) to -0.062. However, this attribute still overcomes the spatial component (-0.045) in terms of importance due to both absolute magnitudes.
Step16: We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.
Dual Composition Approach
The dual_composition approach is similar to the composition approach. However, it uses also the counterfactual composition of the cdf of the complementary group.
Step17: It is possible to see that the component values are very similar with slight changes from the composition approach.
Step18: The counterfactual distributions are virtually the same (but not equal) as the one from the composition approach.
Inspecting a different index | Python Code:
import pandas as pd
import pickle
import numpy as np
import matplotlib.pyplot as plt
from pysal.explore import segregation
from pysal.explore.segregation.decomposition import DecomposeSegregation
Explanation: Decomposition framework of the PySAL segregation module
This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.
First, let's import all the needed libraries.
End of explanation
#filepath = '~/LTDB_Std_2010_fullcount.csv'
Explanation: In this example, we are going to use census data of which the user must download their own copy, following guidelines similar to those explained in https://github.com/spatialucr/geosnap/tree/master/geosnap/data where you should download the full count file of 2010. The zipped file download will have a name that looks like LTDB_Std_All_fullcount.zip. After extracting the zipped content, the filepath of the data should look like this:
End of explanation
df = pd.read_csv(filepath, encoding = "ISO-8859-1", sep = ",")
Explanation: Then, we read the data:
End of explanation
# This file can be download here: https://drive.google.com/open?id=1gWF0OCn6xuR_WrEj7Ot2jY6KI2t6taIm
with open('data/tracts_US.pkl', 'rb') as input:
map_gpd = pickle.load(input)
map_gpd['INTGEOID10'] = pd.to_numeric(map_gpd["GEOID10"])
gdf_pre = map_gpd.merge(df, left_on = 'INTGEOID10', right_on = 'tractid')
gdf = gdf_pre[['GEOID10', 'geometry', 'pop10', 'nhblk10']]
Explanation: We are going to work with the variable of the nonhispanic black people (nhblk10) and the total population of each unit (pop10). So, let's read the map of all census tracts of US and select some specific columns for the analysis:
End of explanation
# You can download this file here: https://drive.google.com/open?id=10HUUJSy9dkZS6m4vCVZ-8GiwH0EXqIau
with open('data/tract_metro_corresp.pkl', 'rb') as input:
tract_metro_corresp = pickle.load(input).drop_duplicates()
Explanation: In this notebook, we use the Metropolitan Statistical Area (MSA) of US (we're also using the word 'cities' here to refer them). So, let's read the correspondence table that relates the tract id with the corresponding Metropolitan area...
End of explanation
merged_gdf = gdf.merge(tract_metro_corresp, left_on = 'GEOID10', right_on = 'geoid10')
Explanation: ..and merge them with the previous data.
End of explanation
merged_gdf['compo'] = np.where(merged_gdf['pop10'] == 0, 0, merged_gdf['nhblk10'] / merged_gdf['pop10'])
merged_gdf.head()
Explanation: We now build the composition variable (compo) which is the division of the frequency of the chosen group and total population. Let's inspect the first rows of the data.
End of explanation
la_2010 = merged_gdf.loc[(merged_gdf.name == "Los Angeles-Long Beach-Anaheim, CA")]
la_2010.plot(column = 'compo', figsize = (10, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
Explanation: Now, we chose two different metropolitan areas to compare the degree of segregation.
Map of the composition of the Metropolitan area of Los Angeles
End of explanation
ny_2010 = merged_gdf.loc[(merged_gdf.name == 'New York-Newark-Jersey City, NY-NJ-PA')]
ny_2010.plot(column = 'compo', figsize = (20, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
Explanation: Map of the composition of the Metropolitan area of New York
End of explanation
from pysal.explore.segregation.aspatial import GiniSeg
G_la = GiniSeg(la_2010, 'nhblk10', 'pop10')
G_ny = GiniSeg(ny_2010, 'nhblk10', 'pop10')
G_la.statistic - G_ny.statistic
Explanation: We first compare the Gini index of both cities. Let's import the Gini_Seg class from segregation, fit both indexes and check the difference in point estimation.
End of explanation
help(DecomposeSegregation)
Explanation: Let's decompose this difference according to Rey, S. et al "Comparative Spatial Segregation Analytics". Forthcoming. You can check the options available in this decomposition below:
End of explanation
DS_composition = DecomposeSegregation(G_la, G_ny)
DS_composition.c_s
DS_composition.c_a
Explanation: Composition Approach (default)
The difference of -0.10653 fitted previously can be decomposed into two components: the spatial component and the attribute component. Let's estimate both, respectively.
End of explanation
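A quick, hedged sanity check (not in the original notebook): assuming the decomposition is exact, as the reference above describes, the two components should add up to the overall Gini difference computed earlier.
# Expected to be close to G_la.statistic - G_ny.statistic
print(DS_composition.c_s + DS_composition.c_a)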
DS_composition.plot(plot_type = 'cdfs')
Explanation: So, the first thing to notice is that the attribute component, i.e. the part given by a difference in the population structure (in this case, the composition), plays a more important role in the difference, since it has a higher absolute value.
The difference in the composition can be inspected in the plotting method with the type cdfs:
End of explanation
DS_composition.plot(plot_type = 'maps')
Explanation: If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument plot_type = 'maps'
The first and second contexts are Los Angeles and New York, respectively.
End of explanation
DS_share = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'share')
DS_share.plot(plot_type = 'cdfs')
Explanation: Note that in all plotting methods, the title presents each component of the decomposition performed.
Share Approach
The share approach takes into consideration the share of each group in each city. Since this approach takes into consideration the focus group and the complementary group share to build the "counterfactual" total population of each unit, it is of interest to inspect all these four cdf's.
ps.: The share is the population frequency of each group in each unit over the total population of that respective group.
End of explanation
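As a hedged illustration of that definition (added here, not part of the original notebook), the per-tract share of the focus group and of the complementary group in, for example, Los Angeles can be computed directly:
la_focus_share = la_2010['nhblk10'] / la_2010['nhblk10'].sum()
la_compl_share = (la_2010['pop10'] - la_2010['nhblk10']) / (la_2010['pop10'] - la_2010['nhblk10']).sum()
print(la_focus_share.head())
print(la_compl_share.head())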
DS_share.plot(plot_type = 'maps')
Explanation: We can see that the curves of the two contexts are closer to each other, which represents a drop in the importance of the population structure (attribute component) to -0.062. However, this attribute component still overcomes the spatial component (-0.045) in terms of importance, due to their absolute magnitudes.
End of explanation
DS_dual = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'dual_composition')
DS_dual.plot(plot_type = 'cdfs')
Explanation: We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.
Dual Composition Approach
The dual_composition approach is similar to the composition approach. However, it uses also the counterfactual composition of the cdf of the complementary group.
End of explanation
DS_dual.plot(plot_type = 'maps')
Explanation: It is possible to see that the component values are very similar with slight changes from the composition approach.
End of explanation
from pysal.explore.segregation.spatial import RelativeConcentration
RCO_la = RelativeConcentration(la_2010, 'nhblk10', 'pop10')
RCO_ny = RelativeConcentration(ny_2010, 'nhblk10', 'pop10')
RCO_la.statistic - RCO_ny.statistic
RCO_DS_composition = DecomposeSegregation(RCO_la, RCO_ny)
RCO_DS_composition.c_s
RCO_DS_composition.c_a
Explanation: The counterfactual distributions are virtually the same (but not equal) as the one from the composition approach.
Inspecting a different index: Relative Concentration
End of explanation |
3,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 50 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
    layer = tf.layers.dense(prev_layer, num_units, use_bias = False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias = False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training = is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder to indicate wheter or not we're traning the model
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits, labels = labels))
# Tell Tensorflow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
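One possible way to do this, sketched here as a hedged example rather than the official solution (names such as is_training, pop_mean and pop_variance are choices made for this sketch), keeps explicit scale/shift variables plus population statistics and switches between batch and population statistics with tf.cond:
def fully_connected(prev_layer, num_units, is_training):
    # Linear transform without bias or activation; beta below plays the role of the bias
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))
    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
    epsilon = 1e-3

    def batch_norm_training():
        batch_mean, batch_variance = tf.nn.moments(layer, [0])
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    normalized = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(normalized)
The train function then has to pass a boolean is_training tensor into every layer, just as in the tf.layers version above.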
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
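A hedged sketch for the convolutional case (again one possible solution, not necessarily the official one): the statistics are computed per output channel by averaging over the batch, height and width axes, and the explicit bias is dropped because beta takes over its role.
def conv_layer(prev_layer, layer_depth, is_training):
    strides = 2 if layer_depth % 3 == 0 else 1
    in_channels = prev_layer.get_shape().as_list()[3]
    out_channels = layer_depth*4
    weights = tf.Variable(
        tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
    conv_output = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')

    # One scale/shift pair and one set of population statistics per output channel
    gamma = tf.Variable(tf.ones([out_channels]))
    beta = tf.Variable(tf.zeros([out_channels]))
    pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
    pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
    epsilon = 1e-3

    def batch_norm_training():
        # Average over batch, height and width, keeping one statistic per channel
        batch_mean, batch_variance = tf.nn.moments(conv_output, [0, 1, 2])
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(conv_output, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(conv_output, pop_mean, pop_variance, beta, gamma, epsilon)

    normalized = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(normalized)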
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
3,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The porepy grid structure
In this tutorial we investigate the PorePy grid structure, and explain how to access information stored in the grid.
Basic grid construction
The simplest grids are Cartesian. PorePy can create Cartesian grids in 1d, 2d, and 3d. In fact, there are 0d point-grids as well, but these are only used in the context of multiple intersecting fractures. To create a 2d Cartesian grid
Step1: The resulting cells will be of unit size, thus the grid covers the domain $[0, 3]\times [0,2]$. To specify the domain size, we need to pass a second argument
Step2: The grids currently only have node coordinates, together with topological information that we come back to below. To check the grid size, several attributes are provided
Step3: The node coordinates are stored as
Step4: and
Step5: As expected the second grid covers a larger area.
We also see that even though the grids are 2d, the nodes have three coordinates. This is general, all geometric quantities in porepy have three dimensions, even if they represent objects that are genuinely lower-dimensional. The reason is that for fractured media, we will often work with grids on fracture surfaces that are embedded in 3d domains, and treating this as special cases throughout the code turned out to be overly cumbersome. Also note that the third dimension was introduced automatically, so the user need not worry about this.
Geometric quantities
To compute additional geometric quantities, grids come with a method compute_geometry(), that will add attributes cell_centers, face_centers and face_normals
Step6: And similar for face information. It is of course possible to set the geometric quantities manually. Be aware that a subsequent call to compute_geometry() will overwrite this information.
It is sometimes useful to consider grids with a Cartesian topology, but with perturbed geometry. This is simply achieved by perturbing the nodes, and then (re)-computing geometry
Step7: When perturbing nodes, make sure to limit the distortion so that the grid topology still is valid; if not, all kinds of problems may arise.
Visualization
porepy provides two ways of visualizing the grid, matplotlib and vtk/paraview. Matplotlib visualization is done by
Step8: As we see, the plot is in 3d, and the third axis adds noise to the plot. The matplotlib interface is most useful for quick visualization, e.g. during debuging. For instance, we can add cell numbers to the plot by writing
Step9: For further information, see the documentation of plot_grid.
The second visualization option dumps the grid to a vtu file
Step10: This file can then be accessed by e.g. paraview.
Topological information
In addition to storing coordinates of cells, faces and nodes, the grid object also keeps track of the relation between them. Specifically, we can access
Step11: We see that the information is stored as a scipy.sparse matrix. From the shape of the matrix, we conclude that the rows represent nodes, while the faces are stored in columns. We can get the nodes for the first face by brute force by writing
Step12: That was hardly elegant, though, and would make for cumbersome implementation of, say, a numerical method. A better approach is to utilize the csc storage format, and write
Step13: To see why this works, confer the scipy.sparse documentation. Getting the faces of a node can be done by converting g.face_nodes to a csr_matrix, and then follow the above procedure.
The map between cells and faces is stored in the same way, thus the faces of cell 0 is found by
Step14: However, cell_faces also keeps track of the direction of the normal vector relative to the neighboring cells, by storing data as $\pm 1$, or zero if there is no connection between the cells (in contrast, face_nodes simply consist of 'True or False).
Step15: Compare this with the face normal vectors
Step16: We observe that positive data corresponds to normal vector pointing out of the cell. This is a very useful feature, since it in effect means that the transpose of g.cell_faces is the discrete divergence operator for the grid.
As with the face-node relations, we can obtain the cells of a face by representing the matrix in a sparse row storage format, and then use the above procedure of indices and index pointers. However, we know that there will be either 1 or 2 cells adjacent to each face. It is thus feasible to create a dense representation of the cell-face relations
Step17: Here, each column represent the cells of a face, and negative values signifies that the face is on the boundary. The cells are ordered so that the normal vector points from the cell in row 0 to row 1.
Finally, we note that to get a cell-node relation, we can combine cell_faces and face_nodes. However, since cell_faces contains both positive and negative values, we need to take the absolute value of the data (without modifying cell_faces directly, since we may want to use the divergence operator later). This procedure is implemented in the method cell_nodes(), which returns a sparse matrix that can be handled in the usual way
Step18: Simplex grids
PorePy has grid constructors for Cartesian grids in 2d and 3d, as well as simplex grids in 2d and 3d. The simplex grids can be specified either by point coordinates and a cell-node map (e.g. a Delaunay triangulation), or simply by the node coordinates. In the latter case, the Delaunay triangulation (or the 3d equivalent) will be used to construct the grid. As an example, we make a triangle grid using the nodes of g, distorting the y coordinate of the two central nodes slightly
Step19: A structured triangular grid (squares divided into two) is also provided
Step20: Import of grids from external meshing tools
Currently, PorePy supports import of grids from Gmsh. This is mostly used for fractured domains (tutorial still to be made).
The grid structure in PorePy is fairly general, and can support a much wider class of grids than those currently implemented. To import a new type of grid, all that is needed is to construct the face-node and cell-face maps, together with importing the node coordinates. Remaining geometric attributes can then be calculated by the compute_geometry() function.
When implementing such a filter, note that the geometry computation tacitly assumes an ordering of the nodes in each face, in the sense that the edges of the faces are found by joining subsequent nodes in the face-node list. To illustrate the danger, consider the following example | Python Code:
import numpy as np
import porepy as pp
nx = np.array([3, 2])
g = pp.CartGrid(nx)
Explanation: The porepy grid structure
In this tutorial we investigate the PorePy grid structure, and explain how to access information stored in the grid.
Basic grid construction
The simplest grids are Cartesian. PorePy can create Cartesian grids in 1d, 2d, and 3d. In fact, there are 0d point-grids as well, but these are only used in the context of multiple intersecting fractures. To create a 2d Cartesian grid
End of explanation
phys_dims = np.array([10, 10])
g_2 = pp.CartGrid(nx, phys_dims)
Explanation: The resulting cells will be of unit size, thus the grid covers the domain $[0, 3]\times [0,2]$. To specify the domain size, we need to pass a second argument
End of explanation
g.num_cells
g.num_faces
g.num_nodes
# And finally dimension
g.dim
Explanation: The grids currently only have node coordinates, together with topological information that we come back to below. To check the grid size, several attributes are provided
End of explanation
g.nodes
Explanation: The node coordinates are stored as
End of explanation
g_2.nodes
Explanation: and
End of explanation
g.compute_geometry()
print(g.cell_centers)
Explanation: As expected the second grid covers a larger area.
We also see that even though the grids are 2d, the nodes have three coordinates. This is general, all geometric quantities in porepy have three dimensions, even if they represent objects that are genuinely lower-dimensional. The reason is that for fractured media, we will often work with grids on fracture surfaces that are embedded in 3d domains, and treating this as special cases throughout the code turned out to be overly cumbersome. Also note that the third dimension was introduced automatically, so the user need not worry about this.
Geometric quantities
To compute additional geometric quantities, grids come with a method compute_geometry(), that will add attributes cell_centers, face_centers and face_normals:
End of explanation
g_2.compute_geometry()
print(g_2.cell_centers)
g_2.nodes[:2] = g_2.nodes[:2] + np.random.random((g_2.nodes[:2].shape))
g_2.compute_geometry()
print(g_2.cell_centers)
Explanation: And similar for face information. It is of course possible to set the geometric quantities manually. Be aware that a subsequent call to compute_geometry() will overwrite this information.
It is sometimes useful to consider grids with a Cartesian topology, but with perturbed geometry. This is simply achieved by perturbing the nodes, and then (re)-computing geometry:
End of explanation
%matplotlib inline
pp.plot_grid(g, figsize=(15,12))
Explanation: When perturbing nodes, make sure to limit the distortion so that the grid topology still is valid; if not, all kinds of problems may arise.
Visualization
porepy provides two ways of visualizing the grid, matplotlib and vtk/paraview. Matplotlib visualization is done by
End of explanation
cell_id = np.arange(g.num_cells)
pp.plot_grid(g, cell_value=cell_id, info='c', alpha=0.5, figsize=(15,12))
Explanation: As we see, the plot is in 3d, and the third axis adds noise to the plot. The matplotlib interface is most useful for quick visualization, e.g. during debugging. For instance, we can add cell numbers to the plot by writing
End of explanation
e = pp.Exporter(g, 'grid')
e.write_vtu()
Explanation: For further information, see the documentation of plot_grid.
The second visualization option dumps the grid to a vtu file:
End of explanation
g.face_nodes
Explanation: This file can then be accessed by e.g. paraview.
Topological information
In addition to storing coordinates of cells, faces and nodes, the grid object also keeps track of the relation between them. Specifically, we can access:
1. The relation between cells and faces
2. The relation between faces and nodes
3. The direction of face_normals, as in which of the neighboring cells has the normal vector as outwards pointing.
Note that there is no notion of edges for 3d grids. These are not usually needed for the type of numerical methods that are primarily of interest in porepy. The information can still be recovered from the face-node relations, see comments below.
The topological information is stored in two attributes, cell_faces and face_nodes. The latter has the simplest interpretation, so we start out with that one:
End of explanation
np.where(g.face_nodes[:, 0].toarray())[0]
Explanation: We see that the information is stored as a scipy.sparse matrix. From the shape of the matrix, we conclude that the rows represent nodes, while the faces are stored in columns. We can get the nodes for the first face by brute force by writing
End of explanation
g.face_nodes.indices[g.face_nodes.indptr[0] : g.face_nodes.indptr[1]]
Explanation: That was hardly elegant, though, and would make for cumbersome implementation of, say, a numerical method. A better approach is to utilize the csc storage format, and write
End of explanation
faces_of_cell_0 = g.cell_faces.indices[g.cell_faces.indptr[0] : g.cell_faces.indptr[1]]
print(faces_of_cell_0)
Explanation: To see why this works, confer the scipy.sparse documentation. Getting the faces of a node can be done by converting g.face_nodes to a csr_matrix, and then follow the above procedure.
The map between cells and faces is stored in the same way, thus the faces of cell 0 are found by
End of explanation
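As a small illustration of the csr route mentioned above (a sketch added here, not part of the original tutorial), the faces meeting in node 0 can be found by converting face_nodes to compressed sparse row format:
fn_csr = g.face_nodes.tocsr()
faces_of_node_0 = fn_csr.indices[fn_csr.indptr[0] : fn_csr.indptr[1]]
print(faces_of_node_0)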
g.cell_faces.data[g.cell_faces.indptr[0] : g.cell_faces.indptr[1]]
Explanation: However, cell_faces also keeps track of the direction of the normal vector relative to the neighboring cells, by storing data as $\pm 1$, or zero if there is no connection between the cells (in contrast, face_nodes simply consists of True and False values).
End of explanation
g.face_normals[:, faces_of_cell_0]
Explanation: Compare this with the face normal vectors
End of explanation
g.cell_face_as_dense()
Explanation: We observe that positive data corresponds to normal vector pointing out of the cell. This is a very useful feature, since it in effect means that the transpose of g.cell_faces is the discrete divergence operator for the grid.
As with the face-node relations, we can obtain the cells of a face by representing the matrix in a sparse row storage format, and then use the above procedure of indices and index pointers. However, we know that there will be either 1 or 2 cells adjacent to each face. It is thus feasible to create a dense representation of the cell-face relations:
End of explanation
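To make the divergence remark concrete, here is a hedged illustration (not from the original tutorial) that applies the transposed cell-face map to an arbitrary vector of face fluxes:
div = g.cell_faces.T              # discrete divergence: maps face quantities to cells
face_flux = np.ones(g.num_faces)  # an arbitrary flux value on every face
print(div * face_flux)            # net outflow per cell, signs follow the stored +/-1 data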
cn = g.cell_nodes()
cn.indices[cn.indptr[0] : cn.indptr[1]]
Explanation: Here, each column represents the cells of a face, and negative values signify that the face is on the boundary. The cells are ordered so that the normal vector points from the cell in row 0 to row 1.
Finally, we note that to get a cell-node relation, we can combine cell_faces and face_nodes. However, since cell_faces contains both positive and negative values, we need to take the absolute value of the data (without modifying cell_faces directly, since we may want to use the divergence operator later). This procedure is implemented in the method cell_nodes(), which returns a sparse matrix that can be handled in the usual way
End of explanation
nodes = g.nodes[:2]
nodes[1, 5:7] = np.array([1.2, 0.8])
g = pp.TriangleGrid(nodes)
g.compute_geometry()
pp.plot_grid(g, figsize=(15,12))
Explanation: Simplex grids
PorePy has grid constructors for Cartesian grids in 2d and 3d, as well as simplex grids in 2d and 3d. The simplex grids can be specified either by point coordinates and a cell-node map (e.g. a Delaunay triangulation), or simply by the node coordinates. In the latter case, the Delaunay triangulation (or the 3d equivalent) will be used to construct the grid. As an example, we make a triangle grid using the nodes of g, distorting the y coordinate of the two central nodes slightly:
End of explanation
g = pp.StructuredTriangleGrid(np.array([3, 4]))
g.compute_geometry()
pp.plot_grid(g, figsize=(15,12))
Explanation: A structured triangular grid (squares divided into two) is also provided:
End of explanation
g = pp.CartGrid([1, 1])
# Move the second node so that the implicit edges are intersecting
g.nodes[0, 1] = 0.5
g.nodes[1, 1] = 2
g.compute_geometry()
# Print the cell volume, as computed
print('Cell volume ' + str(g.cell_volumes[0]))
# A short calculation will show that the actual cell volume is 1.
pp.plot_grid(g, figsize=(15,12))
Explanation: Import of grids from external meshing tools
Currently, PorePy supports import of grids from Gmsh. This is mostly used for fractured domains (tutorial still to be made).
The grid structure in PorePy is fairly general, and can support a much wider class of grids than those currently implemented. To import a new type of grid, all that is needed is to construct the face-node and cell-face maps, together with importing the node coordinates. Remaining geometric attributes can then be calculated by the compute_geometry() function.
When implementing such a filter, note that the geometry computation tacitly assumes an ordering of the nodes in each face, in the sense that the edges of the faces are found by joining subsequent nodes in the face-node list. To illustrate the danger, consider the following example
End of explanation |
3,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1
Learn how to use tensorflow basic concepts and variables
First start to learn about the graph structure, tensorflow is built upon the nodes
Step1: Next, we are going to show how to feed data as the parameters
Step2: At last, we are going to learn how to use variable, unlike the placeholder | Python Code:
# basic imported headers
import tensorflow as tf
# Second, learn how to fetch data from the result of running several ops at once
input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
intermd = tf.add(input1, input2)
mult = tf.multiply(input3, intermd)
with tf.Session() as sess:
result = sess.run([mult, intermd])
    print(result)
# Create a constant op and adds as a node into the default graph
matrix1 = tf.constant([[3., 3.]])
## Pay attention to this wrong one, DIMENSION
# matrix1 = tf.constant([3., 3.])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)
with tf.Session() as sess:
    print(sess.run(product))
Explanation: Exercise 1
Learn how to use TensorFlow's basic concepts and variables.
First we start with the graph structure: TensorFlow programs are built as a graph of nodes (ops).
End of explanation
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
print (sess.run([output], feed_dict={input1:[7.], input2:[2.]}))
Explanation: Next, we are going to show how to feed data as the parameters
End of explanation
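As a small, hedged extension of the same feeding mechanism (the placeholder x below is introduced only for this illustration), a placeholder can also be given a fixed shape and receive a whole batch at once:
x = tf.placeholder(tf.float32, shape=[None, 2])  # any number of rows, two columns each
doubled = tf.multiply(x, 2.0)
with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={x: [[1., 2.], [3., 4.]]}))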
state = tf.Variable(0, name="counter")
one = tf.constant(1)
new_value = tf.add(state, one)
#define the op, or rule to update/assign value
update = tf.assign(state, new_value)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
# print the initial state of state
    print(sess.run(state))
    # use a loop to update and print the state iteratively
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))
Explanation: At last, we are going to learn how to use a variable. Unlike a placeholder, a variable keeps its state in the graph between calls to run() and must be initialized before use.
End of explanation |
3,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing source space SNR
This example shows how to compute and plot source space SNR as in
Step1: EEG
Next we do the same for EEG and plot the result on the cortex | Python Code:
# Author: Padma Sundaram <[email protected]>
# Kaisu Lankinen <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR
maxidx = np.argmax(ave)
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
Explanation: Computing source space SNR
This example shows how to compute and plot source space SNR as in
:footcite:GoldenholzEtAl2009.
End of explanation
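As a hedged aside (not part of the original example), the peak value at that latency can be pulled directly out of the SNR source estimate computed above:
peak_snr = snr_stc.data[:, maxidx].max()
print('Peak source-space SNR: %0.1f (same units as the plot above) at t = %0.3f s'
      % (peak_snr, evoked.times[maxidx]))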
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
Explanation: EEG
Next we do the same for EEG and plot the result on the cortex:
End of explanation |
3,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting a Deterministic <span style="font-variant
Step1: The function regexp_sum takes a set $S = { r_1, \cdots, r_n }$ of regular expressions
as its argument. It returns the regular expression
$$ r_1 + \cdots + r_n. $$
Step2: The function rpq assumes there is some <span style="font-variant
Step3: The function dfa_2_regexp takes a deterministic <span style="font-variant | Python Code:
def arb(S):
for x in S:
return x
Explanation: Converting a Deterministic <span style="font-variant:small-caps;">Fsm</span> into a Regular Expression
Given a set S, the function arb(S) returns an arbitrary member from S.
End of explanation
def regexp_sum(S):
n = len(S)
if n == 0:
return 0
elif n == 1:
return arb(S)
else:
r = arb(S)
return ('+', r, regexp_sum(S - { r }))
Explanation: The function regexp_sum takes a set $S = \{ r_1, \cdots, r_n \}$ of regular expressions
as its argument. It returns the regular expression
$$ r_1 + \cdots + r_n. $$
End of explanation
def rpq(p1, p2, Σ, 𝛿, Allowed):
if Allowed == set():
AllChars = { c for c in Σ
if 𝛿.get((p1, c)) == p2
}
r = regexp_sum(AllChars)
if p1 == p2:
if AllChars == set():
return ''
else:
return ('+', '', r)
else:
return r
else:
q = arb(Allowed)
RestAllowed = Allowed - { q }
rp1p2 = rpq(p1, p2, Σ, 𝛿, RestAllowed)
rp1q = rpq(p1, q, Σ, 𝛿, RestAllowed)
rqq = rpq( q, q, Σ, 𝛿, RestAllowed)
rqp2 = rpq( q, p2, Σ, 𝛿, RestAllowed)
return ('+', rp1p2, ('&', ('&', rp1q, ('*', rqq)), rqp2))
Explanation: The function rpq assumes there is some <span style="font-variant:small-caps;">Fsm</span>
$$ F = \langle \texttt{States}, \Sigma, \delta, \texttt{q0}, \texttt{Accepting} \rangle $$
given and takes five arguments:
- p1 and p2 are states of the <span style="font-variant:small-caps;">Fsm</span> $F$,
- $\Sigma$ is the alphabet of the <span style="font-variant:small-caps;">Fsm</span>,
- $\delta$ is the transition function of the <span style="font-variant:small-caps;">Fsm</span> $F$, and
- Allowed is a subset of the set States.
The function rpq computes a regular expression that describes those strings that take the
<span style="font-variant:small-caps;">Fsm</span> $F$ from the state p1 to state p2.
When $F$ switches states from p1 to p2 only states in the set Allowed may be visited in-between the states p1 and p2.
The function is defined by recursion on the set Allowed. There are two cases
- $\texttt{Allowed} = \{\}$.
Define AllChars as the set of all characters that, when read by $F$ in the state p1, cause $F$ to enter the state p2:
$$ \texttt{AllChars} = \{ c \in \Sigma \mid \delta(p_1, c) = p_2 \} $$
Then we need a further case distinction:
- $p_1 = p_2$: In this case we have:
$$ \texttt{rpq}(p_1, p_2, \{\}) := \sum\limits_{c\in\texttt{AllChars}} c \; + \; \varepsilon$$
If $\texttt{AllChars} = \{\}$ the sum $\sum\limits_{c\in\texttt{AllChars}} c$ is to be interpreted as the
regular expression $\emptyset$ that denotes the empty language.
Otherwise, if $\texttt{AllChars} = \{c_1,\cdots,c_n\}$ we have
$\sum\limits_{c\in\texttt{AllChars}} c \quad = c_1 + \cdots + c_n$.
- $p_1 \not= p_2$: In this case we have:
$$ \texttt{rpq}(p_1, p_2, \{\}) := \sum\limits_{c\in\texttt{AllChars}} c $$
- $\texttt{Allowed} = \{ q \} + \texttt{RestAllowed}$. In this case we recursively define the following variables:
$\texttt{rp1p2} := \texttt{rpq}(p_1, p_2, \Sigma, \delta, \texttt{RestAllowed})$,
$\texttt{rp1q } := \texttt{rpq}(p_1, q, \Sigma, \delta, \texttt{RestAllowed})$,
$\texttt{rqq }\texttt{ } := \texttt{rpq}(q, q, \Sigma, \delta, \texttt{RestAllowed})$,
$\texttt{rqp2 } := \texttt{rpq}(q, p_2, \Sigma, \delta, \texttt{RestAllowed})$.
Then we can define:
$$ \texttt{rpq}(p_1, p_2, \texttt{Allowed}) := \texttt{rp1p2} + \texttt{rp1q} \cdot \texttt{rqq}^* \cdot \texttt{rqp2} $$
This formula can be understood as follows: If a string $w$ is read in state $p_1$ and reading this string takes the
<span style="font-variant:small-caps;">Fsm</span> $F$ from the state $p_1$ to the state $p_2$ without visiting any state from the set
Allowed in-between, then there are two cases:
- Reading $w$ does not visit the state $q$ in-between. Hence the string $w$ can be described by the regular expression
rp1p2.
- The string $w$ can be written as $w = t u_1 \cdots u_n v$ where:
- reading $t$ in the state $p_1$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ into the state $q$,
- for all $i \in \{1,\cdots,n\}$ reading $u_i$ in the state $q$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ from $q$ to $q$, and
- reading $v$ in the state $q$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ into the state $p_2$.
End of explanation
def dfa_2_regexp(F):
States, Σ, 𝛿, q0, Accepting = F
r = regexp_sum({ rpq(q0, p, Σ, 𝛿, States) for p in Accepting })
return r
Explanation: The function dfa_2_regexp takes a deterministic <span style="font-variant:small-caps;">Fsm</span> $F$ and computes a regular expression $r$ that describes the same language as $F$, i.e. we have
$$ L(F) = L(r). $$
Furthermore, it tries to simplify the regular expression $r$ using some algebraic rules.
End of explanation |
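# Usage sketch (added for illustration, not a cell from the original notebook):
# the DFA below is a made-up example over the alphabet {a, b} that accepts
# exactly the strings ending in 'a'.
States = { 'q0', 'q1' }
Sigma  = { 'a', 'b' }
delta  = { ('q0', 'a'): 'q1', ('q0', 'b'): 'q0',
           ('q1', 'a'): 'q1', ('q1', 'b'): 'q0' }
F = (States, Sigma, delta, 'q0', { 'q1' })
print(dfa_2_regexp(F))   # a nested tuple built from '+', '&', '*' and ''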
3,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 56 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined using the MRE metric (and/or the R^2 metric). The step for the rolling validation will be chosen as a compromise between having enough points and the time needed to compute the validation.
Step1: Let's get the data.
Step2: Let's find the best params set for some different models
- Dummy Predictor (mean)
Step3: - Linear Predictor
Step4: - Random Forest model | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
import utils.misc as misc
AHEAD_DAYS = 56
Explanation: In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 56 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined using the MRE metric (and/or the R^2 metric). The step for the rolling validation will be chosen as a compromise between having enough points and the time needed to compute the validation.
End of explanation
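# Illustrative sketch only (not part of the original project): how rolling
# train/validation pairs with a fixed step could be generated over a date
# index.  The real splitting logic lives in the project's helper modules
# (utils.misc, predictor.feature_extraction), which are not shown here.
def rolling_pairs(dates, train_days, ahead_days, step_eval_days):
    start = 0
    while start + train_days + ahead_days <= len(dates):
        train_dates = dates[start:start + train_days]
        target_date = dates[start + train_days + ahead_days - 1]
        yield train_dates, target_date
        start += step_eval_days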
datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl')
print(datasets_params_list_df.shape)
datasets_params_list_df.head()
train_days_arr = 252 * np.array([1, 2, 3])
params_list_df = pd.DataFrame()
for train_days in train_days_arr:
temp_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == AHEAD_DAYS].copy()
temp_df['train_days'] = train_days
params_list_df = params_list_df.append(temp_df, ignore_index=True)
print(params_list_df.shape)
params_list_df.head()
Explanation: Let's get the data.
End of explanation
tic = time()
from predictor.dummy_mean_predictor import DummyPredictor
PREDICTOR_NAME = 'dummy'
# Global variables
eval_predictor = DummyPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
Explanation: Let's find the best params set for some different models
- Dummy Predictor (mean)
End of explanation
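# The helper misc.parallelize_dataframe used above is not shown in this
# notebook.  A common shape for such a helper (an assumption, not the
# project's actual code) splits the parameter table across worker processes,
# applies the evaluation function to each chunk, and concatenates the results.
from multiprocessing import Pool
def parallelize_dataframe_sketch(df, func, params, n_jobs=4):
    chunks = np.array_split(df, n_jobs)
    with Pool(n_jobs) as pool:
        results = pool.starmap(func, [(chunk, params) for chunk in chunks])
    return pd.concat(results, ignore_index=True)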
tic = time()
from predictor.linear_predictor import LinearPredictor
PREDICTOR_NAME = 'linear'
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
Explanation: - Linear Predictor
End of explanation
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor = RandomForestPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
Explanation: - Random Forest model
End of explanation |
3,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../images/qiskit-heading.gif" alt="Note
Step1: Single Qubit Quantum states
A single qubit quantum state can be written as
$$|\psi\rangle = \alpha|0\rangle + \beta |1\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement the probability of the bit being in $|0\rangle$ is $|\alpha|^2$ and $|1\rangle$ is $|\beta|^2$. As a vector this is
$$
|\psi\rangle =
\begin{pmatrix}
\alpha \
\beta
\end{pmatrix}.
$$
Note that due to conservation of probability $|\alpha|^2+ |\beta|^2 = 1$ and since global phase is undetectable $|\psi\rangle := e^{i\delta} |\psi\rangle$, we only require two real numbers to describe a single qubit quantum state.
Step2: u gates
In Qiskit we give you access to the general unitary using the $u3$ gate
$$
u3(\theta, \phi, \lambda) = U(\theta, \phi, \lambda)
$$
Step3: The $u2(\phi, \lambda) =u3(\pi/2, \phi, \lambda)$ has the matrix form
$$
u2(\phi, \lambda) =
\frac{1}{\sqrt{2}} \begin{pmatrix}
1 & -e^{i\lambda} \
e^{i\phi} & e^{i(\phi + \lambda)}
\end{pmatrix}.
$$
This is a useful gate as it allows us to create superpositions
Step4: The $u1(\lambda)= u3(0, 0, \lambda)$ gate has the matrix form
$$
u1(\lambda) =
\begin{pmatrix}
1 & 0 \
0 & e^{i \lambda}
\end{pmatrix},
$$
which is useful as it allows us to apply a quantum phase.
Step5: The $u0(\delta)= u3(0, 0, 0)$ gate is the identity matrix. It has the matrix form
$$
u0(\delta) =
\begin{pmatrix}
1 & 0 \
0 & 1
\end{pmatrix}.
$$
The identity gate does nothing (but can add noise in the real device for a period of time equal to fractions of the single qubit gate time)
Step6: Identity gate
The identity gate is $Id = u0(1)$.
Step7: Pauli gates
$X$: bit-flip gate
Step8: $Y$: bit- and phase-flip gate
Step9: $Z$: phase-flip gate
Step10: Clifford gates
Hadamard gate
$$
H =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\
1 & -1
\end{pmatrix}= u2(0,\pi)
$$
Step11: $S$ (or, $\sqrt{Z}$ phase) gate
$$
S =
\begin{pmatrix}
1 & 0\
0 & i
\end{pmatrix}= u1(\pi/2)
$$
Step12: $S^{\dagger}$ (or, conjugate of $\sqrt{Z}$ phase) gate
$$
S^{\dagger} =
\begin{pmatrix}
1 & 0\
0 & -i
\end{pmatrix}= u1(-\pi/2)
$$
Step13: $C3$ gates
$T$ (or, $\sqrt{S}$ phase) gate
$$
T =
\begin{pmatrix}
1 & 0\
0 & e^{i \pi/4}
\end{pmatrix}= u1(\pi/4)
$$
Step14: $T^{\dagger}$ (or, conjugate of $\sqrt{S}$ phase) gate
$$
T^{\dagger} =
\begin{pmatrix}
1 & 0\
0 & e^{-i \pi/4}
\end{pmatrix}= u1(-\pi/4)
$$
They can be added as below.
Step15: Standard Rotations
The standard rotation gates are those that define rotations around the Paulis $P={X,Y,Z}$. They are defined as
$$ R_P(\theta) = \exp(-i \theta P/2) = \cos(\theta/2)I -i \sin(\theta/2)P$$
Rotation around X-axis
$$
R_x(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix} = u3(\theta, -\pi/2,\pi/2)
$$
Step16: Rotation around Y-axis
$$
R_y(\theta) =
\begin{pmatrix}
\cos(\theta/2) & - \sin(\theta/2)\
\sin(\theta/2) & \cos(\theta/2).
\end{pmatrix} =u3(\theta,0,0)
$$
Step17: Rotation around Z-axis
$$
R_z(\phi) =
\begin{pmatrix}
e^{-i \phi/2} & 0 \
0 & e^{i \phi/2}
\end{pmatrix}\equiv u1(\phi)
$$
Note that we have used $\equiv$ here because this matrix differs from u1 only by the global phase $e^{-i \phi/2}$.
Step18: Note this is different due only to a global phase
Multi-Qubit Gates
Mathematical Preliminaries
The space of a quantum computer grows exponentially with the number of qubits. For $n$ qubits the complex vector space has dimension $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to "glue together" operators and basis vectors.
Let's start by considering a 2-qubit system. Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \otimes B$ acting on two qubits is
$$\begin{equation}
A\otimes B =
\begin{pmatrix}
A_{00} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} & A_{01} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} \
A_{10} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} & A_{11} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix}
\end{pmatrix},
\end{equation}$$
where $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.
Analogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit
Step19: Controlled Pauli Gates
Controlled-X (or, controlled-NOT) gate
The controlled-not gate flips the target qubit when the control qubit is in the state $|1\rangle$. If we take the MSB as the control qubit (e.g. cx(q[1],q[0])), then the matrix would look like
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & 1\
0 & 0 & 1 & 0
\end{pmatrix}.
$$
However, when the LSB is the control qubit, (e.g. cx(q[0],q[1])), this gate is equivalent to the following matrix
Step20: Controlled $Y$ gate
Apply the $Y$ gate to the target qubit if the control qubit is the MSB
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & -i\
0 & 0 & i & 0
\end{pmatrix},
$$
or when the LSB is the control
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 0 & 0 & -i\
0 & 0 & 1 & 0\
0 & i & 0 & 0
\end{pmatrix}.
$$
Step21: Controlled $Z$ (or, controlled Phase) gate
Similarly, the controlled Z gate flips the phase of the target qubit if the control qubit is $1$. The matrix looks the same regardless of whether the MSB or LSB is the control qubit
Step22: Controlled Hadamard gate
Apply $H$ gate to the target qubit if the control qubit is $|1\rangle$. Below is the case where the control is the LSB qubit.
$$
C_H =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\
0 & 0 & 1 & 0\
0 & \frac{1}{\sqrt{2}} & 0& -\frac{1}{\sqrt{2}}
\end{pmatrix}
$$
Step23: Controlled rotation gates
Controlled rotation around Z-axis
Perform rotation around Z-axis on the target qubit if the control qubit (here LSB) is $|1\rangle$.
$$
C_{Rz}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & e^{-i\lambda/2} & 0 & 0\
0 & 0 & 1 & 0\
0 & 0 & 0 & e^{i\lambda/2}
\end{pmatrix}
$$
Step24: Controlled phase rotation
Perform a phase rotation if both qubits are in the $|11\rangle$ state. The matrix looks the same regardless of whether the MSB or LSB is the control qubit.
$$
C_{u1}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 1 & 0\
0 & 0 & 0 & e^{i\lambda}
\end{pmatrix}
$$
Step25: I THINK SHOULD BE CALLED $C_\mathrm{PHASE}(\lambda)$
Step26: Controlled $u3$ rotation
Perform controlled-$u3$ rotation on the target qubit if the control qubit (here LSB) is $|1\rangle$.
$$
C_{u3}(\theta, \phi, \lambda) \equiv
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & e^{-i(\phi+\lambda)/2}\cos(\theta/2) & 0 & -e^{-i(\phi-\lambda)/2}\sin(\theta/2)\
0 & 0 & 1 & 0\
0 & e^{i(\phi-\lambda)/2}\sin(\theta/2) & 0 & e^{i(\phi+\lambda)/2}\cos(\theta/2)
\end{pmatrix}.
$$
Step27: NOTE I NEED TO FIX THIS AND DECIDE ON CONVENTION - I ACTUALLY THINK WE WANT A FOUR PARAMETER GATE AND JUST CALL IT CU AND TO REMOVE THIS GATE.
SWAP gate
The SWAP gate exchanges the two qubits. It transforms the basis vectors as
$$|00\rangle \rightarrow |00\rangle~,~|01\rangle \rightarrow |10\rangle~,~|10\rangle \rightarrow |01\rangle~,~|11\rangle \rightarrow |11\rangle,$$
which gives a matrix representation of the form
$$
\mathrm{SWAP} =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 0 & 1 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
Step28: Three-qubit gates
There are two commonly-used three-qubit gates. For three qubits, the basis vectors are ordered as
$$|000\rangle, |001\rangle, |010\rangle, |011\rangle, |100\rangle, |101\rangle, |110\rangle, |111\rangle,$$
which, as bitstrings, represent the integers $0,1,2,\cdots, 7$. Again, Qiskit uses a representation in which the first qubit is on the right-most side of the tensor product and the third qubit is on the left-most side
Step29: Controlled swap gate (Fredkin Gate)
The Fredkin gate, or the controlled swap gate, exchanges the second and third qubits if the first qubit (LSB) is $|1\rangle$
Step30: Non unitary operations
Now that we have gone through all the unitary operations in quantum circuits, we also have access to non-unitary operations. These include measurements, reset of qubits, and classical conditional operations.
Step31: Measurements
We don't have access to all the information when we make a measurement in a quantum computer. The quantum state is projected onto the standard basis. Below are two examples showing a circuit that is prepared in a basis state and the quantum computer prepared in a superposition state.
Step32: The simulator predicts that 100 percent of the time the classical register returns 0.
Step33: The simulator predicts that 50 percent of the time the classical register returns 0 or 1.
Reset
It is also possible to reset qubits to the $|0\rangle$ state in the middle of computation. Note that reset is not a Gate operation, since it is irreversible.
Step34: Here we see that for both of these circuits the simulator always predicts that the output is 100 percent in the 0 state.
Conditional operations
It is also possible to do operations conditioned on the state of the classical register
Step35: Here the classical bit always takes the value 0 so the qubit state is always flipped.
Step36: Here the classical bit by the first measurement is random but the conditional operation results in the qubit being deterministically put into $|1\rangle$.
Arbitrary initialization
What if we want to initialize a qubit register to an arbitrary state? An arbitrary state for $n$ qubits may be specified by a vector of $2^n$ amplitudes, where the sum of amplitude-norms-squared equals 1. For example, the following three-qubit state can be prepared
Step37: Fidelity is useful to check whether two states are same or not.
For quantum (pure) states $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$, the fidelity is
$$
F\left(\left|\psi_1\right\rangle,\left|\psi_2\right\rangle\right) = \left|\left\langle\psi_1\middle|\psi_2\right\rangle\right|^2.
$$
The fidelity is equal to $1$ if and only if two states are same. | Python Code:
# Useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from math import pi
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import available_backends, execute, register, get_backend
from qiskit.tools.visualization import circuit_drawer
from qiskit.tools.qi.qi import state_fidelity
from qiskit import Aer
backend = Aer.get_backend('unitary_simulator')
Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Summary of Quantum Operations
In this section we will go into the different operations that are available in Qiskit Terra. These are:
- Single-qubit quantum gates
- Multi-qubit quantum gates
- Measurements
- Reset
- Conditionals
- State initialization
We will also show you how to use the three different simulators:
- unitary_simulator
- qasm_simulator
- statevector_simulator
End of explanation
q = QuantumRegister(1)
Explanation: Single Qubit Quantum states
A single qubit quantum state can be written as
$$|\psi\rangle = \alpha|0\rangle + \beta |1\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement the probability of the bit being in $|0\rangle$ is $|\alpha|^2$ and $|1\rangle$ is $|\beta|^2$. As a vector this is
$$
|\psi\rangle =
\begin{pmatrix}
\alpha \
\beta
\end{pmatrix}.
$$
Note that due to conservation of probability $|\alpha|^2+ |\beta|^2 = 1$ and since global phase is undetectable $|\psi\rangle := e^{i\delta} |\psi\rangle$, we only require two real numbers to describe a single qubit quantum state.
A convenient representation is
$$|\psi\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)e^{i\phi}|1\rangle$$
where $0\leq \phi < 2\pi$, and $0\leq \theta \leq \pi$. From this it is clear that there is a one-to-one correspondence between qubit states ($\mathbb{C}^2$) and the points on the surface of a unit sphere ($\mathbb{R}^3$). This is called the Bloch sphere representation of a qubit state.
Quantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\times 2$ unitary matrix $U$. The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.
$$|\psi'\rangle = U|\psi\rangle$$
A general unitary must be able to take the $|0\rangle$ to the above state. That is
$$
U = \begin{pmatrix}
\cos(\theta/2) & a \
e^{i\phi}\sin(\theta/2) & b
\end{pmatrix}
$$
where $a$ and $b$ are complex numbers constrained such that $U^\dagger U = I$ for all $0\leq\theta\leq\pi$ and $0\leq \phi<2\pi$. This gives 3 constraints and as such $a\rightarrow -e^{i\lambda}\sin(\theta/2)$ and $b\rightarrow e^{i\lambda+i\phi}\cos(\theta/2)$ where $0\leq \lambda<2\pi$ giving
$$
U = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{pmatrix}.
$$
This is the most general form of a single qubit unitary.
Single-Qubit Gates
The single-qubit gates available are:
- u gates
- Identity gate
- Pauli gates
- Clifford gates
- $C3$ gates
- Standard rotation gates
We have provided a backend: unitary_simulator to allow you to calculate the unitary matrices.
End of explanation
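# Quick numerical sanity check of the general single-qubit unitary above
# (added for illustration, not a cell from the original notebook).
def u3_matrix(theta, phi, lam):
    return np.array([[np.cos(theta/2), -np.exp(1j*lam)*np.sin(theta/2)],
                     [np.exp(1j*phi)*np.sin(theta/2), np.exp(1j*(lam+phi))*np.cos(theta/2)]])
U = u3_matrix(1.2, 0.4, 2.0)                      # arbitrary test angles
np.allclose(U.conj().T @ U, np.eye(2))            # True: U is unitary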
qc = QuantumCircuit(q)
qc.u3(pi/2,pi/2,pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: u gates
In Qiskit we give you access to the general unitary using the $u3$ gate
$$
u3(\theta, \phi, \lambda) = U(\theta, \phi, \lambda)
$$
End of explanation
qc = QuantumCircuit(q)
qc.u2(pi/2,pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: The $u2(\phi, \lambda) =u3(\pi/2, \phi, \lambda)$ has the matrix form
$$
u2(\phi, \lambda) =
\frac{1}{\sqrt{2}} \begin{pmatrix}
1 & -e^{i\lambda} \
e^{i\phi} & e^{i(\phi + \lambda)}
\end{pmatrix}.
$$
This is a useful gate as it allows us to create superpositions
End of explanation
qc = QuantumCircuit(q)
qc.u1(pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: The $u1(\lambda)= u3(0, 0, \lambda)$ gate has the matrix form
$$
u1(\lambda) =
\begin{pmatrix}
1 & 0 \
0 & e^{i \lambda}
\end{pmatrix},
$$
which is useful as it allows us to apply a quantum phase.
End of explanation
qc = QuantumCircuit(q)
qc.u0(pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: The $u0(\delta)= u3(0, 0, 0)$ gate is the identity matrix. It has the matrix form
$$
u0(\delta) =
\begin{pmatrix}
1 & 0 \
0 & 1
\end{pmatrix}.
$$
The identity gate does nothing (but can add noise in the real device for a period of time equal to fractions of the single qubit gate time)
End of explanation
qc = QuantumCircuit(q)
qc.iden(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Identity gate
The identity gate is $Id = u0(1)$.
End of explanation
qc = QuantumCircuit(q)
qc.x(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Pauli gates
$X$: bit-flip gate
The bit-flip gate $X$ is defined as:
$$
X =
\begin{pmatrix}
0 & 1\
1 & 0
\end{pmatrix}= u3(\pi,0,\pi)
$$
End of explanation
qc = QuantumCircuit(q)
qc.y(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $Y$: bit- and phase-flip gate
The $Y$ gate is defined as:
$$
Y =
\begin{pmatrix}
0 & -i\
i & 0
\end{pmatrix}=u3(\pi,\pi/2,\pi/2)
$$
End of explanation
qc = QuantumCircuit(q)
qc.z(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $Z$: phase-flip gate
The phase flip gate $Z$ is defined as:
$$
Z =
\begin{pmatrix}
1 & 0\
0 & -1
\end{pmatrix}=u1(\pi)
$$
End of explanation
qc = QuantumCircuit(q)
qc.h(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Clifford gates
Hadamard gate
$$
H =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\
1 & -1
\end{pmatrix}= u2(0,\pi)
$$
End of explanation
qc = QuantumCircuit(q)
qc.s(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $S$ (or, $\sqrt{Z}$ phase) gate
$$
S =
\begin{pmatrix}
1 & 0\
0 & i
\end{pmatrix}= u1(\pi/2)
$$
End of explanation
qc = QuantumCircuit(q)
qc.sdg(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $S^{\dagger}$ (or, conjugate of $\sqrt{Z}$ phase) gate
$$
S^{\dagger} =
\begin{pmatrix}
1 & 0\
0 & -i
\end{pmatrix}= u1(-\pi/2)
$$
End of explanation
qc = QuantumCircuit(q)
qc.t(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $C3$ gates
$T$ (or, $\sqrt{S}$ phase) gate
$$
T =
\begin{pmatrix}
1 & 0\
0 & e^{i \pi/4}
\end{pmatrix}= u1(\pi/4)
$$
End of explanation
qc = QuantumCircuit(q)
qc.tdg(q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: $T^{\dagger}$ (or, conjugate of $\sqrt{S}$ phase) gate
$$
T^{\dagger} =
\begin{pmatrix}
1 & 0\
0 & e^{-i \pi/4}
\end{pmatrix}= u1(-\pi/4)
$$
They can be added as below.
End of explanation
qc = QuantumCircuit(q)
qc.rx(pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Standard Rotations
The standard rotation gates are those that define rotations around the Paulis $P={X,Y,Z}$. They are defined as
$$ R_P(\theta) = \exp(-i \theta P/2) = \cos(\theta/2)I -i \sin(\theta/2)P$$
Rotation around X-axis
$$
R_x(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix} = u3(\theta, -\pi/2,\pi/2)
$$
End of explanation
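# Numerical check of the identity R_P(theta) = cos(theta/2) I - i sin(theta/2) P
# stated above (added for illustration, not a cell from the original notebook;
# assumes scipy is available for the matrix exponential).
from scipy.linalg import expm
X_mat = np.array([[0, 1], [1, 0]])
theta = 0.7                                       # arbitrary angle
np.allclose(expm(-1j*theta*X_mat/2),
            np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*X_mat)   # True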
qc = QuantumCircuit(q)
qc.ry(pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Rotation around Y-axis
$$
R_y(\theta) =
\begin{pmatrix}
\cos(\theta/2) & - \sin(\theta/2)\
\sin(\theta/2) & \cos(\theta/2).
\end{pmatrix} =u3(\theta,0,0)
$$
End of explanation
qc = QuantumCircuit(q)
qc.rz(pi/2,q)
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Rotation around Z-axis
$$
R_z(\phi) =
\begin{pmatrix}
e^{-i \phi/2} & 0 \
0 & e^{i \phi/2}
\end{pmatrix}\equiv u1(\phi)
$$
Note that we have used $\equiv$ here because $R_z(\phi)$ differs from u1($\phi$) only by the global phase $e^{-i \phi/2}$.
End of explanation
q = QuantumRegister(2)
Explanation: Note this is different due only to a global phase
Multi-Qubit Gates
Mathematical Preliminaries
The space of a quantum computer grows exponentially with the number of qubits. For $n$ qubits the complex vector space has dimension $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to "glue together" operators and basis vectors.
Let's start by considering a 2-qubit system. Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \otimes B$ acting on two qubits is
$$\begin{equation}
A\otimes B =
\begin{pmatrix}
A_{00} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} & A_{01} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} \
A_{10} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix} & A_{11} \begin{pmatrix}
B_{00} & B_{01} \
B_{10} & B_{11}
\end{pmatrix}
\end{pmatrix},
\end{equation}$$
where $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.
Analogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:
$$\begin{equation}\begin{split}
|{00}\rangle &= \begin{pmatrix}
1 \begin{pmatrix}
1 \
0
\end{pmatrix} \
0 \begin{pmatrix}
1 \
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \0 \end{pmatrix}~~~|{01}\rangle = \begin{pmatrix}
1 \begin{pmatrix}
0 \
1
\end{pmatrix} \
0 \begin{pmatrix}
0 \
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix}0 \ 1 \ 0 \ 0 \end{pmatrix}\end{split}
\end{equation}$$
$$\begin{equation}\begin{split}|{10}\rangle = \begin{pmatrix}
0\begin{pmatrix}
1 \
0
\end{pmatrix} \
1\begin{pmatrix}
1 \
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix}~~~ |{11}\rangle = \begin{pmatrix}
0 \begin{pmatrix}
0 \
1
\end{pmatrix} \
1\begin{pmatrix}
0 \
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \1 \end{pmatrix}\end{split}
\end{equation}.$$
Note we've introduced a shorthand for the tensor product of basis vectors, wherein $|0\rangle \otimes |0\rangle$ is written as $|00\rangle$. The state of an $n$-qubit system can be described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit system are $2^{n}$-dimensional, as noted earlier.
Basis vector ordering in Qiskit
Within the physics community, the qubits of a multi-qubit system are typically ordered with the first qubit on the left-most side of the tensor product and the last qubit on the right-most side. For instance, if the first qubit is in state $|0\rangle$ and the second is in state $|1\rangle$, their joint state would be $|01\rangle$. Qiskit uses a slightly different ordering of the qubits, in which the qubits are represented from the most significant bit (MSB) on the left to the least significant bit (LSB) on the right (big-endian). This is similar to bitstring representation on classical computers, and enables easy conversion from bitstrings to integers after measurements are performed. For the example just given, the joint state would be represented as $|10\rangle$. Importantly, this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit, as discussed below.
The representation used in Qiskit enumerates the basis vectors in increasing order of the integers they represent. For instance, the basis vectors for a 2-qubit system would be ordered as $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. Thinking of the basis vectors as bit strings, they encode the integers 0,1,2 and 3, respectively.
Controlled operations on qubits
A common multi-qubit gate involves the application of a gate to one qubit, conditioned on the state of another qubit. For instance, we might want to flip the state of the second qubit when the first qubit is in $|0\rangle$. Such gates are known as controlled gates. The standard multi-qubit gates consist of two-qubit gates and three-qubit gates. The two-qubit gates are:
- controlled Pauli gates
- controlled Hadamard gate
- controlled rotation gates
- controlled phase gate
- controlled u3 gate
- swap gate
The three-qubit gates are:
- Toffoli gate
- Fredkin gate
Two-qubit gates
Most of the two-gates are of the controlled type (the SWAP gate being the exception). In general, a controlled two-qubit gate $C_{U}$ acts to apply the single-qubit unitary $U$ to the second qubit when the state of the first qubit is in $|1\rangle$. Suppose $U$ has a matrix representation
$$U = \begin{pmatrix} u_{00} & u_{01} \ u_{10} & u_{11}\end{pmatrix}.$$
We can work out the action of $C_{U}$ as follows. Recall that the basis vectors for a two-qubit system are ordered as $|00\rangle, |01\rangle, |10\rangle, |11\rangle$. Suppose the control qubit is qubit 0 (which, according to Qiskit's convention, is on the right-hand side of the tensor product). If the control qubit is in $|1\rangle$, $U$ should be applied to the target (qubit 1, on the left-hand side of the tensor product). Therefore, under the action of $C_{U}$, the basis vectors are transformed according to
$$\begin{align}
C_{U}: \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle} &\rightarrow \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle}\
C_{U}: \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle} &\rightarrow \underset{\text{qubit}~1}{U|0\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle}\
C_{U}: \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle} &\rightarrow \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle}\
C_{U}: \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle} &\rightarrow \underset{\text{qubit}~1}{U|1\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle}\
\end{align}.$$
In matrix form, the action of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \
0 & u_{00} & 0 & u_{01} \
0 & 0 & 1 & 0 \
0 & u_{10} &0 & u_{11}
\end{pmatrix}.
\end{equation}$$
To work out these matrix elements, let
$$C_{(jk), (lm)} = \left(\underset{\text{qubit}~1}{\langle j |} \otimes \underset{\text{qubit}~0}{\langle k |}\right) C_{U} \left(\underset{\text{qubit}~1}{| l \rangle} \otimes \underset{\text{qubit}~0}{| m \rangle}\right),$$
compute the action of $C_{U}$ (given above), and compute the inner products.
As shown in the examples below, this operation is implemented in Qiskit as cU(q[0],q[1]).
If qubit 1 is the control and qubit 0 is the target, then the basis vectors are transformed according to
$$\begin{align}
C_{U}: \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle} &\rightarrow \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle}\
C_{U}: \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle} &\rightarrow \underset{\text{qubit}~1}{|0\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle}\
C_{U}: \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{|0\rangle} &\rightarrow \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{U|0\rangle}\
C_{U}: \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{|1\rangle} &\rightarrow \underset{\text{qubit}~1}{|1\rangle}\otimes \underset{\text{qubit}~0}{U|1\rangle}\
\end{align},$$
which implies the matrix form of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \
0 & 1 & 0 & 0 \
0 & 0 & u_{00} & u_{01} \
0 & 0 & u_{10} & u_{11}
\end{pmatrix}.
\end{equation}$$
End of explanation
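# Small numpy illustration of the ordering convention above (added here, not a
# cell from the original notebook): with qubit 0 as control and qubit 1 as
# target, C_X maps |q1 q0> -> |q1 XOR q0, q0>.  Building the matrix from that
# rule reproduces the LSB-control form given in the next section.
CX_check = np.zeros((4, 4))
for q1 in (0, 1):
    for q0 in (0, 1):
        col = 2*q1 + q0                 # input basis index in |q1 q0> order
        row = 2*(q1 ^ q0) + q0          # flip q1 whenever q0 == 1
        CX_check[row, col] = 1
CX_check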
qc = QuantumCircuit(q)
qc.cx(q[0],q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled Pauli Gates
Controlled-X (or, controlled-NOT) gate
The controlled-not gate flips the target qubit when the control qubit is in the state $|1\rangle$. If we take the MSB as the control qubit (e.g. cx(q[1],q[0])), then the matrix would look like
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & 1\
0 & 0 & 1 & 0
\end{pmatrix}.
$$
However, when the LSB is the control qubit, (e.g. cx(q[0],q[1])), this gate is equivalent to the following matrix:
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 0 & 0 & 1\
0 & 0 & 1 & 0\
0 & 1 & 0 & 0
\end{pmatrix}.
$$
End of explanation
qc = QuantumCircuit(q)
qc.cy(q[0],q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled $Y$ gate
Apply the $Y$ gate to the target qubit if the control qubit is the MSB
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & -i\
0 & 0 & i & 0
\end{pmatrix},
$$
or when the LSB is the control
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 0 & 0 & -i\
0 & 0 & 1 & 0\
0 & i & 0 & 0
\end{pmatrix}.
$$
End of explanation
qc = QuantumCircuit(q)
qc.cz(q[0],q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled $Z$ (or, controlled Phase) gate
Similarly, the controlled Z gate flips the phase of the target qubit if the control qubit is $1$. The matrix looks the same regardless of whether the MSB or LSB is the control qubit:
$$
C_Z =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 1 & 0\
0 & 0 & 0 & -1
\end{pmatrix}
$$
End of explanation
qc = QuantumCircuit(q)
qc.ch(q[0],q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary']/(0.707+0.707j), 3)
Explanation: Controlled Hadamard gate
Apply $H$ gate to the target qubit if the control qubit is $|1\rangle$. Below is the case where the control is the LSB qubit.
$$
C_H =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\
0 & 0 & 1 & 0\
0 & \frac{1}{\sqrt{2}} & 0& -\frac{1}{\sqrt{2}}
\end{pmatrix}
$$
End of explanation
qc = QuantumCircuit(q)
qc.crz(pi/2,q[0],q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled rotation gates
Controlled rotation around Z-axis
Perform rotation around Z-axis on the target qubit if the control qubit (here LSB) is $|1\rangle$.
$$
C_{Rz}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & e^{-i\lambda/2} & 0 & 0\
0 & 0 & 1 & 0\
0 & 0 & 0 & e^{i\lambda/2}
\end{pmatrix}
$$
End of explanation
qc = QuantumCircuit(q)
qc.cu1(pi/2,q[0], q[1])
circuit_drawer(qc)
Explanation: Controlled phase rotation
Perform a phase rotation if both qubits are in the $|11\rangle$ state. The matrix looks the same regardless of whether the MSB or LSB is the control qubit.
$$
C_{u1}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 1 & 0 & 0\
0 & 0 & 1 & 0\
0 & 0 & 0 & e^{i\lambda}
\end{pmatrix}
$$
End of explanation
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: I THINK SHOULD BE CALLED $C_\mathrm{PHASE}(\lambda)$
End of explanation
qc = QuantumCircuit(q)
qc.cu3(pi/2, pi/2, pi/2, q[0], q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled $u3$ rotation
Perform controlled-$u3$ rotation on the target qubit if the control qubit (here LSB) is $|1\rangle$.
$$
C_{u3}(\theta, \phi, \lambda) \equiv
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & e^{-i(\phi+\lambda)/2}\cos(\theta/2) & 0 & -e^{-i(\phi-\lambda)/2}\sin(\theta/2)\
0 & 0 & 1 & 0\
0 & e^{i(\phi-\lambda)/2}\sin(\theta/2) & 0 & e^{i(\phi+\lambda)/2}\cos(\theta/2)
\end{pmatrix}.
$$
End of explanation
qc = QuantumCircuit(q)
qc.swap(q[0], q[1])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: NOTE I NEED TO FIX THIS AND DECIDE ON CONVENTION - I ACTUALLY THINK WE WANT A FOUR PARAMETER GATE AND JUST CALL IT CU AND TO REMOVE THIS GATE.
SWAP gate
The SWAP gate exchanges the two qubits. It transforms the basis vectors as
$$|00\rangle \rightarrow |00\rangle~,~|01\rangle \rightarrow |10\rangle~,~|10\rangle \rightarrow |01\rangle~,~|11\rangle \rightarrow |11\rangle,$$
which gives a matrix representation of the form
$$
\mathrm{SWAP} =
\begin{pmatrix}
1 & 0 & 0 & 0\
0 & 0 & 1 & 0\
0 & 1 & 0 & 0\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
End of explanation
q = QuantumRegister(3)
qc = QuantumCircuit(q)
qc.ccx(q[0], q[1], q[2])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Three-qubit gates
There are two commonly-used three-qubit gates. For three qubits, the basis vectors are ordered as
$$|000\rangle, |001\rangle, |010\rangle, |011\rangle, |100\rangle, |101\rangle, |110\rangle, |111\rangle,$$
which, as bitstrings, represent the integers $0,1,2,\cdots, 7$. Again, Qiskit uses a representation in which the first qubit is on the right-most side of the tensor product and the third qubit is on the left-most side:
$$|abc\rangle : \underset{\text{qubit 2}}{|a\rangle}\otimes \underset{\text{qubit 1}}{|b\rangle}\otimes \underset{\text{qubit 0}}{|c\rangle}.$$
Toffoli gate ($ccx$ gate)
The Toffoli gate flips the third qubit if the first two qubits (LSB) are both $|1\rangle$:
$$|abc\rangle \rightarrow |bc\oplus a\rangle \otimes |b\rangle \otimes |c\rangle.$$
In matrix form, the Toffoli gate is
$$
C_{CX} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{pmatrix}.
$$
End of explanation
qc = QuantumCircuit(q)
qc.cswap(q[0], q[1], q[2])
circuit_drawer(qc)
job = execute(qc, backend)
np.round(job.result().get_data(qc)['unitary'], 3)
Explanation: Controlled swap gate (Fredkin Gate)
The Fredkin gate, or the controlled swap gate, exchanges the second and third qubits if the first qubit (LSB) is $|1\rangle$:
$$ |abc\rangle \rightarrow \begin{cases} |bac\rangle~~\text{if}~c=1 \cr |abc\rangle~~\text{if}~c=0 \end{cases}.$$
In matrix form, the Fredkin gate is
$$
C_{\mathrm{SWAP}} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
$$
End of explanation
q = QuantumRegister(1)
c = ClassicalRegister(1)
Explanation: Non unitary operations
Now that we have gone through all the unitary operations in quantum circuits, we also have access to non-unitary operations. These include measurements, reset of qubits, and classical conditional operations.
End of explanation
qc = QuantumCircuit(q, c)
qc.measure(q, c)
circuit_drawer(qc)
backend = Aer.get_backend('qasm_simulator')
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
Explanation: Measurements
We don't have access to all the information when we make a measurement in a quantum computer. The quantum state is projected onto the standard basis. Below are two examples showing a circuit that is prepared in a basis state and the quantum computer prepared in a superposition state.
End of explanation
qc = QuantumCircuit(q, c)
qc.h(q)
qc.measure(q, c)
circuit_drawer(qc)
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
Explanation: The simulator predicts that 100 percent of the time the classical register returns 0.
End of explanation
qc = QuantumCircuit(q, c)
qc.reset(q[0])
qc.measure(q, c)
circuit_drawer(qc)
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
qc = QuantumCircuit(q, c)
qc.h(q)
qc.reset(q[0])
qc.measure(q, c)
circuit_drawer(qc)
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
Explanation: The simulator predicts that 50 percent of the time the classical register returns 0 or 1.
Reset
It is also possible to reset qubits to the $|0\rangle$ state in the middle of computation. Note that reset is not a Gate operation, since it is irreversible.
End of explanation
qc = QuantumCircuit(q, c)
qc.x(q[0]).c_if(c, 0)
qc.measure(q,c)
circuit_drawer(qc)
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
Explanation: Here we see that for both of these circuits the simulator always predicts that the output is 100 percent in the 0 state.
Conditional operations
It is also possible to do operations conditioned on the state of the classical register
End of explanation
qc = QuantumCircuit(q, c)
qc.h(q)
qc.measure(q,c)
qc.x(q[0]).c_if(c, 0)
qc.measure(q,c)
circuit_drawer(qc)
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
Explanation: Here the classical bit always takes the value 0 so the qubit state is always flipped.
End of explanation
# Initializing a three-qubit quantum state
import math
desired_vector = [
1 / math.sqrt(16) * complex(0, 1),
1 / math.sqrt(8) * complex(1, 0),
1 / math.sqrt(16) * complex(1, 1),
0,
0,
1 / math.sqrt(8) * complex(1, 2),
1 / math.sqrt(16) * complex(1, 0),
0]
q = QuantumRegister(3)
qc = QuantumCircuit(q)
qc.initialize(desired_vector, [q[0],q[1],q[2]])
backend = Aer.get_backend('statevector_simulator')
job = execute(qc, backend)
qc_state = job.result().get_statevector(qc)
qc_state
Explanation: Here the classical bit set by the first measurement is random, but the conditional operation results in the qubit being deterministically put into $|1\rangle$.
Arbitrary initialization
What if we want to initialize a qubit register to an arbitrary state? An arbitrary state for $n$ qubits may be specified by a vector of $2^n$ amplitudes, where the sum of amplitude-norms-squared equals 1. For example, the following three-qubit state can be prepared:
$$|\psi\rangle = \frac{i}{4}|000\rangle + \frac{1}{\sqrt{8}}|001\rangle + \frac{1+i}{4}|010\rangle + \frac{1+2i}{\sqrt{8}}|101\rangle + \frac{1}{4}|110\rangle$$
End of explanation
state_fidelity(desired_vector,qc_state)
Explanation: Fidelity is useful to check whether two states are same or not.
For quantum (pure) states $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$, the fidelity is
$$
F\left(\left|\psi_1\right\rangle,\left|\psi_2\right\rangle\right) = \left|\left\langle\psi_1\middle|\psi_2\right\rangle\right|^2.
$$
The fidelity is equal to $1$ if and only if the two states are the same.
End of explanation |
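# Cross-check of the fidelity formula above (added for illustration, not a
# cell from the original notebook): for pure states it equals |<psi1|psi2>|^2,
# which can be computed directly with numpy (np.vdot conjugates its first argument).
abs(np.vdot(desired_vector, qc_state))**2, state_fidelity(desired_vector, qc_state)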
3,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook I study the stokeslet, the fundamental solution of the Stokes equation
$$ \nabla p - \mu \nabla^2 \mathbf u=\mathbf f$$
The stokeslet gives the flow field $\mathbf u$ in $\mathbf r$ provided the force $\mathbf f(\mathbf r)=\delta(\mathbf r - \mathbf r_0)\mathbf f$ acts on the fluid in $\mathbf r_0$.
Given the boundary condition $\mathbf u(\infty)=0$, the stokeslet is the linear operator described below.
Step1: Flow of a point force
Step2: Monopole
Step3: Dipole
Step4: 2 parallel dipoles - as 2 parallel microswimmers
Step5: Questions
1. what's the force between the 2 dipoles?
2. what's the force between the 2 monopoles of a single dipole?
Rod in a fluid
Method 1
Step6: Method 2 | Python Code:
import numpy as np
import pylab as pl
import seaborn as sns
sns.set_style("white")
%matplotlib inline
x,X,y,Y=-5,5,-5,5 #our space
dx,dy=.5,.5 #discretisation
mX,mY=np.meshgrid(np.arange(x,X,dx),np.arange(y,Y,dy))
pl.scatter(mX,mY,s=1,lw=1,c='r')
Explanation: Introduction
In this notebook I study the stokeslet, the fundamental solution of the Stokes equation
$$ \nabla p - \mu \nabla^2 \mathbf u=\mathbf f$$
The stokeslet gives the flow field $\mathbf u$ in $\mathbf r$ provided the force $\mathbf f(\mathbf r)=\delta(\mathbf r - \mathbf r_0)\mathbf f$ acts on the fluid in $\mathbf r_0$.
Given the boundary condition: $\mathbf u(\infty)=0$, the stokeslet is the linear operator:
$$ \frac{1}{8 \pi \mu}\frac{1}{\lVert \mathbf r\rVert }\left[ \mathbb I + \hat{\mathbf r} \hat{\mathbf r}^T\right]$$
where $\mathbf r$ is the vector from $\mathbf r_0$ to the point where we compute the flow and $\hat{\mathbf r}=\mathbf r/\lVert\mathbf r\rVert$ (the code below drops the constant prefactor, which only rescales the field).
In this notebook I'm interested in:
1. Visualize the flux generated by a single delta point force in $\mathbf r_0=(0,0)$
This can be thought of as the far-field flux generated by a small spherical colloidal particle, moving with constant velocity $\gamma^{-1}\mathbf f$.
2. Visualize and compare the flux generated by a rod. A couple of things now come to my mind:
1. Visualize the flow generated by a line of beads that "simulate" the rod. This is the usual approach to compute the drag coefficient as in slender body theory, and to design simulations with interacting filaments.
2. I want to use a tensorial drag to compute the force acting on the flow. It is known from Slender body theory (see Lighthill's book) that the force of a rod moving in a fluid can be decomposed into 2 drag coefficients in the "tangent" and "normal" direction:
$$\mathbf f=\left(\gamma_\perp \hat n \hat n^T + \gamma_\parallel \hat t \hat t^T\right) \mathbf v$$
where $\gamma_\parallel$ and $\gamma_\perp$ are defined by the shape of the body.
3. Using 2.2 I want to use the "Blake tensor". That is the fundamental solution of the Stokes equation when the fluid is confined in a half-space, with no-slip BC at the wall
Python definitions
End of explanation
r0=np.array([0.05,0.05]) # position of the force
f=np.array([0,1]) # direction of the force
def stokeslet(f,r0,mX,mY):
    # Velocity field (up to a constant prefactor) of a point force f applied
    # at r0, evaluated on the grid (mX, mY).
    Id=np.array([[1,0],[0,1]])
    # displacement vectors from the force position to every grid point
    r=np.array([mX-r0[0],mY-r0[1]])
    Idf=np.dot(Id,f)
    # (r.f) at every grid point, then the dyadic term (r r^T) f
    rTf=(r*f[:,np.newaxis,np.newaxis]).sum(axis=0)
    rrTf=(r*rTf[np.newaxis,])
    # the small +.01 regularises the singularity at r = r0
    modr=(r[0]**2+r[1]**2+.01)**.5
    u,v=Idf[:,np.newaxis,np.newaxis]/modr[np.newaxis]+rrTf/modr**3.
    return u,v
Explanation: Flow of a point force
End of explanation
u,v=stokeslet(f,r0,mX,mY)
pl.streamplot(mX,mY,u,v)
pl.scatter(r0[0],r0[1])
pl.arrow(r0[0],r0[1],f[0],f[1],head_width=0.5, head_length=0.5, fc='r', ec='r')
sns.despine()
pl.savefig("monopole.pdf",bbox_inches=0)
Explanation: Monopole
End of explanation
u,v=stokeslet(f,r0,mX,mY)
f1=-f
r1=r0+np.array((0,-1))
u1,v1=stokeslet(f1,r1,mX,mY)
#superimposition principle because of linearity of Stokes eq
u+=u1
v+=v1
pl.streamplot(mX,mY,u,v)
#draw force arrows
def draw_force(r,f):
pl.scatter(r[0],r[1])
pl.arrow(r[0],r[1],f[0],f[1],head_width=0.5, head_length=0.5, fc='r', ec='r')
draw_force(r0,f)
draw_force(r1,f1)
sns.despine()
pl.savefig("dipole.pdf",bbox_inches=0)
Explanation: Dipole
End of explanation
def dipole(f0,r0,mX,mY,dx):
r1=r0+np.array(dx)
f1=-f0
u,v=stokeslet(f0,r0,mX,mY)
u1,v1=stokeslet(f1,r1,mX,mY)
u+=u1
v+=v1
return u,v,f0,f1,r0,r1
#dipole @ left
rl=np.array([-0.55,0.55])
fl=np.array((0,1))
ul,vl,f0,f1,r0,r1=dipole(fl,rl,mX,mY,(0,-1))
draw_force(r0,f)
draw_force(r1,f1)
#dipole @ right
rr=np.array([+.55,.55])
fr=np.array((0,1))
ur,vr,f0,f1,r0,r1=dipole(fr,rr,mX,mY,(0,-1))
draw_force(r0,f)
draw_force(r1,f1)
pl.streamplot(mX,mY,ur+ul,vl+vr)
Explanation: 2 parallel dipoles - as 2 parallel microswimmers
End of explanation
def nlet(f0,r0,mX,mY,dx,n):
dx=np.array(dx)
f=[]
r=[]
u=np.zeros(mX.shape)
v=np.zeros(mY.shape)
    for j in range(int(n)):
_r=r0+j*dx
_f=f0
_u,_v=stokeslet(_f,_r,mX,mY)
u+=_u
v+=_v
f.append(_f)
r.append(_r)
return u,v,f,r
#draw force arrows
def draw_force(r,f,c='r'):
pl.scatter(r[0],r[1])
pl.arrow(r[0],r[1],f[0],f[1],head_width=0.5, head_length=0.5, fc=c, ec=c,alpha=.5)
L=6
dx=1
n=L/dx+1
r=np.array([-3,0])
f=np.array((0,1))
u,v,f,r=nlet(f,r,mX,mY,(dx,0),n)
for _f,_r in zip(f,r):
draw_force(_r,_f)
pl.streamplot(mX,mY,u,v,zorder=10)
pl.contourf(mX,mY,(u**2+v**2)**.5)
pl.figure()
r=np.array([-1,0])
f=np.array((1,0))
u,v,f,r=nlet(f,r,mX,mY,(dx,0),n)
for _f,_r in zip(f,r):
draw_force(_r,_f)
pl.streamplot(mX,mY,u,v,zorder=10)
pl.contourf(mX,mY,(u**2+v**2)**.5)
pl.figure()
r=np.array([-3,0])
f=np.array((1,1))/2**.5
u,v,f,r=nlet(f,r,mX,mY,(dx,0),n)
for _f,_r in zip(f,r):
draw_force(_r,_f)
pl.streamplot(mX,mY,u,v,zorder=10)
pl.contourf(mX,mY,(u**2+v**2)**.5)
Explanation: Questions
1. what's the force between the 2 dipoles?
2. what's the force between the 2 monopoles of a single dipole?
Rod in a fluid
Method 1: line of stokeslets
In the following I assume that the rod velocity direction is given by the force and that the force is equal at each bead of the rod.
End of explanation
def draw_force(r,f,c='r'):
pl.scatter(r[0],r[1])
pl.arrow(r[0],r[1],f[0],f[1],head_width=0.5, head_length=0.5, fc=c, ec=c,alpha=.5)
v0=1
a=np.pi/4.
v=v0*np.array([np.cos(a),np.sin(a)])
n=np.array([0,1])
t=np.array([1,0])
#Cs is the shape coefficient
Cs=1.5
gamma_para=1
gamma_perp=Cs*gamma_para
f=gamma_perp*n*np.dot(n,v)+gamma_para*t*np.dot(t,v)
r0=np.array([0,0])
#for comparisons: same modulo btw force and velocity
#f/=sum(f**2)**.5
#v/=sum(v**2)**.5
ux,uy=stokeslet(f,r0,mX,mY)
pl.streamplot(mX,mY,ux,uy,zorder=10)
_ux,_uy=stokeslet(v,r0,mX,mY)
#pl.streamplot(mX,mY,_ux,_uy,linewidth=.5)
#pl.streamplot(mX,mY,ux-_ux,uy-_uy)
V=((ux-_ux)**2+(uy-_uy)**2)**.5
U=(.25*(ux+_ux)**2+.25*(uy+_uy)**2)**.5
pl.contourf(mX,mY,V/U)
pl.colorbar()
draw_force(r0,f)
draw_force(r0,v,'g')
def nlet(f0,r0,mX,mY,dx,n,Cs=1.):
dx=np.array(dx)
f=[]
r=[]
u=np.zeros(mX.shape)
v=np.zeros(mY.shape)
tt=np.array([1,0])
nn=np.array([0,1])
#Cs is the shape coefficient
gamma_para=1
gamma_perp=Cs*gamma_para
ff=gamma_perp*nn*np.dot(nn,f0)+gamma_para*tt*np.dot(tt,f0)
ff/=(ff**2.).sum()**.5
    for j in range(int(n)):
_r=r0+j*dx
_f=f0
#_u,_v=stokeslet(_f,_r,mX,mY)
#u+=_u
#v+=_v
f.append(-ff)
r.append(_r)
return u,v,f,r
#draw force arrows
def draw_force(r,f,ax,c='r'):
ax.scatter(r[0],r[1],s=200,c='k',)
ax.arrow(r[0],r[1],f[0],f[1],head_width=0.5, head_length=0.5, fc=c, ec=c,alpha=.75)
L=6
dx=.5
n=L/dx+1
fig,(a,b)=pl.subplots(1,2,figsize=np.asarray((12,5))/1.5)
r=np.array([-3,0])
fin=np.array((.75,1))
fin/=(fin**2).sum()**.5
u,v,f,r=nlet(fin,r,mX,mY,(dx,0),n)
for _f,_r in zip(f,r):
draw_force(_r,_f*0,a)
for _f,_r in list(zip(f,r))[::2]:
draw_force(_r,_f*1.5,a)
for _r in r[::2]:
draw_force(_r,fin*1.5,a,c='k')
#a.streamplot(mX,mY,u,v,linewidth=.5,color='k',density=.5)
#a.contourf(mX,mY,(u**2+v**2)**.5,zorder=-10)
DragRatio=1.8
r=np.array([-3,0])
u,v,f,r=nlet(fin,r,mX,mY,(dx,0),n,Cs=2.)
for _f,_r in zip(f,r):
draw_force(_r,_f*0,b)
for _f,_r in list(zip(f,r))[::2]:
draw_force(_r,_f*DragRatio,b,c='g')
for _r in r[::2]:
draw_force(_r,fin*DragRatio,b,c='k')
#b.streamplot(mX,mY,u,v,linewidth=.5,color='k',density=.5)
#b.contourf(mX,mY,(u**2+v**2)**.5,zorder=-10)
a.xaxis.set_ticks([])
a.yaxis.set_ticks([])
b.xaxis.set_ticks([])
b.yaxis.set_ticks([])
a.set_xlim(x,X)
a.set_ylim(y,Y)
b.set_xlim(x,X)
b.set_ylim(y,Y)
a.set_title("Isotropic Drag",fontsize=18)
b.set_title("Anisotropic Drag",fontsize=18)
fig.tight_layout()
fig.savefig("res_forc.pdf",bbox_inches="tight")
Explanation: Method 2: anisotropic drag coefficient
A rod is moving in the fluid with velocity $\mathbf v$. Due to its shape the drag coefficient is not isotropic and the force needed to push the rod at the given velocity is:
$$\mathbf f=\left(\gamma_\perp \hat n \hat n^T + \gamma_\parallel \hat t \hat t^T\right) \mathbf v$$
$$\mathbf f=\left(\gamma_\perp (1-\hat t \hat t^T) + \gamma_\parallel \hat t \hat t^T\right) \mathbf v$$
where $\hat n$ and $\hat t$ are the normal and tangent direction. For simplicity, I initially assume that the rod is parallel to the $\hat x$ axis. It is clear that force and velocity are not parallel as they were in the case of spherical beads.
The drag coefficients depend on the shape; I do not remember the exact function now.
End of explanation |
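# Tiny numerical illustration of the anisotropic drag discussed above (added
# here, not a cell from the original notebook; gamma_perp/gamma_para = 2 is an
# assumed value): for a velocity at 45 degrees the resulting force is rotated
# away from the velocity direction by roughly 18 degrees.
v_demo = np.array([np.cos(np.pi/4), np.sin(np.pi/4)])
t_demo, n_demo = np.array([1., 0.]), np.array([0., 1.])
f_demo = 2.0*n_demo*np.dot(n_demo, v_demo) + 1.0*t_demo*np.dot(t_demo, v_demo)
np.degrees(np.arccos(np.dot(f_demo, v_demo) /
                     (np.linalg.norm(f_demo)*np.linalg.norm(v_demo))))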
3,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Checks
Schema checks
Step1: A bit of basic pandas
Let's first start by reading in the CSV file as a pandas.DataFrame().
Step2: To get the columns of a DataFrame object df, call df.columns. This is a list-like object that can be iterated over.
Step4: YAML Files
Describe data in a human-friendly & computer-readable format. The environment.yml file in your downloaded repository is also a YAML file, by the way!
Structure
Step5: You can also take dictionaries, and return YAML-formatted text.
Step6: By having things YAML formatted, you preserve human-readability and computer-readability simultaneously.
Providing metadata should be something already done when doing analytics; YAML-format is a strong suggestion, but YAML schema will depend on use case.
Let's now switch roles, and pretend that we're on side of the "analyst" and are no longer the "data provider".
How would you check that the columns match the spec? Basically, check that every element in df.columns is present inside the metadata['columns'] list.
Exercise
Inside test_datafuncs.py, write a utility function, check_schema(df, meta_columns) that tests whether every column in a DataFrame is present in some metadata spec file. It should accept two arguments
Step7: Demo
Step8: Immediately it's clear that there's a number of rows with empty values! Nothing beats a quick visual check like this one.
We can get a table version of this using another package called pandas_summary.
Step11: dfs.summary() returns a Pandas DataFrame; this means we can write tests for data completeness!
Exercise
Step12: It's often a good idea to standardize numerical data (that aren't count data). The term standardize often refers to the statistical procedure of subtracting the mean and dividing by the standard deviation, yielding an empirical distribution of data centered on 0 and having standard deviation of 1.
Exercise
Write a test for a function that standardizes a column of data. Then, write the function.
Note
Step13: Exercise
Did we just copy/paste the function?! It's time to stop doing this. Let's refactor the code into a function that can be called.
Categorical Data
For categorical-type data, we can plot the empirical distribution as well. (This example uses the smartphone_sanitization.csv dataset.)
Step14: Statistical Checks
Report on deviations from normality.
Normality?!
The Gaussian (Normal) distribution is commonly assumed in downstream statistical procedures, e.g. outlier detection.
We can test for normality by using a K-S test.
K-S test
From Wikipedia
Step15: Exercise
Re-create the panel of cumulative distribution plots, this time adding on the Normal distribution, and annotating the p-value of the K-S test in the title. | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: Data Checks
- Schema checks: Making sure that only the columns that are expected are provided.
- Datum checks:
  - Looking for missing values
  - Ensuring that expected value ranges are correct
- Statistical checks:
  - Visual check of data distributions.
  - Correlations between columns.
  - Statistical distribution checks.
Roles in Data Analysis
Data Provider: Someone who's collected and/or curated the data.
Data Analyst: The person who is analyzing the data.
Sometimes they're the same person; at other times they're not. Tasks related to testing can often be assigned to either role, but there are some tasks more naturally suited to each.
Schema Checks
Schema checks are all about making sure that the data columns that you want to have are all present, and that they have the expected data types.
The way data are provided to you should be in two files. The first file is the actual data matrix. The second file should be a metadata specification file, minimally containing the name of the CSV file it describes, and the list of columns present. Why the duplication? The list of columns is basically an implicit contract between your data provider and you, and provides a verifiable way of describing the data matrix's columns.
We're going to use a few datasets from Boston's open data repository. Let's first take a look at Boston's annual budget data, while pretending we're the person who curated the data, the "data provider".
End of explanation
import pandas as pd
df = pd.read_csv('data/boston_budget.csv')
df.head()
Explanation: A bit of basic pandas
Let's first start by reading in the CSV file as a pandas.DataFrame().
End of explanation
df.columns
Explanation: To get the columns of a DataFrame object df, call df.columns. This is a list-like object that can be iterated over.
End of explanation
spec = """
filename: boston_budget.csv
columns:
- "Fiscal Year"
- "Service (Cabinet)"
- "Department"
- "Program #"
- "Program"
- "Expense Type"
- "ACCT #"
- "Expense Category (Account)"
- "Fund"
- "Amount"
"""

import yaml

# safe_load parses the YAML text into plain Python dicts and lists
metadata = yaml.safe_load(spec)
metadata
Explanation: YAML Files
Describe data in a human-friendly & computer-readable format. The environment.yml file in your downloaded repository is also a YAML file, by the way!
Structure:
yaml
key1: value
key2:
- value1
- value2
- subkey1:
- value3
Example YAML-formatted schema:
yaml
filename: boston_budget.csv
column_names:
- "Fiscal Year"
- "Service (cabinet)"
- "Department"
- "Program #"
...
- "Fund"
- "Amount"
YAML-formatted text can be read as dictionaries.
End of explanation
print(yaml.dump(metadata))
Explanation: You can also take dictionaries, and return YAML-formatted text.
End of explanation
import pandas as pd
import seaborn as sns
sns.set_style('white')
%matplotlib inline
df = pd.read_csv('data/boston_ei-corrupt.csv')
df.head()
Explanation: By having things YAML formatted, you preserve human-readability and computer-readability simultaneously.
Providing metadata should be something already done when doing analytics; YAML-format is a strong suggestion, but YAML schema will depend on use case.
Let's now switch roles, and pretend that we're on the side of the "analyst" and are no longer the "data provider".
How would you check that the columns match the spec? Basically, check that every element in df.columns is present inside the metadata['columns'] list.
Exercise
Inside test_datafuncs.py, write a utility function, check_schema(df, meta_columns) that tests whether every column in a DataFrame is present in some metadata spec file. It should accept two arguments:
df: a pandas.DataFrame
meta_columns: A list of columns from the metadata spec.
```python
def check_schema(df, meta_columns):
for col in df.columns:
assert col in meta_columns, f'"{col}" not in metadata column spec'
```
In your test file, outside the function definition, write another test function, test_budget_schemas(), explicitly runs a test for just the budget data.
```python
def test_budget_schemas():
columns = read_metadata('data/metadata_budget.yml')['columns']
df = pd.read_csv('data/boston_budget.csv')
check_schema(df, columns)
```
Now, run the test. Do you get the following error? Can you spot the error?
```bash
def check_schema(df, meta_columns):
for col in df.columns:
assert col in meta_columns, f'"{col}" not in metadata column spec'
E AssertionError: " Amount" not in metadata column spec
E assert ' Amount' in ['Fiscal Year', 'Service (Cabinet)', 'Department', 'Program #', 'Program', 'Expense Type', ...]
test_datafuncs_soln.py:63: AssertionError
=================================== 1 failed, 7 passed in 0.91 seconds ===================================
```
If there is even a slight mis-spelling, this kind of check will help you pinpoint where that is. Note how the "Amount" column is spelled with an extra space.
At this point, I would contact the data provider to correct errors like this.
It is a logical practice to keep one schema spec file per table provided to you. However, it is also possible to take advantage of YAML "documents" to keep multiple schema specs inside a single YAML file.
The choice is yours - in cases where there are a lot of data files, it may make sense (for the sake of file-system sanity) to keep all of the specs in multiple files that represent logical groupings of data.
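If you do go the single-file route, note that YAML separates multiple "documents" with `---` lines, and PyYAML can read them all at once. A small sketch (the second spec's contents here are illustrative only, not the real boston_ei column list):
```python
import yaml

multi_spec = """
---
filename: boston_budget.csv
columns:
- "Fiscal Year"
---
filename: boston_ei.csv
columns:
- "Year"
- "Month"
"""
specs = list(yaml.safe_load_all(multi_spec))  # one dict per YAML document
```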
Exercise: Write YAML metadata spec.
Put yourself in the shoes of a data provider. Take the boston_ei.csv file in the data/ directory, and make a schema spec file for that file.
Exercise: Write test for metadata spec.
Next, put yourself in the shoes of a data analyst. Take the schema spec file and write a test for it.
Exercise: Auto YAML Spec.
Inside datafuncs.py, write a function with the signature autospec(handle) that takes in a file path, and does the following:
Create a dictionary, with two keys:
a "filename" key, whose value only records the filename (and not the full file path),
a "columns" key, whose value records the list of columns in the dataframe.
Converts the dictionary to a YAML string
Writes the YAML string to disk.
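Putting those steps together, one possible sketch of autospec() (the output-file naming convention used here is an assumption, not part of the exercise spec):
```python
import os
import pandas as pd
import yaml

def autospec(handle):
    df = pd.read_csv(handle)
    metadata = {'filename': os.path.basename(handle),
                'columns': list(df.columns)}
    yaml_string = yaml.dump(metadata, default_flow_style=False)
    # Assumed convention: write the spec alongside the data file with a .yml extension.
    spec_path = os.path.splitext(handle)[0] + '.yml'
    with open(spec_path, 'w') as f:
        f.write(yaml_string)
```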
Optional Exercise: Write meta-test
Now, let's go "meta". Write a "meta-test" that ensures that every CSV file in the data/ directory has a schema file associated with it. (The function need not check each schema.) Until we finish filling out the rest of the exercises, this test can be allowed to fail, and we can mark it as a test to skip by marking it with an @skip decorator:
python
@pytest.mark.skip(reason="no way of currently testing this")
def test_my_func():
...
Notes
The point here is to have a trusted copy of schema apart from data file. YAML not necessarily only way!
If no schema provided, manually create one; this is exploratory data analysis anyways - no effort wasted!
Datum Checks
Now that we're done with the schema checks, let's do some sanity checks on the data as well. This is my personal favourite too, as some of the activities here overlap with the early stages of exploratory data analysis.
We're going to switch datasets here, and move to a 'corrupted' version of the Boston Economic Indicators dataset. Its file path is: ./data/boston_ei-corrupt.csv.
End of explanation
# First, we check for missing data.
import missingno as msno
msno.matrix(df)
Explanation: Demo: Visual Diagnostics
We can use a package called missingno, which gives us a quick visual view of the completeness of the data. This is a good starting point for deciding whether you need to manually comb through the data or not.
End of explanation
# We can do the same using pandas-summary.
from pandas_summary import DataFrameSummary
dfs = DataFrameSummary(df)
dfs.summary()
Explanation: Immediately it's clear that there's a number of rows with empty values! Nothing beats a quick visual check like this one.
We can get a table version of this using another package called pandas_summary.
End of explanation
import numpy as np
def compute_dimensions(length):
    """Given an integer, compute the "square-est" pair of dimensions for plotting.
    Examples:
    - length: 17 => rows: 4, cols: 5
    - length: 14 => rows: 4, cols: 4
    This is a utility function; can be tested separately.
    """
sqrt = np.sqrt(length)
floor = int(np.floor(sqrt))
ceil = int(np.ceil(sqrt))
if floor ** 2 >= length:
return (floor, floor)
elif floor * ceil >= length:
return (floor, ceil)
else:
return (ceil, ceil)
compute_dimensions(length=17)
assert compute_dimensions(17) == (4, 5)
assert compute_dimensions(16) == (4, 4)
assert compute_dimensions(15) == (4, 4)
assert compute_dimensions(11) == (3, 4)
# Next, let's visualize the empirical CDF for each column of data.
import matplotlib.pyplot as plt
def empirical_cumdist(data, ax, title=None):
    """Plots the empirical cumulative distribution of values."""
x, y = np.sort(data), np.arange(1, len(data)+1) / len(data)
ax.scatter(x, y)
ax.set_title(title)
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
empirical_cumdist(df[col], ax, title=col)
plt.tight_layout()
plt.show()
Explanation: dfs.summary() returns a Pandas DataFrame; this means we can write tests for data completeness!
Exercise: Test for data completeness.
Write a test named check_data_completeness(df) that takes in a DataFrame and confirms that there's no missing data from the pandas-summary output. Then, write a corresponding test_boston_ei() that tests the schema for the Boston Economic Indicators dataframe.
```python
In test_datafuncs.py
from pandas_summary import DataFrameSummary
def check_data_completeness(df):
df_summary = DataFrameSummary(df).summary()
for col in df_summary.columns:
assert df_summary.loc['missing', col] == 0, f'{col} has missing values'
def test_boston_ei():
df = pd.read_csv('data/boston_ei.csv')
check_data_completeness(df)
```
Exercise: Test for value correctness.
In the Economic Indicators dataset, there are four "rate" columns: ['labor_force_part_rate', 'hotel_occup_rate', 'hotel_avg_daily_rate', 'unemp_rate'], which must have values between 0 and 1.
Add a utility function to test_datafuncs.py, check_data_range(data, lower=0, upper=1), which checks the range of the data such that:
- data is a list-like object.
- data <= upper
- data >= lower
- upper and lower have default values of 1 and 0 respectively.
Then, add to the test_boston_ei() function tests for each of these four columns, using the check_data_range() function.
```python
In test_datafuncs.py
def check_data_range(data, lower=0, upper=1):
assert min(data) >= lower, f"minimum value less than {lower}"
assert max(data) <= upper, f"maximum value greater than {upper}"
def test_boston_ei():
df = pd.read_csv('data/boston_ei.csv')
check_data_completeness(df)
zero_one_cols = ['labor_force_part_rate', 'hotel_occup_rate',
'hotel_avg_daily_rate', 'unemp_rate']
for col in zero_one_cols:
        check_data_range(df[col])
```
Distributions
Most of what is coming is going to be a demonstration of the kinds of tools that are potentially useful for you. Feel free to relax from coding, as these aren't necessarily obviously automatable.
Numerical Data
We can take the EDA portion further, by doing an empirical cumulative distribution plot for each data column.
End of explanation
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
empirical_cumdist(standard_scaler(df[col]), ax, title=col)
plt.tight_layout()
plt.show()
Explanation: It's often a good idea to standardize numerical data (that aren't count data). The term standardize often refers to the statistical procedure of subtracting the mean and dividing by the standard deviation, yielding an empirical distribution of data centered on 0 and having standard deviation of 1.
Exercise
Write a test for a function that standardizes a column of data. Then, write the function.
Note: This function is also implemented in the scikit-learn library as part of their preprocessing module. However, in case an engineering decision that you make is that you don't want to import an entire library just to use one function, you can re-implement it on your own.
```python
def standard_scaler(x):
return (x - x.mean()) / x.std()
def test_standard_scaler(x):
std = standard_scaler(x)
assert np.allclose(std.mean(), 0)
assert np.allclose(std.std(), 1)
```
Exercise
Now, plot the grid of standardized values.
End of explanation
from collections import Counter
def empirical_catdist(data, ax, title=None):
d = Counter(data)
print(d)
x = range(len(d.keys()))
labels = list(d.keys())
y = list(d.values())
ax.bar(x, y)
ax.set_xticks(x)
ax.set_xticklabels(labels)
smartphone_df = pd.read_csv('data/smartphone_sanitization.csv')
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
empirical_catdist(smartphone_df['site'], ax=ax)
Explanation: Exercise
Did we just copy/paste the function?! It's time to stop doing this. Let's refactor the code into a function that can be called.
Categorical Data
For categorical-type data, we can plot the empirical distribution as well. (This example uses the smartphone_sanitization.csv dataset.)
End of explanation
from scipy.stats import ks_2samp
import numpy.random as npr
# Simulate a normal distribution with 10000 draws.
normal_rvs = npr.normal(size=10000)
result = ks_2samp(normal_rvs, df['labor_force_part_rate'].dropna())
result.pvalue < 0.05
fig = plt.figure()
ax = fig.add_subplot(111)
empirical_cumdist(normal_rvs, ax=ax)
empirical_cumdist(df['hotel_occup_rate'], ax=ax)
Explanation: Statistical Checks
Report on deviations from normality.
Normality?!
The Gaussian (Normal) distribution is commonly assumed in downstream statistical procedures, e.g. outlier detection.
We can test for normality by using a K-S test.
K-S test
From Wikipedia:
In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). It is named after Andrey Kolmogorov and Nikolai Smirnov.
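scipy also offers a one-sample variant. A minimal sketch (assuming the standard_scaler() helper from the earlier exercise is in scope) compares a standardized column directly against the standard Normal CDF instead of against a simulated sample:
```python
from scipy.stats import kstest

# one-sample K-S test against the standard Normal
col = standard_scaler(df['labor_force_part_rate'].dropna())
result = kstest(col, 'norm')
print(result.pvalue)
```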
End of explanation
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
test = ks_2samp(normal_rvs, standard_scaler(df[col]))
empirical_cumdist(normal_rvs, ax)
empirical_cumdist(standard_scaler(df[col]), ax, title=f"{col}, p={round(test.pvalue, 2)}")
plt.tight_layout()
plt.show()
Explanation: Exercise
Re-create the panel of cumulative distribution plots, this time adding on the Normal distribution, and annotating the p-value of the K-S test in the title.
End of explanation |
3,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You are getting to the point where you can own an analysis from beginning to end. So you'll do more data exploration in this exercise than you've done before. Before you get started, run the following set-up code as usual.
Step1: You'll work with a dataset about taxi trips in the city of Chicago. Run the cell below to fetch the chicago_taxi_trips dataset.
Step2: Exercises
You are curious how much slower traffic moves when traffic volume is high. This involves a few steps.
1) Find the data
Before you can access the data, you need to find the table name with the data.
Hint
Step3: For the solution, uncomment the line below.
Step4: 2) Peek at the data
Use the next code cell to peek at the top few rows of the data. Inspect the data and see if any issues with data quality are immediately obvious.
Step5: After deciding whether you see any important issues, run the code cell below.
Step7: 3) Determine when this data is from
If the data is sufficiently old, we might be careful before assuming the data is still relevant to traffic patterns today. Write a query that counts the number of trips in each year.
Your results should have two columns
Step8: For a hint or the solution, uncomment the appropriate line below.
Step10: 4) Dive slightly deeper
You'd like to take a closer look at rides from 2017. Copy the query you used above in rides_per_year_query into the cell below for rides_per_month_query. Then modify it in two ways
Step11: For a hint or the solution, uncomment the appropriate line below.
Step13: 5) Write the query
It's time to step up the sophistication of your queries. Write a query that shows, for each hour of the day in the dataset, the corresponding number of trips and average speed.
Your results should have three columns
Step14: For the solution, uncomment the appropriate line below. | Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex5 import *
print("Setup Complete")
Explanation: Introduction
You are getting to the point where you can own an analysis from beginning to end. So you'll do more data exploration in this exercise than you've done before. Before you get started, run the following set-up code as usual.
End of explanation
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "chicago_taxi_trips" dataset
dataset_ref = client.dataset("chicago_taxi_trips", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
Explanation: You'll work with a dataset about taxi trips in the city of Chicago. Run the cell below to fetch the chicago_taxi_trips dataset.
End of explanation
# Your code here to find the table name
# Write the table name as a string below
table_name = ____
# Check your answer
q_1.check()
Explanation: Exercises
You are curious how much slower traffic moves when traffic volume is high. This involves a few steps.
1) Find the data
Before you can access the data, you need to find the table name with the data.
Hint: Tab completion is helpful whenever you can't remember a command. Type client. and then hit the tab key. Don't forget the period before hitting tab.
End of explanation
#q_1.solution()
Explanation: For the solution, uncomment the line below.
End of explanation
# Your code here
Explanation: 2) Peek at the data
Use the next code cell to peek at the top few rows of the data. Inspect the data and see if any issues with data quality are immediately obvious.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
Explanation: After deciding whether you see any important issues, run the code cell below.
End of explanation
# Your code goes here
rides_per_year_query = ____
# Set up the query (cancel the query if it would use too much of
# your quota)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
rides_per_year_query_job = ____ # Your code goes here
# API request - run the query, and return a pandas DataFrame
rides_per_year_result = ____ # Your code goes here
# View results
print(rides_per_year_result)
# Check your answer
q_3.check()
Explanation: 3) Determine when this data is from
If the data is sufficiently old, we might be careful before assuming the data is still relevant to traffic patterns today. Write a query that counts the number of trips in each year.
Your results should have two columns:
- year - the year of the trips
- num_trips - the number of trips in that year
Hints:
- When using GROUP BY and ORDER BY, you should refer to the columns by the alias year that you set at the top of the SELECT query.
- The SQL code to SELECT the year from trip_start_timestamp is <code>SELECT EXTRACT(YEAR FROM trip_start_timestamp)</code>
- The FROM field can be a little tricky until you are used to it. The format is (a generic sketch follows this list):
1. A backtick (the symbol `).
2. The project name. In this case it is bigquery-public-data.
3. A period.
4. The dataset name. In this case, it is chicago_taxi_trips.
5. A period.
6. The table name. You used this as your answer in 1) Find the data.
7. A backtick (the symbol `).
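Putting those pieces together, the FROM clause of your query string will look roughly like this (`my_table` is a placeholder, not the answer to question 1):
```python
# Hypothetical sketch only -- replace `my_table` with your answer to question 1.
example_query = """
                SELECT COUNT(1) AS num_rows
                FROM `bigquery-public-data.chicago_taxi_trips.my_table`
                """
```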
End of explanation
#q_3.hint()
#q_3.solution()
Explanation: For a hint or the solution, uncomment the appropriate line below.
End of explanation
# Your code goes here
rides_per_month_query = ____
# Set up the query
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
rides_per_month_query_job = ____ # Your code goes here
# API request - run the query, and return a pandas DataFrame
rides_per_month_result = ____ # Your code goes here
# View results
print(rides_per_month_result)
# Check your answer
q_4.check()
Explanation: 4) Dive slightly deeper
You'd like to take a closer look at rides from 2017. Copy the query you used above in rides_per_year_query into the cell below for rides_per_month_query. Then modify it in two ways:
1. Use a WHERE clause to limit the query to data from 2017.
2. Modify the query to extract the month rather than the year.
End of explanation
#q_4.hint()
#q_4.solution()
Explanation: For a hint or the solution, uncomment the appropriate line below.
End of explanation
# Your code goes here
speeds_query =
WITH RelevantRides AS
(
SELECT ____
FROM ____
WHERE ____
)
SELECT ______
FROM RelevantRides
GROUP BY ____
ORDER BY ____
# Set up the query
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
speeds_query_job = ____ # Your code here
# API request - run the query, and return a pandas DataFrame
speeds_result = ____ # Your code here
# View results
print(speeds_result)
# Check your answer
q_5.check()
Explanation: 5) Write the query
It's time to step up the sophistication of your queries. Write a query that shows, for each hour of the day in the dataset, the corresponding number of trips and average speed.
Your results should have three columns:
- hour_of_day - sort by this column, which holds the result of extracting the hour from trip_start_timestamp.
- num_trips - the count of the total number of trips in each hour of the day (e.g. how many trips were started between 6AM and 7AM, independent of which day it occurred on).
- avg_mph - the average speed, measured in miles per hour, for trips that started in that hour of the day. Average speed in miles per hour is calculated as 3600 * SUM(trip_miles) / SUM(trip_seconds). (The value 3600 is used to convert from seconds to hours.)
Restrict your query to data meeting the following criteria:
- a trip_start_timestamp between 2017-01-01 and 2017-07-01
- trip_seconds > 0 and trip_miles > 0
You will use a common table expression (CTE) to select just the relevant rides. Because this dataset is very big, this CTE should select only the columns you'll need to create the final output (though you won't actually create those in the CTE -- instead you'll create those in the later SELECT statement below the CTE).
This is a much harder query than anything you've written so far. Good luck!
End of explanation
#q_5.solution()
Explanation: For the solution, uncomment the appropriate line below.
End of explanation |
3,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Basics
Step1: Functional method
Step2: Object oriented method
Step3: Matplotlib Basics continued...2
Step4: Figure Size and DPI
Step5: Saving figures
Step6: Matplotlib Basics continued...3 | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.linspace(0,5,11)
y = x ** 2
x
y
Explanation: Matplotlib Basics
End of explanation
plt.plot(x, y)
plt.xlabel('X Label')
plt.ylabel('Y Label')
plt.title('Title')
plt.show()
# Multiplot on same canvas
plt.subplot(1,2,1) # rows, columns, plot_you_are_referring_to
plt.plot(x,y,'r')
plt.subplot(1,2,2)
plt.plot(y,x,'b')
plt.show()
Explanation: Functional method
End of explanation
fig = plt.figure()
fig
axes = fig.add_axes([0.1,0.1,0.8,0.8]) # [left, bottom, width, height]
fig
axes.plot(x,y)
axes.set_xlabel('X Label')
axes.set_ylabel('Y Label')
axes.set_title('Title')
fig
fig = plt.figure()
axes1 = fig.add_axes([0.1,0.1,0.8,0.8])
axes2 = fig.add_axes([0.2,0.5,0.4,0.3])
fig = plt.figure()
axes1 = fig.add_axes([0,0,1,1])
axes2 = fig.add_axes([0.5,0,0.5,1])
fig = plt.figure()
main = fig.add_axes([0,0,1,1]) # main axes
tl = fig.add_axes([0,0.5,0.5,0.5]) # top-left
bl = fig.add_axes([0,0,0.5,0.5]) # bottom left
tr = fig.add_axes([0.5,0.5,0.5,0.5])# top right
br = fig.add_axes([0.5,0,0.5,0.5]) # bottom right
fig = plt.figure()
axes1 = fig.add_axes([0,0,0.45,1])
axes2 = fig.add_axes([0.55,0,0.45,1])
axes1.plot(x,y)
axes2.plot(y,x)
fig = plt.figure()
axes1 = fig.add_axes([0.1,0.1,0.8,0.8])
axes2 = fig.add_axes([0.2,0.5,0.4,0.3])
axes1.set_title('Larger Plot')
axes2.set_title('Smaller Plot')
axes1.plot(x,y)
axes2.plot(y,x)
Explanation: Object oriented method
End of explanation
fig, axes = plt.subplots(nrows = 1, ncols = 2) #tuple unpacking
print(type(axes))
axes
fig, axes = plt.subplots(nrows = 1, ncols = 2) #tuple unpacking
for current_axes in axes:
current_axes.plot(x,y)
fig, axes = plt.subplots(nrows = 1, ncols = 2) #tuple unpacking
axes[0].plot(x,y)
axes[0].set_title('First Plot')
axes[1].plot(y,x)
axes[1].set_title('Second Plot')
# to resolve the over-lapping issue
plt.tight_layout()
Explanation: Matplotlib Basics continued...2
End of explanation
fig = plt.figure(figsize = (3,2))
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
fig = plt.figure(figsize = (8,2))
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
# With subplots
fig, axes = plt.subplots(figsize=(3,2))
axes.plot(x,y)
fig, axes = plt.subplots(figsize=(8,2))
axes.plot(x,y)
fig, axes = plt.subplots(nrows = 2, ncols= 1,figsize=(8,2))
axes[0].plot(x,y)
axes[1].plot(y,x)
fig.tight_layout()
Explanation: Figure Size and DPI
End of explanation
fig.savefig('my_picture.png', dpi = 800)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Title')
ax.plot(x,x ** 2, label='X Squared')
ax.plot(x,x ** 3, label='X Cubed')
# more details on legend location
# http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.legend
ax.legend(loc = 10)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Title')
ax.plot(x,x ** 2, label='X Squared')
ax.plot(x,x ** 3, label='X Cubed')
# more details on legend location
# http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.legend
ax.legend(loc = (0.1,0.1)) # location tuple (left, bottom)
Explanation: Saving figures
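A couple of common variations on savefig (a sketch; the file names are arbitrary):
```python
fig.savefig('my_picture.pdf')                       # output format is inferred from the extension
fig.savefig('my_picture.png', bbox_inches='tight')  # trim extra whitespace around the figure
```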
End of explanation
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
# color = purple, #FF8C00,
# linewidth or lw = 3
# alpha (transperancy) = 0.5
# linestyle or ls = '--' or 'steps'
# marker = 'o'
# markersize = 10
# markerfacecolor = 'red', #FF8C00
# markeredgewidth = 3
# markeredgecolor = 'blue', #FF8C00
ax.plot(x,y, color='purple', lw=3, alpha=0.5, ls = '--', marker = 'o', markersize = 10)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple', lw=3, ls='--')
ax.set_xlim([0,1]) # lowerbound, upper bound along x-axis
ax.set_ylim([0,2]) # lowerbound, upper bound along y-axis
fig
Explanation: Matplotlib Basics continued...3
End of explanation |
3,734 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have data of sample 1 and sample 2 (`a` and `b`) – size is different for sample 1 and sample 2. I want to do a weighted (take n into account) two-tailed t-test. | Problem:
import numpy as np
import scipy.stats
a = np.random.randn(40)
b = 4*np.random.randn(50)
_, p_value = scipy.stats.ttest_ind(a, b, equal_var = False) |
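# Note: equal_var=False performs Welch's t-test, which does not assume equal
# variances and therefore copes with the two samples having different sizes.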
3,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{Combinational-Circuit Building Blocks aka medium scale integrated circuit (MSI) in myHDL}
\author{Steven K Armour}
\maketitle
Table of Contents
1. Refs
2. Python Libraries Utilized
3. Multiplexers (mux): Shannon's Expansion Theorem; 2:1 MultiPlexer (gate level and behavioral); 4:1 MUX (gate level, behavioral, and behavioral with bitvectors); Generic Expressions via MUXs
4. Demultiplexers
5. Encoders
6. Decoders
Step2: Multiplexers (mux)
a junction switch between one of n inputs to a single output; equivalent to a "if" or "case" statement
let $Z$ be its output $m_k$ the minterms of the controls to the mux and $I_k$ be the input feeds to the mux; then the expression for the mux in terms of boolean algebra becomes
$$Z=\sum^{2^k-1}_{k=0} m_k \cdot I_k= \text{OR}(m_k \& I_k) $$
Shannon’s Expansion Theorem
The above is Shannon's theorem
it can be written more succinctly as
Step3: ## 2
Step4: let $f(m_1, m_2, m_3)$ be the total set of minterms for a 3-bit then let $m_1$ be designated the select terms then by shannon's theorem states
$$f(m_1, m_2, m_3)=\bar{m_1} \cdot f_1'(0, m_2, m_3)+m_1 \cdot f_1(1, m_2, m_3)$$
in other words we select the two subsets of $f$ where $m_1$ is 1 or 0 and call those two subsets $f_1'$, $f_1$
Step5: $$f(m_1, m_2, m_3)$$
Step6: $$\bar{m_1} \cdot f_1'(0, m_2, m_3)$$
Step7: $$m_1 \cdot f_1(1, m_2, m_3)$$
Step8: and since this is the lowest order mux this case use of shannon's theorem is kind of trivial
myHDL 2
Step9: myHDL 2
Step10: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL 2
Step11: myHDL 2
Step12: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 2
Step13: myHDL 4
Step14: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL 4
Step15: myHDL 4
Step16: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 4
Step17: myHDL 4
Step18: myHDL 4
Step19: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 4
Step20: myHDL Generic Expression via MUXs and Testing
Step21: myHDL Generic Expression via MUXs HDL Synthesis | Python Code:
import numpy as np
import pandas as pd
from sympy import *
init_printing()
from myhdl import *
from myhdlpeek import *
import random
from sympy_myhdl_tools import *
pass
Explanation: \title{Combinational-Circuit Building Blocks aka medium scale integrated circuit (MSI) in myHDL}
\author{Steven K Armour}
\maketitle
Table of Contents
1. Refs
2. Python Libraries Utilized
3. Multiplexers (mux): Shannon's Expansion Theorem; 2:1 MultiPlexer (gate level and behavioral); 4:1 MUX (gate level, behavioral, and behavioral with bitvectors); Generic Expressions via MUXs
4. Demultiplexers
5. Encoders
6. Decoders
# Refs
@book{brown_vranesic_2014,
place={New York, NY},
edition={3},
title={Fundamentals of digital logic with Verilog design},
publisher={McGraw-Hill},
author={Brown, Stephen and Vranesic, Zvonko G},
year={2014}
},
@book{lameres_2017,
title={Introduction to logic circuits & logic design with Verilog},
publisher={springer},
author={LaMeres, Brock J},
year={2017}
},
@misc{peeker_simple_mux,
url={http://www.xess.com/static/media/pages/peeker_simple_mux.html},
journal={Xess.com},
year={2017}
},
# Python Libraries Utilized
End of explanation
def shannon_exspanson(f, term):
    '''f is not a full equation'''
cof0=simplify(f.subs(term, 0)); cof1=simplify(f.subs(term, 1))
return ((~term & cof0 | (term & cof1))), cof0, cof1
Explanation: Multiplexers (mux)
a junction switch between one of n inputs to a single output; equivalent to a "if" or "case" statement
let $Z$ be its output $m_k$ the minterms of the controls to the mux and $I_k$ be the input feeds to the mux; then the expression for the mux in terms of boolean algebra becomes
$$Z=\sum^{2^k-1}_{k=0} m_k \cdot I_k= \text{OR}(m_k \& I_k) $$
Shannon’s Expansion Theorem
The above is Shannon's theorem
it can be written more succinctly as:
$$f(x_1, x_2, ..., x_n)=\bar{x_1}f(0, x_2, ..., x_n)+x_1 f(1, x_2, ..., x_n)$$
and then each of $f(0, x_2, ..., x_n)$ \& $f(1, x_2, ..., x_n)$ is broken down in the same way until the maximum number of select (control) inputs and the minimum number of data inputs are reached
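For a single select variable this is exactly the 2:1 MUX equation: $f(x_1, x_2)=\bar{x_1}f(0, x_2)+x_1 f(1, x_2)$, i.e. $x_1$ drives the select line and the two cofactors drive the data inputs.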
End of explanation
sel, x_1in, x_2in=symbols('sel, x_1in, x_2in')
Explanation: ## 2:1 MultiPlexer
End of explanation
x_1in, x_2in, sel=symbols('x_1in, x_2in, sel')
Explanation: let $f(m_1, m_2, m_3)$ be the function over the full set of minterms for a 3-bit input, and let $m_1$ be designated the select term; then Shannon's theorem states
$$f(m_1, m_2, m_3)=\bar{m_1} \cdot f_1'(0, m_2, m_3)+m_1 \cdot f_1(1, m_2, m_3)$$
in other words we select the two subsets of $f$ where $m_1$ is 1 or 0 and call those two subsets $f_1'$, $f_1$
End of explanation
ConversionTable=pd.DataFrame()
Terms=[bin(i, 3) for i in np.arange(0, 2**3)]
ConversionTable['sel']=[int(j[0]) for j in Terms]
ConversionTable['x_1in']=[int(j[1]) for j in Terms]
ConversionTable['x_2in']=[int(j[2]) for j in Terms]
#this is shannos theorm
ConversionTable['f']=list(ConversionTable.loc[ConversionTable['sel'] == 0]['x_1in'])+list(ConversionTable.loc[ConversionTable['sel'] == 1]['x_2in'])
ConversionTable.index.name='MinMaxTerm'
ConversionTable
POS=list(ConversionTable.loc[ConversionTable['f'] == 0].index)
SOP=list(ConversionTable.loc[ConversionTable['f'] == 1].index)
f"POS: {POS}, SOP:{SOP}"
f, _=POS_SOPformCalcater([sel, x_1in, x_2in], SOP, POS)
f
a, b, c=shannon_exspanson(f, sel)
f,'= via shannaon', a
Explanation: $$f(m_1, m_2, m_3)$$
End of explanation
m1bar_f0=~sel&x_1in; m1bar_f0
f0Table=ConversionTable.loc[ConversionTable['sel'] == 0].copy()
f0Table['f0']=[m1bar_f0.subs({sel:i, x_1in:j}) for i, j in zip(f0Table['sel'], f0Table['x_1in'])]
f0Table
Explanation: $$\bar{m_1} \cdot f_1'(0, m_2, m_3)$$
End of explanation
m1_f1=sel&x_2in; m1_f1
f1Table=ConversionTable.loc[ConversionTable['sel'] == 1].copy()
f1Table['f1']=[m1_f1.subs({sel:i, x_2in:j}) for i, j in zip(f1Table['sel'], f1Table['x_2in'])]
f1Table
Explanation: $$m_1 \cdot f_1(1, m_2, m_3)$$
End of explanation
def mux21_gates(sel, x_1in, x_2in, f_out):
@always_comb
def logic():
f_out.next=(sel and x_2in) or (x_1in and not sel)
return logic
Peeker.clear()
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(sel, 'sel'); Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in')
Peeker(f_out, 'f_out')
DUT=mux21_gates(sel, x_1in, x_2in, f_out)
inputs=[sel, x_1in, x_2in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 2:1 gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
Explanation: and since this is the lowest-order mux, in this case the use of Shannon's theorem is somewhat trivial
myHDL 2:1 MUX Gate Level and Testing
End of explanation
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(mux21_gates, sel, x_1in, x_2in, f_out)
#toVHDL(mux21_gates sel, x_1in, x_2in, f_out)
_=VerilogTextReader('mux21_gates')
Explanation: myHDL 2:1 MUX Gate Level HDL Synthesis
End of explanation
def mux21_behavioral(sel, x_1in, x_2in, f_out):
@always_comb
def logic():
if sel:
f_out.next=x_1in
else:
f_out.next=x_2in
return logic
Peeker.clear()
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(sel, 'sel'); Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in')
Peeker(f_out, 'f_out')
DUT=mux21_behavioral(sel, x_1in, x_2in, f_out)
inputs=[sel, x_1in, x_2in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 2:1 behaviroal type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
Explanation: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL 2:1 MUX Gate level verilog code
<img style="float: center;" src="MUX21GateRTLSch.PNG">
however, as will be shown, gate-level implementation of MUXs is not sustainable in HDL code and thus we will have to use behavioral syntax as follows, though the caveat is that this only works for standard MUXs
2:1 Multiplexer Behavioral
myHDL 2:1 MUX Behavioral Level and Testing
End of explanation
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(mux21_behavioral, sel, x_1in, x_2in, f_out)
#toVHDL(mux21_behavioral sel, x_1in, x_2in, f_out)
_=VerilogTextReader('mux21_behavioral')
Explanation: myHDL 2:1 MUX Behavioral Level HDL Synthesis
End of explanation
def MUX41_gates(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out):
@always_comb
def logic():
f_out.next=((not sel_1) and (not sel_2) and x_1in) or ((not sel_1) and ( sel_2) and x_2in) or (( sel_1) and (not sel_2) and x_3in) or (( sel_1) and ( sel_2) and x_4in)
return logic
Peeker.clear()
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
Peeker(sel_1, 'sel_1'); Peeker(sel_2, 'sel_2');
Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in'); Peeker(x_3in, 'x_3in'); Peeker(x_4in, 'x_4in')
Peeker(f_out, 'f_out')
DUT=MUX41_gates(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
inputs=[sel_1, sel_2, x_1in, x_2in, x_3in, x_4in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 4:1 gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
Explanation: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 2:1 MUX's verilog code
<img style="float: center;" src="MUX21BehavioralRTLSch.PNG">
4:1 MUX
If you try to repeat the above using a 4:1 MUX, which has four input lines and needs two select lines, you can become overwhelmed quickly; instead it is easier to use the following diagram and then synthesize the gate-level architecture
!? Insert Digram below
myHDL 4:1 MUX Gate Level and Testing
End of explanation
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
toVerilog(MUX41_gates, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
#toVHDL(MUX41_gates, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
_=VerilogTextReader('MUX41_gates')
Explanation: myHDL 4:1 MUX Gate Level HDL Synthesis
End of explanation
def MUX41_behavioral(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out):
@always_comb
def logic():
if (not sel_1) and (not sel_2):
f_out.next=x_1in
elif (not sel_1) and sel_2:
f_out.next=x_2in
elif sel_1 and (not sel_2):
f_out.next=x_3in
else:
f_out.next=x_4in
return logic
Peeker.clear()
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
Peeker(sel_1, 'sel_1'); Peeker(sel_2, 'sel_2');
Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in'); Peeker(x_3in, 'x_3in'); Peeker(x_4in, 'x_4in')
Peeker(f_out, 'f_out')
DUT=MUX41_behavioral(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
inputs=[sel_1, sel_2, x_1in, x_2in, x_3in, x_4in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 4:1 behavioral type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
Explanation: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL 4:1 MUX Gate level verilog code
<img style="float: center;" src="MUX41GateRTLSch.PNG">
4:1 Multiplexer Behavioral
As one can clearly see this is not sustainable, and thus 'if' statements need to be used via behavioral logic modeling
myHDL 4:1 MUX Behavioral Level and Testing
End of explanation
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
toVerilog(MUX41_behavioral, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
#toVHDL(MUX41_behavioral, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
_=VerilogTextReader('MUX41_behavioral')
Explanation: myHDL 4:1 MUX Behavioral Level HDL Synthesis
End of explanation
sel=intbv(1)[2:]; x_in=intbv(7)[4:]; f_out=bool(0)
for i in x_in:
print(i)
for i in range(4):
print(x_in[i])
Explanation: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 4:1 MUX's verilog code
<img style="float: center;" src="MUX41BehaviroalRTLSch.PNG">
4:1 Multiplexer Behavioral with bitvectors
taking this a step further, using bit vectors we can implement the behavioral model with vector inputs instead of single-bit inputs as follows
How bit vectors work in myHDL and in Verilog/VHDL
Understanding BitVector bit selection in myHDL
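As a quick illustration of slice selection (assuming myHDL's usual intbv convention that a slice [i:j] takes bits j up to i-1, with the upper index exclusive):
```python
# hypothetical demo values, separate from the MUX signals above
x_demo = intbv(0b1011)[4:]
print(int(x_demo[2:]))   # two least-significant bits -> 3 (0b11)
print(int(x_demo[4:2]))  # two most-significant bits  -> 2 (0b10)
```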
End of explanation
def MUX41_behavioralVec(sel, x_in, f_out):
@always_comb
def logic():
f_out.next=x_in[sel]
return logic
Peeker.clear()
sel=Signal(intbv(0)[2:]); Peeker(sel, 'sel')
x_in=Signal(intbv(0)[4:]); Peeker(x_in, 'x_in')
f_out=Signal(bool(0)); Peeker(f_out, 'f_out')
DUT=MUX41_behavioralVec(sel, x_in, f_out)
def MUX41_behavioralVec_TB(sel, x_in):
selLen=len(sel); x_inLen=len(x_in)
for i in range(2**x_inLen):
x_in.next=i
for j in range(selLen):
sel.next=j
yield delay(1)
now()
im=Simulation(DUT, MUX41_behavioralVec_TB(sel, x_in), *Peeker.instances()).run()
Peeker.to_wavedrom(tock=True,
title='MUX 4:1 behavioral vectype simulation')
MakeDFfromPeeker(Peeker.to_wavejson())
Explanation: myHDL 4:1 MUX Behavioral with BitVecters and Testing
!? This needs to be checked
End of explanation
sel=Signal(intbv(0)[2:]); x_in=Signal(intbv(0)[4:]);
f_out=Signal(bool(0))
toVerilog(MUX41_behavioralVec,sel, x_in, f_out)
#toVHDL(MUX41_behavioralVec,sel, x_in, f_out)
_=VerilogTextReader('MUX41_behavioralVec')
Explanation: myHDL 4:1 MUX Behavioral with BitVecters HDL Synthesis
End of explanation
w1, w2, w3=symbols('w_1, w_2, w_3')
f=(~w1&~w3)|(w1&w2)|(w1&w3)
f
s1=w1
fp, fp0, fp1=shannon_exspanson(f, s1)
fp, fp0, fp1
s2=w2
fpp0, fpp00, fpp01=shannon_exspanson(fp0, s2)
fpp1, fpp10, fpp11=shannon_exspanson(fp1, s2)
fpp0, fpp00, fpp01, fpp1, fpp10, fpp11
Explanation: The following shows the Xilinx's Vivado 2016.1 RTL generated schematic of our myHDL behavioral level 4:1 MUX using Bitvecters verilog code
<img style="float: center;" src="MUX41BehaviroalVecRTLSch.PNG">
Generic Expressions via MUXs
(clean this up and find a harder example)
while Shannon's theorem did not prove very useful in designing a 4:1 MUX, its true power lies in converting Boolean logic expressions from AND/OR gate form to MUXs
using example 4.5 from Brown & Vranesic 3rd Ed
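Concretely, expanding about $w_1$ gives
$$f=\bar{w_1}\cdot\bar{w_3} + w_1\cdot(w_2 + w_3)$$
so a 2:1 MUX with $w_1$ on the select line only needs $\bar{w_3}$ and $(w_2+w_3)$ on its two data inputs; the sympy cofactors computed above confirm this.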
End of explanation
def Shannon21MUX(s1, s2, w_3in, f_out):
@always_comb
def logic():
if (not s1) and (not s2):
f_out.next=not w_3in
elif (not s1) and ( s2):
f_out.next=not w_3in
elif ( s1) and (not s2):
f_out.next= w_3in
else:
f_out.next=1
return logic
Peeker.clear()
s1, s2, w_3in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(s1, 's1'); Peeker(s2, 's2');
Peeker(w_3in, 'w_3in')
Peeker(f_out, 'f_out')
DUT=Shannon21MUX(s1, s2, w_3in, f_out)
inputs=[s1, s2, w_3in, f_out]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='Shannon 2:1 MUX gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
Explanation: myHDL Generic Expression via MUXs and Testing
End of explanation
s1, s2, w_3in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(Shannon21MUX,s1, s2, w_3in, f_out)
#toVHDL(Shannon21MUX, s1, s2, w_3in, f_out)
_=VerilogTextReader('Shannon21MUX')
Explanation: myHDL Generic Expression via MUXs HDL Synthesis
End of explanation |
3,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Geomath - Conhecendo os Recursos
Explorando Pontos
Pontos são a unidade basica da Geometria Analítica, eles são os objetos que podem definir se algo existe ou não existe e muitos outros fatores.
Step1: Entendendo Linhas
Step2: Figuras | Python Code:
from geomath.point import Point
A = Point(0,0)
B = Point(4,4)
A.distance(B)
A.midpoint(B)
B.quadrant()
Explanation: Geomath - Getting to Know the Features
Exploring Points
Points are the basic unit of Analytic Geometry; they are the objects that can define whether something exists or not, and many other factors.
End of explanation
from geomath.line import Line
Linha = Line()
Linha.create_via_equation("1x+2y+3=0")
Linha.equation()
Linha.create(Point(0,0),Point(4,4))
Linha.equation()
Explanation: Understanding Lines
End of explanation
from geomath.figure import Figure
FiguraEstranha = Figure()
FiguraEstranha.add_points([Point(2,10),Point(0,4),Point(0,0),Point(10,5),Point(3,9)])
FiguraEstranha.area()
FiguraEstranha.perimeter()
Explanation: Figures
End of explanation |
3,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This images and the equacions are from
Step1: For now I´m using only the forces b and d, the force b are the force applied in the point b, and the force d are the force applied on the point d.
The forces below (Force_ab and Force_cd) are the forces maked by the SMAs.
Step2: Here we will put the code of the SMA model.
The code above are incomplete.
Step4: The function below gives us the direction of the forces, using the concept of the Unit vector.
Step5: The functoin Directed_Force multiply the force by the Unit vector.
Giving the directed force.
Step6: For now the balance of forces have just two forces, but this number will be increased as the problem is being better prepared.
Below are placed the main function. | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
#Ipython Libraries
# Remember to remove when you pass to Spyder.
from IPython.display import Image
Image(filename='axis.png')
a1 = 1
a2 = 1
b1 = 1
b2 = 1
c1 = 1
c2 = 1
d1 = 1
d2 = 1
theta = math.radians(56.6737103129) # Here this angle will be converted from degrees to radians
inicial_linear_spring_lenght = 0.1 # This is the lenght of the spring
k = 10 # linear constant of the linear spring
Image(filename='forces.png')
Explanation: These images and the equations are from: Cássio Thomé de Faria, "Controle da Variação do Arqueamento de um Aerofólio", 2010
End of explanation
def Linear_spring(k,inicial_linear_spring_lenght,x1,y1,x2,y2,theta):
'''Calculate the final lenght of the spring'''
def r_ab(inicial_linear_spring_lenght,x1,y1,x2,y2,theta):
x = x1*math.cos(theta) - y2*math.sin(theta)+x1
y = x1*math.sin(theta) + y2*math.cos(theta)-y1
z = 0
final_linear_spring_lenght = math.sqrt(x**2 + y**2 + z**2)
return final_linear_spring_lenght
def delta_r_ab(inicial_lenght,final_lenght):
        '''Calculate the difference between the final and the initial length of the spring'''
delta_linear_spring_lenght = abs(final_lenght - inicial_lenght)
return delta_linear_spring_lenght
lenght = r_ab(inicial_linear_spring_lenght,x1,y1,x2,y2,theta)
delta_r = delta_r_ab(inicial_linear_spring_lenght,lenght)
    force = k*delta_r # Calculate the force produced by the linear spring (F = k*dx)
return force
print(Linear_spring(k,inicial_linear_spring_lenght,a1,a2,b1,b2,theta))
Explanation: For now I'm using only the forces b and d; force b is the force applied at point b, and force d is the force applied at point d.
The forces below (Force_ab and Force_cd) are the forces produced by the SMAs.
End of explanation
def Brinson_spring(inicial_linear_spring_lenght,x1,y1,x2,y2,theta):
def x_ab(inicial_linear_spring_lenght,x1,y1,x2,y2,theta):
x = x1*math.cos(theta) - y2*math.sin(theta)+x1
y = x1*math.sin(theta) + y2*math.cos(theta)-y1
z = 0
final_linear_spring_lenght = math.sqrt(x**2 + y**2 + z**2)
return final_linear_spring_lenght
def delta_x_ab(inicial_lenght,final_lenght):
delta_linear_spring_lenght = final_lenght - inicial_linear_spring_lenght
return delta_linear_spring_lenght
return 2 # its a number (now this force is a constante force)
Explanation: Here we will put the code of the SMA model.
The code above is incomplete.
End of explanation
def Unit_vector(x1,y1,x2,y2,theta):
def Vector(x1,y1,x2,y2,theta):
        '''Create the vector that will be used to get the direction of the force produced by the SMA relative to the x axis'''
x = x2*math.cos(theta) - y2*math.sin(theta)+x1
y = x2*math.sin(theta) + y2*math.cos(theta)-y1
vector = [x, y , 0]
return vector
def Length_r(x1,y1,x2,y2,theta):
        '''Calculate the length between points A and B using Pythagoras.
        Inputs: - coordinates x and y of points 1 and 2
                - theta: rotation
        '''
x = x2*math.cos(theta) - y2*math.sin(theta) + x1
y = x2*math.sin(theta) + y2*math.cos(theta) - y1
z = 0
length = math.sqrt(x**2 + y**2 + z**2)
return length
vec = Vector(x1,y1,x2,y2,theta)
length = Length_r(x1,y1,x2,y2,theta)
length = length*(-1)
    # map applies the division to every item of the iterable; wrap it in list() so a list is returned (Python 3)
    unit = list(map(lambda x: x/length, vec))
return unit
print(Unit_vector(a1,a2,b1,b2,theta))
print(Unit_vector(c1,-c2,d1,-d2,theta))
Explanation: The function below gives us the direction of the forces, using the concept of the Unit vector.
End of explanation
def Directed_Force(Force,Unit_Vector):
''' This function directs the force produced to the -(x-axis) in the cartesian coordinate system'''
direction = Unit_Vector
    # map applies the multiplication to every item of the iterable; wrap it in list() so a list is returned (Python 3)
    force = list(map(lambda x: x*Force, Unit_Vector))
return force
Explanation: The function Directed_Force multiplies the force by the unit vector,
giving the directed force.
End of explanation
unit_vector_ab = Unit_vector(a1,a2,b1,b2,theta)
unit_vector_cd = Unit_vector(c1,-c2,d1,-d2,theta)
force_ab = Linear_spring(k,inicial_linear_spring_lenght,a1,a2,b1,b2,theta)
#force_cd = Force_cd(1,x_cd) # STILL NEED TO CALCULATE WHAT THE DISPLACEMENT x_cd IS
'''The forces, b and c, are the forces produced by the SMAs in the points B and C.'''
force_b = Directed_Force(force_ab,unit_vector_ab)
#force_d = Directed_Force(force_cd,unit_vector_cd)
# the force d is the force produced by the SMA
print(force_b)
Explanation: For now the balance of forces has just two forces, but this number will be increased as the problem is refined.
The main calculation is placed below.
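Once the SMA (Brinson) model provides force_cd, the balance can be formed by summing the directed force vectors component-wise. A minimal sketch (hypothetical: force_d is filled with zeros here as a stand-in):
```python
force_d = [0.0, 0.0, 0.0]  # placeholder until the Brinson_spring model is completed
net_force = [fb + fd for fb, fd in zip(force_b, force_d)]
```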
End of explanation |
3,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A significant portion of the time you spend on the problem sets in CogSci131 will be spent debugging. In this notebook we discuss simple strategies to minimize hair loss and maximize coding pleasure. This problem is not worth any points, but we strongly encourage you to still go through it -- it will save you a ton of time in the future!
Writing Readable Code
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Brian Kernighan
The number one key to easy debugging is writing readable code. A few helpful tips
Step1: Unfortunately, when you go to execute the code block, Python throws an error. Some friend! How can we fix the code so that it runs correctly?
Check the traceback
The presence of a traceback (the multicolored text that appears when we try to run the preceding code block) is the first indication that your code isn't behaving correctly. In the current example the traceback suggests that an error is occurring at the method call axis.plot(x, np.log(x)) on line 10. This is helpful, although somewhat baffling -- we used the same axis.plot() syntax in Notebook 4, and it ran fine! What's going on?
Inspect the local variables
Inspecting the traceback gives us a general idea of where our issue is, but its output can often be fairly cryptic. A good next step is to inspect the local variables and objects defined during the execution of your code
Step2: <div class="alert alert-danger">
Warning
Step3: Using print Statements
An alternative technique for inspecting the behavior of your code is to check the values of local variables using print statements. The print command evaluates its argument and writes the result to the standard output. We could use a print statement to inspect the ax object in our plot_log function as follows
Step4: This runs the code to completion, resulting in the same error we saw earlier. However, because we placed the print statement in our code immediately before the error occurred, we see that IPython also printed the contents of the axis object above the traceback. Thus, print statements are an alternative means of checking the values of local variables without using the IPython debugger. Just remember to remove the print statements before validating your code!
Getting Help
Check the Docs
Although we will try to make this course as self-contained as possible, you may still need to refer to external sources while solving the homework problems. You can look up the documentation for a particular function within the IPython notebook by creating a new code block and typing the name of the function either preceded or succeeded by a ?. For example, if you wanted to see the documentation for the matplotlib method subplots, you could write ?plt.subplots (or plt.sublots?), which will open a pager displaying the docstring for plt.subplots | Python Code:
# for inline plotting in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def plot_log():
figure, axis = plt.subplots(2, 1)
x = np.linspace(1, 2, 10)
axis.plot(x, np.log(x))
plt.show()
plot_log() # Call the function, generate plot
Explanation: A significant portion of the time you spend on the problem sets in CogSci131 will be spent debugging. In this notebook we discuss simple strategies to minimize hair loss and maximize coding pleasure. This problem is not worth any points, but we strongly encourage you to still go through it -- it will save you a ton of time in the future!
Writing Readable Code
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Brian Kernighan
The number one key to easy debugging is writing readable code. A few helpful tips:
1. Write short notes to yourself in the comments. These will help you to quickly orient yourself.
2. Use descriptive variable names. Avoid naming variables things like a or foo, as you will easily forget what they were used for.
1. An exception to this rule is when using temporary variables (e.g., counts), which can be as short as a single character.
3. Try to write your code in a consistent style to ensure that it is predictable across problem sets. You'll thank yourself for this later!
4. Don't reinvent the wheel. Check the docs to see if a particular function exists before you spend hours trying to implement it on your own. You'd be surprised at how often this happens.
Although these tips won't save you from having to debug your code, they will make the time you spend debugging much more productive.
Debugging in the IPython Notebook
Imagine that a friend wrote you a function plot_log for plotting the function $\log(x)$ over the interval $[1,2]$. How sweet! Their code is below:
End of explanation
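First, a tiny, purely hypothetical illustration of tip 2 (descriptive variable names): both snippets below do the same thing, but the second is far easier to debug months later.
```python
# Hard to remember what these mean later
a = 9.81
def f(m):
    return m * a

# Self-documenting version
gravity_m_per_s2 = 9.81
def weight_newtons(mass_kg):
    return mass_kg * gravity_m_per_s2
```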
# Uncomment the following line and run the cell to debug the previous function:
#%debug
Explanation: Unfortunately, when you go to execute the code block, Python throws an error. Some friend! How can we fix the code so that it runs correctly?
Check the traceback
The presence of a traceback (the multicolored text that appears when we try to run the preceding code block) is the first indication that your code isn't behaving correctly. In the current example the traceback suggests that an error is occurring at the method call axis.plot(x, np.log(x)) on line 10. This is helpful, although somewhat baffling -- we used the same axis.plot() syntax in Notebook 4, and it ran fine! What's going on?
Inspect the local variables
Inspecting the traceback gives us a general idea of where our issue is, but its output can often be fairly cryptic. A good next step is to inspect the local variables and objects defined during the execution of your code: if there's a mismatch between what the code should be generating on each line and what it actually generates, you can trace it back until you've found the line containing the bug.
In the current example, we might first try inspecting the variables and objects present on the line where the traceback indicates our error is occurring. These include the axis object, the local variable x, and the method call np.log(x). We can do this with IPython's debug magic function, or by using print statements. Both methods are outlined below.
Using the Interactive Debugger
The IPython magic function %debug pauses code execution upon encountering an error and drops us into an interactive debugging console. In the current example, this means that the code execution will pause just before running line 7. Once the debugger opens, we can inspect the local variables in the interactive debugger to see whether they match what we'd expect.
To invoke the debugger, just type the magic command %debug in a new code cell immediately after encountering the error. When you run this new cell, it will drop you into a debugger where you can investigate what went wrong:
End of explanation
def plot_log():
figure, axis = plt.subplots()
x = np.linspace(1, 2, 10)
axis.plot(x, np.log(x))
plt.show()
plot_log() # Call the function, generate plot
Explanation: <div class="alert alert-danger">
Warning: make sure you remove or comment out any <code>%debug</code> statements from your code before turning in your problem set. If you do not, then they will cause the grading scripts to break and you may not receive full credit. Always make sure you run the <code>nbgrader validate</code> commands in <a href="Submit.ipynb">Submit.ipynb</a> and ensure that they complete properly before submitting your assignment!
</div>
If you run the above code block, you should see something like this:
```
<ipython-input-4-cf8c844b7e23>(5)plot_log()
4 x = np.linspace(1, 2, 10)
----> 5 axis.plot(x, np.log(x))
6 plt.show()
ipdb>
The presence of the `ipdb>` prompt at the bottom indicates that we have entered the IPython debugger. Any command you enter here will be evaluated and its output will be returned in the console. To see the stock commands available within the debugger, type `h` (short for "help") at the prompt.
ipdb> h
Documented commands (type help <topic>):
========================================
EOF bt cont enable jump pdef r tbreak w
a c continue exit l pdoc restart u whatis
alias cl d h list pinfo return unalias where
args clear debug help n pp run unt
b commands disable ignore next q s until
break condition down j p quit step up
Miscellaneous help topics:
exec pdb
Undocumented commands:
retval rv
For information on a particular command, you can type `h` followed by the command. For example, to see what the `c` command does, type
ipdb> h c
c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.
We can use the debugger to inspect the contents of the variables and objects defined so far in our code. For example, we can inspect the contents of our `axis` object by typing `axis` at the `ipdb>` prompt:
ipdb> axis
array([<matplotlib.axes._subplots.AxesSubplot object at 0x10a5e8950>,
<matplotlib.axes._subplots.AxesSubplot object at 0x108dcf790>], dtype=object)
Aha! Instead of a single instance of the matplotlib.axes class (as we might expect), it appears that axis is actually an array containing two separate matplotlib.axes instances. Why might this be? Tracing the axis object back to its definition on line 3, we see that the subplots method is the culprit. Looking up subplots in the matplotlib documentation, we see that this method returns as many axis objects as there are cells in a subplot grid. In our case, since we specified a grid of size $2 \times 1$, it returned two separate axis objects inside a single array. When we asked Python to access the plot method of our array on line 7, it understandably got confused -- arrays don't have a plot method! With this in mind, we can adjust our code to resolve the issue. One solution would be ignore the second subplot entirely:
End of explanation
def plot_log():
figure, axis = plt.subplots(2,1)
x = np.linspace(1, 2, 10)
print(axis)
axis.plot(x, np.log(x))
plt.show()
plot_log() # Call the function, generate plot
Explanation: Using print Statements
An alternative technique for inspecting the behavior of your code is to check the values of local variables using print statements. The print command evaluates its argument and writes the result to the standard output. We could use a print statement to inspect the ax object in our plot_log function as follows:
End of explanation
?plt.subplots
Explanation: This runs the code to completion, resulting in the same error we saw earlier. However, because we placed the print statement in our code immediately before the error occurred, we see that IPython also printed the contents of the axis object above the traceback. Thus, print statements are an alternative means of checking the values of local variables without using the IPython debugger. Just remember to remove the print statements before validating your code!
Getting Help
Check the Docs
Although we will try to make this course as self-contained as possible, you may still need to refer to external sources while solving the homework problems. You can look up the documentation for a particular function within the IPython notebook by creating a new code block and typing the name of the function either preceded or succeeded by a ?. For example, if you wanted to see the documentation for the matplotlib method subplots, you could write ?plt.subplots (or plt.subplots?), which will open a pager displaying the docstring for plt.subplots:
End of explanation |
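If you are working outside the IPython notebook (or just prefer plain Python), the built-in help() function and the __doc__ attribute expose the same information; for example:
```python
import matplotlib.pyplot as plt

help(plt.subplots)            # prints the full docstring
print(plt.subplots.__doc__)   # or access the raw docstring directly
```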
3,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
"""DO NOT MODIFY THIS CELL"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
"""DO NOT MODIFY THIS CELL"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
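Since 'same' padding gives an output spatial size of ceil(input / stride), a quick back-of-the-envelope loop shows how small the feature maps get before the flatten in the network built below:
```python
import math

# Mirror the stride pattern used in conv_layer for the 19 layers
size = 28                                    # MNIST images are 28x28
for layer_depth in range(1, 20):
    stride = 2 if layer_depth % 3 == 0 else 1
    size = math.ceil(size / stride)
print(size)  # 1 -> the maps shrink to 1x1 before the fully connected layer
```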
"""DO NOT MODIFY THIS CELL"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, isTraining):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=None)
# We can also set use_bias=False, because batch normalization has its own bias (beta)
layer = tf.layers.batch_normalization(inputs=layer, training=isTraining)
layer = tf.nn.relu(layer)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
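For reference, this is the standard batch-normalization transform (nothing specific to this notebook): during training each pre-activation $x_i$ in a mini-batch of size $m$ is normalized with the batch statistics and then re-scaled with the learned parameters $\gamma$ and $\beta$,
$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i-\mu_B)^2, \qquad \hat{x}_i = \frac{x_i-\mu_B}{\sqrt{\sigma_B^2+\epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta$$
At inference time the batch statistics are replaced by running (population) estimates, which is why the layers below have to know whether or not they are training.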
def conv_layer(prev_layer, layer_depth, isTraining):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
# We can also set use_bias=False, because batch normalization has its own bias (beta)
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(inputs=conv_layer, training=isTraining)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
isTrainingP = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, isTrainingP)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, isTrainingP)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Force control dependencies to update batch normalization population statistics
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, isTrainingP: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
isTrainingP: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, isTrainingP: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
isTrainingP: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
isTrainingP: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
isTrainingP: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
# Define variables for batch_normalization
gamma = tf.Variable(tf.ones(num_units))
beta = tf.Variable(tf.zeros(num_units))
epsilon = 0.001
pop_mean = tf.Variable(tf.zeros(num_units), trainable=False)
pop_variance = tf.Variable(tf.ones(num_units), trainable=False)
layer = tf.layers.dense(prev_layer, num_units, activation=None, use_bias=False)
# We need to define 2 functions to use tf.cond on a bool placeholder
def training_batch_normalization():
# Update population statistics
batch_mean, batch_variance = tf.nn.moments(layer, [0])
decay = 0.99
update_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
update_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# Force the population statistics update
with tf.control_dependencies([update_mean, update_variance]):
return tf.nn.batch_normalization(x=layer, mean=batch_mean, variance=batch_variance,
offset=beta, scale=gamma, variance_epsilon=epsilon)
def test_batch_normalization():
return tf.nn.batch_normalization(x=layer, mean=pop_mean, variance=pop_variance,
offset=beta, scale=gamma, variance_epsilon=epsilon)
layer = tf.cond(is_training, training_batch_normalization, test_batch_normalization)
return tf.nn.relu(layer)
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
#bias = tf.Variable(tf.zeros(out_channels))
# Define variables for batch_normalization (normalize each filter map)
gamma = tf.Variable(tf.ones(out_channels))
beta = tf.Variable(tf.zeros(out_channels))
epsilon = 0.001
pop_mean = tf.Variable(tf.zeros(out_channels), trainable=False)
pop_variance = tf.Variable(tf.ones(out_channels), trainable=False)
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
# We need to define 2 functions to use tf.cond on a bool placeholder
def training_batch_normalization():
# Update population statistics
# BHWC, we calculate moments for each channel
batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)
decay = 0.99
update_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
update_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# Force the population statistics update
with tf.control_dependencies([update_mean, update_variance]):
return tf.nn.batch_normalization(x=conv_layer, mean=batch_mean, variance=batch_variance,
offset=beta, scale=gamma, variance_epsilon=epsilon)
def test_batch_normalization():
# Use population statistics
return tf.nn.batch_normalization(x=conv_layer, mean=pop_mean, variance=pop_variance,
offset=beta, scale=gamma, variance_epsilon=epsilon)
conv_layer = tf.cond(is_training, training_batch_normalization, test_batch_normalization)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training_p = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training_p)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training_p)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training_p: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training_p: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training_p: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training_p: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training_p: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training_p: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
3,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Numerical Integration
Step3: Below is, mathematically, $f_{-h}
Step4: Then, we can use sympy to calculate, symbolically, $f_{h}
Step5: Success! Trapezoid rule was rederived (stop using pen/pencil and paper or chalkboard; computers can do computations faster and without mistakes)
For a second order polynomial, $p_{N=2}(x)$,
Step6: Legendre Polynomials
I don't find the sympy documentation very satisfying (other than listing the argument syntax, no examples of usage, nor further explanation, beyond the barebones argument syntax, is given). So what I've done here is to try to show what I've done. | Python Code:
from itertools import combinations
import sympy
from sympy import Function, integrate, Product, Sum, Symbol, symbols
from sympy.abc import a,b,h,i,k,m,n,x
from sympy import Rational as Rat
def lagrange_basis_polys(N,x,xpts=None):
'''lagrange_basis_polynomials(N,x,xpts)
returns the Lagrange basis polynomials as a list
INPUTS/PARAMETERS
-----------------
<int> N - N > 0. Note that there are N+1 points total
<sympy.Symbol> x
<list> xpts
'''
assert N > 0
if xpts != None:
assert len(xpts) == N + 1
if xpts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
xpts = symbols('x0:'+str(N+1))
basis_polys = []
for i in range(N+1):
tmpprod = Rat(1)
for k in [k for k in range(N+1) if k != i]:
tmpprod = tmpprod * (x - xpts[k])/(xpts[i]-xpts[k])
basis_polys.append(tmpprod)
return basis_polys
def lagrange_interp(N,x,xpts=None,ypts=None):
'''lagrange_interp(N,x,xpts,ypts)
Lagrange interpolation formula
'''
if xpts != None and ypts != None:
assert len(xpts) == len(ypts)
if xpts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
xpts = symbols('x0:'+str(N+1))
if ypts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
ypts = symbols('y0:'+str(N+1))
basis = lagrange_basis_polys(N,x,xpts)
p_N = sum( [ypts[i]*basis[i] for i in range(N+1)] )
return p_N
xpts = symbols('x0:'+str(1+1))
ypts = symbols('y0:'+str(1+1))
p_1x = lagrange_interp(1,x,xpts,ypts)
Explanation: Numerical Integration
End of explanation
x_0 = Symbol('x_0',real=True)
f = Function('f')
f_minush = p_1x.subs({xpts[0]:x_0-h,xpts[1]:x_0, ypts[0]:f(x_0-h), ypts[1]:f(x_0) })
integrate( f_minush, (x,x_0-h,x_0 ) )
Explanation: Below is, mathematically, $f_{-h} := p_1(x)$ with $(x_0,y_0) = (x_0-h, f(x_0-h)), (x_1,y_1) = (x_0,f(x_0))$ and
$\int_{x_0-h}^{x_0} f_{-h}$
End of explanation
f_h = p_1x.subs({xpts[0]:x_0,xpts[1]:x_0+h, ypts[0]:f(x_0), ypts[1]:f(x_0+h) })
integrate( f_h, (x,x_0,x_0+h ) )
( integrate( f_minush, (x,x_0-h,x_0 ) ) + integrate( f_h, (x,x_0,x_0+h ) ) ).simplify()
Explanation: Then, we can use sympy to calculate, symbolically, $f_{h} := p_1(x)$ with $(x_0,y_0) = (x_0, f(x_0)), (x_1,y_1) = (x_0+h,f(x_0+h))$ and
$\int_{x_0}^{x_0+h} f_{h}$
End of explanation
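If the algebra goes through, the simplified sum from the cell above is just the two-panel (composite) trapezoid rule:
$$\int_{x_0-h}^{x_0+h} p_1(x)\,dx = \frac{h}{2}\Big(f(x_0-h) + 2f(x_0) + f(x_0+h)\Big)$$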
xpts = symbols('x0:'+str(2+1))
ypts = symbols('y0:'+str(2+1))
p_2x = lagrange_interp(2,x,xpts,ypts)
f2_h = p_2x.subs({xpts[0]:x_0-h,xpts[1]:x_0,xpts[2]:x_0+h,ypts[0]:f(x_0-h), ypts[1]:f(x_0),ypts[2]:f(x_0+h) })
integrate( f2_h,(x,x_0-h,x_0+h)).simplify()
Explanation: Success! Trapezoid rule was rederived (stop using pen/pencil and paper or chalkboard; computers can do computations faster and without mistakes)
For a second order polynomial, $p_{N=2}(x)$,
End of explanation
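For reference, the simplified result of the $p_2$ integral above should be Simpson's rule:
$$\int_{x_0-h}^{x_0+h} p_2(x)\,dx = \frac{h}{3}\Big(f(x_0-h) + 4f(x_0) + f(x_0+h)\Big)$$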
from sympy.polys.orthopolys import legendre_poly
print "n \t \t \t \t P_n(x) \n"
for i in range(11):
print str(i) + "\t \t \t \t " , legendre_poly(i,x)
sympy.latex(legendre_poly(2,x))
sympy.N( sympy.integrate(1/(2+x**2),(x,0,3)) )
Explanation: Legendre Polynomials
I don't find the sympy documentation very satisfying: beyond the bare argument syntax, no usage examples or further explanation are given. So what I've done here is show, through a few examples, how these functions behave.
End of explanation |
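As one small usage example, the orthogonality of the Legendre polynomials on $[-1,1]$ can be checked symbolically:
```python
from sympy import integrate
from sympy.abc import x
from sympy.polys.orthopolys import legendre_poly

# <P_2, P_3> vanishes; <P_2, P_2> equals 2/(2*2 + 1)
print(integrate(legendre_poly(2, x) * legendre_poly(3, x), (x, -1, 1)))  # 0
print(integrate(legendre_poly(2, x) * legendre_poly(2, x), (x, -1, 1)))  # 2/5
```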
3,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manual publication DB insertion from raw text using syntax features
Publications and conferences of Dr. AVRAM Sanda, Profesor Universitar
http
Step1: 47 pubs obtained
DB Storage (TODO)
Time to store the entries in the papers DB table. | Python Code:
class HelperMethods:
@staticmethod
def IsDate(text):
# print("text")
# print(text)
for c in text.lstrip():
if c not in "1234567890 ":
return False
return True
import pandas
import requests
page = requests.get('http://www.cs.ubbcluj.ro/~sanda/html/publications/')
data = page.text
from bs4 import BeautifulSoup
soup = BeautifulSoup(data)
pubs = []
for e in soup.find_all('li'):
if "value" in e.attrs:
# print(e.contents)
line = e.contents
authors = line[0].lstrip('\n ').rstrip('\n ')
print("authors: ", authors)
title = line[2]
print("title: ", title.text)
affiliation = line[5]
print("affiliation: ", affiliation.contents[0])
url = line[5].attrs["href"] if "href" in line[5].attrs else line[5]
print("url: ", url)
year = ""
try:
year_line = line[8].split()
for i in year_line:
val = i.split(',')[0]
if HelperMethods.IsDate(val):
year = val
print("year: ", year)
except:
pass
#year = [k[0] for k in line[8].split(" ") if HelperMethods.IsDate(k[0])][0]
#print("year: ", year)
pubs.append((authors, title, affiliation, url, year))
print(len(pubs))
for pub in pubs:
print(pub)
Explanation: Manual publication DB insertion from raw text using syntax features
Publications and conferences of Dr. AVRAM Sanda, Profesor Universitar
http://www.cs.ubbcluj.ro/~sanda
End of explanation
import mariadb
import json
with open('../credentials.json', 'r') as crd_json_fd:
json_text = crd_json_fd.read()
json_obj = json.loads(json_text)
credentials = json_obj["Credentials"]
username = credentials["username"]
password = credentials["password"]
table_name = "publications_cache"
db_name = "ubbcluj"
mariadb_connection = mariadb.connect(user=username, password=password, database=db_name)
mariadb_cursor = mariadb_connection.cursor()
print(table_name)
for paper in pubs:
title = ""
pub_date = ""
affiliations = ""
authors = ""
print(paper)
print()
try:
pub_date = paper[4].lstrip()
pub_date = str(pub_date) + "-01-01"
if len(pub_date) != 10:
pub_date = ""
except:
pass
try:
authors = paper[0].lstrip()
except Exception as e:
print(e)
try:
affiliations = paper[2].text.lstrip()
except Exception as e:
print(e)
try:
# print(type(paper[1]))
title = paper[1].text
if ('\'') in title:
title = title.split('\'')[0]
except AttributeError:
pass
insert_string = "INSERT INTO {0} SET ".format(table_name) # OK
insert_string += "Title=\'{0}\', ".format(title) # OK
insert_string += "ProfessorId=\'{0}\', ".format(6) # OK
if pub_date != "":
insert_string += "PublicationDate=\'{0}\', ".format(str(pub_date)) # TODO
insert_string += "Authors=\'{0}\', ".format(authors) # OK
insert_string += "Affiliations=\'{0}\' ".format(affiliations) # OK
print(insert_string)
try:
mariadb_cursor.execute(insert_string)
except mariadb.ProgrammingError as pe:
print("Error")
raise pe
except mariadb.IntegrityError:
continue
mariadb_connection.close()
Explanation: 47 pubs obtained
DB Storage (TODO)
Time to store the entries in the papers DB table.
End of explanation |
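One caveat with building the INSERT via format(): any value containing a quote breaks the statement (which is why titles are truncated at ' above), and hostile scraped text could inject SQL. A parameterized query avoids both; a rough sketch, assuming the connector's ? placeholder style and the same columns as above:
```python
# Hypothetical parameterized version of the insert built above
insert_sql = (
    "INSERT INTO publications_cache "
    "(Title, ProfessorId, PublicationDate, Authors, Affiliations) "
    "VALUES (?, ?, ?, ?, ?)"
)
mariadb_cursor.execute(
    insert_sql,
    (title, 6, pub_date if pub_date != "" else None, authors, affiliations),
)
```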
3,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 11
Input File Interlude
Wednesday, October 11th 2017
Input Files and Parsing
We usually want to read data into our software
Step1: Looping Over Child Elements
Step2: Accessing Children by Index
Step3: The Element.iter() Method
From the documentation
Step4: The Element.findall() Method
From the documentation | Python Code:
import xml.etree.ElementTree as ET
tree = ET.parse('shelterdogs.xml')
dogshelter = tree.getroot()
print(dogshelter)
print(dogshelter.tag)
print(dogshelter.attrib)
Explanation: Lecture 11
Input File Interlude
Wednesday, October 11th 2017
Input Files and Parsing
We usually want to read data into our software:
* Input parameters to the code (e.g. time step, linear algebra solvers, physical parameters, etc)
* Input fields (e.g. fields to visualize)
* Calibration data
* $\vdots$
This data can be provided by us, or the client, or come from a database somewhere.
There are many ways of reading in and parsing data. In fact, this is often a non-trivial exercise depending on the quality of the data as well as its size.
Our immediate concern will be with how to read chemical reaction information into our chemical kinetics code.
Many kinetics codes read reaction information in from files in .xml format.
XML Intro
```xml
<?xml version="1.0"?>
<ctml>
<reactionData id="test_mechanism">
<!-- reaction 01 -->
<reaction reversible="yes" type="Elementary" id="reaction01">
<equation>H + O2 [=] OH + O</equation>
<rateCoeff>
<Kooij>
<A units="cm3/mol/s">3.52e+16</A>
<b>-0.7</b>
<E units="kJ/mol">71.4</E>
</Kooij>
</rateCoeff>
<reactants>H:1 O2:1</reactants>
<products>OH:1 O:1</products>
</reaction>
<!-- reaction 02 -->
<reaction reversible="yes" type="Elementary" id="reaction02">
<equation>H2 + O [=] OH + H</equation>
<rateCoeff>
<Kooij>
<A units="cm3/mol/s">5.06e+4</A>
<b>2.7</b>
<E units="kJ/mol">26.3</E>
</Kooij>
</rateCoeff>
<reactants>H2:1 O:1</reactants>
<products>OH:1 H:1</products>
</reaction>
</reactionData>
</ctml>
```
What is XML?
Note: Material presented here taken from the following sources
* w3schools XML tutorial
* Python xml.etree.ElementTree documentation
* XML Documentation
* XML Wikipedia Page
Some basic XML comments:
* XML stands for Extensible Markup Language
* XML is just information wrapped in tags
* It doesn't do anything per se
* Its format is both machine- and human-readable
What is our business with XML?
We need to know enough about XML to be able to read in chemical reactions to our chemical kinetics library.
To accomplish this, we must know a little bit about the structure of XML and what Python libraries are out there to help us actually do the parsing.
Some Basic XML Anatomy
```xml
<!-- This is an XML comment -->
<?xml version="1.0" encoding="UTF-8"?>
<dogshelter> <!-- This is the root element -->
<dog id="dog1"> <!-- This is the first child element.
It has an id attribute -->
<name> Cloe </name> <!-- First subchild element -->
<age> 3 </age> <!-- Second subchild element -->
<breed> Border Collie </breed>
<playgroup> Yes </playgroup>
</dog>
<dog id="dog2">
<name> Karl </name>
<age> 7 </age>
<breed> Beagle </breed>
<playgroup> Yes </playgroup>
</dog>
</dogshelter>
```
Note that all XML elements have a closing tag!
Some More Basic XML Anatomy
See w3schools XML tutorial for a very nice summary of the essential XML rules.
XML elements: a few things to be aware of:
* Elements can contain text, attributes, and other elements
* XML names are case sensitive and cannot contain spaces
* Be consistent in your naming convention
XML attributes: a few things to be aware of:
* XML attributes must be in quotes
* There are no rules about when to use elements or attributes
- You could make an attribute an element and it might read better
* Rule of thumb: Data should be stored as elements. Metadata should be stored as attributes.
Python and XML
We will use the ElementTree class to read in and parse XML input files in Python.
A very nice tutorial can be found in the
Python ElementTree documentation.
We'll work with the shelterdogs.xml file to start.
<!-- This is the optional XML prolog -->
End of explanation
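The same Element objects can also be used to build an XML file in code, which is handy for writing small test inputs; a minimal sketch (the file name here is made up):
```python
import xml.etree.ElementTree as ET

root = ET.Element('dogshelter')
dog = ET.SubElement(root, 'dog', id='dog3')   # extra keyword args become attributes
ET.SubElement(dog, 'name').text = 'Rex'
ET.SubElement(dog, 'age').text = '5'

ET.ElementTree(root).write('new_shelterdogs.xml')
```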
for child in dogshelter:
print(child.tag, child.attrib)
Explanation: Looping Over Child Elements
End of explanation
print(dogshelter[0][0].text)
print(dogshelter[1][0].text)
print(dogshelter[0][2].text)
Explanation: Accessing Children by Index
End of explanation
for age in dogshelter.iter('age'):
print(age.text)
Explanation: The Element.iter() Method
From the documentation:
Creates a tree iterator with the current element as the root. The iterator iterates over this element and all elements below it, in document (depth first) order.
End of explanation
print(dogshelter.findall('dog'))
for dog in dogshelter.findall('dog'): # Iterate over each child
print('ID: {}'.format(dog.get('id'))) # Use the get() method to get the attribute of the child
print('----------')
print('Name: {}'.format(dog.find('name').text)) # Use the find() method to find a specific subchild
age = float(dog.find('age').text)
if (dog.find('age').attrib == 'months'):
years = age / 12.0
print('Age: {} years'.format(years))
else:
print('Age: {} years'.format(age))
print('Breed: {}'.format(dog.find('breed').text))
if (dog.find('playgroup').text.split()[0] == 'Yes'):
print('PLAYGROUP')
else:
print('NO PLAYGROUP')
print('\n::::::::::::::::::::\n')
Explanation: The Element.findall() Method
From the documentation:
Finds all matching subelements, by tag name or path. Returns a list containing all matching elements in document order.
End of explanation |
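findall also accepts a limited XPath subset, which is convenient for larger files; for example, with the same dogshelter tree:
```python
# Select a dog by its id attribute, then read one of its fields
karl = dogshelter.findall("dog[@id='dog2']")[0]
print(karl.find('name').text)   # ' Karl ' (whitespace from the XML is preserved)

# Or collect every <name> element in one pass
print([name.text for name in dogshelter.findall('./dog/name')])
```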
3,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We can then canonicalize the MPS
Step1: And we can compute the inner product as
Step2: This relies on them sharing the same physical indices, site_ind_id,
which the conjugated copy p.H naturally does.
Like any TN, we can graph the overlap for example, and make use of the
site tags to color it
Step3: Which doubles the bond dimension, as expected, but should still be normalized
Step4: Because the MPS is the addition of two identical states, it should also compress right back down
Step5: Where we have also set the orthogonality center at the site 10.
When tensor networks are imbued with a structure, they
can be indexed with integers and slices, which automatically get
converted using TN.site_tag_id
Step6: Note the tensor has matching physical index 'k10'.
This tensor is the orthogonality center so
Step7: Or equivalently
Step8: If two tensor networks with the same structure are combined, it is propagated.
For example (p2.H & p2) can still be sliced.
Since the MPS is in canonical form, left and right pieces of the overlap
should form the identity. The following forms a TN of the inner product,
selects the 2 tensors corresponding to the last site (-1), contracts them,
then gets the underlying data
Step9: Compute the actual contraction (... means contract everything, but use the structure if possible)
Step10: The DMRG object will automatically detect OBC/PBC. Now we can solve to a certain absolute energy tolerance, showing progress and a schematic of the final state
Step11: Now we are ready to evolve. By setting a tol, the required timestep dt is computed for us
Step12: After the evolution we can see that entanglement has been generated throughout the chain
Step13: Note we have used the inplace gate_ (with a trailing
underscore) which modifies the original psi0 object.
However psi0 has its physical site indices mantained
such that it overall looks like the same object
Step14: But the network now contains the gates as additional tensors
Step15: With the swap and split method MPS form is always maintained, which
allows a canonical form and thus optimal trimming of singular values
Step16: We now still have an MPS, but with increased bond dimension
Step17: Finally, the eager (contract=True) method works fairly simply | Python Code:
p.left_canonize()
p.show()
Explanation: We can then canonicalize the MPS:
End of explanation
p.H @ p
Explanation: And we can compute the inner product as:
End of explanation
(p.H & p).graph(color=[f'I{i}' for i in range(30)], initial_layout='random')
p2 = (p + p) / 2
p2.show()
Explanation: This relies on them sharing the same physical indices, site_ind_id,
which the conjugated copy p.H naturally does.
Like any TN, we can graph the overlap for example, and make use of the
site tags to color it:
End of explanation
p2.H @ p2
Explanation: Which doubles the bond dimension, as expected, but should still be normalized:
End of explanation
p2.compress(form=10)
p2.show()
Explanation: Because the MPS is the addition of two identical states, it should also compress right back down:
End of explanation
p2[10] # get the tensor(s) with tag 'I10'.
Explanation: Where we have also set the orthogonality center at the site 10.
When tensor networks are imbued with a structure, they
can be indexed with integers and slices, which automatically get
converted using TN.site_tag_id:
End of explanation
p2[10].H @ p2[10] # all indices match -> inner product
Explanation: Note the tensor has matching physical index 'k10'.
This tensor is the orthogonality center so:
->->-O-<-<- +-O-+
... | | | | | ... = | | |
->->-O-<-<- +-O-+
i=10 i=10
should compute the normalization of the whole state:
End of explanation
p2[10].norm()
Explanation: Or equivalently:
End of explanation
((p2.H & p2).select(-1) ^ all).data # should be close to the identity
A = MPO_rand_herm(20, bond_dim=7, tags=['HAM'])
pH = p.H
# This inplace modifies the indices of each to form overlap
p.align_(A, pH)
(pH & A & p).graph(color='HAM', iterations=1000)
Explanation: If two tensor networks with the same structure are combined, it is propagated.
For example (p2.H & p2) can still be sliced.
Since the MPS is in canonical form, left and right pieces of the overlap
should form the identity. The following forms a TN of the inner product,
selects the 2 tensors corresponding to the last site (-1), contracts them,
then gets the underlying data:
End of explanation
(pH & A & p) ^ ...
builder = SpinHam(S=1)
builder += 1/2, '+', '-'
builder += 1/2, '-', '+'
builder += 1, 'Z', 'Z'
H = builder.build_mpo(n=100)
dmrg = DMRG2(H, bond_dims=[10, 20, 100, 100, 200], cutoffs=1e-10)
Explanation: Compute the actual contraction (... means contract everything, but use the structure if possible):
End of explanation
dmrg.solve(tol=1e-6, verbosity=1)
dmrg.state.show(max_width=80)
builder = SpinHam(S=1 / 2)
builder.add_term(1.0, 'Z', 'Z')
builder.add_term(0.9, 'Y', 'Y')
builder.add_term(0.8, 'X', 'X')
builder.add_term(0.6, 'Z')
H = NNI_ham_heis(20, bz=0.1)
# check the two site term
H()
psi0 = MPS_neel_state(20)
tebd = TEBD(psi0, H)
Explanation: The DMRG object will automatically detect OBC/PBC. Now we can solve to a certain absolute energy tolerance, showing progress and a schematic of the final state:
End of explanation
tebd.update_to(T=3, tol=1e-3)
Explanation: Now we are ready to evolve. By setting a tol, the required timestep dt is computed for us:
End of explanation
tebd.pt.show()
import quimb as qu
Z = qu.pauli('Z')
# compute <psi0|Z_i|psi0> for neel state above
[
psi0.gate(Z, i).H @ psi0
for i in range(10)
]
import quimb as qu
# some operators to apply
H = qu.hadamard()
CNOT = qu.controlled('not')
# setup an intitial register of qubits
n = 10
psi0 = MPS_computational_state('0' * n, tags='PSI0')
# apply hadamard to each site
for i in range(n):
psi0.gate_(H, i, tags='H')
# apply CNOT to even pairs
for i in range(0, n, 2):
psi0.gate_(CNOT, (i, i + 1), tags='CNOT')
# apply CNOT to odd pairs
for i in range(1, n - 1, 2):
psi0.gate_(CNOT, (i, i + 1), tags='CNOT')
Explanation: After the evolution we can see that entanglement has been generated throughout the chain:
End of explanation
sorted(psi0.outer_inds())
(psi0.H & psi0) ^ all
Explanation: Note we have used the inplace gate_ (with a trailing
underscore) which modifies the original psi0 object.
However psi0 has its physical site indices maintained
such that it overall looks like the same object:
End of explanation
psi0.graph(color=['PSI0', 'H', 'CNOT'], show_inds=True)
Explanation: But the network now contains the gates as additional tensors:
End of explanation
n = 10
psi0 = MPS_computational_state('0' * n)
for i in range(n):
# 'swap+split' will be ignored for one-site gates
psi0.gate_(H, i, contract='swap+split')
# use Z-phase to create entanglement
Rz = qu.phase_gate(0.42)
for i in range(n):
psi0.gate_(Rz, i, contract='swap+split')
for i in range(0, n, 2):
psi0.gate_(CNOT, (i, i + 1), contract='swap+split')
for i in range(1, n - 1, 2):
psi0.gate_(CNOT, (i, i + 1), contract='swap+split')
# act with one long-range CNOT
psi0.gate_(CNOT, (2, n - 2), contract='swap+split')
Explanation: With the swap and split method MPS form is always maintained, which
allows a canonical form and thus optimal trimming of singular values:
End of explanation
psi0.show()
Explanation: We now still have an MPS, but with increased bond dimension:
End of explanation
psi0_CNOT = psi0.gate(CNOT, (1, n -2 ), contract=True)
psi0_CNOT.graph(color=[psi0.site_tag(i) for i in range(n)])
Explanation: Finally, the eager (contract=True) method works fairly simply:
End of explanation |
3,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sigma to Pressure Interpolation
By using metpy.calc.log_interp, data with sigma as the vertical coordinate can be
interpolated to isobaric coordinates.
Step1: Data
The data for this example comes from the outer domain of a WRF-ARW model forecast
initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso
University Department of Geography and Meteorology.
Step2: Array of desired pressure levels
Step3: Interpolate The Data
Now that the data is ready, we can interpolate to the new isobaric levels. The data is
interpolated from the irregular pressure values for each sigma level to the new input
mandatory isobaric levels. mpcalc.log_interp will interpolate over a specified dimension
with the axis argument. In this case, axis=1 will correspond to interpolation on the
vertical axis. The interpolated data is output in a list, so we will pull out each
variable for plotting.
Step4: Plotting the Data for 700 hPa. | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import Dataset, num2date
from metpy.cbook import get_test_data
from metpy.interpolate import log_interpolate_1d
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
Explanation: Sigma to Pressure Interpolation
By using metpy.calc.log_interp, data with sigma as the vertical coordinate can be
interpolated to isobaric coordinates.
End of explanation
data = Dataset(get_test_data('wrf_example.nc', False))
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time']
vtimes = num2date(time[:], time.units)
temperature = units.Quantity(data.variables['temperature'][:], 'degC')
pres = units.Quantity(data.variables['pressure'][:], 'Pa')
hgt = units.Quantity(data.variables['height'][:], 'meter')
Explanation: Data
The data for this example comes from the outer domain of a WRF-ARW model forecast
initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso
University Department of Geography and Meteorology.
End of explanation
plevs = [700.] * units.hPa
Explanation: Array of desired pressure levels
End of explanation
height, temp = log_interpolate_1d(plevs, pres, hgt, temperature, axis=1)
Explanation: Interpolate The Data
Now that the data is ready, we can interpolate to the new isobaric levels. The data is
interpolated from the irregular pressure values for each sigma level to the new input
mandatory isobaric levels. log_interpolate_1d will interpolate over a specified dimension
with the axis argument. In this case, axis=1 will correspond to interpolation on the
vertical axis. The interpolated data is output in a list, so we will pull out each
variable for plotting.
End of explanation
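# Illustrative sketch only (not in the original example): the same helper applied
# to a single synthetic sounding, to show the 1-D behaviour of log_interpolate_1d.
import numpy as np
p_profile = np.array([1000., 925., 850., 700., 500.]) * units.hPa
t_profile = units.Quantity(np.array([25., 20., 15., 5., -15.]), 'degC')
print(log_interpolate_1d(np.array([900., 750.]) * units.hPa, p_profile, t_profile))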
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Set the forecast hour
FH = 1
# Create the figure and grid for subplots
fig = plt.figure(figsize=(17, 12))
add_metpy_logo(fig, 470, 320, size='large')
# Plot 700 hPa
ax = plt.subplot(111, projection=crs)
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the heights
cs = ax.contour(lon, lat, height[FH, 0, :, :], transform=ccrs.PlateCarree(),
colors='k', linewidths=1.0, linestyles='solid')
cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True,
use_clabeltext=True)
# Contour the temperature
cf = ax.contourf(lon, lat, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.RdBu_r,
transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05,
extendrect='True')
cb.set_label('Celsius', size='x-large')
ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree())
# Make the axis title
ax.set_title(f'{plevs[0]:~.0f} Heights (m) and Temperature (C)', loc='center', fontsize=10)
# Set the figure title
fig.suptitle(f'WRF-ARW Forecast VALID: {vtimes[FH]} UTC', fontsize=14)
add_timestamp(ax, vtimes[FH], y=0.02, high_contrast=True)
plt.show()
Explanation: Plotting the Data for 700 hPa.
End of explanation |
3,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
Find the indices of the local maxima in a sequence.
    b = np.asarray(a, dtype=float)
    # pad with -inf so that local maxima at the endpoints are handled correctly
    padded = np.concatenate(([-np.inf], b, [-np.inf]))
    peaks = [i for i in range(len(b)) if padded[i + 1] > padded[i] and padded[i + 1] > padded[i + 2]]
    return np.array(peaks, dtype=int)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
digits = np.array([int(d) for d in pi_digits_str])
maxima = find_peaks(digits)
distances = np.diff(maxima)
plt.hist(distances, bins=np.arange(distances.min(), distances.max() + 2) - 0.5)
plt.xlabel('distance between consecutive local maxima of the digits of $\pi$')
plt.ylabel('count')
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation |
3,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Neurons
Step1: Activation of a logistic neuron
Step2: Step 2
Step3: Step 3
Step4: Exercise | Python Code:
import numpy as np
from utils import make_classification, draw_decision_boundary, sigmoid
from sklearn.metrics import accuracy_score
from theano import tensor as T
from theano import function, shared
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rc('figure', figsize=(8, 6))
%matplotlib inline
Explanation: Logistic Neurons
End of explanation
X, Y = make_classification()
W = np.random.rand(2, 1)
B = np.random.rand(1,)
draw_decision_boundary(W.ravel().tolist() + [B[0]], X, Y)
Explanation: Activation of a logistic neuron:
$$ z = \sum_{i \in L} x_{i}w_{i} + b$$
Predicted output:
$$ y = \frac{1}{1 + e^{-z}} $$
Loss function: Mean Squared Error:
$$ E = \frac{1}{2}\sum_{i \in L} (t^{i} - y^{i})^{2} $$
Where $L$ is the set of training cases, and $t$ is the target value
Logistic Neuron in NumPy:
Step 1: Make dummy data
End of explanation
# activation
Z = np.dot(X, W) + B
# prediction
Y_pred = sigmoid(Z)
Explanation: Step 2: Get activation and prediction
End of explanation
def predict(X, weights, bias=None):
if bias is not None:
z = np.dot(X, weights) + bias
else:
z = np.dot(X, weights)
return sigmoid(z)
def train(X, Y, weights, alpha=0.3):
y_hat = predict(X, weights)
_gw = -1 * (Y - y_hat) * y_hat * (1 - y_hat)
_gw = np.repeat(_gw, X.shape[1], axis=1)
weights -= (alpha * _gw * X).sum(0).reshape(-1, 1)
return weights
def loss(y1, y2):
return (0.5 * ((y1 - y2) ** 2)).sum()
for i in range(10000):
y_hat = predict(X, W)
W = train(X, Y, W)
if i % 1000 == 0:
print("Loss: ", loss(Y, y_hat))
draw_decision_boundary(W.ravel().tolist() + [B[0]], X, Y)
Explanation: Step 3: Derive gradient for loss function
Gradient: $\nabla{E} = \frac{\partial{E}}{\partial{w_{j}}}$
Trick:
$$
\begin{equation}
\frac{\partial{\mathbf{E}}}{\partial{\mathbf{W}}} = \frac{\partial{\mathbf{y}}}{\partial{\mathbf{W}}}\frac{\partial{\mathbf{E}}}{\partial{\mathbf{y}}}
\end{equation}
$$
Second term on RHS:
$$\frac{\partial{\mathbf{E}}}{\partial{\mathbf{y}}} = -(\mathbf{t} - \mathbf{y})$$
First term on RHS: (using same trick):
$$\frac{\partial{\mathbf{y}}}{\partial{\mathbf{W}}} = \frac{\partial{\mathbf{y}}}{\partial{\mathbf{z}}}\frac{\partial{\mathbf{z}}}{\partial{\mathbf{W}}}$$
From first exercise, first term on RHS reduces to:
$$\frac{\partial{\mathbf{y}}}{\partial{\mathbf{z}}} = \mathbf{y}(1 - \mathbf{y})$$
From definition of logistic activation:
$$\mathbf{z} = \mathbf{X}\mathbf{W} + \mathbf{b} $$
Second term in RHS:
$$\frac{\partial{\mathbf{z}}}{\partial{\mathbf{W}}} = \mathbf{X}$$
Substituting:
$$\frac{\partial{\mathbf{y}}}{\partial{\mathbf{W}}} = \mathbf{y}(1 - \mathbf{y})\mathbf{X}$$
Substituting back in original equation
$$\frac{\partial{\mathbf{E}}}{\partial{\mathbf{W}}} = -(\mathbf{t} - \mathbf{y})\mathbf{y}(1 - \mathbf{y})\mathbf{X}$$
Using this gradient to train neuron with NumPy
End of explanation
# enter code here
Explanation: Exercise: Implement logistic neuron with Theano
End of explanation |
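# A possible solution sketch for the exercise above -- not from the original
# notebook. It rebuilds the same logistic neuron but lets Theano derive the
# gradient symbolically (it assumes X and Y are float64 arrays, with Y a column
# vector, as produced by make_classification above).
X_sym = T.dmatrix('X')
Y_sym = T.dmatrix('Y')
W_shared = shared(np.random.rand(2, 1), name='W')
z = T.dot(X_sym, W_shared)
y_hat_sym = 1.0 / (1.0 + T.exp(-z))
loss_sym = 0.5 * T.sum((Y_sym - y_hat_sym) ** 2)
grad_W = T.grad(loss_sym, W_shared)
# one batch gradient-descent step per call, mirroring the NumPy version's alpha = 0.3
train_step = function([X_sym, Y_sym], loss_sym, updates=[(W_shared, W_shared - 0.3 * grad_W)])
for i in range(10000):
    current_loss = train_step(X, Y)
    if i % 1000 == 0:
        print("Loss: ", current_loss)
draw_decision_boundary(W_shared.get_value().ravel().tolist() + [0.0], X, Y)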
3,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Opgave 1
Maak een lege lijst aan en noem deze lijst getallen_a.
Vul deze lijst via een for loop met de getallen 2, 4, 6 , 8 en 10
Check of de lijst er zo uitziet als verwacht, hoeveel elementen bevat je lijst? Welk getal staat op plek 0? En op plek 3? En op plek -1?
Voeg het getal 14.0 toe achter aan je lijst, en het getal 7 op plek 2.
Maak een lijst getallen_b aan
Step1: Opgave 2
Schrijf een programma dat het aantal elementen (de lengte) van een list variable afdrukt naar het scherm zonder de len()-functie te gebruiken.
Schrijf een programma dat alle elementen in een list vermenigvuldigt met een door de gebruiker opgegeven getal en de uitkomsten afdrukt.
Schrijf een programma dat bepaalt of een door de gebruiker opgegeven getal een element is van een list. Gebruik daarbij niet de in-operator of de functies index() of count().
Schrijf een programma dat kan bepalen of twee lists (minstens) een gemeenschappelijk element hebben. Gebruik een geneste (dubbele) for-loop om dit te bepalen.
Schrijf en programma dat een 'histogram' kan afdrukken voor een list van integers. Voor de list [4, 9, 7] bijvoorbeeld, zou het onderstaande afgedrukt moeten worden
Step2: Opdracht 3
Tijdens het college is (naive) Bayesian spam detection besproken. Er werden twee voorbeelden behandeld en in beide gevallen werd de kans op spam bepaald voor een bericht van één woord. Maar de meeste e-mails bestaan uit meer woorden.
Wanneer we, helaas wat optimistisch, de aanname doen dat woorden in e-mails onafhankelijk van elkaar optreden, kunnen we met de behandelde vergelijking (Bayes' theorem) uitspraken doen over de kans dat een e-mail met meerdere woorden spam is.
Gegeven de trainingsdata zoals gebruikt in het college, zie evt. hieronder, bereken met hand de kans op het bericht M = "rolex korting amsterdam" als je uitgaat van onderlinge onafhankelijkheid van de woorden in een bericht.
Spam berichten | Python Code:
# 1.1
getallen_a = []
# 1.2
for i in range(2, 11, 2):
getallen_a.append(i)
# 1.3
print("Lijst getallen_a:", getallen_a)
print("Lengte of aantal elementen:", len(getallen_a))
print("Getal op plek 0:", getallen_a[0])
print("Getal op plek 3:", getallen_a[3])
print("Getal op plek -1:", getallen_a[-1])
# 1.4
getallen_a.append(14.0)
getallen_a.insert(2, 7)
# 1.5
getallen_b = [3.14, 8, 0]
getallen_a = getallen_a + getallen_b # ook: getallen_a += getallen_b
print("Lijst getallen_a:", getallen_a)
# 1.6
print("Index van 3.14:", getallen_a.index(3.14))
print("Aantal van 8:", getallen_a.count(8))
print("Is 4 element van de lijst:", 4 in getallen_a)
# of: print("Is 4 element van de lijst:", getallen_a.count(4) > 0)
print("Minimum van lijst:", min(getallen_a))
print("Maximum van lijst:", min(getallen_b))
# 1.7
del getallen_a[4]
print("Lijst getallen_a:", getallen_a)
# 1.8
getallen_a.sort()
print("Lijst getallen_a:", getallen_a)
getallen_a.reverse()
print("Lijst getallen_a:", getallen_a)
Explanation: Exercise 1 (Opgave 1)
Create an empty list and call it getallen_a.
Fill this list with the numbers 2, 4, 6, 8 and 10 using a for loop.
Check whether the list looks as expected: how many elements does it contain? Which number is at position 0? At position 3? And at position -1?
Append the number 14.0 to the end of your list, and insert the number 7 at position 2.
Create a list getallen_b: [3.14, 8, 0] and replace getallen_a by the concatenation of getallen_a and getallen_b.
In getallen_a: at which position is the number 3.14? And the number 8? How often does the number 8 occur in the list? Does the list contain the number 4? What are the minimum and maximum values occurring in the list?
Remove the number at position 4.
Sort the list and also display it in reverse order.
End of explanation
# 2.1
numbers = [2, 4, 6, 8]
length = 0
for i in numbers:
length += 1
print("Lengte van numbers:", length)
# 2.2
numbers = range(2, 10, 2)
multiplier = float(input("Geef de multiplier op:"))
for number in numbers:
print(number * multiplier)
# of:
numbers = list(range(2, 10, 2))
multiplier = float(input("Geef de multiplier op:"))
for i in range(len(numbers)):
numbers[i] *= multiplier # dit past de inhoud van de list aan!
print("Producten:", numbers)
# 2.3
numbers = [1, 3, 5, 7]
print("Getallen:", numbers)
query = int(input("Welk geheel getal zoek je?"))
is_element = False
for number in numbers:
if number == query:
is_element = True
print("Het getal {} zit in de lijst: {}".format(query, is_element))
# 2.4
numbers_a = [1, 3, 5, 7]
numbers_b = [2, 4, 6, 8]
has_common_elem = False
for number_a in numbers_a:
for number_b in numbers_b:
if number_a == number_b:
has_common_elem = True
print(("De lijsten hebben (minstens) een "
"gemeenschappelijk element:"), has_common_elem)
# 2.5
values = [4, 9, 7]
for value in values:
print("*" * value)
Explanation: Exercise 2 (Opgave 2)
Write a program that prints the number of elements (the length) of a list variable to the screen without using the len() function.
Write a program that multiplies all elements in a list by a number given by the user and prints the results.
Write a program that determines whether a number given by the user is an element of a list. Do not use the in operator or the functions index() or count().
Write a program that can determine whether two lists have (at least) one element in common. Use a nested (double) for loop to determine this.
Write a program that can print a 'histogram' for a list of integers. For the list [4, 9, 7], for example, the following should be printed:
```
```
End of explanation
# 3.2
spam = [
"rolex", "replica", "korting",
"klik", "korting", "viagra",
"korting", "politiek", "krediet",
]
ham = [
"politiek", "bepaalt", "korting",
"lariekoek", "in", "politiek",
"klik", "politiek", "verslag",
"journalist", "bespeelt", "politiek",
"politiek", "amsterdam", "stagneert",
]
message = "rolex korting amsterdam"
words = message.split()
P_words_spam = 1.0
P_words_ham = 1.0
for word in words:
if word in spam:
P_words_spam *= spam.count(word) / len(spam)
if word in ham:
P_words_ham *= ham.count(word) / len(ham)
P_spam = len(spam) / (len(spam) + len(ham))
P_ham = len(ham) / (len(spam) + len(ham))
P_message_spam = ((P_words_spam * P_spam) /
((P_words_spam * P_spam) + (P_words_ham * P_ham)))
print("P(M|Spam) = {:.4f}".format(P_message_spam))
Explanation: Assignment 3 (Opdracht 3)
During the lecture, (naive) Bayesian spam detection was discussed. Two examples were covered, and in both cases the probability of spam was determined for a message consisting of a single word. But most e-mails consist of more words.
If we make the (unfortunately somewhat optimistic) assumption that the words in an e-mail occur independently of each other, we can use the equation covered in the lecture (Bayes' theorem) to make statements about the probability that an e-mail with multiple words is spam.
Given the training data as used in the lecture (see below if needed), calculate by hand the probability for the message M = "rolex korting amsterdam", assuming mutual independence of the words in a message.
Spam messages: rolex replica korting, klik korting viagra, korting politiek krediet
Ham messages: politiek bepaalt korting, lariekoek in politiek, klik politiek verslag, journalist bespeelt politiek, politiek amsterdam stagneert
Write a program that can compute the probability of spam for messages of multiple words (at least 1).
You can easily split a message into words in Python:
python
message = "rolex korting amsterdam"
words = message.split() # split makes a list of the elements
# after splitting on whitespace
Hint: start by expressing $P(M|Spam)$ as $P(W_1, W_2, W_3|Spam) = P(W_1|Spam) \cdot P(W_2|Spam) \cdot P(W_3|Spam)$ and similarly for $P(M|Ham)$.
3.1
$$
\begin{align}
P(Spam|M) &= \scriptsize{\frac{P(M|Spam) \cdot P(Spam)}
{P(M|Spam) \cdot P(Spam) + P(M|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{P(W_1, W_2, W_3|Spam) \cdot P(Spam)}
{P(W_1, W_2, W_3|Spam) \cdot P(Spam) + P(W_1, W_2, W_3|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{P(W_1|Spam) \cdot \ldots \cdot P(W_3|Spam) \cdot P(Spam)}
{P(W_1|Spam) \cdot \ldots \cdot P(W_3|Spam) \cdot P(Spam) + P(W_1|Ham) \cdot \ldots \cdot P(W_3|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{1/9 \cdot 3/9 \cdot 1 \cdot 9/24}
{1/9 \cdot 3/9 \cdot 1 \cdot 9/24 + 1 \cdot 1/15 \cdot 1/15 \cdot 15/24}} \\
&= \scriptsize{\frac{1/72}{1/72 + 1/360}} = \scriptsize{\frac{5}{6}}
\end{align}
$$
End of explanation |
3,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Q1
Step3: Let's solve ${\bf Hx} ={\bf b}$. Create a linear system by picking an ${\bf x}$ and generating a ${\bf b}$ by multiplying by the matrix ${\bf H}$. Then use the scipy.linalg.solve() function to recover ${\bf x}$. Compute the error in ${\bf x}$ as a function of the size of the matrix.
You won't need a large matrix, $n \sim 13$ or so, will start showing big errors.
You can compute the condition number with numpy.linalg.cond()
There are methods that can do a better job with nearly-singular matricies. Take a look at scipy.linalg.lstsq() for example.
Q3
Step5: Note that the pendulum can flip over, giving values of $\theta$ outside of $[-\pi, \pi]$. The following function can be used to restrict it back to $[-\pi, \pi]$ for plotting. | Python Code:
def hilbert(n):
return a Hilbert matrix, H_ij = (i + j - 1)^{-1}
H = np.zeros((n,n), dtype=np.float64)
for i in range(1, n+1):
for j in range(1, n+1):
H[i-1,j-1] = 1.0/(i + j - 1.0)
return H
Explanation: Q1: integrating a sampled vs. analytic function
Numerical integration methods work differently depending on whether you have the analytic function available (in which case you can evaluate it freely at any point you please) or if it is sampled for you.
Create a function to integrate, and use NumPy to sample it at $N$ points. Compare the answer you get from integrating the function directly (using integrate.quad) to the integral of the sampled function (using integrate.simps).
To get a better sense of the accuracy, vary $N$, and look at how the error changes (if you plot the error vs. $N$, you can measure the convergence).
Q2: Condition number
For a linear system, ${\bf A x} = {\bf b}$, we can only solve for $x$ if the determinant of the matrix ${\bf A}$ is non-zero. If the determinant is zero, then we call the matrix singular. The condition number of a matrix is a measure of how close we are to being singular. The formal definition is:
\begin{equation}
\mathrm{cond}({\bf A}) = \| {\bf A}\| \| {\bf A}^{-1} \|
\end{equation}
But we can think of it as a measure of how much ${\bf x}$ would change due to a small change in ${\bf b}$. A large condition number means that our solution for ${\bf x}$ could be inaccurate.
A Hilbert matrix has $H_{ij} = (i + j - 1)^{-1}$, and is known to have a large condition number. Here's a routine to generate a Hilbert matrix
End of explanation
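# Sketch of one possible approach to Q1 (not from the original notebook):
# compare quad on the analytic function with simps on N samples of it.
import numpy as np
from scipy import integrate
f = lambda x: np.sin(x)**2 * np.exp(-x)
exact, _ = integrate.quad(f, 0.0, 2.0*np.pi)
for N in (5, 9, 17, 33, 65, 129):
    x = np.linspace(0.0, 2.0*np.pi, N)
    approx = integrate.simps(f(x), x)
    print(N, abs(approx - exact))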
def rhs(t, Y, q, omega_d, b):
damped driven pendulum system derivatives. Here, Y = (theta, omega) are
the solution variables.
f = np.zeros_like(Y)
f[0] = Y[1]
f[1] = -q*Y[1] - np.sin(Y[0]) + b*np.cos(omega_d*t)
return f
Explanation: Let's solve ${\bf Hx} ={\bf b}$. Create a linear system by picking an ${\bf x}$ and generating a ${\bf b}$ by multiplying by the matrix ${\bf H}$. Then use the scipy.linalg.solve() function to recover ${\bf x}$. Compute the error in ${\bf x}$ as a function of the size of the matrix.
You won't need a large matrix; $n \sim 13$ or so will start showing big errors.
You can compute the condition number with numpy.linalg.cond()
There are methods that can do a better job with nearly-singular matrices. Take a look at scipy.linalg.lstsq() for example.
Q3: damped driven pendulum and chaos
There is a large class of ODE integration methods available through the scipy.integrate.ode() function. Not all of them provide dense output -- most will just give you the value at the end of the integration.
The explicit dopri5 integrator will store the solution at intermediate points and allow you to access them. We'll use that here. You'll need to use the set_solout() method to define a function that takes the current integration solution and stores it.
The damped driven pendulum obeys the following equations:
$$\dot{\theta} = \omega$$
$$\dot{\omega} = -q \omega - \sin \theta + b \cos \omega_d t$$
here, $\theta$ is the angle of the pendulum from vertical and $\omega$ is the angular velocity. $q$ is a damping coefficient, $b$ is a forcing amplitude, and $\omega_d$ is a driving frequency.
Choose $q = 0.5$ and $\omega_d = 2/3$.
Integrate the system for different values of $b$ (start with $b = 0.9$ and increase by $0.05$), and plot the results ($\theta$ vs. $t$). Here's an RHS function to get you started:
End of explanation
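# Sketch for Q2 (not from the original notebook): solve H x = b for increasing n
# and watch the error grow along with the condition number.
from scipy import linalg
for n in range(2, 14):
    H = hilbert(n)
    x_true = np.ones(n)
    b = H @ x_true
    x_solved = linalg.solve(H, b)
    print(n, np.linalg.cond(H), np.abs(x_solved - x_true).max())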
def restrict_theta(theta):
convert theta to be restricted to lie between -pi and pi
tnew = theta + np.pi
tnew += -2.0*np.pi*np.floor(tnew/(2.0*np.pi))
tnew -= np.pi
return tnew
Explanation: Note that the pendulum can flip over, giving values of $\theta$ outside of $[-\pi, \pi]$. The following function can be used to restrict it back to $[-\pi, \pi]$ for plotting.
End of explanation |
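# Sketch for Q3 (not from the original notebook): integrate the pendulum with the
# dopri5 integrator, using set_solout to store the dense output, for one value of b.
import matplotlib.pyplot as plt
from scipy.integrate import ode
q, omega_d, b = 0.5, 2.0/3.0, 0.9
ts, thetas = [], []
def solout(t, y):
    ts.append(t)
    thetas.append(y[0])
solver = ode(rhs).set_integrator('dopri5')
solver.set_solout(solout)       # set before the initial value so it is not ignored
solver.set_initial_value([np.radians(10.0), 0.0], 0.0)
solver.set_f_params(q, omega_d, b)
solver.integrate(200.0)
plt.plot(ts, restrict_theta(np.array(thetas)))
plt.xlabel('t')
plt.ylabel(r'$\theta$')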
3,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Sentiment Classification on Large Movie Reviews
Sentiment Analysis is understood as a classic natural language processing problem. In this example, a large moview review dataset was chosen from IMDB to do a sentiment classification task with some deep learning approaches. The labeled data set consists of 50,000 IMDB movie reviews (good or bad), in which 25000 highly polar movie reviews for training, and 25,000 for testing. The dataset is originally collected by Stanford researchers and was used in a 2011 paper, and the highest accuray of 88.33% was achieved without using the unbalanced data. This example illustrates some deep learning approaches to do the sentiment classification with BigDL python API.
Load the IMDB Dataset
The IMDB dataset need to be loaded into BigDL, note that the dataset has been pre-processed, and each review was encoded as a sequence of integers. Each integer represents the index of the overall frequency of dataset, for instance, '5' means the 5-th most frequent words occured in the data. It is very convinient to filter the words by some conditions, for example, to filter only the top 5,000 most common word and/or eliminate the top 30 most common words. Let's define functions to load the pre-processed data.
Step3: In order to set a proper max sequence length, we need to go througth the property of the data and see the length distribution of each sentence in the dataset. A box and whisker plot is shown below for reviewing the length distribution in words.
Step5: Looking the box and whisker plot, the max length of a sample in words is 500, and the mean and median are below 250. According to the plot, we can probably cover the mass of the distribution with a clipped length of 400 to 500. Here we set the max sequence length of each sample as 500.
The corresponding vocabulary sorted by frequency is also required, for further embedding the words with pre-trained vectors. The downloaded vocabulary is in {word
Step9: Text pre-processing
Before we train the network, some pre-processing steps need to be applied to the dataset.
Next let's go through the mechanisms that used to be applied to the data.
We insert a start_char at the beginning of each sentence to mark the start point. We set it as 2 here, and each other word index will plus a constant index_from to differentiate some 'helper index' (eg. start_char, oov_char, etc.).
A max_words variable is defined as the maximum index number (the least frequent word) included in the sequence. If the word index number is larger than max_words, it will be replaced by a out-of-vocabulary number oov_char, which is 3 here.
Each word index sequence is restricted to the same length. We used left-padding here, which means the right (end) of the sequence will be keep as many as possible and drop the left (head) of the sequence if its length is more than pre-defined sequence_len, or padding the left (head) of the sequence with padding_value.
Step10: Word Embedding
Word embedding is a recent breakthrough in natural language field. The key idea is to encode words and phrases into distributed representations in the format of word vectors, which means each word is represented as a vector. There are two widely used word vector training alogirhms, one is published by Google called word to vector, the other is published by Standford called Glove. In this example, pre-trained glove is loaded into a lookup table and will be fine-tuned during the training process. BigDL provides a method to download and load glove in news20 package.
Step11: For each word whose index less than the max_word should try to match its embedding and store in an array.
With regard to those words which can not be found in glove, we randomly sample it from a [-0.05, 0.05] uniform distribution.
BigDL usually use a LookupTable layer to do word embedding, so the matrix will be loaded to the LookupTable by seting the weight.
Step12: Build models
Next, let's build some deep learning models for the sentiment classification.
As an example, several deep learning models are illustrated for tutorial, comparison and demonstration.
LSTM, GRU, Bi-LSTM, CNN and CNN + LSTM models are implemented as options. To decide which model to use, just assign model_type the corresponding string.
Step13: Optimization
Optimizer need to be created to optimise the model.
Here we use the CNN model.
Step14: To make the training process be visualized by TensorBoard, training summaries should be saved as a format of logs.
Step15: Now, let's start training!
Step16: Test
Validation accuracy is shown in the training log, here let's get the accuracy on validation set by hand.
Predict the test_rdd (validation set data), and obtain the predicted label and ground truth label in the list.
Step17: Then let's see the prediction accuracy on validation set.
Step18: Show the confusion matrix | Python Code:
from bigdl.dataset import base
import numpy as np
def download_imdb(dest_dir):
Download pre-processed IMDB movie review data
:argument
dest_dir: destination directory to store the data
:return
The absolute path of the stored data
file_name = "imdb.npz"
file_abs_path = base.maybe_download(file_name,
dest_dir,
'https://s3.amazonaws.com/text-datasets/imdb.npz')
return file_abs_path
def load_imdb(dest_dir='/tmp/.bigdl/dataset'):
Load IMDB dataset.
:argument
dest_dir: where to cache the data (relative to `~/.bigdl/dataset`).
:return
the train, test separated IMDB dataset.
path = download_imdb(dest_dir)
f = np.load(path, allow_pickle=True)
x_train = f['x_train']
y_train = f['y_train']
x_test = f['x_test']
y_test = f['y_test']
f.close()
return (x_train, y_train), (x_test, y_test)
print('Processing text dataset')
(x_train, y_train), (x_test, y_test) = load_imdb()
print('finished processing text')
Explanation: Sentiment Classification on Large Movie Reviews
Sentiment Analysis is understood as a classic natural language processing problem. In this example, a large movie review dataset was chosen from IMDB to do a sentiment classification task with some deep learning approaches. The labeled data set consists of 50,000 IMDB movie reviews (good or bad), of which 25,000 highly polar movie reviews are for training and 25,000 for testing. The dataset was originally collected by Stanford researchers and was used in a 2011 paper, and the highest accuracy of 88.33% was achieved without using the unbalanced data. This example illustrates some deep learning approaches to do the sentiment classification with the BigDL python API.
Load the IMDB Dataset
The IMDB dataset needs to be loaded into BigDL; note that the dataset has been pre-processed, and each review was encoded as a sequence of integers. Each integer represents the index in the overall frequency ranking of the dataset, for instance, '5' means the 5-th most frequent word occurred in the data. It is very convenient to filter the words by some conditions, for example, to keep only the top 5,000 most common words and/or eliminate the top 30 most common words. Let's define functions to load the pre-processed data.
End of explanation
import matplotlib
matplotlib.use('Agg')
%pylab inline
# Summarize review length
from matplotlib import pyplot
print("Review length: ")
X = np.concatenate((x_train, x_test), axis=0)
result = [len(x) for x in X]
print("Mean %.2f words (%f)" % (np.mean(result), np.std(result)))
# plot review length
# Create a figure instance
fig = pyplot.figure(1, figsize=(6, 6))
pyplot.boxplot(result)
pyplot.show()
Explanation: In order to set a proper max sequence length, we need to go through the properties of the data and see the length distribution of each sentence in the dataset. A box and whisker plot is shown below for reviewing the length distribution in words.
End of explanation
import json
def get_word_index(dest_dir='/tmp/.bigdl/dataset', ):
Retrieves the dictionary mapping word indices back to words.
:argument
path: where to cache the data (relative to `~/.bigdl/dataset`).
:return
The word index dictionary.
file_name = "imdb_word_index.json"
path = base.maybe_download(file_name,
dest_dir,
source_url='https://s3.amazonaws.com/text-datasets/imdb_word_index.json')
f = open(path)
data = json.load(f)
f.close()
return data
print('Processing vocabulary')
word_idx = get_word_index()
idx_word = {v:k for k,v in word_idx.items()}
print('finished processing vocabulary')
Explanation: Looking at the box and whisker plot, the max length of a sample in words is 500, and the mean and median are below 250. According to the plot, we can probably cover the mass of the distribution with a clipped length of 400 to 500. Here we set the max sequence length of each sample as 500.
The corresponding vocabulary sorted by frequency is also required, for further embedding the words with pre-trained vectors. The downloaded vocabulary is in {word: index}, where each word as a key and the index as a value. It needs to be transformed into {index: word} format.
Let's define a function to obtain the vocabulary.
End of explanation
def replace_oov(x, oov_char, max_words):
Replace the words out of vocabulary with `oov_char`
:param x: a sequence
:param max_words: the max number of words to include
:param oov_char: words out of vocabulary because of exceeding the `max_words`
limit will be replaced by this character
:return: The replaced sequence
return [oov_char if w >= max_words else w for w in x]
def pad_sequence(x, fill_value, length):
Pads each sequence to the same length
:param x: a sequence
:param fill_value: pad the sequence with this value
:param length: pad sequence to the length
:return: the padded sequence
if len(x) >= length:
return x[(len(x) - length):]
else:
return [fill_value] * (length - len(x)) + x
def to_sample(features, label):
Wrap the `features` and `label` to a training sample object
:param features: features of a sample
:param label: label of a sample
:return: a sample object including features and label
return Sample.from_ndarray(np.array(features, dtype='float'), np.array(label))
padding_value = 1
start_char = 2
oov_char = 3
index_from = 3
max_words = 5000
sequence_len = 500
print('start transformation')
from zoo.common.nncontext import *
sc = init_nncontext("Sentiment Analysis Example")
train_rdd = sc.parallelize(zip(x_train, y_train), 2) \
.map(lambda record: ([start_char] + [w + index_from for w in record[0]], record[1])) \
.map(lambda record: (replace_oov(record[0], oov_char, max_words), record[1])) \
.map(lambda record: (pad_sequence(record[0], padding_value, sequence_len), record[1])) \
.map(lambda record: to_sample(record[0], record[1]))
test_rdd = sc.parallelize(zip(x_test, y_test), 2) \
.map(lambda record: ([start_char] + [w + index_from for w in record[0]], record[1])) \
.map(lambda record: (replace_oov(record[0], oov_char, max_words), record[1])) \
.map(lambda record: (pad_sequence(record[0], padding_value, sequence_len), record[1])) \
.map(lambda record: to_sample(record[0], record[1]))
print('finish transformation')
Explanation: Text pre-processing
Before we train the network, some pre-processing steps need to be applied to the dataset.
Next let's go through the mechanisms that used to be applied to the data.
We insert a start_char at the beginning of each sentence to mark the start point. We set it as 2 here, and each other word index will plus a constant index_from to differentiate some 'helper index' (eg. start_char, oov_char, etc.).
A max_words variable is defined as the maximum index number (the least frequent word) included in the sequence. If the word index number is larger than max_words, it will be replaced by a out-of-vocabulary number oov_char, which is 3 here.
Each word index sequence is restricted to the same length. We used left-padding here, which means the right (end) of the sequence will be keep as many as possible and drop the left (head) of the sequence if its length is more than pre-defined sequence_len, or padding the left (head) of the sequence with padding_value.
End of explanation
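# Quick illustration (not part of the original tutorial) of the two helpers on a
# toy sequence, using the constants defined above:
toy = [start_char, 10, 6000, 7]
print(replace_oov(toy, oov_char, max_words))      # index 6000 exceeds max_words -> oov_char
print(pad_sequence(toy, padding_value, 8))        # left-padded with padding_value to length 8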
from bigdl.dataset import news20
import itertools
embedding_dim = 100
print('loading glove')
glove = news20.get_glove_w2v(source_dir='/tmp/.bigdl/dataset', dim=embedding_dim)
print('finish loading glove')
Explanation: Word Embedding
Word embedding is a recent breakthrough in the natural language processing field. The key idea is to encode words and phrases into distributed representations in the format of word vectors, which means each word is represented as a vector. There are two widely used word vector training algorithms: one is published by Google, called word2vec, and the other is published by Stanford, called GloVe. In this example, pre-trained GloVe vectors are loaded into a lookup table and will be fine-tuned during the training process. BigDL provides a method to download and load GloVe in the news20 package.
End of explanation
print('processing glove')
w2v = [glove.get(idx_word.get(i - index_from), np.random.uniform(-0.05, 0.05, embedding_dim))
for i in range(1, max_words + 1)]
w2v = np.array(list(itertools.chain(*np.array(w2v, dtype='float'))), dtype='float') \
.reshape([max_words, embedding_dim])
print('finish processing glove')
Explanation: For each word whose index is less than max_words, we try to match its embedding and store it in an array.
For those words which cannot be found in GloVe, we randomly sample an embedding from a [-0.05, 0.05] uniform distribution.
BigDL usually uses a LookupTable layer to do word embedding, so the matrix will be loaded into the LookupTable by setting the weight.
End of explanation
from bigdl.nn.layer import *
p = 0.2
def build_model(w2v):
model = Sequential()
embedding = LookupTable(max_words, embedding_dim)
embedding.set_weights([w2v])
model.add(embedding)
if model_type.lower() == "gru":
model.add(Recurrent()
.add(GRU(embedding_dim, 128, p))) \
.add(Select(2, -1))
elif model_type.lower() == "lstm":
model.add(Recurrent()
.add(LSTM(embedding_dim, 128, p)))\
.add(Select(2, -1))
elif model_type.lower() == "bi_lstm":
model.add(BiRecurrent(CAddTable())
.add(LSTM(embedding_dim, 128, p)))\
.add(Select(2, -1))
elif model_type.lower() == "cnn":
model.add(Transpose([(2, 3)]))\
.add(Dropout(p))\
.add(Reshape([embedding_dim, 1, sequence_len]))\
.add(SpatialConvolution(embedding_dim, 128, 5, 1))\
.add(ReLU())\
.add(SpatialMaxPooling(sequence_len - 5 + 1, 1, 1, 1))\
.add(Reshape([128]))
elif model_type.lower() == "cnn_lstm":
model.add(Transpose([(2, 3)]))\
.add(Dropout(p))\
.add(Reshape([embedding_dim, 1, sequence_len])) \
.add(SpatialConvolution(embedding_dim, 64, 5, 1)) \
.add(ReLU()) \
.add(SpatialMaxPooling(4, 1, 1, 1)) \
.add(Squeeze(3)) \
.add(Transpose([(2, 3)])) \
.add(Recurrent()
.add(LSTM(64, 128, p))) \
.add(Select(2, -1))
model.add(Linear(128, 100))\
.add(Dropout(0.2))\
.add(ReLU())\
.add(Linear(100, 1))\
.add(Sigmoid())
return model
Explanation: Build models
Next, let's build some deep learning models for the sentiment classification.
As an example, several deep learning models are illustrated for tutorial, comparison and demonstration.
LSTM, GRU, Bi-LSTM, CNN and CNN + LSTM models are implemented as options. To decide which model to use, just assign model_type the corresponding string.
End of explanation
from bigdl.optim.optimizer import *
from bigdl.nn.criterion import *
# max_epoch = 4
max_epoch = 1
batch_size = 64
model_type = 'gru'
optimizer = Optimizer(
model=build_model(w2v),
training_rdd=train_rdd,
criterion=BCECriterion(),
end_trigger=MaxEpoch(max_epoch),
batch_size=batch_size,
optim_method=Adam())
optimizer.set_validation(
batch_size=batch_size,
val_rdd=test_rdd,
trigger=EveryEpoch(),
val_method=Top1Accuracy())
Explanation: Optimization
An Optimizer needs to be created to optimize the model.
Here we use the GRU model (model_type is set to 'gru' in the code above).
End of explanation
import datetime as dt
logdir = '/tmp/.bigdl/'
app_name = 'adam-' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir=logdir, app_name=app_name)
train_summary.set_summary_trigger("Parameters", SeveralIteration(50))
val_summary = ValidationSummary(log_dir=logdir, app_name=app_name)
optimizer.set_train_summary(train_summary)
optimizer.set_val_summary(val_summary)
Explanation: To visualize the training process with TensorBoard, the training summaries should be saved as logs.
End of explanation
%%time
train_model = optimizer.optimize()
print ("Optimization Done.")
Explanation: Now, let's start training!
End of explanation
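# Sketch (not in the original notebook): after training, the saved summaries can
# be read back and plotted -- this assumes BigDL's read_scalar API with the
# "Loss" and "Top1Accuracy" tags.
loss = np.array(train_summary.read_scalar("Loss"))
top1 = np.array(val_summary.read_scalar("Top1Accuracy"))
pyplot.figure(figsize=(12, 6))
pyplot.plot(loss[:, 0], loss[:, 1], label='training loss')
pyplot.plot(top1[:, 0], top1[:, 1], label='validation Top1Accuracy')
pyplot.legend()
pyplot.xlabel('iteration')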
predictions = train_model.predict(test_rdd)
def map_predict_label(l):
if l > 0.5:
return 1
else:
return 0
def map_groundtruth_label(l):
return l.to_ndarray()[0]
y_pred = np.array([ map_predict_label(s) for s in predictions.collect()])
y_true = np.array([map_groundtruth_label(s.label) for s in test_rdd.collect()])
Explanation: Test
Validation accuracy is shown in the training log, here let's get the accuracy on validation set by hand.
Predict the test_rdd (validation set data), and obtain the predicted label and ground truth label in the list.
End of explanation
correct = 0
for i in range(0, y_pred.size):
if (y_pred[i] == y_true[i]):
correct += 1
accuracy = float(correct) / y_pred.size
print ('Prediction accuracy on validation set is: ', accuracy)
Explanation: Then let's see the prediction accuracy on validation set.
End of explanation
matplotlib.use('Agg')
%pylab inline
import matplotlib.pyplot as plt
import seaborn as sn
import pandas as pd
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)
cm.shape
df_cm = pd.DataFrame(cm)
plt.figure(figsize = (5,4))
sn.heatmap(df_cm, annot=True,fmt='d')
Explanation: Show the confusion matrix
End of explanation |
3,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
textblob
Step1: Vamos a crear nuestro primer ejemplo de textblob a través del objeto TextBlob. Piensa en estos textblobs como una especie de cadenas de texto de Python, analaizadas y enriquecidas con algunas características extra.
Step2: Procesando oraciones, palabras y entidades
Podemos segmentar en oraciones y en palabras nuestra texto de ejemplo simplemente accediendo a las propiedades .sentences y .words. Imprimimos por pantalla
Step3: La propiedad .noun_phrases nos permite acceder a la lista de entidades (en realidad, son sintagmas nominales) incluídos en nuestro textblob. Así es como funciona.
Step4: Análisis sintático
Aunque podemos utilizar otros analizadores, por defecto el método .parse() invoca al analizador morfosintáctico del módulo pattern.en que ya conoces.
Step5: Traducción automática
A partir de cualquier texto procesado con TextBlob, podemos acceder a un traductor automático de bastante calidad con el método .translate. Fíjate en cómo lo usamos. Es obligatorio indicar la lengua de destinto. La lengua de origen, se puede predecir a partir del texto de entrada.
Step6: WordNet
textblob, más concretamente, cualquier objeto de la clase Word, nos permite acceder a la información de WordNet.
Step7: Análisis de opinion
Step8: Otras curiosidades | Python Code:
from textblob import TextBlob
Explanation: textblob: otro módulo para tareas de PLN (NLTK + pattern)
textblob es una librería de procesamiento del texto para Python que permite realizar tareas de Procesamiento del Lenguaje Natural como análisis morfológico, extracción de entidades, análisis de opinión, traducción automática, etc.
Está construida sobre otras dos librerías muy famosas de Python: NLTK y pattern. La principal ventaja de textblob es que permite combinar el uso de las dos herramientas anteriores en un interfaz más simple.
Vamos a apoyarnos en este tutorial para aprender a utilizar algunas de sus funcionalidades más llamativas.
Lo primero es importar el objeto TextBlob que nos permite acceder a todas las herramentas que incluye.
End of explanation
texto = '''In new lawsuits brought against the ride-sharing companies Uber and Lyft, the top prosecutors in Los Angeles
and San Francisco counties make an important point about the lightly regulated sharing economy. The consumers who
participate deserve a very clear picture of the risks they're taking'''
t = TextBlob(texto)
print(t.sentences)
print('Tenemos', len(t.sentences), 'oraciones.\n')
for sentence in t.sentences:
print(sentence)
print('-' * 75)
Explanation: Vamos a crear nuestro primer ejemplo de textblob a través del objeto TextBlob. Piensa en estos textblobs como una especie de cadenas de texto de Python, analaizadas y enriquecidas con algunas características extra.
End of explanation
# imprimimos las oraciones
for sentence in t.sentences:
print(sentence)
print('-' * 75)
# y las palabras
print(t.words)
print(texto.split())
Explanation: Procesando oraciones, palabras y entidades
Podemos segmentar en oraciones y en palabras nuestra texto de ejemplo simplemente accediendo a las propiedades .sentences y .words. Imprimimos por pantalla:
End of explanation
print("el texto de ejemplo contiene", len(t.noun_phrases), "entidades")
for element in t.noun_phrases:
print("-", element)
# jugando con lemas, singulares y plurales
for word in t.words:
if word.endswith("s"):
print(word.lemmatize(), word, word.singularize())
else:
print(word.lemmatize(), word, word.pluralize())
# ¿cómo podemos hacer la lematización más inteligente?
for item in t.tags:
if item[1] == 'NN':
print(item[0], '-->', item[0].pluralize())
elif item[1] == 'NNS':
print(item[0], '-->', item[0].singularize())
else:
print(item[0], item[0].lemmatize())
Explanation: La propiedad .noun_phrases nos permite acceder a la lista de entidades (en realidad, son sintagmas nominales) incluídos en nuestro textblob. Así es como funciona.
End of explanation
# análisis sintáctico
print(t.parse())
Explanation: Análisis sintático
Aunque podemos utilizar otros analizadores, por defecto el método .parse() invoca al analizador morfosintáctico del módulo pattern.en que ya conoces.
End of explanation
# de chino a inglés y español
oracion_zh = "中国探月工程 亦稱嫦娥工程,是中国启动的第一个探月工程,于2003年3月1日正式启动"
t_zh = TextBlob(oracion_zh)
print(t_zh.translate(from_lang="zh-CN", to="en"))
print(t_zh.translate(from_lang="zh-CN", to="es"))
print("--------------")
t_es = TextBlob(u"La deuda pública ha marcado nuevos récords en España en el tercer trimestre")
print(t_es.translate(to="el"))
print(t_es.translate(to="ru"))
print(t_es.translate(to="eu"))
print(t_es.translate(to="fi"))
print(t_es.translate(to="fr"))
print(t_es.translate(to="nl"))
print(t_es.translate(to="gl"))
print(t_es.translate(to="ca"))
print(t_es.translate(to="zh"))
print(t_es.translate(to="la"))
print(t_es.translate(to="cs"))
# con el slang no funciona tan bien
print("--------------")
t_ita = TextBlob(u"Sono andato a Milano e mi sono divertito un bordello.")
print(t_ita.translate(to="en"))
print(t_ita.translate(to="es"))
Explanation: Traducción automática
A partir de cualquier texto procesado con TextBlob, podemos acceder a un traductor automático de bastante calidad con el método .translate. Fíjate en cómo lo usamos. Es obligatorio indicar la lengua de destinto. La lengua de origen, se puede predecir a partir del texto de entrada.
End of explanation
# WordNet
from textblob import Word
from textblob.wordnet import VERB
# ¿cuántos synsets tiene "car"
word = Word("car")
print(word.synsets)
# dame los synsets de la palabra "hack" como verbo
print(Word("hack").get_synsets(pos=VERB))
# imprime la lista de definiciones de "car"
print(Word("car").definitions)
# recorre la jerarquía de hiperónimos
for s in word.synsets:
print(s.hypernym_paths())
Explanation: WordNet
textblob, más concretamente, cualquier objeto de la clase Word, nos permite acceder a la información de WordNet.
End of explanation
# análisis de opinión
opinion1 = TextBlob("This new restaurant is great. I had so much fun!! :-P")
print(opinion1.sentiment)
opinion2 = TextBlob("Google News to close in Spain.")
print(opinion2.sentiment)
# subjetividad 0:1
# polaridad -1:1
print(opinion1.sentiment.polarity)
if opinion1.sentiment.subjectivity > 0.5:
print("Hey, esto es una opinion")
t = TextBlob("I like this restaurant")
print(t.sentiment)
t = TextBlob("I love this restaurant")
print(t.sentiment)
t = TextBlob("I fucking love this restaurant ")
print(t.sentiment)
t = TextBlob("I fucking love this restaurant :-) ")
print(t.sentiment)
t = TextBlob("I love this FUCKING restaurant :-( Grrr!! ")
print(t.sentiment)
Explanation: Sentiment analysis
End of explanation
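# Not in the original tutorial: textblob also ships a corpus-trained sentiment
# analyzer (NaiveBayesAnalyzer), which returns a class label plus probabilities.
# Its first use trains on NLTK's movie_reviews corpus, so it can take a moment.
from textblob.sentiments import NaiveBayesAnalyzer
nb_blob = TextBlob("I love this restaurant, the food was amazing!", analyzer=NaiveBayesAnalyzer())
print(nb_blob.sentiment)   # Sentiment(classification='pos', p_pos=..., p_neg=...)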
# corrección ortográfica
b1 = TextBlob("I havv goood speling!")
print(b1.correct())
b2 = TextBlob("Miy naem iz Jonh!")
print(b2.correct())
b3 = TextBlob("Boyz dont cri")
print(b3.correct())
b4 = TextBlob("psicological posesion achifmen comitment")
print(b4.correct())
Explanation: Other handy features
End of explanation |
3,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Plots
Step1: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https
Step2: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}i)=\hat{\sigma}^2_i(1-h{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j}^{n}\;\hat{\epsilon}^2_{j} \;\;\; \forall \;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
Step3: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
Step4: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
Step5: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
Step6: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
Step7: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
Step8: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
Step9: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
Step10: Statewide Crime 2009 Dataset
Compare the following to http
Step11: Partial Regression Plots (Crime Data)
Step12: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
Step13: Influence Plot
Step14: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this examples.
Step15: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888) | Python Code:
%matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
Explanation: Regression Plots
End of explanation
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j}^{n}\;\hat{\epsilon}^2_{j} \;\;\; \forall \;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
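# Not in the original notebook: the quantities behind the influence plot are also
# available numerically via the results object's influence methods.
infl = prestige_model.get_influence()
print(infl.summary_frame().sort_values("cooks_d", ascending=False).head())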
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
End of explanation
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], data=dta)
fig.tight_layout(pad=1.0)
Explanation: Partial Regression Plots (Crime Data)
End of explanation
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
Explanation: Influence Plot
End of explanation
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation |
3,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='../reference/logo.png' width=12% style="float
Step1: * 用微信App扫QR码图片来自动登录
https
Step2: * 查找指定联系人或群组
使用search_friends方法可以搜索用户,有几种搜索方式:
1.仅获取自己的用户信息
2.获取昵称'NickName'、微信号'Alias'、备注名'RemarkName'中的任何一项等于name键值的用户
3.获取分别对应相应键值的用户
Step3: * 自定义复杂消息处理,例如:信息存档、回复群组中被@的消息 | Python Code:
# from __future__ import unicode_literals, division
# import time, datetime, requests
import itchat
from itchat.content import *
Explanation: 如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='../reference/logo.png' width=12% style="float: right;">
<img src='../reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
http://www.KudosData.com
by: [email protected]
April 2017 ========== Scan the QR code to become trainer's friend in WeChat ========>>
第一课:使用微信问答机制
Lesson 1: Basic usage of WeChat Python API
使用和开发微信个人号聊天机器人:一种Python编程接口 (Use WeChat Python API)
用微信App扫QR码图片来自动登录 (Log-in, contact scan, and processing of text, image, file, video, etc)
查找指定联系人或群组 (Scan ccontact list)
发送信息(文字、图片、文件、音频、视频等) (Send message: text, image, file, voice, video, etc)
接收信息 (Receive message, and keep 'listening')
自动回复 (Receive message and then automaticaly reply)
自定义复杂消息处理,例如:信息存档、回复群组中被@的消息 (Advanced message processing and reply)
导入需要用到的一些功能程序库:
End of explanation
# Running in Jupyter Notebook:
# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。
# or
# itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: Jupyter Notebook 命令行显示QR图片
# Running in Terminal:
itchat.auto_login(enableCmdQR=2) # enableCmdQR=2: 命令行显示QR图片
Explanation: * Log in automatically by scanning the QR code image with the WeChat app
https://itchat.readthedocs.io/zh/latest/
Command-line QR code
The following command displays the QR code on the command line at login time:
itchat.auto_login(enableCmdQR=True)
On some systems the character width may differ; you can adjust it by setting enableCmdQR to a specific multiplier:
itchat.auto_login(enableCmdQR=2)  # e.g. on some Linux systems a block character is one character wide (normally two), so use 2
The default assumes a dark (black) console background; if the background is light (white), set enableCmdQR to a negative value:
itchat.auto_login(enableCmdQR=-1)
Caching the login state after the program exits
Logging in with the following command caches the login state, so even if the program is closed, reopening it within a certain time does not require scanning the QR code again:
itchat.auto_login(hotReload=True)
End of explanation
friend = itchat.search_friends()
# print(friend)
print('NickName : %s' % friend['NickName'])
print('Alias A-ID: %s' % friend['Alias'])
print('RemarkName: %s' % friend['RemarkName'])
print('UserName : %s' % friend['UserName'])
print()
print(u'[ WeChat Software Robot 微信人工智能助手 ] Copyright © 2018 GU Zhan (Sam) SOME RIGHTS RESERVED')
print()
print(u'[ Functions 演示功能介绍 ]')
print(u'[ 1 ] 如果收到[TEXT, MAP, CARD, NOTE, SHARING]类的信息,会自动回复')
print(u' @itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING]) # 文字、位置、名片、通知、分享')
print(u'[ 2 ] 如果收到[PICTURE, RECORDING, ATTACHMENT, VIDEO]类的信息,会自动保存')
print(u' @itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO]) # 图片、语音、文件、视频')
print(u'[ 3 ] 如果收到新朋友的请求,会自动通过验证添加加好友,并主动打个招呼:幸会幸会!Nice to meet you!')
print(u' @itchat.msg_register(FRIENDS)')
print(u'[ 4 ] 在群里,如果收到@自己的文字信息,会自动回复')
print(u' @itchat.msg_register(TEXT, isGroupChat=True)')
print()
print(u'[ Source Code 源代码 ] https://github.com/telescopeuser/workshop_blog')
print()
Explanation: * Find a specific contact or group
The search_friends method searches for users in several ways:
1. With no arguments, it returns your own user info.
2. With the name argument, it returns users whose nickname 'NickName', WeChat ID 'Alias', or remark name 'RemarkName' equals the given value.
3. It can also match users on each of those keys individually.
End of explanation
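# A hedged extra example (not part of the original workshop): the other search modes
# described above, plus a test message to yourself via WeChat's built-in 'filehelper'
# account. The nickname below is a placeholder, not a real contact.
candidates = itchat.search_friends(name='Sam Gu')  # match NickName / Alias / RemarkName
print(candidates)
itchat.send(u'Hello from itchat! (test message)', toUserName='filehelper')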
# 如果收到[TEXT, MAP, CARD, NOTE, SHARING]类的信息,会自动回复:
@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING]) # 文字、位置、名片、通知、分享
def text_reply(msg):
print(u'[ Terminal Info ] 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s'
% (msg['Type'], msg['Text'], msg['FromUserName']))
itchat.send(u'谢谢亲[嘴唇]我已收到\nI received:\n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
# 如果收到[PICTURE, RECORDING, ATTACHMENT, VIDEO]类的信息,会自动保存:
@itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO]) # 图片、语音、文件、视频
def download_files(msg):
msg['Text'](msg['FileName'])
print(u'[ Terminal Info ] 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s'
% ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'], msg['FromUserName']))
itchat.send(u'谢谢亲[嘴唇]我已收到\nI received:', msg['FromUserName'])
return '@%s@%s' % ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'])
# 如果收到新朋友的请求,会自动通过验证添加加好友,并主动打个招呼:幸会幸会!Nice to meet you!
@itchat.msg_register(FRIENDS)
def add_friend(msg):
print(u'[ Terminal Info ] 新朋友的请求,自动通过验证添加加好友 From: %s' % msg['RecommendInfo']['UserName'])
itchat.add_friend(**msg['Text']) # 该操作会自动将新好友的消息录入,不需要重载通讯录
itchat.send_msg(u'幸会幸会!Nice to meet you!', msg['RecommendInfo']['UserName'])
# 在群里,如果收到@自己的文字信息,会自动回复:
@itchat.msg_register(TEXT, isGroupChat=True)
def text_reply(msg):
if msg['isAt']:
print(u'[ Terminal Info ] 在群里收到@自己的文字信息: %s From: %s %s'
% (msg['Content'], msg['ActualNickName'], msg['FromUserName']))
itchat.send(u'@%s\u2005I received: %s' % (msg['ActualNickName'], msg['Content']), msg['FromUserName'])
itchat.run()
# interupt, then logout
# itchat.logout() # 安全退出
Explanation: * 自定义复杂消息处理,例如:信息存档、回复群组中被@的消息
End of explanation |
3,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 26
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Step1: Case studies!
Bungee jumping
Suppose you want to set the world record for the highest "bungee dunk",
which is a stunt in which a bungee jumper dunks a cookie in a cup of tea
at the lowest point of a jump. An example is shown in this video
Step2: As before, Rmin is the minimum radius and Rmax is the maximum. L
is the length of the paper. Mcore is the mass of the cardboard tube at
the center of the roll; Mroll is the mass of the paper. tension is
the force applied by the kitten, in N. I chose a value that yields
plausible results.
At http
Step3: rho_h is the product of density and height, $\rho h$, which is the
mass per area. rho_h is computed in make_system
Step4: make_system also computes k using
Equation [eqn4]{reference-type="ref" reference="eqn4"}.
In the repository for this book, you will find a notebook,
kitten.ipynb, which contains starter code for this case study. Use it
to implement this model and check whether the results seem plausible.
Simulating a yo-yo
Suppose you are holding a yo-yo with a length of string wound around its
axle, and you drop it while holding the end of the string stationary. As
gravity accelerates the yo-yo downward, tension in the string exerts a
force upward. Since this force acts on a point offset from the center of
mass, it exerts a torque that causes the yo-yo to spin.
[Figure: yo-yo force diagram]
The figure is a
diagram of the forces on the yo-yo and the resulting torque. The outer
shaded area shows the body of the yo-yo. The inner shaded area shows the
rolled up string, the radius of which changes as the yo-yo unrolls.
In this model, we can't figure out the linear and angular acceleration
independently; we have to solve a system of equations | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 26
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
params = Params(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
t_end = 180 * s)
Explanation: Case studies!
Bungee jumping
Suppose you want to set the world record for the highest "bungee dunk",
which is a stunt in which a bungee jumper dunks a cookie in a cup of tea
at the lowest point of a jump. An example is shown in this video:
http://modsimpy.com/dunk.
Since the record is 70 m, let's design a jump for 80 m. We'll start with
the following modeling assumptions:
Initially the bungee cord hangs from a crane with the attachment
point 80 m above a cup of tea.
Until the cord is fully extended, it applies no force to the jumper.
It turns out this might not be a good assumption; we will revisit
it.
After the cord is fully extended, it obeys Hooke's Law; that is, it
applies a force to the jumper proportional to the extension of the
cord beyond its resting length. See http://modsimpy.com/hooke.
The mass of the jumper is 75 kg.
The jumper is subject to drag force so that their terminal velocity
is 60 m/s.
Our objective is to choose the length of the cord, L, and its spring
constant, k, so that the jumper falls all the way to the tea cup, but
no farther!
In the repository for this book, you will find a notebook,
bungee.ipynb, which contains starter code and exercises for this case
study.
Bungee dunk revisited
In the previous case study, we assume that the cord applies no force to
the jumper until it is stretched. It is tempting to say that the cord
has no effect because it falls along with the jumper, but that intuition
is incorrect. As the cord falls, it transfers energy to the jumper.
At http://modsimpy.com/bungee you'll find a paper[^1] that explains
this phenomenon and derives the acceleration of the jumper, $a$, as a
function of position, $y$, and velocity, $v$:
$$a = g + \frac{\mu v^2/2}{\mu(L+y) + 2L}$$ where $g$ is acceleration
due to gravity, $L$ is the length of the cord, and $\mu$ is the ratio of
the mass of the cord, $m$, and the mass of the jumper, $M$.
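As a concrete, hedged sketch (not from the book), the same acceleration could be wired into a
ModSim-style slope function; the parameter names m_cord, M_jumper, L, and g, and measuring $y$
as the distance fallen, are assumptions made only for illustration:
def drop_slope_func(t, state, system):
    y, v = state
    mu = system.m_cord / system.M_jumper
    a = system.g + (mu * v**2 / 2) / (mu * (system.L + y) + 2 * system.L)
    return v, a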
If you don't believe that their model is correct, this video might
convince you: http://modsimpy.com/drop.
In the repository for this book, you will find a notebook,
bungee2.ipynb, which contains starter code and exercises for this case
study. How does the behavior of the system change as we vary the mass of
the cord? When the mass of the cord equals the mass of the jumper, what
is the net effect on the lowest point in the jump?
Spider-Man
In this case study we'll develop a model of Spider-Man swinging from a
springy cable of webbing attached to the top of the Empire State
Building. Initially, Spider-Man is at the top of a nearby building, as
shown in the figure below.
[Figure: Spider-Man's initial position, with vectors H, P, and L]
The origin, O, is at the base of the Empire State Building. The vector
H represents the position where the webbing is attached to the
building, relative to O. The vector P is the position of Spider-Man
relative to O. And L is the vector from the attachment point to
Spider-Man.
By following the arrows from O, along H, and along L, we can see
that
H + L = P
So we can compute L like this:
L = P - H
The goals of this case study are:
Implement a model of this scenario to predict Spider-Man's
trajectory.
Choose the right time for Spider-Man to let go of the webbing in
order to maximize the distance he travels before landing.
Choose the best angle for Spider-Man to jump off the building, and
let go of the webbing, to maximize range.
We'll use the following parameters:
According to the Spider-Man Wiki[^2], Spider-Man weighs 76 kg.
Let's assume his terminal velocity is 60 m/s.
The length of the web is 100 m.
The initial angle of the web is 45 ° to the left of straight down.
The spring constant of the web is 40 N/m when the cord is stretched,
and 0 when it's compressed.
In the repository for this book, you will find a notebook,
spiderman.ipynb, which contains starter code. Read through the
notebook and run the code. It uses minimize, which is a SciPy function
that can search for an optimal set of parameters (as contrasted with
minimize_scalar, which can only search along a single axis).
Kittens
Let's simulate a kitten unrolling toilet paper. As reference material,
see this video: http://modsimpy.com/kitten.
The interactions of the kitten and the paper roll are complex. To keep
things simple, let's assume that the kitten pulls down on the free end
of the roll with constant force. Also, we will neglect the friction
between the roll and the axle.
[Figure: the paper roll, with radius $r$, force $F$, and torque $\tau$]
The figure
shows the paper roll with $r$, $F$, and $\tau$. As a vector quantity,
the direction of $\tau$ is into the page, but we only care about its
magnitude for now.
Here's the Params object with the parameters we'll need:
End of explanation
def moment_of_inertia(r, system):
Mcore, Rmin = system.Mcore, system.Rmin
rho_h = system.rho_h
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: As before, Rmin is the minimum radius and Rmax is the maximum. L
is the length of the paper. Mcore is the mass of the cardboard tube at
the center of the roll; Mroll is the mass of the paper. tension is
the force applied by the kitten, in N. I chose a value that yields
plausible results.
At http://modsimpy.com/moment you can find moments of inertia for
simple geometric shapes. I'll model the cardboard tube at the center of
the roll as a "thin cylindrical shell", and the paper roll as a
"thick-walled cylindrical tube with open ends".
The moment of inertia for a thin shell is just $m r^2$, where $m$ is the
mass and $r$ is the radius of the shell.
For a thick-walled tube the moment of inertia is
$$I = \frac{\pi \rho h}{2} (r_2^4 - r_1^4)$$ where $\rho$ is the density
of the material, $h$ is the height of the tube, $r_2$ is the outer
diameter, and $r_1$ is the inner diameter.
Since the outer diameter changes as the kitten unrolls the paper, we
have to compute the moment of inertia, at each point in time, as a
function of the current radius, r. Here's the function that does it:
End of explanation
def make_system(params):
L, Rmax, Rmin = params.L, params.Rmax, params.Rmin
Mroll = params.Mroll
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
return System(params, init=init, area=area,
rho_h=rho_h, k=k)
Explanation: rho_h is the product of density and height, $\rho h$, which is the
mass per area. rho_h is computed in make_system:
End of explanation
T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')
eq1 = Eq(a, -r * alpha)
eq2 = Eq(T - m*g, m * a)
eq3 = Eq(T * r, I * alpha)
soln = solve([eq1, eq2, eq3], [T, a, alpha])
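# Hedged follow-up (not in the book's text): for this linear system, SymPy's solve
# returns a dict keyed by symbol, so the solved expressions can be read off directly.
print(soln[a], soln[alpha])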
Explanation: make_system also computes k using
Equation [eqn4]{reference-type="ref" reference="eqn4"}.
In the repository for this book, you will find a notebook,
kitten.ipynb, which contains starter code for this case study. Use it
to implement this model and check whether the results seem plausible.
Simulating a yo-yo
Suppose you are holding a yo-yo with a length of string wound around its
axle, and you drop it while holding the end of the string stationary. As
gravity accelerates the yo-yo downward, tension in the string exerts a
force upward. Since this force acts on a point offset from the center of
mass, it exerts a torque that causes the yo-yo to spin.
[Figure: yo-yo force diagram]
The figure is a
diagram of the forces on the yo-yo and the resulting torque. The outer
shaded area shows the body of the yo-yo. The inner shaded area shows the
rolled up string, the radius of which changes as the yo-yo unrolls.
In this model, we can't figure out the linear and angular acceleration
independently; we have to solve a system of equations: $$\begin{aligned} \sum F &= m a \\ \sum \tau &= I \alpha \end{aligned}$$ where the summations indicate that
we are adding up forces and torques.
As in the previous examples, linear and angular velocity are related
because of the way the string unrolls:
$$\frac{dy}{dt} = -r \frac{d \theta}{dt}$$ In this example, the linear
and angular accelerations have opposite sign. As the yo-yo rotates
counter-clockwise, $\theta$ increases and $y$, which is the length of
the rolled part of the string, decreases.
Taking the derivative of both sides yields a similar relationship
between linear and angular acceleration:
$$\frac{d^2 y}{dt^2} = -r \frac{d^2 \theta}{dt^2}$$ Which we can write
more concisely: $$a = -r \alpha$$ This relationship is not a general law
of nature; it is specific to scenarios like this where one object rolls
along another without stretching or slipping.
Because of the way we've set up the problem, $y$ actually has two
meanings: it represents the length of the rolled string and the height
of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$
represents acceleration in the length of the rolled string and the
height of the yo-yo.
We can compute the acceleration of the yo-yo by adding up the linear
forces: $$\sum F = T - mg = ma$$ Where $T$ is positive because the
tension force points up, and $mg$ is negative because gravity points
down.
Because gravity acts on the center of mass, it creates no torque, so the
only torque is due to tension: $$\sum \tau = T r = I \alpha$$ Positive
(upward) tension yields positive (counter-clockwise) angular
acceleration.
Now we have three equations in three unknowns, $T$, $a$, and $\alpha$,
with $I$, $m$, $g$, and $r$ as known quantities. It is simple enough to
solve these equations by hand, but we can also get SymPy to do it for
us:
End of explanation |
3,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is part of the clifford documentation
Step1: Convert a complex number to a spinor
Step2: Convert a spinor to a complex number
Step3: Make sure we get what we started with when we make a round trip
Step4: The spinor is then mapped to a vector by choosing a reference direction. This may be done by left multiplying with $e_{1}$
.
$$Z\Longrightarrow e_{1}Z=e_{1}\alpha+\beta e_{1}e_{12}=\underbrace{\alpha e_{1}+\beta e_{2}}_{\mbox{vector}}$$
Step5: Geometrically, this is interpreted as having the spinor rotate a specific vector, in this case $e_1$. Building off of the previously defined functions
Step6: Depending on your applications, you may wish to have the bivector be an argument to the c2s and s2c functions.
This allows you to map input data given in the form of complex number onto the planes of your choice.
For example, in three dimensional space there are three bivector-planes; $e_{12}, e_{23}$ and $e_{13}$, so there are many bivectors which could be interpreted as the unit imaginary.
Complex numbers mapped in this way can be used to enact rotations within the specified planes.
Step7: This brings us to the subject of quaternions, which are used to handle rotations in three dimensions much like complex numbers do in two dimensions. With geometric algebra, they are just spinors acting in a different geometry.
Quaternions
<div class="alert alert-info">
**Note
Step8: This leads to the commutations relations familiar to quaternion users
Step9: Quaternion data could be stored in a variety of ways. Assuming you have the scalar components for the quaternion, all you will need to do is setup a map each component to the correct bivector.
Step10: Then all the quaternion computations can be done using GA
Step11: This prints $i,j$ and $k$ in reverse order but whatever,
Step12: quaternion conjugation is implemented with reversion
Step13: The norm
Step14: Taking the dual() of the "vector" part actually returns a vector,
Step15: If you want to keep using a left-handed frame and names like $i,j$ and $k$ to label the planes in 3D space, ok.
If you think it makes more sense to use the consistent and transparent approach provided by GA, we think you have good taste.
If we make this switch, the basis and q2S() functions will be changed to | Python Code:
import clifford as cf
layout, blades = cf.Cl(2) # instantiate a 2D- GA
locals().update(blades) # put all blades into local namespace
def c2s(z):
'''convert a complex number to a spinor'''
return z.real + z.imag*e12
def s2c(S):
'''convert a spinor to a complex number'''
S0 = float(S(0))
S2 = float(-S|e12)
return S0 + S2*1j
Explanation: This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.
Interfacing Other Mathematical Systems
Geometric Algebra is known as a universal algebra because it subsumes several other mathematical systems.
Two algebras commonly used by engineers and scientists are complex numbers and quaternions.
These algebras can be subsumed as the even sub-algebras of 2 and 3 dimensional geometric algebras, respectively.
This notebook demonstrates how clifford can be used to incorporate data created with these systems into geometric algebra.
Complex Numbers
Given a two dimensional GA with the orthonormal basis,
$$e_{i}\cdot e_{j}=\delta_{ij}$$
The geometric algebra consists of scalars, two vectors, and a bivector,
$${\quad\underbrace{\alpha,}{\mbox{scalar}}\qquad\underbrace{e{1},\qquad e_{2},}{\mbox{vector}}\qquad\underbrace{e{12}}_{\mbox{bivector}}\quad}$$
A complex number can be directly associated with a 2D spinor in the $e_{12}$-plane,
$$\underbrace{\mathbf{z}=\alpha+\beta i}{\mbox{complex number}}\quad\Longrightarrow\quad\underbrace{Z=\alpha+\beta e{12}}_{\mbox{2D spinor}}$$
The even subalgebra of a two dimensional geometric algebra is isomorphic to the complex numbers.
We can setup translating functions which converts a 2D spinor into a complex number and vice-versa. In two dimensions the spinor can be also be mapped into vectors if desired.
Below is an illustration of the three different planes, the later two being contained within the geometric algebra of two dimensions, $G_2$.
Both spinors and vectors in $G_2$ can be modeled as points on a plane, but they have distinct algebraic properties.
End of explanation
c2s(1+2j)
Explanation: Convert a complex number to a spinor
End of explanation
s2c(1+2*e12)
Explanation: Convert a spinor to a complex number
End of explanation
s2c(c2s(1+2j)) == 1+2j
Explanation: Make sure we get what we started with when we make a round trip
End of explanation
s = 1+2*e12
e1*s
Explanation: The spinor is then mapped to a vector by choosing a reference direction. This may be done by left multiplying with $e_{1}$
.
$$Z\Longrightarrow e_{1}Z=e_{1}\alpha+\beta e_{1}e_{12}=\underbrace{\alpha e_{1}+\beta e_{2}}_{\mbox{vector}}$$
End of explanation
def c2v(c):
'''convert a complex number to a vector'''
return e1*c2s(c)
def v2c(v):
'''convert a vector to a complex number'''
return s2c(e1*v)
c2v(1+2j)
v2c(1*e1+2*e2)
Explanation: Geometrically, this is interpreted as having the spinor rotate a specific vector, in this case $e_1$. Building off of the previously defined functions
End of explanation
import clifford as cf
layout, blades = cf.Cl(3)
locals().update(blades)
def c2s(z,B):
'''convert a complex number to a spinor'''
return z.real + z.imag*B
def s2c(S,B):
'''convert a spinor to a complex number'''
S0 = float(S(0))
S2 = float(-S|B)
return S0 + S2*1j
c2s(1+2j, e23)
c2s(3+4j, e13)
Explanation: Depending on your applications, you may wish to have the bivector be an argument to the c2s and s2c functions.
This allows you to map input data given in the form of complex number onto the planes of your choice.
For example, in three dimensional space there are three bivector-planes; $e_{12}, e_{23}$ and $e_{13}$, so there are many bivectors which could be interpreted as the unit imaginary.
Complex numbers mapped in this way can be used to enact rotations within the specified planes.
End of explanation
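# A hedged illustration (not part of the clifford docs themselves): a complex number
# mapped onto the e12 plane acts as a rotor. Rotating e1 by 90 degrees should give e2;
# the angle and the exp(-B*theta/2) sign convention are assumptions made here.
from math import cos, sin, pi
theta = pi / 2
R = c2s(cos(theta / 2) - 1j * sin(theta / 2), e12)  # rotor for the e12 plane
R * e1 * ~R  # expected: e2 (up to floating-point precision)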
import clifford as cf
# the vector/bivector order is reversed because Hamilton defined quaternions using a
# left-handed frame. wtf.
names = ['', 'z', 'y', 'x', 'k', 'j', 'i', 'I']
layout, blades = cf.Cl(3, names=names)
locals().update(blades)
Explanation: This brings us to the subject of quaternions, which are used to handle rotations in three dimensions much like complex numbers do in two dimensions. With geometric algebra, they are just spinors acting in a different geometry.
Quaternions
<div class="alert alert-info">
**Note:**
There is support for quaternions in numpy through the package [quaternion](https://github.com/moble/quaternion).
</div>
For some reason people think quaternions (wiki page) are mystical or something.
They are just spinors in a three dimensional geometric algebra.
In either case, we can pass the names parameters to Cl() to explicitly label the bivectors i,j, and k.
End of explanation
for m in [i, j, k]:
for n in [i, j, k]:
print ('{}*{}={}'.format(m, n, m*n))
Explanation: This leads to the commutations relations familiar to quaternion users
End of explanation
def q2S(*args):
'''convert tuple of quaternion coefficients to a spinor'''
q = args
return q[0] + q[1]*i + q[2]*j + q[3]*k
Explanation: Quaternion data could be stored in a variety of ways. Assuming you have the scalar components for the quaternion, all you will need to do is setup a map each component to the correct bivector.
End of explanation
q1 = q2S(1,2,3,4)
q1
Explanation: Then all the quaternion computations can be done using GA
End of explanation
# 'scalar' part
q1(0)
# 'vector' part (more like bivector part!)
q1(2)
Explanation: This prints $i,j$ and $k$ in reverse order but whatever,
End of explanation
~q1
Explanation: quaternion conjugation is implemented with reversion
End of explanation
abs(q1)
Explanation: The norm
End of explanation
q1(2).dual()
q1 = q2S(1, 2, 3, 4)
q2 = q2S(5, 6, 7, 8)
# quaternion product
q1*q2
Explanation: Taking the dual() of the "vector" part actually returns a vector,
End of explanation
import clifford as cf
layout, blades = cf.Cl(3)
locals().update(blades)
blades
def q2S(*args):
'''
convert tuple of quaternion coefficients to a spinor'''
q = args
return q[0] + q[1]*e13 +q[2]*e23 + q[3]*e12
q1 = q2S(1,2,3,4)
q1
Explanation: If you want to keep using a left-handed frame and names like $i,j$ and $k$ to label the planes in 3D space, ok.
If you think it makes more sense to use the consistent and transparent approach provided by GA, we think you have good taste.
If we make this switch, the basis and q2S() functions will be changed to
End of explanation |
3,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EventVestor
Step1: Let's go over the columns
Step2: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API.
There are a few factors available using the M&A dataset through Pipeline. They allow you to identify securities that are the current target of an acquisition. You can also view the payment mode used in the offer as well as the number of business days since the offer was made.
Step5: Filtering out ANNOUNCED targets
The following code below shows you how to filter out targets of acquisitions.
Step9: Filtering out PROPOSED targets
If you'd also like to filter out proposed targets, please view below | Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import mergers_and_acquisitions_free as dataset
# or if you want to import the free dataset, use:
#from quantopian.data.eventvestor import buyback_auth_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
dataset.asof_date.min()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
dataset.is_crossboarder.distinct()
Explanation: EventVestor: Mergers and Acquisitions
In this notebook, we'll take a look at EventVestor's Mergers and Acquisitions dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Free samples and limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase acess to the full set.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a pandas DataFrame using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
# get the sid for MSFT
symbols('MSFT')
# knowing that the MSFT sid is 5061:
msft = dataset[dataset.sid==5061]
msft[:5]
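# A hedged extra example (not in the original notebook): Blaze expressions can be
# filtered on any column and pulled into pandas with odo. The deal_amount threshold
# below is arbitrary and only for illustration.
big_deals = dataset[dataset.deal_amount > 1000]
odo(big_deals[:10], pd.DataFrame)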
Explanation: Let's go over the columns:
- event_id: the unique identifier for this M&A event.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: the category of event captured by EventVestor for this dataset (here, mergers and acquisitions).
- event_headline: a short description of the event.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
- news_type: the type of news - Announcement, Close, Proposal, Termination, Rumor, Rejection, None
- firm_type: either Target or Acquirer
- payment_mode: the type of offer made - Mixed Offer, Cash Offer, Other, Stock Offer, None
- target_type: Public, Private, PE Holding, VC Funded, None
- is_crossboarder: None, National, Other, Cross Border
- deal_amount, deal_currency: the amount of the deal and its corresponding currency
- related_ticker: if present, this indicates the ticker being acquired or that is acquiring
- price_pershare, premium_pct: the price per share and the premium paid
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all entries for Microsoft and display the first few rows.
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API.
There are a few factors available using the M&A dataset through Pipeline. They allow you to identify securities that are the current target of an acquisition. You can also view the payment mode used in the offer as well as the number of business days since the offer was made.
End of explanation
from quantopian.pipeline.classifiers.eventvestor import (
AnnouncedAcqTargetType,
ProposedAcqTargetType,
)
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysSinceAnnouncedAcquisition,
BusinessDaysSinceProposedAcquisition
)
from quantopian.pipeline.filters.eventvestor import (
IsAnnouncedAcqTarget
)
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
def screen_ma_targets_by_type(target_type='cash'):
target_type:
(string) Available options are 'cash', 'stock', 'mixed', 'all'.
This will filter all offers of type target_type.
if target_type == 'all':
return (~IsAnnouncedAcqTarget())
else:
if target_type == 'cash':
filter_offer = 'Cash Offer'
elif target_type == 'stock':
filter_offer = 'Stock Offer'
elif target_type == 'mixed':
filter_offer = 'Mixed Offer'
return (~AnnouncedAcqTargetType().eq(filter_offer))
def screen_ma_targets_by_days(days=200):
days:
(int) Filters out securities that have had an announcement
less than X days. So if days is 200, all securities
that have had an announcement less than 200 days ago will be
filtered out.
b_days = BusinessDaysSinceAnnouncedAcquisition()
return ((b_days > days) | b_days.isnull())
pipe = Pipeline(
columns={
'AnnouncedAcqTargetType': AnnouncedAcqTargetType(),
'BusinessDays': BusinessDaysSinceAnnouncedAcquisition()
},
screen=(screen_ma_targets_by_days(60) &
screen_ma_targets_by_type(target_type='stock'))
)
output = run_pipeline(pipe, start_date='2016-07-28', end_date='2016-07-28')
Explanation: Filtering out ANNOUNCED targets
The following code below shows you how to filter out targets of acquisitions.
End of explanation
Similar functions for M&A Proposals (different from Announcements)
def screen_ma_proposal_targets_by_type(target_type='cash'):
target_type:
(string) Available options are 'cash', 'stock', 'mixed', 'all'.
This will filter all offers of type target_type.
if target_type == 'all':
return (ProposedAcqTargetType().isnull() &
BusinessDaysSinceProposedAcquisition().isnull())
if target_type == 'cash':
filter_offer = 'Cash Offer'
elif target_type == 'stock':
filter_offer = 'Stock Offer'
elif target_type == 'mixed':
filter_offer = 'Mixed Offer'
return (~ProposedAcqTargetType().eq(filter_offer))
def screen_ma_proposal_targets_by_days(days=200):
days:
(int) Filters out securities that have had an announcement
less than X days. So if days is 200, all securities
that have had an announcement less than 200 days ago will be
filtered out.
b_days = BusinessDaysSinceProposedAcquisition()
return ((b_days > days) | b_days.isnull())
Explanation: Filtering out PROPOSED targets
If you'd also like to filter out proposed targets, please view below
End of explanation |
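# A hedged sketch (not in the original): the proposal-based screens defined above can
# be combined with the announcement-based ones in a single pipeline. The dates reused
# here simply mirror the earlier example.
pipe_proposals = Pipeline(
    columns={
        'ProposedAcqTargetType': ProposedAcqTargetType(),
        'BusinessDaysSinceProposal': BusinessDaysSinceProposedAcquisition()
    },
    screen=(screen_ma_proposal_targets_by_days(60) &
            screen_ma_proposal_targets_by_type(target_type='cash') &
            screen_ma_targets_by_days(60))
)
proposal_output = run_pipeline(pipe_proposals, start_date='2016-07-28', end_date='2016-07-28')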
3,756 | Given the following text description, write Python code to implement the functionality described.
Description:
How to swap two bits in a given integer ?
This function swaps bit at positions p1 and p2 in an integer n ; Move p1 'th to rightmost side ; Move p2 'th to rightmost side ; XOR the two bits ; Put the xor bit back to their original positions ; XOR ' x ' with the original number so that the two sets are swapped ; Driver program to test above function
| Python Code:
def swapBits(n, p1, p2):
    # move the p1'th and p2'th bits to the rightmost position
    bit1 = (n >> p1) & 1
    bit2 = (n >> p2) & 1
    # XOR the two bits; x is 1 only if they differ
    x = (bit1 ^ bit2)
    # put the XOR bit back at both original positions
    x = (x << p1) | (x << p2)
    # XOR 'x' with the original number so that the two bits are swapped
    result = n ^ x
    return result

if __name__ == '__main__':
    res = swapBits(28, 0, 3)
    print("Result =", res)
|
3,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding Hyperbole using RSA
"My new kettle cost a million dollars."
Hyperbole -- using an exaggerated utterance to convey strong opinions -- is a common non-literal use of language. Yet non-literal uses of language are impossible under the simplest RSA model. Kao et al. suggested that two ingredients could be added to enable RSA to capture hyperbole. First, the state conveyed by the speaker and reasoned about by the listener should include affective dimensions. Second, the speaker only intends to convey information relevant to a particular topic, such as "how expensive was it?" or "how am I feeling about the price?"; pragmatic listeners hence jointly reason about this topic and the state.
Step1: As in the simple RSA example, the inference helper Marginal takes an un-normalized stochastic function, constructs the distribution over execution traces by using Search, and constructs the marginal distribution on return values (via HashingMarginal).
Step2: The domain for this example will be states consisting of price (e.g. of a tea kettle) and the speaker's emotional arousal (whether the speaker thinks this price is irritatingly expensive). Priors here are adapted from experimental data.
Step3: Now we define a version of the RSA speaker that only produces relevant information for the literal listener. We define relevance with respect to a Question Under Discussion (QUD) -- this can be thought of as defining the speaker's current attention or topic.
The speaker is defined mathematically by
Step4: The possible QUDs capture that the speaker may be attending to the price, her affect, or some combination of these. We assume a uniform QUD prior.
Step5: Now we specify the utterance meanings (standard number word denotations
Step6: OK, let's see what number term this speaker will say to express different states and QUDs.
Step7: Try different values above! When will the speaker favor non-literal utterances?
Finally, the pragmatic listener doesn't know what the QUD is and so jointly reasons about this and the state.
Step8: How does this listener interpret the uttered price "10,000"? On the one hand this is a very unlikely price a priori; on the other, if it were true it would come with strong arousal. Altogether this becomes a plausible hyperbolic utterance
Step9: Pragmatic Halo
"It cost fifty dollars" is often interpretted as costing around 50 -- plausibly 51; yet "it cost fiftyone dollars" is interpretted as 51 and definitely not 50. This assymetric imprecision is often called the pragmatic halo or pragmatic slack.
We can extend the hyperole model to capture this additional non-literal use of numbers by including QUD functions that collapse nearby numbers and assuming that round numbers are slightly more likely (because they are less difficult to utter).
Step10: The RSA speaker and listener definitions are unchanged
Step11: OK, let's see if we get the desired assymetric slack (we're only interested in the interpretted price here, so we marginalize out the arousal). | Python Code:
#first some imports
import torch
torch.set_default_dtype(torch.float64) # double precision for numerical stability
import collections
import argparse
import matplotlib.pyplot as plt
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from search_inference import HashingMarginal, memoize, Search
Explanation: Understanding Hyperbole using RSA
"My new kettle cost a million dollars."
Hyperbole -- using an exaggerated utterance to convey strong opinions -- is a common non-literal use of language. Yet non-literal uses of language are impossible under the simplest RSA model. Kao et al. suggested that two ingredients could be added to enable RSA to capture hyperbole. First, the state conveyed by the speaker and reasoned about by the listener should include affective dimensions. Second, the speaker only intends to convey information relevant to a particular topic, such as "how expensive was it?" or "how am I feeling about the price?"; pragmatic listeners hence jointly reason about this topic and the state.
End of explanation
def Marginal(fn):
return memoize(lambda *args: HashingMarginal(Search(fn).run(*args)))
Explanation: As in the simple RSA example, the inference helper Marginal takes an un-normalized stochastic function, constructs the distribution over execution traces by using Search, and constructs the marginal distribution on return values (via HashingMarginal).
End of explanation
State = collections.namedtuple("State", ["price", "arousal"])
def price_prior():
values = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
probs = torch.tensor([0.4205, 0.3865, 0.0533, 0.0538, 0.0223, 0.0211, 0.0112, 0.0111, 0.0083, 0.0120])
ix = pyro.sample("price", dist.Categorical(probs=probs))
return values[ix]
def arousal_prior(price):
probs = {
50: 0.3173,
51: 0.3173,
500: 0.7920,
501: 0.7920,
1000: 0.8933,
1001: 0.8933,
5000: 0.9524,
5001: 0.9524,
10000: 0.9864,
10001: 0.9864
}
return pyro.sample("arousal", dist.Bernoulli(probs=probs[price])).item() == 1
def state_prior():
price = price_prior()
state = State(price=price, arousal=arousal_prior(price))
return state
Explanation: The domain for this example will be states consisting of price (e.g. of a tea kettle) and the speaker's emotional arousal (whether the speaker thinks this price is irritatingly expensive). Priors here are adapted from experimental data.
End of explanation
@Marginal
def project(dist,qud):
v = pyro.sample("proj",dist)
return qud_fns[qud](v)
@Marginal
def literal_listener(utterance):
state=state_prior()
pyro.factor("literal_meaning", 0. if meaning(utterance, state.price) else -999999.)
return state
@Marginal
def speaker(state, qud):
alpha = 1.
qudValue = qud_fns[qud](state)
with poutine.scale(scale=torch.tensor(alpha)):
utterance = utterance_prior()
literal_marginal = literal_listener(utterance)
projected_literal = project(literal_marginal, qud)
pyro.sample("listener", projected_literal, obs=qudValue)
return utterance
Explanation: Now we define a version of the RSA speaker that only produces relevant information for the literal listener. We define relevance with respect to a Question Under Discussion (QUD) -- this can be thought of as defining the speaker's current attention or topic.
The speaker is defined mathematically by:
$$P_S(u|s,q) \propto \left[ \sum_{w'} \delta_{q(w')=q(w)} P_\text{Lit}(w'|u) p(u) \right]^\alpha $$
To implement this as a probabilistic program, we start with a helper function project, which takes a distribution over some (discrete) domain and a function qud on this domain. It creates the push-forward distribution, using Marginal (as a Python decorator). The speaker's relevant information is then simply information about the state in this projection.
End of explanation
#The QUD functions we consider:
qud_fns = {
"price": lambda state: State(price=state.price, arousal=None),
"arousal": lambda state: State(price=None, arousal=state.arousal),
"priceArousal": lambda state: State(price=state.price, arousal=state.arousal),
}
def qud_prior():
values = list(qud_fns.keys())
ix = pyro.sample("qud", dist.Categorical(probs=torch.ones(len(values)) / len(values)))
return values[ix]
Explanation: The possible QUDs capture that the speaker may be attending to the price, her affect, or some combination of these. We assume a uniform QUD prior.
End of explanation
def utterance_prior():
utterances = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
ix = pyro.sample("utterance", dist.Categorical(probs=torch.ones(len(utterances)) / len(utterances)))
return utterances[ix]
def meaning(utterance, price):
return utterance == price
Explanation: Now we specify the utterance meanings (standard number word denotations: "N" means exactly $N$) and a uniform utterance prior.
End of explanation
#silly plotting helper:
def plot_dist(d):
support = d.enumerate_support()
data = [d.log_prob(s).exp().item() for s in d.enumerate_support()]
names = support
ax = plt.subplot(111)
width=0.3
bins = list(map(lambda x: x-width/2,range(1,len(data)+1)))
ax.bar(bins,data,width=width)
ax.set_xticks(list(map(lambda x: x, range(1,len(data)+1))))
ax.set_xticklabels(names,rotation=45, rotation_mode="anchor", ha="right")
# plot_dist( speaker(State(price=50, arousal=False), "arousal") )
# plot_dist( speaker(State(price=50, arousal=True), "price") )
plot_dist( speaker(State(price=50, arousal=True), "arousal") )
Explanation: OK, let's see what number term this speaker will say to express different states and QUDs.
End of explanation
@Marginal
def pragmatic_listener(utterance):
state = state_prior()
qud = qud_prior()
speaker_marginal = speaker(state, qud)
pyro.sample("speaker", speaker_marginal, obs=utterance)
return state
Explanation: Try different values above! When will the speaker favor non-literal utterances?
Finally, the pragmatic listener doesn't know what the QUD is and so jointly reasons about this and the state.
End of explanation
plot_dist( pragmatic_listener(10000) )
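# A hedged follow-up (not in the original): the same marginalization trick used for
# prices in the halo section below can read off the posterior probability of high
# arousal given the utterance "10,000".
@Marginal
def arousal_posterior(utterance):
    return pyro.sample("ap", pragmatic_listener(utterance)).arousal
print(arousal_posterior(10000).log_prob(True).exp().item())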
Explanation: How does this listener interpret the uttered price "10,000"? On the one hand this is a very unlikely price a priori; on the other, if it were true it would come with strong arousal. Altogether this becomes a plausible hyperbolic utterance:
End of explanation
#A helper to round a number to the nearest ten:
def approx(x, b=None):
if b is None:
b = 10.
div = float(x)/b
rounded = int(div) + 1 if div - float(int(div)) >= 0.5 else int(div)
return int(b) * rounded
#The QUD functions we consider:
qud_fns = {
"price": lambda state: State(price=state.price, arousal=None),
"arousal": lambda state: State(price=None, arousal=state.arousal),
"priceArousal": lambda state: State(price=state.price, arousal=state.arousal),
"approxPrice": lambda state: State(price=approx(state.price), arousal=None),
"approxPriceArousal": lambda state: State(price=approx(state.price), arousal=state.arousal),
}
def qud_prior():
values = list(qud_fns.keys())
ix = pyro.sample("qud", dist.Categorical(probs=torch.ones(len(values)) / len(values)))
return values[ix]
def utterance_cost(numberUtt):
preciseNumberCost = 10.
return 0. if approx(numberUtt) == numberUtt else preciseNumberCost
def utterance_prior():
utterances = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
utteranceLogits = -torch.tensor(list(map(utterance_cost, utterances)),
dtype=torch.float64)
ix = pyro.sample("utterance", dist.Categorical(logits=utteranceLogits))
return utterances[ix]
Explanation: Pragmatic Halo
"It cost fifty dollars" is often interpretted as costing around 50 -- plausibly 51; yet "it cost fiftyone dollars" is interpretted as 51 and definitely not 50. This assymetric imprecision is often called the pragmatic halo or pragmatic slack.
We can extend the hyperole model to capture this additional non-literal use of numbers by including QUD functions that collapse nearby numbers and assuming that round numbers are slightly more likely (because they are less difficult to utter).
End of explanation
@Marginal
def literal_listener(utterance):
state=state_prior()
pyro.factor("literal_meaning", 0. if meaning(utterance, state.price) else -999999.)
return state
@Marginal
def speaker(state, qud):
alpha = 1.
qudValue = qud_fns[qud](state)
with poutine.scale(scale=torch.tensor(alpha)):
utterance = utterance_prior()
literal_marginal = literal_listener(utterance)
projected_literal = project(literal_marginal, qud)
pyro.sample("listener", projected_literal, obs=qudValue)
return utterance
@Marginal
def pragmatic_listener(utterance):
state = state_prior()
qud = qud_prior()
speaker_marginal = speaker(state, qud)
pyro.sample("speaker", speaker_marginal, obs=utterance)
return state
Explanation: The RSA speaker and listener definitions are unchanged:
End of explanation
@Marginal
def pragmatic_listener_price_marginal(utterance):
return pyro.sample("pm", pragmatic_listener(utterance)).price
plot_dist(pragmatic_listener_price_marginal(50))
plot_dist(pragmatic_listener_price_marginal(51))
Explanation: OK, let's see if we get the desired asymmetric slack (we're only interested in the interpreted price here, so we marginalize out the arousal).
End of explanation |
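# A hedged numeric summary (not in the original): compare the posterior mean
# interpreted price for the round utterance "50" versus the precise "51".
def expected_price(utterance):
    d = pragmatic_listener_price_marginal(utterance)
    return sum(s * d.log_prob(s).exp().item() for s in d.enumerate_support())
print(expected_price(50), expected_price(51))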
3,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Step1: Get the Data
Read in the advertising.csv file and set it to a data frame called ad_data.
Step2: Check the head of ad_data
Step3: Use info and describe() on ad_data
Step4: Exploratory Data Analysis
Let's use seaborn to explore the data!
Try recreating the plots shown below!
Create a histogram of the Age
Step5: Create a jointplot showing Area Income versus Age.
Step6: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.
Step7: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'
Step8: Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.
Step9: Logistic Regression
Now it's time to do a train test split, and train our model!
You'll have the freedom here to choose columns that you want to train on!
Split the data into training set and testing set using train_test_split
Step10: Train and fit a logistic regression model on the training set.
Step11: Predictions and Evaluations
Now predict values for the testing data.
Step12: Create a classification report for the model.
Step13: Great Job! | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Logistic Regression Project
In this project we will be working with a fake advertising data set, indicating whether or not a particular internet user clicked on an Advertisement. We will try to create a model that will predict whether or not they will click on an ad based off the features of that user.
This data set contains the following features:
'Daily Time Spent on Site': consumer time on site in minutes
'Age': cutomer age in years
'Area Income': Avg. Income of geographical area of consumer
'Daily Internet Usage': Avg. minutes a day consumer is on the internet
'Ad Topic Line': Headline of the advertisement
'City': City of consumer
'Male': Whether or not consumer was male
'Country': Country of consumer
'Timestamp': Time at which consumer clicked on Ad or closed window
'Clicked on Ad': 0 or 1 indicated clicking on Ad
Import Libraries
Import a few libraries you think you'll need (Or just import them as you go along!)
End of explanation
ad_data = pd.read_csv('advertising.csv')
Explanation: Get the Data
Read in the advertising.csv file and set it to a data frame called ad_data.
End of explanation
ad_data.head()
Explanation: Check the head of ad_data
End of explanation
ad_data.info()
ad_data.describe()
Explanation: Use info and describe() on ad_data
End of explanation
sns.distplot(ad_data['Age'],kde=False,bins=30,color='blue')
Explanation: Exploratory Data Analysis
Let's use seaborn to explore the data!
Try recreating the plots shown below!
Create a histogram of the Age
End of explanation
sns.jointplot(data=ad_data,x='Age',y='Area Income')
Explanation: Create a jointplot showing Area Income versus Age.
End of explanation
sns.jointplot(data=ad_data,x='Age',y='Daily Time Spent on Site',kind='kde')
Explanation: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.
End of explanation
sns.jointplot(data=ad_data,x='Daily Time Spent on Site',y='Daily Internet Usage',color='green')
Explanation: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'
End of explanation
sns.pairplot(ad_data,hue='Clicked on Ad')
Explanation: Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.
End of explanation
ad_data.columns
from sklearn.model_selection import train_test_split
X = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage','Male']]
y = ad_data['Clicked on Ad']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
Explanation: Logistic Regression
Now it's time to do a train test split, and train our model!
You'll have the freedom here to choose columns that you want to train on!
Split the data into training set and testing set using train_test_split
End of explanation
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
logmodel.coef_
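# A hedged readability tweak (not in the original): label each fitted coefficient
# with its feature name.
coef_df = pd.DataFrame(logmodel.coef_.T, index=X.columns, columns=['coefficient'])
print(coef_df)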
Explanation: Train and fit a logistic regression model on the training set.
End of explanation
predictions = logmodel.predict(X_test)
Explanation: Predictions and Evaluations
Now predict values for the testing data.
End of explanation
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
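# A hedged addition (not in the original): the confusion matrix shows the raw counts
# behind the precision and recall figures above.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, predictions))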
Explanation: Create a classification report for the model.
End of explanation
from collections import OrderedDict
d = OrderedDict({'Daily Time Spent on Site': 500, 'Age': 18, 'Area Income':23000,'Daily Internet Usage': 160,'Male': 1})
df = pd.DataFrame(d,index=[0])
sample_predict = logmodel.predict(df)
print(sample_predict)
Explanation: Great Job!
End of explanation |
3,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark version of wordcount examples
Prepare the pyspark environment.
Step1: Make sure your HDFS is still on and the input files (the three books) are still in the input folder.
Create the input RDD from the files on the HDFS (hdfs
Step2: Simple Word Count
Perform the counting, by flatMap, map, and reduceByKey.
Step3: Take the top 10 frequently used words
Step4: Pattern Matching WordCount
Read the pattern file into a set. (file
Step5: Perform the counting, by flatMap, filter, map, and reduceByKey.
Step6: Collect and show the results. | Python Code:
import findspark
import os
findspark.init('/home/ubuntu/shortcourse/spark-1.5.1-bin-hadoop2.6')
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("test").setMaster("local[2]")
sc = SparkContext(conf=conf)
Explanation: Spark version of wordcount examples
Prepare the pyspark environment.
End of explanation
lines = sc.textFile('hdfs://localhost:54310/user/ubuntu/input')
lines.count()
Explanation: Make sure your HDFS is still on and the input files (the three books) are still in the input folder.
Create the input RDD from the files on the HDFS (hdfs://localhost:54310/user/ubuntu/input).
End of explanation
from operator import add
counts = lines.flatMap(lambda x: x.split()).map(lambda x: (x, 1)).reduceByKey(add)
Explanation: Simple Word Count
Perform the counting, by flatMap, map, and reduceByKey.
End of explanation
counts.takeOrdered(10, lambda x: -x[1])
Explanation: Take the top 10 frequently used words
End of explanation
pattern = set()
f = open('/home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt')
for line in f:
words = line.split()
for word in words:
        pattern.add(word)
f.close()  # close the pattern file once the set has been built
Explanation: Pattern Matching WordCount
Read the pattern file into a set. (file: /home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt)
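A slightly safer sketch of the same step (assuming the same pattern file path) uses a context manager so the file is closed automatically:
with open('/home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt') as pattern_file:
    # build the same set of pattern words
    pattern = set(word for line in pattern_file for word in line.split())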
End of explanation
result = lines.flatMap(lambda x: x.split()).filter(lambda x: x in pattern).map(lambda x: (x, 1)).reduceByKey(add)
Explanation: Perform the counting, by flatMap, filter, map, and reduceByKey.
End of explanation
result.collect()
# stop the spark context
sc.stop()
Explanation: Collect and show the results.
End of explanation |
3,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reproducing the black hole discovery in Thompson et al. 2019
In this science demo tutorial, we will reproduce the results in Thompson et al. 2019, who found and followed-up a candidate stellar-mass black hole companion to a giant star in the Milky Way. We will first use The Joker to constrain the orbit of the system using the TRES follow-up radial velocity data released in their paper and show that we get consistent period and companion mass constraints from modeling these data. We will then do a joint analysis of the TRES and APOGEE data for this source by simultaneously fitting for and marginalizing over an unknown constant velocity offset between the two surveys.
A bunch of imports we will need later
Step3: Load the data
We will start by loading data, copy-pasted from Table S2 in Thompson et al. 2019
Step4: Let's now plot the data from these two instruments
Step5: Run The Joker with just the TRES data
The two data sets are separated by a large gap in observations between the end of APOGEE and the start of the RV follow-up with TRES. Since there are more observations with TRES, we will start by running The Joker with just data from TRES before using all of the data. Let's plot the TRES data alone
Step6: It is pretty clear that there is a periodic signal in the data, with a period between 10s to ~100 days (from eyeballing the plot above), so this limits the range of periods we need to sample over with The Joker below. The reported uncertainties on the individual RV measurements (plotted above, I swear) are all very small (typically smaller than the markers). So, we may want to allow for the fact that these could be under-estimated. With The Joker, we support this by accepting an additional nonlinear parameter, s, that specifies a global, extra uncertainty that is added in quadrature to the data uncertainties while running the sampler. That is, the uncertainties used for computing the likelihood in The Joker are computed as
Step7: With the prior set up, we can now generate prior samples, and run the rejection sampling step of The Joker
Step8: Only 1 sample is returned from the rejection sampling step - let's see how well it matches the data
Step9: Let's look at the values of the sample that was returned, and compare that to the values reported in Thompson et al. 2019, included below for convenience
Step10: Already these look very consistent with the values inferred in the paper!
Let's now also plot the data phase-folded on the period returned in the one sample we got from The Joker
Step11: At this point, since the data are very constraining, we could use this one Joker sample to initialize standard MCMC to generate posterior samplings in the orbital parameters for this system. We will do that below, but first let's see how things look if we include both TRES and APOGEE data in our modeling.
Run The Joker with TRES+APOGEE data
One of the challenges with incorporating data from the two surveys is that they were taken with two different spectrographs, and there could be instrumental offsets that manifest as shifts in the absolute radial velocities measured between the two instruments. The Joker now supports simultaneously sampling over additional parameters that represent instrumental or calibration offsets, so let's take a look at how to run The Joker in this mode.
To start, we can pack the two datasets into a single list that contains data from both surveys
Step12: Before we run anything, let's try phase-folding both datasets on the period value we got from running on the TRES data alone
Step13: That looks pretty good, but the period is clearly slightly off and there seems to be a constant velocity offset between the two surveys, given that the APOGEE RV points don't seem to lie in the RV curve. So, let's now try running The Joker on the joined dataset!
To allow for an unknown constant velocity offset between TRES and APOGEE, we have to define a new parameter for this offset and specify a prior. We'll put a Gaussian prior on this offset parameter (named dv0_1 below), with a mean of 0 and a standard deviation of 5 km/s, because it doesn't look like the surveys have a huge offset.
Step14: Here we again only get one sample back from The Joker, because these data are so constraining
Step15: Now, let's fire up standard MCMC, using the one Joker sample to initialize. We will use the NUTS sampler in pymc3 to run here. When running MCMC to model radial velocities with Keplerian orbits, it is typically important to think about the parametrization. There are several angle parameters in the two-body problem (e.g., argument of pericenter, phase, inclination, etc.) that can be especially hard to sample over naïvely. Here, for running MCMC, we will instead sample over $M_0 - \omega, \omega$ instead of $M_0, \omega$, and we will define these angles as pymc3_ext.distributions.Angle distributions, which internally transform and sample in $\cos{x}, \sin{x}$ instead
Step16: We can now use pymc3 to look at some statistics of the MC chains to assess convergence
Step17: We can then transform the MCMC samples back into a JokerSamples instance so we can manipulate and visualize the samples
Step18: For example, we can make a corner plot of the orbital parameters (note the strong degeneracy between M0 and omega! But also note that we don't sample in these parameters explicitly, so this shouldn't affect convergence)
Step19: We can also use the median MCMC sample to fold the data and plot residuals relative to our inferred RV model
Step20: Finally, let's convert our orbit samples into binary mass function, $f(M)$, values to compare with one of the main conclusions of the Thompson et al. paper. We can do this by first converting the samples to KeplerOrbit objects, and then using the .m_f attribute to get the binary mass function values | Python Code:
from astropy.io import ascii
from astropy.time import Time
import astropy.units as u
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pymc3 as pm
import pymc3_ext as pmx
import exoplanet.units as xu
import exoplanet as xo
import corner
import arviz as az
import thejoker as tj
from twobody.transforms import get_m2_min
# set up a random number generator to ensure reproducibility
seed = 42
rnd = np.random.default_rng(seed=seed)
Explanation: Reproducing the black hole discovery in Thompson et al. 2019
In this science demo tutorial, we will reproduce the results in Thompson et al. 2019, who found and followed-up a candidate stellar-mass black hole companion to a giant star in the Milky Way. We will first use The Joker to constrain the orbit of the system using the TRES follow-up radial velocity data released in their paper and show that we get consistent period and companion mass constraints from modeling these data. We will then do a joint analysis of the TRES and APOGEE data for this source by simultaneously fitting for and marginalizing over an unknown constant velocity offset between the two surveys.
A bunch of imports we will need later:
End of explanation
tres_tbl = ascii.read(
"""8006.97517 0.000 0.075
8023.98151 -43.313 0.075
8039.89955 -27.963 0.045
8051.98423 10.928 0.118
8070.99556 43.782 0.075
8099.80651 -30.033 0.054
8106.91698 -42.872 0.135
8112.81800 -44.863 0.088
8123.79627 -25.810 0.115
8136.59960 15.691 0.146
8143.78352 34.281 0.087""",
names=['HJD', 'rv', 'rv_err'])
tres_tbl['rv'].unit = u.km/u.s
tres_tbl['rv_err'].unit = u.km/u.s
apogee_tbl = ascii.read(
"""6204.95544 -37.417 0.011
6229.92499 34.846 0.010
6233.87715 42.567 0.010""",
names=['HJD', 'rv', 'rv_err'])
apogee_tbl['rv'].unit = u.km/u.s
apogee_tbl['rv_err'].unit = u.km/u.s
tres_data = tj.RVData(
t=Time(tres_tbl['HJD'] + 2450000, format='jd', scale='tcb'),
rv=u.Quantity(tres_tbl['rv']),
rv_err=u.Quantity(tres_tbl['rv_err']))
apogee_data = tj.RVData(
t=Time(apogee_tbl['HJD'] + 2450000, format='jd', scale='tcb'),
rv=u.Quantity(apogee_tbl['rv']),
rv_err=u.Quantity(apogee_tbl['rv_err']))
Explanation: Load the data
We will start by loading data, copy-pasted from Table S2 in Thompson et al. 2019:
End of explanation
for d, name in zip([tres_data, apogee_data], ['TRES', 'APOGEE']):
d.plot(color=None, label=name)
plt.legend(fontsize=18)
Explanation: Let's now plot the data from these two instruments:
End of explanation
_ = tres_data.plot()
Explanation: Run The Joker with just the TRES data
The two data sets are separated by a large gap in observations between the end of APOGEE and the start of the RV follow-up with TRES. Since there are more observations with TRES, we will start by running The Joker with just data from TRES before using all of the data. Let's plot the TRES data alone:
End of explanation
with pm.Model() as model:
# Allow extra error to account for under-estimated error bars
s = xu.with_unit(pm.Lognormal('s', -2, 1),
u.km/u.s)
prior = tj.JokerPrior.default(
P_min=16*u.day, P_max=128*u.day, # Range of periods to consider
sigma_K0=30*u.km/u.s, P0=1*u.year, # scale of the prior on semiamplitude, K
sigma_v=25*u.km/u.s, # std dev of the prior on the systemic velocity, v0
s=s
)
Explanation: It is pretty clear that there is a periodic signal in the data, with a period between 10s to ~100 days (from eyeballing the plot above), so this limits the range of periods we need to sample over with The Joker below. The reported uncertainties on the individual RV measurements (plotted above, I swear) are all very small (typically smaller than the markers). So, we may want to allow for the fact that these could be under-estimated. With The Joker, we support this by accepting an additional nonlinear parameter, s, that specifies a global, extra uncertainty that is added in quadrature to the data uncertainties while running the sampler. That is, the uncertainties used for computing the likelihood in The Joker are computed as:
$$
\sigma_n = \sqrt{\sigma_{n,0}^2 + s^2}
$$
where $\sigma_{n,0}$ are the values reported for each $n$ data point in the tables above. We'll use a log-normal prior on this extra error term, but will otherwise use the default prior form for The Joker:
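As a concrete sketch of that formula (illustrative numbers only, not values used by the sampler): combining a reported uncertainty of 0.075 km/s with an extra term of s = 0.2 km/s gives
import numpy as np
sigma_eff = np.sqrt(0.075**2 + 0.2**2)  # ≈ 0.21 km/s, dominated by s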
End of explanation
# Generate a large number of prior samples:
prior_samples = prior.sample(size=1_000_000,
random_state=rnd)
# Run rejection sampling with The Joker:
joker = tj.TheJoker(prior, random_state=rnd)
samples = joker.rejection_sample(tres_data, prior_samples,
max_posterior_samples=256)
samples
Explanation: With the prior set up, we can now generate prior samples, and run the rejection sampling step of The Joker:
End of explanation
_ = tj.plot_rv_curves(samples, data=tres_data)
Explanation: Only 1 sample is returned from the rejection sampling step - let's see how well it matches the data:
End of explanation
samples.tbl['P', 'e', 'K']
Explanation: Let's look at the values of the sample that was returned, and compare that to the values reported in Thompson et al. 2019, included below for convenience:
$$
P = 83.205 \pm 0.064\
e = 0.00476 \pm 0.00255\
K = 44.615 \pm 0.123
$$
End of explanation
_ = tres_data.plot(phase_fold=samples[0]['P'])
Explanation: Already these look very consistent with the values inferred in the paper!
Let's now also plot the data phase-folded on the period returned in the one sample we got from The Joker:
End of explanation
data = [apogee_data, tres_data]
Explanation: At this point, since the data are very constraining, we could use this one Joker sample to initialize standard MCMC to generate posterior samplings in the orbital parameters for this system. We will do that below, but first let's see how things look if we include both TRES and APOGEE data in our modeling.
Run The Joker with TRES+APOGEE data
One of the challenges with incorporating data from the two surveys is that they were taken with two different spectrographs, and there could be instrumental offsets that manifest as shifts in the absolute radial velocities measured between the two instruments. The Joker now supports simultaneously sampling over additional parameters that represent instrumental or calibration offsets, so let's take a look at how to run The Joker in this mode.
To start, we can pack the two datasets into a single list that contains data from both surveys:
End of explanation
tres_data.plot(color=None, phase_fold=np.mean(samples['P']))
apogee_data.plot(color=None, phase_fold=np.mean(samples['P']))
Explanation: Before we run anything, let's try phase-folding both datasets on the period value we got from running on the TRES data alone:
End of explanation
with pm.Model() as model:
# The parameter that represents the constant velocity offset between
# APOGEE and TRES:
dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 5.),
u.km/u.s)
# The same extra uncertainty parameter as previously defined
s = xu.with_unit(pm.Lognormal('s', -2, 1),
u.km/u.s)
# We can restrict the prior on prior now, using the above
prior_joint = tj.JokerPrior.default(
# P_min=16*u.day, P_max=128*u.day,
P_min=75*u.day, P_max=90*u.day,
sigma_K0=30*u.km/u.s, P0=1*u.year,
sigma_v=25*u.km/u.s,
v0_offsets=[dv0_1],
s=s
)
prior_samples_joint = prior_joint.sample(size=10_000_000,
random_state=rnd)
# Run rejection sampling with The Joker:
joker_joint = tj.TheJoker(prior_joint, random_state=rnd)
samples_joint = joker_joint.rejection_sample(data,
prior_samples_joint,
max_posterior_samples=256)
samples_joint
Explanation: That looks pretty good, but the period is clearly slightly off and there seems to be a constant velocity offset between the two surveys, given that the APOGEE RV points don't seem to lie in the RV curve. So, let's now try running The Joker on the joined dataset!
To allow for an unknown constant velocity offset between TRES and APOGEE, we have to define a new parameter for this offset and specify a prior. We'll put a Gaussian prior on this offset parameter (named dv0_1 below), with a mean of 0 and a standard deviation of 5 km/s, because it doesn't look like the surveys have a huge offset.
End of explanation
_ = tj.plot_rv_curves(samples_joint, data=data)
Explanation: Here we again only get one sample back from The Joker, because these data are so constraining:
End of explanation
from pymc3_ext.distributions import Angle
with pm.Model():
# See note above: when running MCMC, we will sample in the parameters
# (M0 - omega, omega) instead of (M0, omega)
M0_m_omega = xu.with_unit(Angle('M0_m_omega'), u.radian)
omega = xu.with_unit(Angle('omega'), u.radian)
# M0 = xu.with_unit(Angle('M0'), u.radian)
M0 = xu.with_unit(pm.Deterministic('M0', M0_m_omega + omega),
u.radian)
# The same offset and extra uncertainty parameters as above:
dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 5.), u.km/u.s)
s = xu.with_unit(pm.Lognormal('s', -2, 0.5),
u.km/u.s)
prior_mcmc = tj.JokerPrior.default(
P_min=16*u.day, P_max=128*u.day,
sigma_K0=30*u.km/u.s, P0=1*u.year,
sigma_v=25*u.km/u.s,
v0_offsets=[dv0_1],
s=s,
pars={'M0': M0, 'omega': omega}
)
joker_mcmc = tj.TheJoker(prior_mcmc, random_state=rnd)
mcmc_init = joker_mcmc.setup_mcmc(data, samples_joint)
trace = pmx.sample(
tune=500, draws=1000,
start=mcmc_init,
random_seed=seed,
cores=1, chains=2)
Explanation: Now, let's fire up standard MCMC, using the one Joker sample to initialize. We will use the NUTS sampler in pymc3 to run here. When running MCMC to model radial velocities with Keplerian orbits, it is typically important to think about the parametrization. There are several angle parameters in the two-body problem (e.g., argument of pericenter, phase, inclination, etc.) that can be especially hard to sample over naïvely. Here, for running MCMC, we will instead sample over $M_0 - \omega, \omega$ instead of $M_0, \omega$, and we will define these angles as pymc3_ext.distributions.Angle distributions, which internally transform and sample in $\cos{x}, \sin{x}$ instead:
End of explanation
az.summary(trace, var_names=prior_mcmc.par_names)
Explanation: We can now use pymc3 to look at some statistics of the MC chains to assess convergence:
End of explanation
mcmc_samples = joker_mcmc.trace_to_samples(trace, data=data)
mcmc_samples.wrap_K()
Explanation: We can then transform the MCMC samples back into a JokerSamples instance so we can manipulate and visualize the samples:
End of explanation
df = mcmc_samples.tbl.to_pandas()
_ = corner.corner(df)
Explanation: For example, we can make a corner plot of the orbital parameters (note the strong degeneracy between M0 and omega! But also note that we don't sample in these parameters explicitly, so this shouldn't affect convergence):
End of explanation
fig, axes = plt.subplots(2, 1, figsize=(6, 8), sharex=True)
_ = tj.plot_phase_fold(mcmc_samples.median(), data, ax=axes[0], add_labels=False)
_ = tj.plot_phase_fold(mcmc_samples.median(), data, ax=axes[1], residual=True)
for ax in axes:
ax.set_ylabel(f'RV [{apogee_data.rv.unit:latex_inline}]')
axes[1].axhline(0, zorder=-10, color='tab:green', alpha=0.5)
axes[1].set_ylim(-1, 1)
Explanation: We can also use the median MCMC sample to fold the data and plot residuals relative to our inferred RV model:
End of explanation
mfs = u.Quantity([mcmc_samples.get_orbit(i).m_f
for i in np.random.choice(len(mcmc_samples), 1024)])
plt.hist(mfs.to_value(u.Msun), bins=32);
plt.xlabel(rf'$f(M)$ [{u.Msun:latex_inline}]');
# Values from Thompson et al., showing 1-sigma region
plt.axvline(0.766, zorder=100, color='tab:orange')
plt.axvspan(0.766 - 0.00637,
0.766 + 0.00637,
zorder=10, color='tab:orange',
alpha=0.4, lw=0)
Explanation: Finally, let's convert our orbit samples into binary mass function, $f(M)$, values to compare with one of the main conclusions of the Thompson et al. paper. We can do this by first converting the samples to KeplerOrbit objects, and then using the .m_f attribute to get the binary mass function values:
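For reference, the quantity being computed here is the standard spectroscopic mass function (standard orbital mechanics, stated here for context rather than defined by this notebook):
$$
f(M) = \frac{M_2^3 \sin^3 i}{(M_1 + M_2)^2} = \frac{P \, K^3 \, (1 - e^2)^{3/2}}{2\pi G}
$$
so it depends only on the observables $P$, $K$, and $e$, and it sets a lower limit on the companion mass $M_2$.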
End of explanation |
3,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1
Step1: Comments
1. Put # at the beginning of the line
Step3: 2. Write comments as strings.
Strings can be embedded in the source code.
Even if you write strings that are never displayed, the script still runs normally.
Step4: 1-2. Variables
Assignment and reference
Assignment: putting a concrete value into a variable.
Reference: using the value assigned to a variable
Step5: Operations on numbers
The four arithmetic operations and the remainder
Step6: 1-3. Functions
A function is something that does the following three things.
- takes the number of arguments it was defined with
- performs some processing
- returns a return value
Function definition
python
def function_name(arg1, arg2, ...)
Step7: Default arguments
A parameter that does not have to be passed when the function is called.
If it is omitted, the value set in the function definition is assigned to it and the function runs with that value.
Step8: 1-4. Standard input
Step9: 1-5. Reading and writing files
Reading a file
Step10: Writing to a file | Python Code:
print("Hello")
Explanation: Chapter 1: Python Basics
Overview of Python
Interactive mode
Running scripts
How to write comments
Variables
Assignment and reference
Operations on numbers
Functions
Standard input
Reading and writing files
Exercises
1-1. Overview of Python
Interactive mode
Start the interpreter by typing the `python` command on the command line. In interactive mode, you can type a program and press the `Enter` key to run it immediately. To leave interactive mode, type and run `exit()` or press `Ctrl+D`.
Running scripts
Prepare a script whose file name has the `.py` extension, such as `filename.py`.
As an example, create `test.py`:
python
print("Hello")
Run this script as follows; it prints the result:
text
$ python test.py
Hello
End of explanation
print("ここは実行される")
#print("ここはコメントになる")
print("これも実行される")
Explanation: Comments
1. Put # at the beginning of the line
End of explanation
print("この文字列は表示される")
複数行のコメントを記入することができる.
例えば,どのようなスクリプトなのかをコメントとして書き込むことなどがある.
"一行のコメントは,このように書いたりもできる."
print("この文字列も表示される")
Explanation: 2. Write comments as strings.
Strings can be embedded in the source code.
Even if you write strings that are never displayed, the script still runs normally.
End of explanation
x = 0.9 # assign a value
y = 1 + 5 # assign the result of an expression
print(x+y)
message = "Hello" # strings work the same way
print(message)
Explanation: 1-2. Variables
Assignment and reference
Assignment: putting a concrete value into a variable.
Reference: using the value assigned to a variable
End of explanation
x = 11
y = 3
print(x+y, x-y, x*y, x/y, x%y) # addition, subtraction, multiplication, division, remainder
print(x//y) # floor division (the fractional part is discarded)
print(x**y) # x to the power of y
Explanation: Operations on numbers
The four arithmetic operations and the remainder
End of explanation
# a function that computes the product of x and y
def product(x, y):
z = x * y
return z
result = product(10, 20) # product(x=10, y=20) also works
print(result)
Explanation: 1-3. Functions
A function is something that does the following three things.
- takes the number of arguments it was defined with
- performs some processing
- returns a return value
Function definition
python
def function_name(arg1, arg2, ...):
    # processing
    return return_value
End of explanation
# give y a default argument value of 10
def product(x, y=10):
return x * y
x1 = 2
y1 = 3
print(product(x=x1, y=y1))
print(product(x=x1))
Explanation: Default arguments
A parameter that does not have to be passed when the function is called.
If it is omitted, the value set in the function definition is assigned to it and the function runs with that value.
End of explanation
text = input('standard input>> ')
print("input string: " + text)
Explanation: 1-4. Standard input
End of explanation
f = open('./data/file.txt', 'r') # open the file "file.txt" in read mode
result = f.readlines() # read all lines as a list of strings
f.close()
print(result)
Explanation: 1-5. Reading and writing files
Reading a file
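A common alternative, shown here as an extra sketch for the same file, is to let a with block close the file for you:
with open('./data/file.txt', 'r') as f:
    result = f.readlines()  # the file is closed automatically when the block ends
print(result)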
End of explanation
f = open('./data/file.txt', 'w') # open in write mode
f.write("Hello world\n") # write a string
f.close()
Explanation: Writing to a file
End of explanation |
3,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Digits
This notebook uses one-vs-all logistic regression and neural networks to recognize hand-written digits.
1 - Overview of the data set
The dataset contains 5000 training examples of handwritten digits. This is a subset of the MNIST handwritten digit dataset (http
Step1: Visualise the data
Each line of X is an array representing an image. You can visualize an example by running the following code. Feel free also to change the indexImage value and re-run to see other images.
Step2: 2 - Data preprocessing
Dataset pre-processing
Step3: The second part of the training set is a 5000-dimensional vector y that contains labels for the training set.
Step4: One problem
Step5: One hot encoding
Another problem
Step6: Split into train and test sets
Split into 20% of test and 80% of train sets
Step7: 3 - Deep Neural Network for Image Classification
Now we will build and apply a deep neural network to the problem.
Building the parts of our algorithm
The main steps for building a Neural Network are as usual
Step8: Build the 3-layer neural network
We will re-use all the helper functions defined previously to build the neural network, such as the linear forward and the backward propagation.
Please refer to the Python file nn_helpers.py for the details.
Step10: Now we can put together all the functions to build a 3-layer neural network with this structure
Step11: We will now train the model as a 3-layer neural network.
Run the cell below to train the model. The cost should decrease on every iteration. It may take up to 5 minutes to run 3500 iterations.
You can click on the square (⬛) on the upper bar of the notebook to stop the cell.
Step13: 4. Results analysis
Now we can check the performance of the trained network by predicting the results of the test set and comparing them with the actual labels.
Note that the predict() function has been adapted to cope with the multi-class labels.
Step15: 5 - Initializing parameters
There are two types of parameters to initialize in a neural network
Step17: 6 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization.
It consists of appropriately modifying your cost function, from
Step19: Of course, because we changed the cost, we have to change backward propagation as well!
All the gradients have to be computed with respect to this new cost
Step21: Putting all together
Step22: Let's check the new accuracy values
Step26: 7 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration.
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. | Python Code:
from scipy.io import loadmat
dataset = loadmat('../datasets/mnist-data.mat') # comes as dictionary
dataset.keys()
Explanation: Digits
This notebook uses one-vs-all logistic regression and neural networks to recognize hand-written digits.
1 - Overview of the data set
The dataset contains 5000 training examples of handwritten digits. This is a subset of the MNIST handwritten digit dataset (http://yann.lecun.com/exdb/mnist/).
Each training example is a 20 pixel by 20 pixel grayscale image of the digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location. The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector. Each of these training examples becomes a single row in our data matrix X. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image.
The original dataset has its own format and you need to write your own program to read it but this dataset has already been converted into the Matlab format for Andrew Ng's wonderful course of Machine Learning at Stanford.
Let's get more familiar with the dataset.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# Example of a picture
indexImage = 4000 # try any index between 0 and 4999. They are sorted, from 1 to 10 (=0)
renderImage = np.reshape(dataset['X'][indexImage], (20,20))
labelImage = dataset['y'][indexImage]
plt.imshow(renderImage, cmap='gray')
print ("Label: this is a ", labelImage)
Explanation: Visualise the data
Each line of X is an array representing an image. You can visualize an example by running the following code. Feel free also to change the indexImage value and re-run to see other images.
End of explanation
X = dataset['X'] # the images
X.shape
Explanation: 2 - Data preprocessing
Dataset pre-processing:
Figure out the dimensions and shapes of the problem
Split the dataset into training and test subsets
"Standardise" the data
End of explanation
y = dataset['y'] # the labels
y.shape
y[499]
Explanation: The second part of the training set is a 5000-dimensional vector y that contains labels for the training set.
End of explanation
list_y = [0 if i == 10 else i for i in y] # apply to each item in y
y = np.asarray(list_y)
y = y.reshape(-1,1)
y.shape
y[0:10] # verify that the label is now zero
Explanation: One problem: The label representing the digit 0 (zero) is coded as ten (10). Change this.
End of explanation
n_classes = 10 # 10 digits = 10 classes/labels
# np.eye(n) creates an identity matrix of shape (n,n)
OHE_y = np.eye(n_classes)[y.reshape(-1)]
OHE_y.shape
OHE_y[1000] # this is the new encoding for e.g. label = 2
Explanation: One hot encoding
Another problem: the original labels (in the variable y) are a number between 0, 1, 2, ..., 9.
For the purpose of training a neural network, we need to recode the labels as vectors containing only binary values 0 or 1.
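A quick sanity check (a small sketch using the arrays already defined above): decoding the one-hot rows with argmax should reproduce the original labels.
# np.argmax along axis 1 returns the position of the single 1 in each row
assert np.array_equal(np.argmax(OHE_y, axis=1), y.reshape(-1))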
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, OHE_y, test_size=0.2, random_state=7)
input_layer_size = X.shape[1]
num_px = np.sqrt(input_layer_size) # 400 = 20x20 Input Images of Digits
n_y = y_train.shape[1]
m_train = X_train.shape[0]
m_test = X_test.shape[0]
print ("Dataset dimensions:")
print ("Number of training examples = " + str(m_train))
print ("Number of testing examples = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: <" + str(num_px) + ", " + str(num_px) + ">")
print ("X train shape: " + str(X_train.shape))
print ("y train shape: " + str(y_train.shape))
Explanation: Split into train and test sets
Split into 20% of test and 80% of train sets
End of explanation
### CONSTANTS DEFINING THE MODEL ####
# we define a neural network with total 3 layers, x, y and 1 hidden:
n_h = 25
nn_layers = [input_layer_size, n_h, n_y] # length is 3 (layers)
Explanation: 3 - Deep Neural Network for Image Classification
Now we will build and apply a deep neural network to the problem.
Building the parts of our algorithm
The main steps for building a Neural Network are as usual:
1. Define the model structure (such as number and size of layers) and the hyperparameters
1. Initialize the model's weights
1. Loop for the number of iterations:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
1. Use the trained weights to predict the labels
Defining the neural network structure
Our neural network has 3 layers – an input layer, a hidden layer and an output layer.
Recall that our inputs are pixel values of digit images. Since the images are of size 20×20, this gives us 400 input layer units (excluding the extra bias unit which always outputs +1).
There are 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).
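As a quick sanity check on what those layer sizes imply (a sketch based on the nn_layers list defined above), the weight and bias shapes should come out as:
# nn_layers = [400, 25, 10]
# W1: (25, 400), b1: (25, 1)   input layer -> hidden layer
# W2: (10, 25),  b2: (10, 1)   hidden layer -> output layer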
End of explanation
from nn_helpers import *
# automatically reload the imported module in case of changes
%load_ext autoreload
%autoreload 2
Explanation: Build the 3-layer neural network
We will re-use all the helper functions defined previously to build the neural network, such as the linear forward and the backward propagation.
Please refer to the Python file nn_helpers.py for the details.
End of explanation
nn_layers
np.random.seed(1)
train_set_x = X_train.T
train_set_x.shape
# y is the original output array, with labels
# train_set_y is that set, one-hot-encoded
train_set_y = y_train.T
train_set_y.shape
# FUNCTION: L_layer_model
def simpleNN_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (number of examples, num_px * num_px)
Y -- true "label" vector (containing 0 or 1), of shape (10, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimisation loop
print_cost -- if True, it prints the cost every 200 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
costs = [] # keep track of cost
iterations2cost = 200 # Print the cost every these iterations
# Parameters initialization.
parameters = initialise_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
AL, caches = L_model_forward(X, parameters)
# Compute cost.
cost = compute_cost(AL, Y)
# Backward propagation.
grads = L_model_backward(AL, Y, caches)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the cost every iterations2cost training example
if print_cost and i % iterations2cost == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % iterations2cost == 0:
costs.append(cost)
if print_cost:
# plot the cost
fig, ax = plt.subplots(1,1)
plt.plot(np.squeeze(costs))
ticks = ax.get_xticks()
ax.locator_params(axis='x', nticks=len(costs))
ax.set_xticklabels([int(x*iterations2cost) for x in ticks])
plt.ylabel('cost')
plt.xlabel('iterations')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Now we can put together all the functions to build a 3-layer neural network with this structure:
End of explanation
fit_params = simpleNN_model(train_set_x, train_set_y, nn_layers, learning_rate = 0.3, num_iterations = 3500, print_cost = True)
Explanation: We will now train the model as a 3-layer neural network.
Run the cell below to train the model. The cost should decrease on every iteration. It may take up to 5 minutes to run 3500 iterations.
You can click on the square (⬛) on the upper bar of the notebook to stop the cell.
End of explanation
def predict(X, yOHE, parameters):
This function is used to predict the results of a L-layer neural network.
It also checks them against the true labels and print the accuracy
Arguments:
X -- data set of examples you would like to label
yOHE -- the true labels, as multi-class vectors
parameters -- parameters of the trained model
Returns:
p -- predictions (the label) for the given dataset X
m = X.shape[1]
nLabels = yOHE.shape[1]
n = len(parameters) // 2 # number of layers in the neural network
p = np.zeros((1, m)) # the predicted output, initialised to zero
y = np.zeros((1, m)) # the actual output
# Forward propagation
probas, caches = L_model_forward(X, parameters)
# probas is a matrix of shape [nLabels, m] (one-hot-encoded)
assert (probas.shape[1] == m)
for i in range(0, m):
# convert probs to label predictions:
# just take the label with max prob
p[0,i] = np.argmax(probas[:,i])
# convert expected results into label: takes the value with one
y[0,i] = np.argmax(yOHE[:,i])
# print results
print("Accuracy: " + str(np.sum((p == y)/m)))
return p
print ("On the training set:")
predictions_train = predict(train_set_x, train_set_y, fit_params)
print ("On the test set:")
predictions_test = predict(X_test.T, y_test.T, fit_params)
Explanation: 4. Results analysis
Now we can check the performance of the trained network by predicting the results of the test set and comparing them with the actual labels.
Note that the predict() function has been adapted to cope with the multi-class labels.
End of explanation
# FUNCTION: initialize_parameters
def initialise_parameters_he(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing the parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])*np.sqrt(2./layer_dims[l-1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
# unit tests
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
Explanation: 5 - Initializing parameters
There are two types of parameters to initialize in a neural network:
- the weight matrices $W^{[i]}$
- the bias vectors $b^{[i]}$
The weight matrices are initialised with random values, while the bias vectors are initialised as vectors of zeros.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing and the network is no more powerful than a linear classifier such as logistic regression.
To break symmetry, we initialise the weights randomly. Following random initialisation, each neuron can then proceed to learn a different function of its inputs.
Of course, different initializations lead to different results and poor initialisation can slow down the optimisation algorithm.
One good practice is not to initialise to values that are too large; instead, what brings good results are the so-called Xavier or He (for ReLU activation) initialisations.
Finally, we try here the "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)
This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
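As a rough numeric illustration (not in the original text): for the first layer here, with 400 incoming units, the He scaling factor is sqrt(2/400) ≈ 0.071, versus sqrt(1/400) = 0.05 for Xavier, and both are far smaller than the factor of 10 used by a naive random initialisation.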
End of explanation
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularisation(A3, Y, parameters, lambdaHyper):
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
lambdaHyper -- the lambda regularisation hyper-parameter.
Returns:
cost - value of the regularized loss function (formula (2))
# This gives you the cross-entropy part of the cost
cross_entropy_cost = compute_cost(A3, Y)
sum_regularization_cost = 0
m = Y.shape[1]
L = len(parameters) // 2 # number of layers (2 because we have W and b)
for i in range(1, L+1):
W_i = parameters['W' + str(i)]
sum_regularization_cost += np.sum(np.square(W_i))
regularization_cost = (1/m)*(lambdaHyper/2)*(sum_regularization_cost)
cost = cross_entropy_cost + regularization_cost
return cost
def compute_cost_with_regularisation_test_case():
np.random.seed(1)
Y_assess = np.array([[1, 1, 0, 1, 0]])
W1 = np.random.randn(2, 3)
b1 = np.random.randn(2, 1)
W2 = np.random.randn(3, 2)
b2 = np.random.randn(3, 1)
W3 = np.random.randn(1, 3)
b3 = np.random.randn(1, 1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3}
a3 = np.array([[ 0.40682402, 0.01629284, 0.16722898, 0.10118111, 0.40682402]])
cost = compute_cost_with_regularisation(a3, Y_assess, parameters, lambdaHyper = 0.1)
np.testing.assert_approx_equal(cost, 1.78649, significant=5)
return "OK"
compute_cost_with_regularisation_test_case()
Explanation: 6 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization.
It consists of appropriately modifying your cost function, from:
$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)}$
To:
$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost}$
End of explanation
def backward_propagation_with_regularisation(X, Y, Yhat, caches, lambdaHyper):
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Yhat -- "true" labels vector, of shape (output size, number of examples)
caches -- cache output from forward_propagation()
lambdaHyper -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
L = len(caches) # the number of layers
gradients = {}
last_layer_cache = caches[L-1]
((A, W, b), Z) = last_layer_cache
assert (Yhat.shape == Y.shape)
dZ = Yhat - Y
gradients["dZ" + str(L)] = dZ
for i in reversed(range(L-1)):
current_layer_cache = caches[i]
((A_prev, W_prev, b_prev), Z_prev) = current_layer_cache
dW_entropy = 1./m * np.dot(dZ, A.T)
dW_reg = (lambdaHyper/m)*W
dW = dW_entropy + dW_reg
db = 1./m * np.sum(dZ, axis=1, keepdims = True)
dA_prev = np.dot(W.T, dZ)
dZ_prev = np.multiply(dA_prev, np.int64(A > 0))
gradients["dW" + str(i + 2)] = dW
gradients["db" + str(i + 2)] = db
gradients["dA" + str(i + 1)] = dA_prev
gradients["dZ" + str(i + 1)] = dZ_prev
((A, W, b), Z) = ((A_prev, W_prev, b_prev), Z_prev)
dZ = dZ_prev
# finally add the gradients for the first layer
dW_prev = 1./m * np.dot(dZ_prev, X.T) + (lambdaHyper/m)*W_prev
db_prev = 1./m * np.sum(dZ_prev, axis=1, keepdims = True)
gradients["dW1"] = dW_prev
gradients["db1"] = db_prev
return gradients
Explanation: Of course, because we changed the cost, we have to change backward propagation as well!
All the gradients have to be computed with respect to this new cost: add the regularization term's gradient.
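Concretely, the only new term is the derivative of the penalty (a standard result, stated here for reference):
$$
\frac{\partial}{\partial W^{[l]}} \left( \frac{1}{m}\frac{\lambda}{2} \sum_{k}\sum_{j} \left(W_{k,j}^{[l]}\right)^2 \right) = \frac{\lambda}{m} W^{[l]}
$$
which is exactly the dW_reg = (lambdaHyper/m)*W term added to each dW in the function above.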
End of explanation
def NN_model(X, Y, layers_dims, learning_rate = 0.0075,
num_iterations = 3000, print_cost=False,
lambdaHyper = 0, init="standard"):
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
lambdaHyper -- regularisation hyperparameter, scalar
init -- type of initialisation: standard or He.
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
costs = [] # keep track of cost
iterations2cost = 200 # Print the cost every these iterations
# Parameters initialization.
if init == "he":
parameters = initialise_parameters_he(layers_dims)
else:
parameters = initialise_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
Yhat, caches = L_model_forward(X, parameters)
# Compute cost.
if lambdaHyper == 0:
cost = compute_cost(Yhat, Y)
else:
cost = compute_cost_with_regularisation(Yhat, Y, parameters, lambdaHyper)
# Backward propagation.
if lambdaHyper == 0:
grads = L_model_backward(Yhat, Y, caches)
else:
grads = backward_propagation_with_regularisation(X, Y, Yhat, caches, lambdaHyper)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the cost every iterations2cost training example
if print_cost and i % iterations2cost == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % iterations2cost == 0:
costs.append(cost)
if print_cost:
# plot the cost
fig, ax = plt.subplots(1,1)
plt.plot(np.squeeze(costs))
ticks = ax.get_xticks()
ax.locator_params(axis='x', nticks=len(costs))
ax.set_xticklabels([int(x*iterations2cost) for x in ticks])
plt.ylabel('cost')
plt.xlabel('iterations')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
fit_params_reg = NN_model(train_set_x, train_set_y, nn_layers,
learning_rate = 0.3, num_iterations = 3500, print_cost = True,
lambdaHyper = 5, init="he")
Explanation: Putting all together
End of explanation
print ("On the training set:")
predictions_train = predict(train_set_x, train_set_y, fit_params_reg)
print ("On the test set:")
predictions_test = predict(X_test.T, y_test.T, fit_params_reg)
Explanation: Let's check the new accuracy values:
End of explanation
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing the parameters of a 3-layers network.
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A2 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
L = len(parameters) // 2 # number of layers in the neural network
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1,cache_temp = relu(Z1)
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
Z2 = np.dot(W2, A1) + b2
A2, cache_temp = sigmoid(Z2)
caches = (Z1, D1, A1, W1, b1, Z2, A2, W2, b2)
return A2, caches
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, A2, W2, b2) = cache
dZ2 = A2 - Y
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
def NN_model_drop(X, Y, layers_dims, learning_rate = 0.0075,
num_iterations = 3000, print_cost=False,
keep_prob = 1, init="standard"):
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
costs = [] # keep track of cost
iterations2cost = 200 # Print the cost every these iterations
# Parameters initialization.
if init == "he":
parameters = initialise_parameters_he(layers_dims)
else:
parameters = initialise_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
Yhat, caches = forward_propagation_with_dropout(X, parameters, keep_prob)
# Compute cost.
cost = compute_cost(Yhat, Y)
# Backward propagation.
grads = backward_propagation_with_dropout(X, Y, caches, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the cost every iterations2cost training example
if print_cost and i % iterations2cost == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % iterations2cost == 0:
costs.append(cost)
if print_cost:
# plot the cost
fig, ax = plt.subplots(1,1)
plt.plot(np.squeeze(costs))
ticks = ax.get_xticks()
ax.locator_params(axis='x', nticks=len(costs))
ax.set_xticklabels([int(x*iterations2cost) for x in ticks])
plt.ylabel('cost')
plt.xlabel('iterations')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
fit_params_drop = NN_model_drop(train_set_x, train_set_y, nn_layers,
learning_rate = 0.3, num_iterations = 3500, print_cost = True,
keep_prob = 0.8, init="he")
print ("On the train set:")
predictions_train = predict(train_set_x, train_set_y, fit_params_drop)
print ("On the test set:")
predictions_test = predict(X_test.T, y_test.T, fit_params_drop)
Explanation: 7 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration.
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
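One detail worth making explicit (an aside, not from the original text): the division by keep_prob in the forward pass is what makes this "inverted dropout". Writing $p$ for keep_prob, each mask entry satisfies $E[D] = p$, so
$$
E\left[\frac{A \cdot D}{p}\right] = A,
$$
meaning the expected scale of the activations is unchanged and no extra rescaling is needed at test time.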
End of explanation |
3,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
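Because the means and standard deviations are stored in scaled_features, you can undo the standardization whenever you need to. For example, to recover ride counts from the scaled cnt column (illustrative only):
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean  # reverses (x - mean) / std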
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = None # signals into hidden layer
hidden_outputs = None # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = None # signals into final output layer
final_outputs = None # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = None # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = None
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = None
hidden_error_term = None
# Weight step (input to hidden)
delta_weights_i_h += None
# Weight step (hidden to output)
delta_weights_h_o += None
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += None # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += None # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = None # signals into hidden layer
hidden_outputs = None # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = None # signals into final output layer
final_outputs = None # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
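If you get stuck, the sketch below shows one possible shape of the math the TODOs describe, written as a standalone NumPy function rather than inside the class. Treat it as a hint under the assumptions stated in the comments, not the required solution; the variable names are placeholders.
import numpy as np

def sigmoid(x):
    # One common choice for the hidden-layer activation
    return 1 / (1 + np.exp(-x))

def sgd_step(X, y, weights_input_to_hidden, weights_hidden_to_output, lr):
    # Illustrative forward + backward pass for a single record (1-D X, scalar-like y)
    hidden_inputs = np.dot(X, weights_input_to_hidden)        # signals into hidden layer
    hidden_outputs = sigmoid(hidden_inputs)                   # signals from hidden layer
    final_outputs = np.dot(hidden_outputs, weights_hidden_to_output)  # f(x) = x on the output layer

    error = y - final_outputs                                 # output layer error
    output_error_term = error                                 # derivative of f(x) = x is 1
    hidden_error = np.dot(weights_hidden_to_output, output_error_term)
    hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

    weights_hidden_to_output += lr * output_error_term * hidden_outputs[:, None]
    weights_input_to_hidden += lr * hidden_error_term * X[:, None]
    return weights_input_to_hidden, weights_hidden_to_output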
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']  # .loc replaces the deprecated .ix indexer
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
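One simple, hedged way to spot overfitting after a run is to find where the validation loss bottomed out, using the losses dictionary filled in above:
best_iteration = int(np.argmin(losses['validation']))
print('Lowest validation loss at iteration', best_iteration, ':', losses['validation'][best_iteration])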
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc replaces the deprecated .ix indexer
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
3,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Required
Step1: Install Jekyll
by Ruxi
Feb 8, 2016
What is Jekyll? Its a static templating library enables blogging on github pages
It follows this directory structure
Step2: Instructions
http
Step3: (2) _config.yml
Step5: Version control
Saving | Python Code:
import os.path, gitpath #pip install git+'https://github.com/ruxi/python-gitpath.git'
os.chdir(gitpath.root()) # changes path to .git root
#os.getcwd() #check current work directory
Explanation: Required:
End of explanation
from IPython.display import IFrame
url = 'http://jekyllrb.com/docs/structure/'
IFrame(url, width=300, height=400)
Explanation: Install Jekyll
by Ruxi
Feb 8, 2016
What is Jekyll? It's a static templating library that enables blogging on GitHub Pages
It follows this directory structure
End of explanation
# write files to .gitignore if not exist
with open('.gitignore', 'r+') as f:
    existing = f.read()  # read once; calling f.read() inside the loop would return '' after the first pass
    ignorelist = ['_sites']
    for path in ignorelist:
        if path in existing:
            print('Found:\t"{}"\tmoving to next'.format(path))
        else:
            print('Not found:\t"{}"\twriting to file'.format(path))
            f.writelines("{}\n".format(path))
!cat .gitignore
Explanation: Instructions
http://michaelchelen.net/81fa/install-jekyll-2-ubuntu-14-04/
sudo apt-get install ruby ruby-dev make gcc nodejs
(1) .gitignore __sites
End of explanation
%%writefile _config.yml
name: Ruxi
markdown: kramdown
# Apparently kramdown is the only flavor of markdown supported by GitHub Pages
Explanation: (2) _config.yml
End of explanation
py_commit_msg = "write code cell to add file to .gitignore if not found"
%%bash -s "$py_commit_msg"
echo $1
git add --all :/
git commit -a -m "$1" #message from py_commit_msg
git push origin master
Explanation: Version control
Saving
End of explanation |
3,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MaxPooling3D
[pooling.MaxPooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'
Step1: [pooling.MaxPooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.MaxPooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
Step3: [pooling.MaxPooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.MaxPooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.MaxPooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.MaxPooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
Step7: [pooling.MaxPooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
Step8: [pooling.MaxPooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.MaxPooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
Step10: [pooling.MaxPooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.MaxPooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
Step12: [pooling.MaxPooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests | Python Code:
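Note: the cells below rely on a setup cell that is not shown in this excerpt. A rough sketch of what they need is given here; the format_decimal helper and the DATA container are assumptions about that missing cell, not its exact contents, and the import paths may differ slightly by Keras version.
import json
import numpy as np
from keras.layers import Input, MaxPooling3D
from keras.models import Model

DATA = {}  # collects the test fixtures exported at the end

def format_decimal(arr, places=6):
    # Assumed helper: round exported values to a fixed number of decimal places
    return [round(x * 10**places) / 10**places for x in arr]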
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: MaxPooling3D
[pooling.MaxPooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 5, 2, 3)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(283)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(284)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(285)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(286)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 5, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(287)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(288)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(289)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (2, 3, 3, 4)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
End of explanation
data_in_shape = (2, 3, 3, 4)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
End of explanation
data_in_shape = (3, 4, 4, 3)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(292)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
3,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are deep neural networks with the addition of two very special types of layers
Step2: Now let's create a filter we'll apply using TensorFlow's tf.nn.conv2d function.
For illustrative purposes we'll create a filter to extract the red out of the image we just created. The filter will be 10 x 10 x 3. (10 x 10 is the size of our receptor because our vertical red lines are centered within every 10 pixels. 3 is the number of color channels we are reading because our image has RGB values.) The final number in the filter (1) is the number of output channels we'd like the filter to produce. These output channels are called "feature maps." You get one feature map per filter.
Step3: We created our filter and set it to all zeros. We now need to indicate what portion of the receptor field we want to extract data from. In this case we are trying to extract the vertical red line, which we know is centered every ten pixels (pixels 5 and 6). To capture the red line, we'll tell the filter that we only care about the 5th and 6th pixel in every row of data.
Step4: Now let's get our image ready to pass to our convolutional layer. To do that we package the 3-dimensional image in yet another array to create a dataset for TensorFlow. TensorFlow's convolutional function expects a 4-dimensional dataset.
Step5: To get the image into TensorFlow we need to convert it into a Tensor.
Step6: To create our convolutional layer, we use tf.nn.conv2d. The arguments we are passing it are
Step7: We can now run our convolutional layer using a TensorFlow session.
Notice our output shape reduces the input image to a 10 x 10 x 1 matrix from a 100 x 100 x 3 matrix. This is because we processed the image using a 10 x 10 single-channel output filter and stepped 10 pixels each time.
Step8: Looking at the image isn't very telling. It simply looks like a single-color image.
Step9: When we look at the data, we can see that the values are uniformly 10.
Step10: What happens if we include some black pixels by increasing our vertical filter to capture all four vertical pixels in the center (pixels 4-7, rather than just pixels 5-6)? Our output number changes to 20.
Step11: If we move our filter to only capture black pixels, our output becomes 0.
Step12: Let's look at a convolutional layer on a real image. We'll load a sample image from scikit-learn.
Step13: We will package the image in a 4-dimensional matrix for processing by TensorFlow.
Step14: To see the convolutional layer in action, let's recreate our vertical line filter and apply it to the image.
Step15: You won't typically define your own filters. You can let TensorFlow discover them by using tf.keras.layers.Conv2D instead of tf.nn.conv2d.
In this example we ask for three features with a 5x5 visual receptor, stepping two pixels at a time.
Step16: Let's look at the first feature map.
Step17: Here is the second feature map.
Step18: And the third.
Step19: Pooling Layers
Pooling layers are used to shrink the data from their input layer by sampling the data per receptor. Let's look at an example. We'll first load a sample image.
Step20: We can package this image in a 4-dimensional matrix and pass it to the tf.nn.max_pool function. This function extracts the maximum value from each receptor field.
In the example below, we create a 2 x 2 receptor and move it around the image, shifting 2 pixels each time. This reduces the height and width of the image by half, effectively reducing our dataset size by 75%.
Step21: Exercise 1
Step22: Building a CNN
Now that we have learned about the component parts of a convolutional neural network, let's actually build one.
In this section we will use the Fruits 360 dataset that is hosted on Kaggle.
Upload your kaggle.json file and run the code below to download the file with the Kaggle API.
Step23: The dataset file is fruits.zip. Let's unzip and inspect it.
Step24: We've listed the unzipped directory. Inside it there are two primary folders we'll work with in this dataset
Step25: 131 categories, each with representation in test and train.
According to the documentation, the images are all 100x100 pixels. Let's load one and see what the images look like.
Step26: We can also verify that the shape is what we expect.
Step27: We find a 100x100 pixel image with three channels of color.
We can see the color encoding range
Step28: This hints at a [0, 255] range. Depending on how long our model takes to train, it might be wise to scale the values down to [0.0, 1.0], but we'll hold off for now.
Now we need to find a way to get the images into the model. TensorFlow Keras has a class called DirectoryIterator that can help with that.
The iterator pulls images from a directory and passes them to our model in batches. There are many settings we can change. In our example here, we set the target_size to the size of our input images. Notice that we don't provide a third dimension even though these are RGB files. This is because the default color_mode is 'rgb', which implies three values.
We also set image_data_generator to None. If we wanted to, we could have passed an ImageDataGenerator to augment the image and increase the size of our dataset. We'll save this for an exercise.
Step29: The output for the code above notes that 67,692 images were found across 131 classes. These classes are the directories that were in our root folder. They are sorted, so the actual values of the classes are
Step30: Let's build our model now. We'll use the Sequential and Dense classes that we've used in many previous labs, as well as a few new classes
Step31: Now let's start training. Let one or two epochs run but then !!!! STOP THE CELL FROM RUNNING !!!!
How long was each epoch taking? Ours was taking about 4 minutes. Let's do the math. If each epoch took 4 minutes and we ran 100 epochs, then we'd be training for 400 minutes. That's just under 7 hours of training!
Luckily there is a better way. In the menu click on 'Runtime' and then 'Change runtime type'. In the modal that appears, there is an option called 'Hardware accelerator' that is set to 'None'. Change this to 'GPU' and save your settings.
Your runtime will change, so you'll need to go back to the start of this section and run all of the cells from the start. Don't forget to upload your kaggle.json again.
When you get back to this cell a second time and start it running, you should notice a big improvement in training time. We were getting 9 seconds per epoch, which is about 900 seconds total. This totals 15 minutes, which is much better. Let the cell run to completion (hopefully about 15 minutes). You should see it progressing as it is running.
Step32: You might have noticed that each epoch only processed 529 items. These are batches, not images. We set our DirectoryIterator batch size to 128. We have 67,692 images. 67,692 / 128 is about 528.8, which rounds up to the 529 batches you see per epoch.
Now let's plot our training accuracy over time.
Step33: And our loss.
Step34: Over 99% training accuracy. Let's see how well this generalizes
Step35: When we ran this, we got just under 90% accuracy, so we are definitely overfitting.
We can also make predictions. The code below selects the next batch, gets predictions for it, and then returns the first prediction.
Step36: This maps to the directory in that position.
Step37: Overall the model seemed to train well, though overfit a bit. We'll try to address this in the exercise below by augmenting our images.
Exercise 2 | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/00_convolutional_neural_networks/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
# Create an image that is completely black.
vertical_stripes = np.zeros((100, 100, 3))
# Loop over the image 10 pixels at a time, turning the centerline of vertical
# pixels red.
for x in range(4, 101, 10):
vertical_stripes[:, x:x+2, 0] = 1.0
_ = plt.imshow(vertical_stripes)
Explanation: Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are deep neural networks with the addition of two very special types of layers: convolutional layers and pooling layers. We will take a look at both in this lesson.
Convolutional Layers
Convolutional layers are layers in a neural network that only partially connect to their input layers. The layer is divided into receptive fields that each only look at a portion of the input layer and apply filters to it.
Let's see this in action. First, we will create a 100 x 100 x 3 image that contains red vertical stripes centered every 10 pixels on the image.
End of explanation
receptor_height, receptor_width = 10, 10
input_color_channels, output_color_channels = 3, 1
filters = np.zeros(shape=(receptor_height, receptor_width, input_color_channels,
output_color_channels), dtype=np.float32)
Explanation: Now let's create a filter we'll apply using TensorFlow's tf.nn.conv2d function.
For illustrative purposes we'll create a filter to extract the red out of the image we just created. The filter will be 10 x 10 x 3. (10 x 10 is the size of our receptor because our vertical red lines are centered within every 10 pixels. 3 is the number of color channels we are reading because our image has RGB values.) The final number in the filter (1) is the number of output channels we'd like the filter to produce. These output channels are called "feature maps." You get one feature map per filter.
End of explanation
filters[:, 5:7, :, 0] = 1
Explanation: We created our filter and set it to all zeros. We now need to indicate what portion of the receptor field we want to extract data from. In this case we are trying to extract the vertical red line, which we know is centered every ten pixels (pixels 5 and 6). To capture the red line, we'll tell the filter that we only care about the 5th and 6th pixel in every row of data.
End of explanation
dataset = np.array([vertical_stripes], dtype=np.float32)
image_count, image_height, image_width, color_channels = dataset.shape
image_count, image_height, image_width, color_channels
Explanation: Now let's get our image ready to pass to our convolutional layer. To do that we package the 3-dimensional image in yet another array to create a dataset for TensorFlow. TensorFlow's convolutional function expects a 4-dimensional dataset.
End of explanation
import tensorflow as tf
X = tf.convert_to_tensor(dataset, dtype=tf.float32)
Explanation: To get the image into TensorFlow we need to convert it into a Tensor.
End of explanation
convolution = tf.nn.conv2d(X, filters, strides=[1, 10, 10, 1], padding="SAME")
Explanation: To create our convolutional layer, we use tf.nn.conv2d. The arguments we are passing it are:
The image that we are processing.
The filters we want to apply to the data. In this case we are passing in the filter that will capture the middle vertical pixels in a 10x10 receptor.
The strides we want the layer to take when operating on the data. In this case we want the input data to be processed for every image and every color channel. The 10s cause the receptor to shift by 10 pixels every vertical and horizontal step through the image. This is exactly our filter size, and it allows us to stay centered on the red vertical lines. In practice you'd likely want some overlap.
A padding argument we input as "SAME", which causes TensorFlow to pad the image if necessary (equal padding on each side) in order to make the filter process the entire image.
End of explanation
output = convolution.numpy()
output.shape
Explanation: We can now run our convolutional layer and read the result back as a NumPy array.
Notice our output shape reduces the input image to a 10 x 10 x 1 matrix from a 100 x 100 x 3 matrix. This is because we processed the image using a 10 x 10 single-channel output filter and stepped 10 pixels each time.
End of explanation
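The 10 x 10 spatial size follows from how "SAME" padding computes output dimensions. A quick sanity check of that arithmetic, purely illustrative:
import math
input_size, stride = 100, 10
print(math.ceil(input_size / stride))  # 10: "SAME" padding gives ceil(input / stride)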
plt.imshow(output[0, :, :, 0 ])
Explanation: Looking at the image isn't very telling. It simply looks like a single-color image.
End of explanation
np.unique(output)
Explanation: When we look at the data, we can see that the values are uniformly 10.
End of explanation
filters = np.zeros(shape=(receptor_height, receptor_width, input_color_channels,
output_color_channels), dtype=np.float32)
filters[:, 4:8, :, :] = 1
X = tf.convert_to_tensor(dataset)
convolution = tf.nn.conv2d(X, filters, strides=[1,10,10,1], padding="SAME")
output = convolution.numpy()
np.unique(output)
Explanation: What happens if we include some black pixels by increasing our vertical filter to capture all four vertical pixels in the center (pixels 4-7, rather than just pixels 5-6)? Our output number changes to 20.
End of explanation
filters = np.zeros(shape=(receptor_height, receptor_width, input_color_channels,
output_color_channels), dtype=np.float32)
filters[:, :2, :, :] = 1
X = tf.convert_to_tensor(dataset)
convolution = tf.nn.conv2d(X, filters, strides=[1,10,10,1], padding="SAME")
output = convolution.numpy()
np.unique(output)
Explanation: If we move our filter to only capture black pixels, our output becomes 0.
End of explanation
from sklearn.datasets import load_sample_image
china = load_sample_image('china.jpg')
plt.imshow(china)
Explanation: Let's look at a convolutional layer on a real image. We'll load a sample image from scikit-learn.
End of explanation
dataset = np.array([china], dtype=np.float32)
image_count, image_height, image_width, color_channels = dataset.shape
image_count, image_height, image_width, color_channels
Explanation: We will package the image in a 4-dimensional matrix for processing by TensorFlow.
End of explanation
receptor_height, receptor_width = 10, 10
input_color_channels, output_color_channels = 3, 1
filters = np.zeros(shape=(receptor_height, receptor_width, input_color_channels,
output_color_channels), dtype=np.float32)
filters[:, 5:7, :, :] = 1
image_count, image_height, image_width, color_channels = dataset.shape
X = tf.convert_to_tensor(dataset)
convolution = tf.nn.conv2d(X, filters, strides=[1,4,4,1], padding="SAME")
output = convolution.numpy()
plt.imshow(output[0, :, :, 0], cmap="gray")
plt.show()
Explanation: To see the convolutional layer in action, let's recreate our vertical line filter and apply it to the image.
End of explanation
image_count, image_height, image_width, color_channels = dataset.shape
X = tf.convert_to_tensor(dataset)
convolution = tf.keras.layers.Conv2D(filters=3, kernel_size=5, strides=[2,2],
padding="SAME")
output = convolution(X)
output = output.numpy()
Explanation: You won't typically define your own filters. You can let TensorFlow discover them by using tf.keras.layers.Conv2D instead of tf.nn.conv2d.
In this example we ask for three features with a 5x5 visual receptor, stepping two pixels at a time.
End of explanation
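If you are curious about the filters Keras created, you can peek at the layer's randomly initialized weights. This is just an optional inspection, not a required step:
kernels, biases = convolution.get_weights()
print(kernels.shape)  # (5, 5, 3, 3): 5x5 receptor, 3 input channels, 3 filters
print(biases.shape)   # (3,): one bias per filter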
plt.imshow(output[0, :, :, 0])
plt.show()
Explanation: Let's look at the first feature map.
End of explanation
plt.imshow(output[0, :, :, 1])
plt.show()
Explanation: Here is the second feature map.
End of explanation
plt.imshow(output[0, :, :, 2])
plt.show()
Explanation: And the third.
End of explanation
flower = load_sample_image('flower.jpg')
plt.imshow(flower)
plt.show()
Explanation: Pooling Layers
Pooling layers are used to shrink the data from their input layer by sampling the data per receptor. Let's look at an example. We'll first load a sample image.
End of explanation
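Before applying pooling to the photo, here is a tiny hand-checkable example of 2 x 2 max pooling with stride 2 on made-up numbers:
toy = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 10, 13, 14],
                [11, 12, 15, 16]], dtype=np.float32).reshape(1, 4, 4, 1)
pooled = tf.nn.max_pool(toy, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
print(pooled.numpy().reshape(2, 2))  # [[ 4.  8.] [12. 16.]]: the max of each 2x2 block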
dataset = np.array([flower], dtype=np.float32)
X = tf.convert_to_tensor(dataset)
max_pool = tf.nn.max_pool(X, ksize=[1,2,2,1], strides=[1,2,2,1],
padding="VALID")
output = max_pool.numpy()
plt.imshow(output[0].astype(np.uint8))
plt.show()
Explanation: We can package this image in a 4-dimensional matrix and pass it to the tf.nn.max_pool function. This function extracts the maximum value from each receptor field.
In the example below, we create a 2 x 2 receptor and move it around the image, shifting 2 pixels each time. This reduces the height and width of the image by half, effectively reducing our dataset size by 75%.
End of explanation
# Create your filters and apply them to the flower image using TensorFlow here.
# Use PyPlot to output the first feature map here.
# Use PyPlot to output the second feature map here.
Explanation: Exercise 1: Manual Filtering
Use tf.nn.conv2d to apply a stack of filters to the scikit-learn built-in flower image mentioned earlier in this colab.
Create a (7, 7, 3, 2) filter set. The 2 at the end indicates that we'll create two filters and get two output channels (feature maps).
Make the first filter be a vertical line filter on the middle pixel of each row.
Make the second filter be a horizontal line filter on the middle pixel of each row.
Pass the flower image and filters to tf.nn.conv2d, stepping 3 pixels vertically and horizontally.
Display the first feature map as an image.
Display the second feature map as an image.
Student Solution
End of explanation
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
! kaggle datasets download moltean/fruits
! ls
Explanation: Building a CNN
Now that we have learned about the component parts of a convolutional neural network, let's actually build one.
In this section we will use the Fruits 360 dataset that is hosted on Kaggle.
Upload your kaggle.json file and run the code below to download the file with the Kaggle API.
End of explanation
import os
import zipfile
zipfile.ZipFile('fruits.zip').extractall()
os.listdir('./fruits-360/')
Explanation: The dataset file is fruits.zip. Let's unzip and inspect it.
End of explanation
train_dir = './fruits-360/Training'
train_categories = set(os.listdir(train_dir))
test_dir = './fruits-360/Test'
test_categories = set(os.listdir(test_dir))
if train_categories.symmetric_difference(test_categories):
print("Warning!: ", train_categories.symmetric_difference(test_categories))
print(sorted(train_categories))
print(len(train_categories))
Explanation: We've listed the unzipped directory. Inside it there are two primary folders we'll work with in this dataset:
Test
Training
There are folders for each category in the Test and Training folders. Let's make sure all of the categories are represented in test and train, and let's see how many categories we are working with.
End of explanation
import cv2 as cv
import matplotlib.pyplot as plt
sample_dir = os.path.join(train_dir, 'Lychee')
img = cv.imread(os.path.join(sample_dir, os.listdir(sample_dir)[0]))
_ = plt.imshow(img)
Explanation: 131 categories, each with representation in test and train.
According to the documentation, the images are all 100x100 pixels. Let's load one and see what the images look like.
End of explanation
img.shape
Explanation: We can also verify that the shape is what we expect.
End of explanation
img.min(), img.max()
Explanation: We find a 100x100 pixel image with three channels of color.
We can see the color encoding range:
End of explanation
import tensorflow as tf
train_dir = './fruits-360/Training'
train_image_iterator = tf.keras.preprocessing.image.DirectoryIterator(
target_size=(100, 100),
directory=train_dir,
batch_size=128,
image_data_generator=None)
Explanation: This hints at a [0, 255] range. Depending on how long our model takes to train, it might be wise to scale the values down to [0.0, 1.0], but we'll hold off for now.
Now we need to find a way to get the images into the model. TensorFlow Keras has a class called DirectoryIterator that can help with that.
The iterator pulls images from a directory and passes them to our model in batches. There are many settings we can change. In our example here, we set the target_size to the size of our input images. Notice that we don't provide a third dimension even though these are RGB files. This is because the default color_mode is 'rgb', which implies three values.
We also set image_data_generator to None. If we wanted to, we could have passed an ImageDataGenerator to augment the image and increase the size of our dataset. We'll save this for an exercise.
End of explanation
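To sanity-check the iterator you can pull a single batch and look at its shapes. Note that this consumes one batch, and the labels come back one-hot encoded across the 131 classes:
images, labels = next(train_image_iterator)
print(images.shape)  # (128, 100, 100, 3)
print(labels.shape)  # (128, 131): one-hot class labels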
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 0)[0][0]])
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 1)[0][0]])
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 2)[0][0]])
print('...')
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 128)[0][0]])
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 129)[0][0]])
print(train_image_iterator.filepaths[np.where(train_image_iterator.labels == 130)[0][0]])
Explanation: The output for the code above notes that 67,692 images were found across 131 classes. These classes are the directories that were in our root folder. They are sorted, so the actual values of the classes are:
0 - Apple Braeburn
1 - Apple Crimson Snow
2 - Apple Golden 1
...
128 - Tomato not Ripened
129 - Walnut
130 - Watermelon
We can validate that using the code below.
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu',
input_shape=(100, 100, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(131, activation='softmax')
])
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
Explanation: Let's build our model now. We'll use the Sequential and Dense classes that we've used in many previous labs, as well as a few new classes:
Conv2D which creates a convolutional layer.
MaxPool2D which creates a pooling layer.
Flatten which creates a layer that converts a multidimensional tensor down to a flat tensor.
You can see the entire model below. We input our images into a convolutional layer followed by a pooling layer. After stacking a few convolutional layers and pooling layers, we flatten the final pooling output and finish with some traditional dense layers. The final dense layer is 131 nodes wide and is activated by softmax. This layer represents our classification predictions.
End of explanation
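To see where the Flatten layer's input size comes from, you can trace the spatial dimensions by hand; each MaxPooling2D halves the height and width (rounding down), which model.summary() will confirm:
# 100x100x3 -> Conv(16, same) -> 100x100x16 -> pool -> 50x50x16
# -> Conv(32, same) -> 50x50x32 -> pool -> 25x25x32
# -> Conv(64, same) -> 25x25x64 -> pool -> 12x12x64
print(12 * 12 * 64)  # 9216 values per image feeding the Dense(512) layer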
history = model.fit(
train_image_iterator,
epochs=10,
)
Explanation: Now let's start training. Let one or two epochs run but then !!!! STOP THE CELL FROM RUNNING !!!!
How long was each epoch taking? Ours was taking about 4 minutes. Let's do the math. If each epoch took 4 minutes and we ran 100 epochs, then we'd be training for 400 minutes. That's just under 7 hours of training!
Luckily there is a better way. In the menu click on 'Runtime' and then 'Change runtime type'. In the modal that appears, there is an option called 'Hardware accelerator' that is set to 'None'. Change this to 'GPU' and save your settings.
Your runtime will change, so you'll need to go back to the start of this section and run all of the cells from the start. Don't forget to upload your kaggle.json again.
When you get back to this cell a second time and start it running, you should notice a big improvement in training time. We were getting 9 seconds per epoch, which is about 900 seconds total. This totals 15 minutes, which is much better. Let the cell run to completion (hopefully about 15 minutes). You should see it progressing as it is running.
End of explanation
import matplotlib.pyplot as plt
plt.plot(list(range(len(history.history['accuracy']))),
history.history['accuracy'])
plt.show()
Explanation: You might have noticed that each epoch only processed 529 items. These are batches, not images. We set our DirectoryIterator batch size to 128. We have 67,692 images. 67,692 / 128 is about 528.8, which rounds up to the 529 batches you see per epoch.
Now let's plot our training accuracy over time.
End of explanation
import matplotlib.pyplot as plt
plt.plot(list(range(len(history.history['loss']))), history.history['loss'])
plt.show()
Explanation: And our loss.
End of explanation
import tensorflow as tf
test_dir = './fruits-360/Test'
test_image_iterator = tf.keras.preprocessing.image.DirectoryIterator(
target_size=(100, 100),
directory=test_dir,
batch_size=128,
shuffle=False,
image_data_generator=None)
model.evaluate(test_image_iterator)
Explanation: Over 99% training accuracy. Let's see how well this generalizes:
End of explanation
predicted_class = np.argmax(model(next(test_image_iterator)[0])[0])
predicted_class
Explanation: When we ran this, we got just under 90% accuracy, so we are definitely overfitting.
We can also make predictions. The code below selects the next batch, gets predictions for it, and then returns the first prediction.
End of explanation
os.listdir(train_dir)[predicted_class]
Explanation: This maps to the directory in that position.
End of explanation
# Your code goes here
Explanation: Overall the model seemed to train well, though overfit a bit. We'll try to address this in the exercise below by augmenting our images.
Exercise 2: ImageDataGenerator
Recreate the model above using an ImageDataGenerator to augment the training dataset. When running fit be sure to pay attention to the steps_per_epoch parameter. It defaults to unbounded, and the generator just keeps on generating if you don't set it.
When you have finished training your model, visualize your training loss.
Next, use the model to make predictions, and then calculate the F1 score of your validation results.
Explain your work.
Use as many code blocks and text blocks as necessary below.
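As a starting point for the augmentation (a hedged sketch only -- the transform values are illustrative and not part of the exercise specification):
augmenting_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20,
    horizontal_flip=True)
augmented_iterator = augmenting_generator.flow_from_directory(
    train_dir, target_size=(100, 100), batch_size=128, class_mode='categorical')
# Cap the epoch length explicitly so the generator does not run indefinitely:
# model.fit(augmented_iterator, steps_per_epoch=len(augmented_iterator), epochs=10)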
Student Solution
End of explanation |
3,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hugging Face Accelerate Demo
Note
Step1: Import the required modules.
Step2: wandb initialization. See wandb_demo notebook for more details.
Step3: Build the model
Use a ResNet18 from torchvision. See wandb_demo notebook for more details.
Step4: Loss function, Optimizer, Scheduler and DataLoader
See wandb_demo notebook for more details.
Step5: Visualizing sample data from test split
See wandb_demo notebook for more details.
Note the last line that uses Accelerate API to wrap the model, optimizer, data loaders and scheduler.
Step6: The train loop
Using Accelerate, we do not need to transfer the model to the device.
See wandb_demo notebook for more details.
Step7: The validation loop
After every epoch, we will run the validation loop for the model. Again, no need to transfer the data to the device.
See wandb_demo notebook for more details.
Step8: wandb plots
Finally, we will use wandb to visualize the training progress.
See wandb_demo notebook for more details.
Step9: Load the best performing model
In the following code, we load the best performing model. The model is saved in ./resnet18_best_acc.pth. The average accuracy of the model is the same as the one in the previous section. | Python Code:
!pip install accelerate
Explanation: Hugging Face Accelerate Demo
Note: Before running this demo, please make sure that you have a free wandb.ai account.
Let us install accelerate.
End of explanation
import torch
import torchvision
import wandb
import datetime
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from ui import progress_bar
# This is a demo of the PyTorch Accelerate API.
from accelerate import Accelerator
Explanation: Import the required modules.
End of explanation
wandb.login()
config = {
"learning_rate": 0.1,
"epochs": 100,
"batch_size": 128,
"dataset": "cifar10"
}
run = wandb.init(project="accelerate-project", entity="upeee", config=config)
Explanation: wandb initialization. See wandb_demo notebook for more details.
End of explanation
# Shows the code to be replaced with the Accelerate API.
#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
accelerator = Accelerator()
model = torchvision.models.resnet18(pretrained=False, progress=True)
model.fc = torch.nn.Linear(model.fc.in_features, 10)
# Replace the model with the Accelerate API.
#model.to(device)
# watch model gradients during training
wandb.watch(model)
Explanation: Build the model
Use a ResNet18 from torchvision. See wandb_demo notebook for more details.
End of explanation
loss = torch.nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=wandb.config.learning_rate)
scheduler = CosineAnnealingLR(optimizer, T_max=wandb.config.epochs)
x_train = datasets.CIFAR10(root='./data', train=True,
download=True,
transform=transforms.ToTensor())
x_test = datasets.CIFAR10(root='./data',
train=False,
download=True,
transform=transforms.ToTensor())
train_loader = DataLoader(x_train,
batch_size=wandb.config.batch_size,
shuffle=True,
num_workers=2)
test_loader = DataLoader(x_test,
batch_size=wandb.config.batch_size,
shuffle=False,
num_workers=2)
# Accelerate API
model = accelerator.prepare(model)
optimizer = accelerator.prepare(optimizer)
scheduler = accelerator.prepare(scheduler)
train_loader = accelerator.prepare(train_loader)
Explanation: Loss function, Optimizer, Scheduler and DataLoader
See wandb_demo notebook for more details.
End of explanation
label_human = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
table_test = wandb.Table(columns=['Image', "Ground Truth", "Initial Pred Label",])
image, label = next(iter(test_loader))  # next(iter(...)) is the Python 3 idiom
test_loader = accelerator.prepare(test_loader)
image = image.to(accelerator.device)
model.eval()
with torch.no_grad():
pred = torch.argmax(model(image), dim=1).cpu().numpy()
for i in range(8):
table_test.add_data(wandb.Image(image[i]),
label_human[label[i]],
label_human[pred[i]])
print(label_human[label[i]], "vs. ", label_human[pred[i]])
Explanation: Visualizing sample data from test split
See wandb_demo notebook for more details.
Note the last line that uses Accelerate API to wrap the model, optimizer, data loaders and scheduler.
End of explanation
def train(epoch):
model.train()
train_loss = 0
correct = 0
train_samples = 0
# sample a batch. compute loss and backpropagate
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
# Replaced by the Accelerate API.
#target = target.to(device)
#output = model(data.to(device))
output = model(data)
loss_value = loss(output, target)
# Replaced by the Accelerate API.
#loss_value.backward()
accelerator.backward(loss_value)
optimizer.step()
scheduler.step(epoch)
train_loss += loss_value.item()
train_samples += len(data)
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
if batch_idx % 10 == 0:
accuracy = 100. * correct / len(train_loader.dataset)
progress_bar(batch_idx,
len(train_loader),
'Train Epoch: {}, Loss: {:.6f}, Acc: {:.2f}%'.format(epoch+1,
train_loss/train_samples, accuracy))
train_loss /= len(train_loader.dataset)
accuracy = 100. * correct / len(train_loader.dataset)
return accuracy, train_loss
Explanation: The train loop
Using Accelerate, we do not need to transfer the model to the device.
See wandb_demo notebook for more details.
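One practical note (our addition, not in the original demo): because device handling is delegated to Accelerate, the same loop can run across multiple GPUs without code changes when exported to a script (hypothetically, train.py) and started with the Accelerate CLI:
# accelerate config
# accelerate launch train.py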
End of explanation
def test():
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
# Replaced by the Accelerate API.
#output = model(data.to(device))
#target = target.to(device)
output = model(data)
test_loss += loss(output, target).item()
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
accuracy = 100. * correct / len(test_loader.dataset)
print('\nTest Loss: {:.4f}, Acc: {:.2f}%\n'.format(test_loss, accuracy))
return accuracy, test_loss
Explanation: The validation loop
After every epoch, we will run the validation loop for the model. Again, no need to transfer the data to the device.
See wandb_demo notebook for more details.
End of explanation
run.display(height=1000)
start_time = datetime.datetime.now()
best_acc = 0
for epoch in range(wandb.config["epochs"]):
train_acc, train_loss = train(epoch)
test_acc, test_loss = test()
if test_acc > best_acc:
wandb.run.summary["Best accuracy"] = test_acc
best_acc = test_acc
accelerator.save(model, "resnet18_best_acc.pth")
wandb.log({
"Train accuracy": train_acc,
"Test accuracy": test_acc,
"Train loss": train_loss,
"Test loss": test_loss,
"Learning rate": optimizer.param_groups[0]['lr']
})
elapsed_time = datetime.datetime.now() - start_time
print("Elapsed time: %s" % elapsed_time)
wandb.run.summary["Elapsed train time"] = str(elapsed_time)
model.eval()
with torch.no_grad():
pred = torch.argmax(model(image), dim=1).cpu().numpy()
final_pred = []
for i in range(8):
final_pred.append(label_human[pred[i]])
print(label_human[label[i]], "vs. ", final_pred[i])
table_test.add_column(name="Final Pred Label", data=final_pred)
wandb.log({"Test data": table_test})
wandb.finish()
Explanation: wandb plots
Finally, we will use wandb to visualize the training progress.
See wandb_demo notebook for more details.
End of explanation
model = torch.load("resnet18_best_acc.pth")
# Using Accelerator API
model = accelerator.prepare(model)
accuracy, _ = test()
print("Best accuracy: %.2f" % accuracy)
Explanation: Load the best performing model
In the following code, we load the best performing model. The model is saved in ./resnet18_best_acc.pth. The average accuracy of the model is the same as the one in the previous section.
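A related sketch (our assumption, not part of the original demo): under distributed training, prepare() may wrap the model, so unwrapping it before saving keeps the checkpoint loadable without Accelerate.
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model, "resnet18_best_acc_unwrapped.pth")  # hypothetical filename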
End of explanation |
3,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. We'll also crop the data to 60 seconds
(to save memory on the documentation server)
Step1: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the
Step2: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best
Step3: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to
Step4: Notice that the output is the same as when we applied this filter to the data
using
Step5: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
Step6: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the
Step7:
Step8: Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (
Step9: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, 60).load_data() # use just 60 seconds of data, to save memory
Explanation: Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. We'll also crop the data to 60 seconds
(to save memory on the documentation server):
End of explanation
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
Explanation: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the :meth:~mne.io.Raw.plot method, though it is
helpful to disable channel-wise DC shift correction to make slow drifts
more readily visible. Here we plot 60 seconds, showing all the magnetometer
channels:
End of explanation
for cutoff in (0.1, 0.2):
raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)
fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
fig.subplots_adjust(top=0.9)
fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large',
weight='bold')
Explanation: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best:
End of explanation
filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],
l_freq=0.2, h_freq=None)
Explanation: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to :meth:raw.filter()
<mne.io.Raw.filter> above to the function :func:mne.filter.create_filter
to get the filter parameters, and then pass the filter parameters to
:func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires
parameters data (a :class:NumPy array <numpy.ndarray>) and sfreq
(the sampling frequency of the data), so we'll extract those from our
:class:~mne.io.Raw object:
End of explanation
mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))
Explanation: Notice that the output is the same as when we applied this filter to the data
using :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter
parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to
plot the filter:
End of explanation
def add_arrows(axes):
# add some arrows at 60 Hz and its harmonics
for ax in axes:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
# get ymax of a small region around the freq. of interest
y = psds[(idx - 4):(idx + 5)].max()
ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
fig = raw.plot_psd(fmax=250, average=True)
add_arrows(fig.axes[:2])
Explanation: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
:meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little
function that adds arrows to the spectrum plot to highlight the artifacts:
End of explanation
meg_picks = mne.pick_types(raw.info, meg=True)
freqs = (60, 120, 180, 240)
raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)
for title, data in zip(['Un', 'Notch '], [raw, raw_notch]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
Explanation: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the :class:~mne.io.Raw object, specifying an array of
frequencies to be attenuated. Since the EEG channels are relatively
unaffected by the power line noise, we'll also specify a picks argument
so that only the magnetometers and gradiometers get filtered:
End of explanation
raw_notch_fit = raw.copy().notch_filter(
freqs=freqs, picks=meg_picks, method='spectrum_fit', filter_length='10s')
for title, data in zip(['Un', 'spectrum_fit '], [raw, raw_notch_fit]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
Explanation: :meth:~mne.io.Raw.notch_filter also has parameters to control the notch
width, transition bandwidth and other aspects of the filter. See the
docstring for details.
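For example (a sketch -- the values below are illustrative rather than recommended settings):
raw_notch_narrow = raw.copy().notch_filter(freqs=freqs, picks=meg_picks,
                                           notch_widths=2, trans_bandwidth=1)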
It's also possible to try to use a spectrum fitting routine to notch filter.
In principle it can automatically detect the frequencies to notch, but our
implementation generally does not do so reliably, so we specify the
frequencies to remove instead, and it does a good job of removing the
line noise at those frequencies:
End of explanation
raw_downsampled = raw.copy().resample(sfreq=200)
for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):
fig = data.plot_psd(average=True)
fig.subplots_adjust(top=0.9)
fig.suptitle(title)
plt.setp(fig.axes, xlim=(0, 300))
Explanation: Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (:meth:raw.resample()
<mne.io.Raw.resample>, :meth:epochs.resample() <mne.Epochs.resample> and
:meth:evoked.resample() <mne.Evoked.resample>) apply a low-pass filter to
the signal to avoid aliasing, so you don't need to explicitly filter it
yourself first. This built-in filtering that happens when using
:meth:raw.resample() <mne.io.Raw.resample>, :meth:epochs.resample()
<mne.Epochs.resample>, or :meth:evoked.resample() <mne.Evoked.resample> is
a brick-wall filter applied in the frequency domain at the Nyquist
frequency of the desired new sampling rate. This can be clearly seen in the
PSD plot, where a dashed vertical line indicates the filter cutoff; the
original data had an existing lowpass at around 172 Hz (see
raw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets
automatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a
target rate of 200 Hz):
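As a quick check (a sketch, not part of the original tutorial), the measurement info should reflect that automatic lowpass:
print(raw.info['lowpass'], raw_downsampled.info['lowpass'])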
End of explanation
current_sfreq = raw.info['sfreq']
desired_sfreq = 90 # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.
raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(raw_filtered, events, decim=decim)
print('desired sampling frequency was {} Hz; decim factor of {} yielded an '
'actual sampling frequency of {} Hz.'
.format(desired_sfreq, decim, epochs.info['sfreq']))
Explanation: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream:
Performing resampling on :class:~mne.io.Raw data (before epoching) will
negatively affect the temporal precision of Event arrays, by causing
jitter_ in the event timing. This reduced temporal precision will
propagate to subsequent epoching operations.
Performing resampling after epoching can introduce edge artifacts on
every epoch, whereas filtering the :class:~mne.io.Raw object will only
introduce artifacts at the start and end of the recording (which is often
far enough from the first and last epochs to have no affect on the
analysis).
The following section suggests best practices to mitigate both of these
issues.
Best practices
To avoid the reduction in temporal precision of events that comes with
resampling a :class:~mne.io.Raw object, and also avoid the edge artifacts
that come with filtering an :class:~mne.Epochs or :class:~mne.Evoked
object, the best practice is to:
low-pass filter the :class:~mne.io.Raw data at or below
$\frac{1}{3}$ of the desired sample rate, then
decimate the data after epoching, by either passing the decim
parameter to the :class:~mne.Epochs constructor, or using the
:meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have
been created.
<div class="alert alert-danger"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at
$\frac{1}{3}$ of the desired sample rate is a fairly safe rule of
thumb based on the default settings in :meth:`raw.filter()
<mne.io.Raw.filter>` (which are different from the filter settings used
inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you
use a customized lowpass filter (specifically, if your transition
bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the
lowpass cutoff may still not be enough to avoid `aliasing`_, and
MNE-Python will not warn you about it (because the :class:`raw.info
<mne.Info>` object only keeps track of the lowpass cutoff, not the
transition bandwidth). Conversely, if you use a steeper filter, the
warning may be too sensitive. If you are unsure, plot the PSD of your
filtered data *before decimating* and ensure that there is no content in
the frequencies above the `Nyquist frequency`_ of the sample rate you'll
end up with *after* decimation.</p></div>
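A minimal sketch of that check (our addition), using the variables defined above:
fig = raw_filtered.plot_psd(fmax=obtained_sfreq, average=True)
# look for residual power above obtained_sfreq / 2 (the post-decimation Nyquist)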
Note that this method of manually filtering and decimating is exact only when
the original sampling frequency is an integer multiple of the desired new
sampling frequency. Since the sampling frequency of our example data is
600.614990234375 Hz, ending up with a specific sampling frequency like (say)
90 Hz will not be possible:
End of explanation |
3,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Padding
We're almost ready to train our model. There's just one hitch though
Step1: Let's inspect the shapes of our padded questions.
Step2: Great, each of our questions is now of length maxlen.
At this point, we need to get the "vocabulary" of the training data. This is the number of unique indices in the data, so in this case it's easy to calculate by taking the length of the word_indices dictionary.
Step3: The batch_size controls how many training instances we process (do a gradient update on) at once, since it's impossible to train on all of the data at once. 32 is a fairly standard number.
Step4: Building the model
Now that we have our data sorted out, we can finally build our Keras model. As a computation graph framework, Keras has a nice "functional API"; the notion is that you "construct" layers, and then "apply" these layers to tensors by calling them. This probably sounds quite abstract, but hopefully the code below illustrates.
Setting up input layers
At the beginning of every model in Keras, you need "input" layers which indicate what the shape of the incoming data arrays are going to be, and provides a means for the other layers to interface with it.
Step5: In the above printed representation of our tensor, you'll notice that the shape is a weird (?, maxlen). In this case, the ? refers to a dimension that can be of any size. Since that is our batch_size, we can vary the batch size to be whatever we want and the model will still work.
The Embedding Layer
Now that our questions are in the graph, we want to use an embedding layer to project each int index (which actually represents one word) into a higher-dimensional space. The way we do this is by using an Embedding layer. This layer replaces each index with a vector, and the vector should ideally represent the semantic meaning of the index. In this way, the model can get some notion of "meaning" between the indices.
In this model, the Embedding layer is randomly initialized --- every index is assigned a random vector at first. As the model trains, it will tweak the vector assigned to each word in order to minimize the loss. However, this naturally leads to a lot more parameters to tune, which makes the model harder to learn.
It's thus common practice in the field to use pre-trained embeddings. Pre-trained embeddings are what they sound like, embeddings for a word that already have gotten to a pretty good representation. By using these pretrained embeddings and not updating them (so keeping them fixed and not letting the model change them), you drastically lower the amount of parameters the model has to fiddle with and also prevent the model from overfitting (as it can make the embeddings overly domain-specific, while the pretrained embeddings are quite general).
Step6: Encoding the words
Now, our data consists of matrices of shape (batch_size, maxlen, embedding_dimension). It might be hard to intuitively think about what this means, but you've intuitively replaced each "int" index in the sentence with a vector of size embedding dimension (so from (batch_size, maxlen) to (batch_size, maxlen, embedding_dimension)).
Now that we have embedded our questions, it's time to encode them. A popular choice in modern NLP is to use a recurrent neural networks, especially the Bidirectional LSTM (biLSTM). An LSTM essentially takes a single question as input (something of shape (maxlen, embedding_dimension) in this case), and squeezes it into a fixed-length vector of size (LSTM_output_units). In this manner, you can think of the LSTM as "encoding" the question. into a single vector.
The "Bidirectional" part comes from the idea that you should run the question (a sequence of vectors) through the LSTM, and then reverse the question and run it through another LSTM. Then, you take the vector that was outputted from both and concatenate it. This intuitively lets the LSTM "read from both directions".
Step7: Getting an output probability
Lastly, we compute a similarity metric between each of the two vectors, over the batch. Our similarity metric will be
Step8: Wrapping up the model
Now that we've successfully strung together a bunch of our layers and inputs to get a final probability, we can create a keras Model to seamlessly take the input numpy arrays, run them through the computation graph we built in the way we specified, to get an output probability that it will automatically compare to the label in order to adjust the loss.
To do all of this, we just need to create an instance of the Model class and specify which Input layers are our inputs, and what value from the graph is our final output. Note that since we have multiple inputs, we need to pass a list of Input tensors.
Step9: Compiling the model
Now, we compile our model into a Tensorflow/Theano graph. Keras handles this for us, but we need to specify an optimization algorithm to use, as well as a loss function. adam is generally a good choice of optimizer, and binary_crossentropy is appropriate for a binary classification task like the one we have.
We can also specify a list of metrics to be printed during training and testing, so we'll print the accuracy.
Step10: Training our model
Now, we can finally pass in our input arrays and output labels and watch the model train! | Python Code:
maxlen = 100
max_training_instances=10000
# It takes a long time to train on all 400,000 samples on CPU (5 hours/epoch) --- let's cut it down to
# max_training_instances size. The dataset itself is a bit unbalanced, around 67% non-duplicate
# / 33% duplicate. We can use this opportunity to make it more balanced as well.
indices_with_0 = [index for index,value in enumerate(labels_list) if value==0]
indices_with_1 = [index for index,value in enumerate(labels_list) if value==1]
reduced_indexed_question_1s = []
reduced_indexed_question_2s = []
reduced_labels_list = []
for i in range(max_training_instances):
# if i is even (~50%), pull something from indices_with_0 and add it to
# the truncated dataset. Else, pull something from indices_with_1 and
# add it to the truncatd dataset. If any of the list of indices are empty,
# use the other one.
# TODO: I'm pretty sure this if can be refactored, but it's late and I can't think
# right now.
if i % 2 == 0:
if indices_with_0:
index = indices_with_0.pop()
else:
index = indices_with_1.pop()
else:
if indices_with_1:
index = indices_with_1.pop()
else:
index = indices_with_0.pop()
reduced_indexed_question_1s.append(indexed_question_1s[index])
reduced_indexed_question_2s.append(indexed_question_2s[index])
reduced_labels_list.append(labels_list[index])
print(len(reduced_indexed_question_1s))
print(len(reduced_indexed_question_2s))
print(len(reduced_labels_list))
# Now we want to pad / truncate our instances to a max length.
# Keras has a handy function to do this, but it isn't hard to implement yourself as well.
padded_question_1s = sequence.pad_sequences(reduced_indexed_question_1s, maxlen=maxlen)
padded_question_2s = sequence.pad_sequences(reduced_indexed_question_2s, maxlen=maxlen)
padded_question_1s_shape = padded_question_1s.shape
padded_question_2s_shape = padded_question_2s.shape
# We also want to convert our list of labels to a numpy array for use in the model.
labels = np.array(reduced_labels_list)
Explanation: Padding
We're almost ready to train our model. There's just one hitch though: neural networks take as input fixed-length vectors. What are we to do, since our questions are sequences of ints with variable length?
The answer is to pad the shorter instances to the length of the longest instance, thus making them all the same length! We'll pad with the 0 character -- this is why we set the padding character to have a 0 index in the word to index dictionary. Keras will automatically figure out that these 0's are padding, and not take them into account when doing model computations (this is called masking).
It's common to also truncate sequences. For example, say that the average length of our questions is 10 words, but there's one outlier with 900 words. Padding all of the other questions to 900 words would be a huge waste of space, when we could simply truncate that one outlier with 900 words to 10 words. Thus, we'll set a max length of 100 words; if a question is less than 100 words, it'll be padded up, and if it's longer it'll be truncated.
Note that since the two questions are actually separate inputs to the model, as you'll see later, their max length could be set to different values if you wanted. This is useful if you're comparing, say, a question and a document -- you'd expect the question to be much shorter than the document, and adjust your lengths accordingly.
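A tiny illustration of the padding/truncation behaviour (not part of the original data prep): by default, pad_sequences pads short sequences with 0 on the left and truncates long sequences from the left, keeping the last maxlen elements.
print(sequence.pad_sequences([[5, 6], [1, 2, 3, 4]], maxlen=3))
# expected output:
# [[0 5 6]
#  [2 3 4]]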
End of explanation
print("padded_question_1s_shape: {}".format(padded_question_1s_shape))
print("padded_question_2s_shape: {}".format(padded_question_1s_shape))
print("labels shape: {}".format(labels.shape))
Explanation: Let's inspect the shapes of our padded questions.
End of explanation
vocabulary_size = len(word_indices)
print("Vocabulary size: {}".format(vocabulary_size))
Explanation: Great, each of our questions is now of length maxlen.
At this point, we need to get the "vocabulary" of the training data. This is the number of unique indices in the data, so in this case it's easy to calculate by taking the length of the word_indices dictionary.
End of explanation
batch_size = 32
Explanation: The batch_size controls how many training instances we process (do a gradient update on) at once, since it's impossible to train on all of the data at once. 32 is a fairly standard number.
End of explanation
# We are passed in two matrices, one of shape (batch_size, question_1_length) and
# (batch_size, question_2_length). In this case, these are both (32, 100) by default.
# Note that the input layer's shape argument does not include the batch size, and it is a
# tuple with a value of (maxlen,)
question_1_input = Input(shape=(padded_question_1s_shape[-1:]))
question_2_input = Input(shape=(padded_question_2s_shape[-1:]))
print("question_1_input {}".format(question_1_input))
print("question_2_input {}".format(question_2_input))
Explanation: Building the model
Now that we have our data sorted out, we can finally build our Keras model. As a computation graph framework, Keras has a nice "functional API"; the notion is that you "construct" layers, and then "apply" these layers to tensors by calling them. This probably sounds quite abstract, but hopefully the code below illustrates.
Setting up input layers
At the beginning of every model in Keras, you need "input" layers which indicate what the shape of the incoming data arrays are going to be, and provides a means for the other layers to interface with it.
End of explanation
# Embedding layer for question 1. For each word in the question, it'll
# transform it into a fixed-length vector of size 128.
embedding_layer_1 = Embedding(input_dim=vocabulary_size, output_dim=128,
mask_zero=True, input_length=maxlen)
# Embedding layer for question 2. For each word in the question, it'll
# transform it into a fixed-length vector of size 128.
embedding_layer_2 = Embedding(vocabulary_size, 128,
mask_zero=True, input_length=maxlen)
# Now, we apply the embedding layers that we constructed to the input
# shape: (batch_size, question_1_length, embedding_output_dim) or (32, 100, 128) by default
question_1_embedded = embedding_layer_1(question_1_input)
print("question_1_embedded {}".format(question_1_embedded))
# shape: (batch_size, question_2_length, embedding_output_dim) or (32, 100, 128) by default
question_2_embedded = embedding_layer_2(question_2_input)
print("question_2_embedded {}".format(question_2_embedded))
Explanation: In the above printed representation of our tensor, you'll notice that the shape is a weird (?, maxlen). In this case, the ? refers to a dimension that can be of any size. Since that is our batch_size, we can vary the batch size to be whatever we want and the model will still work.
The Embedding Layer
Now that our questions are in the graph, we want to use an embedding layer to project each int index (which actually represents one word) into a higher-dimensional space. The way we do this is by using an Embedding layer. This layer replaces each index with a vector, and the vector should ideally represent the semantic meaning of the index. In this way, the model can get some notion of "meaning" between the indices.
In this model, the Embedding layer is randomly initialized --- every index is assigned a random vector at first. As the model trains, it will tweak the vector assigned to each word in order to minimize the loss. However, this naturally leads to a lot more parameters to tune, which makes the model harder to learn.
It's thus common practice in the field to use pre-trained embeddings. Pre-trained embeddings are what they sound like, embeddings for a word that already have gotten to a pretty good representation. By using these pretrained embeddings and not updating them (so keeping them fixed and not letting the model change them), you drastically lower the amount of parameters the model has to fiddle with and also prevent the model from overfitting (as it can make the embeddings overly domain-specific, while the pretrained embeddings are quite general).
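As a hedged sketch of that alternative (it is not used in this model), a pretrained matrix can be plugged into the same layer; the random matrix below is only a stand-in for rows loaded from, say, GloVe vectors.
pretrained_matrix = np.random.rand(vocabulary_size, 128)  # stand-in for real pretrained vectors
pretrained_embedding_layer = Embedding(input_dim=vocabulary_size, output_dim=128,
                                       weights=[pretrained_matrix], trainable=False,
                                       mask_zero=True, input_length=maxlen)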
End of explanation
# Now we take the embedded questions, and we encode them with a bidirectional LSTM.
# Think of a LSTM as converting/encoding a sequence of vectors into a fixed length vector.
# In this case, it takes in a single question of size (100, 128) and returns something of
# size (2*LSTM_output_units). Since it is batched, we go from (32, 100, 128) to (32, 2*LSTM_output_units)
# Bidirectional LSTM encoder for question_1_embedded
question_1_encoder = Bidirectional(LSTM(units=64))
# Bidirectional LSTM encoder for question_2_embedded
question_2_encoder = Bidirectional(LSTM(units=64))
# Now, we apply the Bidirectional LSTM encoders to our embedded questions.
# shape: (batch_size, 2*LSTM_output_units), or (32, 128) by default
question_1_encoded = question_1_encoder(question_1_embedded)
print("question_1_encoded: {}".format(question_1_encoded))
# shape: (batch_size, 2*LSTM_output_units), or (32, 128) by default
question_2_encoded = question_2_encoder(question_2_embedded)
print("question_2_encoded: {}".format(question_2_encoded))
Explanation: Encoding the words
Now, our data consists of matrices of shape (batch_size, maxlen, embedding_dimension). It might be hard to intuitively think about what this means, but you've intuitively replaced each "int" index in the sentence with a vector of size embedding dimension (so from (batch_size, maxlen) to (batch_size, maxlen, embedding_dimension)).
Now that we have embedded our questions, it's time to encode them. A popular choice in modern NLP is to use a recurrent neural networks, especially the Bidirectional LSTM (biLSTM). An LSTM essentially takes a single question as input (something of shape (maxlen, embedding_dimension) in this case), and squeezes it into a fixed-length vector of size (LSTM_output_units). In this manner, you can think of the LSTM as "encoding" the question. into a single vector.
The "Bidirectional" part comes from the idea that you should run the question (a sequence of vectors) through the LSTM, and then reverse the question and run it through another LSTM. Then, you take the vector that was outputted from both and concatenate it. This intuitively lets the LSTM "read from both directions".
End of explanation
# The L1 Norm/Manhattan distance formula is simple: subtract vector 1 from vector 2, and add up the
# absolute value of the resulting vector.
# We'll first write a function to calculate our similarity metric given two tensors.
def l1_similarity(vectors):
vector_1, vector_2 = vectors
# Note that vector_1 and vector_2 are of shape (batch_size, LSTM_units*2)
# First, take the absolute value of the difference. shape(batch_size, LSTM_units*2)
abs_diff = K.abs(vector_1-vector_2)
# Now, sum across the "first" axis and negate it (which thus negates every element of it).
# This is roughly analogous to summing the rows.
# keepdims=True does not reduce the dimensionality, and just leaves it as 1.
# shape: (batch_size, 1)
negative_l1_distance = -K.sum(abs_diff, axis=1, keepdims=True)
# Finally, apply the exponential function and return the output.
# shape: (batch_size, 1), where the "1" is a value in [0, 1] that
# describes the probability of the two vectors being semantically similar.
return K.exp(negative_l1_distance)
# We now want to pass our two encoded questions to our similarity function.
# To do so, we'll use a keras Lambda layer, which lets us wrap an arbitrary
# function in a Lambda object. Note that _ALL_ operations on keras tensors
# in the Model class _must_ be a layer; we thus cannot call the function directly.
# Here, we're creating a layer and using it in one line.
# output shape: (batch_size, 1)
duplicate_probabilities = Lambda(l1_similarity)([question_1_encoded, question_2_encoded])
print("duplicate_probabilities: {}".format(duplicate_probabilities))
Explanation: Getting an output probability
Lastly, we compute a similarity metric between each of the two vectors, over the batch. Our similarity metric will be: exp(-||question_1_encoded-question_2_encoded||), or in words, e to the power of the negative L1 norm (a.k.a manhattan distance). With this metric, for each question pair (vector of size LSTM_units*2) we get a value between 0 and 1, with questions having a larger L1 norm being closer to 0 and questions having a smaller L1 norm being closer to 1. We can intuitively interpret this as the probability that two sentences are semantically the same, assuming that if two sentences have the same semantic meaning, they are probably duplicate questions.
End of explanation
# These duplicate probabilties are what we want to output from our model, so we'll create
# the model now.
duplicate_questions_model = Model(inputs=[question_1_input, question_2_input], outputs=duplicate_probabilities)
Explanation: Wrapping up the model
Now that we've successfully strung together a bunch of our layers and inputs to get a final probability, we can create a keras Model to seamlessly take the input numpy arrays, run them through the computation graph we built in the way we specified, to get an output probability that it will automatically compare to the label in order to adjust the loss.
To do all of this, we just need to create an instance of the Model class and specify which Input layers are our inputs, and what value from the graph is our final output. Note that since we have multiple inputs, we need to pass a list of Input tensors.
End of explanation
duplicate_questions_model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
# Print a summary of the layers of our model and their inputs and outputs
duplicate_questions_model.summary()
Explanation: Compiling the model
Now, we compile our model into a Tensorflow/Theano graph. Keras handles this for us, but we need to specify an optimization algorithm to use, as well as a loss function. adam is generally a good choice of optimizer, and binary_crossentropy is appropriate for a binary classification task like the one we have.
We can also specify a list of metrics to be printed during training and testing, so we'll print the accuracy.
End of explanation
# Now, we can finally fit our model on training data!
# Note that the order of the input x matters.
duplicate_questions_model.fit(x=[padded_question_1s, padded_question_2s], y=labels,
batch_size=batch_size, epochs=4, validation_split=0.1)
Explanation: Training our model
Now, we can finally pass in our input arrays and output labels and watch the model train!
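As a quick usage sketch after training (our addition), the fitted model can score new padded question pairs with predict():
sample_probabilities = duplicate_questions_model.predict(
    [padded_question_1s[:5], padded_question_2s[:5]])
print(sample_probabilities)  # values near 1 suggest probable duplicates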
End of explanation |
3,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Windrose with MesoWest Data
Introduction
Who are we?
http
Step1: #### Customize matplotlib
It's so much easier to modify matplotlib defaults like this rather than inline with the plot functions.
See more here http
Step3: Define a few functions
First function will get MesoWest data and return a Python dictionary.
Find a list of all available variables
Step4: These two functions set up the windroses axes and legend
Step5: Ok, lets get some data
Step6: Find other stations with PM 25 concentrations here
Step7: What is the variable air_data?
Step8: air_data is a dictionary. Each key is associated with a value or object.
What data are in the dictionary?
Step9: You can access the values or objects of each key like so...
Step10: Visualize the data
Each datetime object in a['DATETIME'] matches PM 2.5 concentrations in a['PM_25_concentration'] and wind directions in a['wind_direction'].
Plot a time series of PM 2.5 concentration for the time period
Step11: Plot a wind rose, to show how PM 2.5 is related to wind direction
ax.bar() is a function that makes wind roses. It requires two inputs
Step12: Questions ???
What does ncestors do? (Try increasing or decreasing it)
How can you change the color of each bin? Find matplotlib named colors here
How can you change the color range for each bin?
How can you change the number of bins?
What happens if you uncomment the last line ax.set_rmax(40)?
Instead of using the ax.bar() function, try ax.contour(), ax.contourf(), ax.box()
What does this data tell us?
Where do winds typically blow from? Why?
Do you know where MTMET station is?
Can you find the latitude and longitude for MTMET and find it's location in Google maps?
From what direction did MTMET get the highest PM 2.5 pollution?
How does this compare to the same time period last year?
What data is used to make this plot? What did the ax.bar() function do?
Step13: Questions ???
Step14: What if we only want a wind rose when PM 2.5 was high?
Step15: How would you make a wind rose for another variable?
First, we need to get another variable from the MesoWest API. Lets try air temperature and wind speed.
Step16: Question ???
Can you tell where the wind typically blows at night, when it's cold?
Can you make a rose for another time of year? Another station?
Wind Rose, in m/s | Python Code:
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
from datetime import datetime
import json
from urllib.request import urlopen
# Confirm that `pm25rose.py` is in your directory
from pm25rose import WindroseAxes
import mesowest
Explanation: Python Windrose with MesoWest Data
Introduction
Who are we?
http://meso1.chpc.utah.edu/mesowest_overview/
Introduction to api services:
What are the MesoWest/SynopticLabs api services?
https://synopticlabs.org/api/
How do you find out where particulate concentrations are measured in Utah?
https://api.synopticlabs.org/v2/stations/latest?&token=demotoken&state=UT&vars=PM_25_concentration
http://meso2.chpc.utah.edu/aq/
Learning objectives:
Evaluate data from different sensor types.
Use an api service to access data in real time and retrospectively.
Visualize air quality data relative to wind conditions.
We will use Python to view air quality data from the MesoWest API.
But first...
1. Install the JSON Viewer for your Chrome browser. This will help you look at JSON-formated data in your browser.
2. Make sure you have pm25rose.py in the current directory. That package makes wind roses, you won't change anything in that file. (The original wind rose code is found here).
Import some stuff we'll use
End of explanation
mpl.rcParams['xtick.labelsize'] = 8
mpl.rcParams['ytick.labelsize'] = 8
mpl.rcParams['axes.labelsize'] = 10
mpl.rcParams['legend.fontsize'] = 10
mpl.rcParams['figure.figsize'] = [5, 10]
mpl.rcParams['grid.linewidth'] = .25
mpl.rcParams['savefig.bbox'] = 'tight'
Explanation: #### Customize matplotlib
It's so much easier to modify matplotlib defaults like this rather than inline with the plot functions.
See more here http://matplotlib.org/users/customizing.html
End of explanation
default_vars = 'altimeter,pressure,sea_level_pressure,wind_direction,\
wind_speed,air_temp,relative_humidity,dew_point_temperature,wind_gust'
def get_mesowest_ts(stationID, start_time, end_time, variables=default_vars):
Get MesoWest Time Series data:
Makes a time series query from the MesoWest API for a single station.
Input:
stationID : string of the station ID
start_time : datetime object of the start time in UTC
end_time : datetime object of the end time in UTC
variables : a string of variables available through the MesoWest API
see https://synopticlabs.org/api/mesonet/variables/ for
a list of variables.
Output:
A dictionary of the data.
# Hey! You can get your own token! https://synopticlabs.org/api/guides/?getstarted
token = 'demotoken'
# Convert the start and end time to the string format requried by the API
start = start_time.strftime("%Y%m%d%H%M")
end = end_time.strftime("%Y%m%d%H%M")
tz = 'utc' # Timezone is hard coded for now. Could allow local time later.
# Build the API request URL
URL = 'http://api.mesowest.net/v2/stations/timeseries?&token=' + token \
+ '&stid=' + stationID \
+ '&start=' + start \
+ '&end=' + end \
+ '&vars=' + variables \
+ '&obtimezone=' + tz \
+ '&output=json'
print ("Here is the URL you asked for:", URL)
# Open URL and read JSON content. Convert JSON string to some python
# readable format.
f = urlopen(URL)
data = f.read()
data = json.loads(data)
# Create a new dictionary to store the data in.
return_this = {}
# Get basic station information
return_this['URL'] = URL
return_this['NAME'] = str(data['STATION'][0]['NAME'])
return_this['STID'] = str(data['STATION'][0]['STID'])
return_this['LAT'] = float(data['STATION'][0]['LATITUDE'])
return_this['LON'] = float(data['STATION'][0]['LONGITUDE'])
return_this['ELEVATION'] = float(data['STATION'][0]['ELEVATION'])
# Note: Elevation is in feet, NOT METERS!
# Dynamically create keys in the dictionary for each requested variable
for v in data['STATION'][0]['SENSOR_VARIABLES'].keys():
if v == 'date_time':
# Dates: Convert the strings to a python datetime object.
dates = data["STATION"][0]["OBSERVATIONS"]["date_time"]
DATES = [datetime.strptime(x, '%Y-%m-%dT%H:%M:%SZ') for x in dates]
return_this['DATETIME'] = np.array(DATES)
else:
# v represents all the variables, but each variable may have
# more than one set.
# For now, just return the first set.
key_name = str(v)
set_num = 0
grab_this_set = str(list(data['STATION'][0]['SENSOR_VARIABLES']\
[key_name].keys())[set_num]) # This could be problematic. No guarantee of order
# Always grab the first set (either _1 or _1d)
# ! Should make exceptions to this rule for certain stations and certain
# ! variables (a project for another day :p).
if grab_this_set[-1] != '1' and grab_this_set[-1] != 'd':
grab_this_set = grab_this_set[0:-1]+'1'
if grab_this_set[-1] == 'd':
grab_this_set = grab_this_set[0:-2]+'1d'
variable_data = np.array(data['STATION'][0]['OBSERVATIONS']\
[grab_this_set], dtype=np.float)
return_this[key_name] = variable_data
return return_this
Explanation: Define a few functions
First function will get MesoWest data and return a Python dictionary.
Find a list of all available variables:
https://synopticlabs.org/api/mesonet/variables/
End of explanation
# Make Rose
#A quick way to create new windrose axes...
def new_axes():
fig = plt.figure(facecolor='w', edgecolor='w')
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect, facecolor='w')
fig.add_axes(ax)
return ax
#...and adjust the legend box
def set_legend(ax):
l = ax.legend()
#plt.setp(l.get_texts())
plt.legend(loc='center left', bbox_to_anchor=(1.2, 0.5), prop={'size':10})
Explanation: These two functions set up the windroses axes and legend
End of explanation
# Date range for data we are interested
start = datetime(2016, 12, 1)
end = datetime(2017, 3, 1)
# MesoWest station ID.
stn = 'MTMET'
Explanation: Ok, lets get some data
End of explanation
# Get MesoWest Data
air_data = get_mesowest_ts(stn, start, end, variables='wind_direction,PM_25_concentration')
Explanation: Find other stations with PM 25 concentrations here:
https://api.synopticlabs.org/v2/stations/metadata?&token=demotoken&state=UT&vars=PM_25_concentration&status=active
End of explanation
air_data
Explanation: What is the variable air_data?
End of explanation
air_data.keys()
Explanation: air_data is a dictionary. Each key is associated with a value or object.
What data are in the dictionary?
End of explanation
print ("Station Name:", air_data['NAME'])
print ("Number of Observations:", len(air_data['DATETIME']))
print ("List of dates:", air_data['DATETIME'])
Explanation: You can access the values or objects of each key like so...
End of explanation
# Create a new figure
plt.figure(figsize=[10,5])
# Plot data lines
plt.plot(air_data['DATETIME'], air_data['PM_25_concentration'],
color='dodgerblue',
label="PM 2.5")
plt.axhline(35,
linestyle = '--',
color='r',
label="EPA Standard")
# Add labels, etc.
plt.legend()
plt.ylabel(r'PM 2.5 Concentration ($\mu$g m$\mathregular{^{-3}}$)')
plt.title('PM 2.5 Concentration at %s (%s)' % (air_data['NAME'], air_data['STID']))
plt.xlim([air_data['DATETIME'][0], air_data['DATETIME'][-1]])
plt.ylim([0, np.nanmax(air_data['PM_25_concentration']+5)])
Explanation: Visualize the data
Each datetime object in air_data['DATETIME'] matches PM 2.5 concentrations in air_data['PM_25_concentration'] and wind directions in air_data['wind_direction'].
Plot a time series of PM 2.5 concentration for the time period
End of explanation
# Make the wind rose
ax = new_axes()
ax.bar(air_data['wind_direction'], air_data['PM_25_concentration'],
nsector=16,
normed=True, # displays a normalized wind rose, in percent instead of count.
bins=[0, 12.1, 35.5, 55.5, 150.5],
colors=('green', 'yellow', 'orange', 'red', 'purple'))
# Create a legend
set_legend(ax)
plt.title("PM2.5 Rose %s \n %s - %s" % (air_data['NAME'], start.strftime('%d %b %Y'), end.strftime('%d %b %Y')))
plt.grid(True)
# Grid at 5% intervals
plt.yticks(np.arange(5, 105, 5))
ax.set_yticklabels(['5%', '10%', '15%', '20%', '25%', '30%', '35%', '40%'])
# Change the plot range
ax.set_rmax(np.max(np.sum(ax._info['table'], axis=0)))
#ax.set_rmax(40)
Explanation: Plot a wind rose, to show how PM 2.5 is related to wind direction
ax.bar() is a function that makes wind roses. It requires two inputs:
1. An array of wind directions.
2. An array of some variable related to wind direction, in this case PM 2.5.
The other inputs are not required, but allow us to customize the figure.
End of explanation
# Values used to create the plot
ax._info["table"]
Explanation: Questions ???
What does nsector do? (Try increasing or decreasing it)
How can you change the color of each bin? Find matplotlib named colors here
How can you change the color range for each bin?
How can you change the number of bins?
What happens if you uncomment the last line ax.set_rmax(40)?
Instead of using the ax.bar() function, try ax.contour(), ax.contourf(), ax.box()
What does this data tell us?
Where do winds typically blow from? Why?
Do you know where MTMET station is?
Can you find the latitude and longitude for MTMET and find its location in Google Maps?
From what direction did MTMET get the highest PM 2.5 pollution?
How does this compare to the same time period last year?
What data is used to make this plot? What did the ax.bar() function do?
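One possible variation to experiment with (a sketch -- the sector count, bin edges, and colors are arbitrary choices, not recommendations):
axv = new_axes()
axv.bar(air_data['wind_direction'], air_data['PM_25_concentration'],
        nsector=8, normed=True, bins=[0, 20, 40, 60],
        colors=('lightgreen', 'gold', 'darkorange', 'firebrick'))
set_legend(axv)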
End of explanation
print ('Why does it have this shape?', np.shape(ax._info["table"]))
print ('Why is the last item all zeros?')
print ('The total frequency in each direction:', np.sum(ax._info["table"], axis=0))
print ('Maximum frequency (what we set rmax to)', np.max(np.sum(ax._info["table"], axis=0)))
Explanation: Questions ???
End of explanation
# Find where air_data['PM_25_concentration'] is high
high_PM_idx = air_data['PM_25_concentration'] > 35.5
# Note: You'll get a warning because there may be NaNs in the data
# What did we just do? This variable contains a True/False for every position
high_PM_idx
# Only get the dates and data when high_PM_idx is true.
direction_highPM = air_data['wind_direction'][high_PM_idx]
PM25_highPM = air_data['PM_25_concentration'][high_PM_idx]
# Create a new figure axis
axH = new_axes()
axH.bar(direction_highPM, PM25_highPM,
nsector=16,
normed=True,
bins=[0, 12.1, 35.5, 55.5, 150.5],
colors=('green', 'yellow', 'orange', 'red', 'purple'))
# Create a legend
set_legend(axH)
plt.title("PM2.5 Rose %s \n %s - %s" % (air_data['NAME'], start.strftime('%d %b %Y'), end.strftime('%d %b %Y')))
plt.grid(True)
# Grid at 5% intervals
plt.yticks(np.arange(5, 105, 5))
axH.set_yticklabels(['5%', '10%', '15%', '20%', '25%', '30%', '35%', '40%'])
# Change the plot range
axH.set_rmax(np.max(np.sum(axH._info['table'], axis=0)))
Explanation: What if we only want a wind rose when PM 2.5 was high?
End of explanation
a1 = get_mesowest_ts(stn, start, end, variables='wind_direction,air_temp,wind_speed')
# These are the availalbe keys
print (a1.keys())
# Make a wind rose for air temperature
ax1 = new_axes()
ax1.bar(a1['wind_direction'], a1['air_temp'],
nsector=16,
normed=True,
bins=range(-10,25,5),
cmap=cm.Spectral_r) # For a list of other colormap options type: dir(cm)
# Add a legend and title
set_legend(ax1)
plt.title("Temperature Rose %s \n %s - %s" % (a1['NAME'], start.strftime('%d %b %Y'), end.strftime('%d %b %Y')))
# Add the grid lines
plt.grid(True)
# Grid at 5% intervals, between 5 and 100
plt.yticks(np.arange(5, 105, 5))
# Label each grid with a % sign
ax1.set_yticklabels(['5%', '10%', '15%', '20%', '25%', '30%', '35%', '40%'])
# Change the plot range
#ax.set_rmax(25)
ax1.set_rmax(np.max(np.sum(ax1._info['table'], axis=0)))
Explanation: How would you make a wind rose for another variable?
First, we need to get another variable from the MesoWest API. Lets try air temperature and wind speed.
End of explanation
ax2 = new_axes()
ax2.bar(a1['wind_direction'], a1['wind_speed'],
nsector=16,
normed=True,
bins=range(0,10))
set_legend(ax2)
ax2.set_title('Wind Rose: bar')
ax3 = new_axes()
ax3.contourf(a1['wind_direction'], a1['wind_speed'],
nsector=180,
normed=True,
bins=range(0,8),
cmap=cm.inferno_r)
ax3.set_title('Wind Rose: contourf')
set_legend(ax3)
Explanation: Question ???
Can you tell where the wind typically blows at night, when it's cold?
Can you make a rose for another time of year? Another station?
Wind Rose, in m/s
End of explanation |
3,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bivariate
A bivariate analysis differs from a univariate, or distribution analysis, in that it is the analysis of two separate sets of data. These two sets of data are compared to one another to check for correlation, or a tendency of one of the sets of data to "predict" corresponding values in the other data set. If a linear or higher order model can be applied to describe, or model, the two sets of data, they are said to be correlated.
When two distributions are correlated, it is possible that the data in one of the distributions can be used to predict a corresponding value in the second distribution. This first distribution is referred to as the predictor and the second distribution as the response. Both predictor and response are graphed by a scatter plot, typically with the predictor on the x-axis and the response on the y-axis.
Note
Step1: Scatter Plot
A scatter plot is used in sci-analysis to visualize the correlation between two sets of data. For this to work, each value in the first set of data has a corresponding value in the second set of data. The two values are tied together by the matching index value in each set of data. The lengths of the two sets of data have to be equal to one another, and the index values of each data set have to be contiguous. If there is a missing value or values in one data set, the matching value at the same index in the other data set will be dropped.
By default, the best-fit line (assuming a linear relationship) is drawn as a dotted red line.
Step2: Boxplot Borders
Boxplots can be displayed along-side the x and y axis of the scatter plot. This is a useful tool for visualizing the distribution of the sets of data on the x and y axis while still displaying the scatter plot.
Step3: Contours
In certain cases, such as when one of the sets of data is discrete and the other is continuous, it might be difficult to determine where the data points are centered. In this case, density contours can be used to help visualize the joint probability distribution between the two sets of data.
Step4: Grouped Scatter Plot
If each set of data contains discrete and equivalent groups, the scatter plot can show each group in a separate color.
Step5: Interpreting the Statistics
Linear Regression
The Linear Regression finds the least-squares best-fit line between the predictor and response. The linear relationship between the predictor and response is described by the relationship y = mx + b, where x is the predictor, y is the response, m is the slope, and b is the y-intercept.
n - The number of data points in the analysis.
Slope - The slope of the best-fit line between predictor and response.
Intercept - The y-intercept of the best-fit line between predictor and response.
r - The correlation coefficient of the linear regression.
r^2 - The proportion of the variation in the response that can be explained by the linear regression. The higher the number, the more accurately the linear regression models the relationship between the predictor and response.
Std Err - Standard error of the best-fit line.
p value - The p value of the hypothesis test that the slope of the best-fit line is actually zero.
Correlation Coefficient
If the data points from both sets of data are normally distributed, the Pearson correlation coefficient is calculated, otherwise, the Spearman Rank correlation coefficient is calculated. A correlation coefficient of 0 indicates no relationship, whereas 1 indicates a perfect correlation between predictor and response. In the case of both correlation coefficients, the null hypothesis is that the correlation coefficient is 0, signifying no relationship between the predictor and response. If the p value is less than the significance $\alpha$, the predictor and response are correlated.
Usage
Argument Examples
x-sequence, y-sequence
The bare minimum requirements for performing a Bivariate analysis. The length of x-sequence and y-sequence should be equal and will raise an UnequalVectorLengthError if not.
Step6: fit
Controls whether the best fit line is displayed or not.
Step7: points
Controls whether the data points of the scatter plot are displayed or not.
Step8: boxplot_borders
Controls whether boxplots are displayed for x-sequence and y-sequence.
Step9: contours
Controls whether the density contours are displayed or not. The contours can be useful when analyzing joint probability distributions.
Step10: labels, highlight
Used in conjunction with one another, labels and highlight are used for displaying data values for the data points on the scatter plot.
Step11: groups
The groups argument can be used to perform a Bivariate analysis on separate collections of data points that can be compared to one another.
Step12: groups, highlight
Using the groups argument is a great way to compare treatments. When combined with the highlight argument, a particular group can be highlighted on the scatter plot to stand out from the others.
Step13: Multiple groups can also be highlighted.
Step14: title
The title of the distribution to display above the graph.
Step15: xname
The name of the data on the x-axis.
Step16: yname
The name of the data on the y-axis. | Python Code:
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
%matplotlib inline
# Create x-sequence and y-sequence from random variables.
np.random.seed(987654321)
x_sequence = st.norm.rvs(2, size=2000)
y_sequence = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in x_sequence])
Explanation: Bivariate
A bivariate analysis differs from a univariate, or distribution analysis, in that it is the analysis of two separate sets of data. These two sets of data are compared to one another to check for correlation, or a tendency of one of the sets of data to "predict" corresponding values in the other data set. If a linear or higher order model can be applied to describe, or model, the two sets of data, they are said to be correlated.
When two distributions are correlated, it is possible that the data in one of the distributions can be used to predict a corresponding value in the second distribution. This first distribution is referred to as the predictor and the second distribution as the response. Both predictor and response are graphed by a scatter plot, typically with the predictor on the x-axis and the response on the y-axis.
Note: Just because two sets of data correlate with one another does not necessarily mean that one predicts the other. It merely means it's a possibility that one predicts the other. This is summarized by the saying "Correlation does not imply causation." Use caution when drawing conclusions of a bivariate analysis. It is a good idea to study both data sets more carefully to determine if the two data sets are in fact correlated.
Interpreting the Graphs
Let's first import sci-analysis and setup some variables to use in these examples.
End of explanation
analyze(x_sequence, y_sequence)
Explanation: Scatter Plot
A scatter plot is used in sci-analysis to visualize the correlation between two sets of data. For this to work, each value in the first set of data has a corresponding value in the second set of data. The two values are tied together by the matching index value in each set of data. The lengths of the two sets of data have to be equal to one another, and the index values of each data set have to be contiguous. If there is a missing value or values in one data set, the matching value at the same index in the other data set will be dropped.
By default, the best-fit line (assuming a linear relationship) is drawn as a dotted red line.
End of explanation
analyze(x_sequence, y_sequence, boxplot_borders=True)
Explanation: Boxplot Borders
Boxplots can be displayed along-side the x and y axis of the scatter plot. This is a useful tool for visualizing the distribution of the sets of data on the x and y axis while still displaying the scatter plot.
End of explanation
x_continuous = st.weibull_max.rvs(2.7, size=2000)
y_discrete = st.geom.rvs(0.5, loc=0, size=2000)
analyze(x_continuous, y_discrete, contours=True, fit=False)
Explanation: Contours
In certain cases, such as when one of the sets of data is discrete and the other is continuous, it might be difficult to determine where the data points are centered. In this case, density contours can be used to help visualize the joint probability distribution between the two sets of data.
End of explanation
# Create new x-grouped and y-grouped from independent groups A, B, and C.
a_x = st.norm.rvs(2, size=500)
a_y = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in a_x])
b_x = st.norm.rvs(4, size=500)
b_y = np.array([1.5 * x + st.norm.rvs(0, 0.65, size=1) for x in b_x])
c_x = st.norm.rvs(1.5, size=500)
c_y = np.array([3 * x + st.norm.rvs(0, 0.95, size=1) - 1 for x in c_x])
x_grouped = np.concatenate((a_x, b_x, c_x))
y_grouped = np.concatenate((a_y, b_y, c_y))
grps = np.concatenate((['Group A'] * 500, ['Group B'] * 500, ['Group C'] * 500))
analyze(
x_grouped,
y_grouped,
groups=grps,
boxplot_borders=False,
)
Explanation: Grouped Scatter Plot
If each set of data contains discrete and equivalent groups, the scatter plot can show each group in a separate color.
End of explanation
analyze(
x_sequence,
y_sequence,
)
Explanation: Interpreting the Statistics
Linear Regression
The Linear Regression finds the least-squares best-fit line between the predictor and response. The linear relationship between the predictor and response is described by the relationship y = mx + b, where x is the predictor, y is the response, m is the slope, and b is the y-intercept.
n - The number of data points in the analysis.
Slope - The slope of the best-fit line between predictor and response.
Intercept - The y-intercept of the best-fit line between predictor and response.
r - The correlation coefficient of the linear regression.
r^2 - The proportion of the variation in the response that can be explained by the linear regression. The higher the number, the more accurately the linear regression models the relationship between the predictor and response.
Std Err - Standard error of the best-fit line.
p value - The p value of the hypothesis test that the slope of the best-fit line is actually zero.
Correlation Coefficient
If the data points from both sets of data are normally distributed, the Pearson correlation coefficient is calculated, otherwise, the Spearman Rank correlation coefficient is calculated. A correlation coefficient of 0 indicates no relationship, whereas 1 indicates a perfect correlation between predictor and response. In the case of both correlation coefficients, the null hypothesis is that the correlation coefficient is 0, signifying no relationship between the predictor and response. If the p value is less than the significance $\alpha$, the predictor and response are correlated.
Usage
Argument Examples
x-sequence, y-sequence
The bare minimum requirements for performing a Bivariate analysis. The length of x-sequence and y-sequence should be equal and will raise an UnequalVectorLengthError if not.
End of explanation
analyze(
x_sequence,
y_sequence,
fit=False,
)
Explanation: fit
Controls whether the best fit line is displayed or not.
End of explanation
analyze(
x_sequence,
y_sequence,
points=False,
)
Explanation: points
Controls whether the data points of the scatter plot are displayed or not.
End of explanation
analyze(
x_sequence,
y_sequence,
boxplot_borders=True,
)
Explanation: boxplot_borders
Controls whether boxplots are displayed for x-sequence and y-sequence.
End of explanation
analyze(
x_sequence,
y_sequence,
contours=True,
)
Explanation: contours
Controls whether the density contours are displayed or not. The contours can be useful when analyzing joint probability distributions.
End of explanation
labels = np.random.randint(low=10000, high=99999, size=2000)
analyze(
x_sequence,
y_sequence,
labels=labels,
highlight=[66286]
)
Explanation: labels, highlight
Used in conjunction with one another, labels and highlight are used for displaying data values for the data points on the scatter plot.
End of explanation
# Create new x-grouped and y-grouped from independent groups A, B, and C.
a_x = st.norm.rvs(2, size=500)
a_y = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in a_x])
b_x = st.norm.rvs(4, size=500)
b_y = np.array([1.5 * x + st.norm.rvs(0, 0.65, size=1) for x in b_x])
c_x = st.norm.rvs(1.5, size=500)
c_y = np.array([3 * x + st.norm.rvs(0, 0.95, size=1) - 1 for x in c_x])
x_grouped = np.concatenate((a_x, b_x, c_x))
y_grouped = np.concatenate((a_y, b_y, c_y))
grps = np.concatenate((['Group A'] * 500, ['Group B'] * 500, ['Group C'] * 500))
analyze(
x_grouped,
y_grouped,
groups=grps,
)
Explanation: groups
The groups argument can be used to perform a Bivariate analysis on separate collections of data points that can be compared to one another.
End of explanation
analyze(
x_grouped,
y_grouped,
groups=grps,
highlight=['Group A'],
)
Explanation: groups, highlight
Using the groups argument is a great way to compare treatments. When combined with the highlight argument, a particular group can be highlighted on the scatter plot to stand out from the others.
End of explanation
analyze(
x_grouped,
y_grouped,
groups=grps,
highlight=['Group A', 'Group B'],
)
Explanation: Multiple groups can also be highlighted.
End of explanation
x_sequence = st.norm.rvs(2, size=2000)
y_sequence = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in x_sequence])
analyze(
x_sequence,
y_sequence,
title='This is a Title',
)
Explanation: title
The title of the distribution to display above the graph.
End of explanation
analyze(
x_sequence,
y_sequence,
xname='This is the x-axis data'
)
Explanation: xname
The name of the data on the x-axis.
End of explanation
analyze(
x_sequence,
y_sequence,
yname='This is the y-axis data'
)
Explanation: yname
The name of the data on the y-axis.
End of explanation |
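For readers who want to see roughly how the statistics described above can be computed outside of sci-analysis, here is a minimal sketch using scipy directly. It is only an illustration under the assumption that x_sequence and y_sequence are the arrays defined earlier; it does not reproduce sci-analysis internals.
from scipy import stats
# Least-squares fit: slope, intercept, r, p value and standard error of the slope
slope, intercept, r_value, p_value, std_err = stats.linregress(x_sequence, y_sequence.flatten())
print('n = %d, slope = %.4f, intercept = %.4f, r^2 = %.4f' % (len(x_sequence), slope, intercept, r_value ** 2))
# Pick the correlation test the way the text describes: Pearson if both samples look normal, Spearman rank otherwise
_, p_x = stats.normaltest(x_sequence)
_, p_y = stats.normaltest(y_sequence.flatten())
if p_x > 0.05 and p_y > 0.05:
    corr, p_corr = stats.pearsonr(x_sequence, y_sequence.flatten())
else:
    corr, p_corr = stats.spearmanr(x_sequence, y_sequence.flatten())
print('correlation = %.4f (p = %.4g)' % (corr, p_corr))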
3,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In-class Assignment 02
Step1: First Part
We first read the fall-time measurements recorded in the laboratory, which are stored in the file 'data.csv'
Step2: # Analysis
For the case of a single experiment with 5 data points, all of the error bars cross the mean value roughly through its center. However, the 1-sigma error covers only one data point; the 3-sigma error covers most of the data points, and the t-student error covers all of them. Each interval contains the expected mean value, but in this case t-student and 3 sigma give the best error margin for the experiment.
Second Part
Step3: Analysis
For the case of 5 experiments with 5 data points each, it is worth noting that the averages of each experiment (the points in the figure) are much closer to the computed mean value than the data of the single 5-point experiment.
The mean of the per-experiment averages has a rather small 1-sigma standard error, so the interval covers only one data point, which increases the chance of reporting a wrong value. The 3-sigma error, on the other hand, covers almost all of the data points, improving the chance of obtaining a more precise and accurate value.
Third Part
Step4: Analysis
When the calculation is done on a single experiment with 30 data points, the distribution of the data behaves like a Gaussian. The normalization constant of the Gaussian was replaced with the highest observed frequency, because the theoretical value did not reach half the height of the bell curve; even so, the Gaussian behaviour was evident.
The 1-sigma error of the mean of the 30 data points is much smaller than the standard deviation of the distribution, which implies that there may be higher accuracy but low precision due to the large spread of the data, while the 3-sigma error covers a larger fraction of the data, similar to the previous experiments.
Fourth Part
To compute the acceleration of gravity the following equation was used
Step5: The error-propagation calculation was carried out in the lab notebook starting from the following formula | Python Code:
# Librerías
import matplotlib
from scipy import misc
from scipy import stats
from scipy import special
import pylab as plt
import numpy as np
%matplotlib inline
font = {'weight' : 'bold',
'size' : 16}
matplotlib.rc('font', **font)
from IPython.display import display
from IPython.display import HTML
import IPython.core.display as di # Example: di.display_html('<h3>%s:</h3>' % str, raw=True)
# This line will hide code by default when the notebook is exported as HTML
di.display_html('<script>jQuery(function() {if (jQuery("body.notebook_app").length == 0) { jQuery(".input_area").toggle(); jQuery(".prompt").toggle();}});</script>', raw=True)
# This line will add a button to toggle visibility of code blocks, for use with the HTML export version
di.display_html('''<button onclick="jQuery('.input_area').toggle(); jQuery('.prompt').toggle();">Pulse para codigo</button>''', raw=True)
Explanation: In-class Assignment 02: t-Student and the acceleration of gravity
Yennifer Angarita Arenas
Alejandro Mesa Gómez
Starting from 30 measurements of the time an object takes to fall from a given height (measured with its own uncertainty), do the following:
With 5 measurements, determine the time and its standard error, reported with sigma, three sigma and t-student (alpha = 1%).
Do the analogue with 5 experiments of 5 measurements each and determine the average of each experiment. For the 5 averages, assuming a normal distribution, determine the mean and the standard error reported with 3 sigma.
With the 30 measurements, determine the time and its standard error, reported with 3 sigma.
Using error propagation (remember that the height measurement has an error), determine the acceleration of gravity. Report three values corresponding to the time obtained with 5 measurements (t-student), with 5 experiments (3 sigma), and with 30 measurements (3 sigma).
End of explanation
tiempos=np.loadtxt('data.csv')
#conversion de milisegundos a segundos
tiempos/=100
#print ('tiempos[s]: ',tiempos, '\n')
incerteza=0.01 #1%
datos1=np.array(tiempos[0:5]) # tiempo en segundos
n=5 #numero de datos
#for i in range(len(datos1)):
# print ('Datos: %.3f' %datos1[i] )
media = np.mean(datos1) # Valor medio del tiempo, comando directo de python
devstd = np.std(datos1) # Desviacion estandar, comando directo de python
ErrorSTD = devstd / np.sqrt(datos1.size) # Error de la Desviacion estandar
ErrorSTD3 = 3*(devstd / np.sqrt(datos1.size)) # 3 veces el error de la Desviacion estandar
gdl = n - 1
confi = 1. - incerteza
aux = stats.t.interval(confi,gdl,loc=0,scale=1) # loc sirve para desplazar la distribución, scale para escalarla.
valor_t = aux[1] # corrección de t-student
Errort_student=valor_t*(devstd / np.sqrt(datos1.size)) #Error estandar t-student
print('El promedio de la medida experimental de tiempo de caida es de %.3f s' % media)
print('La desviación estándar es %.3f ' %devstd)  # devstd is the standard deviation defined above (devstdt is only defined later)
print('1 sigma es %.3f ' %ErrorSTD)
print('3 sigma es %.3f ' %ErrorSTD3)
print('Error t-student es %.3f ' %Errort_student)
print('valor t = %.3f ' %valor_t)
print('_________________________________________________ ')
print('(1 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(media-ErrorSTD,media+ErrorSTD))
print('(3 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(media-ErrorSTD3,media+ErrorSTD3))
print('(t-student): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(media-Errort_student,media+Errort_student))
# Graficación de los resultados
numdatos = datos1.size
l1 = np.ones(numdatos) # para graficar los datos de cada muestra en una línea
plt.figure(figsize=(16,3))
plt.axvline(media,linewidth=3, c="black",label='valor medio')
plt.plot(datos1,l1,linewidth=0,marker='.',ms=12,c='black',label='Datos')
plt.plot([media-ErrorSTD, media+ErrorSTD], [1, 1], linewidth=3, linestyle="-", color="red",solid_capstyle="butt",label='Error sigma')
plt.plot([media-ErrorSTD3, media+ErrorSTD3], [2, 2], linewidth=3, linestyle="-", color="blue",solid_capstyle="butt",label='Error 3sigma')
plt.plot([media-Errort_student, media+Errort_student], [3, 3], linewidth=3, linestyle="-", color="green",solid_capstyle="butt",label='Error t-student')
plt.legend()
plt.xlim(0,1)
plt.ylim(0,4)
Explanation: First Part
We first read the fall-time measurements recorded in the laboratory, which are stored in the file 'data.csv'
End of explanation
#Datos de los experimentos
datos1=np.array(tiempos[0:5]) # tiempo experimento 1
datos2=np.array(tiempos[6:11]) # tiempo experimento 2
datos3=np.array(tiempos[12:17]) # tiempo experimento 3
datos4=np.array(tiempos[18:23]) # tiempo experimento 4
datos5=np.array(tiempos[24:29]) # tiempo experimento 5
#Promedios de cada experimento
mediat1 = np.mean(datos1) # Valor medio del tiempo, experimento 1
mediat2 = np.mean(datos2) # Valor medio del tiempo, experimento 2
mediat3 = np.mean(datos3) # Valor medio del tiempo, experimento 3
mediat4 = np.mean(datos4) # Valor medio del tiempo, experimento 4
mediat5 = np.mean(datos5) # Valor medio del tiempo, experimento 5
medias = np.array([mediat1,mediat2,mediat3,mediat4,mediat5])
prom =np.mean(medias)
STDV = np.std(medias) #Desviacion estandar del promedio total
ErrorSTDV = STDV / np.sqrt(medias.size) # Error de la Desviacion estandar
ErrorSTDV3 = 3*(STDV / np.sqrt(medias.size)) # 3 veces el error de la Desviacion estandar
for i in range(len(medias)):
print ('Promedio del tiempo de caida para el experimento %d : %.3f s' %(i+1,medias[i]))
print ('\nPromedio de los promedios de cada experimento: %.3f s' %prom)
print ('Desviacion estandar: %.3f ' %STDV)
print ('1 sigma: %.3f' %ErrorSTDV)
print ('3 sigma: %.3f' %ErrorSTDV3)
print('________________________________________________________________________ ')
print('(1 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(prom-ErrorSTDV,prom+ErrorSTDV))
print('(3 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(prom-ErrorSTDV3,prom+ErrorSTDV3))
# Graficación de los resultados
numdatos = medias.size
l1 = np.ones(numdatos) # para graficar los datos de cada muestra en una línea
plt.figure(figsize=(16,3))
plt.axvline(prom,linewidth=3, c="black",label='valor medio')
plt.plot(medias,l1,linewidth=0,marker='.',ms=12,c='black',label='Datos')
plt.plot([prom-ErrorSTDV, prom+ErrorSTDV], [1, 1], linewidth=3, linestyle="-", color="red",solid_capstyle="butt",label='Error sigma')
plt.plot([prom-ErrorSTDV3, prom+ErrorSTDV3], [2, 2], linewidth=3, linestyle="-", color="blue",solid_capstyle="butt",label='Error 3sigma')
plt.legend()
plt.xlim(0,1)
plt.ylim(0,4)
Explanation: # Analysis
For the case of a single experiment with 5 data points, all of the error bars cross the mean value roughly through its center. However, the 1-sigma error covers only one data point; the 3-sigma error covers most of the data points, and the t-student error covers all of them. Each interval contains the expected mean value, but in this case t-student and 3 sigma give the best error margin for the experiment.
Second Part
End of explanation
mediat = np.mean(tiempos) # Valor medio del tiempo, Comando directo de python
devstdt = np.std(tiempos) # Desviacion estandar del tiempo, Comando directo de python
#clasest = int(np.sqrt(tiempos.size))
clasest=6
errorstdt=devstdt/np.sqrt(tiempos.size)
sigma3=3.*errorstdt
histt, binst = np.histogram(tiempos,bins=clasest)
print('El promedio de tiempo de caida es de %.3f s' % mediat)
print('La desviación estándar del tiempo es %.3f ' %devstdt)
print('El error estándar del tiempo es %.3f ' %errorstdt)
print('El error con 3sigma del tiempo es %.3f ' %sigma3)
print('________________________________________________________________________ ')
print('(1 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(mediat-errorstdt,mediat+errorstdt))
print('(3 sigma): El tiempo de caida se encuentra en el intervalo (%.3f,%.3f) s' %(mediat-sigma3,mediat+sigma3))
#print (histt)
#print (binst)
# Medidas de longitud
plt.figure(figsize=(16,9))
plt.bar(binst[0:6],histt,width=0.05,color='cyan', label='Hist. datos')
plt.axvline(mediat,linewidth=3, c="red", label='Valor medio')
plt.plot([mediat-errorstdt,mediat+errorstdt], [5, 5], linewidth=5, linestyle="-", color="green",
solid_capstyle="butt", label='Error sigma')
plt.plot([mediat-sigma3, mediat+sigma3], [4, 4], linewidth=5, linestyle="-", color="blue",
solid_capstyle="butt", label='Error 3sigma')
plt.plot([mediat-devstdt, mediat+devstdt], [6, 6], linewidth=5, linestyle="-", color="purple",
solid_capstyle="butt", label='STDV')
plt.legend()
t=np.arange(0,0.8,0.01)
ft =12.*np.exp(-(t-mediat)**2/(2*devstdt**2))
plt.plot(t, ft, 'k--', linewidth=3)
plt.xlabel('tiempos [s]')
plt.ylabel('Frecuencia absoluta')
plt.xlim(0.1,0.8)
#plt.ylim(0,4)
plt.show()
Explanation: Analysis
For the case of 5 experiments with 5 data points each, it is worth noting that the averages of each experiment (the points in the figure) are much closer to the computed mean value than the data of the single 5-point experiment.
The mean of the per-experiment averages has a rather small 1-sigma standard error, so the interval covers only one data point, which increases the chance of reporting a wrong value. The 3-sigma error, on the other hand, covers almost all of the data points, improving the chance of obtaining a more precise and accurate value.
Third Part
End of explanation
h = 2 #medida de altura en metros
herr = 0.001 #error medida altura en metros
gt = 9.77 #gravedad teorica en Medellin
#Calculo de la GRAVEDAD con 5 medidas(t-student)
g1=2*(h)/(media)**2
print ('g1 = %.3f' %g1)
#Calculo de la GRAVEDAD con 5 experimentos (3sgima)
g2=2*(h)/(prom)**2
print ('g2 = %.3f' %g2)
#Calculo de la GRAVEDAD con 30 medidas (3sgima).
g3=2*(h)/(mediat)**2
print ('g3 = %.3f' %g3)
Explanation: Analysis
When the calculation is done on a single experiment with 30 data points, the distribution of the data behaves like a Gaussian. The normalization constant of the Gaussian was replaced with the highest observed frequency, because the theoretical value did not reach half the height of the bell curve; even so, the Gaussian behaviour was evident.
The 1-sigma error of the mean of the 30 data points is much smaller than the standard deviation of the distribution, which implies that there may be higher accuracy but low precision due to the large spread of the data, while the 3-sigma error covers a larger fraction of the data, similar to the previous experiments.
Fourth Part
To compute the acceleration of gravity the following equation was used:
$$g =\frac{2h}{t^2} $$
Where h is the height measured with the tape measure, whose resolution is ~0.001 m.
End of explanation
#Resultados de calculo de propagacion de errores
deltag1 = 25.33 #m/s^2
deltag2 = 3.65 #m/s^2
deltag3 = 3.39 #m/s^2
print ('Gravedad con 5 medidas(t-student) = (%0.3f +/- %0.3f) m/s2'%(g1,deltag1))
print ('Gravedad con 5 experimentos (3sgima) = (%0.3f +/- %0.3f) m/s2'%(g2,deltag2))
print ('Gravedad con 30 medidas (3sgima) = (%0.3f +/- %0.3f) m/s2'%(g3,deltag3))
# Graficación de los resultados
numdatos = datos1.size
l1 = np.ones(numdatos) # para graficar los datos de cada muestra en una línea
plt.figure(figsize=(16,3))
plt.axvline(gt,linewidth=3, c="black",label='Gravedad Teorica')
plt.plot([g1-deltag1, g1+deltag1], [1, 1], linewidth=3, linestyle="-", color="red",solid_capstyle="butt",label='Gravedad t-student')
plt.plot([g2-deltag2, g2+deltag2], [2, 2], linewidth=3, linestyle="-", color="blue",solid_capstyle="butt",label='Gravedad 5 Experm.')
plt.plot([g3-deltag3, g3+deltag3], [3, 3], linewidth=3, linestyle="-", color="green",solid_capstyle="butt",label='Gravedad 30 datos')
plt.legend()
#plt.xlim(0,1)
plt.ylim(0,4)
Explanation: The error-propagation calculation was carried out in the lab notebook starting from the following formula:
$$\delta g = \frac{2h}{t^2}\big(\frac{\delta (2h)}{2h} + \frac{\delta (t^2)}{t^2}\big)$$
where
$$ \delta (2h) = 2\delta h$$
$$\delta (t^2) = 2t\delta t $$
End of explanation |
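As a complement to the hand calculation described above, here is a small sketch (not from the original notebook) that evaluates the same error-propagation formula in Python. It assumes the variables h, herr, media, prom, mediat and the corresponding time errors defined in the cells above; delta_g is a helper name introduced here.
def delta_g(h, dh, t, dt):
    # delta_g = (2h/t^2) * (delta(2h)/(2h) + delta(t^2)/t^2), with delta(2h) = 2*dh and delta(t^2) = 2*t*dt
    g = 2.0 * h / t ** 2
    return g * (2.0 * dh / (2.0 * h) + 2.0 * t * dt / t ** 2)

# time uncertainties used in each report: t-student (5 data), 3 sigma (5 experiments), 3 sigma (30 data)
for label, t, dt in [('t-student, 5 data', media, Errort_student),
                     ('3 sigma, 5 experiments', prom, ErrorSTDV3),
                     ('3 sigma, 30 data', mediat, sigma3)]:
    print('%s: delta_g = %.2f m/s2' % (label, delta_g(h, herr, t, dt)))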
3,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's create a polygon that can be transformed (i.e. rotated and scaled) and dragged. You can drag the polygon around, or use the handler to rotate it and to scale it. Note that the transformations are synced across all the map views.
Step1: The scaling can be set to be uniform, meaning that it will preserve the height / width ratio.
Step2: The ability to scale, rotate and drag can be turned off. | Python Code:
pg = Polygon(locations=polygon_coords, transform=True, draggable=True)
m += pg
Explanation: Let's create a polygon that can be transformed (i.e. rotated and scaled) and dragged. You can drag the polygon around, or use the handler to rotate it and to scale it. Note that the transformations are synced across all the map views.
End of explanation
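The snippet above assumes that a map m, the Polygon class and a list of coordinates polygon_coords were created in an earlier cell. A minimal setup along those lines could look like the sketch below; the center, zoom level and coordinates here are made up purely for illustration.
from ipyleaflet import Map, Polygon

# A simple base map and a small rectangular polygon to transform
m = Map(center=(42.5, -48.5), zoom=5)
polygon_coords = [(42, -49), (43, -49), (43, -48), (42, -48)]
m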
pg.uniform_scaling = True
Explanation: The scaling can be set to be uniform, meaning that it will preserve the height / width ratio.
End of explanation
pg.scaling = False
pg.rotation = False
pg.draggable = False
Explanation: The ability to scale, rotate and drag can be turned off.
End of explanation |
3,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
reduce()
Many times students have difficulty understanding reduce() so pay careful attention to this lecture. The function reduce(function, sequence) continually applies the function to the sequence. It then returns a single value.
If seq = [ s1, s2, s3, ... , sn ], calling reduce(function, sequence) works like this
Step1: Let's look at a diagram to get a better understanding of what is going on here
Step2: Note how we keep reducing the sequence until a single final value is obtained. Let's see another example | Python Code:
lst =[47,11,42,13]
reduce(lambda x,y: x+y,lst)
Explanation: reduce()
Many times students have difficulty understanding reduce() so pay careful attention to this lecture. The function reduce(function, sequence) continually applies the function to the sequence. It then returns a single value.
If seq = [ s1, s2, s3, ... , sn ], calling reduce(function, sequence) works like this:
At first the first two elements of seq will be applied to function, i.e. func(s1,s2)
The list on which reduce() works looks now like this: [ function(s1, s2), s3, ... , sn ]
In the next step the function will be applied on the previous result and the third element of the list, i.e. function(function(s1, s2),s3)
The list looks like this now: [ function(function(s1, s2),s3), ... , sn ]
It continues like this until just one element is left and return this element as the result of reduce()
Let's see an example:
End of explanation
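To make the step-by-step description above concrete, here is a small illustrative loop (not part of the original lecture) that mimics what reduce() does and prints each intermediate value. Note that in Python 3 reduce must be imported from functools; verbose_reduce is a name introduced here.
# from functools import reduce   # needed in Python 3; reduce() is a builtin in Python 2
def verbose_reduce(function, sequence):
    result = sequence[0]
    for item in sequence[1:]:
        result = function(result, item)      # apply the function to the running result and the next element
        print('intermediate result: %s' % result)
    return result

verbose_reduce(lambda x, y: x + y, [47, 11, 42, 13])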
from IPython.display import Image
Image('http://www.python-course.eu/images/reduce_diagram.png')
Explanation: Let's look at a diagram to get a better understanding of what is going on here:
End of explanation
#Find the maximum of a sequence (This already exists as max())
max_find = lambda a,b: a if (a > b) else b
#Find max
reduce(max_find,lst)
Explanation: Note how we keep reducing the sequence until a single final value is obtained. Let's see another example:
End of explanation |
3,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chained Visualizations with Yellowbrick Pipelines
In Yellowbrick, VisualPipelines are modeled on Scikit-Learn Pipelines, which allow us to chain estimators together in a sane way and use them as a single estimator. This is very useful for models that require a series of extraction, normalization, and transformation steps in advance of prediction. For more about Scikit-Learn Pipelines, check out this post by Zac Stewart.
VisualPipelines sequentially apply a list of transforms, visualizers, and a final estimator which may be evaluated by additional visualizers. Intermediate steps of the pipeline must be kinds of 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit.
Any step that implements draw or show methods can be called sequentially directly from the VisualPipeline, allowing multiple visual diagnostics to be generated, displayed, and saved on demand. If draw or show is not called, the visual pipeline should be equivalent to the simple pipeline to ensure no reduction in performance.
The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. These steps can be visually diagnosed by visualizers at every point in the pipeline.
Step2: Fetching the data
Step3: Ok now try with VisualPipeline | Python Code:
%matplotlib inline
import os
import sys
# Modify the path
sys.path.append("/Users/rebeccabilbro/Desktop/waves/stuff/yellowbrick")
import requests
import numpy as np
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
Explanation: Chained Visualizations with Yellowbrick Pipelines
In Yellowbrick, VisualPipelines are modeled on Scikit-Learn Pipelines, which allow us to chain estimators together in a sane way and use them as a single estimator. This is very useful for models that require a series of extraction, normalization, and transformation steps in advance of prediction. For more about Scikit-Learn Pipelines, check out this post by Zac Stewart.
VisualPipelines sequentially apply a list of transforms, visualizers, and a final estimator which may be evaluated by additional visualizers. Intermediate steps of the pipeline must be kinds of 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit.
Any step that implements draw or show methods can be called sequentially directly from the VisualPipeline, allowing multiple visual diagnostics to be generated, displayed, and saved on demand. If draw or show is not called, the visual pipeline should be equivalent to the simple pipeline to ensure no reduction in performance.
The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. These steps can be visually diagnosed by visualizers at every point in the pipeline.
End of explanation
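For comparison with the Scikit-Learn pipelines mentioned above, here is a minimal plain sklearn Pipeline. It is only an illustration of the chaining idea (the scaler and classifier chosen here are arbitrary) and is not part of the original notebook.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

model = Pipeline([
    ('scale', StandardScaler()),      # transformer: implements fit and transform
    ('clf', LogisticRegression()),    # final estimator: only needs fit
])
# model.fit(X, y) would run every step in order, much like the VisualPipeline used below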
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
}
def load_data(name, download=False):
"""Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
"""
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
# Load the classification data set
data = load_data('credit')
# Specify the features of interest
features = [
'limit', 'sex', 'edu', 'married', 'age', 'apr_delay', 'may_delay',
'jun_delay', 'jul_delay', 'aug_delay', 'sep_delay', 'apr_bill', 'may_bill',
'jun_bill', 'jul_bill', 'aug_bill', 'sep_bill', 'apr_pay', 'may_pay', 'jun_pay',
'jul_pay', 'aug_pay', 'sep_pay',
]
classes = ['default', 'paid']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.default.as_matrix()
from yellowbrick.features.rankd import Rank2D
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
from yellowbrick.features.radviz import RadViz
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
Explanation: Fetching the data
End of explanation
from yellowbrick.pipeline import VisualPipeline
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
multivisualizer = VisualPipeline([
('rank2d', Rank2D(features=features, algorithm='covariance')),
('radviz', RadViz(classes=classes, features=features)),
])
multivisualizer.fit(X, y)
multivisualizer.transform(X)
multivisualizer.show()
Explanation: Ok now try with VisualPipeline
End of explanation |
3,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Wifi_Scan Data
Step1: <br/>
2. Housing Project Data
https
Step2: - "OpenStreetMap"
- "Mapbox Bright" (Limited levels of zoom for free tiles)
- "Mapbox Control Room" (Limited levels of zoom for free tiles)
- "Stamen" (Terrain, Toner, and Watercolor)
- "Cloudmade" (Must pass API key)
- "Mapbox" (Must pass API key)
- "CartoDB" (positron and dark_matter)
- "OpenStreetMap"
- "Stamen Watercolor"
- "Stamen Toner"
- "Cartodb Positron"
- "Cartodb dark_matter"
- customized tail
Step3: Update Free Wifi ssid List (next step)
<br/><br/><br/>
4. Choropleth Maps
- grid (select a reasonable size of grid)+ sjoin (data point) + count (unique bssid in each cell) | Python Code:
# Read File
df = pd.read_csv("/home/dj/Desktop/motoG4_062212.csv")
# convert Unix timestamp into readable timestamp
df['time2'] = map(lambda x: dt.datetime.fromtimestamp(x), df.time.astype(float)/1000)
df['month'] = map(lambda x: x.month, df['time2'])
df['day'] = map(lambda x: x.day, df['time2'])
df['hour'] = map(lambda x: x.hour, df['time2'])
df['minute'] = map(lambda x: x.minute, df['time2'])
df['sec'] = map(lambda x: x.second, df['time2'])
'''
df2 -> Data Collected on 06/14/2017 by 'Moto G (4)'
including 3 housing project:
- Fulton
- Eliot
- Chelsea
'''
# Filter
df2 = df.copy()
df2 = df2[((df2['month'] == 6) & (df2['day'] == 14)) | ((df2['month'] == 6) & (df2['day'] == 22) & (df2['hour'] <10))]
print "df2.shape: ", df2.shape
print "df2 unique time: ", df2.time.unique().shape
# Transform to geo format
df2.reset_index(drop=True, inplace=True)
df2['geo'] = zip(df2.lng, df2.lat)
df2['geometry'] = map(lambda x: Point(x), zip(df2.lng, df2.lat))
# Prove: with the same timestamp, one ssid may have more than one bssid.
# Conclusion: use 'bssid' unique value to count accessible wifi signals instead of 'ssid'.
# access -> bssid
print "df2 unique bssid: ", df2.bssid.unique().shape; print "df2 unique ssid: ", df2.ssid.unique().shape
print df2.groupby(df2.time).apply(lambda x: x.groupby(x.ssid).apply(lambda x: len(x.bssid)))[:2]
# groupby and agg
access_count = df2.groupby(df2.geo).apply(lambda x: len(x.bssid.unique()))
access_bssidList = df2.groupby(df2.geo).apply(lambda x: list(x.bssid.unique()))
df3 = pd.DataFrame(map(lambda x: Point(x), access_count.index), columns=['geometry'])
df3['unique_bssid_count'] = access_count.values
df3['unique_bssid_list'] = access_bssidList.values
# convert DataFrame -> GeoDataFrame
# Original 'epsg':4326
# to 'epsg': 2263 -> usft
df3= gpd.GeoDataFrame(df3)
df3.crs = from_epsg(4326)
df3.to_crs(epsg=2263, inplace=True)
print 'df3 shape: ', df3.shape
df3.head(2)
df3.to_pickle('unique_bssid.p')
Explanation: 1. Wifi_Scan Data
End of explanation
# geo data
hp_geo = gpd.read_file("./NYCHA/geo_export_f8e33e41-d8a4-4a67-8445-6472b630d185.shp")
hp_geo.to_crs(epsg=2263, inplace=True)
# names
hp_list = ['FULTON', 'CHELSEA', 'CHELSEA ADDITION', 'ELLIOTT']
hp_target = hp_geo[hp_geo.developmen.isin(hp_list)]
# shapes
fig, (ax1,ax2,ax3,ax4) = pl.subplots(1,4,figsize=(20,5))
d_ax = {'ax1':ax1, 'ax2':ax2, 'ax3':ax3, 'ax4':ax4}
col_ax = ['green', 'yellow', 'red', 'darkblue']
for i in range(4):
hp_geo[hp_geo.developmen == hp_list[i]].plot(ax=d_ax['ax'+str(i+1)], color=col_ax[i], alpha=0.9)
d_ax['ax'+str(i+1)].set_title(hp_list[i])
d_ax['ax'+str(i+1)].set_xlabel('Area: ' + str(round(hp_geo[hp_geo.developmen == hp_list[i]].geometry.area.values[0], 1))+" usft2")
# mplleaflet.display(fig=ax1.figure, crs=hp_geo.crs)
# web page
# mplleaflet.show(fig=d_ax['ax'+str(i)].figure, crs=hp_geo.crs)
import warnings; warnings.simplefilter('ignore')
hp_target['style'] = [
{'fillColor': 'yellow', 'weight': 1, 'color': 'black'},
{'fillColor': 'red', 'weight': 1, 'color': 'black'},
{'fillColor': 'darkblue', 'weight': 1, 'color': 'black'},
{'fillColor': 'green', 'weight': 1, 'color': 'black'}]
m = folium.Map([40.743, -74], zoom_start=15, tiles='Stamen Toner', crs='EPSG3857')
folium.GeoJson(hp_target).add_to(m) # folium.plugins.HeatMap(hp_target.index).add_to(m)
m #m.save('geopandas.html')
Explanation: <br/>
2. Housing Project Data
https://data.cityofnewyork.us/Housing-Development/Map-of-NYCHA-Developments/i9rv-hdr5/data
https://data.cityofnewyork.us/Housing-Development/NYCHA-GIS-file/tqnb-xmxw/data
Offical Names of my target housing projects are:
- FULTON
- CHELSEA
- CHELSEA ADDITION
- ELLIOTT
End of explanation
# Free Wifi List from last year capstone project
free_wifi = [
'#flatiron free wifi',
'freewifibysurface',
'bryantpark.org',
'DowntownBrooklynWiFi_Fon',
'linknyc free wi-fi',
'Metrotech',
'usp park wifi',
'Red Hook Wifi']
s1 = set(df2.ssid)
s2 = set(free_wifi)
s1.intersection(s2)
# df4: 'free-wifi' records
# df4 only contains records with ssid = 'linknyc free wi-fi'
df4 = df2.copy()
df4 = gpd.GeoDataFrame(df4)
df4 = df4[df4.ssid == 'linknyc free wi-fi']
df4.crs = from_epsg(4326)
df4.to_crs(epsg=2263, inplace=True)
df4['style']= [{'fillColor': 'yellow', 'weight': 1, 'color': 'black'}] * len(df4)
f, ax = pl.subplots(1,1,figsize=(5,5))
df4.plot(ax=ax)
mplleaflet.display(fig=ax.figure, crs=df4.crs)
df4.drop(['time2', 'geo'], axis=1).to_file('hp_plot')
#hp_target.to_pickle("hp_target.p")
# Points where free wifi signals are detected.
m2 = folium.Map([40.743, -74], zoom_start=15, tiles='Cartodb dark_matter', crs='EPSG3857', control_scale=False, max_zoom=30)
folium.GeoJson(df4.drop('time2', axis=1)).add_to(m2) # folium.plugins.HeatMap(hp_target.index).add_to(m)
folium.GeoJson(hp_target).add_to(m2)
m2
Explanation: - "OpenStreetMap"
- "Mapbox Bright" (Limited levels of zoom for free tiles)
- "Mapbox Control Room" (Limited levels of zoom for free tiles)
- "Stamen" (Terrain, Toner, and Watercolor)
- "Cloudmade" (Must pass API key)
- "Mapbox" (Must pass API key)
- "CartoDB" (positron and dark_matter)
- "OpenStreetMap"
- "Stamen Watercolor"
- "Stamen Toner"
- "Cartodb Positron"
- "Cartodb dark_matter"
- customized tail:
- attr = ('&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
'contributors, &copy; <a href="http://cartodb.com/attributions">CartoDB</a>')
- tiles = 'http://{s}.basemaps.cartocdn.com/light_nolabels/{z}/{x}/{y}.png'
- eg: m = folium.Map([40.743, -74], zoom_start=15, tiles=tiles, attr=attr, crs='EPSG3857')
- tiles:
- from IPython.display import IFrame
- IFrame('http://leaflet-extras.github.io/leaflet-providers/preview/', width=900, height=750)
<br/><br/>
3. Free Wifi
End of explanation
all_x = map(lambda p: p.x, df3.geometry)
all_y = map(lambda p: p.y, df3.geometry)
minx, maxx, miny, maxy = min(all_x), max(all_x), min(all_y), max(all_y)
print minx, maxx, miny, maxy
print minx, maxx, miny, maxy
Explanation: Update Free Wifi ssid List (next step)
<br/><br/><br/>
4. Choropleth Maps
- grid (select a reasonable size of grid)+ sjoin (data point) + count (unique bssid in each cell):
- for ALL wifi density
- for example
- grid + sjoin + median (level: signal strength - all points in one single cell)
- for Free wifi signal strength
- Based on Free wifi List (need to be update)
5. Which CT / CB / Zipcode / Community / Neighborhood / Household do these housing projects belong to or contain?
- shapefile
- ct
- cb
- zipcode
- household
- ...
- sjoin
- merge dataframes
<br/><br/>
6. Demographic Data
-level
-ct
-cb
-zip
-household
7. Story of Target Housing Projects: How come people built them?
- FULTON
- CHELSEA
- CHELSEA ADDITION
- ELLIOTT
8. Find out Expensive Apartment or Household Nearby ( or Other Public Facilities)
-For further comparision...
End of explanation |
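The choropleth idea in the outline above (grid + sjoin + count of unique bssid per cell) could be prototyped roughly as follows. This is an illustrative sketch only: it assumes df3 in EPSG:2263 and the minx/maxx/miny/maxy bounds from the earlier cells, the 500 usft cell size is an arbitrary choice, and grid/cell_id/bssid_count are names introduced here.
import numpy as np
from shapely.geometry import box
cell = 500   # cell size in usft, chosen arbitrarily
cells = [box(x0, y0, x0 + cell, y0 + cell)
         for x0 in np.arange(minx, maxx, cell)
         for y0 in np.arange(miny, maxy, cell)]
grid = gpd.GeoDataFrame({'geometry': cells}, crs=df3.crs)
grid['cell_id'] = grid.index
# Attach each observation point to the cell that contains it
# (op= is the geopandas keyword of this era; newer versions call it predicate=)
pts_in_cells = gpd.sjoin(df3, grid, how='inner', op='within')
# Count unique bssid per cell by unioning the bssid lists of the points in that cell
bssid_per_cell = pts_in_cells.groupby('cell_id')['unique_bssid_list'].apply(
    lambda lists: len(set(b for l in lists for b in l)))
grid['bssid_count'] = grid['cell_id'].map(bssid_per_cell).fillna(0)
grid.plot(column='bssid_count', cmap='viridis', legend=True)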
3,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Exploration of Neural Net Capabilities
Step1: Abstract
A neural network is a computational analogy to the methods by which humans think. Their design builds upon the idea of a neuron either firing or not firing based on some stimuli, and on learning whether or not the right choice was made. To allow for richer results with less complicated networks, the boolean response is replaced with a continuous analog, the sigmoid function. The network learns by taking our definition of how incorrect it is, in the form of a so-called cost function, and finding the most effective way to reduce that function to a minimum, i.e. to be the least incorrect. It is ideal to minimize the number of training sessions needed to reach maximum accuracy, due to computational cost and time. In this project, the minimum number of training sets needed to reach a sufficient accuracy will be explored for multiple standard cost functions. As well, a new cost function may be explored along with a method for generating cost functions. And finally, given a sufficient amount of time, the network will be tested with nonconformant input, in this case scanned and partitioned handwritten digits.
Base Question
Does it work?
Does it work well?
The first step in building a neural net is simply understanding and building the base algorithms. There are three things that define a network
Step2: So, how does it work?
There are three core algorithms behind every neural net
Step3: Back Propagation/Error Computation
Back Propagation is one of the scary buzz words in the world of neural nets, it doesn't have to be so scary. I prefer to call it error computation to be more transparent because, in essence, that is what it does. Let's dig in!
Cost Function
The cost function is a major factor in how your network learns. It defines, numerically, how wrong your network is. The function itself is typically defined by some sort of difference between your network's output and the actual correct answer. Because it is a function of the output, it is also a function of every weight and bias in your network. This means that it could have potentially thousands of independent variables. In its simplest form, a cost function should have some quite definite properties | Python Code:
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
import IPython as ipynb
%matplotlib inline
Explanation: An Exploration of Neural Net Capabilities
End of explanation
z = np.linspace(-10, 10, 100)
f=plt.figure(figsize=(15, 5))
plt.subplot(1, 2,1)
plt.plot(z, 1/(1+np.exp(-z)));
plt.xlabel("Input to Nueron")
plt.title("Sigmoid Response with Bias=0")
plt.ylabel("Sigmoid Response");
plt.subplot(1, 2,2)
plt.plot(z, 1/(1+np.exp(-z+5)));
plt.xlabel("Input to Nueron")
plt.title("Sigmoid Response with Bias=5")
plt.ylabel("Sigmoid Response");
Explanation: Abstract
A neural network is a computational analogy to the methods by which humans think. Their design builds upon the idea of a neuron either firing or not firing based on some stimuli, and on learning whether or not the right choice was made. To allow for richer results with less complicated networks, the boolean response is replaced with a continuous analog, the sigmoid function. The network learns by taking our definition of how incorrect it is, in the form of a so-called cost function, and finding the most effective way to reduce that function to a minimum, i.e. to be the least incorrect. It is ideal to minimize the number of training sessions needed to reach maximum accuracy, due to computational cost and time. In this project, the minimum number of training sets needed to reach a sufficient accuracy will be explored for multiple standard cost functions. As well, a new cost function may be explored along with a method for generating cost functions. And finally, given a sufficient amount of time, the network will be tested with nonconformant input, in this case scanned and partitioned handwritten digits.
Base Question
Does it work?
Does it work well?
The first step in building a neural net is simply understanding and building the base algorithms. There are three things that define a network:
Shape
The shape of a network merely describes how many neurons there are and where they are. There are typically the locations that neurons live in: The Input Layer, The Hidden Layer, and The Output Layer. The Hidden Layer can be composed of more than one layer, but by convention, it is referred to as one layer. The Input Layer is significant because it takes the inputs. It typically does not do any discrimination before passing it along, but there is nothing barring that from occurring. The Output Layer produces a result. In most cases, the result still requires some interpretation, but is in its final form as far as the network is concerned. Each of the layers can have as many neurons as are needed but it is favorable to reduce the number to the bare minimum for both computational reasons and for accuracy.
Weights
Weights live in between individual neurons and dictate how much the decision made by a neuron in the layer before it matters to the next neurons decision. A good analogy might be that Tom(a neuron) has two friends, Sally(a neurette?) and Joe(also a neuron). They are good friends so Tom likes to ask Sally and Joe's opinion about decisions he is about to make. However, Joe is a bit crazy, likes to go out and party, etc. so Tom trusts Sally's opinion a bit more than Joe's. If Tom quantified how much he trusted Sally or Joe, that quantification would be called a weight.
Biases
Biases are tied to each neuron and its decision making process. A bias in the boolean sense acts as a threshold at which point a true is returned. In the continuous generalization of the boolean process, the bias corresponds to the threshold at which point a value above 0.5 is returned. Back to our analogy with Tom and his friends, a bias might constitute how strongly each person feels about their opinion on a subject. So when Tom asks Sally and Joe about their opinion about someone else, call her Julie, Sally responds with a fairly neutral response because she doesn't know Julie, so her bias is around 0. Joe, on the other hand, used to date Julie and they had a bad break up, so he responds quite negatively, and somewhat unintuitively, his bias is very high. (See the graph of the sigmoid function below with zero bias) In other words, he has a very high threshold for speaking positively about Julie.
End of explanation
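As a toy illustration of the weights-and-bias story above (not part of the original notebook), Tom's decision can be written as a single sigmoid neuron: the opinions are the inputs, the trust levels are the weights, and the threshold is the bias. The numbers below are made up for the example.
def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) - bias      # weighted opinions minus the firing threshold
    return 1 / (1 + np.exp(-z))             # same sigmoid as in the plots above

opinions = np.array([0.9, 0.1])             # Sally is positive about Julie, Joe is not
trust = np.array([0.8, 0.3])                # Tom trusts Sally more than Joe
print(neuron(opinions, trust, bias=0.5))    # Tom's overall (continuous) decision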
ipynb.display.Image("http://neuralnetworksanddeeplearning.com/images/tikz11.png")
Explanation: So, how does it work?
There are three core algorithms behind every neural net: Feed Forward, Back Propagation/Error Computation, and Gradient Descent.
Feed Forward
The Feed Forward algorithm could be colloquially called the "Gimme an Answer" algorithm. It sends the inputs through the network and returns the outputs. We can break it down step by step and see what is really going on:
Inputs
Each input value is fed into the corresponding input nueron, that's it. In a more sophisticated network, some inputs could be rejected based on bias criterion, but for now we leave them alone.
Channels
Each input neuron is connected to every neuron in the first hidden layer through a channel, to see this visually, look at the diagram below. Each channel is given a weight that is multiplied by the value passed on by the input neuron and is then summed with all the channels feeding the same neuron and is passed into the hidden layer neuron. The channels can be thought of as pipes allowing water to flow from each input neuron to each hidden layer neuron. The weights in our network represent the diameter of these pipes(is it large or small). As well, pipes converge to a hidden layer neuron and dump all of their water into a basin representing the neuron.
Neurons
Once a value reaches a neuron that is not an input neuron, the value is passed through a sigmoid function similar to those above with the proper bias for that neuron. The sigmoid response is the value that gets passed on to the next layer of neurons.
Repeat
The Channels and Neurons steps are repeated through each layer until the final output is reached.
End of explanation
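A compact sketch of the feed forward pass described above, written for a tiny made-up network; the layer sizes and random weights here are placeholders, not values from the project.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def feed_forward(x, weights, biases):
    a = x
    for W, b in zip(weights, biases):        # one step per layer of channels + neurons
        a = sigmoid(np.dot(W, a) + b)        # weighted sum through the channels, then the sigmoid response
    return a

rng = np.random.RandomState(0)
weights = [rng.randn(3, 2), rng.randn(1, 3)]   # 2 inputs -> 3 hidden neurons -> 1 output
biases = [rng.randn(3), rng.randn(1)]
print(feed_forward(np.array([0.5, -1.2]), weights, biases))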
ipynb.display.Image("http://blog.datumbox.com/wp-content/uploads/2013/10/gradient-descent.png")
Explanation: Back Propagation/Error Computation
Back Propagation is one of the scary buzz words in the world of neural nets, it doesn't have to be so scary. I prefer to call it error computation to be more transparent because, in essence, that is what it does. Let's dig in!
Cost Function
The cost function is a major factor in how your network learns. It defines, numerically, how wrong your network is. The function itself is typically defined by some sort of difference between your network's output and the actual correct answer. Because it is a function of the output, it is also a function of every weight and bias in your network. This means that it could have potentially thousands of independent variables. In its simplest form, a cost function should have some quite definite properties: when the output is near the correct answer, the cost function should be near zero, a small change in any single weight or bias should result in a small change in the cost function, and the cost function must be non-negative everywhere.
Error Computation
Through a set of nifty equations which will not be shown here, once you have a cost function and take the gradient with respect to the output of said cost function, you are able to calculate a metric for the error of the output. Through some clever deductions based on the fact that a small change in any independent variable results in a small change in the cost function we can calculate that same metric for each independent variable. (That is the Back Propagation bit) You can then calculate, through further clever deductions, the partial derivative of the cost function with respect to each independent variable. The partial derivative of the cost function with respect to each variable will come in handy for when we do Gradient Descent.
Gradient Descent
Gradient Descent uses the fact that we want to minimize our cost function together with the idea of the gradient as the path of steepest descent.
Down the Mountain
The Gradient Descent uses the gradients we calculated in the Error Computation step and tells us how we should change our variables if we want to reach a minimum in the fastest way possible. The algorithm uses the fact that the gradient with respect to an independent variable represents the component of the vector pointing in the direction of most change in that variable's dimension. Because even Euler couldn't imagine a thousand dimensional space, we draw some intuition from the familiar three dimensional case. Suppose that you are dropped at a random location on a mountain. Suppose further that you are blind (or it is so foggy that you can't see anything). How do you find the fastest way to the bottom? Well, the only thing that you can do is sense the slope that seems to be the steepest and walk down it. But you are a mathematician and have no grasp on estimating things, so you calculate the gradient with respect to your left-right direction and your front-back direction. You see that if you take a half step to the left and a quarter step forward you will move the furthest downwards. Wait! Why just one step? First of all, mountains are complicated surfaces and their slopes change from place to place, so continuing to make the same steps may not take you the most downwards, or even downwards at all. Secondly, you are blind! (or it is really foggy) If you start running or jumping down the slope, you may overshoot a minimum and have to stop and turn around. In the actual gradient descent algorithm, the step size is represented by something called the learning rate. A step in the right direction is performed in the algorithm by reducing each individual variable by this learning constant multiplied by the gradient with respect to that particular variable. After doing this thousands of times, we find the local minimums of our cost function.
End of explanation |
3,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Learn to calculate with seq2seq model
In this assignment, you will learn how to use neural networks to solve sequence-to-sequence prediction tasks. Seq2Seq models are very popular these days because they achieve great results in Machine Translation, Text Summarization, Conversational Modeling and more.
Using sequence-to-sequence modeling you are going to build a calculator for evaluating arithmetic expressions, by taking an equation as an input to the neural network and producing an answer as its output.
The resulting solution for this problem will be based on state-of-the-art approaches for sequence-to-sequence learning and you should be able to easily adapt it to solve other tasks. However, if you want to train your own machine translation system or an intelligent chatbot, it would be useful to have access to compute resources like a GPU, and to be patient, because training such systems is usually time consuming.
Libraries
For this task you will need the following libraries
Step2: To check the correctness of your implementation, use the test_generate_equations function
Step3: Finally, we are ready to generate the train and test data for the neural network
Step4: Prepare data for the neural network
The next stage of data preparation is creating mappings of the characters to their indices in some vocabulary. Since in our task we already know which symbols will appear in the inputs and outputs, generating the vocabulary is a simple step.
How to create dictionaries for other tasks
First of all, you need to understand what the basic unit of the sequence in your task is. In our case, we operate on symbols and the basic unit is a symbol. The number of symbols is small, so we don't need to think about filtering/normalization steps. However, in other tasks, the basic unit is often a word, and in this case the mapping would be word $\to$ integer. The number of words might be huge, so it would be reasonable to filter them, for example, by frequency and leave only the frequent ones. Other strategies that you should consider are
Step5: Special symbols
Step7: You could notice that we have added 3 special symbols
Step8: Check that your implementation is correct
Step10: We also need to be able to get back from indices to symbols
Step12: Generating batches
The final step of data preparation is a function that transforms a batch of sentences to a list of lists of indices.
Step13: The function generate_batches will help to generate batches with defined size from given samples.
Step14: To illustrate the result of the implemented functions, run the following cell
Step15: Encoder-Decoder architecture
Encoder-Decoder is a successful architecture for Seq2Seq tasks with different lengths of input and output sequences. The main idea is to use two recurrent neural networks, where the first neural network encodes the input sequence into a real-valued vector and then the second neural network decodes this vector into the output sequence. While building the neural network, we will specify some particular characteristics of this architecture.
Step16: Let us use TensorFlow building blocks to specify the network architecture.
Step18: First, we need to create placeholders to specify what data we are going to feed into the network during the exectution time. For this task we will need
Step20: Now, let us specify the layers of the neural network. First, we need to prepare an embedding matrix. Since we use the same vocabulary for input and output, we need only one such matrix. For tasks with different vocabularies there would be multiple embedding layers.
- Create embeddings matrix with tf.Variable. Specify its name, type (tf.float32), and initialize with random values.
- Perform embeddings lookup for a given input batch.
Step22: Encoder
The first RNN of the current architecture is called an encoder and serves for encoding an input sequence to a real-valued vector. Input of this RNN is an embedded input batch. Since sentences in the same batch could have different actual lengths, we also provide input lengths to avoid unnecessary computations. The final encoder state will be passed to the second RNN (decoder), which we will create soon.
TensorFlow provides a number of RNN cells ready for use. We suggest that you use a GRU cell, but you can also experiment with other types.
Wrap your cells with DropoutWrapper. Dropout is an important regularization technique for neural networks. Specify input keep probability using the dropout placeholder that we created before.
Combine the defined encoder cells with Dynamic RNN. Use the embedded input batches and their lengths here.
Use dtype=tf.float32 everywhere.
Step25: Decoder
The second RNN is called a decoder and serves for generating the output sequence. In the simple seq2seq architecture, the input sequence is provided to the decoder only as the final state of the encoder. Obviously, it is a bottleneck and Attention techniques can help to overcome it. So far, we do not need them to make our calculator work, but this would be a necessary ingredient for more advanced tasks.
During training, the decoder also uses information about the true output. It is fed in as input, symbol by symbol. However, during the prediction stage (which is called inference in this architecture), the decoder can only use its own generated output from the previous step to feed in at the next step. Because of this difference (training vs inference), we will create two distinct instances, which will handle the two described scenarios.
The picture below illustrates the point. It also shows our work with the special characters, e.g. look how the start symbol ^ is used. The transparent parts are ignored. In decoder, it is masked out in the loss computation. In encoder, the green state is considered as final and passed to the decoder.
<img src="encoder-decoder-pic.png" style="width: 500px;">
Step27: In this task we will use sequence_loss, which is a weighted cross-entropy loss for a sequence of logits. Take a moment to understand what your train logits and targets are. Also note that we do not want to take into account loss terms coming from padding symbols, so we will mask them out using weights.
Step29: The last thing to specify is the optimization of the defined loss.
We suggest that you use optimize_loss with Adam optimizer and a learning rate from the corresponding placeholder. You might also need to pass global step (e.g. as tf.train.get_global_step()) and clip gradients by 1.0.
Step30: Congratulations! You have specified all the parts of your network. You may have noticed that we didn't deal with any real data yet, so what you have written is just a recipe for how the network should function.
Now we will put them to the constructor of our Seq2SeqModel class to use it in the next section.
Step31: Train the network and predict output
Session.run is a point which initiates computations in the graph that we have defined. To train the network, we need to compute self.train_op. To predict output, we just need to compute self.infer_predictions. In any case, we need to feed actual data through the placeholders that we defined above.
Step32: We implemented two prediction functions
Step33: Run your experiment
Create Seq2SeqModel model with the following parameters
Step34: Finally, we are ready to run the training! A good indicator that everything works fine is a decreasing loss during training. You can expect a loss value of approximately 2.7 at the beginning of the training and near 1 after the 10th epoch.
Step35: Evaluate results
Because our task is simple and the output is straight-forward, we will use MAE metric to evaluate the trained model during the epochs. Compute the value of the metric for the output from each epoch. | Python Code:
import random
def generate_equations(allowed_operators, dataset_size, min_value, max_value):
Generates pairs of equations and solutions to them.
Each equation has a form of two integers with an operator in between.
Each solution is an integer with the result of the operaion.
allowed_operators: list of strings, allowed operators.
dataset_size: an integer, number of equations to be generated.
min_value: an integer, min value of each operand.
min_value: an integer, max value of each operand.
result: a list of tuples of strings (equation, solution).
sample = []
for _ in range(dataset_size):
######################################
######### YOUR CODE HERE #############
######################################
return sample
Explanation: Learn to calculate with seq2seq model
In this assignment, you will learn how to use neural networks to solve sequence-to-sequence prediction tasks. Seq2Seq models are very popular these days because they achieve great results in Machine Translation, Text Summarization, Conversational Modeling and more.
Using sequence-to-sequence modeling you are going to build a calculator for evaluating arithmetic expressions, by taking an equation as an input to the neural network and producing an answer as its output.
The resulting solution for this problem will be based on state-of-the-art approaches for sequence-to-sequence learning and you should be able to easily adapt it to solve other tasks. However, if you want to train your own machine translation system or an intelligent chatbot, it would be useful to have access to compute resources like a GPU, and to be patient, because training such systems is usually time consuming.
Libraries
For this task you will need the following libraries:
- TensorFlow — an open-source software library for Machine Intelligence.
- scikit-learn — a tool for data mining and data analysis.
If you have never worked with TensorFlow, you will probably want to read some tutorials during your work on this assignment, e.g. Neural Machine Translation tutorial deals with very similar task and can explain some concepts to you.
Data
One benefit of this task is that you don't need to download any data — you will generate it on your own! We will use two operators (addition and subtraction) and work with positive integer numbers in some range. Here are examples of correct inputs and outputs:
Input: '1+2'
Output: '3'
Input: '0-99'
Output: '-99'
Note that there are no spaces between operators and operands.
Now you need to implement the function generate_equations, which will be used to generate the data.
End of explanation
def test_generate_equations():
allowed_operators = ['+', '-']
dataset_size = 10
for (input_, output_) in generate_equations(allowed_operators, dataset_size, 0, 100):
if not (type(input_) is str and type(output_) is str):
return "Both parts should be strings."
if eval(input_) != int(output_):
return "The (equation: {!r}, solution: {!r}) pair is incorrect.".format(input_, output_)
return "Tests passed."
print(test_generate_equations())
Explanation: To check the correctness of your implementation, use the test_generate_equations function:
End of explanation
from sklearn.model_selection import train_test_split
allowed_operators = ['+', '-']
dataset_size = 100000
data = generate_equations(allowed_operators, dataset_size, min_value=0, max_value=9999)
train_set, test_set = train_test_split(data, test_size=0.2, random_state=42)
Explanation: Finally, we are ready to generate the train and test data for the neural network:
End of explanation
word2id = {symbol:i for i, symbol in enumerate('^$#+-1234567890')}
id2word = {i:symbol for symbol, i in word2id.items()}
Explanation: Prepare data for the neural network
The next stage of data preparation is creating mappings of the characters to their indices in some vocabulary. Since in our task we already know which symbols will appear in the inputs and outputs, generating the vocabulary is a simple step.
How to create dictionaries for other tasks
First of all, you need to understand what the basic unit of the sequence in your task is. In our case, we operate on symbols and the basic unit is a symbol. The number of symbols is small, so we don't need to think about filtering/normalization steps. However, in other tasks, the basic unit is often a word, and in this case the mapping would be word $\to$ integer. The number of words might be huge, so it would be reasonable to filter them, for example, by frequency and leave only the frequent ones. Other strategies that you should consider are: data normalization (lowercasing, tokenization, how to treat punctuation marks), separate vocabularies for input and output (e.g. for machine translation), and other specifics of the task.
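For example, a hedged sketch of a word-level vocabulary with frequency filtering could look like this (the threshold and the special tokens are illustrative choices, not part of this assignment):
```python
from collections import Counter

def build_word_vocab(tokenized_texts, min_count=5):
    # Count word frequencies over the whole corpus
    counts = Counter(word for text in tokenized_texts for word in text)
    # Reserve ids for special tokens, then keep only frequent words
    word2id = {'<PAD>': 0, '<UNK>': 1}
    for word, count in counts.most_common():
        if count >= min_count:
            word2id[word] = len(word2id)
    id2word = {i: w for w, i in word2id.items()}
    return word2id, id2word
```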
End of explanation
start_symbol = '^'
end_symbol = '$'
padding_symbol = '#'
Explanation: Special symbols
End of explanation
def sentence_to_ids(sentence, word2id, padded_len):
Converts a sequence of symbols to a padded sequence of their ids.
sentence: a string, input/output sequence of symbols.
word2id: a dict, a mapping from original symbols to ids.
padded_len: an integer, a desirable length of the sequence.
result: a tuple of (a list of ids, an actual length of sentence).
sent_ids = ######### YOUR CODE HERE #############
sent_len = ######### YOUR CODE HERE #############
return sent_ids, sent_len
Explanation: You may have noticed that we have added 3 special symbols: '^', '\$' and '#':
- The '^' symbol will be passed to the network to indicate the beginning of the decoding procedure. We will discuss this one later in more detail.
- The '\$' symbol will be used to indicate the end of a string, both for input and output sequences.
- The '#' symbol will be used as a padding character to make the lengths of all strings equal within one training batch.
People have slightly different habits when it comes to special symbols in encoder-decoder networks, so don't get too confused if you come across other variants in the tutorials you read.
Padding
When the vocabularies are ready, we need to be able to convert a sentence to a list of vocabulary word indices and back. At the same time, let's take care of padding. We are going to preprocess each sequence from the input (and the output ground truth) in such a way that:
- it has a predefined length padded_len
- it is cut off or padded with the padding symbol '#', if necessary
- it always ends with the end symbol '$'
We will treat the original characters of the sequence and the end symbol as the valid part of the input. We will store the actual length of the sequence, which includes the end symbol, but does not include the padding symbols.
Now you need to implement the function sentence_to_ids that does the described job.
End of explanation
def test_sentence_to_ids():
sentences = [("123+123", 7), ("123+123", 8), ("123+123", 10)]
expected_output = [([5, 6, 7, 3, 5, 6, 1], 7),
([5, 6, 7, 3, 5, 6, 7, 1], 8),
([5, 6, 7, 3, 5, 6, 7, 1, 2, 2], 8)]
for (sentence, padded_len), (sentence_ids, expected_length) in zip(sentences, expected_output):
output, length = sentence_to_ids(sentence, word2id, padded_len)
if output != sentence_ids:
return("Convertion of '{}' for padded_len={} to {} is incorrect.".format(
sentence, padded_len, output))
if length != expected_length:
return("Convertion of '{}' for padded_len={} has incorrect actual length {}.".format(
sentence, padded_len, length))
return("Tests passed.")
print(test_sentence_to_ids())
Explanation: Check that your implementation is correct:
End of explanation
def ids_to_sentence(ids, id2word):
Converts a sequence of ids to a sequence of symbols.
ids: a list, indices for the padded sequence.
id2word: a dict, a mapping from ids to original symbols.
result: a list of symbols.
return [id2word[i] for i in ids]
Explanation: We also need to be able to get back from indices to symbols:
End of explanation
def batch_to_ids(sentences, word2id, max_len):
Prepares batches of indices.
Sequences are padded to match the longest sequence in the batch,
if it's longer than max_len, then max_len is used instead.
sentences: a list of strings, original sequences.
word2id: a dict, a mapping from original symbols to ids.
max_len: an integer, max len of sequences allowed.
result: a list of lists of ids, a list of actual lengths.
max_len_in_batch = min(max(len(s) for s in sentences) + 1, max_len)
batch_ids, batch_ids_len = [], []
for sentence in sentences:
ids, ids_len = sentence_to_ids(sentence, word2id, max_len_in_batch)
batch_ids.append(ids)
batch_ids_len.append(ids_len)
return batch_ids, batch_ids_len
Explanation: Generating batches
The final step of data preparation is a function that transforms a batch of sentences to a list of lists of indices.
End of explanation
def generate_batches(samples, batch_size=64):
X, Y = [], []
for i, (x, y) in enumerate(samples, 1):
X.append(x)
Y.append(y)
if i % batch_size == 0:
yield X, Y
X, Y = [], []
if X and Y:
yield X, Y
Explanation: The function generate_batches will help to generate batches with defined size from given samples.
End of explanation
sentences = train_set[0]
ids, sent_lens = batch_to_ids(sentences, word2id, max_len=10)
print('Input:', sentences)
print('Ids: {}\nSentences lengths: {}'.format(ids, sent_lens))
Explanation: To illustrate the result of the implemented functions, run the following cell:
End of explanation
import tensorflow as tf
Explanation: Encoder-Decoder architecture
Encoder-Decoder is a successful architecture for Seq2Seq tasks with different lengths of input and output sequences. The main idea is to use two recurrent neural networks, where the first neural network encodes the input sequence into a real-valued vector and then the second neural network decodes this vector into the output sequence. While building the neural network, we will specify some particular characteristics of this architecture.
End of explanation
class Seq2SeqModel(object):
pass
Explanation: Let us use TensorFlow building blocks to specify the network architecture.
End of explanation
def declare_placeholders(self):
Specifies placeholders for the model.
# Placeholders for input and its actual lengths.
self.input_batch = tf.placeholder(shape=(None, None), dtype=tf.int32, name='input_batch')
self.input_batch_lengths = tf.placeholder(shape=(None, ), dtype=tf.int32, name='input_batch_lengths')
# Placeholders for groundtruth and its actual lengths.
self.ground_truth = ######### YOUR CODE HERE #############
self.ground_truth_lengths = ######### YOUR CODE HERE #############
self.dropout_ph = tf.placeholder_with_default(1.0, shape=[])
self.learning_rate_ph = ######### YOUR CODE HERE #############
Seq2SeqModel.__declare_placeholders = classmethod(declare_placeholders)
Explanation: First, we need to create placeholders to specify what data we are going to feed into the network during the exectution time. For this task we will need:
- input_batch — sequences of sentences (the shape equals to [batch_size, max_sequence_len_in_batch]);
- input_batch_lengths — lengths of not padded sequences (the shape equals to [batch_size]);
- ground_truth — sequences of sentences (the shape equals to [batch_size, max_sequence_len_in_batch]);
- ground_truth_lengths — lengths of not padded sequences (the shape equals to [batch_size]);
- dropout_ph — dropout keep probability; this placeholder has a predifined value 1;
- learning_rate_ph — learning rate.
End of explanation
def create_embeddings(self, vocab_size, embeddings_size):
Specifies embeddings layer and embeds an input batch.
random_initializer = tf.random_uniform((vocab_size, embeddings_size), -1.0, 1.0)
self.embeddings = ######### YOUR CODE HERE #############
# Perform embeddings lookup for self.input_batch.
self.input_batch_embedded = ######### YOUR CODE HERE #############
Seq2SeqModel.__create_embeddings = classmethod(create_embeddings)
Explanation: Now, let us specify the layers of the neural network. First, we need to prepare an embedding matrix. Since we use the same vocabulary for input and output, we need only one such matrix. For tasks with different vocabularies there would be multiple embedding layers.
- Create embeddings matrix with tf.Variable. Specify its name, type (tf.float32), and initialize with random values.
- Perform embeddings lookup for a given input batch.
End of explanation
def build_encoder(self, hidden_size):
Specifies encoder architecture and computes its output.
# Create GRUCell with dropout.
encoder_cell = ######### YOUR CODE HERE #############
# Create RNN with the predefined cell.
_, self.final_encoder_state = ######### YOUR CODE HERE #############
Seq2SeqModel.__build_encoder = classmethod(build_encoder)
Explanation: Encoder
The first RNN of the current architecture is called an encoder and serves for encoding an input sequence to a real-valued vector. Input of this RNN is an embedded input batch. Since sentences in the same batch could have different actual lengths, we also provide input lengths to avoid unnecessary computations. The final encoder state will be passed to the second RNN (decoder), which we will create soon.
TensorFlow provides a number of RNN cells ready for use. We suggest that you use a GRU cell, but you can also experiment with other types.
Wrap your cells with DropoutWrapper. Dropout is an important regularization technique for neural networks. Specify input keep probability using the dropout placeholder that we created before.
Combine the defined encoder cells with Dynamic RNN. Use the embedded input batches and their lengths here.
Use dtype=tf.float32 everywhere.
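For reference, one possible way to fill in the encoder cell, written as a hedged sketch that assumes the TF 1.x APIs used throughout this notebook (your own solution may differ):
```python
# Possible implementation sketch (TF 1.x), for reference only
encoder_cell = tf.nn.rnn_cell.DropoutWrapper(
    tf.nn.rnn_cell.GRUCell(hidden_size),
    input_keep_prob=self.dropout_ph)

_, self.final_encoder_state = tf.nn.dynamic_rnn(
    encoder_cell,
    self.input_batch_embedded,
    sequence_length=self.input_batch_lengths,
    dtype=tf.float32)
```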
End of explanation
def build_decoder(self, hidden_size, vocab_size, max_iter, start_symbol_id, end_symbol_id):
Specifies decoder architecture and computes the output.
Uses different helpers:
- for train: feeding ground truth
- for inference: feeding generated output
As a result, self.train_outputs and self.infer_outputs are created.
Each of them contains two fields:
rnn_output (predicted logits)
sample_id (predictions).
# Use start symbols as the decoder inputs at the first time step.
batch_size = tf.shape(self.input_batch)[0]
start_tokens = tf.fill([batch_size], start_symbol_id)
ground_truth_as_input = tf.concat([tf.expand_dims(start_tokens, 1), self.ground_truth], 1)
# Use the embedding layer defined before to lookup embedings for ground_truth_as_input.
self.ground_truth_embedded = ######### YOUR CODE HERE #############
# Create TrainingHelper for the train stage.
train_helper = tf.contrib.seq2seq.TrainingHelper(self.ground_truth_embedded,
self.ground_truth_lengths)
# Create GreedyEmbeddingHelper for the inference stage.
# You should provide the embedding layer, start_tokens and index of the end symbol.
infer_helper = ######### YOUR CODE HERE #############
def decode(helper, scope, reuse=None):
Creates decoder and return the results of the decoding with a given helper.
with tf.variable_scope(scope, reuse=reuse):
# Create GRUCell with dropout. Do not forget to set the reuse flag properly.
decoder_cell = ######### YOUR CODE HERE #############
# Create a projection wrapper.
decoder_cell = tf.contrib.rnn.OutputProjectionWrapper(decoder_cell, vocab_size, reuse=reuse)
# Create BasicDecoder, pass the defined cell, a helper, and initial state.
# The initial state should be equal to the final state of the encoder!
decoder = ######### YOUR CODE HERE #############
# The first returning argument of dynamic_decode contains two fields:
# rnn_output (predicted logits)
# sample_id (predictions)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, maximum_iterations=max_iter,
output_time_major=False, impute_finished=True)
return outputs
self.train_outputs = decode(train_helper, 'decode')
self.infer_outputs = decode(infer_helper, 'decode', reuse=True)
Seq2SeqModel.__build_decoder = classmethod(build_decoder)
Explanation: Decoder
The second RNN is called a decoder and serves for generating the output sequence. In the simple seq2seq architecture, the input sequence is provided to the decoder only as the final state of the encoder. Obviously, it is a bottleneck and Attention techniques can help to overcome it. So far, we do not need them to make our calculator work, but this would be a necessary ingredient for more advanced tasks.
During training, the decoder also uses information about the true output. It is fed in as input, symbol by symbol. However, during the prediction stage (which is called inference in this architecture), the decoder can only use its own generated output from the previous step to feed in at the next step. Because of this difference (training vs inference), we will create two distinct instances, which will handle the two described scenarios.
The picture below illustrates the point. It also shows our work with the special characters, e.g. look how the start symbol ^ is used. The transparent parts are ignored. In decoder, it is masked out in the loss computation. In encoder, the green state is considered as final and passed to the decoder.
<img src="encoder-decoder-pic.png" style="width: 500px;">
Now, it's time to implement the decoder:
- First, we should create two helpers. These classes help to determine the behaviour of the decoder. During training, we will use TrainingHelper. For inference we recommend using GreedyEmbeddingHelper.
- To share all parameters during training and inference, we use one scope and set the flag 'reuse' to True at inference time. You might be interested to know more about how variable scopes work in TF.
- To create the decoder itself, we will use BasicDecoder class. As previously, you should choose some RNN cell, e.g. GRU cell. To turn hidden states into logits, we will need a projection layer. One of the simple solutions is using OutputProjectionWrapper.
- For getting the predictions, it will be convenient to use dynamic_decode. This function uses the provided decoder to perform the decoding.
End of explanation
def compute_loss(self):
Computes sequence loss (masked cross-entopy loss with logits).
weights = tf.cast(tf.sequence_mask(self.ground_truth_lengths), dtype=tf.float32)
self.loss = ######### YOUR CODE HERE #############
Seq2SeqModel.__compute_loss = classmethod(compute_loss)
Explanation: In this task we will use sequence_loss, which is a weighted cross-entropy loss for a sequence of logits. Take a moment to understand what your train logits and targets are. Also note that we do not want to take into account loss terms coming from padding symbols, so we will mask them out using weights.
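A hedged sketch of the intended call (for reference only; adapt it to your own variable names, and note that you may need to align the time dimensions of the logits and the targets):
```python
# Possible implementation sketch, assuming the TF 1.x contrib API
self.loss = tf.contrib.seq2seq.sequence_loss(
    logits=self.train_outputs.rnn_output,  # [batch, time, vocab] predicted logits
    targets=self.ground_truth,             # [batch, time] true symbol ids
    weights=weights)                       # zero weight on padding positions
```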
End of explanation
def perform_optimization(self):
Specifies train_op that optimizes self.loss.
self.train_op = ######### YOUR CODE HERE #############
Seq2SeqModel.__perform_optimization = classmethod(perform_optimization)
Explanation: The last thing to specify is the optimization of the defined loss.
We suggest that you use optimize_loss with Adam optimizer and a learning rate from the corresponding placeholder. You might also need to pass global step (e.g. as tf.train.get_global_step()) and clip gradients by 1.0.
End of explanation
def init_model(self, vocab_size, embeddings_size, hidden_size,
max_iter, start_symbol_id, end_symbol_id, padding_symbol_id):
self.__declare_placeholders()
self.__create_embeddings(vocab_size, embeddings_size)
self.__build_encoder(hidden_size)
self.__build_decoder(hidden_size, vocab_size, max_iter, start_symbol_id, end_symbol_id)
# Compute loss and back-propagate.
self.__compute_loss()
self.__perform_optimization()
# Get predictions for evaluation.
self.train_predictions = self.train_outputs.sample_id
self.infer_predictions = self.infer_outputs.sample_id
Seq2SeqModel.__init__ = classmethod(init_model)
Explanation: Congratulations! You have specified all the parts of your network. You may have noticed that we didn't deal with any real data yet, so what you have written is just a recipe for how the network should function.
Now we will put these pieces into the constructor of our Seq2SeqModel class to use it in the next section.
End of explanation
def train_on_batch(self, session, X, X_seq_len, Y, Y_seq_len, learning_rate, dropout_keep_probability):
feed_dict = {
self.input_batch: X,
self.input_batch_lengths: X_seq_len,
self.ground_truth: Y,
self.ground_truth_lengths: Y_seq_len,
self.learning_rate_ph: learning_rate,
self.dropout_ph: dropout_keep_probability
}
pred, loss, _ = session.run([
self.train_predictions,
self.loss,
self.train_op], feed_dict=feed_dict)
return pred, loss
Seq2SeqModel.train_on_batch = classmethod(train_on_batch)
Explanation: Train the network and predict output
Session.run is a point which initiates computations in the graph that we have defined. To train the network, we need to compute self.train_op. To predict output, we just need to compute self.infer_predictions. In any case, we need to feed actual data through the placeholders that we defined above.
End of explanation
def predict_for_batch(self, session, X, X_seq_len):
feed_dict = ######### YOUR CODE HERE #############
pred = session.run([
self.infer_predictions
], feed_dict=feed_dict)[0]
return pred
def predict_for_batch_with_loss(self, session, X, X_seq_len, Y, Y_seq_len):
feed_dict = ######### YOUR CODE HERE #############
pred, loss = session.run([
self.infer_predictions,
self.loss,
], feed_dict=feed_dict)
return pred, loss
Seq2SeqModel.predict_for_batch = classmethod(predict_for_batch)
Seq2SeqModel.predict_for_batch_with_loss = classmethod(predict_for_batch_with_loss)
Explanation: We implemented two prediction functions: predict_for_batch and predict_for_batch_with_loss. The first one only predicts the output for some input sequence, while the second one also computes the loss because we additionally provide the ground truth values. Both of these functions might be useful, since the first one can be used for prediction only, and the second one is helpful for validating results on non-training data during training.
End of explanation
tf.reset_default_graph()
model = ######### YOUR CODE HERE #############
batch_size = ######### YOUR CODE HERE #############
n_epochs = ######### YOUR CODE HERE #############
learning_rate = ######### YOUR CODE HERE #############
dropout_keep_probability = ######### YOUR CODE HERE #############
max_len = ######### YOUR CODE HERE #############
n_step = int(len(train_set) / batch_size)
Explanation: Run your experiment
Create Seq2SeqModel model with the following parameters:
- vocab_size — number of tokens;
- embeddings_size — dimension of embeddings, recommended value: 20;
- max_iter — maximum number of steps in decoder, recommended value: 7;
- hidden_size — size of hidden layers for RNN, recommended value: 512;
- start_symbol_id — an index of the start token (^).
- end_symbol_id — an index of the end token ($).
- padding_symbol_id — an index of the padding token (#).
Set hyperparameters. You might want to start with the following values and see how it works:
- batch_size: 128;
- at least 10 epochs;
- value of learning_rate: 0.001
- dropout_keep_probability equals to 0.5 for training (typical values for dropout probability are ranging from 0.1 to 0.5);
- max_len: 20.
End of explanation
session = tf.Session()
session.run(tf.global_variables_initializer())
invalid_number_prediction_counts = []
all_model_predictions = []
all_ground_truth = []
print('Start training... \n')
for epoch in range(n_epochs):
random.shuffle(train_set)
random.shuffle(test_set)
print('Train: epoch', epoch + 1)
for n_iter, (X_batch, Y_batch) in enumerate(generate_batches(train_set, batch_size=batch_size)):
######################################
######### YOUR CODE HERE #############
######################################
# prepare the data (X_batch and Y_batch) for training
# using function batch_to_ids
predictions, loss = ######### YOUR CODE HERE #############
if n_iter % 200 == 0:
print("Epoch: [%d/%d], step: [%d/%d], loss: %f" % (epoch + 1, n_epochs, n_iter + 1, n_step, loss))
X_sent, Y_sent = next(generate_batches(test_set, batch_size=batch_size))
######################################
######### YOUR CODE HERE #############
######################################
# prepare test data (X_sent and Y_sent) for predicting
# quality and computing value of the loss function
# using function batch_to_ids
predictions, loss = ######### YOUR CODE HERE #############
print('Test: epoch', epoch + 1, 'loss:', loss,)
for x, y, p in list(zip(X, Y, predictions))[:3]:
print('X:',''.join(ids_to_sentence(x, id2word)))
print('Y:',''.join(ids_to_sentence(y, id2word)))
print('O:',''.join(ids_to_sentence(p, id2word)))
print('')
model_predictions = []
ground_truth = []
invalid_number_prediction_count = 0
# For the whole test set calculate ground-truth values (as integer numbers)
# and prediction values (also as integers) to calculate metrics.
# If generated by model number is not correct (e.g. '1-1'),
# increase err counter and don't append this and corresponding
# ground-truth value to the arrays.
for X_batch, Y_batch in generate_batches(test_set, batch_size=batch_size):
######################################
######### YOUR CODE HERE #############
######################################
all_model_predictions.append(model_predictions)
all_ground_truth.append(ground_truth)
invalid_number_prediction_counts.append(invalid_number_prediction_count)
print('\n...training finished.')
Explanation: Finally, we are ready to run the training! A good indicator that everything works fine is a decreasing loss during training. You can expect a loss value of approximately 2.7 at the beginning of the training and near 1 after the 10th epoch.
End of explanation
from sklearn.metrics import mean_absolute_error
for i, (gts, predictions, invalid_number_prediction_count) in enumerate(zip(all_ground_truth,
all_model_predictions,
invalid_number_prediction_counts), 1):
mae = ######### YOUR CODE HERE #############
print("Epoch: %i, MAE: %f, Invalid numbers: %i" % (i, mae, invalid_number_prediction_count))
Explanation: Evaluate results
Because our task is simple and the output is straightforward, we will use the MAE metric to evaluate the trained model across the epochs. Compute the value of the metric for the output from each epoch.
End of explanation |
3,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arrow
Vaex supports Arrow. We will demonstrate vaex+arrow by giving a quick look at a large dataset that does not fit into memory. The NYC taxi dataset for the year 2015 contains about 150 million rows containing information about taxi trips in New York, and is about 23GB in size. You can download it here
Step1: Opens instantly
Opening the file goes instantly, since nothing is being copied to memory. The data is only memory mapped, a technique that will only read the data when needed.
Step2: Quick viz of 146 million rows
As can be seen, this dataset contains 146 million rows.
Using plot, we can generate a quick overview what the data contains. The pickup locations nicely outline Manhattan.
Step3: Data cleansing
Step4: Shallow copies
This filtered dataset did not copy any data (otherwise it would have cost us about ~23GB of RAM). Shallow copies of the data are made instead, and a boolean mask tracks which rows should be used.
Step5: Virtual column
The new column ratio does not do any computation yet; it only stores the expression and does not waste any memory. However, the new (virtual) column can be used in calculations as if it were a normal column.
Step6: Result
Our final result, the percentage of the tip, can be easily calculated for this large dataset, and it did not require any excessive amount of memory.
Interoperability
Since the data lives as Arrow arrays, we can pass them around to other libraries such as pandas, or even pass it to other processes. | Python Code:
!ls -alh /Users/maartenbreddels/datasets/nytaxi/nyc_taxi2015.arrow
import vaex
Explanation: Arrow
Vaex supports Arrow. We will demonstrate vaex+arrow by giving a quick look at a large dataset that does not fit into memory. The NYC taxi dataset for the year 2015 contains about 150 million rows containing information about taxi trips in New York, and is about 23GB in size. You can download it here:
https://docs.vaex.io/en/latest/datasets.html
In case you want to convert it to the arrow format, use the code below:
```python
ds_hdf5 = vaex.open('/Users/maartenbreddels/datasets/nytaxi/nyc_taxi2015.hdf5')
this may take a while to export
ds_hdf5.export('./nyc_taxi2015.arrow')
Also make sure you install vaex-arrow:bash
$ pip install vaex-arrow
```
End of explanation
%time
df = vaex.open('/Users/maartenbreddels/datasets/nytaxi/nyc_taxi2015.arrow')
df
Explanation: Opens instantly
Opening the file goes instantly, since nothing is being copied to memory. The data is only memory mapped, a technique that will only read the data when needed.
End of explanation
df.plot(df.pickup_longitude, df.pickup_latitude, f='log');
df.total_amount.minmax()
Explanation: Quick viz of 146 million rows
As can be seen, this dataset contains 146 million rows.
Using plot, we can generate a quick overview what the data contains. The pickup locations nicely outline Manhattan.
End of explanation
df.plot1d(df.total_amount, shape=100, limits=[0, 100])
# filter the dataset
dff = df[(df.total_amount >= 0) & (df.total_amount < 100)]
Explanation: Data cleansing: outliers
As can be seen from the total_amount columns (how much people payed), this dataset contains outliers. From a quick 1d plot, we can see reasonable ways to filter the data
End of explanation
dff['ratio'] = dff.tip_amount/dff.total_amount
Explanation: Shallow copies
This filtered dataset did not copy any data (otherwise it would have cost us about ~23GB of RAM). Shallow copies of the data are made instead, and a boolean mask tracks which rows should be used.
End of explanation
dff.ratio.mean()
Explanation: Virtual column
The new column ratio does not do any computation yet; it only stores the expression and does not waste any memory. However, the new (virtual) column can be used in calculations as if it were a normal column.
End of explanation
arrow_table = df.to_arrow_table()
arrow_table
# Although you can 'convert' (pass the data) into pandas,
# some memory will be wasted (at least an index will be created by pandas)
# here we just pass a subset of the data
df_pandas = df[:10000].to_pandas_df()
df_pandas
Explanation: Result
Our final result, the percentage of the tip, can be easily calculated for this large dataset, and it did not require any excessive amount of memory.
Interoperability
Since the data lives as Arrow arrays, we can pass them around to other libraries such as pandas, or even pass it to other processes.
End of explanation |
3,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to machine learning with scikit-learn
Lately you will have heard about machine learning, deep learning, reinforcement learning, many more things containing the word learning and, of course, Big Data. With the advances in computing power of recent years and the popularization of high-level languages, we have fully entered the rush to make machines learn. In this class we will see how to use Python's scikit-learn package to build predictive models from our data in a quick and simple way.
First we are going to try a very simple example
Step1: To visualize these images we will have to do a .reshape
Step2: And once we have fitted the model, let's check its predictions using the same training data
Step3: Again we use sklearn.metrics to measure the performance of the algorithm
Step4: We have created two groups and some points overlap, but what would happen if we did not have this visual information? We are going to use a clustering model to group the data
Step5: If we put it all into an interactive function
Step6: Dimensionality reduction
We are going to bring back our digits dataset and try to visualize it in two dimensions, which is known as dimensionality reduction.
And now we project the data using .transform | Python Code:
# X_train, X_test, Y_train, Y_test =
# preserve
X_train.shape, Y_train.shape
# preserve
X_test.shape, Y_test.shape
Explanation: Introduction to machine learning with scikit-learn
Lately you will have heard about machine learning, deep learning, reinforcement learning, many more things containing the word learning and, of course, Big Data. With the advances in computing power of recent years and the popularization of high-level languages, we have fully entered the rush to make machines learn. In this class we will see how to use Python's scikit-learn package to build predictive models from our data in a quick and simple way.
First we are going to try a very simple example: fitting a straight line to some data. This can hardly be called machine learning, but it will help us see how working with scikit-learn looks, how models are trained and how predictions are computed.
First we generate some data distributed along a straight line with a bit of noise:
The process for using scikit-learn is the following:
Split the data into the feature matrix features and the variable to predict y
Select the model
Choose the hyperparameters
Fit or train the model (model.fit)
Predict on new data (model.predict)
<div class="alert alert-info">We have to do this `reshape` to turn our vector into a column matrix. We will rarely have to repeat this step, since in practice we will always have several variables.</div>
To compute the error, the sklearn.metrics module provides several useful functions:
And now we predict on new data:
And that's it! The basics of scikit-learn are here. The next step will be to use different types of models and examine their performance rigorously in order to select the one that works best for our data.
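A minimal sketch of that workflow with a linear model; the names x, y and x_new are placeholders for the noisy data generated above and for some new points:
```python
from sklearn.linear_model import LinearRegression

model = LinearRegression(fit_intercept=True)  # choose the model and its hyperparameters
model.fit(x.reshape(-1, 1), y)                # train on the feature matrix
y_new = model.predict(x_new.reshape(-1, 1))   # predict for new data
```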
A quick introduction to machine learning
In machine learning we have two types of problems:
Supervised learning, when we have labeled data, that is: we know the variable to predict for a certain number of observations. By passing this information to the algorithm, it will be able to predict that variable when it receives new observations. Depending on the nature of the variable to predict, we will in turn have:
Regression, if it is continuous (as in the previous case), or
Classification, if it is discrete or categorical (yes/no, eye color, etc.)
Unsupervised learning, when we do not have labeled data and therefore no a priori information. In this case we will use the algorithms to discover patterns in the data and group them, but we will have to manually inspect the result afterwards and see what meaning we can give to those groups.
Depending on the nature of our problem, scikit-learn provides a wide variety of algorithms we can choose from.
Classification
In scikit-learn we have many classic example datasets available that we can use for practice. One of them is the MNIST dataset, which consists of scanned images of numbers handwritten by US officials. To load it, we import the corresponding function from sklearn.datasets:
We now have the data split into a feature matrix and a prediction vector. In this case, I will have 64 = 8x8 features (one numerical value per pixel of the image) and my variable to predict will be the number itself.
Whenever supervised learning is done, the dataset must be split into a training part and a test part (sometimes there is even an additional split for validation)
End of explanation
# Initialize the model
# Train it
Explanation: To visualize these images we will have to do a .reshape:
Keep in mind that we know which number each image is because we are human and can read them. The computer knows because they are labeled, but what happens when a new image arrives? For that we will have to build a classification model. In this case we will apply logistic regression
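A minimal sketch of what those two placeholder comments could contain, assuming the train/test split created earlier:
```python
from sklearn.linear_model import LogisticRegression

# Initialize the model
logreg = LogisticRegression()
# Train it
logreg.fit(X_train, Y_train)
```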
End of explanation
# Check the results on the test data
Explanation: And once we have fitted the model, let's check its predictions using the same training data:
End of explanation
# preserve
# https://github.com/amueller/scipy-2016-sklearn/blob/master/notebooks/05%20Supervised%20Learning%20-%20Classification.ipynb
from sklearn.datasets import make_blobs
# preserve
features, labels = make_blobs(centers=[[6, 0], [2, -1]], random_state=0)
features.shape
# preserve
plt.scatter(features[:, 0], features[:, 1], c=labels)
Explanation: Again we use sklearn.metrics to measure the performance of the algorithm:
It looks like we got practically all of them right! Later we will come back to this success rate, which could well be misleading. For now, let's display another measure of success, the confusion matrix:
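A minimal sketch, assuming the logreg model from the sketch above and the test split:
```python
from sklearn.metrics import accuracy_score, confusion_matrix

Y_pred = logreg.predict(X_test)
print(accuracy_score(Y_test, Y_pred))    # fraction of correct predictions
print(confusion_matrix(Y_test, Y_pred))  # per-class hits and misses
```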
Clustering and dimensionality reduction
Now that we have seen the two types of supervised problems, let's see how unsupervised problems are handled. First we are going to create two clouds of points using the make_blobs function:
End of explanation
# preserve
xmin, xmax = features[:, 0].min(), features[:, 0].max()
ymin, ymax = features[:, 1].min(), features[:, 1].max()
xx, yy = np.meshgrid(
np.linspace(xmin, xmax),
np.linspace(ymin, ymax)
)
mesh = np.c_[xx.ravel(), yy.ravel()]
mesh
# http://pybonacci.org/2015/01/14/introduccion-a-machine-learning-con-python-parte-1/
Explanation: We have created two groups and some points overlap, but what would happen if we did not have this visual information? We are going to use a clustering model to group the data: in this case KMeans
Notice that by default we have 8 clusters. Let's see what happens:
This time we do not pass the label information to the algorithm when training. In practice, of course, we will not have it.
And now we prepare the code to plot all the regions:
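A minimal KMeans sketch, assuming the features array from make_blobs and the mesh defined above (here with n_clusters=2 instead of the default 8):
```python
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2, random_state=0)
kmeans.fit(features)                        # note: no labels are passed
Z = kmeans.predict(mesh).reshape(xx.shape)  # predicted region for every mesh point
```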
End of explanation
# preserve
Explanation: If we put it all into an interactive function:
End of explanation
# preserve
import pandas as pd
def load_iris_df():
from sklearn.datasets import load_iris
iris = load_iris()
features, labels = iris.data, iris.target
df = pd.DataFrame(features, columns=iris.feature_names)
df["species"] = pd.Categorical.from_codes(iris.target, categories=iris.target_names)
#df = df.replace({'species': {0: iris.target_names[0], 1: iris.target_names[1], 2: iris.target_names[2]}})
return df
iris_df = load_iris_df()
# preserve
iris_df.head()
# preserve
_ = pd.tools.plotting.scatter_matrix(iris_df, c=iris_df["species"].cat.codes, figsize=(10, 10))
Explanation: Dimensionality reduction
We are going to bring back our digits dataset and try to visualize it in two dimensions, which is known as dimensionality reduction.
And now we project the data using .transform:
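A minimal sketch of the fit/transform pair (Isomap is one option; PCA works the same way), assuming features holds the digits feature matrix loaded earlier:
```python
from sklearn.manifold import Isomap

iso = Isomap(n_components=2)
iso.fit(features)                      # learn the 2D embedding
features_2d = iso.transform(features)  # project the data: shape (n_samples, 2)
```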
Exercise
Visualize the flower dataset (load_iris) using the functions you have below. Is there any clear way to separate the three flower species?
Split the dataset into the feature matrix features and the label vector labels. Convert them to NumPy arrays using .as_matrix().
Reduce the dimensionality of the dataset to 2 using sklearn.manifold.Isomap or sklearn.decomposition.PCA and use a clustering algorithm with 3 clusters. Do the resulting clusters resemble the original groups?
Predict the flower type using a classification algorithm. Visualize the confusion matrix. What is the algorithm's accuracy? Is it more accurate for any particular flower type? Does this agree with what you thought in part 1?
End of explanation |
3,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare the read depth and number of strains
This data is the average read depth of each metagenome. The table in read_depth.strains.tsv has the SRA ID, the average read depth across the amplicon region, and the number of strains that we recovered for each of the amplicon regions, A, B, and C.
Step1: Filter the data
Because (0,0) correlates strongly, we filter out any row where the read_depth and the strain count are 0. We have to do this on a per SRR basis.
Step2: Note that we have reduced our matrix from having 11,054 entries with all the zeros to only having 1,397 entries now!
plot the data.
This just provides a quick overview of the data
Step3: Note that this plot is skewed by a few outliers. Lets limit it to anything where read_depth < 1000 and redraw the plot
Step4: When we zoom in, this doesn't look like a strong correlation. Note that there are a lot of data points here compared to the whole data set. In the data set excluding (0,0) we had 1,397 entries, and now we have 1,386 entries, so we only removed 9 values!
Linear regression
What is the correlation between these two data sets. Note that we use the data with all the non-zero's removed (1,397 data points).
As a reminder, the statsmodels OLS uses y ~ x
Step5: Removing the outliers
Remember the nine outliers above? If we use the data set where we have removed them, we can compare the r<sup>2</sup> value for that dataset. | Python Code:
#instantiate our environment
import os
import sys
%matplotlib inline
import pandas as pd
import statsmodels.api as sm
# read the data into a pandas dataframe
df = pd.read_csv("read_depth.strains.tsv", header=0, delimiter="\t")
print("Shape: {}".format(df.shape))
df.head()
Explanation: Compare the read depth and number of strains
This data is the average read depth of each metagenome. The table in read_depth.strains.tsv has the SRA ID, the average read depth across the amplicon region, and the number of strains that we recovered for each of the amplicon regions, A, B, and C.
End of explanation
dfa = df[(df["A_read_depth"] > 0) & (df["A_strains"] > 0)]
dfb = df[(df["B_read_depth"] > 0) & (df["B_strains"] > 0)]
dfc = df[(df["C_read_depth"] > 0) & (df["C_strains"] > 0)]
print("Shape: {}".format(dfa.shape))
dfa.head()
Explanation: Filter the data
Because (0,0) correlates strongly, we filter out any row where the read_depth and the strain count are 0. We have to do this on a per SRR basis.
End of explanation
ax = dfa.plot('A_read_depth', 'A_strains', kind='scatter')
ax.set(ylabel="# strains", xlabel="read depth")
Explanation: Note that we have reduced our matrix from having 11,054 entries with all the zeros to only having 1,397 entries now!
plot the data.
This just provides a quick overview of the data
End of explanation
dfas = dfa[dfa['A_read_depth'] < 1000]
print("Shape: {}".format(dfas.shape))
ax = dfas.plot('A_read_depth', 'A_strains', kind='scatter')
ax.set(ylabel="# strains", xlabel="read depth")
Explanation: Note that this plot is skewed by a few outliers. Let's limit it to anything where read_depth < 1000 and redraw the plot
End of explanation
model = sm.OLS(dfa['A_strains'], dfa['A_read_depth']).fit()
predictions = model.predict(dfa['A_read_depth'])
model.summary()
Explanation: When we zoom in, this doesn't look like a strong correlation. Note that there are a lot of data points here compared to the whole data set. In the data set excluding (0,0) we had 1,397 entries, and now we have 1,386 entries, so we only removed 9 values!
Linear regression
What is the correlation between these two datasets? Note that we use the data with all the (0,0) rows removed (1,397 data points).
As a reminder, the statsmodels OLS uses y ~ x
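Note that calling sm.OLS(y, x) with a bare column fits a model without an intercept (a regression through the origin). If you also want an intercept term, statsmodels expects you to add a constant column explicitly; a hedged sketch:
```python
# Variant with an intercept term (not used above)
X_with_const = sm.add_constant(dfa['A_read_depth'])
model_c = sm.OLS(dfa['A_strains'], X_with_const).fit()
print(model_c.params)  # const (intercept) and slope
```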
End of explanation
model = sm.OLS(dfas['A_strains'], dfas['A_read_depth']).fit()
predictions = model.predict(dfas['A_read_depth'])
model.summary()
Explanation: Removing the outliers
Remember the nine outliers above? If we use the data set where we have removed them, we can compare the r<sup>2</sup> value for that dataset.
End of explanation |
3,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create one column as a function of two columns
Step2: Create two columns as a function of one column | Python Code:
# Import modules
import pandas as pd
# Example dataframe
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'],
'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])
df
Explanation: Title: Make New Columns Using Functions
Slug: pandas_make_new_columns_using_functions
Summary: Make New Columns Using Functions
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
End of explanation
# Create a function that takes two inputs, pre and post
def pre_post_difference(pre, post):
# returns the difference between post and pre
return post - pre
# Create a variable that is the output of the function
df['score_change'] = pre_post_difference(df['preTestScore'], df['postTestScore'])
# View the dataframe
df
Explanation: Create one column as a function of two columns
End of explanation
# Create a function that takes one input, x
def score_multipler_2x_and_3x(x):
# returns two things, x multiplied by 2 and x multiplied by 3
return x*2, x*3
# Create two new variables that take the two outputs of the function
df['post_score_x2'], df['post_score_x3'] = zip(*df['postTestScore'].map(score_multipler_2x_and_3x))
df
Explanation: Create two columns as a function of one column
End of explanation |
3,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Today
Step1: Q. What should this yield?
Step2: Q. And this?
Step3: Q. So, what will this print out?
Step4: I'm smelling a violation of the DRY principle!
How can we improve? Another function
Step5: Ahh, better. ;)
The ternary operator for if-else branching
Ternary operators are statements with 3 arguments, but usually this is the only one per language, so it's often called THE ternary operator
Step6: Q. What will this do?
Step7: Using break to end a while loop
a while loop with a limit on the maximum number of allowed iterations.
Step8: Notice the new reserved words
Step9: Gold case for while loop
The previous code cell is not terribly useful, but consider an approximation of the sine curve.
To make this more interesting, let's calculate sine to a particular accuracy (AKA tolerance).
$$\sin(x) \approx \sum_{n=0}^N \frac{(-1)^n}{(2n + 1)!} x^{2n + 1}$$
Step10: By adding the "if" statement, we can compare the previous total with the current total. When the difference between the previous and current totals is less than the tolerance, the code "breaks" and the while loop is stopped.
if statements in list comprehensions
We can add an if condition to a list comprehension. We would do this when we wanted to limit or filter the values that are put into the resulting list.
Step11: User Input
Until now, we have provided the information necessary for a program to run
by typing it into our notebook cells.
This can be inconvenient, especially if we want to
Step12: Why bother?
Anytime we supply input, whether it be in an iPython session (via input), the Linux terminal, or from a file (later this semester), that input will be interpreted as a string.
Let's try inputting some values for the escape velocity equation.
$$v=\sqrt{\frac{2GM}{r}}$$
Equation for escape velocity in $\frac{meters}{second}$ where $G$ is the gravitational constant, $M$ is the mass of the planet, and $r$ is the radius of the planet.
Step13: What will this do? | Python Code:
x = 6.28 # set the variable x equal to tau, the real circular constant
# Note this is the only correct way to check equality on floats.
# Subtract the expected value and compare to your allowed error:
eps = 1e-10 # my deviation I allow until I call them 'equal'
if(abs(x - 6.28) < eps): # if x is (approximately) equal to tau
print("2*PI. Mmmm, lots of PI.") # print something
else: # else
print("No PI. Alas.") # print something else
from math import exp
def func1(value):
if(0 <= value <= 1): # note the math-like syntax here!
result = exp(value) # executed if value is between 0 and 1
else:
result = -10 # executed otherwise (if is false)
return result # Return the result
Explanation: Today: Branching and User Input
Branching (If/Else blocks) -- Section 3.2
"The flow of computer programs often needs to branch.
That is, if a condition is met, we do one thing, and if not, we do another thing." -p.104 of your book
This is referred to as branching into a block of statements.
==== Basic if ====
python
if <condition>:
<do something>
==== Add an else ====
python
if <condition>:
<do something>
else:
<do something else>
==== Add an elif ====
python
if <condition>:
<do something>
elif <a different condition>:
<do something else>
else:
<do something else>
End of explanation
result = func1(1)
result
Explanation: Q. What should this yield?
End of explanation
func1(1.5)
def func2(value):
if 0 <= value < 1:
result = exp(value) # executed if value is between 0 (inclusive) and 1 (exclusive)
elif 1 <= value <= 2:
result = -100 # executed if value is between 1 and 2 (inclusive)
else:
result = 0
print('value is not between 0 and 2')
return value, result
Explanation: Q. And this?
End of explanation
print("{}\t{}".format(*func2(0.5)))
print("{}\t{}".format(*func2(1)))
print("{}\t{}".format(*func2(1.5)))
print("{}\t{}".format(*func2(20)))
Explanation: Q. So, what will this print out?
End of explanation
def calc_and_print(value):
print("{}\t{}".format(*func2(value)))
for val in [0.5, 1, 1.5, 20]:
calc_and_print(val)
Explanation: I'm smelling a violation of the DRY principle!
How can we improve? Another function:
End of explanation
x = 3
'x equals 2' if x == 2 else 'x does not equal 2'
import math
def squareRoot(value):
return math.sqrt(value) if value >= 0 else\
'Imaginary numbers not supported!'
Explanation: Ahh, better. ;)
The ternary operator for if-else branching
Ternary operators are statements with 3 arguments, but usually this is the only one per language, so it's often called THE ternary operator:
End of explanation
print(squareRoot(2.0))
print(squareRoot(-2.0)) # my function protects against error
print(math.sqrt(-2)) # the original not
Explanation: Q. What will this do?
End of explanation
count = 0
while count < 100:
count += 1
count
count = 0
while True: # obviously always True. So the while block needs to interrupt
if count == 100:
break # Immediately jump out of the while loop
else:
        pass # this doesn't do anything; it's just a placeholder for the else branch
count += 1
count
Explanation: Using break to end a while loop
a while loop with a limit on the maximum number of allowed iterations.
End of explanation
i = 0
while True:
if i == 100:
break
i += 1
i
Explanation: Notice the new reserved words: "break" and "pass"
In this example, the else statement is optional, i.e. we could just do:
End of explanation
from math import pi, factorial, sin
tau = 2*pi # read http://tauday.com ! ;)
x = tau / 3 # Evaluate at x = tau/3 (just a number I chose)
mathSin = sin(x) # Use math.sin to calculate sin(x)
prevTotal = 1e5 # The previous value of the sine series.
# Just something big to start with; will be overwritten first
# time through loop.
tolerance = 0.01
total = 0.0 # The summation's running total
n = 0 # The current summation term number
print("%5s %12s %12s" % ("count", "approx", "math.sin(x)"))
while True:
term = (-1)**n * x**(2*n + 1) / factorial(2*n + 1) # Calculate the current term
total += term # Add the term to the running total
# Print the current term number, running total, and math.sin value
print('%5i %12.8g %12.8g' % (n, total, mathSin))
# If the diff between prevTotal and total is less than the tolerance, stop the loop
if abs(prevTotal - total) < tolerance:
break
prevTotal = total # Update the previous total value
n += 1 # Increment the summation count
Explanation: Gold case for while loop
The previous code cell is not terribly useful, but consider an approximation of the sine curve.
To make this more interesting, let's calculate sine to a particular accuracy (AKA tolerance).
$$\sin(x) \approx \sum_{n=0}^N \frac{(-1)^n}{(2n + 1)!} x^{2n + 1}$$
End of explanation
divisor = 3.25 # Number to divide by
# Create a list of numbers between 1 and 100 that are divisible by "divisor"
numList = [num for num in range(1, 101) if num % divisor == 0]
numList
nameList = ['Samual', 'Charlie', 'Zarah', 'Robert', 'Liangyu', 'Jeffery', 'Brian', 'Aidan', 'Melissa', 'Gerardo', \
'Emily', 'Parker', 'Amanda', 'Kristine', 'Tarek', 'Christian', 'Ian', 'Alex', 'Nathaniel', \
'Samantha', 'Pengqi']
searchstr = 'ar'
# when reading list comprehensions, always make a mental break
# in front of the `for` keyword
filterList = [name for name in nameList if searchstr in name]
print("Zarah".find(text))
filterList
Explanation: By adding the "if" statement, we can compare the previous total with the current total. When the difference between the previous and current totals is less than the tolerance, the code "breaks" and the while loop is stopped.
if statements in list comprehensions
We can add an if condition to a list comprehension. We would do this when we wanted to limit or filter the values that are put into the resulting list.
End of explanation
x = input('Enter a float: ')
x
print(type(x))
x = float(x)
Explanation: User Input
Until now, we have provided the information necessary for a program to run
by typing it into our notebook cells.
This can be inconvenient, especially if we want to:
Write an interactive program, or
If the amount of information the program requires is huge.
(e.g., 10-body system vs. 3-body system)
Today, we'll cover the first case: writing an interactive program.
Later, we'll discuss #2, where we must supply a lot of information
to a program, and it's best to read that information from a file rather than
supplying it "by hand."
input
The "input" function allows you (a user) to supply new information to
Python code as it runs.
This can be useful for:
* Supplying parameters (e.g., a blackbody temperature).
* Changing how the code branches (e.g., what if user supplies
bogus input? More discussion about this coming up.).
End of explanation
from math import sqrt
def escapeVel(mass, radius): # Define the function with 2 input variables
G = 6.67e-11 # Gravitational constant
velocity = sqrt(2*G*mass / radius) # Escape velocity equation
return velocity # Return the escape velocity
Explanation: Why bother?
Anytime we supply input, whether it be in an iPython session (via input), the Linux terminal, or from a file (later this semester), that input will be interpreted as a string.
Let's try inputting some values for the escape velocity equation.
$$v=\sqrt{\frac{2GM}{r}}$$
Equation for escape velocity in $\frac{meters}{second}$ where $G$ is the gravitational constant, $M$ is the mass of the planet, and $r$ is the radius of the planet.
End of explanation
maxCount = 100
count = 0
while True and count < maxCount:
mass = input("Please enter the planet's mass in kg: ")
radius = input("Please enter the planet's radius in m: ")
if mass != "" and radius != "":
print("The escape velocity is: %.1f m/s" % escapeVel(float(mass), float(radius)))
else:
print("Ending program!")
break
print()
count += 1
Explanation: What will this do?
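As a hedged sketch (not part of the original lesson), one way to guard against the bogus input mentioned earlier is to attempt the float conversion and branch on failure:
```python
user_value = input("Please enter the planet's mass in kg: ")
try:
    mass = float(user_value)   # works for inputs like "5.97e24"
    print("Got a mass of %g kg" % mass)
except ValueError:             # raised when the input is not numeric
    print("'%s' is not a number -- please try again." % user_value)
```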
End of explanation |
3,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Python the essentials
Step1: Python is a calculator
Step2: also logical operators
Step3: Variable assignment
Step4: More information on print format
Step5: <div class="alert alert-warning">
<b>R comparison
Step6: Loading with defined short name (community agreement)
Step10: Loading functions from any file/module/package
Step11: <div class="alert alert-info">
<b>REMEMBER</b>
Step12: integers
Step13: booleans
Step14: <div class="alert alert-warning">
<b>R comparison
Step15: Containers
Strings
Step16: A string is a collection of characters...
Step17: Lists
A list can contain mixed data types (character, float, int, other lists,...)
Step18: <div class="alert alert-info">
<b>REMEMBER</b>
Step19: ADVANCED users area
Step20: list comprehensions are basically a short-handed version of a for-loop inside a list. Hence, the previous action is similar to
Step21: Another example checks the methods available for the list data type
Step22: <div class="alert alert-success">
<b>EXERCISE</b>
Step23: <div class="alert alert-success">
<b>EXERCISE</b>
Step24: <div class="alert alert-warning">
<b>R comparison
Step25: <div class="alert alert-warning">
<b>R comparison
Step26: <div class="alert alert-info">
<b>REMEMBER</b>
Step27: Accessing container values
Step28: <div class="alert alert-info">
<b>REMEMBER</b>
Step29: Select from...till
Step30: Select, counting backward
Step31: <div class="alert alert-warning">
<b>R comparison
Step32: From the first element until a given index
Step33: Dictionaries
Step34: Tuples
Step35: <div class="alert alert-info">
<b>REMEMBER</b>
Step36: Control flows (optional)
for-loop
Step37: <div class="alert alert-danger">
**Indentation** is VERY IMPORTANT in Python. Note that the second line in the example above is indented</li>
</div>
Step38: <div class="alert alert-success">
<b>EXERCISE</b>
Step39: <div class="alert alert-info">
<b>REMEMBER</b>
Step40: if statement
Step41: Functions
We've been using functions the whole time...
Step42: <div class="alert alert-danger">
It is all about calling a **method/function** on an **object**!
</div>
Step44: <div class="alert alert-info">
<b>REMEMBER</b>
Step45: Setup of a function
Step46: <div class="alert alert-success">
<b>EXERCISE</b>
Step47: Anonymous functions (lambda) | Python Code:
print("Hello INBO_course!") # python 3(!)
Explanation: <p><font size="6"><b>Python the essentials: A minimal introduction</b></font></p>
Introduction to GIS scripting
May, 2017
© 2017, Stijn Van Hoey (stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
First steps
the obligatory...
End of explanation
4*5
3**2
(3 + 4)/2, 3 + 4/2,
21//5, 21%5 # floor division, modulo
Explanation: Python is a calculator
End of explanation
3 > 4, 3 != 4, 3 == 4
Explanation: also logical operators:
End of explanation
my_variable_name = 'DS_course'
my_variable_name
name, age = 'John', 30
print('The age of {} is {:d}'.format(name, age))
Explanation: Variable assignment
End of explanation
import os
Explanation: More information on print format: https://pyformat.info/
<div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
<li>Use relevant variable names, e.g. `name` instead of `n`
<li>Keep variable names lowercase, with underscore for clarity, e.g. `darwin_core` instead of `DarwinCore`
</div>
Loading functionalities
End of explanation
os.listdir()
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>You would <b>load</b> a <b>library</b> (`library("ggplot2")`) instead of <b>importing</b> a package</p>
</div>
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Loading with defined short name (community agreement)
End of explanation
%%file rehears1.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
    """Dummy function to print the string it"""
print('it')
import rehears1
rehears1.print_it()
%%file rehears2.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
    """Dummy function to print the string it"""
print('it')
def print_custom(my_input):
    """Dummy function to print the string that"""
print(my_input)
from rehears2 import print_it, print_custom
print_custom('DS_course')
Explanation: Loading functions from any file/module/package:
End of explanation
a_float = 5.
type(a_float)
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
Importing **packages** is always the first thing you do in Python, since they provide the functionality you will work with!
</div>
Different options are available (a short sketch follows below):
<span style="color:green">import <i>package-name</i></span> <br>importing all functionalities as such
<span style="color:green">from <i>package-name</i> import <i>specific function</i></span><br>importing a specific function or subset of the package
<span style="color:green">import <i>package-name</i> as <i>short-package-name</i></span><br>Very good way to keep a good insight in where you use what package
<div class="alert alert-danger">
<b>DON'T</b>: `from os import *`. Just don't!
</div>
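A small sketch of the three import styles (the module names here are just common examples):
```python
import math                 # full module: call as math.sqrt(2)
from math import sqrt       # single function: call as sqrt(2)
import numpy as np          # short alias: call as np.sqrt(2)
```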
Datatypes
Numerical
floats
End of explanation
an_integer = 4
type(an_integer)
Explanation: integers
End of explanation
a_boolean = True
a_boolean
type(a_boolean)
3 > 4 # results in boolean
Explanation: booleans
End of explanation
print(False) # test yourself with FALSE
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>Booleans are written as <b>False</b> or <b>True</b>, NOT as <b>FALSE/TRUE</b></p>
</div>
End of explanation
a_string = "abcde"
a_string
Explanation: Containers
Strings
End of explanation
a_string.capitalize(), a_string.upper(), a_string.endswith('f') # Check the other available methods for a_string yourself!
a_string.upper().replace('B', 'A')
a_string + a_string
a_string * 5
Explanation: A string is a collection of characters...
End of explanation
a_list = [1, 'a', 3, 4]
a_list
another_list = [1, 'a', 8.2, 4, ['z', 'y']]
another_list
a_list.append(8.2)
a_list
a_list.reverse()
a_list
Explanation: Lists
A list can contain mixed data types (character, float, int, other lists,...)
End of explanation
a_list + ['b', 5]
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
The list is updated <b>in-place</b>; a_list.reverse() does not return anything, it updates the list
</div>
End of explanation
[el*2 for el in a_list] # list comprehensions...a short for-loop
Explanation: ADVANCED users area: list comprehensions
End of explanation
new_list = []
for element in a_list:
new_list.append(element*2)
print(new_list)
Explanation: list comprehensions are basically a short-handed version of a for-loop inside a list. Hence, the previous action is similar to:
End of explanation
[el for el in dir(list) if not el[0] == '_']
Explanation: Another example checks the methods available for the list data type:
End of explanation
[el for el in dir(list) if not el.startswith('_')]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Rewrite the previous list comprehension by using a builtin string method to test if the element starts with an underscore</li>
</ul>
</div>
End of explanation
sentence = "the quick brown fox jumps over the lazy dog"
#split in words and get word lengths
[len(word) for word in sentence.split()]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Given the sentence `the quick brown fox jumps over the lazy dog`, split the sentence in words and put all the word-lengths in a list.</li>
</ul>
</div>
End of explanation
a_dict = {'a': 1, 'b': 2}
a_dict['c'] = 3
a_dict['a'] = 5
a_dict
a_dict.keys(), a_dict.values(), a_dict.items()
an_empty_dic = dict() # or just {}
an_empty_dic
example_dict = {"timeseries": [2, 5, 3],
"parameter": 21.3,
"scenario": "a"}
example_dict
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>R also has lists as data type, e.g. `list(c(2, 5, 3), 21.3, "a")`</p>
</div>
Dictionary
A dictionary is basically an efficient table that maps keys to values. It is an unordered container
It can be used to conveniently store and retrieve values associated with a name
End of explanation
a_tuple = (1, 2, 4)
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>R also has a dictionary like data type, e.g. </p>
</div>
```R
example_dict <- list(c(2,5,3),21.3,"a")
names(example_dict) <- c("timeseries", "parameter", "scenario")
example_dict
$timeseries
[1] 2 5 3
$parameter
[1] 21.3
$scenario
[1] "a"
```
Tuple
End of explanation
collect = a_list, a_dict
type(collect)
serie_of_numbers = 3, 4, 5
# Using tuples on the left-hand side of assignment allows you to extract fields
a, b, c = serie_of_numbers
print(c, b, a)
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
The type of brackets - (), [], {} - to use depends from the data type you want to create!
<li> [] -> list
<li> () -> tuple
<li> {} -> dictionary
</div>
End of explanation
grades = [88, 72, 93, 94]
from IPython.display import SVG, display
display(SVG("../img/slicing-indexing.svg"))
grades[2]
Explanation: Accessing container values
End of explanation
from IPython.display import SVG, display
display(SVG("../img/slicing-slicing.svg"))
grades[1:3]
a_list = [1, 'a', 8.2, 4]
a_list[0], a_list[2]
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> Python starts counting from <b>0</b> !
</ul>
</div>
End of explanation
a_string = "abcde"
a_string
a_string[2:4]
Explanation: Select from...till
End of explanation
a_list[-2]
Explanation: Select, counting backward:
End of explanation
a_list = [0, 1, 2, 3]
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>The `-` symbol in R has a completely different meaning: `NOT`</p>
</div>
```R
test <- c(1, 2, 3, 4, 5, 6)
test[-2]
[1] 1 3 4 5 6
```
End of explanation
a_list[:3]
a_list[::2]
Explanation: From the first element until a given index:
End of explanation
a_dict = {'a': 1, 'b': 2}
a_dict['a']
Explanation: Dictionaries
End of explanation
a_tuple = (1, 2, 4)
a_tuple[1]
Explanation: Tuples
End of explanation
a_list
a_list[2] = 10 # element 2 changed -- mutable
a_list
a_tuple[1] = 10 # cfr. a_string -- immutable: this raises a TypeError
a_string[3] = 'q' # strings are immutable as well, so this raises a TypeError too
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> [] for accessing elements
</ul>
</div>
Note that L[start:stop] contains the elements with indices i such that start <= i < stop
(i ranging from start to stop-1). Therefore, L[start:stop] has (stop-start) elements.
Slicing syntax: L[start:stop:stride]
all slicing parameters are optional
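A quick sketch of the stride parameter, using a throwaway list:
```python
numbers = [0, 1, 2, 3, 4, 5]
numbers[1:5:2]   # start at 1, stop before 5, step 2 -> [1, 3]
numbers[::-1]    # a negative stride walks backwards -> [5, 4, 3, 2, 1, 0]
```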
Assigning new values to items -> mutable vs immutable
End of explanation
for i in [1, 2, 3, 4]:
print(i)
Explanation: Control flows (optional)
for-loop
End of explanation
for i in a_list: # anything that is a collection/container can be looped
print(i)
Explanation: <div class="alert alert-danger">
**Indentation** is VERY IMPORTANT in Python. Note that the second line in the example above is indented</li>
</div>
End of explanation
for char in 'Hello DS':
print(char)
for i in a_dict: # items, keys, values
print(i)
for j, key in enumerate(a_dict.keys()):
print(j, key)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Loop through the characters of the string `Hello DS` and print each character separately within the loop</li>
</ul>
</div>
End of explanation
b = 7
while b < 10:
b+=1
print(b)
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
When you need an iterator to count, just use `enumerate`. You mostly do not need the manual `i = 0; ...; i = i + 1` pattern.
<br>
Check [itertools](http://pymotw.com/2/itertools/) as well...
</div>
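A minimal sketch of `enumerate` (with its optional start argument) and of `zip`, which walks two containers in parallel:
```python
names = ['John', 'Marie']
ages = [30, 28]
for position, name in enumerate(names, start=1):
    print(position, name)
for name, age in zip(names, ages):
    print(name, age)
```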
while
End of explanation
if 'a' in a_dict:
print('a is in!')
if 3 > 4:
print('This is valid')
testvalue = False # 0, 1, None, False, 4 > 3
if testvalue:
print('valid')
else:
raise Exception("Not valid!")
myvalue = 3
if isinstance(myvalue, str):
print('this is a string')
elif isinstance(myvalue, float):
print('this is a float')
elif isinstance(myvalue, list):
print('this is a list')
else:
print('no idea actually')
Explanation: if statement
End of explanation
len(a_list)
Explanation: Functions
We've been using functions the whole time...
End of explanation
a_list.reverse()
a_list
Explanation: <div class="alert alert-danger">
It is all about calling a **method/function** on an **object**!
</div>
End of explanation
def custom_sum(a, b, verbose=False):
    """custom summation function

    Parameters
    ----------
    a : number
        first number to sum
    b : number
        second number to sum
    verbose : boolean
        require additional information (True) or not (False)

    Returns
    -------
    my_sum : number
        sum of the provided two input elements
    """
if verbose:
print('print a lot of information to the user')
my_sum = a + b
return my_sum
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
Getting an overview of the available methods on the variable (i.e. object):
<img src="../img/tabbutton.jpg"></img>
</div>
Defining a function:
End of explanation
custom_sum(2, 3, verbose=False) # also try passing a list like [3] or a string like '4' to see what happens
Explanation: Setup of a function:
definition starts with def
function body is indented
return keyword precedes returned value
<div class="alert alert-danger">
**Indentation** is VERY IMPORTANT in Python. Note that the second line in the example above is indented</li>
</div>
End of explanation
def f1():
print('this is function 1 speaking...')
def f2():
print('this is function 2 speaking...')
def function_of_functions(inputfunction):
return inputfunction()
function_of_functions(f1)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Try **SHIFT-TAB** combination to read your own documentation!</li>
</ul>
</div>
<div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
() for calling functions!
</div>
ADVANCED users area:
Functions are objects as well... (!)
End of explanation
add_two = (lambda x: x + 2)
add_two(10)
Explanation: Anonymous functions (lambda)
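A common use of lambda, sketched here, is as a throwaway key function for sorting:
```python
words = ['kiwi', 'fig', 'banana']
sorted(words, key=lambda word: len(word))   # shortest word first -> ['fig', 'kiwi', 'banana']
```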
End of explanation |
3,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom generators
Step1: Independent field generators
At its most basic, a custom generator provides simply a convenient way of grouping other generators together in a single namespace.
Step2: Simple dependency between field generators
Step3: Complex dependency between field generators
Step4: Custom generators can have complex dependencies between their field generators. For example, in Quux1Generator below the field generator bb depends on ll (and thus indirectly also on aa) and nn.
Step5: We can get the same output for bb without explicitly needing to define the input generators.
Step6: Let's check that both g1 and g2 really produce the same elements in column bb.
Step7: Field generators defined in the __init__() method
It is possible to define field generators in the __init__() method of a custom generator. Note that you can use the __fields__ attribute to easily define the order in which fields should be output in generated items. | Python Code:
import tohu
from tohu.v6.primitive_generators import *
from tohu.v6.derived_generators import *
from tohu.v6.generator_dispatch import *
from tohu.v6.custom_generator import *
from tohu.v6.utils import print_generated_sequence, make_dummy_tuples
#tohu.v6.logging.logger.setLevel('DEBUG')
from pandas.util.testing import assert_frame_equal, assert_series_equal
print(f'Tohu version: {tohu.__version__}')
Explanation: Custom generators
End of explanation
class QuuxGenerator(CustomGenerator):
__fields__ = ["dd", "bb", "cc"]
aa = Integer(1, 7)
bb = HashDigest(length=8)
cc = FakerGenerator(method="name")
dd = Integer(100, 200)
#__fields__ = ['aa', 'cc'] # only these will be exported
g = QuuxGenerator()
print(f"Field names: {g.field_names}")
# NBVAL_IGNORE_OUTPUT
print(g.ns_gen_templates.to_str())
# NBVAL_IGNORE_OUTPUT
print(g.ns_gens.to_str())
print_generated_sequence(g, num=5, sep='\n', seed=12345)
Explanation: Independent field generators
At its most basic, a custom generator provides simply a convenient way of grouping other generators together in a single namespace.
End of explanation
chars = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
n_vals = Integer(1, 5)
g = SelectMultiple(chars, num=n_vals)
n_vals.reset(seed=11111)
g.reset(seed=99999)
print_generated_sequence(g, num=10, sep='\n')
class QuuxGenerator(CustomGenerator):
n_vals = Integer(1, 5)
vals = SelectMultiple(chars, num=n_vals)
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
Explanation: Simple dependency between field generators
End of explanation
mapping = {
'A': ['a', 'aa', 'aaa', 'aaaa', 'aaaaa'],
'B': ['b', 'bb', 'bbb', 'bbbb', 'bbbbb'],
'C': ['c', 'cc', 'ccc', 'cccc', 'ccccc'],
'D': ['d', 'dd', 'ddd', 'dddd', 'ddddd'],
'E': ['e', 'ee', 'eee', 'eeee', 'eeeee'],
'F': ['f', 'ff', 'fff', 'ffff', 'fffff'],
'G': ['g', 'gg', 'ggg', 'gggg', 'ggggg'],
}
Explanation: Complex dependency between field generators
End of explanation
class Quux1Generator(CustomGenerator):
aa = SelectOne(['A', 'B', 'C', 'D', 'E', 'F', 'G'])
ll = Lookup(key=aa, mapping=mapping)
nn = Integer(1, 5)
bb = SelectMultiple(ll, num=nn)
g1 = Quux1Generator()
print_generated_sequence(g1, num=5, sep='\n', seed=99999)
Explanation: Custom generators can have complex dependencies between their field generators. For example, in Quux1Generator below the field generator bb depends on ll (and thus indirectly also on aa) and nn.
End of explanation
class Quux2Generator(CustomGenerator):
bb = SelectMultiple(Lookup(SelectOne(['A', 'B', 'C', 'D', 'E', 'F', 'G']), mapping), num=Integer(1, 5))
g2 = Quux2Generator()
print_generated_sequence(g2, num=5, sep='\n', seed=99999)
Explanation: We can get the same output for bb without explicitly needing to define the input generators.
End of explanation
df1 = g1.generate(num=20, seed=99999).to_df()
df2 = g2.generate(num=20, seed=99999).to_df()
assert_series_equal(df1["bb"], df2["bb"])
Explanation: Let's check that both g1 and g2 really produce the same elements in column bb.
End of explanation
class QuuxGenerator(CustomGenerator):
__fields__ = ['aa', 'bb', 'cc'] # define the order of fields in generated items
cc = HashDigest(length=8)
aa = Integer(100, 200)
def __init__(self, method):
self.bb = FakerGenerator(method=method)
g = QuuxGenerator(method="first_name")
print_generated_sequence(g, num=10, seed=12345, sep='\n')
Explanation: Field generators defined in the __init__() method
It is possible to define field generators in the __init__() method of a custom generator. Note that you can use the __fields__ attribute to easily define the order in which fields should be output in generated items.
End of explanation |
3,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
long-short-portfolio
On the first trading day of every month, rebalance portfolio to given percentages. One of the positions is a short position.
Step1: Define Portfolios
Note
Step2: Some global data | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots.
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: long-short-portfolio
On the first trading day of every month, rebalance portfolio to given percentages. One of the positions is a short position.
End of explanation
portfolio_option = {'SPY': 0.50, 'TLT': 0.30, 'GLD': 0.10, 'TLT_SHRT': 0.10}
directions = {'SPY': pf.Direction.LONG, 'TLT': pf.Direction.LONG,
'GLD': pf.Direction.LONG, 'TLT_SHRT' : pf.Direction.SHORT}
Explanation: Define Portfolios
Note: By using an underscore, we can use a symbol multiple times in a portfolio under a different name. This is useful when you want to have a short and long position at the same time.
End of explanation
symbols = list(portfolio_option.keys())
capital = 10000
start = datetime.datetime(1900, 1, 1)
end = datetime.datetime.now()
options = {
'use_adj' : True,
'use_cache' : True,
}
options
# Fetch timeseries
portfolio = pf.Portfolio()
ts = portfolio.fetch_timeseries(symbols, start, end, fields=['close'],
use_cache=options['use_cache'], use_adj=options['use_adj'])
# Add calendar columns
ts = portfolio.calendar(ts)
# Finalize timeseries
ts, start = portfolio.finalize_timeseries(ts, start)
# Init trade logs
portfolio.init_trade_logs(ts)
pf.TradeLog.cash = capital
# Trading algorithm
for i, row in enumerate(ts.itertuples()):
date = row.Index.to_pydatetime()
end_flag = pf.is_last_row(ts, i)
# Rebalance on the first trading day of each month
if row.first_dotm or end_flag:
#portfolio.print_holdings(date, row)
# If last row, then zero out all weights. Otherwise use portfolio_option weights.
weights = portfolio_option if not end_flag else pf.set_dict_values(portfolio_option, 0)
# Get closing prices for all symbols
p = portfolio.get_prices(row, fields=['close'])
prices = {symbol:p[symbol]['close'] for symbol in portfolio.symbols}
# Adjust weights of all symbols in portfolio
portfolio.adjust_percents(date, prices, weights, row, directions)
# Record daily balance.
portfolio.record_daily_balance(date, row)
# Get logs
rlog, tlog, dbal = portfolio.get_logs()
rlog.head(10)
tlog.tail(100)
dbal.tail()
stats = pf.stats(ts, tlog, dbal, capital)
pf.print_full(stats)
totals = portfolio.performance_per_symbol(portfolio_option)
totals
benchmark = pf.Benchmark('SPY', capital, start, end, use_adj=True)
benchmark.run()
pf.plot_equity_curve(dbal, benchmark=benchmark.dbal)
df = pf.summary(stats, benchmark.stats, metrics=pf.currency_metrics)
df
df = pf.plot_bar_graph(stats, benchmark.stats)
df
Explanation: Some global data
End of explanation |
3,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show the evoked data
Step4: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. we expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1. | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
Explanation: Set parameters
End of explanation
method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None, rank=None,
method_params=method_params)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
Explanation: Compute covariance using automated regularization
End of explanation
evoked = epochs.average()
evoked.plot(time_unit='s') # plot evoked response
Explanation: Show the evoked data:
End of explanation
evoked.plot_white(noise_covs, time_unit='s')
Explanation: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. we expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1.
End of explanation |
3,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python
Step1: Load and filter data, set up epochs
Step2: Visualize fields on MEG helmet
Step3: Look at the whitened evoked data
Step4: Compute forward model
Step5: Compute inverse solution | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import spm_face
from mne.preprocessing import ICA, create_eog_epochs
from mne import io, combine_evoked
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = spm_face.data_path()
subjects_dir = data_path / 'subjects'
spm_path = data_path / 'MEG' / 'spm'
Explanation: From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python:
- artifact removal
- averaging Epochs
- forward model computation
- source reconstruction using dSPM on the contrast : "faces - scrambled"
<div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
fast machine it can take several minutes to complete.</p></div>
End of explanation
raw_fname = spm_path / 'SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run
# Here to save memory and time we'll downsample heavily -- this is not
# advised for real data as it can effectively jitter events!
raw.resample(120., npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 30, method='fir', fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
# plot the events to get an idea of the paradigm
mne.viz.plot_events(events, raw.info['sfreq'])
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.6
baseline = None # no baseline as high-pass is applied
reject = dict(mag=5e-12)
epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
# Fit ICA, find and remove major artifacts
ica = ICA(n_components=0.95, max_iter='auto', random_state=0)
ica.fit(raw, decim=1, reject=reject)
# compute correlation scores, get bad indices sorted by score
eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)
eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')
ica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on
ica.plot_components(eog_inds) # view topographic sensitivity of components
ica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar
ica.plot_overlay(eog_epochs.average()) # inspect artifact removal
ica.apply(epochs) # clean data, default in place
evoked = [epochs[k].average() for k in event_ids]
contrast = combine_evoked(evoked, weights=[-1, 1]) # Faces - scrambled
evoked.append(contrast)
for e in evoked:
e.plot(ylim=dict(mag=[-400, 400]))
plt.show()
# estimate noise covarariance
noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',
rank=None)
Explanation: Load and filter data, set up epochs
End of explanation
# The transformation here was aligned using the dig-montage. It's included in
# the spm_faces dataset and is named SPM_dig_montage.fif.
trans_fname = spm_path / 'SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
maps = mne.make_field_map(evoked[0], trans_fname, subject='spm',
subjects_dir=subjects_dir, n_jobs=1)
evoked[0].plot_field(maps, time=0.170)
Explanation: Visualize fields on MEG helmet
End of explanation
evoked[0].plot_white(noise_cov)
Explanation: Look at the whitened evoked data
End of explanation
src = subjects_dir / 'spm' / 'bem' / 'spm-oct-6-src.fif'
bem = subjects_dir / 'spm' / 'bem' / 'spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)
Explanation: Compute forward model
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,
loose=0.2, depth=0.8)
# Compute inverse solution on contrast
stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)
# stc.save('spm_%s_dSPM_inverse' % contrast.comment)
# Plot contrast in 3D with mne.viz.Brain if available
brain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,
views=['ven'], clim={'kind': 'value', 'lims': [3., 6., 9.]})
# brain.save_image('dSPM_map.png')
Explanation: Compute inverse solution
End of explanation |
3,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: Example 1
Step2: Example 2
Step3: Example 3
Step4: Example 4
Step5: Example 5
Step6: Example 6 | Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import healpy as hp
from NPTFit import create_mask as cm # Module for creating masks
Explanation: Example 2: Creating Masks
In this example we show how to create masks using create_mask.py.
Often it is convenient to consider only a reduced Region of Interest (ROI) when analyzing the data. In order to do this we need to create a mask. The masks are boolean arrays where pixels labelled as True are masked and those labelled False are unmasked. In this notebook we give examples of how to create various masks.
The masks are created by create_mask.py and can be passed to an instance of nptfit via the function load_mask for a run, or an instance of dnds_analysis via load_mask_analysis for an analysis. If no mask is specified the code defaults to the full sky as the ROI.
NB: Before you can call functions from NPTFit, you must have it installed. Instructions to do so can be found here:
http://nptfit.readthedocs.io/
End of explanation
example1 = cm.make_mask_total()
hp.mollview(example1, title='', cbar=False, min=0,max=1)
Explanation: Example 1: Mask Nothing
If no options are specified, create mask returns an empty mask. In the plot here and for those below, blue represents unmasked, red masked.
End of explanation
example2 = cm.make_mask_total(band_mask = True, band_mask_range = 30)
hp.mollview(example2, title='', cbar = False, min=0, max=1)
Explanation: Example 2: Band Mask
Here we show an example of how to mask a region either side of the plane - specifically we mask 30 degrees either side
End of explanation
example3a = cm.make_mask_total(l_mask = False, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3a,title='',cbar=False,min=0,max=1)
example3b = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = False, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3b,title='',cbar=False,min=0,max=1)
example3c = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3c,title='',cbar=False,min=0,max=1)
Explanation: Example 3: Mask outside a band in b and l
This example shows several methods of masking outside specified regions in galactic longitude (l) and latitude (b). The third example shows how when two different masks are specified, the mask returned is the combination of both.
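The returned masks are ordinary boolean arrays, so two of them can also be combined by hand with an element-wise OR; a plain numpy sketch (independent of NPTFit) is shown below.
```python
import numpy as np

mask_a = np.array([True, False, False, True])
mask_b = np.array([False, False, True, True])
combined = np.logical_or(mask_a, mask_b)  # a pixel is masked if either mask flags it
combined
```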
End of explanation
example4a = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 0, ring_l = 0)
hp.mollview(example4a,title='',cbar=False,min=0,max=1)
example4b = cm.make_mask_total(mask_ring = True, inner = 30, outer = 180, ring_b = 0, ring_l = 0)
hp.mollview(example4b,title='',cbar=False,min=0,max=1)
example4c = cm.make_mask_total(mask_ring = True, inner = 30, outer = 90, ring_b = 0, ring_l = 0)
hp.mollview(example4c,title='',cbar=False,min=0,max=1)
example4d = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 45, ring_l = 45)
hp.mollview(example4d,title='',cbar=False,min=0,max=1)
Explanation: Example 4: Ring and Annulus Mask
Next we show examples of masking outside a ring or annulus. The final example demonstrates that the ring need not be at the galactic center.
End of explanation
random_custom_mask = np.random.choice(np.array([True, False]), hp.nside2npix(128))
example5 = cm.make_mask_total(custom_mask = random_custom_mask)
hp.mollview(example5,title='',cbar=False,min=0,max=1)
Explanation: Example 5: Custom Mask
In addition to the options above, we can also add in custom masks. In this example we highlight this by adding a random mask.
End of explanation
pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool)
example6 = cm.make_mask_total(band_mask = True, band_mask_range = 2,
mask_ring = True, inner = 0, outer = 30,
custom_mask = pscmask)
hp.mollview(example6,title='',cbar=False,min=0,max=1)
Explanation: Example 6: Full Analysis Mask including Custom Point Source Catalog Mask
Finally we show an example of a full analysis mask that we will use for an analysis of the Galactic Center Excess in Example 3 and 7. Here we mask the plane with a band mask, mask outside a ring and also include a custom point source mask. The details of the point source mask are given in Example 1.
NB: before the point source mask can be loaded, the Fermi Data needs to be downloaded. See details in Example 1.
End of explanation |
3,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bienvenid@s a otra reunión de Pyladies!!
En esta sesión aprenderemos a crear nuestras propias funciones en python.Pero primero que son funciones?
Una función en python es un bloque de código organizado y reusable que sirve para realizar una tarea. Recuerdas las funciones que hemos usado en python, por ejemplo, cuando quisimos saber cuántos elementos hay en una lista usamos la función len. En python ya hay una gran colección de funciones que puedes utilizar (así no tenemos que re inventar la rueda cada vez que necesitemos algo) y aquí hay una lista de funciones que ya vienen incluídas en python.
Usando funciones en python
Como se los he dicho en varias ocasiones todas las funciones en python tienen la misma estructura que es como se ilustra a continuación
Step1: Ejercicio 1.
Cada un@ va a escoger una función de las que ya están incluídas en python y la va a explicar a sus compañer@s
Creando tus propias funciones en python
Ya que estás más familiarizada con las funciones pre hechas en python te podrás dar cuenta de que no siempre van a tener una que necesites, entonces, ¿cómo puedo hacer mis propias funciones?
En python la forma de hacerlo es la siguiente
Step2: Pausa para dudas
3 ..
2..
1
Muy bien! ahora te toca a tí
Step3: Ahora prueba tu función con estos valores de porcentage
Step4: Ahora veamos que pasa cuando llamamos a la función
Step5: Esto no significa que la función que acabo de escribir sea definitiva y no pueda yo modificarla para sacar las potencias con otros números. Como veremos a continuación, la función puede tomar cualquier número. Sólo tenemos que hacerlo explícito esta vez... | Python Code:
animales = ['perro', 'gato', 'perico']
len(animales)
animales[1]
x = 4
type(int('43'))
Explanation: Welcome to another Pyladies meetup!!
In this session we will learn how to create our own functions in Python. But first, what are functions?
A function in Python is an organized, reusable block of code that performs a task. Remember the functions we have already used in Python: for example, when we wanted to know how many elements a list has, we used the len function. Python already ships with a large collection of functions you can use (so we don't have to reinvent the wheel every time we need something), and here is a list of the functions that are built into Python.
Using functions in Python
As I have mentioned on several occasions, every function in Python has the same structure, illustrated below:
name + parentheses + arguments
In the case of len, the structure is the following:
len(lista)
len takes as its argument the list (or array) whose length you want to know. Once the function runs, it returns an object (which, naturally, is what we asked it for).
End of explanation
def cuadrado(numero):
    '''Function that returns the square of a number.
    It takes one number as its argument.'''
resultado = numero**2
return resultado
# Let's test the function
cuadrado(8)
cuadrado(8.0)
cuadrado(-8)
Explanation: Exercise 1.
Each of you will pick one of the functions that are already built into Python and explain it to the rest of the group.
Creating your own functions in Python
Now that you are more familiar with Python's built-in functions, you will notice that there will not always be one that does what you need. So, how can I write my own functions?
In Python the way to do it is the following:
First you have to make it clear to Python that the block of code (or small program) you are about to write is a function; for this you write def, which is short for "define".
Then you have to invent a name for your function. In theory you can call it whatever you want, but it is good practice in Python to name your functions so that when you read them months or years later you can clearly remember what they do.
After writing def and the function name comes something crucial for creating functions. Based on the structure of the functions that are already built into Python, what do you think it is...
... Exactly!! The arguments!!
This part is crucial because this is where you get the information needed to produce a result. We will see this later on.
Then comes the block of code you want to execute, which can consist of complex operations and transformations of the data.
Finally, so that it is clear to Python what the function should give back at the end, you need to write a return followed by whatever the result of the function will be.
The structure for defining functions looks like this:
def nombre_función(argumento 1, argumento 2, ... , argumento n):
    operación 1
    operación 2
    resultado = operación 1 + operación 2
    return resultado
Let's write a small function as an example.
End of explanation
def barras(porcentaje):
gatos = (porcentaje*20)//100
guiones = 20 - gatos
print('['+'#'* gatos + '-' * guiones + ']'+str(porcentaje)+'%')
barras(167)
gatos = (35*20)//100
print(20*'gatos')
Explanation: Pause for questions
3 ..
2..
1
Very good! Now it's your turn :)
Exercise 2
Write a function that draws a loading bar for a given percentage. Say we want to draw 35%; the result of running the function would be:
[#######-------------] 35%
End of explanation
def exponente(numero=4, exponente=2):
    '''Takes a number and raises it to the power of another.'''
resultado = numero**exponente
return resultado
Explanation: Now try your function with these percentage values:
* 12.5%
* 167 %
* -20
Exercise 3
Write a function that tells you how many consonants there are in a word. Example: the word "carroza" has 4 consonants.
Default arguments
There are occasions when the argument values for a function we are about to create will be used all the time, or are simply the most sensible ones, and so that we don't have to type them every time we call the function, we can set them when we define the function.
Let's assume I want to write a function that raises a number x to the power n. Say that, in my experience, most people want to know the square of 4. So what I do is write a function whose default arguments are 4 and 2... Let's look at the example.
End of explanation
exponente()
Explanation: Now let's see what happens when we call the function
End of explanation
exponente(4, 0.5)
exponente(5, -1)
exponente(0.5, 2)
Explanation: This does not mean that the function I just wrote is final and that I cannot modify how I use it to compute powers with other numbers. As we will see next, the function can take any number; we just have to make it explicit this time...
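A small extra sketch (not in the original material): the defaults also let you override just one argument by name.
```python
exponente(exponente=3)   # keeps numero=4, overrides the exponent -> 64
exponente(numero=10)     # keeps exponente=2 -> 100
```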
End of explanation |
3,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras Hello World
Install Keras
https
Step1: TODO | Python Code:
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
from keras.models import Sequential
model = Sequential()
from keras.layers import Dense
model.add(Dense(units=6, activation='relu', input_dim=4))
model.add(Dense(units=3, activation='softmax'))
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file="model.png")
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
from sklearn import datasets  # scikit-learn provides the iris dataset
import numpy as np            # used below to build the one-hot label matrix

iris = datasets.load_iris()
Explanation: Keras Hello World
Install Keras
https://keras.io/#installation
Install dependencies
Install TensorFlow backend: https://www.tensorflow.org/install/
pip install tensorflow
Insall h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
pip install h5py
Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
pip install pydot
Install Keras
pip install keras
Import packages and check versions
End of explanation
x_train = iris.data
y_train = np.zeros(shape=[x_train.shape[0], 3])
y_train[(iris.target == 0), 0] = 1
y_train[(iris.target == 1), 1] = 1
y_train[(iris.target == 2), 2] = 1
x_test = x_train
y_test = y_train
model.fit(x_train, y_train)
model.evaluate(x_test, y_test)
model.predict(x_test)
Explanation: TODO: shuffle
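One possible way to address the shuffle TODO, sketched here with a numpy permutation (this is an assumption, not part of the original notebook):
```python
# Shuffle features and one-hot labels with the same random permutation.
shuffle_idx = np.random.permutation(x_train.shape[0])
x_train = x_train[shuffle_idx]
y_train = y_train[shuffle_idx]
```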
End of explanation |
3,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: List assignment
Step2: Break up a string into variables
Step3: Breaking up a number into separate variables
Step4: Assign the first letter of 'spam' into varible a, assign all the remaining letters to variable b | Python Code:
variableName = 'This is a string.'
Explanation: Title: Breaking Up String Variables
Slug: breaking_up_string_variables
Summary: Breaking Up String Variables
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Basic name assignment
End of explanation
One, Two, Three = [1, 2, 3]
Explanation: List assignment
End of explanation
firstLetter, secondLetter, thirdLetter, fourthLetter = 'Bark'
firstLetter
secondLetter
thirdLetter
fourthLetter
Explanation: Break up a string into variables
End of explanation
firstNumber, secondNumber, thirdNumber, fourthNumber = '9485'
firstNumber
secondNumber
thirdNumber
fourthNumber
Explanation: Breaking up a number into separate variables
End of explanation
a, *b = 'spam'
a
b
Explanation: Assign the first letter of 'spam' into varible a, assign all the remaining letters to variable b
End of explanation |
3,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Toric Code Ground State
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Toric code Hamiltonian
The toric code Hamiltonian
\begin{equation}
H = -\sum_s A_s - \sum_p B_p
\end{equation}
involves local four-qubit parity operators, where each qubit lives on an edge in a square lattice. Here, the "star" operators $A_s$ are products of Pauli $Z$ operators around a vertex, while the "plaquette" operators $B_p$ are products of $X$ operators around a square, for example,
\begin{equation}
A_s = Z_i \otimes Z_j \otimes Z_k \otimes Z_l
\end{equation}
\begin{equation}
B_p = X_a \otimes X_b \otimes X_c \otimes X_d.
\end{equation}
These local parity operators all commute with each other
Step2: We can also see the full circuit of how to create this code (using CNOT gates) using these objects as well. By printing out the circuit moment by moment, we can see the gates lined up in a visual manner.
Step5: Simulating the parities
For a given circuit, we can determine all the parity expectation values $\langle A_s\rangle$ by sampling 22-qubit bitstrings and then computing each expectation value. We do the same thing with for $\langle B_p \rangle$, but we include a layer of Hadamards before measurement to effectively "measure in $X$ basis."
Step6: We can step through the circuit one moment at a time to see how the parities $A_s$ and $B_p$ evolve through the circuit. This is similar to Figure 1B in paper (but simulating instead of using experimental data). We begin with $|0\rangle^{\otimes 22}$, which corresponds to n_moments_to_include=0. There, all $\langle A_s \rangle = +1$ but $\langle B_p \rangle = 0$ (see colorbars below). The subsequent moments apply Hadamard and CNOT gates to stitch entanglement across the device and create $|G\rangle$.
Step7: After the final step, all the parities are $+1$ (see colorbars below), indicating we have successfully created $|G\rangle$. | Python Code:
try:
import recirq
except ImportError:
!pip install --quiet git+https://github.com/quantumlib/ReCirq
import recirq
try:
import qsimcirq
except ImportError:
!pip install qsimcirq --quiet
import qsimcirq
import cirq
import matplotlib.pyplot as plt
import recirq.toric_code.toric_code_plaquettes as tcp
import recirq.toric_code.toric_code_plotter as tcplot
import recirq.toric_code.toric_code_rectangle as tcr
import recirq.toric_code.toric_code_state_prep as tcsp
plt.rcParams['figure.dpi'] = 144
Explanation: Toric Code Ground State
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/toric_code/toric_code_ground_state"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/toric_code/toric_code_ground_state.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/toric_code/toric_code_ground_state.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/toric_code/toric_code_ground_state.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Before beginning, we will import the necessary modules into the colab.
End of explanation
short_rectangle = tcr.ToricCodeRectangle(
origin_qubit=cirq.GridQubit(3, 0), row_vector=(1, 1), rows=2, cols=4
)
plotter = tcplot.ToricCodePlotter()
plotter.plot_code(short_rectangle)
Explanation: Toric code Hamiltonian
The toric code Hamiltonian
\begin{equation}
H = -\sum_s A_s - \sum_p B_p
\end{equation}
involves local four-qubit parity operators, where each qubit lives on an edge in a square lattice. Here, the "star" operators $A_s$ are products of Pauli $Z$ operators around a vertex, while the "plaquette" operators $B_p$ are products of $X$ operators around a square, for example,
\begin{equation}
A_s = Z_i \otimes Z_j \otimes Z_k \otimes Z_l
\end{equation}
\begin{equation}
B_p = X_a \otimes X_b \otimes X_c \otimes X_d.
\end{equation}
These local parity operators all commute with each other: all $A_s$ commute, all $B_p$ commute, and $A_s$ and $B_p$ commute with each other because they overlap on an even number of qubits. They can thus all be simultaneously diagonalized, and those shared eigenstates are also the eigenstates of $H$.
<img src="../images/toric_code1.png" alt="Toric Code Example"/>
In our paper, we mostly work with the 31-qubit lattice above. With these boundary conditions, there is a unique ground state that has a $+1$ eigenvalue for all $A_s$ and $B_p$. Note for different boundary conditions, we can have degeneracies that are locally-indistinguishable (for example on a torus, or with the "surface code" logical qubits we explore in Figure 4 of our paper).
In this module, we will primarily work with the smaller 22-qubit lattice to avoid the time and memory constraints associated with the larger rectangle.
Understanding the ground state
In this example, we focus on reproducing our first figure, where we create this unique ground state $|G\rangle$ using a shallow unitary circuit. The general idea is to start out with $|0\rangle^{\otimes 22}$, so all $\langle A_s \rangle = +1$. We then apply projection operators $\mathbb{I} + B_p$ which project the state into a $+1$ eigenstate of $B_p$, after which all the local parities are $+1$:
\begin{equation}
|G\rangle \propto \prod_p (\mathbb{I} + B_p)|0\rangle^{\otimes 22}.
\end{equation}
To create this state, we assign a "team captain" qubit to each plaquette $B_p$. Starting from $|0\rangle^{\otimes n}$, we perform a Hadamard on each team captain, and then each team captain is responsible for performing a CNOT to each of its team mates. We have to be careful with the ordering to keep things efficient and avoid the captains stepping on each other's toes. This is easier to visualize for a smaller system, for example the 12-qubit version in Figure S2, reproduced below. Note the superposition of $2^4$ states, as there are four plaquettes $B_p$.
<img src="../images/toric_code2.png" alt="12 qubit toric code example"/>
Creating $|G\rangle$ with ReCirq
Basics: 22-qubit circuit
First, we can create an example 22-qubit grid by instantiating it with a ToricCodeRectangle object and then plotting a visualization with a ToricCodePlotter object, both found in the ReCirq repository.
End of explanation
full_circuit = tcsp.toric_code_cnot_circuit(short_rectangle)
for idx, moment in enumerate(full_circuit):
print(f'moment {idx}\n{moment}\n')
Explanation: We can also see the full circuit that creates this code (using CNOT gates) with these same objects. By printing out the circuit moment by moment, we can see the gates lined up in a visual manner.
End of explanation
def partial_circuit(
n_moments_to_include: int, *, x_basis: bool
) -> cirq.Circuit:
Create the first N moments of a toric code circuit in the Z or X basis.
Args:
n_moments_to_include: number of moments to include
x_basis: If True, add Hadamards to effectively measure in the X basis.
If False, measure in the computational (Z) basis.
Returns: First N moments of a toric code circuit plus an optional
layer of Hadamard gates to effectively measure in the X basis.
This circuit also includes measurement gates.
sliced_circuit = full_circuit[:n_moments_to_include]
qubits = sorted(short_rectangle.qubits)
if x_basis:
sliced_circuit += cirq.Moment(cirq.H.on_each(*qubits))
return sliced_circuit + cirq.measure(*qubits)
def get_plaquettes(
n_moment_to_include: int, repetitions: int = 1000,
sampler: cirq.Sampler = qsimcirq.QSimSimulator()
) -> tcp.ToricCodePlaquettes:
Simulates the results in both bases and determines the plaquette values.
Args:
n_moment_to_include: number of moments to include
repetitions: number of repetitions (shots) to sample
sampler: Sampler (simulator) to execute circuits. Defaults to qsim.
x_data = sampler.run(
partial_circuit(n_moment_to_include, x_basis=True), repetitions=repetitions
)
z_data = sampler.run(
partial_circuit(n_moment_to_include, x_basis=False), repetitions=repetitions
)
return tcp.ToricCodePlaquettes.from_global_measurements(
short_rectangle, x_data.data, z_data.data
)
Explanation: Simulating the parities
For a given circuit, we can determine all the parity expectation values $\langle A_s\rangle$ by sampling 22-qubit bitstrings and then computing each expectation value. We do the same thing for $\langle B_p \rangle$, but we include a layer of Hadamards before measurement to effectively "measure in the $X$ basis."
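Schematically, each four-qubit parity is just the average sign of the measured bits, for example (hypothetical column indices; the per-operator bookkeeping is done by ToricCodePlaquettes.from_global_measurements):
import numpy as np
def parity_expectation(bits: np.ndarray, cols) -> float:
    # bits: (repetitions, n_qubits) array of 0/1 measurement outcomes
    signs = (-1) ** bits[:, cols].sum(axis=1)  # +1 for even parity, -1 for odd
    return signs.mean()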
End of explanation
for n in range(len(tcsp.toric_code_cnot_circuit(short_rectangle)) + 1):
p = get_plaquettes(n)
ax = plotter.plot_expectation_values(p)
ax.set_title(f'n_moments_to_include={n}')
plt.pause(0.001)
Explanation: We can step through the circuit one moment at a time to see how the parities $A_s$ and $B_p$ evolve through the circuit. This is similar to Figure 1B in the paper (but simulating instead of using experimental data). We begin with $|0\rangle^{\otimes 22}$, which corresponds to n_moments_to_include=0. There, all $\langle A_s \rangle = +1$ but $\langle B_p \rangle = 0$ (see colorbars below). The subsequent moments apply Hadamard and CNOT gates to stitch entanglement across the device and create $|G\rangle$.
End of explanation
ax_z = plotter.make_colorbar(x_basis=False, orientation='horizontal')
ax_z.set_label(r'Z parity, $\langle A\rangle$')
ax_x = plotter.make_colorbar(x_basis=True, orientation='horizontal')
ax_x.set_label(r'X parity, $\langle B\rangle$')
Explanation: After the final step, all the parities are $+1$ (see colorbars below), indicating we have successfully created $|G\rangle$.
End of explanation |
3,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poisson
$$f\left(\left.y\right|x_{i}\right)=\frac{\exp\left(-\mu\left(x_{i}\right)\right)\mu\left(x_{i}\right)^{y}}{y!}$$
$$\mu\left(X_{i}\right)=\exp\left(X_{i}\theta\right)$$
Step1: Generate data
Step2: Plot the data and the model
Step3: Maximize log-likelihood
$$l\left(y|x,\theta\right)=\sum_{i=1}^{n}\log\frac{\exp\left(-\mu\left(x_{i}\right)\right)\mu\left(x_{i}\right)^{y_{i}}}{y_{i}!}$$
Step4: Plot objective function, true parameter, and the estimate
Step5: Solve first order conditions
Step6: Plot first order condition
Step7: Plot original data and fitted mean | Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
np.set_printoptions(precision=4, suppress=True)
sns.set_context('notebook')
%matplotlib inline
Explanation: Poisson
$$f\left(\left.y\right|x_{i}\right)=\frac{\exp\left(-\mu\left(x_{i}\right)\right)\mu\left(x_{i}\right)^{y}}{y!}$$
$$\mu\left(X_{i}\right)=\exp\left(X_{i}\theta\right)$$
End of explanation
# True parameter
theta = .5
# Sample size
n = int(1e2)
# Independent variable, N(0,1)
X = np.random.normal(0, 1, n)
# Sort data for nice plots
X = np.sort(X)
mu = np.exp(X * theta)
# Dependent variable: Poisson counts with mean mu
Y = np.random.poisson(mu, n)
Explanation: Generate data
End of explanation
plt.figure(figsize = (8, 8))
plt.scatter(X, Y, label='Observed data')
plt.ylabel(r'$Y$')
plt.xlabel(r'$X$')
plt.show()
Explanation: Plot the data and the model
End of explanation
import scipy.optimize as opt
from scipy.stats import poisson
# Define objective function
def f(theta, X, Y):
Q = - np.sum(np.log(1e-3 + poisson.pmf(Y, np.exp(X * theta))))
return Q
# Run optimization routine
theta_hat = opt.fmin_bfgs(f, 0., args=(X, Y))
print(theta_hat)
Explanation: Maximize log-likelihood
$$l\left(y|x,\theta\right)=\sum_{i=1}^{n}\log\frac{\exp\left(-\mu\left(x_{i}\right)\right)\mu\left(x_{i}\right)^{y_{i}}}{y_{i}!}$$
End of explanation
# Generate data for objective function plot
th = np.linspace(-3., 3., 100)
Q = [f(z, X, Y) for z in th]
# Plot the data
plt.figure(figsize=(8, 4))
plt.plot(th, Q, label='Q')
plt.xlabel(r'$\theta$')
plt.axvline(x=theta_hat, c='red', label='Estimated')
plt.axvline(x=theta, c='black', label='True')
plt.legend()
plt.show()
Explanation: Plot objective function, true parameter, and the estimate
End of explanation
from scipy.optimize import fsolve
# Define the first order condition
def df(theta, X, Y):
return - np.sum(X * (Y - np.exp(X * theta)))
# Solve FOC
theta_hat = fsolve(df, 0., args=(X, Y))
print(theta_hat)
Explanation: Solve first order conditions
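For reference, differentiating the log-likelihood with respect to $\theta$ gives the score equation that the df function encodes (with the sign flipped to match the minimization of f):
$$\frac{\partial l}{\partial\theta}=\sum_{i=1}^{n}x_{i}\left(y_{i}-\exp\left(x_{i}\theta\right)\right)=0$$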
End of explanation
# Generate data for the plot
th = np.linspace(-3., 3., 100)
Q = np.array([df(z, X, Y) for z in th])
# Plot the data
plt.figure(figsize=(8, 4))
plt.plot(th, Q, label='Q')
plt.xlabel(r'$\theta$')
plt.axvline(x=theta_hat, c='red', label='Estimated')
plt.axvline(x=theta, c='black', label='True')
plt.axhline(y=0, c='green')
plt.legend()
plt.show()
Explanation: Plot first order condition
End of explanation
plt.figure(figsize=(8, 8))
plt.scatter(X, Y, label='Observed data')
plt.plot(X, np.exp(X * theta_hat), label='Fitted mean')
plt.ylabel(r'$Y$')
plt.xlabel(r'$X$')
plt.legend()
plt.show()
Explanation: Plot original data and fitted mean
End of explanation |
3,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Check for missing values
Step2: Take a quick look at the ham and spam label column
Step3: <font color=green>We see that 4825 out of 5572 messages, or 86.6%, are ham.<br>This means that any machine learning model we create has to perform better than 86.6% to beat random chance.</font>
Visualize the data
Step4: <font color=green>This dataset is extremely skewed. The mean value is 80.5 and yet the max length is 910. Let's plot this on a logarithmic x-axis.</font>
Step5: <font color=green>It looks like there's a small range of values where a message is more likely to be spam than ham.</font>
Now let's look at the punct column
Step6: <font color=green>This looks even worse - there seem to be no values where one would pick spam over ham. We'll still try to build a machine learning classification model, but we should expect poor results.</font>
Split the data into train & test sets
Step7: Additional train/test/split arguments
Step8: Now we can pass these sets into a series of different training & testing algorithms and compare their results.
Train a Logistic Regression classifier
One of the simplest multi-class classification tools is logistic regression. Scikit-learn offers a variety of algorithmic solvers; we'll use L-BFGS.
Step9: Test the Accuracy of the Model
Step10: <font color=green>These results are terrible! More spam messages were confused as ham (241) than correctly identified as spam (5), although a relatively small number of ham messages (46) were confused as spam.</font>
Step11: <font color=green>This model performed worse than a classifier that assigned all messages as "ham" would have!</font>
Train a naïve Bayes classifier
Step12: Run predictions and report on metrics
Step13: <font color=green>The total number of confusions dropped from 287 to 256. [241+46=287, 246+10=256]</font>
Step14: <font color=green>Better, but still less accurate than 86.6%</font>
Train a support vector machine (SVM) classifier
Among the SVM options available, we'll use C-Support Vector Classification (SVC)
Step15: Run predictions and report on metrics
Step16: <font color=green>The total number of confusions dropped even further to 209.</font> | Python Code:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()
len(df)
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Scikit-learn Primer
Scikit-learn (http://scikit-learn.org/) is an open-source machine learning library for Python that offers a variety of regression, classification and clustering algorithms.
In this section we'll perform a fairly simple classification exercise with scikit-learn. In the next section we'll leverage the machine learning strength of scikit-learn to perform natural language classifications.
Installation and Setup
From the command line or terminal:
conda install scikit-learn
<br>or<br>
pip install -U scikit-learn
Scikit-learn additionally requires that NumPy and SciPy be installed. For more info visit http://scikit-learn.org/stable/install.html
Perform Imports and Load Data
For this exercise we'll be using the SMSSpamCollection dataset from UCI datasets that contains more than 5 thousand SMS phone messages.<br>You can check out the sms_readme file for more info.
The file is a tab-separated-values (tsv) file with four columns:
label - every message is labeled as either ham or spam<br>
message - the message itself<br>
length - the number of characters in each message<br>
punct - the number of punctuation characters in each message
End of explanation
df.isnull().sum()
Explanation: Check for missing values:
Machine learning models usually require complete data.
End of explanation
df['label'].unique()
df['label'].value_counts()
Explanation: Take a quick look at the ham and spam label column:
End of explanation
df['length'].describe()
Explanation: <font color=green>We see that 4825 out of 5572 messages, or 86.6%, are ham.<br>This means that any machine learning model we create has to perform better than 86.6% to beat random chance.</font>
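As a quick check, that majority-class baseline can be computed directly from the label counts:
# Accuracy of always predicting the majority class ('ham')
print(df['label'].value_counts(normalize=True).max())  # ~0.866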
Visualize the data:
Since we're not ready to do anything with the message text, let's see if we can predict ham/spam labels based on message length and punctuation counts. We'll look at message length first:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale('log')
bins = 1.15**(np.arange(0,50))
plt.hist(df[df['label']=='ham']['length'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['length'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show()
Explanation: <font color=green>This dataset is extremely skewed. The mean value is 80.5 and yet the max length is 910. Let's plot this on a logarithmic x-axis.</font>
End of explanation
df['punct'].describe()
plt.xscale('log')
bins = 1.5**(np.arange(0,15))
plt.hist(df[df['label']=='ham']['punct'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['punct'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show()
Explanation: <font color=green>It looks like there's a small range of values where a message is more likely to be spam than ham.</font>
Now let's look at the punct column:
End of explanation
# Create Feature and Label sets
X = df[['length','punct']] # note the double set of brackets
y = df['label']
Explanation: <font color=green>This looks even worse - there seem to be no values where one would pick spam over ham. We'll still try to build a machine learning classification model, but we should expect poor results.</font>
Split the data into train & test sets:
If we wanted to divide the DataFrame into two smaller sets, we could use
train, test = train_test_split(df)
For our purposes let's also set up our Features (X) and Labels (y). The Label is simple - we're trying to predict the label column in our data. For Features we'll use the length and punct columns. By convention, X is capitalized and y is lowercase.
Selecting features
There are two ways to build a feature set from the columns we want. If the number of features is small, then we can pass those in directly:
X = df[['length','punct']]
If the number of features is large, then it may be easier to drop the Label and any other unwanted columns:
X = df.drop(['label','message'], axis=1)
These operations make copies of df, but do not change the original DataFrame in place. All the original data is preserved.
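If in doubt, the two selections can be compared directly (for this DataFrame they produce the same feature columns):
# Both approaches keep exactly the 'length' and 'punct' columns
df[['length','punct']].equals(df.drop(['label','message'], axis=1))  # True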
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print('Training Data Shape:', X_train.shape)
print('Testing Data Shape: ', X_test.shape)
Explanation: Additional train/test/split arguments:
The default test size for train_test_split is 30%. Here we'll assign 33% of the data for testing.<br>
Also, we can set a random_state seed value to ensure that everyone uses the same "random" training & testing sets.
End of explanation
from sklearn.linear_model import LogisticRegression
lr_model = LogisticRegression(solver='lbfgs')
lr_model.fit(X_train, y_train)
Explanation: Now we can pass these sets into a series of different training & testing algorithms and compare their results.
Train a Logistic Regression classifier
One of the simplest multi-class classification tools is logistic regression. Scikit-learn offers a variety of algorithmic solvers; we'll use L-BFGS.
End of explanation
from sklearn import metrics
# Create a prediction set:
predictions = lr_model.predict(X_test)
# Print a confusion matrix
print(metrics.confusion_matrix(y_test,predictions))
# You can make the confusion matrix less confusing by adding labels:
df = pd.DataFrame(metrics.confusion_matrix(y_test,predictions), index=['ham','spam'], columns=['ham','spam'])
df
Explanation: Test the Accuracy of the Model
End of explanation
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
Explanation: <font color=green>These results are terrible! More spam messages were confused as ham (241) than correctly identified as spam (5), although a relatively small number of ham messages (46) were confused as spam.</font>
End of explanation
from sklearn.naive_bayes import MultinomialNB
nb_model = MultinomialNB()
nb_model.fit(X_train, y_train)
Explanation: <font color=green>This model performed worse than a classifier that assigned all messages as "ham" would have!</font>
Train a naïve Bayes classifier:
One of the most common - and successful - classifiers is naïve Bayes.
End of explanation
predictions = nb_model.predict(X_test)
print(metrics.confusion_matrix(y_test,predictions))
Explanation: Run predictions and report on metrics
End of explanation
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
Explanation: <font color=green>The total number of confusions dropped from 287 to 256. [241+46=287, 246+10=256]</font>
End of explanation
from sklearn.svm import SVC
svc_model = SVC(gamma='auto')
svc_model.fit(X_train,y_train)
Explanation: <font color=green>Better, but still less accurate than 86.6%</font>
Train a support vector machine (SVM) classifier
Among the SVM options available, we'll use C-Support Vector Classification (SVC)
End of explanation
predictions = svc_model.predict(X_test)
print(metrics.confusion_matrix(y_test,predictions))
Explanation: Run predictions and report on metrics
End of explanation
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
Explanation: <font color=green>The total number of confusions dropped even further to 209.</font>
End of explanation |
3,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow IO Authors.
Step1: 医療画像処理向けに DICOM ファイルをデコードする
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Install the required packages and restart the runtime
Step3: Decode a DICOM image
Step4: Decode DICOM metadata and work with tags
decode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so you can use DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex. tensorflow_io uses the same tag notation as the pydicom dicom package.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow IO Authors.
End of explanation
!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/dicom/dicom_00000001_000.dcm
!ls -l dicom_00000001_000.dcm
Explanation: Decode DICOM files for medical imaging
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/dicom"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial shows how to use tfio.image.decode_dicom_image in TensorFlow IO to decode DICOM files with TensorFlow.
Setup and usage
Download a DICOM image
This tutorial uses a DICOM image from the NIH Chest X-ray dataset.
The NIH Chest X-ray dataset contains 100,000 de-identified chest X-ray images in PNG format. It is provided by the NIH Clinical Center and can be downloaded from this link.
Google Cloud also provides a DICOM version of the images, available in Cloud Storage.
In this tutorial, you will download a sample file of the dataset from the GitHub repository.
Note: For more information about the dataset, see the following reference:
Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald Summers, ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017
End of explanation
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install tensorflow-io
Explanation: Install the required packages and restart the runtime
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_io as tfio
image_bytes = tf.io.read_file('dicom_00000001_000.dcm')
image = tfio.image.decode_dicom_image(image_bytes, dtype=tf.uint16)
skipped = tfio.image.decode_dicom_image(image_bytes, on_error='skip', dtype=tf.uint8)
lossy_image = tfio.image.decode_dicom_image(image_bytes, scale='auto', on_error='lossy', dtype=tf.uint8)
fig, axes = plt.subplots(1,2, figsize=(10,10))
axes[0].imshow(np.squeeze(image.numpy()), cmap='gray')
axes[0].set_title('image')
axes[1].imshow(np.squeeze(lossy_image.numpy()), cmap='gray')
axes[1].set_title('lossy image');
Explanation: Decode a DICOM image
End of explanation
tag_id = tfio.image.dicom_tags.PatientsAge
tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)
print(tag_value)
print(f"PatientsAge : {tag_value.numpy().decode('UTF-8')}")
tag_id = tfio.image.dicom_tags.PatientsSex
tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)
print(f"PatientsSex : {tag_value.numpy().decode('UTF-8')}")
Explanation: Decode DICOM metadata and work with tags
decode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so you can use DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex. tensorflow_io uses the same tag notation as the pydicom dicom package.
End of explanation |
3,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise from which we expect values around
0 with less than 2 standard deviations. Covariance estimation and diagnostic
plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show whitening | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise from which we expect values around
0 with less than 2 standard deviations. Covariance estimation and diagnostic
plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
Explanation: Set parameters
End of explanation
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
Explanation: Compute covariance using automated regularization
End of explanation
evoked = epochs.average()
evoked.plot() # plot evoked response
# plot the whitened evoked data to see if baseline signals match the
# assumption of Gaussian white noise from which we expect values around
# 0 with less than 2 standard deviations. For the Global field power we expect
# a value of 1.
evoked.plot_white(noise_covs)
Explanation: Show whitening
End of explanation |
3,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of using the Google API Client to access BigQuery
Note that this is <b>not</b> the recommended approach. You should use the BigQuery client library because that is idiomatic Python.
See the bigquery_client notebook for examples.
Authenticate and build stubs
Step1: Get info about a dataset
Step2: List tables and creation times
Step3: Query and get result
Step4: Asynchronous query and paging through results | Python Code:
PROJECT='cloud-training-demos' # CHANGE THIS
from googleapiclient.discovery import build
service = build('bigquery', 'v2')
Explanation: Example of using the Google API Client to access BigQuery
Note that this is <b>not</b> the recommended approach. You should use the BigQuery client library because that is idiomatic Python.
See the bigquery_client notebook for examples.
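For comparison, the same kind of query with the recommended client library looks roughly like this (a minimal sketch, not run in this notebook):
from google.cloud import bigquery
client = bigquery.Client(project=PROJECT)
for row in client.query('SELECT 1 AS x').result():
    print(row.x)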
Authenticate and build stubs
End of explanation
# information about the ch04 dataset
dsinfo = service.datasets().get(datasetId="ch04", projectId=PROJECT).execute()
for info in dsinfo.items():
print(info)
Explanation: Get info about a dataset
End of explanation
# list tables in dataset
tables = service.tables().list(datasetId="ch04", projectId=PROJECT).execute()
for t in tables['tables']:
print(t['tableReference']['tableId'] + ' was created at ' + t['creationTime'])
Explanation: List tables and creation times
End of explanation
# send a query request
request={
"useLegacySql": False,
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5"
}
print(request)
response = service.jobs().query(projectId=PROJECT, body=request).execute()
print('----' * 10)
for r in response['rows']:
print(r['f'][0]['v'])
Explanation: Query and get result
End of explanation
# send a query request that will not terminate within the timeout specified and will require paging
request={
"useLegacySql": False,
"timeoutMs": 0,
"useQueryCache": False,
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5"
}
response = service.jobs().query(projectId=PROJECT, body=request).execute()
print(response)
jobId = response['jobReference']['jobId']
print(jobId)
# get query results
while (not response['jobComplete']):
response = service.jobs().getQueryResults(projectId=PROJECT,
jobId=jobId,
maxResults=2,
timeoutMs=5).execute()
while (True):
# print responses
for row in response['rows']:
print(row['f'][0]['v']) # station name
print('--' * 5)
# page through responses
if 'pageToken' in response:
pageToken = response['pageToken']
# get next page
response = service.jobs().getQueryResults(projectId=PROJECT,
jobId=jobId,
maxResults=2,
pageToken=pageToken,
timeoutMs=5).execute()
else:
break
Explanation: Asynchronous query and paging through results
End of explanation |
3,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the collected data
Use of IPython to analyze and display the data collected during production. An expert regulator is implemented. The data analyzed are from August 12, 2015.
The experiment data
Step1: We plot both diameters and the puller speed on the same graph
Step2: With this second approach the data have been stabilized. We will try to lower that percentage. As a second approach, we will modify the increments for the cases in which the diameter is between $1.80mm$ and $1.70 mm$, in both directions (cases 3 to 6)
Comparison of Diametro X versus Diametro Y to check the filament ratio
Step3: Data filtering
We treat the samples with $d_x >= 0.9$ or $d_y >= 0.9$ as sensor error, so we filter them from the collected samples.
Step4: Plot of X/Y
Step5: We analyze the ratio data
Step6: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
#Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the sample data
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
# Store in a list the file columns we will work with
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Use of IPython to analyze and display the data collected during production. An expert regulator is implemented. The data analyzed are from August 12, 2015.
The experiment data:
* Start time: 11:05
* End time: 11:35
* Extruded filament: 435 cm
* $T: 150ºC$
* $V_{min}$ puller: $1.5 mm/s$
* $V_{max}$ puller: $3.4 mm/s$
* The speed increments in the rules of the expert system are different:
* In case 5, the speed increment changes from +1 to +2.
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: With this second approach the data have been stabilized. We will try to lower that percentage. As a second approach, we will modify the increments for the cases in which the diameter is between $1.80mm$ and $1.70 mm$, in both directions (cases 3 to 6).
Comparison of Diametro X versus Diametro Y to check the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
We treat the samples with $d_x >= 0.9$ or $d_y >= 0.9$ as sensor error, so we filter them from the collected samples.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: Representación de X/Y
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = ratio.rolling(50).mean()
rolling_std = ratio.rolling(50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyze the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
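To express this as a fraction of the run, the variables defined above can be reused directly:
# Fraction of samples outside the quality limits
print(len(data_violations) / len(datos))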
End of explanation |