Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
2,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Step1: One approach to building a predictive model is to subdivide the variable space into regions, by sequentially subdividing each variable. For example, if we split ltg at a threshold value of -0.01, it does a reasonable job of isolating the large values in one of the resulting subspaces.
Step2: However, that region still contains a fair number of low (light) values, so we can similarly bisect the region using a bmi value of -0.03 as a threshold value
Step3: We can use this partition to create a piecewise-constant function, which returns the average value of the observations in each region defined by the threshold values. We could then use this rudimentary function as a predictive model.
Step4: The choices for splitting the variables here were relatively arbitrary. Better choices can be made using a cost function $C$, such as residual sums of squares (RSS).
$$C = \sum_j \sum_{i \in R_j} (y_i - \hat{y}_{R_j})^2$$
where $\hat{y}_{R_j}$ is the mean response for the training observations in the jth region.
Exercise
Use residual sums of squares to select competitive threshold values for the predictive model defined above
Step5: The recursive partitioning demonstrated above results in a decision tree. The regions defined by the trees are called terminal nodes. Locations at which a predictor is split, such as bmi=-0.03, are called internal nodes. As with this simple example, splits are not generally symmetric, in the sense that splits do not occur similarly on all branches.
Now consider a subset of three variables from the Titanic dataset, which we would like to use to predict survival from the disaster. The following describes one such decision tree
Step6: However, if the variable splits responses into equal numbers of positive and negative values, then entropy is maximized, and we wish to know about the feature
Step7: The entropy calculation tells us how much additional information we would obtain with knowledge of the variable.
So, if we have a set of candidate covariates from which to choose as a node in a decision tree, we should choose the one that gives us the most information about the response variable (i.e. the one with the highest entropy).
Misclassification Rate
Alternatively, we can use the misclassification rate
Step8: ID3
A given cost function can be used to construct a decision tree via one of several algorithms. The Iterative Dichotomiser 3 (ID3) is one such algorithm, which uses entropy, and a related concept, information gain, to choose features and partitions at each classification step in the tree.
Information gain is the difference between the current entropy of a system and the entropy measured after a feature is chosen. If $S$ is a set of examples and $X$ is a possible feature on which to partition the examples, then
Step9: Consider a few variables from the titanic database
Step10: Here, we have selected passenger class (pclass), sex, port of embarkation (embarked), and a derived variable called adult. We can calculate the information gain for each of these.
Step11: Hence, the ID3 algorithm computes the information gain for each variable, selecting the one with the highest value (in this case, adult). In this way, it searches the "tree space" according to a greedy strategy.
A tree can be constructed by recursively selecting the feature from the current dataset with the largest information gain, then removing it from the dataset. Recursion stops when there are either no variables remaining, or there is only one class left in the subset (e.g. all True or all False).
The ID3 algorithm is as follows
Step12: If you have GraphViz installed, you can draw the resulting tree
Step13: Pruning
Despite the inductive bias associated with trees that tend to make them small, the ID3 algorithm continues choosing nodes and branches until either it runs out of variables, or all outputs are of the same class. This can clearly lead to overfit trees.
To prevent overfitting, we can stop growing the tree if the information gain (or reduction in error, etc.) is not sufficient to justify the extra complexity of adding another node. However, this simple rule is not optimal, because an uninformative subtree can lead to informative ones later on.
The standard approach is therefore to grow a full tree, and then to prune it. The easiest approach is to remove branches that give the least increase in the error (information gain). To determine how far back to prune, we can evaluate the cross-validated error on each candidate pruning, and then pick the tree whose CV error is within 1 standard error of the minimum.
Analogous to the lasso or ridge regression, we can penalize the number of terminal nodes in a tree
Step14: Test error of a bagged model is measured by estimating out-of-bag error.
On average, each bagged tree uses about 2/3 of observations, leaving the remaining third as "out-of-bag". The response for the ith observation for each of the trees in which that observation was excluded (on average, B/3) is averaged. This is essentially the same as performing leave-one-out (LOO) cross-validation.
Step15: This approach is an ensemble learning method, because it takes a set of weak learners, and combines them to construct a strong learner that is more robust, with lower generalization error.
An average of B trees, each with variance $\sigma^2$, has variance $\sigma^2/B$. If the variables are simply identically distributed, with positive pairwise correlation $\rho$, then the variance of the average of the B trees is
Step16: With random forests, it is possible to quantify the relative importance of feature inputs for classification. In scikit-learn, the Gini index (recall, a measure of error reduction) is calculated for each internal node that splits on a particular feature of a given tree, which is multiplied by the number of samples that were routed to the node (this approximates the probability of reaching that node). For each variable, this quantity is averaged over the trees in the forest to yield a measure of importance.
Step17: RandomForestClassifier uses the Gini impurity index by default; one may instead use the entropy information gain as a criterion.
Step18: Decision Tree Regression
While it may not be apparent how to use trees for regression analysis, it requires only a straightforward modification to the algorithm. A popular tree-based regression algorithm is the classification and regression tree (CART).
The file TNNASHVI.txt in your data directory contains daily temperature readings for Nashville, courtesy of the Average Daily Temperature Archive. This data, as one would expect, oscillates annually. We can use a decision tree regression model to fit the data.
Step19: In this context, none of the cost functions considered so far would be appropriate. Instead, it would be more suitable to use something like mean squared error (MSE) to guide the growth of the tree. With this, we can proceed to choose (1) a variable on which to split the dataset and (2) (in the case of continuous features) a value of the variable at which to place a node.
Recall that the output of a tree is just a constant value for each leaf; here, we simply return the average of all the response values in the region. Thus, we choose a cut point that minimizes the MSE at each step.
Step20: A single decision tree allows us to estimate the signal in a non-parametric way,
but clearly has some issues. In some regions, the model shows high bias and
under-fits the data
(seen in the long flat lines which don't follow the contours of the data),
while in other regions the model shows high variance and over-fits the data
(reflected in the narrow spikes which are influenced by noise in single points).
One way to address this is to use an ensemble method, like random forests, so that the
effects of their over-fitting go away on average.
Here we will use a random forest of 200 trees to reduce the tendency of each
tree to over-fit the data.
Step21: Prediction intervals
The predictions from random forests are not accompanied by estimates of uncertainty, as is the case with Bayesian regression models. However, it is possible to obtain probability intervals using a random forests approach. Since we are using an ensemble of trees, it is possible to track all predicted values for all leaf nodes in a random forest, rather than just the mean or modal value. This results in conditional distributions $P(y|X=x)$ for every x, from which percentiles can be calculated for desired endpoints in a prediction interval. This approach is called quantile regression forests.
To implement quantile regression forests in scikit-learn, we need to allow each tree to grow so that each leaf node contains exactly one value. Then, each tree returns a single response variable, from which a conditional distribution can be approximated. Of course, fully expanding trees will result in overfitting, but these can also be cross-validated.
scikit-learn does not automatically calculate prediction intervals, but the estimators for each constituent tree in the RandomForestRegressor are available, from which individual tree predictions can be made.
Step22: Exercise
Select the optimal random forest regression model for the Nashville daily temperature data via cross-validation in scikit-learn. Use the number of estimators and the maximum leaf nodes as tuning parameters. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
from sklearn.datasets import load_diabetes
# Predictors: "age" "sex" "bmi" "map" "tc" "ldl" "hdl" "tch" "ltg" "glu"
diabetes = load_diabetes()
y = diabetes['target']
bmi, ltg = diabetes['data'][:,[2,8]].T
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
Explanation: Supervised Learning: Decision Trees
One very direct way of performing supervised learning is expressing output as a combination of the predictors (features). An adaptive basis-function model (ABM) is one example of this.
$$f(x) = w_0 + \sum_{j=1}^k w_j \phi_j(\mathbf{x})$$
here, $\phi_j$ is a basis function, which is typically parametric:
$$\phi_j(\mathbf{x}) = \phi_j(\mathbf{x}|\alpha_j)$$
The parameter set for this model is thus $\theta = {\mathbf{w} = w_0,\ldots,w_k; \mathbf{\alpha} = \alpha_1, \ldots, \alpha_k}$. This model is not linear in the parameters.
Decision trees use an ABM to recursively partition the space of predictor variables into a piecewise-constant response surface. We can consider each component $j=1,\ldots,k$ to be a region in the response surface, and $w_j$ the expected response in that region.
$$f(x) = \sum_{j=1}^k w_j I(\mathbf{x} \in R_j)$$
Each parameter $\alpha_j$ encodes both (1) a variable used for splitting and (2) the corresponding threshold value. Specifically, the basis functions define the regions, and the weights encode the response value in each region.
This particular formulation implies a regression-type model, but we can generalize this to classification by storing the distribution over classes in each leaf, instead of the mean response.
To get a sense of how decision trees work, consider a diabetes dataset from which we wish to predict disease progression from a range of predictors. In the plot below, the response variable (target, an index of disease progression) is color-coded as a function of two variables, body mass index (bmi) and a blood serum measurement (ltg).
End of explanation
ltg_split = -0.01
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.vlines(ltg_split, *plt.gca().get_ylim(), linestyles='dashed')
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
Explanation: One approach to building a predictive model is to subdivide the variable space into regions, by sequentially subdividing each variable. For example, if we split ltg at a threshold value of -0.01, it does a reasonable job of isolating the large values in one of the resulting subspaces.
End of explanation
bmi_split = -0.03
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.vlines(ltg_split, *plt.gca().get_ylim(), linestyles='dashed')
plt.hlines(bmi_split, ltg_split, plt.gca().get_xlim()[1], linestyles='dashed')
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
Explanation: However, that region still contains a fair number of low (light) values, so we can similarly bisect the region using a bmi value of -0.03 as a threshold value:
End of explanation
np.mean(y[(bmi>bmi_split) & (ltg>ltg_split)])
np.mean(y[(bmi<=bmi_split) & (ltg>ltg_split)])
np.mean(y[ltg<ltg_split])
Explanation: We can use this partition to create a piecewise-constant function, which returns the average value of the observations in each region defined by the threshold values. We could then use this rudimentary function as a predictive model.
End of explanation
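As an illustration, the three region averages above can be wrapped into a piecewise-constant predictor. The sketch below is only illustrative (the function name predict_progression is not part of the dataset or of scikit-learn):
def predict_progression(ltg_value, bmi_value):
    # Return the mean response of the region the point falls into
    if ltg_value < ltg_split:
        return np.mean(y[ltg < ltg_split])
    elif bmi_value > bmi_split:
        return np.mean(y[(bmi > bmi_split) & (ltg > ltg_split)])
    else:
        return np.mean(y[(bmi <= bmi_split) & (ltg > ltg_split)])
predict_progression(0.02, 0.05), predict_progression(-0.05, 0.0)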
# Write your answer here
Explanation: The choices for splitting the variables here were relatively arbitrary. Better choices can be made using a cost function $C$, such as residual sums of squares (RSS).
$$C = \sum_j \sum_{i \in R_j} (y_i - \hat{y}_{R_j})^2$$
where $\hat{y}_{R_j}$ is the mean response for the training observations in the jth region.
Exercise
Use residual sums of squares to select competitive threshold values for the predictive model defined above
End of explanation
import numpy as np
entropy = lambda p: -np.sum(p * np.log2(p)) if not 0 in p else 0
entropy([.4,.6])
Explanation: The recursive partitioning demonstrated above results in a decision tree. The regions defined by the trees are called terminal nodes. Locations at which a predictor is split, such as bmi=-0.03, are called internal nodes. As with this simple example, splits are not generally symmetric, in the sense that splits do not occur similarly on all branches.
Now consider a subset of three variables from the Titanic dataset, which we would like to use to predict survival from the disaster. The following describes one such decision tree:
We first check if gender of the passenger is male. If "no", we follow the right branch and end up in a leaf where the probability of survival is $p(y=1,x_1=F)=0.73$, so we predict survival ($y=1$) at this node (36% of observations fall under this leaf). If the passenger is male, we then check the age of the passenger. If he is older than 9.5 years, then the probability of survival $p(y=1,x_1=M,x_2>9.5)=0.17$, so we predict death ($y=0$). If, on the other hand, the passenger is younger than 9.5 years, we then check if the number of siblings and spouses on board was higher than 2.5; if "yes", then the probability of survival $p(y=1, x_1=M, x_2 \lt 9.5, x_3>2.5)=0.05$, so we predict death, otherwise we predict survival with $p(y=1, x_1=M, x_2 \lt 9.5, x_3 \lt 2.5)=0.89$. Hence, these probabilities are just the empirical fraction of positive examples that satisfy each conjunction of feature values, which defines a path from the root to a leaf.
There is no way to feasibly evaluate all possible partitions. Instead, the strategy is to use a top-down, greedy approach that is optimal (according to a particular cost function) for the current split only. By "greedy", we mean that at each step it chooses the most advantageous binary partition, not taking into account the impact of the choice on the quality of subsequent partitions.
$$(j^*, t^*) = \underset{j,t}{\text{argmin}} \; C(\{\mathbf{x}_i, y_i : x_{ij} \le t\}) + C(\{\mathbf{x}_i, y_i : x_{ij} \gt t\})$$
where $C$ is a cost function, $j$ and $t$ are a variable index and cutpoint, respectively. We will restrict consideration to binary partitions.
Classification Trees
In addition to regression trees, we can also use decision trees on categorical outcomes, and these are called classification trees. The primary difference in implementation is that residual sums of squares is no longer an appropriate splitting criterion.
Entropy
An alternative splitting criterion for decision tree learning algorithms is information gain. It measures how well a particular attribute distinguishes among different target classifications. Information gain is measured in terms of the expected reduction in the entropy or impurity of the data. The entropy of a set of probabilities is:
$$H(p) = -\sum_i p_i log_2(p_i)$$
If we have a set of binary responses from some variable, all of which are positive/true/1, then knowing the values of the variable does not hold any predictive value for us, since all the outcomes are positive. Hence, the entropy is zero:
End of explanation
entropy([0.5, 0.5])
pvals = np.linspace(0, 1)
plt.plot(pvals, [entropy([p,1-p]) for p in pvals])
Explanation: However, if the variable splits responses into equal numbers of positive and negative values, then entropy is maximized, and we wish to know about the feature:
End of explanation
gini = lambda p: 1. - (np.array(p)**2).sum()
pvals = np.linspace(0, 1)
plt.plot(pvals, [entropy([p,1-p])/2. for p in pvals], label='Entropy')
plt.plot(pvals, [gini([p,1-p]) for p in pvals], label='Gini')
plt.legend()
Explanation: The entropy calculation tells us how much additional information we would obtain with knowledge of the variable.
So, if we have a set of candidate covariates from which to choose as a node in a decision tree, we should choose the one that gives us the most information about the response variable (i.e. the one with the highest entropy).
Misclassification Rate
Alternatively, we can use the misclassification rate:
$$C(j,t) = \frac{1}{n_{jt}} \sum_{y_i: x_{ij} \gt t} I(y_i \ne \hat{y})$$
where $\hat{y}$ is the most probable class label and $n_{jt}$ is the number of observations in the data subset obtained from splitting via $j,t$.
Gini index
The Gini index is simply the expected error rate:
$$C(j,t) = \sum_{k=1}^K \hat{\pi}_{jt}[k] (1 - \hat{\pi}_{jt}[k]) = 1 - \sum_{k=1}^K \hat{\pi}_{jt}[k]^2$$
where $\hat{\pi}_{jt}[k]$ is the probability of an observation being correctly classified as class $k$ for the data subset obtained from splitting via $j,t$ (hence, $(1 - \hat{\pi}_{jt}[k])$ is the misclassification probability).
End of explanation
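For comparison with the entropy and Gini curves plotted above, a one-line sketch of the misclassification rate for a two-class probability vector (the name misclass is illustrative):
misclass = lambda p: 1. - np.max(p)
plt.plot(pvals, [misclass([p, 1-p]) for p in pvals], label='Misclassification')
plt.legend()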
import numpy as np
def info_gain(X, y, feature):
# Calculates the information gain based on entropy
gain = 0
n = len(X)
# List the values that feature can take
values = list(set(X[feature]))
feature_counts = np.zeros(len(values))
E = np.zeros(len(values))
ivalue = 0
# Find where those values appear in X[feature] and the corresponding class
for value in values:
new_y = [y[i] for i,d in enumerate(X[feature].values) if d==value]
feature_counts[ivalue] += len(new_y)
# Get the values in newClasses
class_values = list(set(new_y))
class_counts = np.zeros(len(class_values))
iclass = 0
for v in class_values:
for c in new_y:
if c == v:
class_counts[iclass] += 1
iclass += 1
nc = float(np.sum(class_counts))
new_entropy = entropy([class_counts[c] / nc for c in range(len(class_values))])
E[ivalue] += new_entropy
# Weight the entropy of this subset by the fraction of samples it contains
gain += float(feature_counts[ivalue])/n * E[ivalue]
ivalue += 1
return gain
Explanation: ID3
A given cost function can be used to construct a decision tree via one of several algorithms. The Iterative Dichotomiser 3 (ID3) is one such algorithm, which uses entropy, and a related concept, information gain, to choose features and partitions at each classification step in the tree.
Information gain is the difference between the current entropy of a system and the entropy measured after a feature is chosen. If $S$ is a set of examples and $X$ is a possible feature on which to partition the examples, then:
$$G(S,X) = \text{Entropy}(S) - \sum_{x \in X} \frac{\#(S_x)}{\#(S)} \text{Entropy}(S_x)$$
where $\#$ is the count function and $x$ is a particular value of $X$.
Let's say $S$ is a set of survival events, $S = \{s_1=survived, s_2=died, s_3=died, s_4=died\}$ and a particular variable $X$ can have values $\{x_1, x_2, x_3\}$. To perform a sample calculation of information gain, we will say that:
$X(s_1) = x_2$
$X(s_2) = x_2$
$X(s_3) = x_3$
$X(s_4) = x_1$
The current entropy of this state is:
$$\begin{align}
\text{Entropy}(S) &= -p^{(+)} \log_2(p^{(+)}) - p^{(-)} \log_2(p^{(-)}) \\
&= -0.25 \log_2(0.25) - 0.75 \log_2(0.75) \\
&= 0.5 + 0.311 = 0.811
\end{align}$$
Now, we need to compute the information after selecting variable $X$, which is the sum of three terms:
$$\begin{align}
\frac{\#(S_{x_1})}{\#(S)} \text{Entropy}(S_{x_1}) &= 0.25 (-0 \log_2(0) - 1 \log_2(1)) = 0 \\
\frac{\#(S_{x_2})}{\#(S)} \text{Entropy}(S_{x_2}) &= 0.5 (-0.5 \log_2(0.5) - 0.5 \log_2(0.5)) = 0.5 \\
\frac{\#(S_{x_3})}{\#(S)} \text{Entropy}(S_{x_3}) &= 0.25 (-0 \log_2(0) - 1 \log_2(1)) = 0
\end{align}$$
Therefore, the information gain is:
$$G(S,X) = 0.811 - (0 + 0.5 + 0) = 0.311$$
End of explanation
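As a quick numeric check of the worked example above, using the entropy function defined earlier (the intermediate names are illustrative):
S_entropy = entropy([0.25, 0.75])                              # approximately 0.811
info_after_split = 0.25*0 + 0.5*entropy([0.5, 0.5]) + 0.25*0   # approximately 0.5
S_entropy - info_after_split                                   # approximately 0.311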
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.head(1)
Explanation: Consider a few variables from the titanic database:
End of explanation
y = titanic['survived']
X = titanic[['pclass','sex','embarked']]
X['adult'] = titanic.age<17
info_gain(X, y, 'pclass')
info_gain(X, y, 'sex')
info_gain(X, y, 'embarked')
info_gain(X, y, 'adult')
Explanation: Here, we have selected passenger class (pclass), sex, port of embarkation (embarked), and a derived variable called adult. We can calculate the information gain for each of these.
End of explanation
wine = pd.read_table("../data/wine.dat", sep='\s+')
attributes = ['Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']
grape = wine.pop('region')
y = grape
wine.columns = attributes
X = wine
from sklearn import tree
from sklearn import model_selection
X_train, X_test, y_train, y_test = model_selection.train_test_split(
X, y, test_size=0.4, random_state=0)
clf = tree.DecisionTreeClassifier(criterion='entropy',
max_features="auto",
min_samples_leaf=10)
clf.fit(X_train, y_train)
Explanation: Hence, the ID3 algorithm computes the information gain for each variable, selecting the one with the highest value (in this case, adult). In this way, it searches the "tree space" according to a greedy strategy.
A tree can be constructed by recursively selecting the feature from the current dataset with the largest information gain, then removing it from the dataset. Recursion stops when there are either no variables remaining, or there is only one class left in the subset (e.g. all True or all False).
The ID3 algorithm is as follows:
if all response data have the same class:
return leaf with data label
else if no features:
return leaf with most common label
else:
choose variable $X'$ that maximizes information gain to be a tree node
add branch from node for each value of $X'$
for each branch of node:
calculate $S_{x}$ by removing $X'$ from $S$
set $S=S_{x}$ and call algorithm again
The greedy approach of maximizing information gain at each step tends to bias solutions towards smaller trees.
Decision Trees in scikit-learn
Classification trees, either binary or multi-class, are implemented in scikit-learn in the DecisionTreeClassifier class. Where trees are binary, it expects the response variable to be coded as [-1,1] for negative and positive outcomes.
Let's build a decision tree on a wine dataset.
End of explanation
with open("wine.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f)
! dot -Tpng wine.dot -o wine.png
for i,x in enumerate(X.columns):
print(i,x)
from IPython.core.display import Image
Image("wine.png")
preds = clf.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
Explanation: If you have GraphViz installed, you can draw the resulting tree:
End of explanation
from sklearn.ensemble import BaggingClassifier
bc = BaggingClassifier(n_jobs=4, oob_score=True)
bc
bc.fit(X_train, y_train)
preds = bc.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
Explanation: Pruning
Despite the inductive bias associated with trees that tend to make them small, the ID3 algorithm continues choosing nodes and branches until either it runs out of variables, or all outputs are of the same class. This can clearly lead to overfit trees.
To prevent overfitting, we can stop growing the tree if the information gain (or reduction in error, etc.) is not sufficient to justify the extra complexity of adding another node. However, this simple rule is not optimal, because an uninformative subtree can lead to informative ones later on.
The standard approach is therefore to grow a full tree, and then to prune it. The easiest approach is to remove branches that give the least increase in the error (information gain). To determine how far back to prune, we can evaluate the cross-validated error on each candidate pruning, and then pick the tree whose CV error is within 1 standard error of the minimum.
Analogous to the lasso or ridge regression, we can penalize the number of terminal nodes in a tree:
$$\sum_{m=1}^{|T|} \sum_{x_i \in R_m} (y_i - \hat{y}_{R_m})^2 + \alpha |T|$$
where $|T|$ is the number of terminal nodes in tree T.
Pruned Decision Tree Algorithm
Use recursive binary splitting to grow a large tree, such that each terminal node has fewer than some minimum number of observations.
Apply pruning to obtain a sequence of best subtrees, as a function of $\alpha$.
Use k-fold cross-validation to choose $\alpha$. Average results and pick $\alpha$ to minimize the average error.
Return subtree from (2) that corresponds to chosen $\alpha$.
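A minimal sketch of this pruning strategy using scikit-learn's built-in cost-complexity pruning is shown below; it assumes a reasonably recent scikit-learn (0.22 or later, which provides cost_complexity_pruning_path and ccp_alpha) and reuses the wine training split defined above:
path = clf.cost_complexity_pruning_path(X_train, y_train)
cv_scores = [model_selection.cross_val_score(
                 tree.DecisionTreeClassifier(criterion='entropy', ccp_alpha=a),
                 X_train, y_train, cv=5).mean()
             for a in path.ccp_alphas]
best_alpha = path.ccp_alphas[np.argmax(cv_scores)]
pruned = tree.DecisionTreeClassifier(criterion='entropy', ccp_alpha=best_alpha).fit(X_train, y_train)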
Random Forests
Decision trees have several advantages:
ease of interpretation
handles continuous and discrete features
invariant to monotone transformation of features
variable selection automated
robust
scalable
However, relative to other statistical learning methods, trees do not predict very accurately, due to the greedy nature of the tree construction algorithm. Also, trees tend to be unstable, as small changes to the inputs can have large effects on the structure of the tree; poor decisions near the root of the tree will propagate to the rest of the tree. Hence, trees are high variance (i.e. noisy) estimators.
One way to reduce the variance of an estimate is to average together many estimates. In the case of decision trees, we can train $T$ different trees on random subsets of the data (with replacement) then average according to:
$$\hat{f}(\mathbf{x}) = \frac{1}{T} \sum_{i=1}^T f_t(\mathbf{x})$$
where $f_t$ is the $t^{th}$ tree. This approach is called "bootstrap aggregating", or bagging.
Note that, since we are averaging over trees, there is no need to prune. With bagging, we reduce variance by averaging, rather than by pruning.
End of explanation
bc.oob_score_
Explanation: Test error of a bagged model is measured by estimating out-of-bag error.
On average, each bagged tree uses about 2/3 of observations, leaving the remaining third as "out-of-bag". The response for the ith observation for each of the trees in which that observation was excluded (on average, B/3) is averaged. This is essentially the same as performing leave-one-out (LOO) cross-validation.
End of explanation
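A quick numeric check of the "about 2/3" claim: the probability that a given observation appears at least once in a bootstrap sample of size n approaches 1 - 1/e (about 0.632), leaving roughly a third of observations out-of-bag.
n = len(X_train)
in_bag = 1 - (1 - 1/n)**n
in_bag, 1 - in_bag   # roughly (0.632, 0.368)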
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_jobs=4)
rf.fit(X_train, y_train)
preds = rf.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
Explanation: This approach is an ensemble learning method, because it takes a set of weak learners, and combines them to construct a strong learner that is more robust, with lower generalization error.
An average of B trees, each with variance $\sigma^2$, has variance $\sigma^2/B$. If the variables are simply identically distributed, with positive pairwise correlation $\rho$, then the variance of the average of the B trees is:
$$\rho \sigma^2 + \frac{1-\rho}{B}\sigma^2$$
As the number of trees becomes large, the second term goes to zero. Further reductions in variance are limited by the size of the correlation among the trees $\rho$.
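A small numeric illustration of this formula (the values of rho and sigma below are arbitrary, chosen only for the example): as B grows, the variance of the average approaches rho * sigma**2.
rho, sigma = 0.5, 1.0
for B in [1, 10, 100, 1000]:
    print(B, rho*sigma**2 + (1 - rho)/B * sigma**2)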
Random forests improves upon bagging by creating a set of decision trees that are less correlated than bootstrapped trees. This is done by selecting from only a subset $m$ out of $M$ possible predictors at each split. Typically, we choose approximately the square root of the available number.
This procedure is used to create a set of trees, most of which will be poor at predicting a given observation. However, classification is based on a majority vote of the constituent trees.
End of explanation
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. %s (%f)" % (f + 1, X.columns[indices[f]], importances[indices[f]]))
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", align="center")
plt.xticks(range(X.shape[1]), X.columns[indices], rotation=90)
plt.xlim([-1, X.shape[1]]);
Explanation: With random forests, it is possible to quantify the relative importance of feature inputs for classification. In scikit-learn, the Gini index (recall, a measure of error reduction) is calculated for each internal node that splits on a particular feature of a given tree, which is multiplied by the number of samples that were routed to the node (this approximates the probability of reaching that node). For each variable, this quantity is averaged over the trees in the forest to yield a measure of importance.
End of explanation
rf = RandomForestClassifier(n_jobs=4, criterion='entropy')
rf.fit(X_train, y_train)
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. %s (%f)" % (f + 1, X.columns[indices[f]], importances[indices[f]]))
Explanation: RandomForestClassifier uses the Gini impurity index by default; one may instead use the entropy information gain as a criterion.
End of explanation
daily_temps = pd.read_table("../data/TNNASHVI.txt", sep='\s+',
names=['month','day','year','temp'], na_values=-99)
daily_temps.temp[daily_temps.year>2010].plot(style='b.', figsize=(10,6))
Explanation: Decision Tree Regression
While it may not be apparent how to use trees for regression analysis, it requires only a straightforward modification to the algorithm. A popular tree-based regression algorithm is the classification and regression tree (CART).
The file TNNASHVI.txt in your data directory contains daily temperature readings for Nashville, courtesy of the Average Daily Temperature Archive. This data, as one would expect, oscillates annually. We can use a decision tree regression model to fit the data.
End of explanation
# Transmogrify data
y = daily_temps.temp[daily_temps.year>2010]
X = np.atleast_2d(np.arange(len(y))).T
from sklearn.tree import DecisionTreeRegressor
clf = DecisionTreeRegressor(max_depth=7, min_samples_leaf=2)
clf.fit(X, y)
X_fit = np.linspace(0, len(X), 1000).reshape((-1, 1))
y_fit_1 = clf.predict(X_fit)
plt.plot(X.ravel(), y, '.k', alpha=0.3)
plt.plot(X_fit.ravel(), y_fit_1, color='red')
Explanation: In this context, none of the cost functions considered so far would be appropriate. Instead, it would be more suitable to use something like mean squared error (MSE) to guide the growth of the tree. With this, we can proceed to choose (1) a variable on which to split the dataset and (2) (in the case of continuous features) a value of the variable at which to place a node.
Recall that the output of a tree is just a constant value for each leaf; here, we simply return the average of all the response values in the region. Thus, we choose a cut point that minimizes the MSE at each step.
End of explanation
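To make the cut-point selection concrete, here is a minimal brute-force sketch (the helper name best_split_mse is illustrative and is not how DecisionTreeRegressor is implemented internally): it scans candidate thresholds on the single feature and keeps the one with the lowest total squared error.
def best_split_mse(x, targets, candidates):
    best_t, best_mse = None, np.inf
    for t in candidates:
        left, right = targets[x[:, 0] <= t], targets[x[:, 0] > t]
        if len(left) == 0 or len(right) == 0:
            continue
        # Weighted MSE of the two resulting regions
        mse = (((left - left.mean())**2).sum() +
               ((right - right.mean())**2).sum()) / len(targets)
        if mse < best_mse:
            best_t, best_mse = t, mse
    return best_t, best_mse
best_split_mse(X, y.values, np.linspace(0, len(X), 50))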
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor(n_estimators=200, max_depth=9,
min_samples_leaf=10)
clf.fit(X, y)
y_fit_200 = clf.predict(X_fit)
plt.plot(X.ravel(), y, '.k', alpha=0.3)
plt.plot(X_fit.ravel(), y_fit_200, color='red')
Explanation: A single decision tree allows us to estimate the signal in a non-parametric way,
but clearly has some issues. In some regions, the model shows high bias and
under-fits the data
(seen in the long flat lines which don't follow the contours of the data),
while in other regions the model shows high variance and over-fits the data
(reflected in the narrow spikes which are influenced by noise in single points).
One way to address this is to use an ensemble method, like random forests, so that the
effects of their over-fitting go away on average.
Here we will use a random forest of 200 trees to reduce the tendency of each
tree to over-fit the data.
End of explanation
def prediction_intervals(mod, X, alpha=0.05):
preds = [pred.predict(X) for pred in mod.estimators_]
lower = np.percentile(preds, 100*(alpha/2), axis=0)
upper = np.percentile(preds, 100*(1-alpha/2), axis=0)
return lower, upper, np.array(preds)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
X, y, test_size=0.4, random_state=0)
clf = RandomForestRegressor(n_estimators=1000, min_samples_leaf=1)
clf.fit(X_train, y_train)
y_fit_200 = clf.predict(X_fit)
lower, upper, preds = prediction_intervals(clf, X_test, alpha=0.1)
x_sorted = np.sort(X_test.ravel())
order = np.argsort(X_test.ravel())
plt.errorbar(x_sorted, y_test.values[order],
yerr=[(y_test.values-lower)[order], (upper-y_test.values)[order]])
plt.plot(X_test, y_test, '.r', alpha=0.3)
Explanation: Prediction intervals
The predictions from random forests are not accompanied by estimates of uncertainty, unlike Bayesian regression models. However, it is possible to obtain probability intervals using a random forests approach. Since we are using an ensemble of trees, it is possible to track all predicted values for all leaf nodes in a random forest, rather than just the mean or modal value. This results in conditional distributions $P(y|X=x)$ for every x, from which percentiles can be calculated for desired endpoints in a prediction interval. This approach is called quantile regression forests.
To implement quantile regression forests in scikit-learn, we need to allow each tree to grow so that each leaf node contains exactly one value. Then, each tree returns a single response variable, from which a conditional distribution can be approximated. Of course, fully expanding trees will result in overfitting, but these can also be cross-validated.
scikit-learn does not automatically calculate prediction intervals, but the estimators for each constituent tree in the RandomForestRegressor are available, from which individual tree predictions can be made.
End of explanation
# Write your answer here
Explanation: Exercise
Select the optimal random forest regression model for the Nashville daily temperature data via cross-validation in scikit-learn. Use the number of estimators and the maximum leaf nodes as tuning parameters.
End of explanation |
2,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Bayesian Optimization with GPyOpt
Written by Javier Gonzalez, Amazon Research Cambridge
Last updated Monday, 22 May 2017.
=====================================================================================================
1. How to use GPyOpt?
The Basics of Bayesian Optimization
Gaussian Processes
Acquisition functions
Applications of Bayesian Optimization
1D optimization example
2D optimization example
=====================================================================================================
1. How to use GPyOpt?
We start by loading GPyOpt and GPy.
Step1: GPyOpt is easy to use as a black-box functions optimizer. To start you only need
Step2: A set of box constraints, the interval [-1,1] in our case. You can define a list of dictionaries where each element defines the name, type and domain of the variables.
Step3: A budget, or number of allowed evaluations of $f$.
Step4: With these three pieces of information GPyOpt has enough to find the minimum of $f$ in the selected region. GPyOpt solves the problem in two steps. First, you need to create a GPyOpt object that stores the problem (f and box-constraints). You can do it as follows.
Step5: Next you need to run the optimization for the given budget of iterations. This bit is a bit slow because many default options are used. In the next notebooks of this manual you can learn how to change other parameters to improve the optimization performance.
Step6: Now you can check the best found location $x^*$ by
Step7: and the predicted value of $f$ at the optimum $x^*$ by
Step8: And that's it! Keep reading to learn how GPyOpt uses Bayesian Optimization to solve this and other optimization problems. You will also learn all the features and options that you can use to solve your problems efficiently.
=====================================================================================================
2. The Basics of Bayesian Optimization
Bayesian optimization (BO) is a strategy for global optimization of black-box functions (Snoek et al., 2012). Let $f
Step9: 3. One dimensional example
In this example we show how GPyOpt works in a one-dimensional example a bit more difficult than the one we analyzed in Section 3. Let's consider here the Forrester function
$$f(x) =(6x-2)^2 \sin(12x-4)$$ defined on the interval $[0, 1]$.
The minimum of this function is located at $x_{min}=0.78$. The Forrester function is part of the benchmark of functions of GPyOpt. To create the true function, the perturbed version and boundaries of the problem you need to run the following cell.
Step10: We plot the true Forrester function.
Step11: As we did in Section 3, we need to create the GPyOpt object that will run the optimization. We specify the function, the boundaries and we add the type of acquisition function to use.
Step12: Now we want to run the optimization. Apart from the number of iterations you can select
how you want to optimize the acquisition function. You can run a number of local optimizers (acqu_optimize_restart) at random or in grid (acqu_optimize_method).
Step13: When the optimization is done you should receive a message describing if the method converged or if the maximum number of iterations was reached. In one dimensional examples, you can see the result of the optimization as follows.
Step14: In problems of any dimension two evaluations plots are available.
The distance between the last two observations.
The value of $f$ at the best location previous to each iteration.
To see these plots just run the following cell.
Step15: Now let's make a video to track what the algorithm is doing in each iteration. Let's use the LCB in this case with parameter equal to 2.
4. Two dimensional example
Next, we try a 2-dimensional example. In this case we minimize the Six-hump camel function
$$f(x_1,x_2) = \left(4-2.1x_1^2 + \frac{x_1^4}{3} \right)x_1^2 + x_1x_2 + (-4 +4x_2^2)x_2^2,$$
in $[-3,3]\times [-2,2]$. This function has two global minima, at $(0.0898,-0.7126)$ and $(-0.0898,0.7126)$. As in the previous case we create the function, which is already in GPyOpt. In this case we generate observations of the function perturbed with white noise of $sd=0.1$.
Step16: We create the GPyOpt object. In this case we use the Lower Confidence bound acquisition function to solve the problem.
Step17: We run the optimization for 40 iterations and show the evaluation plot and the acquisition function.
Step18: Finally, we plot the acquisition function and the convergence plot. | Python Code:
%pylab inline
import GPy
import GPyOpt
from numpy.random import seed
import matplotlib
Explanation: Introduction to Bayesian Optimization with GPyOpt
Written by Javier Gonzalez, Amazon Research Cambridge
Last updated Monday, 22 May 2017.
=====================================================================================================
1. How to use GPyOpt?
The Basics of Bayesian Optimization
Gaussian Processes
Acquisition functions
Applications of Bayesian Optimization
1D optimization example
2D optimization example
=====================================================================================================
1. How to use GPyOpt?
We start by loading GPyOpt and GPy.
End of explanation
def myf(x):
return (2*x)**2
Explanation: GPyOpt is easy to use as a black-box functions optimizer. To start you only need:
Your favorite function $f$ to minimize. We use $f(x)=(2x)^2$ in this toy example, whose global minimum is at x=0.
End of explanation
bounds = [{'name': 'var_1', 'type': 'continuous', 'domain': (-1,1)}]
Explanation: A set of box constraints, the interval [-1,1] in our case. You can define a list of dictionaries where each element defines the name, type and domain of the variables.
End of explanation
max_iter = 15
Explanation: A budget, or number of allowed evaluations of $f$.
End of explanation
myProblem = GPyOpt.methods.BayesianOptimization(myf,bounds)
Explanation: With these three pieces of information GPyOpt has enough to find the minimum of $f$ in the selected region. GPyOpt solves the problem in two steps. First, you need to create a GPyOpt object that stores the problem (f and box-constraints). You can do it as follows.
End of explanation
myProblem.run_optimization(max_iter)
Explanation: Next you need to run the optimization for the given budget of iterations. This bit is a bit slow because many default options are used. In the next notebooks of this manual you can learn how to change other parameters to improve the optimization performance.
End of explanation
myProblem.x_opt
Explanation: Now you can check the best found location $x^*$ by
End of explanation
myProblem.fx_opt
Explanation: and the predicted value of $f$ at the optimum $x^*$ by
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('ualnbKfkc3Q')
Explanation: And that's it! Keep reading to learn how GPyOpt uses Bayesian Optimization to solve this and other optimization problems. You will also learn all the features and options that you can use to solve your problems efficiently.
=====================================================================================================
2. The Basics of Bayesian Optimization
Bayesian optimization (BO) is a strategy for global optimization of black-box functions (Snoek et al., 2012). Let $f: {\mathcal X} \to R$ be an L-Lipschitz continuous function defined on a compact subset ${\mathcal X} \subseteq R^d$. We are interested in solving the global optimization problem of finding
$$ x_{M} = \arg \min_{x \in {\mathcal X}} f(x). $$
We assume that $f$ is a black-box from which only perturbed evaluations of the type $y_i = f(x_i) + \epsilon_i$, with $\epsilon_i \sim\mathcal{N}(0,\psi^2)$, are available. The goal is to make a series of $x_1,\dots,x_N$ evaluations of $f$ such that the cumulative regret
$$r_N= Nf(x_{M})- \sum_{n=1}^N f(x_n),$$
is minimized. Essentially, $r_N$ is minimized if we start evaluating $f$ at $x_{M}$ as soon as possible.
There are two crucial bits in any Bayesian Optimization (BO) procedure.
Define a prior probability measure on $f$: this function will capture our prior beliefs about $f$. The prior will be updated to a 'posterior' using the available data.
Define an acquisition function $acqu(x)$: this is a criterion to decide where to sample next in order to gain the maximum information about the location of the global maximum of $f$.
Every time a new data point is collected. The model is re-estimated and the acquisition function optimized again until convergence. Given a prior over the function $f$ and an acquisition function, a BO procedure will converge to the optimum of $f$ under some conditions (Bull, 2011).
2.1 Prior probability measure on $f$: Gaussian processes
A Gaussian process (GP) is a probability distribution across classes of functions, typically smooth, such that each linear finite-dimensional restriction is multivariate Gaussian (Rasmussen and Williams, 2006). GPs are fully parametrized by a mean $\mu(x)$ and a covariance function $k(x,x')$. Without loss of generality $\mu(x)$ is assumed to be zero. The covariance function $k(x,x')$ characterizes the smoothness and other properties of $f$. It is known as the
kernel of the process and has to be continuous, symmetric and positive definite. A widely used kernel is the square exponential, given by
$$ k(x,x') = l \cdot \exp{ \left(-\frac{\|x-x'\|^2}{2\sigma^2}\right)} $$
where $\sigma^2$ and $l$ are positive parameters.
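A minimal numpy sketch of the squared exponential kernel defined above (the function name and default parameter values are illustrative):
import numpy as np
def sq_exp_kernel(X1, X2, l=1.0, sigma=0.5):
    # k(x, x') = l * exp(-||x - x'||^2 / (2 sigma^2))
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :])**2, axis=-1)
    return l * np.exp(-sq_dists / (2 * sigma**2))
K = sq_exp_kernel(np.random.rand(5, 1), np.random.rand(5, 1))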
To denote that $f$ is a sample from a GP with mean $\mu$ and covariance $k$ we write
$$f(x) \sim \mathcal{GP}(\mu(x),k(x,x')).$$
For regression tasks, the most important feature of GPs is that process priors are conjugate to the likelihood from finitely many observations $y= (y_1,\dots,y_n)^T$ and $X ={x_1,...,x_n}$, $x_i\in \mathcal{X}$ of the form $y_i = f(x_i) + \epsilon_i $
where $\epsilon_i \sim \mathcal{N} (0,\sigma^2)$. We obtain the Gaussian posterior $f(x^*)|X, y, \theta \sim \mathcal{N}(\mu(x^*),\sigma^2(x^*))$, where $\mu(x^*)$ and $\sigma^2(x^*)$ have closed form. See (Rasmussen and Williams, 2006) for details.
2.2 Acquisition Function
Acquisition functions are designed to represent our beliefs over the maximum of $f(x)$. Denote by $\theta$ the parameters of the GP model and by ${x_i,y_i}$ the available sample. Three of the most common acquisition functions, all available in GPyOpt, are:
Maximum probability of improvement (MPI):
$$acqu_{MPI}(x;{x_n,y_n},\theta) = \Phi(\gamma(x)), \mbox{where}\ \gamma(x)=\frac{\mu(x;{x_n,y_n},\theta)-f(x_{best})-\psi}{\sigma(x;{x_n,y_n},\theta)}.$$
Expected improvement (EI):
$$acqu_{EI}(x;{x_n,y_n},\theta) = \sigma(x;{x_n,y_n},\theta) (\gamma(x) \Phi(\gamma(x))) + N(\gamma(x);0,1).$$
Upper confidence bound (UCB):
$$acqu_{UCB}(x;{x_n,y_n},\theta) = -\mu(x;{x_n,y_n},\theta)+\psi\sigma(x;{x_n,y_n},\theta).$$
$\psi$ is a tunable parameter that helps to make the acquisition functions more flexible. Also, in the case of the UCB, the parameter $\psi$ is useful to define the balance between the importance we give to the mean and the variance of the model. This is known as the exploration/exploitation trade off.
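A short sketch transcribing two of these acquisition functions into code, given a posterior mean mu and standard deviation sd at a candidate point (the function names are illustrative; GPyOpt computes these quantities internally):
from scipy.stats import norm
def gamma_val(mu, sd, f_best, psi=0.01):
    return (mu - f_best - psi) / sd
def acqu_MPI(mu, sd, f_best, psi=0.01):
    return norm.cdf(gamma_val(mu, sd, f_best, psi))
def acqu_UCB(mu, sd, psi=2.0):
    return -mu + psi * sd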
2.3 Applications of Bayesian Optimization
Bayesian Optimization has been applied to solve a wide range of problems. Among many other, some nice applications of Bayesian Optimization include:
Sensor networks (http://www.robots.ox.ac.uk/~parg/pubs/ipsn673-garnett.pdf),
Automatic algorithm configuration (http://www.cs.ubc.ca/labs/beta/Projects/SMAC/papers/11-LION5-SMAC.pdf),
Deep learning (http://www.mlss2014.com/files/defreitas_slides1.pdf),
Gene design (http://bayesopt.github.io/papers/paper5.pdf),
and a long etc!
In this Youtube video you can see Bayesian Optimization working in a real time in a robotics example. (Calandra1 et al. 2008)
End of explanation
%pylab inline
import GPy
import GPyOpt
# Create the true and perturbed Forrester function and the boundaries of the problem
f_true= GPyOpt.objective_examples.experiments1d.forrester() # noisy version
bounds = [{'name': 'var_1', 'type': 'continuous', 'domain': (0,1)}] # problem constraints
Explanation: 3. One dimensional example
In this example we show how GPyOpt works in a one-dimensional example a bit more difficult that the one we analyzed in Section 3. Let's consider here the Forrester function
$$f(x) =(6x-2)^2 \sin(12x-4)$$ defined on the interval $[0, 1]$.
The minimum of this function is located at $x_{min}=0.78$. The Forrester function is part of the benchmark of functions of GPyOpt. To create the true function, the perturbed version and boundaries of the problem you need to run the following cell.
End of explanation
f_true.plot()
Explanation: We plot the true Forrester function.
End of explanation
# Creates the GPyOpt object with the model and acquisition function
seed(123)
myBopt = GPyOpt.methods.BayesianOptimization(f=f_true.f, # function to optimize
domain=bounds, # box-constraints of the problem
acquisition_type='EI',
exact_feval = True) # Selects the Expected improvement
Explanation: As we did in Section 3, we need to create the GPyOpt object that will run the optimization. We specify the function, the boundaries and we add the type of acquisition function to use.
End of explanation
# Run the optimization
max_iter = 15 # evaluation budget
max_time = 60 # time budget
eps = 10e-6 # Minimum allowed distance between the last two observations
myBopt.run_optimization(max_iter, max_time, eps)
Explanation: Now we want to run the optimization. Apart from the number of iterations you can select
how you want to optimize the acquisition function. You can run a number of local optimizers (acqu_optimize_restart) at random or in grid (acqu_optimize_method).
End of explanation
myBopt.plot_acquisition()
myBopt.plot_convergence()
Explanation: When the optimization is done you should receive a message describing if the method converged or if the maximum number of iterations was reached. In one dimensional examples, you can see the result of the optimization as follows.
End of explanation
myBopt.plot_convergence()
Explanation: In problems of any dimension two evaluations plots are available.
The distance between the last two observations.
The value of $f$ at the best location previous to each iteration.
To see these plots just run the following cell.
End of explanation
# create the object function
f_true = GPyOpt.objective_examples.experiments2d.sixhumpcamel()
f_sim = GPyOpt.objective_examples.experiments2d.sixhumpcamel(sd = 0.1)
bounds =[{'name': 'var_1', 'type': 'continuous', 'domain': f_true.bounds[0]},
{'name': 'var_2', 'type': 'continuous', 'domain': f_true.bounds[1]}]
f_true.plot()
Explanation: Now let's make a video to track what the algorithm is doing in each iteration. Let's use the LCB in this case with parameter equal to 2.
4. Two dimensional example
Next, we try a 2-dimensional example. In this case we minimize the Six-hump camel function
$$f(x_1,x_2) = \left(4-2.1x_1^2 + \frac{x_1^4}{3} \right)x_1^2 + x_1x_2 + (-4 +4x_2^2)x_2^2,$$
in $[-3,3]\times [-2,2]$. This function has two global minima, at $(0.0898,-0.7126)$ and $(-0.0898,0.7126)$. As in the previous case we create the function, which is already in GPyOpt. In this case we generate observations of the function perturbed with white noise of $sd=0.1$.
End of explanation
# Creates the GPyOpt object that we will use to run the optimization
myBopt2D = GPyOpt.methods.BayesianOptimization(f_sim.f,
domain=bounds,
model_type = 'GP',
acquisition_type='EI',
normalize_Y = True,
acquisition_weight = 2)
Explanation: We create the GPyOpt object. In this case we use the Lower Confidence bound acquisition function to solve the problem.
End of explanation
# runs the optimization
max_iter = 40 # maximum of 40 iterations
max_time = 60 # maximum time 60 seconds
myBopt2D.run_optimization(max_iter,max_time,verbosity=False)
Explanation: We run the optimization for 40 iterations and show the evaluation plot and the acquisition function.
End of explanation
myBopt2D.plot_acquisition()
myBopt2D.plot_convergence()
Explanation: Finally, we plot the acquisition function and the convergence plot.
End of explanation |
2,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Monte Carlo simulations of mutation accumulation in stem cells
This Notebook lives at Github.
Many adult tissues renew themselves continually via a pool of cells called stem cells. Stem cell divisions are either asymmetric—in which one daughter cell remains a stem cell and one does not—or symmetric, in which both daughter cells adopt the same fate, either stem or non-stem. Recent studies show that in many tissues stem cell division patterns are strongly biased toward the symmetric outcome, raising the question of whether symmetry confers some benefit.
The DNA of stem cells is continually subject to damage. For example, errors occur in DNA when it is replicated, and if this damage is not repaired it can lead to cancer. In this publication, we showed that symmetry, via extinction of damaged stem-cell clones, reduces the lifetime risk of accumulating phenotypically silent heritable damage (mutations or aberrant epigenetic changes) in individual stem cells.
That work focused on the homeostatic case in which stem cells, though dividing, do not change their number, e.g., as occurs in the adult intestine. Here, I assess the expected rate of mutation accumulation in a non-homeostatic tissue, e.g. a developing tumor.
One non-homeostatic model of mutation accumulation is a discrete-time multi-type branching process. The stochastic process starts with a number of wild-type stem cells, each cell accumulates mutations at each of $K$ different genes, and the process ends when the first stem cell with $K$ mutations arises. Under this model, the division of a stem cell $S_i$ with $i = 0 \ldots K-1$ mutations results in one of five possible outcomes with the following probabilities
Step1: The jagged lines represent the fluctuating output of the Monte Carlo simulation while the smooth lines represent the predictions of a deterministic model of mutation accumulation described by the following dynamical equations
\begin{equation}
dn_i/dt = n_{i-1}u_{i-1} + n_i \nu
\end{equation}
where $n_i$ represents the average number of stem cells with $i$ mutations, and $\nu = (2r-1)s$ represents the average tumor growth rate. As you can see, the agreement between simulation and the formula is good. What the formula doesn't predict, however, but which the simulation does, is the probability that a tissue of a given age contains at least one stem cell with $K$ mutations | Python Code:
from IPython.display import Image
Image(filename="trajectories.png", width=350, height=350)
Explanation: Analysis of Monte Carlo simulations of mutation accumulation in stem cells
This Notebook lives at Github.
Many adult tissues renew themselves continually via a pool of cells called stem cells. Stem cell divisions are either asymmetric—in which one daughter cell remains a stem cell and one does not—or symmetric, in which both daughter cells adopt the same fate, either stem or non-stem. Recent studies show that in many tissues stem cell division patterns are strongly biased toward the symmetric outcome, raising the question of whether symmetry confers some benefit.
The DNA of stem cells is continually subject to damage. For example, errors occur in DNA when it is replicated, and if this damage is not repaired it can lead to cancer. In this publication, we showed that symmetry, via extinction of damaged stem-cell clones, reduces the lifetime risk of accumulating phenotypically silent heritable damage (mutations or aberrant epigenetic changes) in individual stem cells.
That work focused on the homeostatic case in which stem cells, though dividing, do not change their number, e.g., as occurs in the adult intestine. Here, I assess the expected rate of mutation accumulation in a non-homeostatic tissue, e.g. a developing tumor.
One non-homeostatic model of mutation accumulation is a discrete-time multi-type branching process. The stochastic process starts with a number of wild-type stem cells, each cell accumulates mutations at each of $K$ different genes, and the process ends when the first stem cell with $K$ mutations arises. Under this model, the division of a stem cell $S_i$ with $i = 0 \ldots K-1$ mutations results in one of five possible outcomes with the following probabilities:
\begin{equation}
S_i \xrightarrow{\hspace{1cm}}
\left\{
\begin{array}{lcrcl}
S_i + S_i & & & \mbox{with prob.} & rs (1-2u_i) \\
S_i + S_{i+1} & & & \mbox{with prob.} & rs \, 2 u_i \\
S_i & & & \mbox{with prob.} & (1-s) (1-u_i) \\
S_{i+1} & & & \mbox{with prob.} & (1-s)u_i \\
\emptyset & & & \mbox{with prob.} & (1-r)s
\end{array}
\right.
\end{equation}
Here, the fraction of divisions that are symmetric — producing daughter cells with a common fate — is denoted $s$; the mutation rate for a stem cell with $i$ mutations is denoted $u_i$; and $r$ measures the imbalance of symmetric renewal versus symmetric differentiation.
To simulate this stochastic process, I used mutation_accumulation, following these instructions. The following is some data output from the Monte Carlo simulation. It is a time course tracking the predicted number of wild-type (blue; 0 mutations), single-mutant (green), and double-mutant stem cells (red) prior to the appearance of the first triple-mutant stem cell (not shown).
End of explanation
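A hedged sketch (not the mutation_accumulation code itself) of sampling a single division outcome for a stem cell carrying i mutations, using the five probabilities above; the function name and parameter values are illustrative.
import numpy as np
def sample_division(i, r, s, u):
    outcomes = ['S_i + S_i', 'S_i + S_i+1', 'S_i', 'S_i+1', 'loss']
    probs = [r*s*(1 - 2*u[i]), r*s*2*u[i],
             (1 - s)*(1 - u[i]), (1 - s)*u[i], (1 - r)*s]
    return np.random.choice(outcomes, p=probs)
sample_division(0, r=0.55, s=1.0, u=[1e-3, 1e-3, 1e-3])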
import numpy as np
data = np.loadtxt('data/cdf/histogram__pop0__spe3__xxx0.dat')
from matplotlib import pyplot as plt
%matplotlib inline
fontsize = 16
fontsize_tick = 14
fig = plt.figure(figsize=(7,5), facecolor='w')
ax = fig.add_subplot(111)
ax.plot(data[:,0], data[:,1], linewidth=2)
ax.set_xlabel('time (stem cell cycles)', fontsize=fontsize)
ax.set_ylabel('Risk of accumulating three mutations', fontsize=fontsize)
ax.tick_params(axis='both', which='major', labelsize=fontsize_tick)
Explanation: The jagged lines represent the fluctuating output of the Monte Carlo simulation while the smooth lines represent the predictions of a deterministic model of mutation accumulation described by the following dynamical equations
\begin{equation}
dn_i/dt = n_{i-1}u_{i-1} + n_i \nu
\end{equation}
where $n_i$ represents the average number of stem cells with $i$ mutations, and $\nu = (2r-1)s$ represents the average tumor growth rate. As you can see, the agreement between simulation and the formula is good. What the formula doesn't predict, however, but which the simulation does, is the probability that a tissue of a given age contains at least one stem cell with $K$ mutations:
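For reference, a minimal sketch of how these deterministic equations could be integrated numerically with SciPy; the number of tracked classes, the time span, and the parameter values are illustrative assumptions, not the settings used for the figure.
```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, n, u=1e-3, s=0.9, r=0.55):
    nu = (2 * r - 1) * s          # average tumor growth rate
    dn = n * nu                   # exponential growth of each mutation class
    dn[1:] += u * n[:-1]          # influx from the class with one fewer mutation
    return dn

sol = solve_ivp(rhs, (0, 100), y0=[100.0, 0.0, 0.0],
                t_eval=np.linspace(0, 100, 200))
```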
End of explanation |
2,203 | Given the following text description, write Python code to implement the functionality described.
Description:
Length of longest common prefix possible by rearranging strings in a given array
Python3 program to implement the above approach ; Function to get the length of the longest common prefix by rearranging the strings ; freq [ i ] [ j ] : stores the frequency of a character ( = j ) in a arr [ i ] ; Traverse the given array ; Stores length of current string ; Traverse current string of the given array ; Update the value of freq [ i ] [ arr [ i ] [ j ] ] ; Stores the length of longest common prefix ; Count the minimum frequency of each character in in all the strings of arr [ ] ; Stores minimum value in each row of freq [ ] [ ] ; Calculate minimum frequency of current character in all the strings . ; Update minRowVal ; Update maxLen ; Driver Code
| Python Code:
import sys
def longComPre(arr , N ) :
freq =[[ 0 for i in range(256 ) ] for i in range(N ) ]
for i in range(N ) :
M = len(arr[i ] )
for j in range(M ) :
freq[i ][ord(arr[i ][j ] ) ] += 1
maxLen = 0
for j in range(256 ) :
minRowVal = sys . maxsize
for i in range(N ) :
minRowVal = min(minRowVal , freq[i ][j ] )
maxLen += minRowVal
return maxLen
if __name__== ' __main __' :
arr =["aabdc ", "abcd ", "aacd "]
N = 3
print(longComPre(arr , N ) )
|
2,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download
Step1: NumPy
Para importar numpy, utilize
Step2: Criando Arrays
Step3: Funções NumPy
Step4: Criando Matrizes
Step5: Usando o Método random() do NumPy
Step6: Operações com datasets
Step7: Estatística
Step8: Outras Operações com Arrays | Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download: http://github.com/dsacademybr
End of explanation
# Importando o NumPy
import numpy as np
np.__version__
Explanation: NumPy
Para importar numpy, utilize:
import numpy as np
Você também pode utilizar:
from numpy import * . Isso evitará a utilização de np., mas este comando importará todos os módulos do NumPy.
Para atualizar o NumPy, abra o prompt de comando e digite: pip install numpy -U
End of explanation
# Help
help(np.array)
# Array criado a partir de uma lista:
vetor1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
print(vetor1)
# Um objeto do tipo ndarray é um recipiente multidimensional de itens do mesmo tipo e tamanho.
type(vetor1)
# Usando métodos do array NumPy
vetor1.cumsum()
# Criando uma lista. Perceba como listas e arrays são objetos diferentes, com diferentes propriedades
lst = [0, 1, 2, 3, 4, 5, 6, 7, 8]
lst
type(lst)
# Imprimindo na tela um elemento específico no array
vetor1[0]
# Alterando um elemento do array
vetor1[0] = 100
print(vetor1)
# Não é possível incluir elemento de outro tipo
vetor1[0] = 'Novo elemento'
# Verificando o formato do array
print(vetor1.shape)
Explanation: Criando Arrays
End of explanation
# A função arange cria um vetor contendo uma progressão aritmética a partir de um intervalo - start, stop, step
vetor2 = np.arange(0., 4.5, .5)
print(vetor2)
# Verificando o tipo do objeto
type(vetor2)
# Formato do array
np.shape(vetor2)
print (vetor2.dtype)
x = np.arange(1, 10, 0.25)
print(x)
print(np.zeros(10))
# Retorna 1 nas posições em diagonal e 0 no restante
z = np.eye(3)
z
# Os valores passados como parâmetro, formam uma diagonal
d = np.diag(np.array([1, 2, 3, 4]))
d
# Array de números complexos
c = np.array([1+2j, 3+4j, 5+6*1j])
c
# Array de valores booleanos
b = np.array([True, False, False, True])
b
# Array de strings
s = np.array(['Python', 'R', 'Julia'])
s
# O método linspace (linearly spaced vector) retorna um número de
# valores igualmente distribuídos no intervalo especificado
np.linspace(0, 10)
print(np.linspace(0, 10, 15))
print(np.logspace(0, 5, 10))
Explanation: Funções NumPy
End of explanation
# Criando uma matriz
matriz = np.array([[1,2,3],[4,5,6]])
print(matriz)
print(matriz.shape)
# Criando uma matriz 2x3 apenas com números "1"
matriz1 = np.ones((2,3))
print(matriz1)
# Criando uma matriz a partir de uma lista de listas
lista = [[13,81,22], [0, 34, 59], [21, 48, 94]]
# A função matrix cria uma matria a partir de uma sequência
matriz2 = np.matrix(lista)
matriz2
type(matriz2)
# Formato da matriz
np.shape(matriz2)
matriz2.size
print(matriz2.dtype)
matriz2.itemsize
matriz2.nbytes
print(matriz2[2,1])
# Alterando um elemento da matriz
matriz2[1,0] = 100
matriz2
x = np.array([1, 2]) # NumPy decide o tipo dos dados
y = np.array([1.0, 2.0]) # NumPy decide o tipo dos dados
z = np.array([1, 2], dtype=np.float64) # Forçamos um tipo de dado em particular
print (x.dtype, y.dtype, z.dtype)
matriz3 = np.array([[24, 76], [35, 89]], dtype=float)
matriz3
matriz3.itemsize
matriz3.nbytes
matriz3.ndim
matriz3[1,1]
matriz3[1,1] = 100
matriz3
Explanation: Criando Matrizes
End of explanation
print(np.random.rand(10))
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mat
mat.__version__
print(np.random.rand(10))
plt.show((plt.hist(np.random.rand(1000))))
print(np.random.randn(5,5))
plt.show(plt.hist(np.random.randn(1000)))
imagem = np.random.rand(30, 30)
plt.imshow(imagem, cmap = plt.cm.hot)
plt.colorbar()
Explanation: Usando o Método random() do NumPy
End of explanation
import os
filename = os.path.join('iris.csv')
# No Windows use !more iris.csv. Mac ou Linux use !head iris.csv
!head iris.csv
#!more iris.csv
# Carregando um dataset para dentro de um array
arquivo = np.loadtxt(filename, delimiter=',', usecols=(0,1,2,3), skiprows=1)
print (arquivo)
type(arquivo)
# Gerando um plot a partir de um arquivo usando o NumPy
var1, var2 = np.loadtxt(filename, delimiter=',', usecols=(0,1), skiprows=1, unpack=True)
plt.show(plt.plot(var1, var2, 'o', markersize=8, alpha=0.75))
Explanation: Operações com datasets
End of explanation
# Criando um array
A = np.array([15, 23, 63, 94, 75])
# Em estatística a média é o valor que aponta para onde mais se concentram os dados de uma distribuição.
np.mean(A)
# O desvio padrão mostra o quanto de variação ou "dispersão" existe em
# relação à média (ou valor esperado).
# Um baixo desvio padrão indica que os dados tendem a estar próximos da média.
# Um desvio padrão alto indica que os dados estão espalhados por uma gama de valores.
np.std(A)
# Variância de uma variável aleatória é uma medida da sua dispersão
# estatística, indicando "o quão longe" em geral os seus valores se
# encontram do valor esperado
np.var(A)
d = np.arange(1, 10)
d
np.sum(d)
# Retorna o produto dos elementos
np.prod(d)
# Soma acumulada dos elementos
np.cumsum(d)
a = np.random.randn(400,2)
m = a.mean(0)
print (m, m.shape)
plt.plot(a[:,0], a[:,1], 'o', markersize=5, alpha=0.50)
plt.plot(m[0], m[1], 'ro', markersize=10)
plt.show()
Explanation: Estatística
End of explanation
# Slicing
a = np.diag(np.arange(3))
a
a[1, 1]
a[1]
b = np.arange(10)
b
# [start:end:step]
b[2:9:3]
# Comparação
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
np.array_equal(a, b)
a.min()
a.max()
# Somando um elemento ao array
np.array([1, 2, 3]) + 1.5
# Usando o método around
a = np.array([1.2, 1.5, 1.6, 2.5, 3.5, 4.5])
b = np.around(a)
b
# Criando um array
B = np.array([1, 2, 3, 4])
B
# Copiando um array
C = B.flatten()
C
# Criando um array
v = np.array([1, 2, 3])
# Adcionando uma dimensão ao array
v[:, np.newaxis], v[:,np.newaxis].shape, v[np.newaxis,:].shape
# Repetindo os elementos de um array
np.repeat(v, 3)
# Repetindo os elementos de um array
np.tile(v, 3)
# Criando um array
w = np.array([5, 6])
# Concatenando
np.concatenate((v, w), axis=0)
# Copiando arrays
r = np.copy(v)
r
Explanation: Outras Operações com Arrays
End of explanation |
2,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbol Tutorial
Besides the tensor computation interface NDArray, another main object in MXNet is the Symbol provided by mxnet.symbol, or mxnet.sym for short. A symbol represents a multi-output symbolic expression. They are composited by operators, such as simple matrix operations (e.g. “+”), or a neural network layer (e.g. convolution layer). An operator can take several input variables, produce more than one output variables, and have internal state variables. A variable can be either free, which we can bind with value later, or an output of another symbol.
Symbol Composition
Basic Operators
The following example composites a simple expression a+b. We first create the placeholders a and b with names using mx.sym.Variable, and then construct the desired symbol by using the operator +. When the string name is not given during creating, MXNet will automatically generate a unique name for the symbol, which is the case for c.
Step1: Most NDArray operators can be applied to Symbol, for example
Step2: Basic Neural Networks
Besides the basic operators, Symbol has a rich set of neural network layers. The following codes construct a two layer fully connected neural work and then visualize the structure by given the input data shape.
Step3: Modulelized Construction for Deep Networks
For deep networks, such as the Google Inception, constructing layer by layer is painful given the large number of layers. For these networks, we often modularize the construction. Take the Google Inception as an example, we can first define a factory function to chain the convolution layer, batch normalization layer, and Relu activation layer together
Step4: Then we define a function that constructs an Inception module based on ConvFactory
Step5: Finally we can obtain the whole network by chaining multiple inception modulas. A complete example is available at mxnet/example/image-classification/symbol_inception-bn.py
Group Multiple Symbols
To construct neural networks with multiple loss layers, we can use mxnet.sym.Group to group multiple symbols together. The following example group two outputs
Step6: Relations to NDArray
As can be seen now, both Symbol and NDArray provide multi-dimensional array operations, such as c=a+b in MXNet. Sometimes users are confused which way to use. We briefly clarify the difference here, more detailed explanation are available here.
The NDArray provides an imperative programming alike interface, in which the computations are evaluated sentence by sentence. While Symbol is closer to declarative programming, in which we first declare the computation, and then evaluate with data. Examples in this category include regular expression and SQL.
The pros for NDArray
Step7: Bind with Data and Evaluate
The symbol c we constructed declares what computation should be run. To evaluate it, we need to feed arguments, namely free variables, with data first. We can do it by using the bind method, which accepts device context and a dict mapping free variable names to NDArrays as arguments and returns an executor. The executor provides method forward for evaluation and attribute outputs to get all results.
Step8: We can evaluate the same symbol on GPU with different data
Step9: Load and Save
Similar to NDArray, we can either serialize a Symbol object by using pickle, or use save and load directly. Different to the binary format chosen by NDArray, Symbol uses the more readable json format for serialization. The tojson method returns the json string.
Step10: Customized Symbol *
Most operators such as mx.sym.Convolution and mx.sym.Reshape are implemented in C++ for better performance. MXNet also allows users to write new operators using any frontend language such as Python. It often makes the developing and debugging much easier.
To implement an operator in Python, we just need to define the two computation methods forward and backward with several methods for querying the properties, such as list_arguments and infer_shape.
NDArray is the default type of arguments in both forward and backward. Therefore we often also implement the computation with NDArray operations. To show the flexibility of MXNet, however, we will demonstrate an implementation of the softmax layer using NumPy. Though a NumPy based operator can be only run on CPU and also lose some optimizations which can be applied on NDArray, it enjoys the rich functionalities provided by NumPy.
We first create a subclass of mx.operator.CustomOp and then define forward and backward.
Step11: Here we use asnumpy to convert the NDArray inputs into numpy.ndarray. Then using CustomOp.assign to assign the results back to mxnet.NDArray based on the value of req, which could be "over write" or "add to".
Next we create a subclass of mx.operator.CustomOpProp for querying the properties.
Step12: Finally, we can use mx.sym.Custom with the register name to use this operator
python
net = mx.symbol.Custom(data=prev_input, op_type='softmax')
Advanced Usages *
Type Cast
MXNet uses 32-bit float in default. Sometimes we want to use a lower precision data type for better accuracy-performance trade-off. For example, The Nvidia Tesla Pascal GPUs (e.g. P100) have improved 16-bit float performance, while GTX Pascal GPUs (e.g. GTX 1080) are fast on 8-bit integers.
We can use the mx.sym.Cast operator to convert the data type.
Step13: Variable Sharing
Sometimes we want to share the contents between several symbols. This can be simply done by bind these symbols with the same array. | Python Code:
import mxnet as mx
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = a + b
(a, b, c)
Explanation: Symbol Tutorial
Besides the tensor computation interface NDArray, another main object in MXNet is the Symbol provided by mxnet.symbol, or mxnet.sym for short. A symbol represents a multi-output symbolic expression. They are composited by operators, such as simple matrix operations (e.g. “+”), or a neural network layer (e.g. convolution layer). An operator can take several input variables, produce more than one output variables, and have internal state variables. A variable can be either free, which we can bind with value later, or an output of another symbol.
Symbol Composition
Basic Operators
The following example composites a simple expression a+b. We first create the placeholders a and b with names using mx.sym.Variable, and then construct the desired symbol by using the operator +. When the string name is not given during creating, MXNet will automatically generate a unique name for the symbol, which is the case for c.
End of explanation
# elemental wise times
d = a * b
# matrix multiplication
e = mx.sym.dot(a, b)
# reshape
f = mx.sym.Reshape(d+e, shape=(1,4))
# broadcast
g = mx.sym.broadcast_to(f, shape=(2,4))
mx.viz.plot_network(symbol=g)
Explanation: Most NDArray operators can be applied to Symbol, for example:
End of explanation
# Output may vary
net = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=net, name='relu1', act_type="relu")
net = mx.sym.FullyConnected(data=net, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='out')
mx.viz.plot_network(net, shape={'data':(100,200)})
Explanation: Basic Neural Networks
Besides the basic operators, Symbol has a rich set of neural network layers. The following code constructs a two-layer fully connected neural network and then visualizes the structure given the input data shape.
End of explanation
# Output may vary
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''):
conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix))
bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix))
act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix))
return act
prev = mx.symbol.Variable(name="Previos Output")
conv_comp = ConvFactory(data=prev, num_filter=64, kernel=(7,7), stride=(2, 2))
shape = {"Previos Output" : (128, 3, 28, 28)}
mx.viz.plot_network(symbol=conv_comp, shape=shape)
Explanation: Modularized Construction for Deep Networks
For deep networks, such as Google Inception, constructing the model layer by layer is painful given the large number of layers. For these networks, we often modularize the construction. Taking Google Inception as an example, we can first define a factory function to chain a convolution layer, a batch normalization layer, and a ReLU activation layer together:
End of explanation
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name):
# 1x1
c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name))
# 3x3 reduce + 3x3
c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name))
# double 3x3 reduce + double 3x3
cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name))
cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name))
# pool + proj
pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name)))
cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name))
# concat
concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name)
return concat
prev = mx.symbol.Variable(name="Previos Output")
in3a = InceptionFactoryA(prev, 64, 64, 64, 64, 96, "avg", 32, name="in3a")
mx.viz.plot_network(symbol=in3a, shape=shape)
Explanation: Then we define a function that constructs an Inception module based on ConvFactory
End of explanation
net = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=fc1, name='relu1', act_type="relu")
out1 = mx.sym.SoftmaxOutput(data=net, name='softmax')
out2 = mx.sym.LinearRegressionOutput(data=net, name='regression')
group = mx.sym.Group([out1, out2])
group.list_outputs()
Explanation: Finally we can obtain the whole network by chaining multiple Inception modules. A complete example is available at mxnet/example/image-classification/symbol_inception-bn.py
Group Multiple Symbols
To construct neural networks with multiple loss layers, we can use mxnet.sym.Group to group multiple symbols together. The following example groups two outputs:
End of explanation
arg_name = c.list_arguments() # get the names of the inputs
out_name = c.list_outputs() # get the names of the outputs
arg_shape, out_shape, _ = c.infer_shape(a=(2,3), b=(2,3))
{'input' : dict(zip(arg_name, arg_shape)),
'output' : dict(zip(out_name, out_shape))}
Explanation: Relations to NDArray
As can be seen now, both Symbol and NDArray provide multi-dimensional array operations, such as c=a+b in MXNet. Sometimes users are confused about which one to use. We briefly clarify the difference here; a more detailed explanation is available here.
NDArray provides an imperative-style programming interface, in which computations are evaluated statement by statement. Symbol, by contrast, is closer to declarative programming: we first declare the computation and then evaluate it with data. Examples in this category include regular expressions and SQL.
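To make the contrast concrete, here is a small sketch in the same Symbol/Executor style used throughout this tutorial; the shapes and values are arbitrary.
```python
import mxnet as mx

# Imperative: NDArray operations are evaluated immediately.
a = mx.nd.ones((2, 3))
b = mx.nd.ones((2, 3)) * 2
print((a + b).asnumpy())                     # result is available right away

# Declarative: a Symbol only describes the computation ...
x = mx.sym.Variable('x')
y = mx.sym.Variable('y')
z = x + y
# ... and is evaluated later, after binding data to the free variables.
ex = z.bind(ctx=mx.cpu(), args={'x': mx.nd.ones((2, 3)),
                                'y': mx.nd.ones((2, 3)) * 2})
ex.forward()
print(ex.outputs[0].asnumpy())
```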
The pros for NDArray:
- straightforward
- easy to work with other language features (for loop, if-else condition, ..) and libraries (numpy, ..)
- easy to step-by-step debug
The pros for Symbol:
- provides almost all functionalities of NDArray, such as +, *, sin, and reshape
- provides a large number of neural network related operators such as Convolution, Activation, and BatchNorm
- provides automatic differentiation
- easy to construct and manipulate complex computations such as deep neural networks
- easy to save, load, and visualization
- easy for the backend to optimize the computation and memory usage
We will show on the mixed programming tutorial how these two interfaces can be used together to develop a complete training program. This tutorial will focus on the usage of Symbol.
Symbol Manipulation *
One important difference of Symbol comparing to NDArray is that, we first declare the computation, and then bind with data to run.
In this section we introduce the functions to manipulate a symbol directly. But note that, most of them are wrapped nicely by the mx.module. One can skip this section safely.
Shape Inference
For each symbol, we can query its inputs (or arguments) and outputs. We can also infer the output shape given the input shape, which facilitates memory allocation.
End of explanation
ex = c.bind(ctx=mx.cpu(), args={'a' : mx.nd.ones([2,3]),
'b' : mx.nd.ones([2,3])})
ex.forward()
print 'number of outputs = %d\nthe first output = \n%s' % (
len(ex.outputs), ex.outputs[0].asnumpy())
Explanation: Bind with Data and Evaluate
The symbol c we constructed declares what computation should be run. To evaluate it, we need to feed arguments, namely free variables, with data first. We can do it by using the bind method, which accepts device context and a dict mapping free variable names to NDArrays as arguments and returns an executor. The executor provides method forward for evaluation and attribute outputs to get all results.
End of explanation
ex_gpu = c.bind(ctx=mx.gpu(), args={'a' : mx.nd.ones([3,4], mx.gpu())*2,
'b' : mx.nd.ones([3,4], mx.gpu())*3})
ex_gpu.forward()
ex_gpu.outputs[0].asnumpy()
Explanation: We can evaluate the same symbol on GPU with different data
End of explanation
print(c.tojson())
c.save('symbol-c.json')
c2 = mx.symbol.load('symbol-c.json')
c.tojson() == c2.tojson()
Explanation: Load and Save
Similar to NDArray, we can either serialize a Symbol object by using pickle, or use save and load directly. Different to the binary format chosen by NDArray, Symbol uses the more readable json format for serialization. The tojson method returns the json string.
End of explanation
class Softmax(mx.operator.CustomOp):
def forward(self, is_train, req, in_data, out_data, aux):
x = in_data[0].asnumpy()
y = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
y /= y.sum(axis=1).reshape((x.shape[0], 1))
self.assign(out_data[0], req[0], mx.nd.array(y))
def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
l = in_data[1].asnumpy().ravel().astype(np.int)
y = out_data[0].asnumpy()
y[np.arange(l.shape[0]), l] -= 1.0
self.assign(in_grad[0], req[0], mx.nd.array(y))
Explanation: Customized Symbol *
Most operators such as mx.sym.Convolution and mx.sym.Reshape are implemented in C++ for better performance. MXNet also allows users to write new operators using any frontend language such as Python. It often makes the developing and debugging much easier.
To implement an operator in Python, we just need to define the two computation methods forward and backward with several methods for querying the properties, such as list_arguments and infer_shape.
NDArray is the default type of arguments in both forward and backward. Therefore we often also implement the computation with NDArray operations. To show the flexibility of MXNet, however, we will demonstrate an implementation of the softmax layer using NumPy. Though a NumPy based operator can be only run on CPU and also lose some optimizations which can be applied on NDArray, it enjoys the rich functionalities provided by NumPy.
We first create a subclass of mx.operator.CustomOp and then define forward and backward.
End of explanation
# register this operator into MXNet by name "softmax"
@mx.operator.register("softmax")
class SoftmaxProp(mx.operator.CustomOpProp):
def __init__(self):
# softmax is a loss layer so we don’t need gradient input
# from layers above.
super(SoftmaxProp, self).__init__(need_top_grad=False)
def list_arguments(self):
return ['data', 'label']
def list_outputs(self):
return ['output']
def infer_shape(self, in_shape):
data_shape = in_shape[0]
label_shape = (in_shape[0][0],)
output_shape = in_shape[0]
return [data_shape, label_shape], [output_shape], []
def create_operator(self, ctx, shapes, dtypes):
return Softmax()
Explanation: Here we use asnumpy to convert the NDArray inputs into numpy.ndarray. Then using CustomOp.assign to assign the results back to mxnet.NDArray based on the value of req, which could be "over write" or "add to".
Next we create a subclass of mx.operator.CustomOpProp for querying the properties.
End of explanation
a = mx.sym.Variable('data')
b = mx.sym.Cast(data=a, dtype='float16')
arg, out, _ = b.infer_type(data='float32')
print({'input':arg, 'output':out})
c = mx.sym.Cast(data=a, dtype='uint8')
arg, out, _ = c.infer_type(data='int32')
print({'input':arg, 'output':out})
Explanation: Finally, we can use mx.sym.Custom with the register name to use this operator
python
net = mx.symbol.Custom(data=prev_input, op_type='softmax')
Advanced Usages *
Type Cast
MXNet uses 32-bit float in default. Sometimes we want to use a lower precision data type for better accuracy-performance trade-off. For example, The Nvidia Tesla Pascal GPUs (e.g. P100) have improved 16-bit float performance, while GTX Pascal GPUs (e.g. GTX 1080) are fast on 8-bit integers.
We can use the mx.sym.Cast operator to convert the data type.
End of explanation
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = mx.sym.Variable('c')
d = a + b * c
data = mx.nd.ones((2,3))*2
ex = d.bind(ctx=mx.cpu(), args={'a':data, 'b':data, 'c':data})
ex.forward()
ex.outputs[0].asnumpy()
Explanation: Variable Sharing
Sometimes we want to share the contents between several symbols. This can be simply done by bind these symbols with the same array.
End of explanation |
2,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data Processing Functions
Step2: Create Data Pipeline
Step3: Send First Two Pieces Of Raw Data Through Pipeline
Step4: Send All Raw Data Through Pipeline | Python Code:
raw_data = [1,2,3,4,5,6,7,8,9,10]
Explanation: Title: Streaming Data Pipeline
Slug: streaming_data_pipeline
Summary: Streaming Data Pipeline Using Python.
Date: 2017-02-02 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Create Some Raw Data
End of explanation
# Define a generator that yields input+6
def add_6(numbers):
for x in numbers:
output = x+6
yield output
# Define a generator that yields input-2
def subtract_2(numbers):
for x in numbers:
output = x-2
yield output
# Define a generator that yields input*100
def multiply_by_100(numbers):
for x in numbers:
output = x*100
yield output
Explanation: Create Data Processing Functions
End of explanation
# Step 1 of the pipeline
step1 = add_6(raw_data)
# Step 2 of the pipeline
step2 = subtract_2(step1)
# Step 3 of the pipeline
pipeline = multiply_by_100(step2)
Explanation: Create Data Pipeline
End of explanation
# First element of the raw data
next(pipeline)
# Second element of the raw data
next(pipeline)
Explanation: Send First Two Pieces Of Raw Data Through Pipeline
End of explanation
# Process all data
for raw_data in pipeline:
print(raw_data)
Explanation: Send All Raw Data Through Pipeline
End of explanation |
2,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 3 Problem 5
Step1: Part 5.a.
Here we load the large mandrill image and display it.
Step2: Part 5.b.
Here we load the small mandrill image and display it.
Step3: Now apply the K-Means algorithm with k centroids to the small image.
Step4: The color palette obtained by running K-Means is the following.
Step5: Part 5.c.
Do the same as above for the large mandrill image. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import kmeans
KCENTROIDS = 16
MAX_ITERS = 200
EPSILON = 1e-7
Explanation: Homework 3 Problem 5
End of explanation
Ilarge = plt.imread('mandrill-large.tiff')
Ilarge = np.uint8(np.round(Ilarge))
plt.imshow(Ilarge)
plt.show()
Explanation: Part 5.a.
Here we load the large mandrill image and display it.
End of explanation
Ismall = plt.imread('mandrill-small.tiff')
Ismall = np.uint8(np.round(Ismall))
plt.imshow(Ismall)
plt.show()
Explanation: Part 5.b.
Here we load the small mandrill image and display it.
End of explanation
clusters_small, centroids_small = kmeans.kmeans(Ismall, kcentroids=KCENTROIDS, epsilon=EPSILON, max_iterations=200)
Explanation: Now apply the K-Means algorithm with k centroids to the small image.
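The local kmeans module imported above is not shown in this notebook. For reference, here is a minimal pure-NumPy sketch of the interface those calls assume (kmeans.kmeans returning per-pixel cluster assignments and the centroids, and make_image rebuilding an image from them); it is a hypothetical stand-in, and the real module may differ.
```python
import numpy as np

def kmeans(image, kcentroids=16, epsilon=1e-7, max_iterations=200, seed=0):
    # Flatten the H x W x 3 image into a list of RGB points.
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), kcentroids, replace=False)]
    for _ in range(max_iterations):
        # Brute-force distances; fine for images of this size.
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        clusters = dists.argmin(axis=1)
        new_centroids = np.array([
            pixels[clusters == k].mean(axis=0) if np.any(clusters == k) else centroids[k]
            for k in range(kcentroids)])
        done = np.linalg.norm(new_centroids - centroids) < epsilon
        centroids = new_centroids
        if done:
            break
    return clusters.reshape(image.shape[:2]), np.uint8(np.round(centroids))

def make_image(clusters, centroids):
    # Replace every pixel by the color of its centroid.
    return centroids[clusters]
```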
End of explanation
plt.imshow(np.array([centroids_small]))
plt.show()
new_image_small = kmeans.make_image(clusters_small, centroids_small)
plt.imshow(new_image_small)
plt.show()
Explanation: The color palette obtained by running K-Means is the following.
End of explanation
clusters_large, centroids_large = kmeans.kmeans(Ilarge, kcentroids=KCENTROIDS, epsilon=EPSILON, max_iterations=MAX_ITERS)
new_image_large = kmeans.make_image(clusters_large, centroids_large)
plt.imshow(new_image_large)
plt.show()
Explanation: Part 5.c.
Do the same as above for the large mandrill image.
End of explanation |
2,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alcune applicazioni delle matrici
Subito dopo aver introdotto le matrici e viste le operazioni fondamentali di somma, prodotto e determinante una domanda sorge spontanea. A cosa servono? In verità è una questione ricorrente per tanti argomenti della matematica, ma le matrici sembrano davvero un ''oggetto'' complicato inutilmente. Perché non potevo chiamarle direttamente "tabelle di numeri"? Tutto sommato anche con le tabelle in alcuni casi può essere fatta la somma tra elementi nella stessa posizione della tabella. E la statistica ci ha insegnato come trattare le tabelle di numeri calcolando somme e medie, mediane e deviazioni standard. E ancora, il prodotto tra due matrici era proprio necessario definirlo in quel modo così "artificioso"? Non bastava moltiplicare i singoli componenti come si fa per l'addizione? Tra l'altro in questo modo avremmo ottenuto un'operazione commutativa che è un terreno che conosciamo molto meglio. Per non parlare del determinante, che per calcolarlo abbiamo dovuto scomodare funzioni ricorsive senza però avere nulla di vantaggioso per la matematica che abbiamo studiato.
Una prima risposta viene dalla soluzione dei sistemi lineari. Se consideriamo il sistema
$$\begin{matrix}
a_{1,1}x + a_{1,2}y + a_{1,3}z = b_1\
a_{2,1}x + a_{2,2}y + a_{2,3}z = b_2\
a_{3,1}x + a_{3,2}y + a_{3,3}z = b_3\
\end{matrix}
$$
possiamo scriverlo in forma più compatta utilizzando il prodotto tra matrici così
$$\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} \
a_{2,1} & a_{2,2} & a_{2,3} \
a_{3,1} & a_{3,2} & a_{3,3}\
\end{pmatrix}
\begin{pmatrix}
x \
y \
z \
\end{pmatrix}=
\begin{pmatrix}
b_1\
b_2 \
b_3 \
\end{pmatrix}
$$
In effetti un sistema è determinato dal valore dei coefficienti $a_{i,j}$ e $b_{k}$. Con la scrittura in forma matriciale posso lavorare solo sui coefficienti senza dovermi portare dietro le $x$,$y$ e $z$. Il metodo di riduzione (talvolta chiamato di addizione e sottrazione) opportunamente generalizzato alle matrici permette di risolvere sistemi lineari di qualsiasi dimensione. Tale metodo viene chiamato Eliminazione di Gauss.
Ritengo tuttavia che la soluzione di sistemi lineari, per quanto sia un ottima applicazione delle matrici, non permetta di cogliere la potenza di questo nuovo strumento. Nel seguito vederemo tre applicazioni delle matrici a diversi settori.
Trasformazioni geometriche piane
Una trasformazione del piano è una funzione che ad un punto del piano, che indicheremo con le due componenti cartesiane $(x,y)$, associa un altro punto del piano $(x',y')$. Ad esempio la trasformazione
$$\begin{matrix}
x' = x\
y' = -y\
\end{matrix}
$$
è la simmetria rispetto all'asse delle x. Usando la notazione matriciale, analogamente a quanto abbiamo visto per i sistemi, possiamo scrivere questa trasformazione così
Step1: Composizione di due trasformazioni
Se ad esempio volessi applicare prima una simmetria rispetto all'asse x e poi una rotazione, di solito si indicano le due trasformazioni in questo modo
$$\begin{matrix}
x' = -x\
y' = y\
\end{matrix}
$$
$$\begin{matrix}
x'' = x' \cos \alpha + y' \sin \alpha\
y'' = -x' \sin \alpha + y' \cos \alpha\
\end{matrix}
$$
Sostituendo gli $x'$ e $y'$ nel secondo sistema si ottiene la trasformazione composta.
Un primo fatto notevole che possiamo osservare è che la composizione di trasformazioni non è sempre commutativa. Basta un semplice esempio per rendersene conto. In effetti le trasformazioni non sono altro che funzioni e come sappiamo la composizione di funzioni non è commutativa.
Step2: Abbiamo già visto che la moltiplicazione tra matrici non è commutativa. Se facciamo qualche prova vediamo che rappresentando le trasformazioni con matrici la composizione di trasformazioni altro non è che il prodotto tra matrici. Proviamo a comporre due rotazioni di angoli $\alpha$ e $\beta$.
$$
\begin{pmatrix}
\cos \alpha & \sin \alpha \
-\sin \alpha & \cos \alpha \
\end{pmatrix}
\begin{pmatrix}
\cos \beta & \sin \beta \
-\sin \beta & \cos \beta \
\end{pmatrix}=
\begin{pmatrix}
\cos \alpha \cos \beta - \sin \alpha \sin \beta & \cos \alpha \sin \beta + \sin \alpha \cos \beta \
-(\cos \alpha \sin \beta + \sin \alpha \cos \beta)& \cos \alpha \cos \beta - \sin \alpha \sin \beta \
\end{pmatrix}
$$
Che usando le formule di addizione del seno risulta essere proprio la rotazione di $\alpha + \beta$
Determinante
Proviamo ora a capire il senso del determinante. Calcoliamo alcuni determinanti (intanto a mano in attesa che venga completato il programma in Python). Notiamo subito che per le simmetrie assiali vale $-1$, per le rotazioni sempre $1$ (dal teorema di Pitagora) e per le omotetie $k^2$ dove $k$ è il fattore di omotetia. Facciamo un po' di esperimenti con geogebra per capire come si comporta il determinante | Python Code:
import sys; sys.path.append('pyggb')
%reload_ext geogebra_magic
%ggb --width 800 --height 400 --showToolBar 0 --showResetIcon 1 trasformazioni.ggb
Explanation: Some applications of matrices
Right after introducing matrices and seeing the basic operations of addition, multiplication and determinant, an obvious question arises. What are they for? It is in fact a recurring question for many topics in mathematics, but matrices really do look like a needlessly complicated "object". Why couldn't I simply call them "tables of numbers"? After all, with tables too one can, in some cases, add the entries that sit in the same position of the table. And statistics has taught us how to handle tables of numbers, computing sums and means, medians and standard deviations. And again, was it really necessary to define the product of two matrices in such an "artificial" way? Wouldn't it have been enough to multiply the individual entries, as is done for addition? That way, among other things, we would have obtained a commutative operation, which is ground we know much better. Not to mention the determinant: to compute it we had to bring in recursive functions without gaining anything useful for the mathematics we had studied.
A first answer comes from the solution of linear systems. If we consider the system
$$\begin{matrix}
a_{1,1}x + a_{1,2}y + a_{1,3}z = b_1\\
a_{2,1}x + a_{2,2}y + a_{2,3}z = b_2\\
a_{3,1}x + a_{3,2}y + a_{3,3}z = b_3
\end{matrix}
$$
we can write it in a more compact form using the matrix product as
$$\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} \\
a_{2,1} & a_{2,2} & a_{2,3} \\
a_{3,1} & a_{3,2} & a_{3,3}
\end{pmatrix}
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}=
\begin{pmatrix}
b_1\\
b_2 \\
b_3
\end{pmatrix}
$$
Indeed, a system is completely determined by the values of the coefficients $a_{i,j}$ and $b_{k}$. With the matrix notation I can work on the coefficients alone, without having to carry the $x$, $y$ and $z$ around. The reduction method (sometimes called the addition-and-subtraction method), suitably generalized to matrices, allows us to solve linear systems of any size. This method is called Gaussian elimination.
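As a quick check of this matrix formulation, here is a small NumPy sketch (with arbitrary coefficients) of solving such a system; np.linalg.solve applies a variant of Gaussian elimination internally.
```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

print(np.linalg.solve(A, b))   # -> [ 2.  3. -1.]
```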
However, I think that solving linear systems, useful an application of matrices as it is, does not let us grasp the power of this new tool. In what follows we will look at three applications of matrices to different fields.
Plane geometric transformations
A transformation of the plane is a function that maps a point of the plane, which we denote by its two Cartesian components $(x,y)$, to another point of the plane $(x',y')$. For example, the transformation
$$\begin{matrix}
x' = x\\
y' = -y
\end{matrix}
$$
is the reflection across the x-axis. Using matrix notation, just as we did for linear systems, we can write this transformation as:
$$
\begin{pmatrix}
x'\\
y'
\end{pmatrix}=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
$$
With this GeoGebra file you can see the matrices that generate the axial reflections, the central symmetry (about the origin), the homotheties (scalings) and the rotations.
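The same matrices can also be tried directly in NumPy; a small sketch, where the angle and the scaling factor are arbitrary choices:
```python
import numpy as np

p = np.array([2.0, 1.0])                        # a point (x, y)
reflect_x = np.array([[1, 0], [0, -1]])         # reflection across the x-axis
alpha = np.pi / 3
rotate = np.array([[np.cos(alpha), np.sin(alpha)],
                   [-np.sin(alpha), np.cos(alpha)]])
scale = 2 * np.eye(2)                           # homothety with factor k = 2

print(reflect_x @ p, rotate @ p, scale @ p)
```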
End of explanation
from IPython.display import Image
Image(filename='noncomm.png')
Explanation: Composition of two transformations
If, for example, I wanted to apply first an axial reflection and then a rotation, the two transformations are usually written like this
$$\begin{matrix}
x' = -x\\
y' = y
\end{matrix}
$$
$$\begin{matrix}
x'' = x' \cos \alpha + y' \sin \alpha\\
y'' = -x' \sin \alpha + y' \cos \alpha
\end{matrix}
$$
Substituting $x'$ and $y'$ into the second system gives the composed transformation.
A first notable fact we can observe is that the composition of transformations is not always commutative. A simple example is enough to see this. After all, transformations are nothing but functions, and as we know the composition of functions is not commutative.
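A quick numerical check of this non-commutativity (a sketch with an arbitrary angle):
```python
import numpy as np

alpha = np.pi / 4
R = np.array([[np.cos(alpha), np.sin(alpha)],
              [-np.sin(alpha), np.cos(alpha)]])   # rotation
S = np.array([[-1, 0], [0, 1]])                   # reflection across the y-axis

print(R @ S)                       # reflect first, then rotate
print(S @ R)                       # rotate first, then reflect
print(np.allclose(R @ S, S @ R))   # False: the order matters
```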
End of explanation
%ggb --width 800 --height 400 --showToolBar 1 --showResetIcon 1 determinante.ggb
Explanation: We have already seen that matrix multiplication is not commutative. If we experiment a little we see that, once transformations are represented by matrices, the composition of transformations is nothing but the matrix product. Let us compose two rotations with angles $\alpha$ and $\beta$.
$$
\begin{pmatrix}
\cos \alpha & \sin \alpha \\
-\sin \alpha & \cos \alpha
\end{pmatrix}
\begin{pmatrix}
\cos \beta & \sin \beta \\
-\sin \beta & \cos \beta
\end{pmatrix}=
\begin{pmatrix}
\cos \alpha \cos \beta - \sin \alpha \sin \beta & \cos \alpha \sin \beta + \sin \alpha \cos \beta \\
-(\cos \alpha \sin \beta + \sin \alpha \cos \beta) & \cos \alpha \cos \beta - \sin \alpha \sin \beta
\end{pmatrix}
$$
Using the angle-addition formulas, this is exactly the rotation by $\alpha + \beta$.
Determinant
Let us now try to understand the meaning of the determinant. We compute a few determinants (by hand for now, while the Python program is being completed). We immediately notice that for axial reflections it is $-1$, for rotations always $1$ (by the Pythagorean theorem), and for homotheties $k^2$, where $k$ is the scaling factor. Let us run some experiments with GeoGebra to understand how the determinant behaves.
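These values can also be checked numerically; a small sketch with arbitrary parameters:
```python
import numpy as np

alpha = 0.7
reflection = np.array([[1, 0], [0, -1]])
rotation = np.array([[np.cos(alpha), np.sin(alpha)],
                     [-np.sin(alpha), np.cos(alpha)]])
k = 3
homothety = k * np.eye(2)

print(np.linalg.det(reflection))   # -1
print(np.linalg.det(rotation))     # 1 (up to rounding)
print(np.linalg.det(homothety))    # k**2 = 9
```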
End of explanation |
2,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fine-tuning a CNN with MXNet
Step1: Dowload pre-trained model from the model zoo (Resnet-152)
We then download a pretrained 152-layer ResNet model and load into memory.
Note
Step3: Fine tuning the model
To fine-tune a network, we must first replace the last fully-connected layer with a new one that outputs the desired number of classes. We initialize its weights randomly. Then we continue training as normal. Sometimes it’s common use a smaller learning rate based on the intuition that we may already be close to a good result.
We first define a function which replaces the the last fully-connected layer for a given network.
Step4: Training the model
We now define a fit function that creates an MXNet module instance that we'll bind the data and symbols to.
init_params is called to randomly initialize parameters
set_params is called to replace all parameters except for the last fully-connected layer with pre-trained model.
Note
Step5: Now that we have the helper functions setup, we can start training.
Its recommended that you train on a GPU instance, preferably p2.* family. In this example we assume an AWS EC2 p2.xlarge, which has one NVIDIA K80 GPU.
Step6: After 1 epoch we achive 97.22% training accuracy.
Lets save the newly trained model
Step8: Loading saved model
Step9: Generate Predictions for an arbitrary image
Step10: Inspecting incorrect labels | Python Code:
# Data Iterators for cats vs dogs dataset
import mxnet as mx
def get_iterators(batch_size, data_shape=(3, 224, 224)):
train = mx.io.ImageRecordIter(
path_imgrec = './cats_dogs_train.rec',
data_name = 'data',
label_name = 'softmax_label',
batch_size = batch_size,
data_shape = data_shape,
shuffle = True,
rand_crop = True,
rand_mirror = True)
val = mx.io.ImageRecordIter(
path_imgrec = './cats_dogs_val.rec',
data_name = 'data',
label_name = 'softmax_label',
batch_size = batch_size,
data_shape = data_shape,
rand_crop = False,
rand_mirror = False)
return (train, val)
Explanation: Fine-tuning a CNN with MXNet: Cats vs Dogs (Kaggle Redux)
In this tutorial we'll learn how to build a model to classify whether an image is a cat or a dog. We'll use a pre-trained ImageNet model from the MXNet model zoo. For practical problems we may not have a large dataset, hence it is difficult to train such generalized models from scratch. However, we can take advantage of models that are pre-trained on a large dataset like ImageNet, where the model has already learned many useful image features.
The model is based on the Convolutional Neural Network (CNN) architecture. CNNs consist of multiple layers of receptive fields modeled on biological visual receptors. At each layer, collections of neurons process portions of the input image, and the outputs are tiled to obtain a higher-level representation of the image. For more details on how CNNs work, check out the CS231n course and the MNIST example with MXNet.
To fine-tune a network, we'll update all of the network's weights and also replace the last fully-connected layer with one that outputs the new number of classes. In most cases we train with a smaller learning rate, given that we typically have a small dataset. For more in-depth reading on fine-tuning with MXNet, check this tutorial.
Setting up a deep learning environment with AWS Deep Learning AMI for MXNet
In this tutorial, we are going to use Deep Learning AMI. The Deep Learning AMI is a base Amazon Linux image provided by Amazon Web Services for use on Amazon Elastic Compute Cloud (Amazon EC2).It is designed to provide a stable, secure, and high performance execution environment for deep learning applications running on Amazon EC2. It includes popular deep learning frameworks, including MXNet.
For setting up an Deep Learning environment on AWS using Deep Learning AMI, please read this post on AWS AI Blog for detailed instruction.
Or you can choose to install MXNet to your own machine.
Prerequisites
MXNet (0.9.3 or higher)
Python 2.7 or higher
im2rec.py (clone https://github.com/dmlc/mxnet)
[recommended] Amazon EC2 instance with GPU (p2.* family) with Deep Learning AMI
Dataset: downoad and preprocessing
Download data from https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data
Extract train.zip into a folder "data" and create two folders "train" and "valid"
Create additonal directories to get a directory structure as shown below, We'll label dogs as class 0 and cats as 1 (hence the prefix)
```
train/
├── 1cats
└── 0dogs
valid/
├── 1cats
└── 0dogs
```
First move all the cat images into train/1cats and dog images into train/0dogs.
Now lets move a percentage of these images in to the validation directory to create the validation set. You could use the code below to execute in a python script
```
import os
import random
import shutil
cats_dir = 'train/1cats'
dogs_dir = 'train/0dogs'
all_cats = os.listdir(cats_dir)
all_dogs = os.listdir(dogs_dir)
p = 20.0
N = int(len(all_cats)/p)
N = int(len(all_dogs)/p)
for f in random.sample(all_cats, N):
shutil.move( cats_dir + "/" + f, "valid/1cats/" + f)
for f in random.sample(all_dogs, N):
shutil.move( dogs_dir + "/" + f, "valid/0dogs/" + f)
```
Create a list for training and validation set
```
python ~/mxnet/tools/im2rec.py --list True --recursive True cats_dogs_train.lst data/train
python ~/mxnet/tools/im2rec.py --list True --recursive True cats_dogs_val.lst data/valid
```
Convert the images in to MXNet RecordIO format
```
python ~/mxnet/tools/im2rec.py --resize 224 --quality 90 --num-thread 16 cats_dogs_train.lst data/train
python ~/mxnet/tools/im2rec.py --resize 224 --quality 90 --num-thread 16 cats_dogs_val.lst data/valid
```
You should see cats_dogs_train.rec and cats_dogs_val.rec files created.
Next we define the function which returns the data iterators.
CODE
End of explanation
# helper functions
import os, urllib
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.urlretrieve(url, filename)
def get_model(prefix, epoch):
download(prefix+'-symbol.json')
download(prefix+'-%04d.params' % (epoch,))
get_model('http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152', 0)
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
Explanation: Download a pre-trained model from the model zoo (ResNet-152)
We then download a pretrained 152-layer ResNet model and load it into memory.
Note: If load_checkpoint reports error, we can remove the downloaded files and try get_model again.
End of explanation
def get_fine_tune_model(symbol, arg_params, num_classes, layer_name='flatten0'):
symbol: the pre-trained network symbol
arg_params: the argument parameters of the pre-trained model
num_classes: the number of classes for the fine-tune datasets
layer_name: the layer name before the last fully-connected layer
all_layers = sym.get_internals()
net = all_layers[layer_name+'_output']
net = mx.symbol.FullyConnected(data=net, num_hidden=num_classes, name='fc1')
net = mx.symbol.SoftmaxOutput(data=net, name='softmax')
new_args = dict({k:arg_params[k] for k in arg_params if 'fc1' not in k})
return (net, new_args)
Explanation: Fine tuning the model
To fine-tune a network, we must first replace the last fully-connected layer with a new one that outputs the desired number of classes. We initialize its weights randomly. Then we continue training as normal. Sometimes it’s common use a smaller learning rate based on the intuition that we may already be close to a good result.
We first define a function which replaces the the last fully-connected layer for a given network.
End of explanation
import logging
head = '%(asctime)-15s %(message)s'
logging.basicConfig(level=logging.DEBUG, format=head)
def fit(symbol, arg_params, aux_params, train, val, batch_size, num_gpus=1, num_epoch=1):
devs = [mx.gpu(i) for i in range(num_gpus)] # replace mx.gpu by mx.cpu for CPU training
mod = mx.mod.Module(symbol=new_sym, context=devs)
mod.bind(data_shapes=train.provide_data, label_shapes=train.provide_label)
mod.init_params(initializer=mx.init.Xavier(rnd_type='gaussian', factor_type="in", magnitude=2))
mod.set_params(new_args, aux_params, allow_missing=True)
mod.fit(train, val,
num_epoch=num_epoch,
batch_end_callback = mx.callback.Speedometer(batch_size, 10),
kvstore='device',
optimizer='sgd',
optimizer_params={'learning_rate':0.009},
eval_metric='acc')
return mod
Explanation: Training the model
We now define a fit function that creates an MXNet module instance that we'll bind the data and symbols to.
init_params is called to randomly initialize parameters
set_params is called to replace all parameters except for the last fully-connected layer with pre-trained model.
Note: change mx.gpu to mx.cpu to run training on CPU (much slower)
End of explanation
num_classes = 2 # This is binary classification (dogs vs cat)
batch_per_gpu = 4
num_gpus = 1
(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)
batch_size = batch_per_gpu * num_gpus
(train, val) = get_iterators(batch_size)
mod = fit(new_sym, new_args, aux_params, train, val, batch_size, num_gpus)
metric = mx.metric.Accuracy()
mod_score = mod.score(val, metric)
print mod_score
Explanation: Now that we have the helper functions set up, we can start training.
It's recommended that you train on a GPU instance, preferably from the p2.* family. In this example we assume an AWS EC2 p2.xlarge, which has one NVIDIA K80 GPU.
End of explanation
prefix = 'resnet-mxnet-catsvsdogs'
epoch = 1
mc = mod.save_checkpoint(prefix, epoch)
Explanation: After 1 epoch we achieve 97.22% training accuracy.
Let's save the newly trained model
End of explanation
# load the model, make sure you have executed previous cells to train
import cv2
dshape = [('data', (1,3,224,224))]
def load_model(s_fname, p_fname):
Load model checkpoint from file.
:return: (arg_params, aux_params)
arg_params : dict of str to NDArray
Model parameter, dict of name to NDArray of net's weights.
aux_params : dict of str to NDArray
Model parameter, dict of name to NDArray of net's auxiliary states.
symbol = mx.symbol.load(s_fname)
save_dict = mx.nd.load(p_fname)
arg_params = {}
aux_params = {}
for k, v in save_dict.items():
tp, name = k.split(':', 1)
if tp == 'arg':
arg_params[name] = v
if tp == 'aux':
aux_params[name] = v
return symbol, arg_params, aux_params
model_symbol = "resnet-mxnet-catsvsdogs-symbol.json"
model_params = "resnet-mxnet-catsvsdogs-0002.params"
sym, arg_params, aux_params = load_model(model_symbol, model_params)
mod = mx.mod.Module(symbol=sym)
# bind the model and set training == False; Define the data shape
mod.bind(for_training=False, data_shapes=dshape)
mod.set_params(arg_params, aux_params)
Explanation: Loading saved model
End of explanation
import urllib2
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])
def preprocess_image(img, show_img=False):
'''
convert the image to a numpy array
'''
img = cv2.resize(img, (224, 224))
img = np.swapaxes(img, 0, 2)
img = np.swapaxes(img, 1, 2)
img = img[np.newaxis, :]
return img
url = 'https://images-na.ssl-images-amazon.com/images/G/01/img15/pet-products/small-tiles/23695_pets_vertical_store_dogs_small_tile_8._CB312176604_.jpg'
req = urllib2.urlopen(url)
image = np.asarray(bytearray(req.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
img = preprocess_image(image)
mod.forward(Batch([mx.nd.array(img)]))
# predict
prob = mod.get_outputs()[0].asnumpy()
print prob
Explanation: Generate Predictions for an arbitrary image
End of explanation
# Generate predictions for entire validation dataset
import os
import cv2
path = 'data/valid/cats/' # change as needed
files = [path + f for f in os.listdir(path)]
incorrect_labels = []
# incorrect cat labels
for f in files:
img = cv2.imread(f)
img = preprocess_image(img)
mod.forward(Batch([mx.nd.array(img)]))
prob = mod.get_outputs()[0].asnumpy()
if prob.argmax() != 1: # not a cat
print f
incorrect_labels.append(f)
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
# Plot helper
def plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):
if type(ims[0]) is np.ndarray:
ims = np.array(ims).astype(np.uint8)
if (ims.shape[-1] != 3):
ims = ims.transpose((0,2,3,1))
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None:
sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i], interpolation=None if interp else 'none')
#individual plot of incorrect label
img_path = incorrect_labels[0]
plots([cv2.imread(img_path)])
plt.show()
Explanation: Inspecting incorrect labels
End of explanation |
2,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-Dimension Visualization
Step1: Scatter Matrix
The scatter matrix allows users to identify correlations between pairs of dimensions in a matrix form.
Step2: Histograms
Once you want identify one or more dimensions that you would like to inspect, you can use histograms and kernel density estimates to get a sense for the variance in that field.
Step3: Joint Plots
The more fields in the scatter plot, the more difficult it is to identify what is going on. You can use joint plots to insepct the relationship and correlation between the two fields.
Step4: RadViz
Once you move into attempting to visualize more than three dimensions, things get a bit tricky. The radviz plot attempts to create clusters of points by pulling them towards an outer ring.
Step5: Parallel Coordinates
Parallel coordinates intend to do the same thing as the radviz, but instead of having a circle with the dimensions, extend those dimensions out along the horizontal access. | Python Code:
%matplotlib inline
import os
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
# Setup context and style
sns.set_context('talk')
sns.set_style('whitegrid')
IRIS = os.path.join("..", "data", "iris.csv")
data = pd.read_csv(IRIS)
Explanation: Multi-Dimension Visualization
End of explanation
sns.pairplot(data, hue='class', diag_kind="kde", size=3)
Explanation: Scatter Matrix
The scatter matrix allows users to identify correlations between pairs of dimensions in a matrix form.
End of explanation
sns.distplot(data['sepal width'], rug=True)
Explanation: Histograms
Once you identify one or more dimensions that you would like to inspect, you can use histograms and kernel density estimates to get a sense of the variance in that field.
End of explanation
sns.jointplot("petal length", "petal width", data=data, kind='reg', size=12)
Explanation: Joint Plots
The more fields in the scatter plot, the more difficult it is to identify what is going on. You can use joint plots to inspect the relationship and correlation between the two fields.
End of explanation
from pandas.tools.plotting import radviz
plt.figure(figsize=(14,14))
mpl.rcParams.update({'font.size': 22})
radviz(data, 'class', color=sns.color_palette())
Explanation: RadViz
Once you move into attempting to visualize more than three dimensions, things get a bit tricky. The radviz plot attempts to create clusters of points by pulling them towards an outer ring.
End of explanation
from pandas.tools.plotting import parallel_coordinates
plt.figure(figsize=(14,14))
mpl.rcParams.update({'font.size': 22})
parallel_coordinates(data, 'class', color=sns.color_palette())
Explanation: Parallel Coordinates
Parallel coordinates intend to do the same thing as the radviz, but instead of having a circle with the dimensions, extend those dimensions out along the horizontal access.
End of explanation |
2,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Worked example of the Buchberger algorithm
Step1: First a polynomial ring is created. QQ is $\mathbb Q$. Alternative term orders are grlex and grevlex.
Step2: There are also two functions that determine the reduced Gröbner basis directly. First, the function for the case where the variables, as above, belong to a previously defined polynomial ring.
Step3: The second function accepts polynomials built from Symbols. Here the term order has to be given explicitly. If all coefficients are integers, the algorithm works over $\mathbb Z$; a Gröbner basis over $\mathbb Z$ is in general not reduced over $\mathbb Q$. This function is already loaded by the general import in the first line. | Python Code:
from sympy import *
init_printing()
Explanation: Worked example of the Buchberger algorithm
End of explanation
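For reference, the S_poly helper defined just below implements the standard S-polynomial of two polynomials f and g,
$$S(f, g) = \frac{\operatorname{lcm}\big(\mathrm{LM}(f), \mathrm{LM}(g)\big)}{\mathrm{LT}(f)}\, f \;-\; \frac{\operatorname{lcm}\big(\mathrm{LM}(f), \mathrm{LM}(g)\big)}{\mathrm{LT}(g)}\, g,$$
where LM and LT denote the leading monomial and leading term with respect to the chosen term order. Buchberger's criterion states that G is a Gröbner basis exactly when the S-polynomial of every pair in G reduces to zero modulo G, which is what the divisions below check.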
P, erzeuger = xring('x,y', QQ, lex)
x, y = erzeuger
def S_poly(p, q):
kgv = p.leading_monom().lcm(q.leading_monom())
t1 = kgv / p.leading_term() * p
t2 = kgv / q.leading_term() * q
return t1 - t2
f1 = x*y**2-x
f2 = x - y**3
G = [f1, f2]
G
s = S_poly(f1, f2)
s
s.div(G)
f3 = s.div(G)[1]
f3
G.append(f3)
S_poly(f1, f3)
s = S_poly(f2, f3)
s
s.div(G)
G
Explanation: First a polynomial ring is created. QQ is $\mathbb Q$. Alternative term orders are grlex and grevlex.
End of explanation
from sympy.polys import groebnertools as gt
gt.groebner(G, P)
Explanation: There are also two functions that determine the reduced Gröbner basis directly. First, the function for the case where the variables, as above, belong to a previously defined polynomial ring.
End of explanation
x = Symbol('x')
y = Symbol('y')
f1 = 3*x*y**2-x
f2 = x - y**3
f1, f2
G = groebner([f1, f2], x, y, order='lex')
G
G.polys
Explanation: The second function accepts polynomials built from Symbols. Here the term order has to be given explicitly. If all coefficients are integers, the algorithm works over $\mathbb Z$; a Gröbner basis over $\mathbb Z$ is in general not reduced over $\mathbb Q$. This function is already loaded by the general import in the first line.
End of explanation |
2,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments for extracting images of Boyd's Bird Journal into computer readable form
(See image below)
The journals are PDFs containing a series of scanned images of observations of birds. The observations are scanned handwritten notes on graph paper. There are bird species labels running down the left side of the page and date information across the top. The charts are organized by month with days of the month being column headings. There are between 2 and three months of information for each image.
Each cell has a mark indicating the presence or absence of a bird species on a given day. So there is, potentially, one mark per bird species per day. The mark on the page is typically a forward slash "/" but it can also be an "x" or an asterisk "*". We are treating all types of marks the same, a cell either has a mark or it doesn't.
Some things to note here
Step1: Extract images from PDF files
First we need to extract individual images from the PDFs. This is easily accomplished in Linux with the command pdfimages. This is part of either the poppler or xpdf packages. We're using bash to make a directory to hold the images and then extracting the PDF images into that directory. The first 20 images are not relevant here.
Step2: Setup
We are using a fairly standard scipy stack
Step3: The Hough transform
We're using the Hough Transform to find lines in the image. See the Wikipedia page for a description.
Get the grid for the full image
Step4: Get the horizontal and vertical grid lines
As described above, we need to define a line as a threshold on the line count. However, there is a wrinkle: the images are not square, with the width being the shorter dimension (3300px width x 5100px height). To accommodate this we will make two passes over the image, one for the horizontal lines and one for the vertical lines.
Step5: Look at the grid lines for the image
We want to verify the horizontal grid lines and the vertical grid lines at the left side of the image.
Notice that the grid lines for the full image are not complete and do not line up exactly. Therefore, we will chop up the image further to see if shorter lines are better grid lines.
What we do care about is the leftmost vertical line. It should separate the grid line numbers from the rest of the grid.
Step6: Crop the left side to see if we can get better grid lines for the row labels
We're expecting two columns of cells on the left side of the image. The cells are rather long and typically have lots of whitespace toward the left end. We expect the 1st cell to have a row number and the 2nd cell to have the bird's species identification. We are going to look at the 1st cell to see if there is any writing in it. To help boost the signal we are going to chop the 1st cell at a fixed width and look at that part for writing.
Step7: Look for writing in the row label cells
Step8: Now split the right side into separate graphs
Step9: Look at the resulting grid
Step10: Find column labels
Step11: Look for forward slashes in grid cells
Step13: Stitch image parts back together to report output
This image shows how we broke up the input image to get the monthly charts. Doing it this way reduces distortion.
Step14: Output the results | Python Code:
%load_ext watermark
%watermark -a 'Raphael LaFrance' -i -u -v -r -g -p numpy,matplotlib,skimage
Explanation: Experiments for extracting images of Boyd's Bird Journal into computer readable form
(See image below)
The journals are PDFs containing a series of scanned images of observations of birds. The observations are scanned handwritten notes on graph paper. There are bird species labels running down the left side of the page and date information across the top. The charts are organized by month with days of the month being column headings. There are between 2 and three months of information for each image.
Each cell has a mark indicating the presence or absence of a bird species on a given day. So there is, potentially, one mark per bird species per day. The mark on the page is typically a forward slash "/" but it can also be an "x" or an asterisk "*". We are treating all types of marks the same, a cell either has a mark or it doesn't.
Some things to note here:
- The graphs are not clean and contain notes and stray marks.
- The scans do not always have nice strong lines to pick out.
- The scans of the graphs are crooked and contain distortions, so the lines are slightly bent, typically near the edges.
- Some of the lines are incomplete or missing. In the image below, May 1986 has more grid cells than June 1986. And the line to the left of May 1st is incomplete.
<img src="assets/Boyd_M_Bird_journal_section1-024.png"/>
End of explanation
%%bash
RAW_DATA='raw_data'
DIRECTORY='images'
PDF1="$RAW_DATA/Boyd_M_Bird_journal_section1.pdf"
PDF2="$RAW_DATA/Boyd_M_Bird_journal_section2.pdf"
PREFIX1="$DIRECTORY/Boyd_M_Bird_journal_section1"
PREFIX2="$DIRECTORY/Boyd_M_Bird_journal_section2"
if [ ! -d "$DIRECTORY" ]; then
mkdir $DIRECTORY
pdfimages -png $PDF1 $PREFIX1
pdfimages -png $PDF2 $PREFIX2
fi
Explanation: Extract images from PDF files
First we need to extract individual images from the PDFs. This is easily accomplished in Linux with the command pdfimages. This is part of either the poppler or xpdf packages. We're using bash to make a directory to hold the images and then extracting the PDF images into that directory. The first 20 images are not relevant here.
End of explanation
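If a pure-Python driver is ever preferred over the %%bash cell above, the same pdfimages extraction can be issued through subprocess. This is only a hedged sketch: it assumes pdfimages is on the PATH and reuses the file names from the bash version.
import os
import subprocess

RAW_DATA = 'raw_data'
DIRECTORY = 'images'
if not os.path.isdir(DIRECTORY):
    os.makedirs(DIRECTORY)
    for section in ('section1', 'section2'):
        pdf = os.path.join(RAW_DATA, 'Boyd_M_Bird_journal_{}.pdf'.format(section))
        prefix = os.path.join(DIRECTORY, 'Boyd_M_Bird_journal_{}'.format(section))
        subprocess.check_call(['pdfimages', '-png', pdf, prefix])   # same flags as the bash cell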
# %matplotlib notebook
%matplotlib inline
import os
import csv
from itertools import product
from collections import namedtuple
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches, cm
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# import cv2
from skimage import io
from skimage import util
# from skimage.filters import sobel
from skimage.transform import hough_line, hough_line_peaks
from skimage.transform import probabilistic_hough_line, rotate
from lib.util import Crop, too_close
from lib.cell import Cell
from lib.grid import Grid
from boyd_journal_extraction import get_month_graph_areas, build_month_graphs
from boyd_journal_extraction import init_csv_file, output_results, process_image
from boyd_journal_extraction import get_left_side
in_file = 'images/Boyd_M_Bird_journal_section1-024.png'
# in_file = 'images/Boyd_M_Bird_journal_section2-125.png'
CSV_PATH = 'output/boyd_bird_journal.csv'
Explanation: Setup
We are using a fairly standard scipy stack: numpy & matplotlib. The only addition is the use of scikit-image.
End of explanation
grid = Grid(file_name=in_file)
print(grid.image.shape)
Explanation: The Hough transform
We're using the Hough Transform to find lines in the image. See the Wikipedia page for a description.
Get the grid for the full image
End of explanation
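As a self-contained illustration of the transform itself (separate from the project's Grid class), skimage's hough_line can be run on a tiny synthetic image; each detected line comes back as an angle/distance pair. This sketch reuses the numpy and skimage imports above and is only a demonstration.
demo = np.zeros((100, 100), dtype=bool)
demo[50, :] = True    # one horizontal line
demo[:, 20] = True    # one vertical line
h, angles, dists = hough_line(demo)
for _, angle, dist in zip(*hough_line_peaks(h, angles, dists)):
    print('angle (deg): %6.1f    distance: %6.1f' % (np.rad2deg(angle), dist))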
grid.find_grid_lines()
Explanation: Get the horizontal and vertical grid lines
As described above, we need to define a line as a threshold on the line count. However, there is a wrinkle: the images are not square, with the width being the shorter dimension (3300px width x 5100px height). To accommodate this we will make two passes over the image, one for the horizontal lines and one for the vertical lines.
End of explanation
fig, ax = plt.subplots(figsize=(4, 8))
ax.imshow(grid.image, cmap=plt.cm.gray)
for ((x0, y0), (x1, y1)) in grid.horiz.lines:
ax.plot((x0, x1), (y0, y1), '-y', linewidth=1)
for ((x0, y0), (x1, y1)) in grid.vert.lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
Explanation: Look at the grid lines for the image
We want to verify the horizontal grid lines and the vertical grid lines at the left side of the image.
Notice that the grid lines for the full image are not complete and do not line up exactly. Therefore, we will chop up the image further to see if shorter lines are better grid lines.
What we do care about is the leftmost vertical line. It should separate the grid line numbers from the rest of the grid.
End of explanation
left_side = get_left_side(grid)
print(len(left_side.cells))
print(len(left_side.cells[0]))
fig, ax = plt.subplots(figsize=(4, 8))
ax.imshow(left_side.image, cmap=plt.cm.gray)
for ((x0, y0), (x1, y1)) in left_side.horiz.lines:
ax.plot((x0, x1), (y0, y1), '-y', linewidth=1)
for ((x0, y0), (x1, y1)) in left_side.vert.lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
Explanation: Crop the left side to see if we can get better grid lines for the row labels
We're expecting two columns of cells on the left side of the image. The cells are rather long and typically have lots of whitespace toward the left end. We expect the 1st cell to have a row number and the 2nd cell to have the bird's species identification. We are going to look at the 1st cell to see if there is any writing in it. To help boost the signal we are going to chop the 1st cell at a fixed width and look at that part for writing.
End of explanation
# @interact(row=(0, len(left_side.cells) - 1))
def draw_row_label_interior(row):
print('yes' if left_side.row_labels[row] else '')
inside = left_side.cells[row][0].interior(crop=Cell.crop)
print(np.mean(inside))
fig, ax = plt.subplots(figsize=(6, 2))
ax.imshow(inside, cmap=plt.cm.gray)
lines = left_side.cells[row][0].has_line(Cell.label_lines)
for ((x0, y0), (x1, y1)) in lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
draw_row_label_interior(24)
Explanation: Look for writing in the row label cells
End of explanation
months = get_month_graph_areas(grid, left_side)
for month in months:
print(month.width, month.height)
build_month_graphs(months)
for m, month in enumerate(months):
print('month: {} rows: {} cols: {}'.format(
m, len(month.cells), len(month.cells[0])))
Explanation: Now split the right side into separate graphs
End of explanation
# @interact(mon=(0, len(months) - 1))
def show_month_grid(mon):
month = months[mon]
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(month.image, cmap=plt.cm.gray)
ax.set_title('Grid {}'.format(mon))
for ((x0, y0), (x1, y1)) in month.horiz.lines:
ax.plot((x0, x1), (y0, y1), '-y', linewidth=1)
if not month.top or not month.bottom:
for ((x0, y0), (x1, y1)) in month.vert.lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
if month.top:
for ((x0, y0), (x1, y1)) in month.top.vert.lines:
ax.plot((x0, x1), (y0, y1), '-c', linewidth=1)
if month.bottom:
for ((x0, y0), (x1, y1)) in month.bottom.vert.lines:
y0 += month.bottom.offset.y
y1 += month.bottom.offset.y
ax.plot((x0, x1), (y0, y1), '-b', linewidth=1)
plt.tight_layout()
plt.show()
show_month_grid(0)
Explanation: Look at the resulting grid
End of explanation
# @interact(mon=(0, len(months) - 1), col=(0, 35))
def draw_column_header_interior(mon, col):
month = months[mon]
col = -1 if col >= len(month.cells[0]) else col
cell = month.cells[0][col]
interior = cell.interior(crop=Cell.crop)
mean = np.mean(interior)
print('mean', mean)
print('yes' if cell.is_label() else '')
print(interior.shape)
fig, ax = plt.subplots(figsize=(3, 3))
ax.imshow(interior, cmap=plt.cm.gray)
lines = cell.has_line()
for ((x0, y0), (x1, y1)) in lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
draw_column_header_interior(1, 2)
Explanation: Find column labels
End of explanation
# @interact(mon=(0, len(months) - 1), row=(1, 60), col=(0, 35))
def draw_cell_interior(mon, row, col):
month = months[mon]
row = -1 if row >= len(month.cells) else row
col = -1 if col >= len(month.cells[0]) else col
cell = month.cells[row][col]
interior = cell.interior(Cell.crop)
fig, ax = plt.subplots(figsize=(3, 3))
ax.imshow(interior, cmap=plt.cm.gray)
lines = probabilistic_hough_line(
interior, line_length=15, theta=Cell.forward_slashes)
for ((x0, y0), (x1, y1)) in lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
print('lines', len(lines))
print('yes' if len(lines) else '')
draw_cell_interior(1, 18, 31)
# draw_cell_interior(1, 27, 27)
# @interact(mon=(0, len(months) - 1))
def show_slashes(mon):
month = months[mon]
for r, row in enumerate(month.cells[1:]):
for col, cell in enumerate(row):
if month.col_labels[col]:
print('/' if cell.has_line(Cell.forward_slashes) else '.', end=' ')
print()
show_slashes(0)
Explanation: Look for forward slashes in grid cells
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(grid.image, cmap=plt.cm.gray)
for ((x0, y0), (x1, y1)) in left_side.horiz.lines:
ax.plot((x0, x1), (y0, y1), '-m', linewidth=1)
for ((x0, y0), (x1, y1)) in left_side.vert.lines:
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
for month in months:
for ((x0, y0), (x1, y1)) in month.horiz.lines:
x0 += month.offset.x
x1 += month.offset.x
y0 += month.offset.y
y1 += month.offset.y
ax.plot((x0, x1), (y0, y1), '-y', linewidth=1)
if not month.top or not month.bottom:
for ((x0, y0), (x1, y1)) in month.vert.lines:
x0 += month.offset.x
x1 += month.offset.x
y0 += month.offset.y
y1 += month.offset.y
ax.plot((x0, x1), (y0, y1), '-r', linewidth=1)
if month.top:
for ((x0, y0), (x1, y1)) in month.top.vert.lines:
x0 += month.top.offset.x
x1 += month.top.offset.x
y0 += month.top.offset.y
y1 += month.top.offset.y
ax.plot((x0, x1), (y0, y1), '-c', linewidth=1)
if month.bottom:
for ((x0, y0), (x1, y1)) in month.bottom.vert.lines:
x0 += month.bottom.offset.x
x1 += month.bottom.offset.x
y0 += month.bottom.offset.y
y1 += month.bottom.offset.y
ax.plot((x0, x1), (y0, y1), '-b', linewidth=1)
plt.show()
def get_month_graph_areas(grid, left_side):
    """Chop the right side image into images for each month."""
months = []
for label_idx, label in enumerate(left_side.row_labels[1:], 1):
prev_idx = label_idx - 1
prev_label = left_side.row_labels[prev_idx]
if label and not prev_label:
top_line = grid.horiz.lines[label_idx - 1]
elif not label and prev_label:
bottom_line = grid.horiz.lines[label_idx + 1]
months.append(crop_rows(grid, top_line, bottom_line))
return months
Explanation: Stitch image parts back together to report output
This image shows how we broke up the input image to get the monthly charts. Doing it this way reduces distortion.
End of explanation
CSV_PATH = 'output/boyd_bird_journal.csv'
output_results(in_file, CSV_PATH, grid, months, left_side)
plt.show()
Explanation: Output the results
End of explanation |
2,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo 2 - Remote parallel computation [distributed]
Demo for site visit | Brendan Smithyman | April 8, 2015
Choice of IPython / jupyter cluster profile
Step1: Importing libraries
numpy is the de facto standard Python numerical computing library
zephyr.Dispatcher is zephyr's primary parallel remote problem interface
IPython.parallel provides parallel task control (nominally, this is to be handled inside the Dispatcher object)
Step2: Plotting configuration
These lines import matplotlib, which is a standard Python plotting library, and configure the output formats for figures.
Step3: These lines define some plotting functions, which are used later.
Step4: System / modelling configuration
This code sets up the seismic problem; see the comments inline. In a live inversion problem this would most likely be read from a configuration file (but could be defined interactively for development purposes).
Properties of the grid and forward modelling
Step5: Properties of the model
Step6: Array geometry
Step7: Numerical / parallel parameters
Step8: Computed properties
Step9: Problem geometry
Step10: Configuration dictionary
(assembled from previous sections)
Step11: Parallel computations
This section runs each of the parallel computations on the remote worker nodes.
Set up problem
Create the Dispatcher object using the systemConfig dictionary as input
Spawn survey and problem interfaces, which implement the SimPEG standard properties
Generate a set of "transmitter" objects, each of which knows about its respective "receivers" (in seismic parlance, these would be "sources" and "receivers"; the term "transmitter" is more common in EM and potential fields geophysics)
Tell the dispatcher object about the transmitters
Step12: Forward modelling and backpropagation
Example (commented out) showing how to generate synthetic data using the SimPEG-style survey and problem interfaces. In this implementation, both are essentially expressions of the Dispatcher. The Dispatcher API has yet to be merged into SimPEG.
Step13: This code runs the forward modelling on the [remote] workers. It returns asynchronously, so the code can run in the background.
Step14: However, it will block if we ask for the data or wavefields
Step15: Results
We show the resulting data and wavefield properties.
Data selection
Step16: Geometry, data and forward wavefield | Python Code:
# profile = 'phobos' # remote workstation
# profile = 'pantheon' # remote cluster
profile = 'mpi' # local machine
Explanation: Demo 2 - Remote parallel computation [distributed]
Demo for site visit | Brendan Smithyman | April 8, 2015
Choice of IPython / jupyter cluster profile
End of explanation
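A quick way to confirm that the chosen profile actually has engines running is to connect a client directly. This is only a sanity-check sketch and assumes an ipcluster for that profile has already been started.
from IPython.parallel import Client
rc = Client(profile=profile)          # uses the profile string selected above
print('%d engines available' % len(rc.ids))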
import numpy as np
from zephyr.Dispatcher import SeisFDFDDispatcher
from IPython.parallel import Reference
Explanation: Importing libraries
numpy is the de facto standard Python numerical computing library
zephyr.Dispatcher is zephyr's primary parallel remote problem interface
IPython.parallel provides parallel task control (nominally, this is to be handled inside the Dispatcher object)
End of explanation
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
import mpld3
mpld3.enable_notebook()
Explanation: Plotting configuration
These lines import matplotlib, which is a standard Python plotting library, and configure the output formats for figures.
End of explanation
lclip = 2000
hclip = 3000
clipscale = 0.1
sms = 0.5
rms = 0.5
def plotField(u):
clip = clipscale*abs(u).max()
plt.imshow(u.real, cmap=cm.bwr, vmin=-clip, vmax=clip)
def plotModel(v):
plt.imshow(v.real, cmap=cm.jet, vmin=lclip, vmax=hclip)
def plotGeometry(geom):
srcpos = geom['src'][:,::2]
recpos = geom['rec'][:,::2]
axistemp = plt.axis()
plt.plot(srcpos[:,0], srcpos[:,1], 'kx', markersize=sms)
plt.plot(recpos[:,0], recpos[:,1], 'kv', markersize=rms)
plt.axis(axistemp)
Explanation: These lines define some plotting functions, which are used later.
End of explanation
cellSize = 1 # m
nx = 164 # count
nz = 264 # count
freqs = [1e2] # Hz
freeSurf = [False, False, False, False] # t r b l
nPML = 32 # number of PML points
nky = 80 # number of y-directional plane-wave components
Explanation: System / modelling configuration
This code sets up the seismic problem; see the comments inline. In a live inversion problem this would most likely be read from a configuration file (but could be defined interactively for development purposes).
Properties of the grid and forward modelling
End of explanation
velocity = 2500 # m/s
vanom = 500 # m/s
density = 2700 # units of density
Q = 500 # can be inf
Explanation: Properties of the model
End of explanation
srcs = np.array([np.ones(101)*32, np.zeros(101), np.linspace(32, 232, 101)]).T
recs = np.array([np.ones(101)*132, np.zeros(101), np.linspace(32, 232, 101)]).T
nsrc = len(srcs)
nrec = len(recs)
recmode = 'fixed'
geom = {
'src': srcs,
'rec': recs,
'mode': 'fixed',
}
Explanation: Array geometry
End of explanation
cache = False # whether to cache computed wavefields for a given source
cacheDir = '.'
parFac = 2
chunksPerWorker = 0.5 # NB: parFac * chunksPerWorker = number of source array subsets
ensembleClear = False
Explanation: Numerical / parallel parameters
End of explanation
dims = (nx,nz) # tuple
rho = np.fliplr(np.ones(dims) * density)
nfreq = len(freqs) # number of frequencies
nsp = nfreq * nky # total number of 2D subproblems
cPert = np.zeros(dims)
cPert[(nx/2)-20:(nx/2)+20,(nz/2)-20:(nz/2)+20] = vanom
c = np.fliplr(np.ones(dims) * velocity)
cFlat = c
c += np.fliplr(cPert)
cTrue = c
Explanation: Computed properties
End of explanation
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plotModel(c.T)
plotGeometry(geom)
ax1.set_title('Velocity Model')
ax1.set_xlabel('X')
ax1.set_ylabel('Z')
fig.tight_layout()
Explanation: Problem geometry
End of explanation
# Base configuration for all subproblems
systemConfig = {
'dx': cellSize, # m
'dz': cellSize, # m
'c': c.T, # m/s
'rho': rho.T, # density
'Q': Q, # can be inf
'nx': nx, # count
'nz': nz, # count
'freeSurf': freeSurf, # t r b l
'nPML': nPML,
'geom': geom,
'cache': cache,
'cacheDir': cacheDir,
'freqs': freqs,
'nky': nky,
'parFac': parFac,
'chunksPerWorker': chunksPerWorker,
'profile': profile,
'ensembleClear': ensembleClear,
# 'MPI': False,
# 'Solver': Reference('SimPEG.SolverWrapD(scipy.sparse.linalg.splu)'),#Solver,
}
Explanation: Configuration dictionary
(assembled from previous sections)
End of explanation
%%time
sp = SeisFDFDDispatcher(systemConfig)
survey, problem = sp.spawnInterfaces()
sxs = survey.genSrc()
sp.srcs = sxs
Explanation: Parallel computations
This section runs each of the parallel computations on the remote worker nodes.
Set up problem
Create the Dispatcher object using the systemConfig dictionary as input
Spawn survey and problem interfaces, which implement the SimPEG standard properties
Generate a set of "transmitter" objects, each of which knows about its respective "receivers" (in seismic parlance, these would be "sources" and "receivers"; the term "transmitter" is more common in EM and potential fields geophysics)
Tell the dispatcher object about the transmitters
End of explanation
# d = survey.projectFields()
# uF = problem.fields()
Explanation: Forward modelling and backpropagation
Example (commented out) showing how to generate synthetic data using the SimPEG-style survey and problem interfaces. In this implementation, both are essentially expressions of the Dispatcher. The Dispatcher API has yet to be merged into SimPEG.
End of explanation
%%time
sp.forward()
sp.forwardGraph
Explanation: This code runs the forward modelling on the [remote] workers. It returns asynchronously, so the code can run in the background.
End of explanation
%%time
d = sp.dPred
uF = sp.uF
d[0].shape
Explanation: However, it will block if we ask for the data or wavefields:
End of explanation
freqNum = 0
srcNum = 0
frt = uF[freqNum]
drt = d[freqNum]
clipScaleF = 1e-1 * abs(frt[srcNum]).max()
Explanation: Results
We show the resulting data and wavefield properties.
Data selection:
End of explanation
fig = plt.figure()
ax1 = fig.add_subplot(1,3,1)
plotModel(c.T)
plotGeometry(geom)
ax1.set_title('Velocity Model')
ax1.set_xlabel('X')
ax1.set_ylabel('Z')
ax2 = fig.add_subplot(1,3,2)
plt.imshow(drt.real, cmap=cm.bwr)
ax2.set_title('Real part of d: $\omega = %0.1f$'%(freqs[freqNum],))
ax2.set_xlabel('Receiver #')
ax2.set_ylabel('Source #')
ax3 = fig.add_subplot(1,3,3)
plt.imshow(frt[srcNum].real, vmin=-clipScaleF, vmax=clipScaleF, cmap=cm.bwr)
plt.title('uF: $\omega = %0.1f$, src. %d'%(freqs[freqNum], srcNum))
fig.tight_layout()
Explanation: Geometry, data and forward wavefield:
End of explanation |
2,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook an estimator for the Volume will be trained. No hyperparameters will be searched for, and the ones from the 'Close' values estimator will be used instead.
Step1: Let's generate the datasets
Step2: Let's generate the test dataset, also
Step3: Let's train a predictor for the 'Volume' with the same hyperparameters as for the 'Close' one. | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from sklearn.externals import joblib
import utils.preprocessing as pp
import predictor.feature_extraction as fe
Explanation: In this notebook an estimator for the Volume will be trained. No hyperparameters will be searched for, and the ones from the 'Close' values estimator will be used instead.
End of explanation
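The r2_score and median_absolute_error imports above are the metrics used to judge the estimator later on. As a small reminder of what they report, here is a toy call on made-up numbers, not project data.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
print('r2 score:', r2_score(y_true, y_hat))
print('median absolute error:', median_absolute_error(y_true, y_hat))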
def generate_one_set(params):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
tic = time()
train_val_time = int(params['train_val_time'])
base_days = int(params['base_days'])
step_days = int(params['step_days'])
ahead_days = int(params['ahead_days'])
print('Generating: base{}_ahead{}'.format(base_days, ahead_days))
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Getting the data
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
today = data_df.index[-1] # Real date
print(pid + ') data_df loaded')
# Drop symbols with many missing points
data_df = pp.drop_irrelevant_symbols(data_df, params['GOOD_DATA_RATIO'])
print(pid + ') Irrelevant symbols dropped.')
# Generate the intervals for the predictor
x, y = fe.generate_train_intervals(data_df,
train_val_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_volume_one_to_one,
target_feature=fe.VOLUME_FEATURE)
print(pid + ') Intervals generated')
# Drop "bad" samples and fill missing data
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO'])
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
print(pid + ') Irrelevant samples dropped and missing data filled.')
# Pickle that
x.to_pickle('../../data/x_volume_{}.pkl'.format(pid))
y.to_pickle('../../data/y_volume_{}.pkl'.format(pid))
toc = time()
print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic)))
return pid, x, y
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl').loc[1,:]
to_drop = [
'model',
'mre',
'r2',
'x_filename',
'y_filename',
'train_days'
]
best_params_df.drop(to_drop, inplace=True)
best_params_df
generate_one_set(best_params_df)
x_volume = pd.read_pickle('../../data/x_volume_base112_ahead1.pkl')
print(x_volume.shape)
x_volume.head()
y_volume = pd.read_pickle('../../data/y_volume_base112_ahead1.pkl')
print(y_volume.shape)
y_volume.head()
Explanation: Let's generate the datasets
End of explanation
def generate_one_test_set(params, data_df):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
tic = time()
train_val_time = int(params['train_val_time'])
base_days = int(params['base_days'])
step_days = int(params['step_days'])
ahead_days = int(params['ahead_days'])
print('Generating: base{}_ahead{}'.format(base_days, ahead_days))
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Getting the data
today = data_df.index[-1] # Real date
print(pid + ') data_df loaded')
# Drop symbols with many missing points
y_train_df = pd.read_pickle('../../data/y_volume_{}.pkl'.format(pid))
kept_symbols = y_train_df.index.get_level_values(1).unique().tolist()
data_df = data_df.loc[:, (slice(None), kept_symbols)]
print(pid + ') Irrelevant symbols dropped.')
# Generate the intervals for the predictor
x, y = fe.generate_train_intervals(data_df,
train_val_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_volume_one_to_one,
target_feature=fe.VOLUME_FEATURE)
print(pid + ') Intervals generated')
# Drop "bad" samples and fill missing data
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO'])
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
print(pid + ') Irrelevant samples dropped and missing data filled.')
# Pickle that
x.to_pickle('../../data/x_volume_{}_test.pkl'.format(pid))
y.to_pickle('../../data/y_volume_{}_test.pkl'.format(pid))
toc = time()
print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic)))
    return pid, x, y
data_test_df = pd.read_pickle('../../data/data_test_df.pkl')
generate_one_test_set(best_params_df, data_test_df)
x_volume_test = pd.read_pickle('../../data/x_volume_base112_ahead1_test.pkl')
print(x_volume_test.shape)
x_volume_test.head()
y_volume_test = pd.read_pickle('../../data/y_volume_base112_ahead1_test.pkl')
print(y_volume_test.shape)
y_volume_test.head()
Explanation: Let's generate the test dataset, also
End of explanation
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl')
import predictor.feature_extraction as fe
from predictor.linear_predictor import LinearPredictor
import utils.misc as misc
import predictor.evaluation as ev
ahead_days = 1
# Get some parameters
train_days = int(best_params_df.loc[ahead_days, 'train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:])
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Get the datasets
x_train = pd.read_pickle('../../data/x_volume_{}.pkl'.format(pid))
y_train = pd.read_pickle('../../data/y_volume_{}.pkl'.format(pid))
x_test = pd.read_pickle('../../data/x_volume_{}_test.pkl'.format(pid)).sort_index()
y_test = pd.DataFrame(pd.read_pickle('../../data/y_volume_{}_test.pkl'.format(pid))).sort_index()
# Let's cut the training set to use only the required number of samples
end_date = x_train.index.levels[0][-1]
start_date = fe.add_market_days(end_date, -train_days)
x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:]
y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))])
# Create the estimator and train
estimator = LinearPredictor()
estimator.fit(x_sub_df, y_sub_df)
# Get the training and test predictions
y_train_pred = estimator.predict(x_sub_df)
y_test_pred = estimator.predict(x_test)
# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)
# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70))
# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()
plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
joblib.dump(estimator, '../../data/best_volume_predictor.pkl')
Explanation: Let's train a predictor for the 'Volume' with the same hyperparameters as for the 'Close' one.
End of explanation |
2,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Take a look at a typical review. This one is labeled "negative"
Step2: Check for missing values
Step3: 35 records show NaN (this stands for "not a number" and is equivalent to None). These are easily removed using the .dropna() pandas function.
<div class="alert alert-info" style="margin
Step4: Detect & remove empty strings
Technically, we're dealing with "whitespace only" strings. If the original .tsv file had contained empty strings, pandas .read_csv() would have assigned NaN values to those cells by default.
In order to detect these strings we need to iterate over each row in the DataFrame. The .itertuples() pandas method is a good tool for this as it provides access to every field. For brevity we'll assign the names i, lb and rv to the index, label and review columns.
Step5: Next we'll pass our list of index numbers to the .drop() method, and set inplace=True to make the change permanent.
Step6: Great! We dropped 62 records from the original 2000. Let's continue with the analysis.
Take a quick look at the label column
Step7: Split the data into train & test sets
Step8: Build pipelines to vectorize the data, then train and fit a model
Now that we have sets to train and test, we'll develop a selection of pipelines, each with a different model.
Step9: Feed the training data through the first pipeline
We'll run naïve Bayes first
Step10: Run predictions and analyze the results (naïve Bayes)
Step11: Naïve Bayes gave us better-than-average results at 76.4% for classifying reviews as positive or negative based on text alone. Let's see if we can do better.
Feed the training data through the second pipeline
Next we'll run Linear SVC
Step12: Run predictions and analyze the results (Linear SVC)
Step13: Not bad! Based on text alone we correctly classified reviews as positive or negative 84.7% of the time. In an upcoming section we'll try to improve this score even further by performing sentiment analysis on the reviews.
Advanced Topic - Adding Stopwords to CountVectorizer
By default, CountVectorizer and TfidfVectorizer do not filter stopwords. However, they offer some optional settings, including passing in your own stopword list.
<div class="alert alert-info" style="margin
Step14: Now let's repeat the process above and see if the removal of stopwords improves or impairs our score.
Step15: Our score didn't change that much. We went from 84.7% without filtering stopwords to 84.4% after adding a stopword filter to our pipeline. Keep in mind that 2000 movie reviews is a relatively small dataset. The real gain from stripping stopwords is improved processing speed; depending on the size of the corpus, it might save hours.
Feed new data into a trained model
Once we've developed a fairly accurate model, it's time to feed new data through it. In this last section we'll write our own review, and see how accurately our model assigns a "positive" or "negative" label to it.
First, train the model
Step16: Next, feed new data to the model's predict() method | Python Code:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/moviereviews.tsv', sep='\t')
df.head()
len(df)
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Text Classification Project
Now we're at the point where we should be able to:
* Read in a collection of documents - a corpus
* Transform text into numerical vector data using a pipeline
* Create a classifier
* Fit/train the classifier
* Test the classifier on new data
* Evaluate performance
For this project we'll use the Cornell University Movie Review polarity dataset v2.0 obtained from http://www.cs.cornell.edu/people/pabo/movie-review-data/
In this exercise we'll try to develop a classification model as we did for the SMSSpamCollection dataset - that is, we'll try to predict the Positive/Negative labels based on text content alone. In an upcoming section we'll apply Sentiment Analysis to train models that have a deeper understanding of each review.
Perform imports and load the dataset
The dataset contains the text of 2000 movie reviews. 1000 are positive, 1000 are negative, and the text has been preprocessed as a tab-delimited file.
End of explanation
from IPython.display import Markdown, display
display(Markdown('> '+df['review'][0]))
Explanation: Take a look at a typical review. This one is labeled "negative":
End of explanation
# Check for the existence of NaN values in a cell:
df.isnull().sum()
Explanation: Check for missing values:
We have intentionally included records with missing data. Some have NaN values, others have short strings composed of only spaces. This might happen if a reviewer declined to provide a comment with their review. We will show two ways using pandas to identify and remove records containing empty data.
* NaN records are efficiently handled with .isnull() and .dropna()
* Strings that contain only whitespace can be handled with .isspace(), .itertuples(), and .drop()
Detect & remove NaN values:
End of explanation
df.dropna(inplace=True)
len(df)
Explanation: 35 records show NaN (this stands for "not a number" and is equivalent to None). These are easily removed using the .dropna() pandas function.
<div class="alert alert-info" style="margin: 20px">CAUTION: By setting inplace=True, we permanently affect the DataFrame currently in memory, and this can't be undone. However, it does *not* affect the original source data. If we needed to, we could always load the original DataFrame from scratch.</div>
End of explanation
blanks = [] # start with an empty list
for i,lb,rv in df.itertuples(): # iterate over the DataFrame
if type(rv)==str: # avoid NaN values
if rv.isspace(): # test 'review' for whitespace
blanks.append(i) # add matching index numbers to the list
print(len(blanks), 'blanks: ', blanks)
Explanation: Detect & remove empty strings
Technically, we're dealing with "whitespace only" strings. If the original .tsv file had contained empty strings, pandas .read_csv() would have assigned NaN values to those cells by default.
In order to detect these strings we need to iterate over each row in the DataFrame. The .itertuples() pandas method is a good tool for this as it provides access to every field. For brevity we'll assign the names i, lb and rv to the index, label and review columns.
End of explanation
df.drop(blanks, inplace=True)
len(df)
Explanation: Next we'll pass our list of index numbers to the .drop() method, and set inplace=True to make the change permanent.
End of explanation
df['label'].value_counts()
Explanation: Great! We dropped 62 records from the original 2000. Let's continue with the analysis.
Take a quick look at the label column:
End of explanation
from sklearn.model_selection import train_test_split
X = df['review']
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Explanation: Split the data into train & test sets:
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
# Naïve Bayes:
text_clf_nb = Pipeline([('tfidf', TfidfVectorizer()),
('clf', MultinomialNB()),
])
# Linear SVC:
text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
Explanation: Build pipelines to vectorize the data, then train and fit a model
Now that we have sets to train and test, we'll develop a selection of pipelines, each with a different model.
End of explanation
text_clf_nb.fit(X_train, y_train)
Explanation: Feed the training data through the first pipeline
We'll run naïve Bayes first
End of explanation
# Form a prediction set
predictions = text_clf_nb.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
Explanation: Run predictions and analyze the results (naïve Bayes)
End of explanation
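A single train/test split can be noisy on a corpus this small, so a cross-validated score is a useful complement. This is an optional sketch using scikit-learn's cross_val_score on the same naïve Bayes pipeline; the choice of 5 folds is arbitrary.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(text_clf_nb, X, y, cv=5, scoring='accuracy')
print(cv_scores.mean(), cv_scores.std())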
text_clf_lsvc.fit(X_train, y_train)
Explanation: Naïve Bayes gave us better-than-average results at 76.4% for classifying reviews as positive or negative based on text alone. Let's see if we can do better.
Feed the training data through the second pipeline
Next we'll run Linear SVC
End of explanation
# Form a prediction set
predictions = text_clf_lsvc.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
Explanation: Run predictions and analyze the results (Linear SVC)
End of explanation
stopwords = ['a', 'about', 'an', 'and', 'are', 'as', 'at', 'be', 'been', 'but', 'by', 'can', \
'even', 'ever', 'for', 'from', 'get', 'had', 'has', 'have', 'he', 'her', 'hers', 'his', \
'how', 'i', 'if', 'in', 'into', 'is', 'it', 'its', 'just', 'me', 'my', 'of', 'on', 'or', \
'see', 'seen', 'she', 'so', 'than', 'that', 'the', 'their', 'there', 'they', 'this', \
'to', 'was', 'we', 'were', 'what', 'when', 'which', 'who', 'will', 'with', 'you']
Explanation: Not bad! Based on text alone we correctly classified reviews as positive or negative 84.7% of the time. In an upcoming section we'll try to improve this score even further by performing sentiment analysis on the reviews.
Advanced Topic - Adding Stopwords to CountVectorizer
By default, CountVectorizer and TfidfVectorizer do not filter stopwords. However, they offer some optional settings, including passing in your own stopword list.
<div class="alert alert-info" style="margin: 20px">CAUTION: There are some [known issues](http://aclweb.org/anthology/W18-2502) using Scikit-learn's built-in stopwords list. Some words that are filtered may in fact aid in classification. In this section we'll pass in our own stopword list, so that we know exactly what's being filtered.</div>
The CountVectorizer class accepts the following arguments:
CountVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)
TfidfVectorizer supports the same arguments and more. Under stop_words we have the following options:
stop_words : string {'english'}, list, or None (default)
That is, we can run TfidfVectorizer(stop_words='english') to accept scikit-learn's built-in list,<br>
or TfidfVectorizer(stop_words=['a', 'and', 'the']) to filter these three words. In practice we would assign our list to a variable and pass that in instead.
Scikit-learn's built-in list contains 318 stopwords:
<pre>from sklearn.feature_extraction import text
print(text.ENGLISH_STOP_WORDS)</pre>
['a', 'about', 'above', 'across', 'after', 'afterwards', 'again', 'against', 'all', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', 'am', 'among', 'amongst', 'amoungst', 'amount', 'an', 'and', 'another', 'any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere', 'are', 'around', 'as', 'at', 'back', 'be', 'became', 'because', 'become', 'becomes', 'becoming', 'been', 'before', 'beforehand', 'behind', 'being', 'below', 'beside', 'besides', 'between', 'beyond', 'bill', 'both', 'bottom', 'but', 'by', 'call', 'can', 'cannot', 'cant', 'co', 'con', 'could', 'couldnt', 'cry', 'de', 'describe', 'detail', 'do', 'done', 'down', 'due', 'during', 'each', 'eg', 'eight', 'either', 'eleven', 'else', 'elsewhere', 'empty', 'enough', 'etc', 'even', 'ever', 'every', 'everyone', 'everything', 'everywhere', 'except', 'few', 'fifteen', 'fifty', 'fill', 'find', 'fire', 'first', 'five', 'for', 'former', 'formerly', 'forty', 'found', 'four', 'from', 'front', 'full', 'further', 'get', 'give', 'go', 'had', 'has', 'hasnt', 'have', 'he', 'hence', 'her', 'here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers', 'herself', 'him', 'himself', 'his', 'how', 'however', 'hundred', 'i', 'ie', 'if', 'in', 'inc', 'indeed', 'interest', 'into', 'is', 'it', 'its', 'itself', 'keep', 'last', 'latter', 'latterly', 'least', 'less', 'ltd', 'made', 'many', 'may', 'me', 'meanwhile', 'might', 'mill', 'mine', 'more', 'moreover', 'most', 'mostly', 'move', 'much', 'must', 'my', 'myself', 'name', 'namely', 'neither', 'never', 'nevertheless', 'next', 'nine', 'no', 'nobody', 'none', 'noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of', 'off', 'often', 'on', 'once', 'one', 'only', 'onto', 'or', 'other', 'others', 'otherwise', 'our', 'ours', 'ourselves', 'out', 'over', 'own', 'part', 'per', 'perhaps', 'please', 'put', 'rather', 're', 'same', 'see', 'seem', 'seemed', 'seeming', 'seems', 'serious', 'several', 'she', 'should', 'show', 'side', 'since', 'sincere', 'six', 'sixty', 'so', 'some', 'somehow', 'someone', 'something', 'sometime', 'sometimes', 'somewhere', 'still', 'such', 'system', 'take', 'ten', 'than', 'that', 'the', 'their', 'them', 'themselves', 'then', 'thence', 'there', 'thereafter', 'thereby', 'therefore', 'therein', 'thereupon', 'these', 'they', 'thick', 'thin', 'third', 'this', 'those', 'though', 'three', 'through', 'throughout', 'thru', 'thus', 'to', 'together', 'too', 'top', 'toward', 'towards', 'twelve', 'twenty', 'two', 'un', 'under', 'until', 'up', 'upon', 'us', 'very', 'via', 'was', 'we', 'well', 'were', 'what', 'whatever', 'when', 'whence', 'whenever', 'where', 'whereafter', 'whereas', 'whereby', 'wherein', 'whereupon', 'wherever', 'whether', 'which', 'while', 'whither', 'who', 'whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with', 'within', 'without', 'would', 'yet', 'you', 'your', 'yours', 'yourself', 'yourselves']
However, there are words in this list that may influence a classification of movie reviews. With this in mind, let's trim the list to just 60 words:
End of explanation
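Before wiring the trimmed list into a pipeline, it can help to see its effect on the vocabulary directly. A minimal sketch on a made-up two-sentence corpus:
from sklearn.feature_extraction.text import CountVectorizer
toy_corpus = ["this movie was a great movie", "the plot of this movie was thin"]
print(sorted(CountVectorizer().fit(toy_corpus).vocabulary_))                      # unfiltered vocabulary
print(sorted(CountVectorizer(stop_words=stopwords).fit(toy_corpus).vocabulary_))  # with the trimmed stopword list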
# YOU DO NOT NEED TO RUN THIS CELL UNLESS YOU HAVE
# RECENTLY OPENED THIS NOTEBOOK OR RESTARTED THE KERNEL:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/moviereviews.tsv', sep='\t')
df.dropna(inplace=True)
blanks = []
for i,lb,rv in df.itertuples():
if type(rv)==str:
if rv.isspace():
blanks.append(i)
df.drop(blanks, inplace=True)
from sklearn.model_selection import train_test_split
X = df['review']
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn import metrics
# RUN THIS CELL TO ADD STOPWORDS TO THE LINEAR SVC PIPELINE:
text_clf_lsvc2 = Pipeline([('tfidf', TfidfVectorizer(stop_words=stopwords)),
('clf', LinearSVC()),
])
text_clf_lsvc2.fit(X_train, y_train)
predictions = text_clf_lsvc2.predict(X_test)
print(metrics.confusion_matrix(y_test,predictions))
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
Explanation: Now let's repeat the process above and see if the removal of stopwords improves or impairs our score.
End of explanation
# YOU DO NOT NEED TO RUN THIS CELL UNLESS YOU HAVE
# RECENTLY OPENED THIS NOTEBOOK OR RESTARTED THE KERNEL:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/moviereviews.tsv', sep='\t')
df.dropna(inplace=True)
blanks = []
for i,lb,rv in df.itertuples():
if type(rv)==str:
if rv.isspace():
blanks.append(i)
df.drop(blanks, inplace=True)
from sklearn.model_selection import train_test_split
X = df['review']
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn import metrics
# Naïve Bayes Model:
text_clf_nb = Pipeline([('tfidf', TfidfVectorizer()),
('clf', MultinomialNB()),
])
# Linear SVC Model:
text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
# Train both models on the moviereviews.tsv training set:
text_clf_nb.fit(X_train, y_train)
text_clf_lsvc.fit(X_train, y_train)
Explanation: Our score didn't change that much. We went from 84.7% without filtering stopwords to 84.4% after adding a stopword filter to our pipeline. Keep in mind that 2000 movie reviews is a relatively small dataset. The real gain from stripping stopwords is improved processing speed; depending on the size of the corpus, it might save hours.
Feed new data into a trained model
Once we've developed a fairly accurate model, it's time to feed new data through it. In this last section we'll write our own review, and see how accurately our model assigns a "positive" or "negative" label to it.
First, train the model
End of explanation
myreview = "A movie I really wanted to love was terrible. \
I'm sure the producers had the best intentions, but the execution was lacking."
# Use this space to write your own review. Experiment with different lengths and writing styles.
# myreview = "Your own review text goes here"   # uncomment and edit to test a different review
print(text_clf_nb.predict([myreview])) # be sure to put "myreview" inside square brackets
print(text_clf_lsvc.predict([myreview]))
Explanation: Next, feed new data to the model's predict() method
End of explanation |
2,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Line Plots
First, set up the notebook plotting environment
Step1: For all Matplotlib plots, we start by creating a figure and an axes. In their simplest form, the figure and axes can be created as follows:
Step2: In Matplotlib, the figure (an instance of the class plt.Figure) can be thought of as a single container that holds all the objects representing axes, graphics, text, and labels. The axes (an instance of the class plt.Axes) is what we saw above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization. Throughout this book we will commonly use the variable name fig to refer to a figure instance, and ax to refer to an axes instance or a group of axes instances.
Once we have created an axes, we can use the ax.plot function to plot some data. Let's start with a simple sine wave:
Step3: Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background:
Step4: If we want to draw several curves in a single figure, we can simply call plot() multiple times
Step5: That's all there is to plotting simple functions in Matplotlib! We'll now dive into some more details about how to control the appearance of the axes and lines.
Adjusting the Plot: Line Colors and Styles
The first adjustment you might wish to make to a plot is to control the line colors and styles. The plt.plot() function takes additional arguments that can be used to specify these. To adjust the color, you can use the color keyword, which accepts a string argument representing virtually any imaginable color. The color can be specified in a variety of ways:
Step6: If no color is specified, Matplotlib will automatically cycle through a set of default colors for multiple lines.
Similarly, the line style can be adjusted using the linestyle keyword:
Step7: If you would like to be more terse, these linestyle and color codes can be combined into a single non-keyword argument to the plt.plot() function:
Step8: These single-character color codes reflect the standard abbreviations in the RGB (Red/Green/Blue) and CMYK (Cyan/Magenta/Yellow/blacK) color systems, commonly used for digital color graphics.
There are many other keyword arguments that can be used to fine-tune the appearance of the plot; for more details, it is suggested to view the docstring of the plt.plot() function using IPython's help tools (see Help and Documentation in IPython).
Adjusting the Plot: Axes Limits
Matplotlib does a decent job of choosing default axes limits for your plot, but sometimes finer control is nice. The most basic way to adjust axis limits is to use the plt.xlim() and plt.ylim() methods:
Step9: A useful related method is plt.axis() (note here the potential confusion between axes with an e and axis with an i). The plt.axis() method allows you to set the x and y limits with a single call, by passing a list that specifies [xmin, xmax, ymin, ymax]:
Step10: The plt.axis() method goes even beyond this, allowing you to do things like automatically tightening the bounds around the current plot:
Step11: It even allows higher-level specifications, such as ensuring an equal aspect ratio so that one unit in x is equal to one unit in y on the screen:
Step12: For more information on axis limits and the other capabilities of the plt.axis method, refer to the plt.axis docstring.
Labeling Plots
As the last piece of this section, we'll briefly look at the labeling of plots: titles, axis labels, and simple legends.
Titles and axis labels are the simplest such labels; there are methods that can be used to quickly set them:
Step13: The position, size, and style of these labels can be adjusted using optional arguments to the functions. For more information, see the Matplotlib documentation and the docstrings of each of these functions.
When multiple lines are being shown within a single axes, it can be useful to create a plot legend that labels each line type. Again, Matplotlib has a built-in way of quickly creating such a legend; it is done via the plt.legend() method. Though there are several valid ways of using this, I find it easiest to specify the label of each line using the label keyword of the plot function:
Step14: As you can see, the plt.legend() function keeps track of the line styles and colors, and matches these with the correct label. More information on specifying and formatting plot legends can be found in the plt.legend docstring; additionally, we will cover some more advanced legend options in "Customizing Plot Legends".
Matplotlib Gotchas
While most plt functions translate directly to ax methods (such as plt.plot() → ax.plot(), plt.legend() → ax.legend(), and so on), this is not the case for all commands. In particular, the functions to set limits, labels, and titles are slightly modified. For transitioning between MATLAB-style functions and object-oriented methods, make the following changes:
plt.xlabel() → ax.set_xlabel()
plt.ylabel() → ax.set_ylabel()
plt.xlim() → ax.set_xlim()
plt.ylim() → ax.set_ylim()
plt.title() → ax.set_title()
In the object-oriented interface to plotting, rather than calling these functions individually, it is often more convenient to use the ax.set() method to set all of these properties at once: | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# use the seaborn-whitegrid style
plt.style.use('seaborn-whitegrid')
import numpy as np
Explanation: Simple Line Plots
First, set up the notebook plotting environment
End of explanation
fig = plt.figure()
ax = plt.axes()
Explanation: For all Matplotlib plots, we start by creating a figure and an axes. In their simplest form, the figure and axes can be created as follows:
End of explanation
fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x));
Explanation: In Matplotlib, the figure (an instance of the class plt.Figure) can be thought of as a single container that holds all the objects representing axes, graphics, text, and labels. The axes (an instance of the class plt.Axes) is what we saw above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization. Throughout this book we will commonly use the variable name fig to refer to a figure instance, and ax to refer to an axes instance or a group of axes instances.
Once we have created an axes, we can use the ax.plot function to plot some data. Let's start with a simple sine wave:
End of explanation
plt.plot(x, np.sin(x));
Explanation: Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background:
End of explanation
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x));
Explanation: If we want to draw several curves in a single figure, we can simply call plot() multiple times
End of explanation
plt.plot(x, np.sin(x - 0), color='blue') # specify color by name
plt.plot(x, np.sin(x - 1), color='g') # short color code (rgbcmyk)
plt.plot(x, np.sin(x - 2), color='0.75') # Grayscale between 0 and 1
plt.plot(x, np.sin(x - 3), color='#FFDD44') # Hex code (RRGGBB from 00 to FF)
plt.plot(x, np.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1
plt.plot(x, np.sin(x - 5), color='chartreuse'); # all HTML color names supporte
Explanation: That's all there is to plotting simple functions in Matplotlib! We'll now dive into some more details about how to control the appearance of the axes and lines.
Adjusting the Plot: Line Colors and Styles
The first adjustment you might wish to make to a plot is to control the line colors and styles. The plt.plot() function takes additional arguments that can be used to specify these. To adjust the color, you can use the color keyword, which accepts a string argument representing virtually any imaginable color. The color can be specified in a variety of ways:
End of explanation
plt.plot(x, x + 0, linestyle='solid')
plt.plot(x, x + 1, linestyle='dashed')
plt.plot(x, x + 2, linestyle='dashdot')
plt.plot(x, x + 3, linestyle='dotted');
# the style codes below can also be used instead of the full words
plt.plot(x, x + 4, linestyle='-') # solid
plt.plot(x, x + 5, linestyle='--') # dashed
plt.plot(x, x + 6, linestyle='-.') # dashdot
plt.plot(x, x + 7, linestyle=':'); # dotted
Explanation: If no color is specified, Matplotlib will automatically cycle through a set of default colors for multiple lines.
Similarly, the line style can be adjusted using the linestyle keyword:
End of explanation
plt.plot(x, x + 0, '-g') # solid green
plt.plot(x, x + 1, '--c') # dashed cyan
plt.plot(x, x + 2, '-.k') # dashdot black
plt.plot(x, x + 3, ':r'); # dotted red
Explanation: If you would like to be more terse, these linestyle and color codes can be combined into a single non-keyword argument to the plt.plot() function:
End of explanation
plt.plot(x, np.sin(x))
# set the x-axis range to (-1, 11) and the y-axis range to (-1.5, 1.5)
plt.xlim(-1, 11)
plt.ylim(-1.5, 1.5);
Explanation: These single-character color codes reflect the standard abbreviations in the RGB (Red/Green/Blue) and CMYK (Cyan/Magenta/Yellow/blacK) color systems, commonly used for digital color graphics.
There are many other keyword arguments that can be used to fine-tune the appearance of the plot; for more details, it is suggested to view the docstring of the plt.plot() function using IPython's help tools (see Help and Documentation in IPython).
Adjusting the Plot: Axes Limits
Matplotlib does a decent job of choosing default axes limits for your plot, but sometimes finer control is nice. The most basic way to adjust axis limits is to use the plt.xlim() and plt.ylim() methods:
End of explanation
plt.plot(x, np.cos(x),'--y')
plt.axis([-1, 11, -1.5, 1.5])
Explanation: A useful related method is plt.axis() (note here the potential confusion between axes with an e and axis with an i). The plt.axis() method allows you to set the x and y limits with a single call, by passing a list that specifies [xmin, xmax, ymin, ymax]:
End of explanation
plt.plot(x, np.sin(x))
plt.axis('tight');
Explanation: The plt.axis() method goes even beyond this, allowing you to do things like automatically tightening the bounds around the current plot:
End of explanation
plt.plot(x, np.sin(x))
plt.axis('equal');
Explanation: It even allows higher-level specifications, such as ensuring an equal aspect ratio so that one unit in x is equal to one unit in y on the screen:
End of explanation
plt.plot(x,np.sin(x),':r')
plt.title('sin 函数图像')
plt.xlabel('x轴')
plt.ylabel('y轴');
Explanation: For more information on axis limits and the other capabilities of the plt.axis method, refer to the plt.axis docstring.
Labeling Plots
As the last piece of this section, we'll briefly look at the labeling of plots: titles, axis labels, and simple legends.
Titles and axis labels are the simplest such labels; there are methods that can be used to quickly set them:
End of explanation
plt.plot(x, np.sin(x), '-g', label='sin(x)')
plt.plot(x, np.cos(x), ':b', label='cos(x)')
plt.axis('equal')
plt.legend();
Explanation: The position, size, and style of these labels can be adjusted using optional arguments to the functions. For more information, see the Matplotlib documentation and the docstrings of each of these functions.
When multiple lines are being shown within a single axes, it can be useful to create a plot legend that labels each line type. Again, Matplotlib has a built-in way of quickly creating such a legend; it is done via the plt.legend() method. Though there are several valid ways of using this, I find it easiest to specify the label of each line using the label keyword of the plot function:
End of explanation
ax = plt.axes()
ax.plot(x, np.sin(x))
ax.set(xlim=(0, 10), ylim=(-2, 2),
xlabel='x', ylabel='sin(x)',
title='A Simple Plot');
Explanation: As you can see, the plt.legend() function keeps track of the line styles and colors, and matches these with the correct label. More information on specifying and formatting plot legends can be found in the plt.legend docstring; additionally, we will cover some more advanced legend options in "Customizing Plot Legends".
Matplotlib Gotchas
While most plt functions translate directly to ax methods (such as plt.plot() → ax.plot(), plt.legend() → ax.legend(), and so on), this is not the case for all commands. In particular, the functions to set limits, labels, and titles are slightly modified. For transitioning between MATLAB-style functions and object-oriented methods, make the following changes:
plt.xlabel() → ax.set_xlabel()
plt.ylabel() → ax.set_ylabel()
plt.xlim() → ax.set_xlim()
plt.ylim() → ax.set_ylim()
plt.title() → ax.set_title()
In the object-oriented interface to plotting, rather than calling these functions individually, it is often more convenient to use the ax.set() method to set all of these properties at once:
End of explanation |
2,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electric Machinery Fundamentals 5th edition
Chapter 2 (Code examples)
Example 2-5
Calculate and plot the voltage regulation of a transformer as a function of load for power factors of 0.8 lagging, 1.0, and 0.8 leading.
Import the PyLab namespace (provides set of useful commands and constants like Pi)
Step1: Define all the parameters
Step2: Calculate the current values for the three power factors.
The first row of I contains the lagging currents, the second row contains
the unity currents, and the third row contains the leading currents.
Step3: Calculate VP/a
Step4: Calculate voltage regulation
Step5: Plot the voltage regulation | Python Code:
%pylab notebook
Explanation: Electric Machinery Fundamentals 5th edition
Chapter 2 (Code examples)
Example 2-5
Calculate and plot the voltage regulation of a transformer as a function of load for power factors of 0.8 lagging, 1.0, and 0.8 leading.
Import the PyLab namespace (provides set of useful commands and constants like Pi)
End of explanation
VS = 230.0 # Secondary voltage (V)
amps = arange(0, 65.2, 6.52) # Current values (A)
Req = 0.0445 # Equivalent R (ohms)
Xeq = 0.0645 # Equivalent X (ohms)
Explanation: Define all the parameters:
End of explanation
I = amps * array ([[0.8 - 0.6j], # Lagging
[1.0], # Unity
[0.8 + 0.6j]]) # Leading
Explanation: Calculate the current values for the three power factors.
The first row of I contains the lagging currents, the second row contains
the unity currents, and the third row contains the leading currents.
End of explanation
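# Quick sanity check (added aside, not part of the original example): each power-factor
# phasor above has unit magnitude, and 0.8 - 0.6j sits at -acos(0.8), about -36.87 degrees (lagging).
pf_lag = 0.8 - 0.6j
print(abs(pf_lag), degrees(angle(pf_lag)))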
VPa = VS + Req * I + 1j * Xeq * I
Explanation: Calculate VP/a:
End of explanation
VR = (abs(VPa) - VS) / VS * 100;
Explanation: Calculate voltage regulation:
End of explanation
rc('text', usetex=True) # enable LaTeX commands for plot
plot(amps,VR[0,])
plot(amps,VR[1,])
plot(amps,VR[2,])
title(r'\textbf{Voltage Regulation Versus Load}');
xlabel(r'\textbf{Load (A)}');
ylabel(r'\textbf{Voltage Regulation (\%)}');
legend(('0.8 PF lagging','1.0 PF','0.8 PF leading'), loc=2);
grid()
Explanation: Plot the voltage regulation:
End of explanation |
2,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
T-shirt inspiration
❝first solve the problem then write the code❞
Step1: Introduction
This Jupyter notebook is a place to keep my thoughts organized on how to best present Fort Lauderdale Police department data obtained at the 2016 Fort Lauderdale Civic Hackathon. I blogged about it here.
Just prior to my participation in the Fort Lauderdale Civic Hackathon, I experimented with a MapBox GL JS API. You can see my simple demonstration here where I created a bunch of fake points and cluster mapped them.
That experiment is what inspired me to suggest to my hackathon partner David Karim that we heat map the data. See that map here.
MongoDB
I know little about databases though I respect their power to efficiently handle data. I struggle with cognitive overhead of SQL databases with their normalized data and join commands.
When I heard that MongoDB organized its data without a requirement of normalization, I knew I had to investigate because that CSV file of data was not normalized.
MongoDB's behavior fit my mental model of how I imagined it would be to easily handle data. While I have little experience with which to compare, it appears that MongoDB can efficiently handle over 92,000 citation documents with ease.
A more difficult question
Step2: Tasks
Create a new collection called 'citations_geojson'.
Jump down to collection creation in this blog post.
* Create some new documents in 'citations_geojson' as GeoJSON point objects.
* [definition of a geojson point](http
Step3: Verify that the database is running and responding to the pymongo driver. | Python Code:
from IPython.display import IFrame
IFrame(
'https://www.sunfrog.com/Geek-Tech/First-solve-the-problem-Then-write-the-code.html',
width=800,
height=350,
)
Explanation: T-shirt inspiration
❝first solve the problem then write the code❞
End of explanation
from IPython.display import IFrame
IFrame(
'https://docs.mongodb.com/manual/reference/geojson/',
width=800,
height=350,
)
Explanation: Introduction
This Jupyter notebook is a place to keep my thoughts organized on how to best present Fort Lauderdale Police department data obtained at the 2016 Fort Lauderdale Civic Hackathon. I blogged about it here.
Just prior to my participation in the Fort Lauderdale Civic Hackathon, I experimented with a MapBox GL JS API. You can see my simple demonstration here where I created a bunch of fake points and cluster mapped them.
That experiment is what inspired me to suggest to my hackathon partner David Karim that we heat map the data. See that map here.
MongoDB
I know little about databases though I respect their power to efficiently handle data. I struggle with cognitive overhead of SQL databases with their normalized data and join commands.
When I heard that MongoDB organized its data without a requirement of normalization, I knew I had to investigate because that CSV file of data was not normalized.
MongoDB's behavior fit my mental model of how I imagined it would be to easily handle data. While I have little experience with which to compare, it appears that MongoDB can efficiently handle over 92,000 citation documents with ease.
A more difficult question: Can I write code to make MongoDB do its thing most efficiently?!
Creating geojson data
The MapBox API works well with geojson data. A quick search on Google reveals that MongoDB has built-in support for geojson!
End of explanation
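# Illustrative aside (mine, not from the original post): the GeoJSON Point shape that
# MongoDB's 2dsphere indexing expects; the coordinates are an assumed example near Fort Lauderdale.
example_point = {"type": "Point", "coordinates": [-80.14, 26.12]}  # [longitude, latitude]
print(example_point)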
# a module for importing values so I do not have to expose them in this Jupyter notebook
from meta import dockerized_mongo_path
# the '!' preceding the command allows me to access the shell from the Jupyter notebook
# in which I am writing this blog post
# ./expect-up-daemon calls a /usr/bin/expect script
# in the $dockerized_mongo_path to bring up the dockerized MongoDB
!cd $dockerized_mongo_path && ./expect-up-daemon
Explanation: Tasks
Create a new collection called 'citations_geojson'.
Jump down to collection creation in this blog post.
* Create some new documents in 'citations_geojson' as GeoJSON point objects.
* [definition of a geojson point](http://geojson.org/geojson-spec.html#point).
* What a point looks like in MongoDB.
```json
{ type: "Point", coordinates: [ 40, 5 ] }
```
* [PyMongo driver has some info on geo indexing that might be relevant](https://api.mongodb.com/python/current/examples/geo.html).
* [MongoDB docs on geospatial indices](https://docs.mongodb.com/manual/core/geospatial-indexes/).
* [geojson geometry collections looks relevant](http://geojson.org/geojson-spec.html#geometrycollection)
question
Can the MapBox API utilize geojson data served up directly from MongoDB?
While the MongoDB objects look similar to the geojson I was used to seeing when building MapBox maps, they do not appear to be exactly the same.
Theory: a MapBox API may be able to handle the data directly from MongoDB. I have to figure out how to make that connection.
possible sources of answers
geojson vt blog post from MapBox
❝If you’re using Mapbox GL-based tools (either GL JS or Mapbox Mobile), you’re already using GeoJSON-VT under the hood.❞
So it is possible to feed large amounts of data to a map. It does not answer the question of whether I have to munge the data coming from MongoDB first to make it MapBox-valid geojson.
This definitely looks promising from the npm domain!
❝GeoJSON normalization for mongoDB. Convert an array of documents with geospatial information (2dsphere only) into a GeoJSON feature collection.❞
Some words I recognize from trying out MapBox: ❝GeoJSON feature collection❞
Create a collection in MongoDB using PyMongo.
%%HTML # create-collection
<span id="create-collection"></span>
Start the MongoDB Docker container
The repository for this data in a Dockerized MongoDB instance is here: dm-wyncode/docker-mongo-flpd-hackathon-data
End of explanation
from pymongo import MongoClient
client = MongoClient()
db = client.app
collection_names = sorted(db.collection_names())
print(collection_names)
collections = accidents, citations = [db.get_collection(collection)
for collection
in collection_names]
info = [{collection_name: format(collection.count(), ',')}
for collection_name, collection
in zip(collection_names, collections)]
print("document counts")
for item in info:
print(item)
Explanation: Verify that the database is running and responding to the pymongo driver.
End of explanation |
2,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames 001
Step1: We're skipping Table 1 because it is an observing log.
Table 2- Members of IC348
Since this table is formatted in the CDS format, it is easiest to take advantage of the Astropy functionality, ascii.read, which automatically detects the header formatting. We will also take advantage of the fact that ascii.read can accept a url as an argument, so we won't have to actually save the data locally. Let's hope ApJ doesn't change their web link though.
Step2: I am using seaborn (imported as sns) to set nice-looking figure defaults.
Step3: Table 3 - Foreground members
This table is not formatted as CDS. Instead it is just raw $\LaTeX$, with no header columns. So I will have to assign the column names manually, which I copy and paste and parse by hand. Then I use pandas (imported as pd) to read the tab-separated file (sep='\t').
Step4: Table 4 - Background stars
Step5: Table 8 - Adopted Temperature Scale
Step6: Convert string spectral type to numerical type.
Step7: Save the data tables locally.
Step8: ! mkdir ../data/Luhman2003 | Python Code:
%pylab inline
import seaborn as sns
sns.set_context("notebook", font_scale=1.5)
import warnings
warnings.filterwarnings("ignore")
Explanation: ApJdataFrames 001: Luhman2003
Title: A Census of the Young Cluster IC 348
Authors: Luhman K.L., Stauffer J.R., Muench A.A., Rieke G.H., Lada E.A., Bouvier J., Lada C.J.
Data is from this paper:
http://iopscience.iop.org/0004-637X/593/2/1093/
End of explanation
from astropy.io import ascii
#!curl -o ../data/Luhman2003/Luhman2003_Table2.txt http://iopscience.iop.org/0004-637X/593/2/1093/fulltext/datafile2.txt
data = ascii.read("http://iopscience.iop.org/0004-637X/593/2/1093/fulltext/datafile2.txt")
data[0:2]
print(data.colnames)
Explanation: We're skipping Table 1 because it is an observing log.
Table 2- Members of IC348
Since this table is formatted in the CDS format, it is easiest to take advantage of the Astropy functionality, ascii.read, which automatically detects the header formatting. We will also take advantage of the fact that ascii.read can accept a url as an argument, so we won't have to actually save the data locally. Let's hope ApJ doesn't change their web link though.
End of explanation
cmap = sns.palettes.cubehelix_palette(start=0.2, rot=0.7,as_cmap=True)
sns.palplot(sns.palettes.cubehelix_palette(start=0.2, rot=0.7))
plt.figure(figsize=(4.5, 9))
plt.scatter(data['I-Z'].data, data['H-K'].data + data['Kmag'].data, c=data['AJ'].data, cmap=cmap)
plt.xlim(0, 1.2)
plt.ylim(18.5, 9.5)
cbar = plt.colorbar()
cbar.set_label('$A_J$')
plt.xlabel("$I-Z$")
plt.ylabel("$H$")
Explanation: I am using seaborn (imported as sns) to set nice-looking figure defaults.
End of explanation
import pandas as pd
col_names = ["ID","RA","Dec","SpectralType","Ref","ForegroundEvidence","R-I","I","I-Z","J-H","H-Ks","Ks"]
tbl3 = pd.read_csv('http://iopscience.iop.org/0004-637X/593/2/1093/fulltext/57692.tb3.txt', sep='\t',
header=None, na_values='\ldots', names=col_names)
tbl3.head()
plt.figure(figsize=(4.5, 9))
plt.scatter(data['I-Z'].data, data['H-K'].data + data['Kmag'].data, c=data['AJ'].data, cmap=cmap, label='Members')
plt.plot(tbl3['I-Z'], tbl3['H-Ks']+tbl3['Ks'], 'rs', label='Foreground')
plt.xlim(0, 1.2)
plt.ylim(18.5, 9.5)
cbar = plt.colorbar()
cbar.set_label('$A_J$')
plt.xlabel("$I-Z$")
plt.ylabel("$H$")
plt.legend(loc='lower left')
Explanation: Table 3 - Foreground members
This table is not formatted as CDS. Instead it is just raw $\LaTeX$, with no header columns. So I will have to assign the column names manually, which I copy and paste and parse by hand. Then I use pandas (imported as pd) to read the tab-separated file (sep='\t').
End of explanation
col_names = ["ID","RA","Dec","SpectralType","Ref","R-I","I","I-Z","J-H","H-Ks","Ks"]
tbl4 = pd.read_csv('http://iopscience.iop.org/0004-637X/593/2/1093/fulltext/57692.tb4.txt', sep='\t',
header=None, na_values='\ldots', names=col_names)
tbl4.head()
plt.figure(figsize=(4.5, 9))
plt.scatter(data['I-Z'].data, data['H-K'].data + data['Kmag'].data, c=data['AJ'].data, cmap=cmap, label='Members')
plt.plot(tbl3['I-Z'], tbl3['H-Ks']+tbl3['Ks'], 'r.', label='Foreground')
plt.plot(tbl4['I-Z'], tbl4['H-Ks']+tbl4['Ks'], 'bs', label='Background')
plt.xlim(0, 1.2)
plt.ylim(18.5, 9.5)
cbar = plt.colorbar()
cbar.set_label('$A_J$')
plt.xlabel("$I-Z$")
plt.ylabel("$H$")
plt.legend(loc='lower left')
Explanation: Table 4 - Background stars
End of explanation
tbl8 = pd.read_csv("http://iopscience.iop.org/0004-637X/593/2/1093/fulltext/57692.tb8.txt",
names = ["Spectral_Type", "Teff"],
sep = '\t')
tbl8
Explanation: Table 8 - Adopted Temperature Scale
End of explanation
import gully_custom
tbl8["SpT"] = gully_custom.specType(tbl8.Spectral_Type)
sns.lmplot("SpT", "Teff", tbl8)
Explanation: Convert string spectral type to numerical type.
End of explanation
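# Illustrative sketch (mine, not the author's code): gully_custom is a private helper, so the
# call above is kept as-is; a plain-Python spectral-type-to-number mapping could look roughly
# like this, where the per-class numeric offsets are an assumed convention.
def spec_type_to_num(spt_strings):
    classes = {'O': 0, 'B': 10, 'A': 20, 'F': 30, 'G': 40, 'K': 50, 'M': 60}
    return [classes[s.strip()[0]] + float(s.strip()[1:]) for s in spt_strings]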
tbl2 = data.to_pandas()
Explanation: Save the data tables locally.
End of explanation
for tbl, outname in zip([tbl2, tbl3, tbl4, tbl8], "tbl2, tbl3, tbl4, tbl8".split(", ")):
tbl.to_csv("../data/Luhman2003/"+outname+".csv", sep='\t', index=False)
Explanation: ! mkdir ../data/Luhman2003
End of explanation |
2,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create an example dataframe
Step2: List Comprehensions
As a loop
Step3: As list comprehension | Python Code:
# Import modules
import pandas as pd
# Set ipython's max row display
pd.set_option('display.max_row', 1000)
# Set iPython's max columns displayed to 50
pd.set_option('display.max_columns', 50)
Explanation: Title: Using List Comprehensions With Pandas
Slug: pandas_list_comprehension
Summary: Using List Comprehensions With Pandas
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
Explanation: Create an example dataframe
End of explanation
# Create a variable
next_year = []
# For each row in df.years,
for row in df['year']:
# Add 1 to the row and append it to next_year
next_year.append(row + 1)
# Create df.next_year
df['next_year'] = next_year
# View the dataframe
df
Explanation: List Comprehensions
As a loop
End of explanation
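# Aside (not part of the original recipe): pandas can do the same thing without any
# Python-level loop, via vectorized arithmetic on the column.
df['next_year_vectorized'] = df['year'] + 1
df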
# Subtract 1 from row, for each row in df.year
df['previous_year'] = [row-1 for row in df['year']]
df
Explanation: As list comprehension
End of explanation |
2,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center"> Introdução ao Processamento de Linguagem Natural (PLN) Usando Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2> Técnicas para Pré-Processamento </h2>
<p>Vamos avaliar as técnicas mais comuns para prepararmos o texto para usar com algoritmos de aprendizado de máquina logo mais.</p>
<p>Como estudo de caso, vamos usar o texto de <i>Hamlet</i>, encontrado no corpus <i>Gutenberg</i> do pacote <b>NLTK</b></p>
<b>1. Baixando o corpus Gutenberg</b>
Step1: <b>2. Displaying the "Hamlet" text</b>
Step2: <b>3. Sentence segmentation and word tokenization</b>
Step3: <b>4. Removing stopwords and punctuation</b>
Step4: <b>5. Part of Speech (POS) Tags </b>
Step5: The tags indicate the syntactic class of each word in the text. See <a href="https
Step6: Lemmatization goes beyond simply removing suffixes, obtaining the linguistic root of the word. We will use the POS tags obtained earlier to optimize the lemmatizer.
Step7: <b>7. N-grams</b>
Besides the <i>Bag-of-Words</i> technique, another option is to use n-grams (where "n" can vary)
Step8: We can also use the <b>CountVectorizer</b> class from scikit-learn
Step9: Now let's count the n-grams (in our case, trigrams) of all the sentences in the text | Python Code:
import nltk
nltk.download("gutenberg")
nltk.download("punkt")      # required by sent_tokenize/word_tokenize below
nltk.download("stopwords")  # required by the stopword-removal step below
Explanation: <h1 align="center"> Introduction to Natural Language Processing (NLP) Using Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2> Pre-Processing Techniques </h2>
<p>Let's review the most common techniques for preparing text to be used with machine learning algorithms later on.</p>
<p>As a case study, we will use the text of <i>Hamlet</i>, found in the <i>Gutenberg</i> corpus of the <b>NLTK</b> package.</p>
<b>1. Downloading the Gutenberg corpus</b>
End of explanation
hamlet_raw = nltk.corpus.gutenberg.raw('shakespeare-hamlet.txt')
print(hamlet_raw[:1000])
Explanation: <b>2. Displaying the "Hamlet" text</b>
End of explanation
from nltk.tokenize import sent_tokenize
sentences = sent_tokenize(hamlet_raw)
print(sentences[:10])
from nltk.tokenize import word_tokenize
words = word_tokenize(sentences[0])
print(words)
Explanation: <b>3. Sentence segmentation and word tokenization</b>
End of explanation
from nltk.corpus import stopwords
stopwords_list = stopwords.words('english')
print(stopwords_list)
non_stopwords = [w for w in words if not w.lower() in stopwords_list]
print(non_stopwords)
import string
punctuation = string.punctuation
print(punctuation)
non_punctuation = [w for w in non_stopwords if not w in punctuation]
print(non_punctuation)
Explanation: <b>4. Removing stopwords and punctuation</b>
End of explanation
from nltk import pos_tag
nltk.download("averaged_perceptron_tagger")  # tagger model required by pos_tag
pos_tags = pos_tag(words)
print(pos_tags)
Explanation: <b>5. Part of Speech (POS) Tags </b>
End of explanation
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer('english')
sample_sentence = "He has already gone"
sample_words = word_tokenize(sample_sentence)
stems = [stemmer.stem(w) for w in sample_words]
print(stems)
Explanation: The tags indicate the syntactic class of each word in the text. See <a href="https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html" target="blank">https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html</a> for a complete list
<b>6. Stemming and Lemmatization</b>
Stemming extracts the "root" of a word by removing suffixes, for example.
End of explanation
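# Aside (not from the original notebook): stemming is purely string-based, so it can produce
# non-words; comparing a few stems shows why lemmatization is often preferred.
print([stemmer.stem(w) for w in ['studies', 'having', 'generously']])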
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
lemmatizer = WordNetLemmatizer()
pos_tags = nltk.pos_tag(sample_words)
lemmas = []
for w in pos_tags:
if w[1].startswith('J'):
pos_tag = wordnet.ADJ
elif w[1].startswith('V'):
pos_tag = wordnet.VERB
elif w[1].startswith('N'):
pos_tag = wordnet.NOUN
elif w[1].startswith('R'):
pos_tag = wordnet.ADV
else:
pos_tag = wordnet.NOUN
lemmas.append(lemmatizer.lemmatize(w[0], pos_tag))
print(lemmas)
Explanation: Lemmatization goes beyond simply removing suffixes, obtaining the linguistic root of the word. We will use the POS tags obtained earlier to optimize the lemmatizer.
End of explanation
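# Aside (not from the original notebook): without a POS tag, WordNetLemmatizer assumes a noun,
# so passing the right tag is what makes the difference for verb forms like "gone".
print(lemmatizer.lemmatize('gone'), lemmatizer.lemmatize('gone', wordnet.VERB))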
non_punctuation = [w for w in words if not w.lower() in punctuation]
n_grams_3 = ["%s %s %s"%(non_punctuation[i], non_punctuation[i+1], non_punctuation[i+2]) for i in range(0, len(non_punctuation)-2)]
print(n_grams_3)
Explanation: <b>7. N-grams</b>
Besides the <i>Bag-of-Words</i> technique, another option is to use n-grams (where "n" can vary)
End of explanation
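# Aside (not from the original notebook): NLTK also ships a helper that builds the same windows;
# nltk.ngrams yields tuples, joined here for comparison with the manual version above.
from nltk import ngrams
print([" ".join(g) for g in ngrams(non_punctuation, 3)][:5])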
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer(ngram_range=(3,3))
import numpy as np
arr = np.array([sentences[0]])
print(arr)
n_gram_counts = count_vect.fit_transform(arr)
print(n_gram_counts)
print(count_vect.vocabulary_)
Explanation: We can also use the <b>CountVectorizer</b> class from scikit-learn:
End of explanation
arr = np.array(sentences)
n_gram_counts = count_vect.fit_transform(arr)
print(n_gram_counts[:20])
print([k for k in count_vect.vocabulary_.keys()][:20])
from nltk import word_tokenize
frase = 'o cachorro correu atrás do gato'
ngrams = ["%s %s %s" % (nltk.word_tokenize(frase)[i], \
nltk.word_tokenize(frase)[i+1], \
nltk.word_tokenize(frase)[i+2]) \
for i in range(len(nltk.word_tokenize(frase))-2)]
print(ngrams)
Explanation: Now let's count the n-grams (in our case, trigrams) of all the sentences in the text:
End of explanation |
2,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Idiomatic Pandas
Q
Step1: Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable
Step2: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways
Step3: Have a peek at the structure of the index of the stacked data (and the data itself).
To complement this, unstack pivots from rows back to columns.
Step4: Exercise
Which columns uniquely define a row? Create a DataFrame called cdystonia2 with a hierarchical index based on these columns.
Step5: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
Step6: We can now merge these reshaped outcomes data with the other variables to create a wide format DataFrame that consists of one row for each patient.
Step7: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking
Step8: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.
Step9: This illustrates the two formats for longitudinal data
Step10: This approach of sequentially calling methods is called method chaining, and despite the fact that it creates very long lines of code that must be properly justified, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than in-place operations. This allows methods from the returned object to be immediately called, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following
Step11: This necessitates the creation of a slew of intermediate variables that we really don't need.
Let's transform another dataset using method chaining. The measles.csv file contains de-identified cases of measles from an outbreak in Sao Paulo, Brazil in 1997. The file contains rows of individual records
Step12: The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak.
The best approach is to build up the chain incrementally. We can begin by generating the age groups (using cut) and grouping by age group and the date (ONSET)
Step13: What we then want is the number of occurrences in each combination, which we can obtain by checking the size of each grouping
Step14: This results in a hierarchically-indexed Series, which we can pivot into a DataFrame by simply unstacking
Step15: Now, replace the missing values with zeros
Step16: Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields the table of interest
Step17: From this, it is easy to create meaningful plots and conduct analyses
Step18: Pivoting
The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments
Step19: Exercise
Try pivoting the cdystonia DataFrame without specifying a variable for the cell values
Step20: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
Step21: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
Step22: Data transformation
There are a slew of additional operations for DataFrames that we would collectively refer to as transformations which include tasks such as
Step23: These rows can be removed using drop_duplicates
Step24: Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset
Step25: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
Step26: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.
Step27: In such situations, we can use replace to substitute nan where the sentinel values occur.
Step28: We can also perform the same replacement that we used map for with replace
Step29: Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, lets filter out the 5 most common types of ships.
Exercise
Create a subset of the vessels DataFrame called vessels5 that only contains the 5 most common types of vessels, based on their prevalence in the dataset.
Step30: We can now apply get_dummies to the vessel type to create 5 indicator variables.
Step31: Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Lets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups
Step32: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's
Step33: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False
Step34: Since the data are now ordinal, rather than numeric, we can give them labels
Step35: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default
Step36: Alternatively, one can specify custom quantiles to act as cut points
Step37: Exercise
Use the discretized segment lengths as the input for get_dummies to create 5 indicator variables for segment length
Step38: Categorical Variables
One of the keys to maximizing performance in pandas is to use the appropriate types for your data wherever possible. In the case of categorical data--either the ordered categories as we have just created, or unordered categories like race, gender or country--the use of the categorical to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses.
When text data are imported into a DataFrame, they are endowed with an object dtype. This will result in relatively slow computation because this dtype runs at Python speeds, rather than as Cython code that gives much of pandas its speed. We can ameliorate this by employing the categorical dtype on such data.
Step39: This creates an unordered categorical variable. To create an ordinal variable, we can specify ordered=True as an argument to astype
Step40: However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need.
To specify an arbitrary order, we can use the set_categories method, as follows
Step41: Notice that we obtained set_categories from the cat attribute of the categorical variable. This is known as the category accessor, and is a device for gaining access to Categorical variables' categories, analogous to the string accessor that we have seen previously from text variables.
Step42: Additional categories can be added, even if they do not currently exist in the DataFrame, but are part of the set of possible categories
Step43: To complement this, we can remove categories that we do not wish to retain
Step44: Or, even more simply
Step45: For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.
Step46: Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example
Step47: This grouped dataset is hard to visualize
Step48: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups
Step49: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method
Step50: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to average string variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean
Step51: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation
Step52: Exercise
Use the quantile method to generate the median values of the twstrs variable for each patient.
Step53: If we wish, we can easily aggregate according to multiple keys
Step54: Alternately, we can transform the data, using a function of our choice with the transform method
Step55: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns
Step56: Or, as a DataFrame
Step57: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed
Step58: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way
Step59: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index
Step60: The level argument specifies which level of the index to use for grouping.
Step61: Apply
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
Step62: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship
Step63: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Exercise
Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
Step64: Women and children first?
Use the groupby method to calculate the proportion of passengers that survived by sex.
Calculate the same proportion, but by class and sex.
Create age categories | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: Idiomatic Pandas
Q: How do I make my pandas code faster with parallelism?
A: You don’t need parallelism, you can use Pandas better.
-- Matthew Rocklin
Now that we have been exposed to the basic functionality of pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.
As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
As you may already have noticed, there are sometimes multiple ways to achieve the same goal using pandas. Importantly, some approaches are better than others, in terms of performance, readability and ease of use. We will cover some important ways of maximizing your pandas efficiency.
End of explanation
cdystonia = pd.read_csv("../data/cdystonia.csv", index_col=None)
cdystonia.head()
Explanation: Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
End of explanation
stacked = cdystonia.stack()
stacked
Explanation: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing multiple measurements.
The stack method rotates the data frame so that columns are represented in rows:
End of explanation
stacked.unstack().head()
Explanation: Have a peek at the structure of the index of the stacked data (and the data itself).
To complement this, unstack pivots from rows back to columns.
End of explanation
# Write your answer here
Explanation: Exercise
Which columns uniquely define a row? Create a DataFrame called cdystonia2 with a hierarchical index based on these columns.
End of explanation
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
Explanation: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
End of explanation
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))
cdystonia_wide.head()
Explanation: We can now merge these reshaped outcomes data with the other variables to create a wide format DataFrame that consists of one row for each patient.
End of explanation
(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
.unstack('week').head())
Explanation: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:
End of explanation
pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head()
Explanation: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.
End of explanation
(cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
.head())
Explanation: This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
Method chaining
In the DataFrame reshaping section above, you probably noticed how several methods were strung together to produce a wide format table:
End of explanation
cdystonia_subset = cdystonia[['patient','site','id','treat','age','sex']]
cdystonia_complete = cdystonia_subset.drop_duplicates()
cdystonia_merged = cdystonia_complete.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
cdystonia_merged.head()
Explanation: This approach of sequentially calling methods is called method chaining, and despite the fact that it creates very long lines of code that must be properly justified, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than in-place operations. This allows methods from the returned object to be immediately called, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following:
End of explanation
measles = pd.read_csv("../data/measles.csv", index_col=0, encoding='latin-1', parse_dates=['ONSET'])
measles.head()
Explanation: This necessitates the creation of a slew of intermediate variables that we really don't need.
Let's transform another dataset using method chaining. The measles.csv file contains de-identified cases of measles from an outbreak in Sao Paulo, Brazil in 1997. The file contains rows of individual records:
End of explanation
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP']))
Explanation: The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak.
The best approach is to build up the chain incrementally. We can begin by generating the age groups (using cut) and grouping by age group and the date (ONSET):
End of explanation
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()).head(10)
Explanation: What we then want is the number of occurrences in each combination, which we can obtain by checking the size of each grouping:
End of explanation
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()).head(5)
Explanation: This results in a hierarchically-indexed Series, which we can pivot into a DataFrame by simply unstacking:
End of explanation
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()
.fillna(0)).head(5)
Explanation: Now, replace the missing values with zeros:
End of explanation
case_counts_2w = (measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()
.fillna(0)
.resample('2W')
.sum())
case_counts_2w
Explanation: Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields the table of interest:
End of explanation
case_counts_2w.plot(cmap='hot')
Explanation: From this, it is easy to create meaningful plots and conduct analyses:
End of explanation
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
Explanation: Pivoting
The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:
End of explanation
# Write your answer here
Explanation: Exercise
Try pivoting the cdystonia DataFrame without specifying a variable for the cell values:
End of explanation
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs',
aggfunc=max).head(20)
Explanation: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
End of explanation
pd.crosstab(cdystonia.sex, cdystonia.site)
Explanation: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
End of explanation
vessels = pd.read_csv('../data/AIS/vessel_information.csv')
vessels.tail(10)
vessels.duplicated(subset='names').tail(10)
Explanation: Data transformation
There are a slew of additional operations for DataFrames that we would collectively refer to as transformations which include tasks such as:
removing duplicate values
replacing values
grouping values.
Dealing with duplicates
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:
End of explanation
vessels.drop_duplicates(['names']).tail(10)
Explanation: These rows can be removed using drop_duplicates
End of explanation
cdystonia.treat.value_counts()
Explanation: Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:
End of explanation
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
Explanation: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
End of explanation
scores = pd.Series([99, 76, 85, -999, 84, 95])
Explanation: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.
End of explanation
scores.replace(-999, np.nan)
Explanation: In such situations, we can use replace to substitute nan where the sentinel values occur.
End of explanation
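# Aside (not in the original notebook): replace also accepts a list of sentinel values,
# or a dict mapping each value to its replacement.
scores.replace([-999, -1000], np.nan)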
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
Explanation: We can also perform the same replacement that we used map for with replace:
End of explanation
# Write your answer here
Explanation: Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, lets filter out the 5 most common types of ships.
Exercise
Create a subset of the vessels DataFrame called vessels5 that only contains the 5 most common types of vessels, based on their prevalence in the dataset.
End of explanation
pd.get_dummies(vessels5.type).head(10)
Explanation: We can now apply get_dummies to the vessel type to create 5 indicator variables.
End of explanation
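# Aside (not in the original notebook): for regression design matrices it is common to drop one
# category to avoid collinearity; get_dummies supports this via drop_first
# (this reuses the vessels5 subset assumed from the exercise above).
pd.get_dummies(vessels5.type, drop_first=True).head(10)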
cdystonia.age.describe()
Explanation: Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Lets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:
End of explanation
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
Explanation: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's:
End of explanation
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
Explanation: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False:
End of explanation
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]
Explanation: Since the data are now ordinal, rather than numeric, we can give them labels:
End of explanation
pd.qcut(cdystonia.age, 4)[:30]
Explanation: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:
End of explanation
quantiles = pd.qcut(vessels.max_loa, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
Explanation: Alternatively, one can specify custom quantiles to act as cut points:
End of explanation
# Write your answer here
Explanation: Exercise
Use the discretized segment lengths as the input for get_dummies to create 5 indicator variables for segment length:
End of explanation
cdystonia_cat = cdystonia.assign(treatment=cdystonia.treat.astype('category')).drop('treat', axis=1)
cdystonia_cat.dtypes
cdystonia_cat.treatment.head()
cdystonia_cat.treatment.cat.codes
Explanation: Categorical Variables
One of the keys to maximizing performance in pandas is to use the appropriate types for your data wherever possible. In the case of categorical data--either the ordered categories as we have just created, or unordered categories like race, gender or country--the use of the categorical to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses.
When text data are imported into a DataFrame, they are endowed with an object dtype. This will result in relatively slow computation because this dtype runs at Python speeds, rather than as Cython code that gives much of pandas its speed. We can ameliorate this by employing the categorical dtype on such data.
End of explanation
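# Aside (not in the original notebook): the saving from the categorical dtype is easy to
# inspect on a single column with memory_usage.
print(cdystonia.treat.memory_usage(deep=True),
      cdystonia.treat.astype('category').memory_usage(deep=True))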
cdystonia.treat.astype('category', ordered=True).head()
Explanation: This creates an unordered categorical variable. To create an ordinal variable, we can specify ordered=True as an argument to astype:
End of explanation
cdystonia.treat.astype('category').cat.set_categories(['Placebo', '5000U', '10000U'], ordered=True).head()
Explanation: However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need.
To specify an arbitrary order, we can use the set_categories method, as follows:
End of explanation
cdystonia_cat.treatment.cat
Explanation: Notice that we obtained set_categories from the cat attribute of the categorical variable. This is known as the category accessor, and is a device for gaining access to Categorical variables' categories, analogous to the string accessor that we have seen previously from text variables.
End of explanation
cdystonia_cat['treatment'] = (cdystonia.treat.astype('category').cat
.set_categories(['Placebo', '5000U', '10000U', '20000U'], ordered=True))
Explanation: Additional categories can be added, even if they do not currently exist in the DataFrame, but are part of the set of possible categories:
End of explanation
cdystonia_cat.treatment.cat.remove_categories('20000U').head()
Explanation: To complement this, we can remove categories that we do not wish to retain:
End of explanation
cdystonia_cat.treatment.cat.remove_unused_categories().head()
Explanation: Or, even more simply:
End of explanation
vessels_merged = (pd.read_csv('../data/AIS/vessel_information.csv', index_col=0)
.merge(pd.read_csv('../data/AIS/transit_segments.csv'), left_index=True, right_on='mmsi'))
vessels_merged['registered'] = vessels_merged.flag.astype('category')
%timeit vessels_merged.groupby('flag').avg_sog.mean().sort_values()
%timeit vessels_merged.groupby('registered').avg_sog.mean().sort_values()
vessels_merged[['flag','registered']].memory_usage()
Explanation: For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.
End of explanation
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
Explanation: Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
aggregation, such as computing the sum of mean of each group, which involves applying a function to each group and returning the aggregated results
slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)
group-wise transformation, such as standardization/normalization
End of explanation
cdystonia_grouped
Explanation: This grouped dataset is hard to visualize
End of explanation
for patient, group in cdystonia_grouped:
print('patient', patient)
print('group', group)
Explanation: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:
End of explanation
cdystonia_grouped.agg(np.mean).head()
Explanation: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method:
End of explanation
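# Aside (not in the original notebook): agg also accepts a list of functions (or a dict of
# column -> function) to compute several summaries at once.
cdystonia_grouped['twstrs'].agg([np.mean, np.std]).head()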
cdystonia_grouped.mean().head()
Explanation: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to average string variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean:
End of explanation
cdystonia_grouped.mean().add_suffix('_mean').head()
Explanation: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:
End of explanation
# Write your answer here
Explanation: Exercise
Use the quantile method to generate the median values of the twstrs variable for each patient.
End of explanation
cdystonia.groupby(['week','site']).mean().head()
Explanation: If we wish, we can easily aggregate according to multiple keys:
End of explanation
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
Explanation: Alternately, we can transform the data, using a function of our choice with the transform method:
End of explanation
%timeit cdystonia_grouped['twstrs'].mean().head()
Explanation: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
End of explanation
cdystonia_grouped[['twstrs']].mean().head()
Explanation: Or, as a DataFrame:
End of explanation
chunks = dict(list(cdystonia_grouped))
chunks[4]
Explanation: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:
End of explanation
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
Explanation: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:
End of explanation
cdystonia2.head(10)
Explanation: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:
End of explanation
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
Explanation: The level argument specifies which level of the index to use for grouping.
End of explanation
def top(df, column, n=5):
    return df.sort_values(by=column, ascending=False)[:n]
Explanation: Apply
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
End of explanation
goo = vessels_merged.groupby('mmsi')
top3segments = vessels_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
Explanation: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
End of explanation
from IPython.core.display import HTML
HTML(filename='../data/titanic.html')
Explanation: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Exercise
Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
End of explanation
# Write your answer here
Explanation: Women and children first?
Use the groupby method to calculate the proportion of passengers that survived by sex.
Calculate the same proportion, but by class and sex.
Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.
End of explanation |
2,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Models
By Saurabh Mahindre - <a href="https
Step1: Training and generating weights
LeastSquaresRegression has to be initialised with the training features and training labels. Once that is done to learn from data we train() it. This also generates the $\text w$ from the general equation described above. To access $\text w$ use get_w().
Step2: This value of $\text w$ is pretty close to 3, which certifies a pretty good fit for the training data. Now let's apply this trained machine to our test data to get the output values.
Step3: As an aid to visualisation, a plot of the output and also of the residuals is shown. The sum of the squares of these residuals is minimised.
Step4: Ridge Regression
The function we choose should not only best fit the training data but also generalise well. If the coefficients/weights are unconstrained, they are susceptible to high variance and overfitting. To control variance, one has to regularize the coefficients i.e. control how large the coefficients grow. This is what is done in Ridge regression which is L2 (sum of squared components of $\bf w$) regularized form of least squares. A penalty is imposed on the size of coefficients. The error to be minimized is
Step5: Relationship between weights and regularization
The prediction in the basic regression example was similar to that of the least squares one. To actually see ridge regression's forte, we analyse how the weights change along with the regularization constant. Data with slightly higher dimensions is sampled to do this because overfitting is more likely to occur in such data. Here the set_tau() method is used to set the necessary parameter.
Step6: The mean squared error (MSE) of an estimator measures the average of the squares of the errors. CMeanSquaredError class is used to compute the MSE as
Step7: Data with dimension
Step8: As seen from the plot of errors, regularisation doesn't seem to affect the errors significantly. One interpretation could be that this is because there is less overfitting, as we have a large number of samples. For a small sample size compared to the dimensionality, the test set performance may even be poor. The reason for this is that the regression function will fit the noise too much, while the interesting part of the signal is too small. We now generate 10 samples of 10 dimensions to test this.
Step9: The first plot is the famous ridge trace that is the signature of this technique. The plot is really very straightforward to read. It presents the standardized regression coefficients (weights) on the vertical axis and various values of tau (the regularisation constant) along the horizontal axis. Since the values of tau ($\tau$) span several orders of magnitude, we adopt a logarithmic scale along this axis. As tau is increased, the values of the regression estimates change, often wildly at first. At some point, the coefficients seem to settle down and then gradually drift towards zero. Often the value of tau for which these coefficients are at their stable values is the best one. This should be supported by a low error value for that tau.
Least Angle Regression and LASSO
LASSO (Least Absolute Shrinkage and Selection Operator) is another version of Least Squares regression, which uses a L1-norm of the parameter vector. This intuitively enforces sparse solutions, whereas L2-norm penalties usually result in smooth and dense solutions.
$$ \min \|X^T\beta - y\|^2 + \lambda\|\beta\|_1$$
In Shogun, following equivalent form is solved, where increasing $C$ selects more variables
Step10: CLeastAngleRegression requires the features to be normalized with a zero mean and unit norm. Hence we use two preprocessors
Step11: Next we train on the data. Keeping in mind that we had 10 attributes/dimensions in our data, let us have a look at the size of LASSO path which is obtained readily using get_path_size().
Step12: The weights generated ($\beta_i$) and their norm ($\sum_i|\beta_i|$) change with each step. This is when a new variable is added to path. To get the weights at each of these steps get_w_for_var() method is used. The argument is the index of the variable which should be in the range [0, path_size).
Step13: Each color in the plot represents a coefficient and the vertical lines denote steps. It is clear that the weights are a piecewise linear function of the norm.
Kernel Ridge Regression
Kernel ridge regression (KRR) is a kernel-based regularized form of regression. The dual form of Ridge regression can be shown to be
Step14: As seen from the example, KRR (using the kernel trick) can apply techniques for linear regression in the feature space to perform nonlinear regression in the input space.
Support Vector Regression
In Kernel Ridge Regression $(1)$ we saw that the result is a dense solution. Thus all training examples are active, which limits its use to smaller numbers of training examples. Support Vector Regression (SVR) uses the concept of support vectors, as in Support Vector Machines, which leads to a sparse solution. In the SVM the penalty was paid for being on the wrong side of the discriminating plane. Here we do the same thing
Step15: Let us do a comparison of the time taken by the 2 different models, similar to that done in section 6 of [1]. The Boston Housing Dataset is used. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from cycler import cycler
# import all shogun classes
from shogun import *
slope = 3
X_train = rand(30)*10
y_train = slope*(X_train)+random.randn(30)*2+2
y_true = slope*(X_train)+2
X_test = concatenate((linspace(0,10, 50),X_train))
#Convert data to shogun format features
feats_train = RealFeatures(X_train.reshape(1,len(X_train)))
feats_test = RealFeatures(X_test.reshape(1,len(X_test)))
labels_train = RegressionLabels(y_train)
Explanation: Regression Models
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
This notebook demonstrates various regression methods provided in Shogun. Linear models like Least Square regression, Ridge regression, Least Angle regression, etc. and also kernel based methods like Kernel Ridge regression are discussed and applied to toy and real life data.
Introduction
Least Squares regression
Prediction using Least Squares
Training and generating weights
Ridge Regression
Weights and regularization
Least Angle Regression and LASSO
Kernel Ridge Regression
Support Vector Regression
Introduction
Regression is a case of supervised learning where the goal is to learn a mapping from inputs $x\in\mathcal{X}$ to outputs $y\in\mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = {(x_i,y_i)}^{\text N}_{i=1} \subseteq \mathcal{X} \times \mathcal{Y}$. The response variable $y_i$ is continuous in regression analysis. Regression finds applications in many fields like for predicting stock prices or predicting consumption spending, etc. In linear regression, the mapping is a linear (straight-line) equation.
Least Squares regression
A linear regression model can be defined as $\text y =$ $\bf {w} \cdot \bf{x} $ $+ b$. Here $\text y$ is the predicted value, $\text x$ the independent variable and $\text w$ the so-called weights.</br> We aim to find the linear function (line) that best explains the data, i.e. that minimises some measure of deviation to the training data $\mathcal{D}$. One such measure is the sum of squared distances. The Ordinary Least Squares method minimizes the sum of squared distances between the observed responses in the dataset and the responses predicted by the linear approximation.
The distances, called residuals, have to be minimized. This can be represented as:$$E({\bf{w}}) = \sum_{i=1}^N(y_i-{\bf w}\cdot {\bf x}_i)^2$$
One can differentiate with respect to $\bf w$ and equate to zero to determine the $\bf w$ that minimises $E({\bf w})$. This leads to a solution of the form:
$${\bf w} = \left(\sum_{i=1}^N{\bf x}_i{\bf x}_i^T\right)^{-1}\left(\sum_{i=1}^N y_i{\bf x}_i\right)$$
Prediction using Least Squares
Regression using Least squares is demonstrated below on toy data. Shogun provides the tool to do it using CLeastSquaresRegression class. The data is a straight line with lot of noise and having slope 3. Comparing with the mathematical equation above we thus expect $\text w$ to be around 3 for a good prediction. Once the data is converted to Shogun format, we are ready to train the machine. To label the training data CRegressionLabels are used.
End of explanation
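To connect the closed-form expression above to code before switching to Shogun, here is a small NumPy-only sketch (it reuses the X_train/y_train generated above and is not part of the original notebook):
import numpy as np
Xb = np.vstack([X_train, np.ones_like(X_train)])                     # features x samples, with a bias row
w_closed = np.linalg.solve(np.dot(Xb, Xb.T), np.dot(Xb, y_train))    # w = (sum x x^T)^-1 (sum y x)
print(w_closed)                                                      # roughly [3, 2] for this toy data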
ls = LeastSquaresRegression(feats_train, labels_train)
ls.train()
w = ls.get_w()
print 'Weights:'
print w
Explanation: Training and generating weights
LeastSquaresRegression has to be initialised with the training features and training labels. Once that is done to learn from data we train() it. This also generates the $\text w$ from the general equation described above. To access $\text w$ use get_w().
End of explanation
out = ls.apply(feats_test).get_labels()
Explanation: This value of $\text w$ is pretty close to 3, which certifies a pretty good fit for the training data. Now let's apply this trained machine to our test data to get the output values.
End of explanation
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test,out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
#plot residues
pl2 = subplot(132)
title("Squared error and output")
_ = plot(X_test,out, linewidth=2)
gray()
_ = scatter(X_train,labels_train,c=ones(30) ,cmap=gray(), s=40)
for i in range(50,80):
plot([X_test[i],X_test[i]],[out[i],y_train[i-50]] , linewidth=2, color='red')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
pl2.legend((p1, p2), ["Error/residuals to be squared", "Predicted output"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
Explanation: As an aid to visualisation, a plot of the output and also of the residuals is shown. The sum of the squares of these residuals is minimised.
End of explanation
tau = 0.8
rr = LinearRidgeRegression(tau, feats_train, labels_train)
rr.train()
w = rr.get_w()
print w
out = rr.apply(feats_test).get_labels()
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Ridge Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test, out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
Explanation: Ridge Regression
The function we choose should not only best fit the training data but also generalise well. If the coefficients/weights are unconstrained, they are susceptible to high variance and overfitting. To control variance, one has to regularize the coefficients i.e. control how large the coefficients grow. This is what is done in Ridge regression which is L2 (sum of squared components of $\bf w$) regularized form of least squares. A penalty is imposed on the size of coefficients. The error to be minimized is:
$$E({\bf{w}}) = \sum_{i=1}^N(y_i-{\bf w}\cdot {\bf x}_i)^2 + \tau||{\bf w}||^2$$
Here $\tau$ imposes a penalty on the weights.</br>
By differentiating the regularised training error and equating to zero, we find the optimal $\bf w$, given by:
$${\bf w} = \left(\tau {\bf I}+ \sum_{i=1}^N{\bf x}_i{\bf x}_i^T\right)^{-1}\left(\sum_{i=1}^N y_i{\bf x}_i\right)$$
Ridge regression can be performed in Shogun using CLinearRidgeRegression class. It takes the regularization constant $\tau$ as an additional argument. Let us see the basic regression example solved using the same.
End of explanation
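The regularised closed form above can also be checked directly with NumPy (a sketch reusing X_train/y_train from the earlier cells, not the Shogun API):
import numpy as np
Xr = X_train.reshape(1, -1)                                          # 1 feature x N samples
w_ridge = np.linalg.solve(0.8 * np.eye(Xr.shape[0]) + np.dot(Xr, Xr.T), np.dot(Xr, y_train))
print(w_ridge)                                                       # shrinks towards zero as tau grows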
#Generate Data
def generate_data(N, D):
w = randn(D,1)
X = zeros((N,D))
y = zeros((N,1))
for i in range(N):
x = randn(1,D)
for j in range(D):
X[i][j] = x[0][j]
y = dot(X,w) + randn(N,1);
y.reshape(N,)
return X, y.T
def generate_weights(taus, feats_train, labels_train):
preproc = PruneVarSubMean(True)
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
weights = []
rr = LinearRidgeRegression(tau, feats_train, labels_train)
#vary regularization
for t in taus:
rr.set_tau(t)
rr.train()
weights.append(rr.get_w())
return weights, rr
def plot_regularization(taus, weights):
ax = gca()
ax.set_prop_cycle(cycler('color', ['b', 'r', 'g', 'c', 'k', 'y', 'm']))
ax.plot(taus, weights, linewidth=2)
xlabel('Tau', fontsize=12)
ylabel('Weights', fontsize=12)
ax.set_xscale('log')
Explanation: Relationship between weights and regularization
The prediction in the basic regression example was similar to that of the least squares one. To actually see ridge regression's forte, we analyse how the weights change along with the regularization constant. Data with slightly higher dimensions is sampled to do this because overfitting is more likely to occur in such data. Here the set_tau() method is used to set the necessary parameter.
End of explanation
def xval_results(taus):
errors = []
for t in taus:
rr.set_tau(t)
splitting_strategy = CrossValidationSplitting(labels_train, 5)
# evaluation method
evaluation_criterium = MeanSquaredError()
# cross-validation instance
cross_validation = CrossValidation(rr, feats_train, labels_train, splitting_strategy, evaluation_criterium, False)
cross_validation.set_num_runs(100)
result = cross_validation.evaluate()
result = CrossValidationResult.obtain_from_generic(result)
errors.append(result.mean)
return errors
Explanation: The mean squared error (MSE) of an estimator measures the average of the squares of the errors. CMeanSquaredError class is used to compute the MSE as :
$$\frac{1}{|L|} \sum_{i=1}^{|L|} (L_i - R_i)^2$$
Here $L$ is the vector of predicted labels and $R$ is the vector of real labels.
We use 5-fold cross-validation to compute MSE and have a look at how MSE varies with regularisation.
End of explanation
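The MSE definition above is just a mean of squared differences; a plain NumPy sketch (not the CMeanSquaredError class) would be:
import numpy as np
def mse(L, R):
    L, R = np.asarray(L), np.asarray(R)
    return np.mean((L - R) ** 2)                                     # (1/|L|) * sum (L_i - R_i)^2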
n = 500
taus = logspace(-6, 4, n)
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 200 samples', fontsize=12)
matrix, y = generate_data(200,10)
feats_train = RealFeatures(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1=subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
Explanation: Data with dimension: 10 and number of samples: 200 is now sampled.
End of explanation
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 10 samples', fontsize=12)
matrix, y = generate_data(10,10)
feats_train = RealFeatures(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1 = subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
Explanation: As seen from the plot of errors, regularisation doesn't seem to affect the errors significantly. One interpretation could be that this is because there is less overfitting, as we have a large number of samples. For a small sample size compared to the dimensionality, the test set performance may even be poor. The reason for this is that the regression function will fit the noise too much, while the interesting part of the signal is too small. We now generate 10 samples of 10 dimensions to test this.
End of explanation
#sample some data
X=rand(10)*1.5
for i in range(9):
x=random.standard_normal(10)*0.5
X=vstack((X, x))
y=ones(10)
feats_train=RealFeatures(X)
labels_train=RegressionLabels(y)
Explanation: The first plot is the famous ridge trace that is the signature of this technique. The plot is really very straightforward to read. It presents the standardized regression coefficients (weights) on the vertical axis and various values of tau (the regularisation constant) along the horizontal axis. Since the values of tau ($\tau$) span several orders of magnitude, we adopt a logarithmic scale along this axis. As tau is increased, the values of the regression estimates change, often wildly at first. At some point, the coefficients seem to settle down and then gradually drift towards zero. Often the value of tau for which these coefficients are at their stable values is the best one. This should be supported by a low error value for that tau.
Least Angle Regression and LASSO
LASSO (Least Absolute Shrinkage and Selection Operator) is another version of Least Squares regression, which uses a L1-norm of the parameter vector. This intuitively enforces sparse solutions, whereas L2-norm penalties usually result in smooth and dense solutions.
$$ \min \|X^T\beta - y\|^2 + \lambda\|\beta\|_1$$
In Shogun, following equivalent form is solved, where increasing $C$ selects more variables:
$$\min \|X^T\beta - y\|^2 \quad s.t. \|\beta\|_1 \leq C $$
One way to solve this regularized form is by using Least Angle Regression (LARS).
LARS is essentially forward stagewise made fast. LARS can be briefly described as follows.
Start with an empty set.
Select $x_j$ that is most correlated with residuals.
Proceed in the direction of $x_j$ until another variable $x_k$ is equally correlated with residuals.
Choose equiangular direction between $x_j$ and $x_k$.
Proceed until third variable enters the active set, etc.
It should be noticed that instead of making tiny hops in the direction of one variable at a time, LARS makes optimally-sized leaps in optimal directions. These directions are chosen to make equal angles (equal correlations) with each of the variables currently in our set (equiangular).
Shogun provides tools for Least Angle Regression (LARS) and lasso using the CLeastAngleRegression class. As explained in the mathematical formulation, LARS is just like Stepwise Regression but increases the estimated variables in a direction equiangular to each one's correlations with the residual. The working of this is shown below by plotting the LASSO path. Data is generated in a similar way to the previous section.
End of explanation
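For readers who also use scikit-learn, the same kind of LASSO path can be sketched with lars_path; this is only an illustrative alternative and not the Shogun CLeastAngleRegression API used below:
import numpy as np
from sklearn.linear_model import lars_path                           # assumes scikit-learn is installed
Xs = np.random.randn(10, 10)                                         # samples x features
ys = np.random.randn(10)
alphas, active, coefs = lars_path(Xs, ys, method='lasso')
print(coefs.shape)                                                   # (n_features, n_steps): one column per LARS step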
#Preprocess data
preproc=PruneVarSubMean()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
preprocessor=NormOne()
preprocessor.init(feats_train)
feats_train.add_preprocessor(preprocessor)
feats_train.apply_preprocessor()
print "(No. of attributes, No. of samples) of data:"
print feats_train.get_feature_matrix().shape
Explanation: CLeastAngleRegression requires the features to be normalized with a zero mean and unit norm. Hence we use two preprocessors: PruneVarSubMean and NormOne.
End of explanation
#Train and generate weights
la=LeastAngleRegression()
la.set_labels(labels_train)
la.train(feats_train)
size=la.get_path_size()
print ("Size of path is %s" %size)
Explanation: Next we train on the data. Keeping in mind that we had 10 attributes/dimensions in our data, let us have a look at the size of LASSO path which is obtained readily using get_path_size().
End of explanation
#calculate weights
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
print ('Max. norm is %s' %s[-1])
figure(figsize(30,7))
#plot 1
ax=subplot(131)
title('Lasso path')
ax.plot(s, weights, linewidth=2)
ymin, ymax = ylim()
ax.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
#Restrict norm to half for early termination
la.set_max_l1_norm(s[-1]*0.5)
la.train(feats_train)
size=la.get_path_size()
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
#plot 2
ax2=subplot(132)
title('Lasso path with restricted norm')
ax2.plot(s, weights, linewidth=2)
ax2.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
print ('Restricted norm is %s' %(s[-1]))
Explanation: The weights generated ($\beta_i$) and their norm ($\sum_i|\beta_i|$) change with each step. This is when a new variable is added to path. To get the weights at each of these steps get_w_for_var() method is used. The argument is the index of the variable which should be in the range [0, path_size).
End of explanation
feats = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels = RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
mat = feats.get_feature_matrix()
crime_rate = mat[0]
feats_train = RealFeatures(crime_rate.reshape(1, len(mat[0])))
preproc = RescaleFeatures()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor(True)
# Store preprocessed feature matrix.
preproc_data = feats_train.get_feature_matrix()
size=500
x1=linspace(0, 1, size)
width=0.5
tau=0.5
kernel=GaussianKernel(feats_train, feats_train, width)
krr=KernelRidgeRegression(tau, kernel, train_labels)
krr.train(feats_train)
feats_test=RealFeatures(x1.reshape(1,len(x1)))
kernel.init(feats_train, feats_test)
out = krr.apply().get_labels()
#Visualization of regression
fig=figure(figsize(6,6))
#first plot with only one attribute
title("Regression with 1st attribute")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
Explanation: Each color in the plot represents a coefficient and the vertical lines denote steps. It is clear that the weights are a piecewise linear function of the norm.
Kernel Ridge Regression
Kernel ridge regression (KRR) is a kernel-based regularized form of regression. The dual form of Ridge regression can be shown to be:
$${\bf \alpha}=\left({\bf X}^T{\bf X}+\tau{\bf I}\right)^{-1}{\bf y} \quad \quad(1)$$
It can be seen that the equation to compute $\alpha$ only contains the vectors $\bf X$ in inner products with each other. If a non-linear mapping
$\Phi : x \rightarrow \Phi(x) \in \mathcal F$ is used, the equation can be defined in terms of inner products $\Phi(x)^T \Phi(x)$ instead. We can then use the kernel trick, where a kernel function, which can be evaluated efficiently, is chosen: $K({\bf x_i, x_j})=\Phi({\bf x_i})\Phi({\bf x_j})$. This is done because it is sufficient to know these inner products only, instead of the actual vectors $\bf x_i$. Linear regression methods like the Ridge Regression discussed above can then be carried out in the feature space by using a kernel function representing a non-linear map, which amounts to nonlinear regression in the original input space.
KRR can be performed in Shogun using CKernelRidgeRegression class. Let us apply it on a non linear regression problem from the Boston Housing Dataset, where the task is to predict prices of houses by finding a relationship with the various attributes provided. The per capita crime rate attribute is used in this particular example.
End of explanation
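Equation $(1)$ can also be reproduced in a few lines of NumPy before moving to Shogun's CKernelRidgeRegression (a toy sketch; the Gaussian kernel convention used here is an assumption):
import numpy as np
def gaussian_gram(a, b, width):
    d = a[:, None] - b[None, :]
    return np.exp(-d ** 2 / width)                                   # K[i, j] = k(a_i, b_j)
xs = np.linspace(0, 1, 20)
ys = np.sin(4 * xs) + 0.1 * np.random.randn(20)
K = gaussian_gram(xs, xs, 0.5)
alpha = np.linalg.solve(K + 0.5 * np.eye(len(xs)), ys)               # alpha = (K + tau I)^-1 y
y_hat = K.dot(alpha)                                                 # predictions at the training points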
# Use different kernels
gaussian_kernel=GaussianKernel(feats_train, feats_train, 0.1)
#Polynomial kernel of degree 2
poly_kernel=PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=LinearKernel(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
svr_param=1
svr_C=10
svr=LibSVR(svr_C, svr_param, gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
#Visualization of regression
x1=linspace(0, 1, size)
feats_test_=RealFeatures(x1.reshape(1,len(x1)))
def svr_regress(kernels):
fig=figure(figsize(8,8))
for i, kernel in enumerate(kernels):
svr.set_kernel(kernel)
svr.train()
out=svr.apply(feats_test_).get_labels()
#subplot(1,len(kernels), i)
#first plot with only one attribute
title("Support Vector Regression")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
ylim([0, 40])
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
_=legend((p1, p2, p3), ["Gaussian Kernel", "Linear Kernel", "Polynomial Kernel"], loc=1)
svr_regress(kernels)
Explanation: As seen from the example, KRR (using the kernel trick) can apply techniques for linear regression in the feature space to perform nonlinear regression in the input space.
Support Vector Regression
In Kernel Ridge Regression $(1)$ we saw that the result is a dense solution. Thus all training examples are active, which limits its use to smaller numbers of training examples. Support Vector Regression (SVR) uses the concept of support vectors, as in Support Vector Machines, which leads to a sparse solution. In the SVM the penalty was paid for being on the wrong side of the discriminating plane. Here we do the same thing: we introduce a penalty for being far away from the predicted line, but once we are close enough, i.e. in some "epsilon-tube" around this line, there is no penalty.
We are given a labeled set of input-output pairs $\mathcal{D}=(x_i,y_i)^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in \mathcal{Y}$ and the primary problem is as follows:
$$\arg\min_{\mathbf{w},\mathbf{\xi},b} \left(\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^N (\xi_i + \xi_i^*)\right)$$
For the constraints:
$${\bf w}^T{\bf x}_i+b-c_i-\xi_i\leq 0, \, \forall i=1\dots N$$
$$-{\bf w}^T{\bf x}_i-b-c_i^*-\xi_i^*\leq 0, \, \forall i=1\dots N$$
with $c_i=y_i+ \epsilon$ and $c_i^*=-y_i+ \epsilon$
The resulting dual optimization problem is:
$$\max_{{\bf \alpha},{\bf \alpha}^*} -\frac{1}{2}\sum_{i,j=1}^N(\alpha_i-\alpha_i^*)(\alpha_j-\alpha_j^*) {\bf x}_i^T {\bf x}_j-\sum_{i=1}^N(\alpha_i+\alpha_i^*)\epsilon - \sum_{i=1}^N(\alpha_i-\alpha_i^*)y_i$$
$$\mbox{wrt}: {\bf \alpha},{\bf \alpha}^*\in{\bf R}^N \quad \mbox{s.t.}\quad 0\leq \alpha_i,\alpha_i^*\leq C,\, \forall i=1\dots N \quad \mbox{and} \quad \sum_{i=1}^N(\alpha_i-\alpha_i^*)y_i=0$$
This class also supports the $\nu$-SVR regression version of the problem, where $\nu$ replaces the $\epsilon$ parameter and represents an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors. The resulting problem generally takes a bit longer to solve. The details and a comparison of these two versions can be found in [1].
Let us try regression using Shogun's LibSVR. The dataset from the last section is used. The svr_param argument is the $\epsilon$-tube for the $\epsilon$ version and is the $\nu$ parameter in the other case.
End of explanation
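For comparison, the epsilon- and nu-flavours of SVR described above are also exposed in scikit-learn as SVR and NuSVR (a hedged sketch on synthetic data; it is not the Shogun LibSVR interface used next):
import numpy as np
from sklearn.svm import SVR, NuSVR                                   # assumes scikit-learn is installed
Xs = np.random.rand(50, 1)
ys = np.sin(6 * Xs).ravel() + 0.1 * np.random.randn(50)
eps_svr = SVR(kernel='rbf', C=10, epsilon=0.1).fit(Xs, ys)           # epsilon-tube formulation
nu_svr = NuSVR(kernel='rbf', C=10, nu=0.5).fit(Xs, ys)               # nu bounds the support-vector fraction
print(len(eps_svr.support_), len(nu_svr.support_))                   # both give sparse support-vector sets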
import time
gaussian_kernel=GaussianKernel(feats, feats, 13)
nus=[0.2, 0.4, 0.6, 0.8]
epsilons=[0.16, 0.09, 0.046, 0.0188]
svr_C=10
def compare_svr(nus, epsilons):
time_eps=[]
time_nus=[]
for i in range(len(epsilons)):
svr_param=1
svr=LibSVR(svr_C, epsilons[i], gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
t_start=time.clock()
svr.train()
time_test=(time.clock() - t_start)
time_eps.append(time_test)
for i in range(len(nus)):
svr_param=1
svr=LibSVR(svr_C, nus[i], gaussian_kernel, train_labels, LIBSVR_NU_SVR)
t_start=time.clock()
svr.train()
time_test=(time.clock() - t_start)
time_nus.append(time_test)
print "-"*72
print "|", "%15s" % 'Nu' ,"|", "%15s" % 'Epsilon',"|","%15s" % 'Time (Nu)' ,"|", "%15s" % 'Time(Epsilon)' ,"|"
for i in range(len(nus)):
print "-"*72
print "|", "%15s" % nus[i] ,"|", "%15s" %epsilons[i],"|","%15s" %time_nus[i] ,"|", "%15s" %time_eps[i] ,"|"
print "-"*72
title_='SVR Performance on Boston Housing dataset'
print "%50s" %title_
compare_svr(nus, epsilons)
Explanation: Let us do a comparison of the time taken by the 2 different models, similar to that done in section 6 of [1]. The Boston Housing Dataset is used.
End of explanation |
2,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Train and export the model
Step3: In this example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
You can now convert the trained model into TensorFlow Lite format using the Python TFLiteConverter.
Load the model using the TFLiteConverter.
Step4: Write it out to a .tflite file.
Step5: To quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is a supported type on the target platform.
Step6: Finally, convert the model as usual. Note that, by default, the converted model still uses float inputs and outputs for invocation convenience.
Step7: Note how the resulting file is approximately 1/2 the size.
Step8: Run the TensorFlow Lite model
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
Step9: Test the model on one image
Step10: Evaluate the model
Step11: Repeat the evaluation on the float16 quantized model to obtain the following | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
Explanation: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_float16_quant.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td> <img><a>Download notebook</a> </td>
</table>
Overview
TensorFlow Lite now supports converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced-precision arithmetic and run faster than with traditional floating point. The TensorFlow Lite GPU delegate can be configured to run this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy.
In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer with float16 quantization. Finally, you check the accuracy of the converted model and compare it to the original float32 model.
Build an MNIST model
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train and export the model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: For this example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
You can now convert the trained model into TensorFlow Lite format using the Python TFLiteConverter.
Load the model using the TFLiteConverter.
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a .tflite file.
End of explanation
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
Explanation: To quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is a supported type on the target platform.
End of explanation
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Explanation: Finally, convert the model as usual. Note that, by default, the converted model still uses float inputs and outputs for invocation convenience.
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting file is approximately 1/2 the size.
End of explanation
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
Explanation: Run the TensorFlow Lite model
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
End of explanation
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]
interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the model on one image
End of explanation
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the model
End of explanation
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
Explanation: Repeat the evaluation on the float16 quantized model to obtain the following:
End of explanation |
2,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Numbers in NumPy
Step1: The numpy.random module adds to the standard built-in Python random functions for generating efficiently whole arrays of sample values with many kinds of probability distributions.
Example
Step2: Advantages? The built-in Python random module only samples one value at a time and is significantly less efficient.
The following block builds an array with 10$^7$ normally distributed values
Step3: Write the equivalent code using the np.random.normal() function and time it! Keep in mind that the NumPy function is vectorized!
See the Numpy documentation site for detailed info on the numpy.random module
Random Walks
Using standard Python builtin functions, try to write a piece of code corresponding to a 1D Random walker initially located at 0 and taking steps of 1 or -1 with equal probability.
Hint
Step4: Now think of a possible alternative code using NumPy. Keep in mind that | Python Code:
import numpy as np
Explanation: Random Numbers in NumPy
End of explanation
samples = np.random.normal(size=(4,4))
samples
Explanation: The numpy.random module adds to the standard built-in Python random functions for generating efficiently whole arrays of sample values with many kinds of probability distributions.
Example: build a 4x4 array of samples from the standard normal distribution,
End of explanation
import random
N = 10000000
%timeit samples = [random.normalvariate(0,1) for i in range(N)]
Explanation: Advantages? The built-in Python random module only samples one value at a time and is significantly less efficient.
The following block builds an array with 10$^7$ normally distributed values:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
Explanation: Write the equivalent code using the np.random.normal() function and time it! Keep in mind that the NumPy function is vectorized!
See the Numpy documentation site for detailed info on the numpy.random module
Random Walks
Using standard Python builtin functions, try to write a piece of code corresponding to a 1D Random walker initially located at 0 and taking steps of 1 or -1 with equal probability.
Hint: use a list to keep track of the path of your random walker and have a look at the random.randint() function to generate the steps.
If it's too hard to start from scratch, you may want to peek at my solution.
Use matplotlib to plot the path of your random walker
End of explanation
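In case the linked solution is unavailable, one minimal pure-Python sketch could look like this (names are arbitrary):
import random
n_steps = 1000
path = [0]
for _ in range(n_steps):
    step = 1 if random.randint(0, 1) else -1                         # +1 or -1 with equal probability
    path.append(path[-1] + step)
# plt.plot(path) then shows the trajectory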
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
Explanation: Now think of a possible alternative code using NumPy. Keep in mind that:
NumPy offers arrays to store the path
numpy.random offers a vectorized version of random generating functions
Again, here is my solution.
Let's have a look at it:
End of explanation |
2,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function call argument rules and unpacking
Python functions can declare their parameters in roughly the following four forms:
Without default values: def func(a)
Step1: Another rule is that positional arguments take priority:
Step2: The safest approach is to use keyword arguments throughout.
Arbitrary arguments
Arbitrary-argument forms accept any number of arguments: *a stands for any number of positional arguments, and **d stands for any number of keyword arguments:
Step3: The def concat(*lst, sep = "/") syntax above was proposed in PEP 3102 and implemented after Python 3.0. Here the keyword argument must be named explicitly; it cannot be inferred from its position:
Step4: **d stands for any number of keyword arguments
Step5: Unpacking
A new feature added in Python 3.5 (PEP 448) allows *a and **d to be used outside of function arguments:
Step6: Unpacking can essentially be seen as a tuple with the () removed, or a dictionary with the {} removed. This syntax also provides a more Pythonic way to merge dictionaries:
Step7: Using this kind of unpacking in a function call also works in Python 2.7: | Python Code:
def func(a, b = 1):
pass
func(a = "G", 20) # SyntaxError: a positional argument cannot follow a keyword argument
Explanation: Function call argument rules and unpacking
Python functions can declare their parameters in roughly the following four forms:
Without default values: def func(a): pass
With default values: def func(a, b = 1): pass
Arbitrary positional arguments: def func(a, b = 1, *c): pass
Arbitrary keyword arguments: def func(a, b = 1, *c, **d): pass
When calling a function, there are two cases:
Arguments without keywords: func("G", 20)
Arguments with keywords: func(a = "G", b = 20) (keyword calls need not follow the declared order: func(b = 20, a = "G"))
Of course, the two styles can be mixed: func("G", b = 20), but the most important rule is that positional arguments must not appear after keyword arguments:
End of explanation
def func(a, b = 1):
pass
func(20, a = "G") # TypeError: got multiple values for argument 'a'
Explanation: Another rule is that positional arguments take priority:
End of explanation
def concat(*lst, sep = "/"):
return sep.join((str(i) for i in lst))
print(concat("G", 20, "@", "Hz", sep = ""))
Explanation: The safest approach is to use keyword arguments throughout.
Arbitrary arguments
Arbitrary-argument forms accept any number of arguments: *a stands for any number of positional arguments, and **d stands for any number of keyword arguments:
End of explanation
print(concat("G", 20, "-")) # Not G-20
Explanation: The def concat(*lst, sep = "/") syntax above was proposed in PEP 3102 and implemented after Python 3.0. Here the keyword argument must be named explicitly; it cannot be inferred from its position:
End of explanation
def dconcat(sep = ":", **dic):
for k in dic.keys():
print("{}{}{}".format(k, sep, dic[k]))
dconcat(hello = "world", python = "rocks", sep = "~")
Explanation: **d stands for any number of keyword arguments
End of explanation
print(*range(5))
lst = [0, 1, 2, 3]
print(*lst)
a = *range(3),  # the trailing comma here must not be omitted
print(a)
d = {"hello": "world", "python": "rocks"}
print({**d}["python"])
Explanation: Unpacking
A new feature added in Python 3.5 (PEP 448) allows *a and **d to be used outside of function arguments:
End of explanation
user = {'name': "Trey", 'website': "http://treyhunner.com"}
defaults = {'name': "Anonymous User", 'page_name': "Profile Page"}
print({**defaults, **user})
Explanation: Unpacking can essentially be seen as a tuple with the () removed, or a dictionary with the {} removed. This syntax also provides a more Pythonic way to merge dictionaries:
End of explanation
print(concat(*"ILovePython"))
Explanation: Using this kind of unpacking in a function call also works in Python 2.7:
End of explanation |
2,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Common Cross Validation Test Code
We used the same cross validation test procedure for the three applications described in the paper. This document provides explanations for the code in analytics.py used in those tests.
See the tests carried out in each application
Step1: Balancing Data
This section defines the data balancing function by over-sampling using the SMOTE algorithm (see SMOTE
Step2: The t_confidence_interval method below calculates the 95% confidence interval for a given list of values.
Step3: Cross Validation Methodology
The following cv_test function carries out the cross validation test n_iterations times and returns the accuracy scores and importance scores (for each feature). The cross validation steps are as follows
Step4: Experiments | Python Code:
# The 'combined' list has all the 22 metrics
feature_names_combined = (
'entities', 'agents', 'activities', # PROV types (for nodes)
'nodes', 'edges', 'diameter', 'assortativity', # standard metrics
'acc', 'acc_e', 'acc_a', 'acc_ag', # average clustering coefficients
'mfd_e_e', 'mfd_e_a', 'mfd_e_ag', # MFDs
'mfd_a_e', 'mfd_a_a', 'mfd_a_ag',
'mfd_ag_e', 'mfd_ag_a', 'mfd_ag_ag',
'mfd_der', # MFD derivations
'powerlaw_alpha' # Power Law
)
# The 'generic' list has 6 generic network metrics (that do not take provenance information into account)
feature_names_generic = (
'nodes', 'edges', 'diameter', 'assortativity', # standard metrics
'acc',
'powerlaw_alpha' # Power Law
)
# The 'provenance' list has 16 provenance-specific network metrics
feature_names_provenance = (
'entities', 'agents', 'activities', # PROV types (for nodes)
'acc_e', 'acc_a', 'acc_ag', # average clustering coefficients
'mfd_e_e', 'mfd_e_a', 'mfd_e_ag', # MFDs
'mfd_a_e', 'mfd_a_a', 'mfd_a_ag',
'mfd_ag_e', 'mfd_ag_a', 'mfd_ag_ag',
'mfd_der', # MFD derivations
)
# The utility of the above three sets of metrics will be assessed in our experiments to
# understand whether provenance type information helps us improve data classification performance
feature_name_lists = (
('combined', feature_names_combined),
('generic', feature_names_generic),
('provenance', feature_names_provenance)
)
Explanation: Common Cross Validation Test Code
We used the same cross validation test procedure for the three applications described in the paper. This document provides explanations for the code in analytics.py used in those tests.
See the tests carried out in each application:
* Application 1: ProvStore Documents
* Application 2: CollabMap
* Application 3: HAC-ER Messages
Lists of features
In our experiments, we first test our trained classifiers using all 22 provenance network metrics as defined in the paper. We then repeat the test using only the generic network metrics (6) and only the provenance-specific network metrics (16). Comparing the performance from all three tests will help verify whether the provenance-specific
network metrics bring added benefits to the classification application being discussed.
The lists of metrics combined, generic, and provenance are defined below.
End of explanation
from imblearn.over_sampling import SMOTE
from collections import Counter
def balance_smote(df):
X = df.drop('label', axis=1)
Y = df.label
print('Original data shapes:', X.shape, Y.shape)
smoX, smoY = X, Y
c = Counter(smoY)
while (min(c.values()) < max(c.values())): # check if all classes are balanced, if not balance the first minority class
smote = SMOTE(ratio="auto", kind='regular')
smoX, smoY = smote.fit_sample(smoX, smoY)
c = Counter(smoY)
print('Balanced data shapes:', smoX.shape, smoY.shape)
df_balanced = pd.DataFrame(smoX, columns=X.columns)
df_balanced['label'] = smoY
return df_balanced
Explanation: Balancing Data
This section defines the data balancing function by over-sampling using the SMOTE algorithm (see SMOTE: Synthetic Minority Over-sampling Technique).
It takes a dataframe where each row contains the label (in column label) and the feature vector corresponding to that label. It returns a new dataframe of the same format, but with added rows resulting from the SMOTE oversampling process.
End of explanation
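A quick usage sketch (the toy DataFrame below is made up purely for illustration, and it assumes the imblearn version matching the code above):
import numpy as np
import pandas as pd
toy = pd.DataFrame({'f1': np.random.randn(30),
                    'f2': np.random.randn(30),
                    'label': [True] * 22 + [False] * 8})
balanced = balance_smote(toy)
print(balanced.label.value_counts())                                 # both classes now have 22 rows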
def t_confidence_interval(an_array, alpha=0.95):
s = np.std(an_array)
n = len(an_array)
return stats.t.interval(alpha=alpha, df=(n - 1), scale=(s / np.sqrt(n)))
Explanation: The t_confidence_interval method below calculates the 95% confidence interval for a given list of values.
End of explanation
def cv_test(X, Y, n_iterations=1000, test_id=""):
accuracies = []
importances = []
while len(accuracies) < n_iterations:
skf = model_selection.StratifiedKFold(n_splits=10, shuffle=True)
for train, test in skf.split(X, Y):
clf = tree.DecisionTreeClassifier()
clf.fit(X.iloc[train], Y.iloc[train])
accuracies.append(clf.score(X.iloc[test], Y.iloc[test]))
importances.append(clf.feature_importances_)
print("Accuracy: %.2f%% ±%.4f <-- %s" % (np.mean(accuracies) * 100, t_confidence_interval(accuracies)[1] * 100, test_id))
return accuracies, importances
Explanation: Cross Validation Methodology
The following cv_test function carries out the cross validation test n_iterations times and returns the accuracy scores and importance scores (for each feature). The cross validation steps are as follows:
* Split the input dataset (X, Y) into a training set and a test set using Stratified K-fold method with k = 10
* Train the Decision Tree classifier clf using the training set
* Score the accuracy of the classifier clf on the test set
* (Repeat the above until having done the required number of iterations)
End of explanation
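For comparison, the same 10-fold stratified scheme can be written very compactly with scikit-learn's helpers; the cv_test function above is kept because it also records the per-fold feature importances (this snippet is only an illustrative sketch):
from sklearn import model_selection, tree
def quick_cv(X, Y, n_repeats=10):
    clf = tree.DecisionTreeClassifier()
    cv = model_selection.RepeatedStratifiedKFold(n_splits=10, n_repeats=n_repeats)
    scores = model_selection.cross_val_score(clf, X, Y, cv=cv)
    return scores.mean(), scores.std()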
def test_classification(df, n_iterations=1000):
results = pd.DataFrame()
imps = pd.DataFrame()
Y = df.label
for feature_list_name, feature_names in feature_name_lists:
X = df[list(feature_names)]
accuracies, importances = cv_test(X, Y, n_iterations, test_id=feature_list_name)
rs = pd.DataFrame(
{
'Metrics': feature_list_name,
'Accuracy': accuracies}
)
results = results.append(rs, ignore_index=True)
if feature_list_name == "combined": # we are interested in the relevance of all features (i.e. 'combined')
imps = pd.DataFrame(importances, columns=feature_names)
return results, imps
Explanation: Experiments: Having defined the cross validation method above, we now run it on the dataset (df) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance).
End of explanation |
2,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter Sentiment Analysis
Businesses and organizations around the world know that the first requirement for success is a happy customer base. For the purpose of identifying customer sentiment, the microblogging service Twitter, with its enormous collection of active users, is a font of knowledge. The most recent release of SAS Viya has added support for Twitter analysis to the DeepLearning action set’s Recurrent Neural Network layer. This recipe shows a pipeline to analyze sentiment in Twitter data using word embeddings and RNNs in SAS. The overall structure of this document and a small amount of the text comes from [1]
[1] https
Step1: Load the Glove Embeddings into CAS
Semantic word embeddings, the vector encodings of the meaning of words, are the basis of deep learning for text analytics.
In this recipe, we use the public domain glove embeddings trained on Twitter available at [2]. We have made changes to the format of the glove embeddings for the purpose of this work.
We remove all words with non-ASCII characters to make the file more lightweight, as the tweets themselves are ASCII.
We also remove ", which is a special character in SAS, and change the delimiter to tabs from spaces.
<code>cat glove.twitter.100d.txt | grep -v \" | grep -Pv "[^\x00-\x7F]" > glove.twitter.100d.clean.txt</code>
We include the modified glove file as part of this recipe.
[2] https
Step2: Load the Twitter data into CAS
Direct distribution of Twitter text is a violation of the Twitter terms of service [3]. The appropriate approach is to distribute data in dehydrated form. That is, we may distribute the tweet ids along with our annotations but without the text. Using [4], the user may download the text themselves through the Twitter API using the command below. You can run it right in the browser. The download takes about twelve hours on our machine.
[3] https
Step3: The twitter download tool requires an access token. You can get a token by applying for a twitter developer account [5]. Once you have an account, register an app and get your consumer key and your secret key. Once you have these, update twitter_download/download_tweets_api.py and run it. The script will open a web browser for you to log in with your Twitter credentials. It will save a file with your private keys so you only need to do it once. Now you can download the data.
Step4: Once we've downloaded the data, we clean it. To create this dataset, we collected 20,000 tweets containing the "
Step5: When a user deletes his or her tweet or makes it private it can no longer be downloaded, so one thing that publicly distributed Twitter data does is decay over time. Fortunately, it's simple to get a quick measure of the amount of data that has been lost.
Step6: This second preprocessing step is to coerce the format of the data into that used in the GloVe twitter embeddings. Here we use an included python Twitter normalization tool based on the ruby script provided by the GloVe team. [4]
Step7: Tokenize Text
This step involves separating the text into tokens, then presenting the result in a table that ApplyWordVector can use.
Term
Step8: A simple out of vocabulary test makes for a good sanity check if there are any mismatches in the glove embedding file. It will change due to Twitter decay, but should be roughly between 0.01 and 0.03
Step9: Apply Word Vector
Here we run apply word vector and merge the resulting word sequences with their labels
Step10: F_n_e = the eth feature of the nth token in the sequence
Step11: This step merges the lost target data back onto the newly embedded sentences. The ApplyWordVec action's metadata does not match that of the original cas action, so we start by clearing the column metadata. Then we simply call SWAT's merge action to combine the two tables.
Step12: Model
Step13: Train the model
Step14: Now that we have scored our data, we can perform an analysis of its errors. We can generate a confusion matrix in CAS using the crosstab function.
Step15: We see here that negative tweets were mistaken for positive tweets with roughly the same frequency as positive for negative.
To examine in more detail, we can look at the misclassified tweets. Remember that our distant supervision approach is noisy, so the ground label may not always agree with your intuition. Here's a look at a few of the falsely classified negative tweets.
Step16: Here are some falsely classified positive tweets.
Step17: Some work that a user of this recipe could do to further improve the results | Python Code:
import swat
from IPython.display import display
swat.options.cas.exception_on_severity = 2
s = swat.CAS('rdcgrd075.unx.sas.com', 3217,authinfo=r'/u/saleem/.authinfo')
s.loadactionset('deeplearn')
Explanation: Twitter Sentiment Analysis
Businesses and organizations around the world know that the first requirement for success is a happy customer base. For the purpose of identifying customer sentiment, the microblogging service Twitter, with its enormous collection of active users, is a font of knowledge. The most recent release of SAS Viya has added support for Twitter analysis to the DeepLearning action set’s Recurrent Neural Network layer. This recipe shows a pipeline to analyze sentiment in Twitter data using word embeddings and RNNs in SAS. The overall structure of this document and a small amount of the text comes from [1]
[1] https://github.com/sassoftware/sas-viya-programming/tree/master/deeplearning/fashion-mnist.
Import modules and create CAS session
In this code we import the needed modules and cas action sets
We assign values for the cashost, casport, and casauth values
These are then used to establish a CAS session named 's'
We set exception_on_severity to 2 to enable tracebacks for server-side CAS errors
Documentation to Connect and Start a Session
End of explanation
import os
# An example embeddings file;
GLOVE_PATH = 'miniglove.tsv'
DELIMITER = "\t"
dims = 100
glove = s.CASTable('glove', replace=True)
glove = s.upload_file(GLOVE_PATH,
casout=glove,
importoptions=dict(fileType='csv',
delimiter="\t",
varChars=True,
getNames=False,
vars=[dict(type='varchar')]+[dict(type='double')]*dims))
Explanation: Load the Glove Embeddings into CAS
Semantic word embeddings, the vector encodings of the meaning of words, are the basis of deep learning for text analytics.
In this recipe, we use the public domain glove embeddings trained on Twitter available at [2]. We have made changes to the format of the glove embeddings for the purpose of this work.
We remove all words with non-ASCII characters to make the file more lightweight, as the tweets themselves are ASCII.
We also remove ", which is a special character in SAS, and change the delimiter to tabs from spaces.
<code>cat glove.twitter.100d.txt | grep -v \" | grep -Pv "[^\x00-\x7F]" > glove.twitter.100d.clean.txt</code>
We include the modified glove file as part of this recipe.
[2] https://nlp.stanford.edu/projects/glove/
End of explanation
!git clone https://github.com/aritter/twitter_download.git
Explanation: Load the Twitter data into CAS
Direct distribution of Twitter text is a violation of the Twitter terms of service [3]. The appropriate approach is to distribute data in dehydrated form. That is, we may distribute the tweet ids along with our annotations but without the text. Using [4], the user may download the text themselves through the Twitter API using the command below. You can run it right in the browser. The download takes about twelve hours on our machine.
[3] https://twitter.com/en/tos
[4] https://github.com/aritter/twitter_download
[5] https://developer.twitter.com/en/apply/user
End of explanation
!python twitter_download/download_tweets_api.py --dist emoji_sentiment_data_dehydrated.tsv --output emoji_sentiment_data_rehydrated.tsv
Explanation: The twitter download tool requires an access token. You can get a token by applying for a twitter developer account [5]. Once you have an account, register an app and get your consumer key and your secret key. Once you have these, update twitter_download/download_tweets_api.py and run it. The script will open a web browser for you to log in with your Twitter credentials. It will save a file with your private keys so you only need to do it once. Now you can download the data.
End of explanation
import re
import pandas as pd
path = "emoji_sentiment_data_rehydrated.tsv"
df = pd.read_csv(path,
delimiter="\t",
names=['_Document_', '_Target_', 'slice', 'text'],
skiprows=1)
def clean(tweet):
tweet = re.sub(r'[^\x00-\x7F]+', '', tweet)
tweet = re.sub(r"\s+", ' ', tweet)
tweet = re.sub(r"^[\"']", "", tweet)
tweet = tweet.replace("\'", "")
return re.sub(r"(:\)+)|:\(+", "", tweet)
df['text'] = df['text'].apply(clean)
Explanation: Once we've downloaded the data, we clean it. To create this dataset, we collected 20,000 tweets containing the ":)" emoticon and 20,000 containing ":(". We labeled these positive and negative, respectively. We then removed tweets containing foul language and ended up with roughly 37,500 tweets. This is a noisy way to label the data, and you are likely to get more accurate labels if you sample all tweets and label them manually. Nevertheless, it's an excellent method to get a lot of sentiment data quickly with an unrestrictive license. Since we don't want our sentiment analysis tool to simply learn to detect the presence of a smiley face or a frowny face, we scrub the data of these two emoticons. We also normalize whitespace to a single space and remove all non-ASCII characters and quotation marks to avoid confusing the software.
End of explanation
import html
count = len(df)
lost_count = len(df[df['text'] == "Not Available"])
print("{:.1%} of data deleted or made private".format(lost_count/count))
df = df[df['text'] != "Not Available"]
for slize in ['train', 'dev', 'test']:
print("{} datapoints in {}".format(len(df[df['slice'] == slize]), slize))
Explanation: When a user deletes his or her tweet or makes it private, it can no longer be downloaded, so publicly distributed Twitter datasets decay over time. Fortunately, it's simple to get a quick measure of how much data has been lost.
End of explanation
from twitter_glove import normalize
def preprocess(tweet):
return normalize(tweet).lower()
df['text'] = df['text'].apply(html.unescape)
df['text'] = df['text'].apply(preprocess)
df = df.drop_duplicates(subset='_Document_')
reviews_train = s.CASTable('reviews_train.csv', replace=True)
reviews_train = s.upload_frame(df, casout=reviews_train)
for slize in ['train', 'dev']:
print(slize)
print(df[df['slice'] == slize]['_Target_'].value_counts(True))
print()
Explanation: This second preprocessing step is to coerce the format of the data into that used in the GloVe twitter embeddings. Here we use an included python Twitter normalization tool based on the ruby script provided by the GloVe team. [4]
End of explanation
print(len(df))
df_cleaned = df
df_cols = {
"_Term_": [],
"_Start_": [],
"_Document_": []
}
for i in df_cleaned.index:
term = list(filter(None, df_cleaned['text'].loc[i].split(" ")))
df_cols["_Term_"].extend(term)
df_cols["_Start_"].extend(range(len(term)))
df_cols["_Document_"].extend([df_cleaned['_Document_'].loc[i]]*len(term))
tokenized_df = pd.DataFrame.from_dict(df_cols)[['_Term_',
'_Start_',
'_Document_']]
out_offset = s.CASTable('out_offset', replace=True)
out_offset = s.upload_frame(tokenized_df, casout=out_offset)
tokenized_df.head()
# vocab = set(tokenized_df['_Term_'].values)
# glove = pd.read_csv(GLOVE_PATH,sep=DELIMITER,header=None)
# miniglove=glove[glove[0].isin(vocab)]
# len(miniglove)/len(glove)
# miniglove.to_csv('miniglove.tsv',sep=DELIMITER,index=False)
Explanation: Tokenize Text
This step involves separating the text into tokens, then presenting the result in a table that ApplyWordVector can use.
Term: The token
Start: The position of the token in the document. This is used to sort the terms, as ApplyWordVector is designed for use in a parallel environment and cannot rely on inputs coming to it in order.
Document: The document id. Because the input is given as a single table, this is important to separate one document from another.
End of explanation
import pandas as pd
vocab = set([item[0] for item in pd.read_csv(
GLOVE_PATH, sep=DELIMITER, header=None, usecols=[0]).values])
tokenized_df['_Term_'].apply(lambda word: word not in vocab).mean()
Explanation: A simple out-of-vocabulary test is a good sanity check for mismatches with the GloVe embedding file. The rate will drift over time due to Twitter decay, but it should be roughly between 0.01 and 0.03.
End of explanation
s.loadactionset('textparse')
embedded = s.CASTable('embedded', replace=True)
s.textparse.applyWordVector(
model=glove,
offset=out_offset,
casout=embedded
)
embedded.head()
Explanation: Apply Word Vector
Here we run apply word vector and merge the resulting word sequences with their labels
End of explanation
embedding_columns = [column for column in embedded.columns if column.startswith('_F')]
len(embedding_columns)
Explanation: F_n_e = the e-th feature of the n-th token in the sequence
End of explanation
import time
format_clearer = [dict(name=column, format="") for column in embedded.columns]
embedded.table.alterTable(columns=format_clearer)
start_time = time.time()
embedded_with_additional_data = s.CASTable('embedded_with_additional_data', replace=True)
reviews_train.merge(
embedded,
on="_Document_",
casout=embedded_with_additional_data
)
print(time.time() - start_time)
embedded_with_additional_data[['text', '_Target_']].head()
Explanation: This step merges the lost target data back onto the newly embedded sentences. The column metadata produced by the applyWordVector action does not match that of the original CAS table, so we start by clearing the column formats. Then we simply call SWAT's merge action to combine the two tables.
End of explanation
# Hyperparameters
settings = dict(
n=25,
init='msra',
bidirectional_layers=1,
learning_rate=0.0005,
step_size=20,
thread_minibatch_size=1,
max_epochs=40,
fc_dropout=0.0,
output_dropout=0.0,
recurrent_dropout=0.0
)
sentiment = s.CASTable('sentiment', replace=True)
# Generate the model
s.buildmodel(model=sentiment, type='RNN')
del sentiment.params.replace
# Add the input layer
s.addlayer(model=sentiment, name='data', layer=dict(type='input'))
# Generate some number of bidirectional layers
# This loop will generate however many bidirectional layers are specified in settings
output = ['data']
for i in range(settings['bidirectional_layers']):
forward_birnn = 'birnn{}'.format(i)
backward_birnn = forward_birnn+'r'
s.addlayer(model=sentiment, name=forward_birnn, srclayers=output,
layer=dict(type='recurrent',
n=settings['n'],
init=settings['init'],
rnnType='GRU',
outputType='samelength',
dropout=settings['recurrent_dropout'],
reverse=False))
s.addlayer(model=sentiment, name=backward_birnn, srclayers=output,
layer=dict(type='recurrent',
n=settings['n'],
init=settings['init'],
rnnType='GRU',
outputType='samelength',
dropout=settings['recurrent_dropout'],
reverse=True))
output = [forward_birnn, backward_birnn]
# summary layer
s.addlayer(model=sentiment, name='frnn1', srclayers=output,
layer=dict(type='recurrent',
n=settings['n'],
init=settings['init'],
rnnType='GRU',
dropout=settings['recurrent_dropout'],
outputType='encoding'))
# output fully connected layer
s.addlayer(model=sentiment,
name='outlayer',
srclayers=['frnn1'],
layer=dict(type='output'))
Explanation: Model: Bidirectional Recurrent Neural Networks with Gated Units
To understand a recurrent neural network, imagine a single fully connected neural network that is applied to each step of a sequence. The difference between this and a recurrent neural network is that the hidden layer at time t is also given as input at time t+1.
One challenge of the recurrent neural network in its simplest form is that it tends to forget prior input quickly. There are a couple different approaches to this issue, Long Short Term Memory networks (LSTMs) being the most well known [6]. We use another approach, the Gated Recurrent Unit (GRU) [7]. We use the GRU because it requires fewer parameters than the LSTM, making it conceptually simpler and less computationally expensive to train, and it tends to give comparable results [8]. Each of these approaches uses "gates" to explicitly control the rate at which old information is forgotten and new information is incorporated.
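For reference, one common textbook formulation of the GRU's gates (not the exact SAS implementation, and with bias terms omitted) is:
$$z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1})$$
$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})), \qquad h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$$
Here $z_t$ is the update gate, $r_t$ the reset gate, and $\odot$ denotes element-wise multiplication; some references swap the roles of $z_t$ and $1-z_t$ in the last equation.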
When we generate a representation for each word in a sentence to represent words in their context, a recurrent neural network that reads from left to right will only include context from the left of each word. A bidirectional recurrent neural network (BiRNN) resolves this by using two recurrent neural networks, one operating from left to right, the other from right to left. The final output is the concatenation of these two representations.
For sentiment analysis, we import the embedded data, then use a variable number of BiRNN layers to generate a contextualized representation of the word sequence, which we then feed into a forward RNN and take the last hidden state as a summary of the sentence. We feed this to a fully connected neural network to get the output.
[6] http://colah.github.io/posts/2015-08-Understanding-LSTMs/
[7] https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be
[8] https://arxiv.org/abs/1412.3555
End of explanation
trained_weights = s.CASTable('trainedWeights', replace=True)
best_weights = s.CASTable('bestWeights', replace=True)
shuffled_embedded = s.CASTable('shuffled_embedded',replace=True)
s.shuffle(embedded_with_additional_data,casout=shuffled_embedded)
embedded_with_additional_data = shuffled_embedded
r = embedded_with_additional_data.query("slice EQ 'train'").dlTrain(
model=sentiment,
dataspecs=[
dict(type='numericnominal',
layer='data',
data=embedding_columns,
numnomParms=dict(
tokenSize=dims, length='_sequence_length_')),
dict(type='numericnominal',
layer='outlayer',
data='_Target_',
nominals='_Target_')
],
validtable=embedded_with_additional_data.query("slice EQ 'dev'"),
modelWeights=trained_weights,
bestWeights=best_weights,
optimizer=dict(
miniBatchSize=settings['thread_minibatch_size'],
maxEpochs=settings['max_epochs'],
loglevel=2,
algorithm=dict(method='adam',
beta1=0.9,
beta2=0.999,
gamma=0.5,
learningRate=settings['learning_rate'],
clipGradMax=100,
clipGradMin=-100,
stepSize=settings['step_size'],
lrPolicy='step'),
dropout=settings['output_dropout']),
seed=12345)
sentiment_scored = s.CASTable('sentiment_scored', replace=True)
r = embedded_with_additional_data.query("slice EQ 'test'").dlScore(
modelTable=sentiment,
initWeights=best_weights,
copyVars=['_Target_', 'text'],
casOut=sentiment_scored,
bufferSize=2)
r
Explanation: Train the model
End of explanation
cmr = sentiment_scored.crosstab(row='_Target_', col='_DL_PredName_')
cmr.Crosstab
Explanation: Now that we have scored our data, we can perform an analysis of its errors. We can generate a confusion matrix in CAS using the crosstab function.
End of explanation
sentiment_scored.query("_Target_ EQ 'negative' AND _DL_PredName_ EQ 'positive'")['text'].head()
Explanation: We see here that negative tweets were mistaken for positive tweets with roughly the same frequency as positive for negative.
To examine in more detail, we can look at the misclassified tweets. Remember that our distant supervision approach is noisy, so the ground-truth label may not always agree with your intuition. Here's a look at a few of the falsely classified negative tweets.
End of explanation
pd.set_option('display.max_colwidth', -1)
sentiment_scored.query("_Target_ EQ 'positive' AND _DL_PredName_ EQ 'negative'")['text'].head()
Explanation: Here are some falsely classified positive tweets.
End of explanation
s.terminate()
Explanation: Some work that a user of this recipe could do to further improve the results:
1. Collect more data - The advantage of the emoji approach is that it's cheap. We used a simple version of it, but to get more tweets you can include those with image emoji smiley faces such as 😁 and other text versions such as :c) and (O8. Or if you have the data for it, you can just collect more with the same simple smiles and frowns. We intentionally limited our dataset in order to make it quick to download.
We could also look for a cleaner way to collect data, since especially in the falsely classified negative tweets we are seeing some that probably would have received a different sentiment score from a human tagger. Keep in mind, human labeled data is expensive.
2. Tweak the hyperparameters, make custom sentiment-aware embeddings, or make changes to how the normalization is done. Keep in mind if you change the normalization you will likely have to generate your own embedding file.
Finally, remember that it's good manners to end your session when you are done with it.
End of explanation |
2,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy Basics
Numerical Python, or "NumPy" for short, is a foundational package on which many of the most common data science packages are built. Numpy provides us with high performance multi-dimensional arrays which we can use as vectors or matrices.
The key features of numpy are
Step1: A common habit is to import under the np namespace as you will often find yourself typing numpy a lot otherwise. Two letters is easier on your fingers and your computer.
Rank 1
Step2: Rank 2
Step3: Rank 3 and beyond!
Step4: Reshaping and Slicing Arrays
Oftentimes, we would like to change up the dimensions a bit. One natural way to do this with NumPy is to reshape arrays. Let's start with a 1-dimensional array of 72 elements to help understand how things get re-ordered or changed around.
Step5: Note that the transpose is just ndarray().T. But remember, things are not always what they seem. The above two examples have the exact same dimensionality -- but the reshaping will slice up the vector in different ways! Be careful!
Step6: We can even combine multiple indices with Python slicing!
Step7: Filtering | Python Code:
import numpy as np
from __future__ import print_function
Explanation: NumPy Basics
Numerical Python, or "NumPy" for short, is a foundational package on which many of the most common data science packages are built. Numpy provides us with high performance multi-dimensional arrays which we can use as vectors or matrices.
The key features of numpy are:
ndarrays: n-dimensional arrays of the same data type which are fast and space-efficient. There are a number of built-in methods for ndarrays which allow for rapid processing of data without using loops (e.g., compute the mean).
Broadcasting: a useful tool which defines implicit behavior between multi-dimensional arrays of different sizes.
Vectorization: enables numeric operations on ndarrays.
Input/Output: simplifies reading and writing of data from/to file.
Additional Recommended Resources:
- Numpy Documentation
In this brief tutorial, I will demonstrate some of the common NumPy operations you will see during the rest of the week.
End of explanation
np.arange(-1.0, 1.0, 0.1)
print(np.random.randint(0, 5, size=10))
print(np.ones(10))
print(np.zeros(10))
rank1_array = np.array([3, 33, 333])
print(type(rank1_array))
print(rank1_array.shape)
print(rank1_array.size)
print(rank1_array.dtype)
print(rank1_array[0], rank1_array[1], rank1_array[2])
print(rank1_array[:], rank1_array[1:], rank1_array[:2])
Explanation: A common habit is to import under the np namespace as you will often find yourself typing numpy a lot otherwise. Two letters is easier on your fingers and your computer.
Rank 1
End of explanation
np.ones((10,2)) # 10 rows, 2 columns
np.zeros((2,10)) # 2 rows, 10 columns
np.eye(10,10)*3 # diagonal of 1s but multiplied by 3
rank2_array = np.array([[11,12,13],[21,22,23],[31,32,33]])
print(type(rank2_array))
print(rank2_array.shape)
print(rank2_array.size)
print(rank2_array.dtype)
print(rank2_array[0], rank2_array[1], rank2_array[2])
print(rank2_array[:]) # print everything in array
print(rank2_array[1:]) # slice from 2nd row and on
print(rank2_array[:,0]) # all rows, but 1st column
print(rank2_array[:,1]) # all rows, but 2nd column
print(rank2_array[:,2]) # all rows, but 3rd column
print(rank2_array[0,1]) # i=0, j=1 of the 3x3 matrix we just made
Explanation: Rank 2
End of explanation
np.random.randint(0, 5, (2,5,5)) # 2 x 5 x 5 [3D matrix!]
np.random.randint(0, 5, (2,5,5)).shape
Explanation: Rank 3 and beyond!
End of explanation
np.arange(72).reshape(3,24)
np.arange(72).reshape(24,3).T # transpose; this is not the same as above! beware
Explanation: Reshaping and Slicing Arrays
Oftentimes, we would like to change up the dimensions a bit. One natural way to do this with NumPy is to reshape arrays. Let's start with a 1-dimensional array of 72 elements to help understand how things get re-ordered or changed around.
End of explanation
np.arange(72).reshape(3, 2, -1) # -1 means to let NumPy figure out the size of the remaining dimension
np.arange(72).reshape(3, -1, 12) # -1 means to let NumPy figure out the size of the remaining dimension
np.arange(36).reshape(6, 6)
Explanation: Note that the transpose is just ndarray.T. But remember, things are not always what they seem. The two examples above have exactly the same shape -- but reshaping fills in the elements in a different order, so the resulting arrays are not equal! Be careful!
End of explanation
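# A quick check makes the warning above concrete ('a' is just an illustrative name):
a = np.arange(72)
print(np.array_equal(a.reshape(3, 24), a.reshape(24, 3).T))  # False: same shape, different element order
print(a.reshape(3, 24)[0, :5])    # [0 1 2 3 4]
print(a.reshape(24, 3).T[0, :5])  # [0 3 6 9 12]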
np.arange(36).reshape(6,6)[2:4,:3]
Explanation: We can even combine multiple indices with Python slicing!
End of explanation
unfiltered_arr = np.arange(72).reshape(3, -1, 12)
unfiltered_arr
condition = unfiltered_arr % 3 == 0 # divisible by 3
condition # this is a bitmask!
unfiltered_arr[condition] # boolean (fancy) indexing returns a copy of the matching elements, not a view
unfiltered_arr[condition] = 0 # only change the values matching the condition
unfiltered_arr
unfiltered_arr.reshape(-1) # flatten it back!
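# Note (illustrative): boolean indexing returns a copy, so modifying the result does not
# touch the original array; assignment *through* the mask (as above) does modify it.
subset = unfiltered_arr[unfiltered_arr > 50]
subset[:] = -1
print((unfiltered_arr == -1).any())   # False -- the original array is unchanged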
Explanation: Filtering
End of explanation |
2,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 style="text-align
Step1: Next-word prediction task
Part 1
Step2: 1.2.Symbols encoding
The LSTM input's can only be numbers. A way to convert words (symbols or any items) to numbers is to assign a unique integer to each word. This process is often based on frequency of occurrence for efficient coding purpose.
Here, we define a function to build an indexed word dictionary (word->number). The "build_vocabulary" function builds both
Step3: Run the cell below to display the vocabulary
Step4: Part 2
Step5: Training Parameters and constants
Step6: Define the Loss/Cost and optimizer
Step7: Comment
Step8: Part 3
Step9: Comment
Step10: Comment
Step11: Comment
Step12: 3.3. Play with number of inputs
The number of input in our example is 3, see what happens when you use other number (1 and 5)
n_input = 1
Step13: Comment
Step14: Comment | Python Code:
import numpy as np
import collections # used to build the dictionary
import random
import time
from time import time
import pickle # may be used to save your model
import matplotlib.pyplot as plt
#Import Tensorflow and rnn
import tensorflow as tf
from tensorflow.contrib import rnn
# Target log path
logs_path = 'lstm_words'
writer = tf.summary.FileWriter(logs_path)
Explanation: <h1 style="text-align:center">Deep Learning </h1>
<h1 style="text-align:center"> Lab Session 3 - 3 Hours </h1>
<h1 style="text-align:center">Long Short Term Memory (LSTM) for Language Modeling</h1>
<b> Student 1:</b> CANALE
<b> Student 2:</b> ELLENA
In this Lab Session, you will build and train a Recurrent Neural Network, based on Long Short-Term Memory (LSTM) units for next word prediction task.
Answers and experiments should be made by groups of one or two students. Each group should fill and run appropriate notebook cells.
Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as a PDF document using print as PDF (Ctrl+P). Do not forget to run all your cells before generating your final report and do not forget to include the names of all participants in the group. The lab session should be completed by June 9th 2017.
Send your PDF file to [email protected] and [email protected] using [DeepLearning_lab3] as the subject of your email.
Introduction
You will train an LSTM to predict the next word using a sample short story. The LSTM will learn to predict the next item of a sentence from the 3 previous items (given as input). Punctuation marks are considered dictionary items, so they can be predicted too. Figure 1 shows the LSTM and the process of next-word prediction.
<img src="lstm.png" height="370" width="370">
Each word (and punctuation mark) from the text is encoded by a unique integer. The integer value corresponds to the index of the corresponding word (or punctuation mark) in the dictionary. The network output is a one-hot vector indicating the index of the predicted word in the reversed dictionary (Section 1.2). For example, if the prediction is 86, the predicted word will be "company".
You will use a sample short story from Aesop’s Fables (http://www.taleswithmorals.com/) to train your model.
<font size="3" face="verdana" > <i> "There was once a young Shepherd Boy who tended his sheep at the foot of a mountain near a dark forest.
It was rather lonely for him all day, so he thought upon a plan by which he could get a little company and some excitement.
He rushed down towards the village calling out "Wolf, Wolf," and the villagers came out to meet him, and some of them stopped with him for a considerable time.
This pleased the boy so much that a few days afterwards he tried the same trick, and again the villagers came to his help.
But shortly after this a Wolf actually did come out from the forest, and began to worry the sheep, and the boy of course cried out "Wolf, Wolf," still louder than before.
But this time the villagers, who had been fooled twice before, thought the boy was again deceiving them, and nobody stirred to come to his help.
So the Wolf made a good meal off the boy's flock, and when the boy complained, the wise man of the village said:
"A liar will not be believed, even when he speaks the truth." "</i> </font>.
Start by loading the necessary libraries and resetting the default computational graph. For more details about the rnn packages, we suggest you to take a look at https://www.tensorflow.org/api_guides/python/contrib.rnn
End of explanation
def load_data(filename):
with open(filename) as f:
data = f.readlines()
data = [x.strip().lower() for x in data]
data = [data[i].split() for i in range(len(data))]
data = np.array(data)
data = np.reshape(data, [-1, ])
print(data)
return data
#Run the cell
train_file ='data/story.txt'
train_data = load_data(train_file)
print("Loaded training data...")
print(len(train_data))
Explanation: Next-word prediction task
Part 1: Data preparation
1.1. Loading data
Load and split the text of our story
End of explanation
def build_vocabulary(words):
count = collections.Counter(words).most_common()
dic= dict()
for word, _ in count:
dic[word] = len(dic)
reverse_dic= dict(zip(dic.values(), dic.keys()))
return dic, reverse_dic
Explanation: 1.2.Symbols encoding
The LSTM inputs can only be numbers. A way to convert words (symbols or any items) to numbers is to assign a unique integer to each word. This process is often based on frequency of occurrence for efficient coding purposes.
Here, we define a function to build an indexed word dictionary (word->number). The "build_vocabulary" function builds both:
Dictionary : used for encoding words to numbers for the LSTM inputs
Reverted dictionary : used for decoding the outputs of the LSTM into words (and punctuation).
For example, in the story above, we have 113 individual words. The "build_vocabulary" function builds a dictionary with the following entries ['the': 0], [',': 1], ['company': 85],...
End of explanation
dictionary, reverse_dictionary = build_vocabulary(train_data)
vocabulary_size= len(dictionary)
print "Dictionary size (Vocabulary size) = ", vocabulary_size
print("\n")
print("Dictionary : \n")
print(dictionary)
print("\n")
print("Reverted Dictionary : \n" )
print(reverse_dictionary)
Explanation: Run the cell below to display the vocabulary
End of explanation
def lstm_model(x, w, b, n_input, n_hidden):
# reshape to [1, n_input]
x = tf.reshape(x, [-1, n_input])
# Generate a n_input-element sequence of inputs
# (eg. [had] [a] [general] -> [20] [6] [33])
x = tf.split(x,n_input,1)
# 1-layer LSTM with n_hidden units.
rnn_cell = rnn.BasicLSTMCell(n_hidden)
#improvement
#rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)])
#rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)])
# generate prediction
outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
# there are n_input outputs but
# we only want the last output
return tf.matmul(outputs[-1], w['out']) + b['out']
Explanation: Part 2 : LSTM Model in TensorFlow
Since you have defined how the data will be modeled, you are now to develop an LSTM model to predict the word following a sequence of 3 words.
2.1. Model definition
Define a 2-layer LSTM model.
For this use the following classes from the tensorflow.contrib library:
rnn.BasicLSTMCell(number of hidden units)
rnn.static_rnn(rnn_cell, data, dtype=tf.float32)
rnn.MultiRNNCell(,)
You may need some tensorflow functions (https://www.tensorflow.org/api_docs/python/tf/) :
- tf.split
- tf.reshape
- ...
End of explanation
# Training Parameters
learning_rate = 0.001
epochs = 50000
display_step = 1000
n_input = 3
#For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell
n_hidden = 64
# tf Graph input
x = tf.placeholder("float", [None, n_input, 1])
y = tf.placeholder("float", [None, vocabulary_size])
# LSTM weights and biases
weights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))}
biases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) }
#build the model
pred = lstm_model(x, weights, biases,n_input,n_hidden)
Explanation: Training Parameters and constants
End of explanation
# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
# Model evaluation
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Define the Loss/Cost and optimizer
End of explanation
#run the cell
def test(sentence, session, verbose=False):
sentence = sentence.strip()
words = sentence.split(' ')
if len(words) != n_input:
print("sentence length should be equel to", n_input, "!")
try:
symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)]
keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1])
onehot_pred = session.run(pred, feed_dict={x: keys})
onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval())
words.append(reverse_dictionary[onehot_pred_index])
sentence = " ".join(words)
if verbose:
print(sentence)
return reverse_dictionary[onehot_pred_index]
except:
print " ".join(["Word", words[i - n_input], "not in dictionary"])
Explanation: Comment:
We decided to apply the softmax and calculate the cost at the same time. This lets us use softmax_cross_entropy_with_logits, which is more numerically stable in corner cases than applying the softmax and then calculating the cross entropy separately.
We give you here the Test Function
End of explanation
# Initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
start_time = time()
# Launch the graph
with tf.Session() as session:
session.run(init)
step = 0
offset = random.randint(0,n_input+1)
end_offset = n_input + 1
acc_total = 0
loss_total = 0
writer.add_graph(session.graph)
while step < epochs:
# Generate a minibatch. Add some randomness on selection process.
if offset > (len(train_data)-end_offset):
offset = random.randint(0, n_input+1)
symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ]
symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
symbols_out_onehot = np.zeros([len(dictionary)], dtype=float)
symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0
symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])
_, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \
feed_dict={x: symbols_in_keys, y: symbols_out_onehot})
loss_total += loss
acc_total += acc
if (step+1) % display_step == 0:
print("Iter= " + str(step+1) + ", Average Loss= " + \
"{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \
"{:.2f}%".format(100*acc_total/display_step))
acc_total = 0
loss_total = 0
symbols_in = [train_data[i] for i in range(offset, offset + n_input)]
symbols_out = train_data[offset + n_input]
symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]
print("%s - [%s] vs [%s]" % (symbols_in,symbols_out,symbols_out_pred))
step += 1
offset += (n_input+1)
print("Optimization Finished!")
print("Elapsed time: ", time() - start_time)
print("Run on command line.")
print("\ttensorboard --logdir=%s" % (logs_path))
print("Point your web browser to: http://localhost:6006/")
save_path = saver.save(session, "model.ckpt")
print("Model saved in file: %s" % save_path)
Explanation: Part 3 : LSTM Training
In the training process, at each epoch, 3 words are taken from the training data and encoded as integers to form the input vector. The training label is a one-hot vector encoding the word that comes after the 3 input words. Display the loss and the training accuracy every 1000 iterations. Save the model at the end of training in the lstm_model folder.
End of explanation
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Restore model weights from previously saved model
saver.restore(sess, "./model.ckpt")
print(test('get a little', sess))
print(test('nobody tried to', sess))
Explanation: Comment:
We created models with different numbers of layers, and we found that the best accuracy is achieved using only 2 layers. Using more or fewer layers, we achieve lower accuracy.
Part 4 : Test your model
3.1. Next word prediction
Load your model (using the model_saved variable given in the training session) and test the sentences :
- 'get a little'
- 'nobody tried to'
- Try with other sentences using words from the stroy's vocabulary.
End of explanation
#Your implementation goes here
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Restore model weights from previously saved model
saver.restore(sess, "./model.ckpt")
#a sentence is concluded when we find a dot.
fable = [random.choice(dictionary.keys()) for _ in range(3)]
n_sentences = fable.count('.')
offset = 0
while n_sentences < 5:
next_word = test(' '.join(fable[offset:offset+3]), sess)
fable.append(next_word)
if next_word == '.':
n_sentences += 1
offset+=1
print(' '.join(fable))
Explanation: Comment:
Here it looks like the RNN is working; in fact, it correctly predicts the next word.
We should note that in this case it is difficult to check whether the RNN is actually overfitting the training data.
3.2. More fun with the Fable Writer !
You will use the RNN/LSTM model learned in the previous question to create a
new story/fable.
For this you will choose 3 words from the dictionary which will start your
story and initialize your network. Using those 3 words the RNN will generate
the next word of the story. Using the last 3 words (the newly predicted one
and the last 2 from the input) you will use the network to predict the 5th
word of the story... and so on until your story is 5 sentences long.
Put a period at the end of your story.
To implement that, you will use the test function.
This is the original fable; we will compare the generated text against it to check for possible overfitting.
It was rather lonely for him all day, so he thought upon a plan by which he could get a little company and some excitement.
He rushed down towards the village calling out "Wolf, Wolf," and the villagers came out to meet him, and some of them stopped with him for a considerable time.
This pleased the boy so much that a few days afterwards he tried the same trick, and again the villagers came to his help.
But shortly after this a Wolf actually did come out from the forest, and began to worry the sheep, and the boy of course cried out "Wolf, Wolf," still louder than before.
But this time the villagers, who had been fooled twice before, thought the boy was again deceiving them, and nobody stirred to come to his help.
So the Wolf made a good meal off the boy's flock, and when the boy complained, the wise man of the village said:
"A liar will not be believed, even when he speaks the truth.
End of explanation
def load_data(filename):
with open(filename) as f:
data = f.readlines()
data = [x.strip().lower() for x in data]
data = [data[i].split() for i in range(len(data))]
data = np.array(data)
data = np.reshape(data, [-1, ])
return data
train_file ='data/story.txt'
train_data = load_data(train_file)
def build_vocabulary(words):
count = collections.Counter(words).most_common()
dic= dict()
for word, _ in count:
dic[word] = len(dic)
reverse_dic= dict(zip(dic.values(), dic.keys()))
return dic, reverse_dic
dictionary, reverse_dictionary = build_vocabulary(train_data)
vocabulary_size= len(dictionary)
import numpy as np
import collections # used to build the dictionary
import random
import time
from time import time
import pickle # may be used to save your model
import matplotlib.pyplot as plt
#Import Tensorflow and rnn
import tensorflow as tf
from tensorflow.contrib import rnn
def create_train_model(n_input = 3, n_layers = 2,verbose = False):
tf.reset_default_graph()
# Target log path
logs_path = 'lstm_words'
writer = tf.summary.FileWriter(logs_path)
def lstm_model(x, w, b, n_input, n_hidden,n_layers):
# reshape to [1, n_input]
x = tf.reshape(x, [-1, n_input])
# Generate a n_input-element sequence of inputs
# (eg. [had] [a] [general] -> [20] [6] [33])
x = tf.split(x,n_input,1)
rnn_layers = [rnn.BasicLSTMCell(n_hidden) for _ in range(n_layers)]
rnn_cell = rnn.MultiRNNCell(rnn_layers)
# generate prediction
outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
# there are n_input outputs but
# we only want the last output
return tf.matmul(outputs[-1], w['out']) + b['out']
# Training Parameters
learning_rate = 0.001
epochs = 50000
display_step = 1000
#For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell
n_hidden = 64
# tf Graph input
x = tf.placeholder("float", [None, n_input, 1])
y = tf.placeholder("float", [None, vocabulary_size])
# LSTM weights and biases
weights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))}
biases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) }
#build the model
pred = lstm_model(x, weights, biases,n_input,n_hidden,n_layers)
# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
# Model evaluation
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
start_time = time()
# Launch the graph
with tf.Session() as session:
session.run(init)
step = 0
offset = random.randint(0,n_input+1)
end_offset = n_input + 1
acc_total = 0
loss_total = 0
writer.add_graph(session.graph)
while step < epochs:
# Generate a minibatch. Add some randomness on selection process.
if offset > (len(train_data)-end_offset):
offset = random.randint(0, n_input+1)
symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ]
symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
symbols_out_onehot = np.zeros([len(dictionary)], dtype=float)
symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0
symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])
_, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \
feed_dict={x: symbols_in_keys, y: symbols_out_onehot})
loss_total += loss
acc_total += acc
if (step+1) % display_step == 0:
if verbose or step+1 == epochs: print("Iter= " + str(step+1) + ", Average Loss= " + \
"{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \
"{:.2f}%".format(100*acc_total/display_step))
acc_total = 0
loss_total = 0
symbols_in = [train_data[i] for i in range(offset, offset + n_input)]
symbols_out = train_data[offset + n_input]
symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]
if verbose: print("%s - [%s] vs [%s]" % (symbols_in,symbols_out,symbols_out_pred))
step += 1
offset += (n_input+1)
print("Optimization Finished!")
print("Elapsed time: ", time() - start_time)
print("Run on command line.")
print("\ttensorboard --logdir=%s" % (logs_path))
print("Point your web browser to: http://localhost:6006/")
save_path = saver.save(session, "model.ckpt")
print("Model saved in file: %s" % save_path)
#run the cell
def test(sentence, session, verbose=False):
sentence = sentence.strip()
words = sentence.split(' ')
if len(words) != n_input:
print("sentence length should be equel to", n_input, "!")
try:
symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)]
keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1])
onehot_pred = session.run(pred, feed_dict={x: keys})
onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval())
words.append(reverse_dictionary[onehot_pred_index])
sentence = " ".join(words)
if verbose:
print(sentence)
return reverse_dictionary[onehot_pred_index]
except:
print " ".join(["Word", words[i - n_input], "not in dictionary"])
#a sentence is concluded when we find a dot.
fable = [random.choice(dictionary.keys()) for _ in range(n_input)]
#print(dictionary)
#print(fable)
n_sentences = fable.count('.')
offset = 0
while n_sentences < 5 and len(fable) < 200:
next_word = test(' '.join(fable[offset:offset+n_input]), session)
fable.append(next_word)
if next_word == '.':
n_sentences += 1
offset+=1
print(' '.join(fable))
Explanation: Comment:
This is interesting: the sentences make some sense, but once we reach a period we see the same sentence repeated many times. This is probably due to overfitting, so we should look more deeply. The repeated sentence is different from the original one, but it is always the same; we think this is because the period always starts the same sentence. Maybe we could add more layers and see what happens.
End of explanation
create_train_model(n_input = 1, n_layers = 1)
create_train_model(n_input = 1, n_layers = 2)
create_train_model(n_input = 1, n_layers = 3)
Explanation: 3.3. Play with number of inputs
The number of input in our example is 3, see what happens when you use other number (1 and 5)
n_input = 1
End of explanation
create_train_model(n_input = 3, n_layers = 1)
create_train_model(n_input = 3, n_layers = 2)
create_train_model(n_input = 3, n_layers = 3)
Explanation: Comment:
Here we see that when the input size is 1 we obtain a bad model regardless of the number of layers; this is because we are basically predicting a word based only on the single preceding word. That is not enough to create a sentence that makes sense. Looking at the prediction accuracy, it is very low.
n_input = 3
End of explanation
create_train_model(n_input = 5, n_layers = 1)
create_train_model(n_input = 5, n_layers = 2)
create_train_model(n_input = 5, n_layers = 3)
Explanation: Comment:
Here we see some sentences that make sense, but there is a tendency to repeat sentences from the training fable. This is interesting, because during training the triples were chosen randomly, not sequentially. Somehow, the net memorized the training fable.
n_input = 5
End of explanation |
2,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tidy-harness
A tidy pandas.DataFrame with scikit-learn models, interactive bokeh visualizations, and jinja2 templates.
Usage
Example
Step1: More Examples
More examples can be found in the tests directory. Tap the Ⓣ key while in the Github interface to search quickly.
Install
For the meantime
Step2: Build & Run Tests
The tests require pytest and pytest-ipynb. | Python Code:
import harness
from harness import Harness
from pandas import Categorical
from sklearn import datasets, discriminant_analysis
iris = datasets.load_iris()
# Harness is just a dataframe
df = Harness(
data=iris['data'], index=Categorical(iris['target']),
estimator=discriminant_analysis.LinearDiscriminantAnalysis(),
feature_level=-1, # the feature level indicates an index
# in the dataframe. -1 is the last index.
)
# Fit the model with 50 random rows.
df.sample(50).fit()
# Transform the dataframe
transformed = df.transform()
transformed.set_index(
df.index
.rename_categories(iris['target_names'])
.rename('species'), append=True, inplace=True,
)
# Plot the dataframe using Bokeh charts.
with transformed.reset_index().DataSource(x=0, y=1) as source:
source.Scatter(color='species')
source.show()
Explanation: tidy-harness
A tidy pandas.DataFrame with scikit-learn models, interactive bokeh visualizations, and jinja2 templates.
Usage
Example: Modeling Fisher's 🌸 Data
End of explanation
%%script bash --bg
python setup.py develop
watchmedo tricks tricks.yaml
# Execute this cell to stop watching the files
%killbgscripts
Explanation: More Examples
More examples can be found in the tests directory. Tap the Ⓣ key while in the Github interface to search quickly.
Install
For the meantime:
bash
pip install git+https://github.com/tonyfast/tidy-harness
Background
harness initially responded to the need for scikit-learn models closer to a pandas.DataFrame. Since a DataFrame is Tidy Data the rows and columns can assist in tracking samples and features over many estimations. With this knowledge it would be easier to design a testing harness for data science.
The DataFrame has a powerful declarative syntax, consider the groupby and rolling apis. There is a modern tendency toward declarative and functional syntaxes in scientific computing and visualization. This is observed in altair, dask, and scikit-learn.
tidy-harness aims to provide a chain interface between pandas.DataFrame objects and other popular scientific computing libraries in the python ecosystem. The initial harness extensions :
attach a scikit-learn estimator to the dataframe.
attach a shared jinja2 environment to render narratives about the dataframes.
bokeh plotting methods with a contextmanager for interactive visualization development
Development
The development scripts can be run through this notebook.
Jupyter notebooks are used for all Python development in this project. The key features are:
watchdog file system watcher that converts notebooks to python scripts with nbconvert. Tests are not converted.
nbconvert with the --execute flag to run notebooks and fill out their output. The current goal is for the notebook to be viewable in a GitHub repo.
pytest-ipynb to run tests directly on the notebooks.
Making the python module
The script below:
Installs a develop copy of harness
Listens for file systems events to convert notebooks to python scripts.
End of explanation
%%script bash
jupyter nbconvert harness/tests/*.ipynb --execute --to notebook --inplace
py.test
Explanation: Build & Run Tests
The tests require pytest and pytest-ipynb.
End of explanation |
2,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian interpretation of medical tests
This notebook explores several problems related to interpreting the results of medical tests.
Copyright 2016 Allen Downey
MIT License
Step3: Medical tests
Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?
To answer this question, we need to know
Step4: Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!)
Step5: If you are curious, here's the nested dictionary that computes the likelihoods
Step6: And here's how we update the Test object with a positive outcome
Step9: The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
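As a quick sanity check, Bayes's theorem gives the same number by hand with the parameters above:
$$P(\mathrm{sick} \mid \mathrm{pos}) = \frac{p\,s}{p\,s + (1-p)\,t} = \frac{0.1 \times 0.9}{0.1 \times 0.9 + 0.9 \times 0.3} = \frac{0.09}{0.36} = 0.25$$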
Uncertainty about t
So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of t with certainty, but we have reason to believe that t is either 0.2 or 0.4 with equal probability.
Again, we would like to know the probability that a patient who tests positive actually has the disease. As we did with the Red Die problem, we will consider several scenarios
Step10: To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.
We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.
Here's how we create the MetaTest for the scenario we described
Step11: At the top level, there are two tests, with different values of t. Initially, they are equally likely.
When we update the MetaTest, it updates the embedded Test objects and then the MetaTest itself.
Step12: Here are the results.
Step14: Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
This MetaTest object represents what we should believe about t after seeing the test, as well as what we should believe about the probability that the patient is sick.
Marginal distributions
To compute the probability that the patient is sick, we have to compute the marginal probabilities of sick and notsick, averaging over the possible values of t. The following function computes this distribution
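For reference, the same number can be reached by hand: under $t=0.2$ the total probability of a positive test is $p s + (1-p)t = 0.27$, and under $t=0.4$ it is $0.45$, so the posterior weights on the two values of t are $3/8$ and $5/8$, while the conditional probabilities of being sick are $0.09/0.27 = 1/3$ and $0.09/0.45 = 1/5$. The weighted average is
$$P(\mathrm{sick} \mid \mathrm{pos}) = \frac{3}{8}\cdot\frac{1}{3} + \frac{5}{8}\cdot\frac{1}{5} = \frac{1}{8} + \frac{1}{8} = \frac{1}{4}$$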
Step15: Here's the posterior predictive distribution
Step19: After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
Two patients
Now suppose you test two patients and they both test positive. What is the probability that they are both sick?
To answer that, I define a few more functions to work with Metatests
Step20: MakeMetaTest makes a MetaTest object starting with a given PMF of t.
Marginal extracts the PMF of t from a MetaTest.
Conditional takes a specified value for t and returns the PMF of sick and notsick conditioned on t.
I'll test these functions using the same parameters from above
Step21: Here are the results
Step22: Same as before. Now we can extract the posterior distribution of t.
Step23: Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.
And we can extract the conditional distributions for the patient
Step24: Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions
Step25: At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.
Now, to compute the probability that both patients are sick, we have to know the distribution of t for both patients. And that depends on details of the scenario.
In Scenario A, the reason we are uncertain about t is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
So the value of t for each patient is an independent choice from pmf_t; that is, if we learn something about t for one patient, that tells us nothing about t for other patients.
So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.
To compute the probability that both patients are sick, we can convolve the two distributions.
Step26: Then we can compute the posterior marginal distribution of sick/notsick for the two patients
Step27: So in Scenario A the probability that both patients are sick is 1/16.
As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient
Step28: We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients
Step29: And here's a function that runs the simulation for a given number of iterations
Step30: As we increase iters, the probablity of (True, True) converges on 1/16, which is what we got from the analysis.
Good so far!
Scenario B
In Scenario B, we have reason to believe that t is the same for all patients, but we are not sure what it is. So each time we see a positive test, we get some information about t for all patients.
The first time we see a positive test, we do the same update as in Scenario A
Step31: And the marginal distribution of sick/notsick is the same
Step32: Now suppose the second patient arrives. We need a new MetaTest that contains the updated information about the test, but no information about the patient other than the prior probability of being sick, p
Step33: Now we can update this MetaTest with the result from the second test
Step34: This distribution contains updated information about the test, based on two positive outcomes, and updated information about a patient who has tested positive (once).
After seeing two patients with positive tests, the probability that t=0.4 has increased to 25/34, around 74%.
For either patient, the probability of being sick is given by the marginal distribution from metatest2
Step35: After two tests, the probability that the patient is sick is slightly lower than after one (4/17 is about 23.5%, compared to 25%). That's because the second positive test increases our belief that the false positive rate is high (t=0.4), which decreases our belief that either patient is sick.
Now, to compute the probability that both are sick, we can't just convolve the posterior marginal distribution with itself, as we did in Scenario A, because the selection of t is not independent for the two patients. Instead, we have to make a weighted mixture of conditional distributions.
If we know t=t1, we can compute the joint distribution for the two patients
Step36: If we know that t=t1, the probability of sicksick is 0.111. And for t=t2
Step37: If we know that t=t2, the probability of sicksick is 0.04.
The overall probability of sicksick is the weighted average of these probabilities
Step38: 1/17 is about 0.0588, slightly smaller than in Scenario A (1/16, which is about 0.0667).
To compute the probabilities for all four outcomes, I'll make a Metapmf that contains the two conditional distributions.
Step39: And finally we can use MakeMixture to compute the weighted averages of the posterior probabilities
Step40: To confirm that this result is correct, I'll use the simuation again with a different generator
Step41: The difference between Scenario A and Scenario B is the line I commented out. In Scenario B, we generate t once and it applies to both patients.
Step42: As iters increases, the results from the simulation converge on 1/17.
Summary so far
In summary
Step43: And here's the simulator that uses the generator to estimate the probability that two patients who test positive are both sick.
Step44: As we saw before, this probability converges on $1/16$.
Step45: Now here's a generator that generates pairs of patients in Scenario C. The difference is that for each pair we check the outcome of the tests; if they are not both positive, we loop back and try again
Step46: When we run it, it seems like the probability is still 1/16
Step47: If you examine the code, you see that the conditional in generate_pair_C makes no difference because it is redundant with the conditional in run_simulation. In Scenarios A and C, we filter out pairs if they are not both positive; it doesn't matter whether the filtering happens in the generator or the simulator.
In fact, Scenarios A and C are identical. In both scenarios, when we see a patient with a positive test, we learn something about the patients (more likely to be sick) and something about the particular test applied to the patients (more likely to generate false positives).
This is similar to what we saw in the Red Die problem. In Scenario C, the reddish die is more likely to produce a red outcome, so a red outcome provides evidence that we rolled the reddish die.
However, that is not the case with Scenario D.
Scenario D
As a reminder, Scenario D is similar to B
Step48: And here's a simulator that counts the fraction of positive tests that turn out to be sick
Step49: When we run the simulation, it doesn't look like it converges to 1/4 as it does in the other three scenarios.
Step50: So how can we analyze this scenario?
The key is to realize that, as in the Red Dice problem, if we roll until we get red, we don't learn anything about the die we rolled, and in this case, if we generate pairs until we get a positive test, we don't learn anything about t. The likelihood of the data (a positive test) is 1, regardless of t.
We can compute the probability that the patient is sick by creating a MetaTest and updating only the lower level (the Test objects) but not the upper level (the distribution of t).
Step51: After the update, the marginal distribution of t is unchanged
Step52: But the conditional probabilities have been updated
Step53: We can use MakeMixture to compute the weighted average of the conditional distributions.
Step54: So in Scenario D, a patient who tests positive has a probability of 4/15 of being sick, which is about 26.7%, and consistent with the simulation.
That's a little higher than in the other three Scenarios, because we have less reason to think that t is high.
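By hand, with the weights on t left at their prior values of 1/2 each:
$$P(\mathrm{sick} \mid \mathrm{pos}) = \frac{1}{2}\cdot\frac{1}{3} + \frac{1}{2}\cdot\frac{1}{5} = \frac{5 + 3}{30} = \frac{4}{15}$$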
Scenario D, two patients
Now let's see what happens with two patients. Here's a generator that generates pairs of patients
Step55: And here's what we get when we run the simulation
Step56: It looks like the probability that both patients are sick is higher than 1/16.
We can compute the result exactly using the posterior distribution and the same method we used in Scenario B, computing the mixture of two conjunctions
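By hand, with the weights on t still 1/2 each, the mixture of the two conjunctions works out to
$$P(\mathrm{both\ sick}) = \frac{1}{2}\cdot\frac{1}{9} + \frac{1}{2}\cdot\frac{1}{25} = \frac{25 + 9}{450} = \frac{17}{225}$$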
Step57: Then we'll make a weighted mixture of the conjunctions
Step58: In Scenario D, the probability that both patients are sick is 17/225, or about 0.0755, which is consistent with the simulation and, again, a little higher than in the other scenarios.
In summary
Step59: Scenarios C and D
In Scenario B, I assumed that we see all patients regardless of whether they are sick or not, test positive or not.
In that case, when we see a positive test, it provides evidence that the false positive rate is high. As a result, as we see more patients, we get more and more confident about the value of t.
I'll demonstrate this with a simulation. Here are the usual parameters
Step60: And here's a generator that simulates patients for given parameters
Step61: Now we can simulate a doctor who sees 100 patients and updates metatest each time.
Step62: If t is actually 0.2, the doctor eventually becomes convinced that t=0.2
Step63: And if t is actually 0.4, the doctor eventually becomes convinced that t=0.4
Step64: So far, so good.
But what if the doctor is a specialist who only sees patients after they have tested positive?
Here's a generator that simulates this scenario.
Step65: Now if the doctor applies the same logic as before, updating their belief about the test each time they see a positive test, they are quickly convinced that t is high, regardless of the actual value
Step66: So that's not good.
We have to figure out how to update our belief about t in this case. I'll define r as the referral rate for patients who test negative. If r=1, we see all patients, as in Scenarios A and B.
If r=0, we only see patients who test positive.
If we know p, s, t, and r, we can compute the probability of seeing a patient with a positive test
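One way to write this probability, under the assumption that patients who test negative are referred with probability r, is
$$P(\mathrm{pos\ seen} \mid t) = \frac{p\,s + (1-p)\,t}{p\,s + (1-p)\,t + r\,[\,p\,(1-s) + (1-p)(1-t)\,]}$$
which reduces to the overall positive rate when r=1 and to 1 (carrying no information about t) when r=0.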
Step67: Here are the probabilities of seeing a patient with a positive test for the two values of t
Step68: Since these probabilities are the likelihood of the data, we can use them to update our belief about t. Here's what we get with r=1.
Step69: And that's consistent with what we saw in Scenarios A and B.
But when r=0, we only see patients with a positive test. The probability of the data is 1, regardless of t, so the data have no effect on our belief about t.
Step70: To compute the probability that the patient is sick, we can make a MetaTest and update the Test objects it contains, but we don't update the top level of the hierarchy.
Step71: Now we can generate the predictive distribution as usual
Step72: To compute the probability that two patients who test positive are sick, we have to deal with two cases again.
Scenario C
If the value of t is independent for all patients, we just compute the convolution of the predictive distribution with itself.
Step73: Scenario D
Or, if we think the value of t is the same for all patients, we have to use the same technique we used in Scenario B. | Python Code:
from __future__ import print_function, division
from thinkbayes2 import Pmf, Suite
from fractions import Fraction
Explanation: Bayesian interpretation of medical tests
This notebooks explores several problems related to interpreting the results of medical tests.
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
class Test(Suite):
Represents beliefs about a patient based on a medical test.
def __init__(self, p, s, t, label='Test'):
# initialize the prior probabilities
d = dict(sick=p, notsick=1-p)
super(Test, self).__init__(d, label)
# store the parameters
self.p = p
self.s = s
self.t = t
# make a nested dictionary to compute likelihoods
self.likelihood = dict(pos=dict(sick=s, notsick=t),
neg=dict(sick=1-s, notsick=1-t))
def Likelihood(self, data, hypo):
data: 'pos' or 'neg'
hypo: 'sick' or 'notsick'
return self.likelihood[data][hypo]
Explanation: Medical tests
Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?
To answer this question, we need to know:
The prevalence of the disease in the population the patient is from. Let's assume the patient is identified as a member of a population where the known prevalence is p.
The sensitivity of the test, s, which is the probability of a positive test if the patient is sick.
The false positive rate of the test, t, which is the probability of a positive test if the patient is not sick.
Given these parameters, we can compute the probability that the patient is sick, given a positive test.
Test class
To do that, I'll define a Test class that extends Suite, so it inherits Update and provides Likelihood.
The instance variables of Test are:
p, s, and t: Copies of the parameters.
d: a dictionary that maps from hypotheses to their probabilities. The hypotheses are the strings sick and notsick.
likelihood: a dictionary that encodes the likelihood of the possible data values pos and neg under the hypotheses.
End of explanation
p = Fraction(1, 10) # prevalence
s = Fraction(9, 10) # sensitivity
t = Fraction(3, 10) # false positive rate
test = Test(p, s, t)
test.Print()
Explanation: Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!):
End of explanation
test.likelihood
Explanation: If you are curious, here's the nested dictionary that computes the likelihoods:
End of explanation
test.Update('pos')
test.Print()
Explanation: And here's how we update the Test object with a positive outcome:
End of explanation
class MetaTest(Suite):
Represents a set of tests with different values of `t`.
def Likelihood(self, data, hypo):
data: 'pos' or 'neg'
hypo: Test object
# the return value from `Update` is the total probability of the
# data for a hypothetical value of `t`
return hypo.Update(data)
Explanation: The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
Uncertainty about t
So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of t with certainty, but we have reason to believe that t is either 0.2 or 0.4 with equal probability.
Again, we would like to know the probability that a patient who tests positive actually has the disease. As we did with the Red Die problem, we will consider several scenarios:
Scenario A: The patients are drawn at random from the relevant population, and the reason we are uncertain about t is that either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
Scenario B: As in Scenario A, the patients are drawn at random from the relevant population, but the reason we are uncertain about t is that previous studies of the test have been contradictory. That is, there is only one version of the test, and we have reason to believe that t is the same for all groups, but we are not sure what the correct value of t is.
Scenario C: As in Scenario A, there are two versions of the test or two groups of people. But now the patients are being filtered so we only see the patients who tested positive and we don't know how many patients tested negative. For example, suppose you are a specialist and patients are only referred to you after they test positive.
Scenario D: As in Scenario B, we have reason to think that t is the same for all patients, and as in Scenario C, we only see patients who test positive and don't know how many tested negative.
Scenario A
We can represent this scenario with a hierarchical model, where the levels of the hierarchy are:
At the top level, the possible values of t and their probabilities.
At the bottom level, the probability that the patient is sick or not, conditioned on t.
To represent the hierarchy, I'll define a MetaTest, which is a Suite that contains Test objects with different values of t as hypotheses.
End of explanation
q = Fraction(1, 2)
t1 = Fraction(2, 10)
t2 = Fraction(4, 10)
test1 = Test(p, s, t1, 'Test(t=0.2)')
test2 = Test(p, s, t2, 'Test(t=0.4)')
metatest = MetaTest({test1:q, test2:1-q})
metatest.Print()
Explanation: To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.
We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.
Here's how we create the MetaTest for the scenario we described:
End of explanation
metatest.Update('pos')
Explanation: At the top level, there are two tests, with different values of t. Initially, they are equally likely.
When we update the MetaTest, it updates the embedded Test objects and then the MetaTest itself.
End of explanation
metatest.Print()
Explanation: Here are the results.
End of explanation
def MakeMixture(metapmf, label='mix'):
Make a mixture distribution.
Args:
metapmf: Pmf that maps from Pmfs to probs.
label: string label for the new Pmf.
Returns: Pmf object.
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for x, p2 in pmf.Items():
mix.Incr(x, p1 * p2)
return mix
Explanation: Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
This MetaTest object represents what we should believe about t after seeing the test, as well as what we should believe about the probability that the patient is sick.
Marginal distributions
To compute the probability that the patient is sick, we have to compute the marginal probabilities of sick and notsick, averaging over the possible values of t. The following function computes this distribution:
End of explanation
predictive = MakeMixture(metatest)
predictive.Print()
Explanation: Here's the posterior predictive distribution:
End of explanation
def MakeMetaTest(p, s, pmf_t):
Makes a MetaTest object with the given parameters.
p: prevalence
s: sensitivity
pmf_t: Pmf of possible values for `t`
tests = {}
for t, q in pmf_t.Items():
label = 'Test(t=%s)' % str(t)
tests[Test(p, s, t, label)] = q
return MetaTest(tests)
def Marginal(metatest):
Extracts the marginal distribution of t.
marginal = Pmf()
for test, prob in metatest.Items():
marginal[test.t] = prob
return marginal
def Conditional(metatest, t):
Extracts the distribution of sick/notsick conditioned on t.
for test, prob in metatest.Items():
if test.t == t:
return test
Explanation: After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
Two patients
Now suppose you test two patients and they both test positive. What is the probability that they are both sick?
To answer that, I define a few more functions to work with Metatests:
End of explanation
pmf_t = Pmf({t1:q, t2:1-q})
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Print()
Explanation: MakeMetaTest makes a MetaTest object starting with a given PMF of t.
Marginal extracts the PMF of t from a MetaTest.
Conditional takes a specified value for t and returns the PMF of sick and notsick conditioned on t.
I'll test these functions using the same parameters from above:
End of explanation
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Update('pos')
metatest.Print()
Explanation: Here are the results
End of explanation
Marginal(metatest).Print()
Explanation: Same as before. Now we can extract the posterior distribution of t.
End of explanation
cond1 = Conditional(metatest, t1)
cond1.Print()
cond2 = Conditional(metatest, t2)
cond2.Print()
Explanation: Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.
And we can extract the conditional distributions for the patient:
End of explanation
MakeMixture(metatest).Print()
Explanation: Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions:
End of explanation
convolution = metatest + metatest
convolution.Print()
Explanation: At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.
Now, to compute the probability that both patients are sick, we have to know the distribution of t for both patients. And that depends on details of the scenario.
In Scenario A, the reason we are uncertain about t is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
So the value of t for each patient is an independent choice from pmf_t; that is, if we learn something about t for one patient, that tells us nothing about t for other patients.
So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.
To compute the probability that both patients are sick, we can convolve the two distributions.
End of explanation
marginal = MakeMixture(metatest+metatest)
marginal.Print()
Explanation: Then we can compute the posterior marginal distribution of sick/notsick for the two patients:
End of explanation
marginal = MakeMixture(metatest) + MakeMixture(metatest)
marginal.Print()
Explanation: So in Scenario A the probability that both patients are sick is 1/16.
As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient:
End of explanation
from random import random
def flip(p):
return random() < p
def generate_pair_A(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
Explanation: We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients:
End of explanation
def run_simulation(generator, iters=100000):
pmf_t = Pmf([0.2, 0.4])
pair_iterator = generator(0.1, 0.9, pmf_t)
outcomes = Pmf()
for i in range(iters):
test1, test2, sick1, sick2 = next(pair_iterator)
if test1 and test2:
outcomes[sick1, sick2] += 1
outcomes.Normalize()
return outcomes
outcomes = run_simulation(generate_pair_A)
outcomes.Print()
Explanation: And here's a function that runs the simulation for a given number of iterations:
End of explanation
metatest1 = MakeMetaTest(p, s, pmf_t)
metatest1.Update('pos')
metatest1.Print()
Explanation: As we increase iters, the probability of (True, True) converges on 1/16, which is what we got from the analysis.
Good so far!
Scenario B
In Scenario B, we have reason to believe the t is the same for all patients, but we are not sure what it is. So each time we see a positive test, we get some information about t for all patients.
The first time we see positive test we do the same update as in Scenario A:
End of explanation
marginal = MakeMixture(metatest1)
marginal.Print()
Explanation: And the marginal distribution of sick/notsick is the same:
End of explanation
metatest2 = MakeMetaTest(p, s, Marginal(metatest1))
metatest2.Print()
Explanation: Now suppose the second patient arrives. We need a new MetaTest that contains the updated information about the test, but no information about the patient other than the prior probability of being sick, p:
End of explanation
metatest2.Update('pos')
metatest2.Print()
Explanation: Now we can update this MetaTest with the result from the second test:
End of explanation
predictive = MakeMixture(metatest2)
predictive.Print()
Explanation: This distribution contains updated information about the test, based on two positive outcomes, and updated information about a patient who has tested positive (once).
After seeing two patients with positive tests, the probability that t=0.4 has increased to 25/34, around 74%.
For either patient, the probability of being sick is given by the marginal distribution from metatest2:
End of explanation
cond_t1 = Conditional(metatest2, t1)
conjunction_t1 = cond_t1 + cond_t1
conjunction_t1.Print()
Explanation: After two tests, the probability that the patient is sick is slightly lower than after one (4/17 is about 23.5%, compared to 25%). That's because the second positive test increases our belief that the false positive rate is high (t=0.4), which decreases our belief that either patient is sick.
Now, to compute the probability that both are sick, we can't just convolve the posterior marginal distribution with itself, as we did in Scenario A, because the selection of t is not independent for the two patients. Instead, we have to make a weighted mixture of conditional distributions.
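In other words, the next few cells build up a mixture over $t$ (the law of total probability):
$$P(\mathrm{sick},\mathrm{sick} \mid \mathrm{pos},\mathrm{pos}) = \sum_{t} P(t \mid \mathrm{pos},\mathrm{pos})\,\bigl[P(\mathrm{sick} \mid \mathrm{pos}, t)\bigr]^2$$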
If we know t=t1, we can compute the joint distribution for the two patients:
End of explanation
cond_t2 = Conditional(metatest2, t2)
conjunction_t2 = cond_t2 + cond_t2
conjunction_t2.Print()
Explanation: If we know that t=t1, the probability of sicksick is 0.111. And for t=t2:
End of explanation
posterior_t = Marginal(metatest2)
posterior_t[t1] * conjunction_t1['sicksick'] + posterior_t[t2] * conjunction_t2['sicksick']
Explanation: If we know that t=t2, the probability of sicksick is 0.04.
The overall probability of sicksick is the weighted average of these probabilities:
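Numerically, with $P(t{=}0.2 \mid \mathrm{pos},\mathrm{pos}) = 9/34$ and $P(t{=}0.4 \mid \mathrm{pos},\mathrm{pos}) = 25/34$:
$$\frac{9}{34}\cdot\frac{1}{9} + \frac{25}{34}\cdot\frac{1}{25} = \frac{1}{34} + \frac{1}{34} = \frac{1}{17}$$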
End of explanation
metapmf = Pmf()
for t, prob in Marginal(metatest2).Items():
cond = Conditional(metatest2, t)
conjunction = cond + cond
metapmf[conjunction] = prob
metapmf.Print()
Explanation: 1/17 is about 0.0588, slightly smaller than in Scenario A (1/16, which is about 0.0625).
To compute the probabilities for all four outcomes, I'll make a Metapmf that contains the two conditional distributions.
End of explanation
predictive = MakeMixture(metapmf)
predictive.Print()
Explanation: And finally we can use MakeMixture to compute the weighted averages of the posterior probabilities:
End of explanation
def generate_pair_B(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
# Here's the difference
# t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
Explanation: To confirm that this result is correct, I'll use the simuation again with a different generator:
End of explanation
outcomes = run_simulation(generate_pair_B)
outcomes.Print()
Explanation: The difference between Scenario A and Scenario B is the line I commented out. In Scenario B, we generate t once and it applies to both patients.
End of explanation
def generate_pair_A(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
Explanation: As iters increases, the results from the simulation converge on 1/17.
Summary so far
In summary:
P(sick|pos) P(sicksick|pospos)
Scenario A 1/4 = 25% 1/16 = 6.25%
Scenario B 1/4 = 25% 1/17 ~= 5.88%
If we are only interested in one patient at a time, Scenarios A and B are the same. But for collections of patients, they yield different probabilities.
A real scenario might combine elements of A and B; that is, the false positive rate might be different for different people, and we might have some uncertainty about what it is. In that case, the most accurate probability for two patients might be anywhere between 1/16 and 1/17.
Scenario C
Scenario C is similar to Scenario A: we believe that the false positive rate t might be different for different people, or for different versions of the test. The difference is that in Scenario A we see all patients, sick or not, positive test or not.
In Scenario C, we only see patients after they have tested positive, and we don't know how many tested negative. For example, if you are a specialist and patients are referred to you only if they test positive, Scenario C might be a good model of your situation.
Before I analyze this scenario, I'll start with a simulation. As a reminder, here's a generator that generates pairs of patients in Scenario A:
End of explanation
def run_simulation(generator, iters=100000):
pmf_t = Pmf([0.2, 0.4])
pair_iterator = generator(0.1, 0.9, pmf_t)
outcomes = Pmf()
for i in range(iters):
test1, test2, sick1, sick2 = next(pair_iterator)
if test1 and test2:
outcomes[sick1, sick2] += 1
outcomes.Normalize()
return outcomes
Explanation: And here's the simulator that uses the generator to estimate the probability that two patients who test positive are both sick.
End of explanation
outcomes = run_simulation(generate_pair_A)
outcomes.Print()
Explanation: As we saw before, this probability converges on $1/16$.
End of explanation
def generate_pair_C(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
# here is the difference
if test1 and test2:
yield test1, test2, sick1, sick2
Explanation: Now here's a generator that generates pairs of patients in Scenario C. The difference is that for each pair we check the outcome of the tests; if they are not both positive, we loop back and try again:
End of explanation
outcomes = run_simulation(generate_pair_C)
outcomes.Print()
Explanation: When we run it, it seems like the probability is still 1/16:
End of explanation
def generate_patient_D(p, s, pmf_t):
while True:
# choose t
t = pmf_t.Random()
# generate patients until positive test
while True:
sick = flip(p)
test = flip(s) if sick else flip(t)
if test:
yield test, sick
break
Explanation: If you examine the code, you see that the conditional in generate_pair_C makes no difference because it is redundant with the conditional in run_simulation. In Scenarios A and C, we filter out pairs if they are not both positive; it doesn't matter whether the filtering happens in the generator or the simulator.
In fact, Scenarios A and C are identical. In both scenarios, when we see a patient with a positive test, we learn something about the patients (more likely to be sick) and something about the particular test applied to the patients (more likely to generate false positives).
This is similar to what we saw in the Red Die problem. In Scenario C, the reddish die is more likely to produce a red outcome, so a red outcome provides evidence that we rolled the reddish die.
However, that is not the case with Scenario D.
Scenario D
As a reminder, Scenario D is similar to B: we have reason to think that t is either 0.2 or 0.4 for everyone. The difference in Scenario D is that we only see patients if they test positive.
Here's a generator that generates single patients:
End of explanation
def run_single_simulation(generator, iters=100000):
pmf_t = Pmf([0.2, 0.4])
iterator = generator(0.1, 0.9, pmf_t)
outcomes = Pmf()
for i in range(iters):
test, sick = next(iterator)
if test:
outcomes[sick] += 1
outcomes.Normalize()
return outcomes
Explanation: And here's a simulator that counts the fraction of positive tests that turn out to be sick:
End of explanation
outcomes = run_single_simulation(generate_patient_D)
outcomes.Print()
Explanation: When we run the simulation, it doesn't look like it converges to 1/4 as it does in the other three scenarios.
End of explanation
metatest = MakeMetaTest(p, s, pmf_t)
for hypo in metatest:
hypo.Update('pos')
Explanation: So how can we analyze this scenario?
The key is to realize that, as in the Red Dice problem, if we roll until we get red, we don't learn anything about the die we rolled, and in this case, if we generate pairs until we get a positive test, we don't learn anything about t. The likelihood of the data (a positive test) is 1, regardless of t.
We can compute the probability that the patient is sick by creating a MetaTest and updating only the lower level (the Test objects) but not the upper level (the distribution of t).
End of explanation
Marginal(metatest).Print()
Explanation: After the update, the marginal distribution of t is unchanged:
End of explanation
Conditional(metatest, t1).Print()
Conditional(metatest, t2).Print()
Explanation: But the conditional probabilities have been updated:
End of explanation
MakeMixture(metatest).Print()
Explanation: We can use MakeMixture to compute the weighted average of the conditional distributions.
End of explanation
def generate_pair_D(p, s, pmf_t):
while True:
t = pmf_t.Random()
while True:
sick1, sick2 = flip(p), flip(p)
test1 = flip(s) if sick1 else flip(t)
test2 = flip(s) if sick2 else flip(t)
if test1 and test2:
yield test1, test2, sick1, sick2
break
Explanation: So in Scenario D, a patient who tests positive has a probability of 4/15 of being sick, which is about 26.7%, and consistent with the simulation.
That's a little higher than in the other three Scenarios, because we have less reason to think that t is high.
Scenario D, two patients
Now let's see what happens with two patients. Here's a generator that generates pairs of patients:
End of explanation
outcomes = run_simulation(generate_pair_D, iters=1000000)
outcomes.Print()
Explanation: And here's what we get when we run the simulation:
End of explanation
def MixConjunctions(metatest):
metapmf = Pmf()
for t, prob in Marginal(metatest).Items():
cond = Conditional(metatest, t)
conjunction = cond + cond
metapmf[conjunction] = prob
return MakeMixture(metapmf)
Explanation: It looks like the probability that both patients are sick is higher than 1/16.
We can compute the result exactly using the posterior distribution and the same method we used in Scenario B, computing the mixture of two conjunctions:
End of explanation
MixConjunctions(metatest).Print()
Explanation: Then we'll make a weighted mixture of the conjunctions:
End of explanation
def scenario_a(p, s, pmf_t):
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Update('pos')
single = MakeMixture(metatest)
pair = single + single
return single, pair
single, pair = scenario_a(p, s, pmf_t)
single.Print()
pair.Print()
def scenario_b(p, s, pmf_t):
metatest1 = MakeMetaTest(p, s, pmf_t)
metatest1.Update('pos')
single = MakeMixture(metatest1)
metatest2 = MakeMetaTest(p, s, Marginal(metatest1))
metatest2.Update('pos')
pair = MixConjunctions(metatest2)
return single, pair
single, pair = scenario_b(p, s, pmf_t)
single.Print()
pair.Print()
def scenario_d(p, s, pmf_t):
metatest = MakeMetaTest(p, s, pmf_t)
for hypo in metatest:
hypo.Update('pos')
single = MakeMixture(metatest)
pair = MixConjunctions(metatest)
return single, pair
single, pair = scenario_d(p, s, pmf_t)
single.Print()
pair.Print()
from sympy import symbols
p, s, q, t1, t2 = symbols(['p', 's', 'q', 't1', 't2'])
pmf_t = Pmf({t1:q, t2:1-q})
def PrintSymSuite(suite):
for hypo, prob in suite.Items():
print(hypo, prob.simplify())
single, pair = scenario_b(p, s, pmf_t)
PrintSymSuite(single)
PrintSymSuite(pair)
Explanation: In Scenario D, the probability that both patients are sick is 17/225, or about 0.0755, which is consistent with the simulation and, again, a little higher than in the other scenarios.
In summary:
P(sick|pos) P(sicksick|pospos)
Scenario A 1/4 = 25% 1/16 = 6.25%
Scenario B 1/4 = 25% 1/17 ~= 5.88%
Scenario C 1/4 = 25% 1/16 = 6.25%
Scenario D 4/15 ~= 26.7% 17/225 ~= 7.55%
End of explanation
p = 0.1
s = 0.9
q = 0.5
t1 = 0.2
t2 = 0.4
pmf_t = Pmf({t1:q, t2:1-q})
Explanation: Scenarios C and D
In Scenario B, I assumed that we see all patients regardless of whether they are sick or not, test positive or not.
In that case, when we see a positive test, it provides evidence that the false positive rate is high. As a result, as we see more patients, we get more and more confident about the value of t.
I'll demonstrate this with a simulation. Here are the usual parameters:
End of explanation
def generate_patient_all(p, s, t):
while True:
sick = flip(p)
test = flip(s) if sick else flip(t)
yield 'pos' if test else 'neg'
Explanation: And here's a generator that simulates patients for given parameters:
End of explanation
def run_simulation(p, s, pmf_t, iterator):
metatest = MakeMetaTest(p, s, pmf_t)
for i in range(100):
data = next(iterator)
metatest = MakeMetaTest(p, s, Marginal(metatest))
metatest.Update(data)
Marginal(metatest).Print()
Explanation: Now we can simulate a doctor who sees 100 patients and updates metatest each time.
End of explanation
t = 0.2
iterator = generate_patient_all(p, s, t)
run_simulation(p, s, pmf_t, iterator)
Explanation: If t is actually 0.2, the doctor eventually becomes convinced that t=0.2
End of explanation
t = 0.4
iterator = generate_patient_all(p, s, t)
run_simulation(p, s, pmf_t, iterator)
Explanation: And if t is actually 0.4, the doctor eventually becomes convinced that t=0.4
End of explanation
def generate_patient_posonly(p, s, t):
while True:
sick = flip(p)
test = flip(s) if sick else flip(t)
if test:
yield 'pos'
Explanation: So far, so good.
But what if the doctor is a specialist who only sees patients after they have tested positive?
Here's a generator that simulates this scenario.
End of explanation
t = 0.2
iterator = generate_patient_posonly(p, s, t)
run_simulation(p, s, pmf_t, iterator)
t = 0.4
iterator = generate_patient_posonly(p, s, t)
run_simulation(p, s, pmf_t, iterator)
Explanation: Now if the doctor applies the same logic as before, updating their belief about the test each time they see a positive test, they are quickly convinced that t is high, regardless of the actual value
End of explanation
def prob_pos(p, s, t, r):
yes = p*s + (1-p) * t
no = p * (1-s) + (1-p) * (1-t)
return yes / (yes + no * r)
Explanation: So that's not good.
We have to figure out how to update our belief about t in this case. I'll define r as the referral rate for patients who test negative. If r=1, we see all patients, as in Scenarios A and B.
If r=0, we only see patients who test positive.
If we know p, s, t, and r, we can compute the probability of seeing a patient with a positive test:
End of explanation
p = Fraction(1, 10)
s = Fraction(9, 10)
t = Fraction(3, 10)
q = Fraction(1, 2)
t1 = Fraction(2, 10)
t2 = Fraction(4, 10)
pmf_t = Pmf({t1:q, t2:1-q})
pp1 = prob_pos(p, s, t1, 1)
pp2 = prob_pos(p, s, t2, 1)
pp1, pp2
Explanation: Here are the probabilities of seeing a patient with a positive test for the two values of t:
End of explanation
pmf_t = Pmf({t1:q, t2:1-q})
pmf_t[t1] *= prob_pos(p, s, t1, r=1)
pmf_t[t2] *= prob_pos(p, s, t2, r=1)
pmf_t.Normalize()
pmf_t.Print()
Explanation: Since these probabilities are the likelihood of the data, we can use them to update our belief about t. Here's what we get with r=1.
End of explanation
pmf_t = Pmf({t1:q, t2:1-q})
pmf_t[t1] *= prob_pos(p, s, t1, 0)
pmf_t[t2] *= prob_pos(p, s, t2, 0)
pmf_t.Normalize()
pmf_t.Print()
Explanation: And that's consistent with what we saw in Scenarios A and B.
But when r=0, we only see patients with a positive test. The probability of the data is 1, regardless of t, so the data have no effect on our belief about t.
End of explanation
metatest = MakeMetaTest(p, s, pmf_t)
for test in metatest:
test.Update('pos')
metatest.Print()
Explanation: To compute the probability that the patient is sick, we can make a MetaTest and update the Test objects it contains, but we don't update the top level of the hierarchy.
End of explanation
predictive = MakeMixture(metatest)
predictive.Print()
Explanation: Now we can generate the predictive distribution as usual:
End of explanation
conjunction = predictive + predictive
conjunction.Print()
Explanation: To compute the probability that two patients who test positive are sick, we have to deal with two cases again.
Scenario C
If the value of t is independent for all patients, we just compute the convolution of the predictive distribution with itself.
End of explanation
metapmf = Pmf()
for t, prob in Marginal(metatest).Items():
cond = Conditional(metatest, t)
conjunction = cond + cond
metapmf[conjunction] = prob
metapmf.Print()
MakeMixture(metapmf).Print()
16/225, 17/225
Explanation: Scenario D
Or, if we think the value of t is the same for all patients, we have to use the same technique we used in Scenario B.
End of explanation |
2,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
text_Set = set(text)
#sorted_vocab = sorted(text_counts, key=text_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(text_Set)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_dict = {'.':'||Period||', ',':'||Comma||', '"':'||Quotation_Mark||', ';':'||Semicolon||',
'!':'||Exclamation_mark||', '?':'||Question_mark||', '(':'||Left_Parentheses||', ')':'||Right_Parentheses||',
'--':'||Dash||', '\n':'||Return||'}
return token_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32,[None,None],name='input')
targets = tf.placeholder(tf.int32,[None,None],name='targets')
learning_rate = tf.placeholder(tf.float32,name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
num_layers = 3
# TODO: Implement Function
singlelstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
Cell = tf.contrib.rnn.MultiRNNCell([singlelstm]*num_layers)
#initalize Cell State
initStateCell = Cell.zero_state(batch_size,tf.float32)
initStateCell = tf.identity(initStateCell,'initial_state')
return Cell, initStateCell
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedSeq = tf.contrib.layers.embed_sequence(ids=input_data,embed_dim=embed_dim,vocab_size=vocab_size)
return embedSeq
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs,final_state = tf.nn.dynamic_rnn(cell,inputs,dtype=tf.float32)
final_state = tf.identity(final_state,'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
inputs = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
num_batch = int(len(int_text)/(batch_size*seq_length))
in_data = np.array(int_text[:num_batch*batch_size*seq_length])
out_data = np.append(int_text[1:num_batch*batch_size*seq_length],int_text[0])
in_batches = np.split(in_data.reshape(batch_size,-1),num_batch,1)
out_batches = np.split(out_data.reshape(batch_size,-1),num_batch,1)
batchPair = np.array(list(zip(in_batches,out_batches)))
return batchPair
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
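As a quick sanity check (a small sketch, not part of the original project code; it assumes the get_batches implementation and the numpy import from the cells above), the toy example can be passed straight to the function:
```
# Reproduce the toy example from the text above.
toy = get_batches(list(range(1, 21)), 3, 2)
print(toy.shape)   # (3, 2, 3, 2): (number of batches, 2, batch size, sequence length)
print(toy[0][0])   # first batch of inputs: rows [1 2], [7 8], [13 14]
```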
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 32
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 299
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.0016
# Show stats for every n number of batches
show_every_n_batches = 200
import time
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
lastBatchesTime = None
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
if lastBatchesTime == None:
lastBatchesTime = time.time()
else:
print("time elaps:",str(time.time()-lastBatchesTime))
lastBatchesTime = time.time()
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
tensorsSaved = (loaded_graph.get_tensor_by_name('input:0'),
loaded_graph.get_tensor_by_name('initial_state:0'),
loaded_graph.get_tensor_by_name('final_state:0'),
loaded_graph.get_tensor_by_name('probs:0'))
return tensorsSaved
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
next_word = np.random.choice(a=list(int_to_vocab.values()), p=probabilities)
return next_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'homer_simpson'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
2,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation of ADAM
This method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation. The method is designed to combine the advantages of two recently popular methods
Step1: As a simple example, let us find a local minimum for the function $f(x) = x^3-2x^2+2$
Step2: We can see from plot above that our local minimum is gonna be near around 1.4 or 1.5 (on the x-axis), but let's pretend that we don't know that, so we set our starting point (arbitrarily, in this case) at $x_0 = 2$
Step3: Reference for algorithm | Python Code:
%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt
Explanation: Implementation of ADAM
This method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation. The method is designed to combine the advantages of two recently popular methods: AdaGrad (Duchi et al., 2011), which works well with sparse gradients, and RMSProp (Tieleman & Hinton, 2012), which works well in on-line and non-stationary
settings. Some of Adam’s advantages are that the magnitudes of parameter updates are invariant to rescaling of the gradient, its stepsizes are approximately bounded by the stepsize hyperparameter, it does not require a stationary objective, it works with sparse gradients, and it naturally performs a form of step size annealing. [arXiv:1412.6980]
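For reference, these are the update rules the loop below implements (Algorithm 1 of the paper), where $g_t$ is the gradient of $f$ at step $t$:
$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2$$
$$\hat{m}_t = \frac{m_t}{1-\beta_1^{\,t}}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^{\,t}}, \qquad x_t = x_{t-1} - \frac{\alpha\,\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$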
End of explanation
f = lambda x: x**3-2*x**2+2
x = np.linspace(-1,2.5,1000)
plt.plot(x,f(x))
plt.xlim([-1,2.5])
plt.ylim([0,3])
plt.show()
Explanation: As a simple example, let us find a local minimum for the function $f(x) = x^3-2x^2+2$
End of explanation
# returns the value of the derivative of our function
def f_prime(x):
return 3*x**2-4*x
Explanation: We can see from plot above that our local minimum is gonna be near around 1.4 or 1.5 (on the x-axis), but let's pretend that we don't know that, so we set our starting point (arbitrarily, in this case) at $x_0 = 2$
End of explanation
# to avoid division by zero, we choose epsilon as 10**-8
epsilon = 10**-8
# learning rate
alpha = 0.1
# exponential decay rates
b1 = 0.9
b2 = 0.999
# set precision
precision = 0.0001
# make lists for plotting ADAM
new_x = 2 # The algorithm starts at x=2
x_list, y_list = [np.array([new_x])], [f(new_x)]
x = new_x
# initialize m, v, t to zero
m = 0
v = 0
t = 0
# initialize sentinel to False
converged = False
# while the optimization algorithm has not yet converged
while (not converged):
# time step increment by 1
t = t + 1
# determine the value of gradient
g = f_prime(x)
# calculate m
m = b1*m + ((1 - b1)*g)
# calculate v
v = b2*v + ((1 - b2)*(g**2))
# calculate m_hat
m_hat = m/(1 - b1**t)
# calculate v_hat
v_hat = v/(1 - b2**t)
# determine new_x
    new_x = x - (alpha*m_hat/((v_hat**0.5) + epsilon))
    # calculate convergence measure
if abs(new_x - x) <= precision:
converged = True
# set x to the new_x
x = new_x
x_list.append(new_x)
y_list.append(f(new_x))
print "Local minimum occurs at x =", new_x, "with f(x) =", f(new_x)
print "Number of steps:", len(x_list)
# visualize the gradient descent
plt.figure(figsize=[10,3])
x = np.linspace(-1,2.5,1000)
plt.plot(x,f(x))
plt.xlim([-1,2.5])
plt.ylim([0,3])
plt.subplot(1,2,1)
plt.scatter(x_list,y_list,c="r")
plt.plot(x_list,y_list,c="r")
plt.plot(x,f(x), c="b")
plt.xlim([-1,2.5])
plt.ylim([0,3])
plt.title("Gradient descent using ADAM")
plt.subplot(1,2,2)
plt.scatter(x_list,y_list,c="r")
plt.plot(x_list,y_list,c="r")
plt.plot(x,f(x), c="b")
plt.xlim([1.2,1.7])
plt.ylim([0,3])
plt.title("Gradient descent using ADAM (zoomed in)")
plt.show()
Explanation: Reference for algorithm: https://arxiv.org/pdf/1412.6980.pdf
Algorithm 1: Adam, our proposed algorithm for stochastic optimization.
Good default settings for the tested machine learning problems are $\alpha = 0.001$,
$\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$
End of explanation |
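# ---------------------------------------------------------------------------
# Follow-up sketch (an addition, not part of the original notebook): the same
# Adam update wrapped in a small reusable function so it can be re-run with the
# paper's default hyperparameters quoted above (alpha=0.001, beta1=0.9,
# beta2=0.999, epsilon=1e-8). The name `adam_minimize` and its argument names
# are illustrative choices, not taken from the paper.
import math  # already imported above; repeated so this sketch is self-contained

def adam_minimize(grad, x0, alpha=0.001, b1=0.9, b2=0.999,
                  eps=10**-8, precision=0.0001, max_iter=200000):
    # moment estimates and the time step all start at zero, as in Algorithm 1
    m, v, t, x = 0.0, 0.0, 0, float(x0)
    while t < max_iter:
        t = t + 1
        g = grad(x)                        # gradient at the current point
        m = b1*m + (1 - b1)*g              # biased first moment estimate
        v = b2*v + (1 - b2)*(g**2)         # biased second raw moment estimate
        m_hat = m/(1 - b1**t)              # bias-corrected first moment
        v_hat = v/(1 - b2**t)              # bias-corrected second raw moment
        new_x = x - alpha*m_hat/(math.sqrt(v_hat) + eps)
        if abs(new_x - x) <= precision:    # same stopping rule as the loop above
            return new_x, t
        x = new_x
    return x, t

# Usage sketch: with the much smaller default step size, expect far more
# iterations than the alpha=0.1 run above before the stopping rule triggers.
x_star, n_steps = adam_minimize(f_prime, 2.0)
print("With paper defaults: x =", x_star, "after", n_steps, "steps")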
2,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
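# Illustrative note (an addition, not from the ES-DOC template): the answer is
# recorded by passing one of the valid choices listed above to DOC.set_value,
# e.g. (hypothetical choice only -- the real value depends on the model):
# DOC.set_value("OGCM")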
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
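# Illustrative note (an addition, not from the ES-DOC template): this FLOAT
# property takes a plain number, e.g. (hypothetical value only -- use the
# specific heat actually configured in the model, typically close to
# 4000 J/(kg K) for seawater):
# DOC.set_value(3992.0)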
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
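# Illustrative usage only - BOOLEAN properties take an unquoted value, e.g.:
#     DOC.set_value(True)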
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
2,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as
Step5: To test your feature derivative run the following
Step6: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
Step7: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have smaller L2 norms than they otherwise would. Let's see how large weights get penalized. Let us consider a simple model with 1 feature
Step8: Let us split the dataset into training set and test set. Make sure to use seed=0
Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step10: Let's set the parameters for our optimization
Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step13: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
Step14: Compute the RSS on the TEST data for the following three sets of weights
Step15: QUIZ QUESTIONS
1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. Comparing the lines you fit with no regularization versus high regularization, which one is steeper?
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features
Step16: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
Step17: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step18: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step19: Compute the RSS on the TEST data for the following three sets of weights | Python Code:
import graphlab
import numpy as np
Explanation: Regression Week 4: Ridge Regression (gradient descent)
In this notebook, you will implement ridge regression via gradient descent. You will:
* Convert an SFrame into a Numpy array
* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty
Fire up graphlab create
Make sure you have the latest version of GraphLab Create (>= 1.7)
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe
# (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
End of explanation
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding
# numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
derivative = 2 * np.dot(errors, feature)
if not feature_is_constant:
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
derivative = derivative + 2 * l2_penalty * weight
return derivative
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as:
2*SUM[ error*[feature_i] ].
The derivative of the regularization term with respect to w[i] is:
2*l2_penalty*w[i].
Summing both, we get
2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus 2*l2_penalty*w[i].
We will not regularize the constant. Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the 2*l2_penalty*w[0] term).
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus 2*l2_penalty*w[i].
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call feature_is_constant which you should set to True when computing the derivative of the constant and False otherwise.
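In symbols, with $e_j$ the prediction error on data point $j$ and $\lambda$ the l2_penalty, the derivative being computed is
$$\frac{\partial \mathrm{Cost}}{\partial w_i} = 2\sum_j e_j x_{ij} + 2\lambda w_i \quad (i>0), \qquad \frac{\partial \mathrm{Cost}}{\partial w_0} = 2\sum_j e_j.$$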
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
Explanation: To test your feature derivative run the following:
End of explanation
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
weights = np.array(initial_weights) # make sure it's a numpy array
iter = 1
while iter <= max_iterations:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
for i in xrange(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
isConstant = False
if i == 0:
isConstant = True
derivative = feature_derivative_ridge(errors, feature_matrix[:,i], weights[i], l2_penalty, isConstant)
# subtract the step size times the derivative from the current weight
weights[i] = weights[i] - step_size * derivative
iter += 1
return weights
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
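Written out, the update applied to every weight on each iteration is
$$w_i \leftarrow w_i - \eta\Big(2\sum_j e_j x_{ij} + 2\lambda w_i\Big) \quad (i>0), \qquad w_0 \leftarrow w_0 - 2\eta\sum_j e_j,$$
where $\eta$ is the step size, $\lambda$ the L2 penalty, and $e_j$ the prediction error on data point $j$.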
End of explanation
simple_features = ['sqft_living']
my_output = 'price'
Explanation: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have smaller L2 norms than they otherwise would. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Let us split the dataset into training set and test set. Make sure to use seed=0:
End of explanation
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
Explanation: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get Numpy versions of your data with only this feature, for both the train_data and the test_data.
End of explanation
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
Explanation: Let's set the parameters for our optimization:
End of explanation
l2_penalty = 0.0
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix,
output,
initial_weights,
step_size,
l2_penalty,
max_iterations)
print simple_weights_0_penalty
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_0_penalty
we'll use them later.
End of explanation
l2_penalty = 1e11
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix,
output,
initial_weights,
step_size,
l2_penalty,
max_iterations)
print simple_weights_high_penalty
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_high_penalty
we'll use them later.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
Explanation: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
End of explanation
test_predictions = predict_output(simple_test_feature_matrix, initial_weights)
test_errors = test_predictions - test_output
RSS_initial_weights = sum(test_errors * test_errors)
print RSS_initial_weights
testNoReg_predictions = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
testNoReg_errors = testNoReg_predictions - test_output
NoReg_RSS = sum(testNoReg_errors * testNoReg_errors)
print NoReg_RSS
testReg_predictions = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
testReg_errors = testReg_predictions - test_output
Reg_RSS = sum(testReg_errors * testReg_errors)
print Reg_RSS
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
Explanation: QUIZ QUESTIONS
1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. Comparing the lines you fit with no regularization versus high regularization, which one is steeper?
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features: ['sqft_living', 'sqft_living15'].
First, create Numpy versions of your training and test data with these two features.
End of explanation
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
Explanation: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
End of explanation
l2_penalty = 0.0
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix,
output,
initial_weights,
step_size,
l2_penalty,
max_iterations)
print multiple_weights_0_penalty
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_0_penalty
End of explanation
l2_penalty = 1e11
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix,
output,
initial_weights,
step_size,
l2_penalty,
max_iterations)
print multiple_weights_high_penalty
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_high_penalty
End of explanation
multiple_test_pred = predict_output(test_feature_matrix, initial_weights)
multiple_test_errors = multiple_test_pred - test_output
RSS_multi_initial_weights = sum(multiple_test_errors * multiple_test_errors)
print RSS_multi_initial_weights
multiple_testNoReg_pred = predict_output(test_feature_matrix, multiple_weights_0_penalty)
multiple_testNoReg_errors = multiple_testNoReg_pred - test_output
RSS_multi_NoReg = sum(multiple_testNoReg_errors * multiple_testNoReg_errors)
print RSS_multi_NoReg
multiple_testReg_pred = predict_output(test_feature_matrix, multiple_weights_high_penalty)
multiple_testReg_errors = multiple_testReg_pred - test_output
RSS_multi_Reg = sum(multiple_testReg_errors * multiple_testReg_errors)
print RSS_multi_Reg
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation |
2,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Postcards of Parliament From A Digital Flâneur
Flâneur, noun, A man who saunters around observing society.
The flâneur wandered in the shopping arcades, but he did not give in to the temptations of consumerism; the arcade was primarily a pathway to a rich sensory experience — and only then a temple of consumption. His goal was to observe, to bathe in the crowd, taking in its noises, its chaos, its heterogeneity, its cosmopolitanism. Occasionally, he would narrate what he saw — surveying both his private self and the world at large — in the form of short essays for daily newspapers.
The Death of the Cyberflâneur, Evgeny Morozov, New York Times, Sunday Review, February 4, 2012
APIs
Using the APIs
Packages Make Life Easier (?)
Step1: Creating Custom pandas Data Reader Packages
Step2: Package Issues
development
building up example and reusable recipes
ownership and production quality (participation in development)
Notebooks as Open / Shared Recipes
But How Do I Share Working Examples?
BinderHub Build Sequence
"[P]hilosophically similar to Heroku Build Packs"
requirements.txt
python packages
environment.yml
conda environment specification
apt.txt
debian packages that should be installed (latest version of Ubuntu)
postBuild
arbitrary commands to be run after the whole repository has been built
REQUIRE
Julia packages
Dockerfile
treated as a regular Dockerfile. The presence of a Dockerfile will cause all other building behavior to not be triggered.
Building a Local Docker Image From a Github Repository
```bash
pip3 install jupyter-repo2docker
jupyter-repo2docker --image-name psychemedia/parlihacks --no-run https | Python Code:
import mnis
import datetime
# Create a date for the analysis
d = datetime.date.today()
# Download the full data for MPs serving on the given date as a list
mnis.getCommonsMembersOn(d)[0]
Explanation: Postcards of Parliament From A Digital Flâneur
Flâneur, noun, A man who saunters around observing society.
The flâneur wandered in the shopping arcades, but he did not give in to the temptations of consumerism; the arcade was primarily a pathway to a rich sensory experience — and only then a temple of consumption. His goal was to observe, to bathe in the crowd, taking in its noises, its chaos, its heterogeneity, its cosmopolitanism. Occasionally, he would narrate what he saw — surveying both his private self and the world at large — in the form of short essays for daily newspapers.
The Death of the Cyberflâneur, Evgeny Morozov, New York Times, Sunday Review, February 4, 2012
APIs
Using the APIs
Packages Make Life Easier (?)
End of explanation
import pd_datareader_nhs.nhs_digital_ods as ods
ods.search(string='Prison', field='Label')
dd=ods.download('eprison')
dd.head()
Explanation: Creating Custom pandas Data Reader Packages
End of explanation
import requests
requests.get('http://127.0.0.1:8899/demo/role/worker').json()
requests.get('http://127.0.0.1:8899/demo/name/jo').json()
Explanation: Package Issues
development
building up example and reusable recipes
ownership and production quality (participation in development)
Notebooks as Open / Shared Recipes
But How Do I Share Working Examples?
BinderHub Build Sequence
"[P]hilosophically similar to Heroku Build Packs"
requirements.txt
python packages
environment.yml
conda environment specification
apt.txt
debian packages that should be installed (latest version of Ubuntu)
postBuild
arbitrary commands to be run after the whole repository has been built
REQUIRE
Julia packages
Dockerfile
treated as a regular Dockerfile. The presence of a Dockerfile will cause all other building behavior to not be triggered.
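For illustration only — these are not the contents of any particular repository — a requirements.txt simply lists the Python packages the notebooks import, while postBuild is an ordinary shell script that is run once after the image has been built, for example:
```bash
# postBuild (illustrative sketch): enable the ipywidgets notebook extension
jupyter nbextension enable --py widgetsnbextension --sys-prefix
```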
Building a Local Docker Image From a Github Repository
```bash
pip3 install jupyter-repo2docker
jupyter-repo2docker --image-name psychemedia/parlihacks --no-run https://github.com/psychemedia/parlihacks
docker push psychemedia/parlihacks
```
Creating Simple Service APIs
In terminal:
jupyter kernelgateway --KernelGatewayApp.api='kernel_gateway.notebook_http' --KernelGatewayApp.seed_uri='./SimpleAPI2.ipynb' --port 8899
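The seeded notebook (SimpleAPI2.ipynb above) defines the endpoints; in kernel gateway's notebook-http mode an endpoint is just a cell annotated with an HTTP-verb comment that reads the injected REQUEST JSON and prints its response. The actual cells of that notebook are not shown here, so the handler below is only a hypothetical sketch of the pattern:
```python
# GET /demo/name/:name
import json

req = json.loads(REQUEST)                # REQUEST is injected by kernel_gateway
name = req.get('path', {}).get('name')   # path parameter taken from the URL
print(json.dumps({'name': name}))        # whatever is printed becomes the response body
```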
End of explanation |
2,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
----- IMPORTANT ------
The code presented here assumes that you're running TensorFlow v1.3.0 or higher, this was not released yet so the easiet way to run this is update your TensorFlow version to TensorFlow's master.
To do that go here and then execute
Step1: 1) Simple Linear Regression with low-level TensorFlow
Generating data
This function creates a noisy dataset that's roughly linear, according to the equation y = mx + b + noise.
Notice that the expected value for m is 0.1 and for b is 0.3. This is the values we expect the model to predict.
Step2: Create training data
Step3: Plot the training data
Step4: The Model
Step5: The Loss and Optimizer
Define a loss function (here, squared error) and an optimizer (here, gradient descent).
Step6: The Training Loop and generating predictions
Step7: Visualizing predictions
Step8: What is the final weight and bias?
Step9: 2) Simple Linear Regression with a canned estimator
Input Pipeline
Step10: Describe input feature usage
Step11: Build and train the model
Step12: Generating and visualizing predictions
Step13: 3) Playing with real data
Step14: Load the data
Step15: Input pipeline
Step16: Feature description
Step17: Evaluate the model
Step18: DNN model
Update input pre-processing
Step19: Custom Input Pipeline using Datasets API
Read the data
Step20: Try the input function
Step21: 4) Building a custom estimator to classify handwritten digits (MNIST)
Image from
Step22: tf.estimator.LinearClassifier
Step23: Examine the results with TensorBoard
$> tensorboard --logdir mnist/DNN
Step24: A Custom Model
Step25: Runs estimator
Step26: Distributed tensorflow | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
# tensorflow
import tensorflow as tf
print('Expected TensorFlow version is v1.3.0 or higher')
print('Your TensorFlow version:', tf.__version__)
# data manipulation
import numpy as np
import pandas as pd
# visualization
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [12,8]
Explanation: ----- IMPORTANT ------
The code presented here assumes that you're running TensorFlow v1.3.0 or higher; this had not been released at the time of writing, so the easiest way to run this is to update your TensorFlow version to TensorFlow's master.
To do that go here and then execute:
pip install --ignore-installed --upgrade <URL for the right binary for your machine>.
For example, considering a Linux CPU-only running python2:
pip install --upgrade https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl
Here is a walk-through to help you get started with TensorFlow:
1) Simple Linear Regression with low-level TensorFlow
2) Simple Linear Regression with a canned estimator
3) Playing with real data: linear regressor and DNN
4) Building a custom estimator to classify handwritten digits (MNIST)
What's next?
Dependencies
End of explanation
def make_noisy_data(m=0.1, b=0.3, n=100):
x = np.random.randn(n)
noise = np.random.normal(scale=0.01, size=len(x))
y = m * x + b + noise
return x, y
Explanation: 1) Simple Linear Regression with low-level TensorFlow
Generating data
This function creates a noisy dataset that's roughly linear, according to the equation y = mx + b + noise.
Notice that the expected value for m is 0.1 and for b is 0.3. This is the values we expect the model to predict.
End of explanation
x_train, y_train = make_noisy_data()
Explanation: Create training data
End of explanation
plt.plot(x_train, y_train, 'b.')
Explanation: Plot the training data
End of explanation
# input and output
x = tf.placeholder(shape=[None], dtype=tf.float32, name='x')
y_label = tf.placeholder(shape=[None], dtype=tf.float32, name='y_label')
# variables
W = tf.Variable(tf.random_normal([1], name="W")) # weight
b = tf.Variable(tf.random_normal([1], name="b")) # bias
# actual model
y = W * x + b
Explanation: The Model
End of explanation
loss = tf.reduce_mean(tf.square(y - y_label))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(loss)
Explanation: The Loss and Optimizer
Define a loss function (here, squared error) and an optimizer (here, gradient descent).
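In formulas, the code in this section minimizes the mean squared error
$$L(W,b) = \frac{1}{N}\sum_{i=1}^{N}\big(Wx_i + b - y_i\big)^2$$
by repeatedly applying the gradient-descent updates $W \leftarrow W - \eta\,\partial L/\partial W$ and $b \leftarrow b - \eta\,\partial L/\partial b$ with learning rate $\eta = 0.1$.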
End of explanation
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init) # initialize variables
for i in range(100): # train for 100 steps
sess.run(train, feed_dict={x: x_train, y_label:y_train})
x_plot = np.linspace(-3, 3, 101) # return evenly spaced numbers over a specified interval
# using the trained model to predict values for the training data
y_plot = sess.run(y, feed_dict={x: x_plot})
# saving final weight and bias
final_W = sess.run(W)
final_b = sess.run(b)
Explanation: The Training Loop and generating predictions
End of explanation
plt.scatter(x_train, y_train)
plt.plot(x_plot, y_plot, 'g')
Explanation: Visualizing predictions
End of explanation
print('W:', final_W, 'expected: 0.1')
print('b:', final_b, 'expected: 0.3')
Explanation: What is the final weight and bias?
End of explanation
x_dict = {'x': x_train}
train_input = tf.estimator.inputs.numpy_input_fn(x_dict, y_train,
shuffle=True,
num_epochs=None) # repeat forever
Explanation: 2) Simple Linear Regression with a canned estimator
Input Pipeline
End of explanation
features = [tf.feature_column.numeric_column('x')] # because x is a real number
Explanation: Describe input feature usage
End of explanation
estimator = tf.estimator.LinearRegressor(features)
estimator.train(train_input, steps = 1000)
Explanation: Build and train the model
End of explanation
x_test_dict = {'x': np.linspace(-5, 5, 11)}
data_source = tf.estimator.inputs.numpy_input_fn(x_test_dict, shuffle=False)
predictions = list(estimator.predict(data_source))
preds = [p['predictions'][0] for p in predictions]
for y in predictions:
print(y['predictions'])
plt.scatter(x_train, y_train)
plt.plot(x_test_dict['x'], preds, 'g')
Explanation: Generating and visualizing predictions
End of explanation
census_train_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
census_train_path = tf.contrib.keras.utils.get_file('census.train', census_train_url)
census_test_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test'
census_test_path = tf.contrib.keras.utils.get_file('census.test', census_test_url)
Explanation: 3) Playing with real data: linear regressor and DNN
Get the data
The Adult dataset is from the Census bureau and the task is to predict whether a given adult makes more than $50,000 a year based on attributes such as education, hours of work per week, etc.
But the code presented here can be easily applied to any csv dataset that fits in memory.
More about the data here
End of explanation
column_names = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
'income'
]
census_train = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_test = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_train_label = census_train.pop('income') == " >50K"
census_test_label = census_test.pop('income') == " >50K"
census_train.head(10)
census_train_label[:20]
Explanation: Load the data
End of explanation
train_input = tf.estimator.inputs.pandas_input_fn(
census_train,
census_train_label,
shuffle=True,
batch_size = 32, # process 32 examples at a time
num_epochs=None,
)
test_input = tf.estimator.inputs.pandas_input_fn(
census_test,
census_test_label,
shuffle=True,
num_epochs=1)
features, labels = train_input()
features
Explanation: Input pipeline
End of explanation
features = [
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('education-num'), list(range(25))),
    tf.feature_column.categorical_column_with_vocabulary_list('sex', [' Male', ' Female']),  # raw census strings are capitalised with a leading space
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000),
]
estimator = tf.estimator.LinearClassifier(features, model_dir='census/linear',n_classes=2)
estimator.train(train_input, steps=5000)
Explanation: Feature description
End of explanation
estimator.evaluate(test_input)
Explanation: Evaluate the model
End of explanation
features = [
tf.feature_column.numeric_column('education-num'),
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.numeric_column('age'),
tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list('sex', [' Male', ' Female'])),  # match the raw strings in the data
tf.feature_column.embedding_column( # now using embedding!
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000), 10)
]
estimator = tf.estimator.DNNClassifier(hidden_units=[20,20],
feature_columns=features,
n_classes=2,
model_dir='census/dnn')
estimator.train(train_input, steps=5000)
estimator.evaluate(test_input)
Explanation: DNN model
Update input pre-processing
End of explanation
def census_input_fn(path):
def input_fn():
dataset = (
tf.contrib.data.TextLineDataset(path)
.map(csv_decoder)
.shuffle(buffer_size=100)
.batch(32)
.repeat())
columns = dataset.make_one_shot_iterator().get_next()
income = tf.equal(columns.pop('income')," >50K")
return columns, income
return input_fn
csv_defaults = collections.OrderedDict([
('age',[0]),
('workclass',['']),
('fnlwgt',[0]),
('education',['']),
('education-num',[0]),
('marital-status',['']),
('occupation',['']),
('relationship',['']),
('race',['']),
('sex',['']),
('capital-gain',[0]),
('capital-loss',[0]),
('hours-per-week',[0]),
('native-country',['']),
('income',['']),
])
def csv_decoder(line):
parsed = tf.decode_csv(line, csv_defaults.values())
return dict(zip(csv_defaults.keys(), parsed))
Explanation: Custom Input Pipeline using Datasets API
Read the data
End of explanation
tf.reset_default_graph()
census_input = census_input_fn(census_train_path)
training_batch = census_input()
with tf.Session() as sess:
features, high_income = sess.run(training_batch)
print(features['education'])
print(features['age'])
print(high_income)
Explanation: Try the input function
End of explanation
train,test = tf.contrib.keras.datasets.mnist.load_data()
x_train,y_train = train
x_test,y_test = test
mnist_train_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_train, dtype=np.float32)},
np.array(y_train,dtype=np.int32),
shuffle=True,
num_epochs=None)
mnist_test_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_test, dtype=np.float32)},
np.array(y_test,dtype=np.int32),
shuffle=True,
num_epochs=1)
Explanation: 4) Building a custom estimator to classify handwritten digits (MNIST)
Image from: http://rodrigob.github.io/are_we_there_yet/build/images/mnist.png?1363085077
End of explanation
estimator = tf.estimator.LinearClassifier([tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/linear")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
Explanation: tf.estimator.LinearClassifier
End of explanation
estimator = tf.estimator.DNNClassifier(hidden_units=[256],
feature_columns=[tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/DNN")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
# Parameters
BATCH_SIZE = 128
STEPS = 10000
Explanation: Examine the results with TensorBoard
$> tensorboard --logdir mnist/DNN
End of explanation
def build_cnn(input_layer, mode):
with tf.name_scope("conv1"):
conv1 = tf.layers.conv2d(inputs=input_layer,filters=32, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool1"):
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
with tf.name_scope("conv2"):
conv2 = tf.layers.conv2d(inputs=pool1,filters=64, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool2"):
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
with tf.name_scope("dense"):
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
with tf.name_scope("dropout"):
is_training_mode = mode == tf.estimator.ModeKeys.TRAIN
dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=is_training_mode)
logits = tf.layers.dense(inputs=dropout, units=10)
return logits
def model_fn(features, labels, mode):
# Describing the model
input_layer = tf.reshape(features['x'], [-1, 28, 28, 1])
tf.summary.image('mnist_input',input_layer)
logits = build_cnn(input_layer, mode)
# Generate Predictions
classes = tf.argmax(input=logits, axis=1)
predictions = {
'classes': classes,
'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
}
if mode == tf.estimator.ModeKeys.PREDICT:
# Return an EstimatorSpec object
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
with tf.name_scope('loss'):
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_sum(loss)
tf.summary.scalar('loss', loss)
with tf.name_scope('accuracy'):
accuracy = tf.cast(tf.equal(tf.cast(classes,tf.int32),labels),tf.float32)
accuracy = tf.reduce_mean(accuracy)
tf.summary.scalar('accuracy', accuracy)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=1e-4,
optimizer='Adam')
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, train_op=train_op)
# Configure the accuracy metric for evaluation
eval_metric_ops = {
        'accuracy': tf.metrics.accuracy(
            labels=labels,
            predictions=classes)
}
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, eval_metric_ops=eval_metric_ops)
Explanation: A Custom Model
End of explanation
# create estimator
run_config = tf.contrib.learn.RunConfig(model_dir='mnist/CNN')
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
# train for 10000 steps
estimator.train(input_fn=mnist_train_input, steps=10000)
# evaluate
estimator.evaluate(input_fn=mnist_test_input)
# predict
preds = estimator.predict(input_fn=mnist_test_input)
Explanation: Runs estimator
End of explanation
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)
# create experiment
def experiment_fn(run_config, hparams):
# create estimator
estimator = tf.estimator.Estimator(model_fn=model_fn,
config=run_config)
return tf.contrib.learn.Experiment(
estimator,
        train_input_fn=mnist_train_input,
        eval_input_fn=mnist_test_input,
train_steps=STEPS
)
# run experiment
learn_runner.run(experiment_fn,
run_config=run_config)
Explanation: Distributed tensorflow: using experiments
End of explanation |
2,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Baxter kinematics
In this notebook, we'll try IKPy on the baxter robot
You will get the following chains
Step1: ## Robot import and setup
Step2: Inverse kinematics | Python Code:
# Some necessary imports
import numpy as np
from ikpy.chain import Chain
from ikpy.utils import plot
# Optional: support for 3D plotting in the NB
%matplotlib widget
# turn this off, if you don't need it
Explanation: Baxter kinematics
In this notebook, we'll try IKPy on the baxter robot
You will get the following chains:
Let's begin!
Requirements
To get this notebook to work, you need to install IKPy, version >= 3.0
Also, if you want to use interactive 3D visualisation (highly recommended), you must use the widget matplotlib backend
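One way to satisfy both requirements in a plain pip environment (assuming ipympl is what provides the widget backend in your Jupyter setup) is:
```bash
pip install "ikpy>=3.0" ipympl
```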
End of explanation
# First, let's import the baxter chains
baxter_left_arm_chain = Chain.from_json_file("../resources/baxter/baxter_left_arm.json")
baxter_right_arm_chain = Chain.from_json_file("../resources/baxter/baxter_right_arm.json")
baxter_pedestal_chain = Chain.from_json_file("../resources/baxter/baxter_pedestal.json")
baxter_head_chain = Chain.from_json_file("../resources/baxter/baxter_head.json")
from mpl_toolkits.mplot3d import Axes3D;
fig, ax = plot.init_3d_figure();
baxter_left_arm_chain.plot([0] * (len(baxter_left_arm_chain)), ax)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: ## Robot import and setup
End of explanation
### Let's try some IK
fig, ax = plot.init_3d_figure();
target = [0.3, 0.5, 0.15]
target_orientation = [1, 0, 0]
frame_target = np.eye(4)
frame_target[:3, 3] = target
ik = baxter_left_arm_chain.inverse_kinematics_frame(frame_target)
baxter_left_arm_chain.plot(ik, ax, target=target)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: Inverse kinematics
End of explanation |
2,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time dependent Schrödinger equation
We want to describe an electron wavefunction by a wavepacket
$\psi(x,t)$ that is a function of position $x$ and time $t$. We assume
that the electron is initially localized around $x_0$, and model this by
a Gaussian multiplying a plane wave
Step1: Exercise 8.2
Step2: Exercise 8.2 | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot
import math
import matplotlib.animation as animation
from IPython.display import HTML
lx=20
dx = 0.04
nx = int(lx/dx)
dt = dx**2/20.
V0 = 60
alpha = dt/dx**2
fig = pyplot.figure()
ax = pyplot.axes(xlim=(0, lx), ylim=(0, 2), xlabel='x', ylabel='|Psi|^2')
points, = ax.plot([], [], marker='', linestyle='-', lw=3)
psi0_r = np.zeros(nx+1)
psi0_i = np.zeros(nx+1)
x = np.arange(0, lx+dx, dx)
#Define your potential
V = np.zeros(nx+1)
V = V0*(x-lx/2)**2
#Initial conditions: wave packet
sigma2 = 0.5**2
k0 = np.pi*10.5
x0 = lx/2
for ix in range(0,nx):
psi0_r[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.cos(k0*ix*dx)
psi0_i[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.sin(k0*ix*dx)
def solve(i):
global psi0_r, psi0_i
for ix in range(1,nx-2):
psi0_i[ix]=psi0_i[ix]+alpha*psi0_r[ix+1]+alpha*psi0_r[ix-1]-(2.*alpha+dt*V[ix])*psi0_r[ix]
for ix in range(1,nx-2):
psi0_r[ix]=psi0_r[ix]-alpha*psi0_i[ix+1]-alpha*psi0_i[ix-1]+(2.*alpha+dt*V[ix])*psi0_i[ix]
points.set_data(x,psi0_r**2 + psi0_i**2)
return (points,)
#for i in range(2000):
# solve(i)
#pyplot.plot(x,psi0_r**2+psi0_i**2);
anim = animation.FuncAnimation(fig, solve, frames = 8000, interval=50)
HTML(anim.to_jshtml())
Explanation: Time dependent Schrödinger equation
We want to describe an electron wavefunction by a wavepacket
$\psi(x,t)$ that is a function of position $x$ and time $t$. We assume
that the electron is initially localized around $x_0$, and model this by
a Gaussian multiplying a plane wave:
$$\psi(x,t=0)=\exp{\left[-\frac{1}{2}\left(\frac{x-x_0}{\sigma _0}
\right)^2\right ]} e^{ik_0x}
$$
This wave function does not correspond to an electron with a well
defined momentum. However, if the width of the Gaussian $\sigma _0$ is
made very large, the electron gets spread over a sufficiently large
region of space and can be considered as a plane wave with momentum
$k_0$ with a slowly varying amplitude.
The behavior of this wave packet as a function of time is described by
the time-dependent Schrödinger equation (here in 1d):
$$i\frac{\partial \psi}{\partial t}=H\psi(x,t).$$
$H$ is the Hamiltonian operator:
$$H=-\frac{1}{2m}\frac{\partial ^2}{\partial x^2}+V(x),$$
where $V(x)$ is a time independent potential. The
Hamiltonian is chosen to be real. We have picked the energy units such
that $\hbar=1$, and from now on, we will pick mass units such that
$2m=1$ to make equations simpler.
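With these choices the Hamiltonian used in the rest of this section reduces to
$$H = -\frac{\partial^2}{\partial x^2} + V(x).$$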
Schrödinger's equation is obviously a PDE, and we can use
generalizations of the techniques learned in previous sections to solve
it. The main observation is that this time we have to deal with complex
numbers, and the function $\psi(x,t)$ has real and imaginary parts:
$$\psi (x,t) = R(x,t)+iI(x,t).$$ However, in this section we will
present an alternative method that makes the quantum mechanical nature
of this problem more transparent.
The time-evolution operator
The Schrödinger equation above can be integrated in a formal sense
to obtain:
$$\psi(x,t)=U(t)\psi(x,t=0)=e^{-iHt}\psi(x,t=0).$$
From here we deduce that the wave function can be
evolved forward in time by applying the time-evolution operator
$U(t)=e^{-iHt}$: $$\psi(t+\Delta t)= e^{-iH\Delta t}\psi(t).$$
Likewise, the inverse of the time-evolution operator moves the wave
function back in time: $$\psi(t-\Delta t)=e^{iH\Delta t}\psi(t),$$ where
we have used the property $$U^{-1}(t)=U(-t).$$ Although it would be nice
to have an algorithm based on the direct application of $U$, it has been
shown that this is not stable. Hence, we apply the following relation:
$$\psi(t+\Delta t)=\psi(t-\Delta t)+\left[e^{-iH\Delta t}-e^{iH\Delta
t}\right]\psi(t).$$ Now, the derivatives can be
approximated by finite differences:
$$\begin{aligned}
\frac{\partial \psi}{\partial t} &\sim \frac{\psi(x,t+\Delta t)-\psi(x, t)}{\Delta t}, \\
\frac{\partial ^2 \psi}{\partial x^2} &\sim \frac{\psi(x+\Delta x,t)+\psi(x-\Delta x,t)-2\psi(x,t)}{(\Delta x)^2}.
\end{aligned}$$
The time evolution operator is
approximated to first order by: $$U(\Delta t)=e^{-iH\Delta t} \sim 1-iH\Delta t.$$
Replacing the expression above for $H$, we obtain:
$$\psi(x,t+\Delta t)=\psi(x,t)-i[(2\alpha+\Delta t
V(x))\psi(x,t)-\alpha(\psi(x+\Delta x,t)+\psi(x-\Delta x,t))],
$$
with $\alpha=\frac{\Delta t}{(\Delta x)^2}$. The
probability of finding an electron at $(x,t)$ is given by
$|\psi(x,t)|^2$. These equations do not conserve this probability exactly,
but the error is of the order of $(\Delta t)^2$. The convergence can be
determined by using smaller steps.
We can write this expression explicitly for the real and imaginary parts, becoming:
$$\begin{aligned}
\mathrm{Im}\, \psi(x, t + \Delta t) &= \mathrm{Im}\, \psi(x, t) + \alpha \,\mathrm{Re}\, \psi (x + \Delta x, t) + \alpha \,\mathrm{Re}\, \psi(x - \Delta x, t) - (2\alpha + \Delta t V(x))\, \mathrm{Re}\, \psi(x, t) \\
\mathrm{Re}\, \psi(x, t + \Delta t) &= \mathrm{Re}\, \psi(x, t) - \alpha \,\mathrm{Im}\, \psi (x + \Delta x, t) - \alpha \,\mathrm{Im}\, \psi (x - \Delta x, t) + (2\alpha + \Delta t V(x))\, \mathrm{Im}\, \psi(x, t)
\end{aligned}$$
Notice the symmetry between these equations: the calculation of the imaginary part of the wave function at the later time involves a weighted average of the real part of the wave function at different positions from the earlier time, while the calculation of the real part involves a weighted average of the imaginary part at different positions at the earlier time. This intermixing of the real and imaginary parts of the wave function may seem a bit strange, but remember that it is a direct result of our breaking up the wave function into its real and imaginary parts in the first place.
Exercise 8.1: Harmonic Potential
Simulate a Gaussian wave-packet moving along the $x$ axis in a harmonic potential
End of explanation
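Because the scheme above only conserves the total probability up to errors of order $(\Delta t)^2$, a quick check is to monitor the discrete norm $\sum_x |\psi|^2\,\Delta x$ while stepping. The following is only a minimal sketch, assuming the arrays psi0_r, psi0_i, the grid spacing dx and the solve function defined above:
def total_probability(psi_r, psi_i, dx):
    # discrete approximation of the integral of |psi|^2 over the box
    return np.sum(psi_r**2 + psi_i**2) * dx

norms = []
for step in range(500):
    solve(step)
    if step % 100 == 0:
        norms.append(total_probability(psi0_r, psi0_i, dx))
# the recorded values should stay nearly constant; noticeable drift
# indicates that dt is too large for the chosen dx
print(norms)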
import matplotlib.animation as animation
from IPython.display import HTML
fig = pyplot.figure()
ax = pyplot.axes(xlim=(0, lx), ylim=(0, 2), xlabel='x', ylabel='$|\Psi|^2$')
points, = ax.plot([], [], marker='', linestyle='-', lw=3)
x0=6
for ix in range(0,nx):
psi0_r[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.cos(k0*ix*dx)
psi0_i[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.sin(k0*ix*dx)
x = np.arange(0, lx+dx, dx)
V = np.zeros(nx+1)
for ix in range(nx//2-20,nx//2+20):
V[ix]=2000.
def solve(i):
global psi0_r, psi0_i
psi0_r[1:-1] = psi0_r[1:-1]- alpha*(psi0_i[2:]+psi0_i[:-2]-2*psi0_i[1:-1])+dt*V[1:-1]*psi0_i[1:-1]
psi0_i[1:-1] = psi0_i[1:-1]+ alpha*(psi0_r[2:]+psi0_r[:-2]-2*psi0_r[1:-1])-dt*V[1:-1]*psi0_r[1:-1]
points.set_data(x,psi0_r**2 + psi0_i**2)
return points
#for i in range(2000):
# solve(i)
#pyplot.plot(x,psi0_r**2+psi0_i**2);
anim = animation.FuncAnimation(fig, solve, frames = 2000, interval=10)
HTML(anim.to_jshtml())
Explanation: Exercise 8.2: Potential barrier
Simulate a Gaussian wave-packet moving along the x axis passing through a potential barrier
End of explanation
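After the packet has hit the barrier, one natural diagnostic is to estimate how much probability ended up on each side of it. This is only an illustrative sketch, assuming the 1-d arrays psi0_r, psi0_i, x, the spacing dx and the barrier centred at lx/2 defined above:
# reflection/transmission estimate from the current probability density
barrier_x = lx / 2
prob = (psi0_r**2 + psi0_i**2) * dx
reflected = prob[x < barrier_x].sum()
transmitted = prob[x > barrier_x].sum()
total = prob.sum()
print('R =', reflected / total, 'T =', transmitted / total)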
%matplotlib inline
import numpy as np
from matplotlib import pyplot
import math
lx = 20 #Box length in x
ly = 20 #Box length in y
dx = 0.25 #Incremental step size in x (Increased this to decrease the time of the sim)
dy = dx #Incremental step size in y
nx = int(lx/dx) #Number of steps in x
ny = int(ly/dy) #Number of steps in y
dt = dx**2/20. #Incremental step size in time
sigma2 = 0.5**2 #Sigma2 Value
k0 = np.pi*10.5 #K0 value
amp = math.pow(1./2., 64) #Amplitude (to avoid large values out of range. This was one issue)
alpha = (dt/2.)/dx**2 #Alpha
psi0_r = np.zeros(shape=(ny+1,nx+1)) #Initialize real part of psi
psi0_i = np.zeros(shape=(ny+1,nx+1)) #Initialize imaginary part of psi
V = np.zeros(shape=(ny+1,nx+1)) #Initialize Potential
#Define your potential wall
V = np.zeros(shape=(ny+1,nx+1))
for ix in range(nx//2-20,nx//2+20):
for iy in range(0,ny):
if(abs(iy*dy-ly/2)>2.5):
V[iy,ix] = 200.
#Initial conditions: wave packet
x0 = 6.
y0 = ly/2.
for x in range(0,nx):
for y in range(0,ny):
psi0_r[y,x] = math.exp(-0.5*((x*dx-x0)**2+(y*dy-y0)**2)/sigma2)*math.cos(k0*x*dx)
psi0_i[y,x] = math.exp(-0.5*((x*dx-x0)**2+(y*dy-y0)**2)/sigma2)*math.sin(k0*x*dx)
x = np.arange(0, lx+dx, dx)
y = np.arange(0, ly+dy, dy)
X, Y = np.meshgrid(x, y)
pyplot.contourf(X,Y,psi0_r**2+psi0_i**2)
pyplot.contour(X,Y,V)
#Function to solve incremental changes in psi
def solve():
#Grab psi lists
global psi0_r, psi0_i
#Calculate Imaginary Part in all points except for last 2 (because of indice notation)
for x in range(1,nx-2):
for y in range(1,ny-2):
psi0_i[y,x] = psi0_i[y,x] + alpha*psi0_r[y,x+1] + alpha*psi0_r[y,x-1] - (2*alpha + dt*V[y,x])*psi0_r[y,x] + alpha*psi0_r[y+1,x] + alpha*psi0_r[y-1,x] - (2*alpha+dt*V[y,x])*psi0_r[y,x]
#Calculate Real Part in all points except for last 2 (because of indice notation)
for x in range(1,nx-2):
for y in range(1,ny-2):
psi0_r[y,x] = psi0_r[y,x] - alpha*psi0_i[y,x+1] - alpha*psi0_i[y,x-1] + (2*alpha + dt*V[y,x])*psi0_i[y,x] - alpha*psi0_i[y+1,x] - alpha*psi0_i[y-1,x] + (2*alpha+dt*V[y,x])*psi0_i[y,x]
for i in range(1000):
solve()
pyplot.contourf(X,Y,psi0_r**2+psi0_i**2);
#pyplot.contour(X,Y,V);
Explanation: Exercise 8.2: Single-slit diffraction
Young’s single-slit experiment consists of a wave passing through a small
slit, which causes the emerging wavelets to interfere with each other,
forming a diffraction pattern. In quantum mechanics, where particles are
represented by probabilities, and probabilities by wave packets, it
means that the same phenomenon should occur when a particle (electron,
neutron) passes through a small slit. Consider a wave packet of initial
width 3 incident on a slit of width 5, and plot the probability density
$|\psi|^2$ as the packet crosses the slit. Generalize the
1-d time-evolution equations above to 2 dimensions. Model the
slit with a potential wall:
$$V(x,y)=100 \quad \mathrm{for}\ x=10,\ |y|\geq 2.5.$$
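To make the interference fringes easier to see than in the full 2-d density plot, one can also slice $|\psi|^2$ along a vertical line some distance past the slit. This is only an illustrative sketch, assuming the 2-d arrays psi0_r, psi0_i and the grid arrays defined in the code above; the screen position at three quarters of the box is an arbitrary choice.
# extract the diffraction pattern along a column past the slit
ix_screen = int(0.75 * nx)
pattern = psi0_r[:, ix_screen]**2 + psi0_i[:, ix_screen]**2
pyplot.plot(y, pattern)
pyplot.xlabel('y')
pyplot.ylabel('probability density at the screen')
pyplot.show()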
End of explanation |
2,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenTire Moment Method Example
Draws a constant velocity force-moment diagram of a bicycle using the OpenTire library
Import OpenTire and other libraries used in this demonstration
Step1: Define the vehicle class
Step2: Define the Moment Method class
Step3: Run simulation | Python Code:
from opentire import OpenTire
from opentire.Core import TireState
import numpy as np
import matplotlib.pyplot as plt
Explanation: OpenTire Moment Method Example
Draws a constant velocity force-moment diagram of a bicycle using the OpenTire library
Import OpenTire and other libraries used in this demonstration
End of explanation
class Vehicle():
def __init__(self):
self._mass = 1000
self._wb = 1
self._wd = 0.5
self._ft = None
self._rt = None
@property
def mass(self):
return self._mass
@mass.setter
def mass(self, value):
if value <= 0:
raise ValueError('Mass must be greater than zero')
self._mass = value
@property
def wheelbase(self):
return self._wb
@wheelbase.setter
def wheelbase(self, value):
if value <= 0:
raise ValueError('Wheelbase must be greater than zero')
self._wb = value
@property
def weight_dist(self):
return self._wd
@weight_dist.setter
def weight_dist(self, value):
if value >= 1 or value <= 0:
raise ValueError('Weight distribution must be a ratio between 0 and 1')
self._wd = value
@property
def front_tire(self):
return self._ft
@front_tire.setter
def front_tire(self, value):
if not isinstance(value, TireModelBase) and value is not None:
raise TypeError('Front tire must be a OpenTire model')
self._ft = value
@property
def rear_tire(self):
return self._rt
    @rear_tire.setter
    def rear_tire(self, value):
        if not isinstance(value, TireModelBase) and value is not None:
            raise TypeError('Rear tire must be an OpenTire model')
        self._rt = value
@property
def length_a(self):
return self.wheelbase * (1 - self.weight_dist)
@property
def length_b(self):
return self.wheelbase * self.weight_dist
Explanation: Define the vehicle class
End of explanation
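As a quick, purely illustrative check (assuming only the Vehicle class defined above), the property setters can be exercised to confirm the validation logic:
v = Vehicle()
v.mass = 1250
v.weight_dist = 0.47
try:
    v.weight_dist = 1.5   # outside the (0, 1) ratio range
except ValueError as err:
    print('rejected as expected:', err)
print(v.length_a, v.length_b)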
class MomentMethodSolver():
def __init__(self):
self._vehicle = None
self._beta = np.linspace(-12, 12, 25) * 3.14 / 180
self._delta = np.linspace(-12, 12, 25) * 3.14 / 180
self._velocity = 36
@property
def vehicle(self):
return self._vehicle
@vehicle.setter
def vehicle(self, value):
if not isinstance(value, Vehicle) and value is not None:
raise TypeError('Solver vehicle must be of Vehicle type')
self._vehicle = value
@property
def beta_range(self):
return self._beta
@beta_range.setter
def beta_range(self, value):
self._beta = value
@property
def delta_range(self):
return self._delta
@delta_range.setter
def delta_range(self, value):
self._delta = value
@property
def velocity(self):
return self._velocity
@velocity.setter
def velocity(self, value):
self._velocity = value
def solve(self):
fy = np.empty([len(self.beta_range), len(self.delta_range)])
mz = np.empty([len(self.beta_range), len(self.delta_range)])
initial_guess = (0, 0)
for i, beta in enumerate(self.beta_range):
for j, delta in enumerate(self.delta_range):
# Use previous solution as a guess
if j > 0: initial_guess = self._invertSolution(fy[i][j-1], mz[i][j-1])
elif i > 0: initial_guess = self._invertSolution(fy[i-1][j], mz[i-1][j])
else: initial_guess = (0, 0)
result = self._solve(beta, delta, initial_guess)
fy[i][j] = result[0]
mz[i][j] = result[1]
return (fy, mz)
def _solve(self, beta, delta, initial_guess = (0, 0)):
state = TireState()
state['FZ'] = 1500
state['IA'] = 0.0
state['SR'] = 0.0
state['SA'] = 0.0
state['FY'] = 0.0
state['V'] = 10.0
state['P'] = 260000
MAX_ITER = 100
n = 0
error = 9999
tolerance = 0.1
yaw_velocity = 0
front_force = initial_guess[0]
rear_force = initial_guess[1]
while (n < MAX_ITER and abs(error) > tolerance):
# Yaw rate
yaw_velocity = (front_force + rear_force) / (self.vehicle.mass * self.velocity)
error = front_force + rear_force
# Slip Angles
sa_front = beta - delta + yaw_velocity * self.vehicle.length_a / self.velocity
sa_rear = beta - yaw_velocity * self.vehicle.length_b / self.velocity
# Front Tire
state['SA'] = sa_front
state['FZ'] = 0.5 * 9.81 * self.vehicle.mass * self.vehicle.weight_dist
self.vehicle.front_tire.solve(state)
front_force = state['FY']
state['SA'] = -sa_front
self.vehicle.front_tire.solve(state)
front_force -= state['FY']
# Rear Tire
state['SA'] = sa_rear
state['FZ'] = 0.5 * 9.81 * self.vehicle.mass * (1 - self.vehicle.weight_dist)
self.vehicle.rear_tire.solve(state)
rear_force = state['FY']
state['SA'] = -sa_rear
self.vehicle.rear_tire.solve(state)
rear_force -= state['FY']
error -= front_force + rear_force
n += 1
return (front_force + rear_force,
front_force * self.vehicle.length_a - rear_force * self.vehicle.length_b)
def _invertSolution(self, lateral_force, yaw_moment):
front_force = (1 / (self.vehicle.length_a + self.vehicle.length_b)) * (self.vehicle.length_b * lateral_force
+ yaw_moment)
rear_force = (1 / (self.vehicle.length_a + self.vehicle.length_b)) * (self.vehicle.length_a * lateral_force
- yaw_moment)
return (front_force, rear_force)
Explanation: Define the Moment Method class
End of explanation
openTire = OpenTire()
myVehicle = Vehicle()
myVehicle.mass = 1250 # kg
myVehicle.wheelbase = 2.4 # m
myVehicle.weight_dist = 0.47 # ratio
myVehicle.front_tire = openTire.createmodel('PAC2002')
myVehicle.rear_tire = openTire.createmodel('PAC2002')
solver = MomentMethodSolver()
solver.vehicle = myVehicle
solver.beta_range = np.linspace(-15, 16, 31) * 3.14 / 180
solver.delta_range = np.linspace(-15, 16, 31) * 3.14 / 180
solver.velocity = 70 / 3.6
force_moments = solver.solve()
lateral_accel = force_moments[0] / myVehicle.mass / 9.81
yaw_moment = force_moments[1]
plt.plot(lateral_accel[:][:], yaw_moment[:][:], color='black')
plt.plot(np.transpose(lateral_accel[:][:]), np.transpose(yaw_moment[:][:]), color='red')
plt.grid()
plt.xlabel("Lateral Acceleration [g]")
plt.ylabel("Yaw Moment [Nm]")
plt.show()
Explanation: Run simulation
End of explanation |
2,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parte III
Step1: 2. Entrenando un primer modelo (IV)
Step2: 2. Entrenando un primer modelo (V)
En este primer ejemplo ignoramos muchas cuestiones que se tienen que tomar en cuenta en proyectos reales
Los datos casi nunca están en un formato aceptable por lo que hay que cambiar formato, exportarlos a una base de datos, etc
Una vez que tenemos los datos en formato aceptable el siguiente paso importante es limpiarlos, por lo general los datos van a contener errores (mal formato, nombres de columnas incorrectos, errores de captura, etc).
Cuando tenemos los datos limpios necesitamos trabajar en otro problema
Step3: Vamos a descartar PassenderId pues no nos da información útil para el modelo (el dato es único para cada pasajero), vemos que también tenemos
También tenemos columnas con texto, a pesar de que podemos extraer información de ellas y usarlas en el modelo no lo haremos por simplicidad
Step4: 3.2 Entrenamiento
Step5: 3.3 Cross-validation
Los modelos de ML tienen hiperparámetros que se tienen que elegir manualmente
Debido a esto, podría darse el caso e que al estar optimizando dichos hiperparámetros causemos overfitting en el modelo por lo que nuestra estimación del error de generalización será muy optimista
Para resolver este problema se suelen dividir los datos en tres conjuntos
Step6: 3.4 Bias-Variance tradeoff
3.5 Entrando más modelos | Python Code:
# configure matplotlib for inline plots in Jupyter and import pandas
%matplotlib inline
import pandas as pd
# load the data into a pandas data frame
url = 'http://mlr.cs.umass.edu/ml/machine-learning-databases/iris/iris.data'
names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class']
df = pd.read_csv(url, names=names)
# take a look at the first few observations
df.head()
# extract the values from the data frame, this will return
# numpy arrays
X = df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']].values
y = df['class'].values
from sklearn.model_selection import train_test_split
# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_train, y_train)
Explanation: Part III: Supervised learning with scikit-learn
1. Introduction
We want to build a model from the training data that can generalize to data the model "has not seen"
If a model is able to make accurate predictions on unseen data, we say it has generalized from the training data
One way to estimate how well it will generalize is to split our dataset in two: one part for training (training set) and another for evaluation (test set)
1. Introduction (II)
Another factor to take into account is the complexity of the model (which depends on the hypothesis space $H$)
A very complex model is able to "memorize the data", so it will perform well on the training data but not on the evaluation data (this problem is known as overfitting)
On the other hand, a very simple model will not be able to capture all the information contained in the data, so its performance will be poor on both the training data and the evaluation data (underfitting)
2. Training a first model
We will use the iris dataset; building a classifier with this dataset is like the Hello World! of programming
The dataset contains 150 observations of 3 types of flowers, with height and width measurements for petal and sepal, as well as the class each flower belongs to (setosa, versicolour or virginica)
For more information about the data see: http://mlr.cs.umass.edu/ml/machine-learning-databases/iris/
2. Training a first model (II)
Source: http://blog.kaggle.com/2015/04/22/scikit-learn-video-3-machine-learning-first-steps-with-the-iris-dataset/
2. Training a first model (III)
End of explanation
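A simple way to look for the overfitting/underfitting behaviour described above is to compare the model's training accuracy against the test accuracy computed in the next cell; this is just an illustrative sketch reusing the clf, X_train and y_train objects defined above.
# a large gap between the two scores suggests overfitting;
# two similarly low scores suggest underfitting
print("Train set accuracy: {:.2f}".format(clf.score(X_train, y_train)))
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))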
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)
from sklearn_evaluation import ClassifierEvaluator
ce = ClassifierEvaluator(clf, y_test, y_pred, y_score,
df.columns,
['setosa', 'versicolor', 'virginica'])
cm = ce.confusion_matrix
ce.precision_recall
Explanation: 2. Training a first model (IV)
End of explanation
df = pd.read_csv("titanic.csv")
df.head()
# We have some missing values
df.isnull().sum()
Explanation: 2. Training a first model (V)
In this first example we ignored many issues that have to be taken into account in real projects
Data are almost never in an acceptable format, so we have to convert formats, export them to a database, etc.
Once the data are in an acceptable format, the next important step is cleaning them; in general the data will contain errors (bad formatting, incorrect column names, data-entry mistakes, etc.)
Once the data are clean we need to deal with another problem: missing values. It is normal for our data to have missing values, and there are many techniques for handling them, from dropping rows/columns with missing values to algorithms that fill them in; some ML algorithms can work with missing data but others cannot
Once the data are clean and complete, a preprocessing step is usually applied; among the most common operations is rescaling the data, which helps certain algorithms perform better
3. A slightly harder example
We will now look at another supervised learning example with data that more closely resembles what we would find in a real project
We will work with the Titanic data available at https://www.kaggle.com/c/titanic
3.1 Data
In this dataset we have observations about some of the Titanic passengers, including a variable that indicates whether the passenger survived or not
We will use the data to model survival given the information we have about each passenger
More information about the data is available at https://www.kaggle.com/c/titanic/data
3.2 Data cleaning
End of explanation
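The rescaling step mentioned above is not carried out in this notebook; as a minimal sketch (assuming train/test feature matrices such as the X_train and X_test built later in this section), it would typically look like the following with scikit-learn's StandardScaler, fitting on the training data only to avoid leaking information from the test set.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)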
# Keep only a subset of the columns
df = df[['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Embarked']]
df.head()
# We now have to deal with the missing values in Embarked and Age
# for Embarked (a categorical variable) we will impute the mode
# for Age we will impute the mean
df.loc[df.Age.isnull(), 'Age'] = df.Age.mean()
df.Embarked.value_counts()
df.loc[df.Embarked.isnull(), 'Embarked'] = 'S'
# Now that we have complete data we need to convert the variables stored as strings into numbers,
# since many algorithms only accept this kind of input; in our case we have to convert Sex and Embarked
print('Sex values: ', df.Sex.unique())
print('Embarked values: ', df.Embarked.unique())
df.Sex.replace({'male': 0, 'female': 1}, inplace=True)
df.Embarked.replace({'S': 0, 'C': 1, 'Q': 2}, inplace=True)
print('Sex values: ', df.Sex.unique())
print('Embarked values: ', df.Embarked.unique())
Explanation: We will discard PassengerId since it does not give the model any useful information (the value is unique for each passenger).
We also have text columns; even though we could extract information from them and use it in the model, we will not do so here for simplicity
End of explanation
X = df[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Embarked']].values
y = df['Survived'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7)
clf = LogisticRegression()
clf.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))
Explanation: 3.2 Training
End of explanation
from sklearn.model_selection import cross_val_score
clf = LogisticRegression()
scores = cross_val_score(clf, X_train, y_train, cv=5)
scores
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Explanation: 3.3 Cross-validation
ML models have hyperparameters that have to be chosen manually
Because of this, while optimizing those hyperparameters we may end up overfitting the model, so our estimate of the generalization error will be overly optimistic
To solve this problem the data are usually split into three sets: training, validation and test
The algorithm is trained on the training data, the error is estimated on the validation set, and once the final model has been selected, the generalization error is estimated on the test set
3.3 Cross-validation (II)
Unfortunately, by making these three partitions we reduce the number of observations available for training the model
One solution to this problem is known as cross-validation (CV)
When using CV a test set is still used for the final evaluation, but the validation set is no longer needed
3.3 Cross-validation (III)
In one of its variants, known as k-fold CV, the following steps are performed:
Split the training set into k parts
Train the model on k-1 parts
Validate the model on the part that was not used for training
Repeat the process until the model has been validated on all k parts, and report the average of the metric we are using to evaluate the model
End of explanation
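Since cross-validation is motivated above as a tool for choosing hyperparameters, a natural companion is a grid search over, say, the regularization strength of the logistic regression. This is only an illustrative sketch reusing X_train and y_train; the grid values are arbitrary.
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)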
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier, ExtraTreesClassifier
Explanation: 3.4 Bias-Variance tradeoff
3.5 Training more models
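The ensemble classifiers imported above are not fitted in this notebook; a minimal, illustrative way to compare them against the logistic regression baseline is to run the same 5-fold cross-validation on each, reusing X_train and y_train from the Titanic example.
models = {
    'logistic': LogisticRegression(),
    'random_forest': RandomForestClassifier(n_estimators=100),
    'extra_trees': ExtraTreesClassifier(n_estimators=100),
    'adaboost': AdaBoostClassifier(),
    'bagging': BaggingClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print('{}: {:.3f} (+/- {:.3f})'.format(name, scores.mean(), scores.std() * 2))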
End of explanation |
2,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Calibrated-Recommendations" data-toc-modified-id="Calibrated-Recommendations-1"><span class="toc-item-num">1 </span>Calibrated Recommendations</a></span><ul class="toc-item"><li><span><a href="#Preparation" data-toc-modified-id="Preparation-1.1"><span class="toc-item-num">1.1 </span>Preparation</a></span></li><li><span><a href="#Deep-Dive-Into-Calibrated-Recommendation" data-toc-modified-id="Deep-Dive-Into-Calibrated-Recommendation-1.2"><span class="toc-item-num">1.2 </span>Deep Dive Into Calibrated Recommendation</a></span><ul class="toc-item"><li><span><a href="#Calibration-Metric" data-toc-modified-id="Calibration-Metric-1.2.1"><span class="toc-item-num">1.2.1 </span>Calibration Metric</a></span></li><li><span><a href="#Generating-Calibrated-Recommendations" data-toc-modified-id="Generating-Calibrated-Recommendations-1.2.2"><span class="toc-item-num">1.2.2 </span>Generating Calibrated Recommendations</a></span></li></ul></li><li><span><a href="#End-Note" data-toc-modified-id="End-Note-1.3"><span class="toc-item-num">1.3 </span>End Note</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step3: Calibrated Recommendations
When a user has watched, say, 70% romance movies and 30% action movies in the past, then it is reasonable to expect the personalized list of recommended movies to be comprised of 70% romance and 30% action movies as well since we would like to cover the user's diverse set of interests. A recommendation that actually reflect most if not all of the user's interest is considered a Calibrated Recommendation. But the question is, does our recommendation exhibit this trait?
Recommendation algorithm provides personalized user experience past on the user's past historical interaction with the product/system/website. However, when serving the recommendation such as recommendation top 10 movies that we think the user might be interested in, a recommendation engine that is solely measured based on ranking metrics can easily generate recommendations that focus on the main area of interests, resulting the user's other area of interest to be under-represented, or worse, absent in the final recommendation.
To hit the notion home, using the example above, given a user that has watched 70% romance movies and 30% action movies, if we were to solely measure the metric based on precision, we can say we can achieve the best performance by predicting the majority genre, i.e. we will recommend 100% romance movies and we can expect the user to interact with those recommendations 70% of the time. On other other hand, if we were to recommend 70% romance movies and 30% action movies, then we would expect our recommendation to only be correct 0.7 * 0.7 + 0.3 * 0.3 = 58% of the time.
Throughout the rest of this notebook, we will take a look at if the phenomenon of crowding out user's sub-interest occurs with our recommendation, develop a quantitative metric to measure how severe this issue is and implement a post-preprocessing logic that is agnostic of the underlying recommendation algorithm we decided to use to ensure the recommendation becomes more calibrated.
Preparation
We'll be using the publicly available movielens-20m dataset throughout this experiment. We can download it via the following link. There's multiple data under that folder, we can select download all to make things easier.
The algorithm we will be using to generate our recommendation is Bayesian Personalized Ranking, which is a matrix factorization based collaborative filtering algorithm. Readers don't need to be acquainted with this model per se to continue with this notebook as the discussion is model-agnostic and we'll be explaining the syntax. That said, this link contains some resources on this algorithm if it is of interest.
Preparation steps in the next few code chunks involve the following steps
Step4: Given this dataframe we will use the userId, movieId and rating to construct a sparse matrix, perform the random train/test split (we can split based on the time if preferred) and feed the training set into a collaborative filtering based algorithm to train the model, so we can generate item recommendations for users.
Step5: we will look at a precision_at_k metric just to make sure our recommender is reasonable, feel free to tune the model's hyperparameter to squeeze out performance, but that is not the focus here.
Step6: Deep Dive Into Calibrated Recommendation
We will take the first user as an example to see whether our recommendations are calibrated or not. Once we're familiar with the procedure for one user, we can repeat the process for all of the users if we'd like to.
Let's start of by defining the problem. We are given the distribution genres $g$ for each movie $i$, $p(g|i)$, what we are interested is whether $p(g|u)$ is similar to $q(g|u)$. Where
Step7: For the same user, we can use the .recommend method to recommend the topn recommendation for him/her, note that we also passed in the original sparse matrix, and by default, the items/movies that the user has already played will be filtered from the list (controlled by a filter_already_liked_items argument, which defaults to True).
Step9: The next code chunk defines a function to obtain the genre distribution for a list of items. Given that we now have the list of interacted items and recommended items, we can pass it to the function to obtain the two genre distributions.
Step11: Calibration Metric
Looking at the results above, we can see that according to $p(g|u)$, the user has interacted with genres such as War, Western, however, they are nowhere to be seen in the topn recommendation to the user, hence we can argue based on the output that our recommendation might not be that well calibrated to the user's past interaction.
To scale this type of comparison, we'll now define our calibration metric $C$. There are various methods to compare whether two distributions are similar to each other, and one popular choice is KL-divergence.
\begin{align}
C(p,q) = D_{KL}(p || q) = \sum_{g} p(g|u) \cdot \log \frac{p(g|u)}{\tilde{q}(g|u)}
\end{align}
The denominator in the formula should be $q(g|u)$, but given that the formula would be undefined if $q(g|u) = 0$ and $p(g|u) > 0$ for a genre $g$. We instead use
Step15: Generating Calibrated Recommendations
Being able to compute the calibration metric between $p(g|u)$ and $q(g|u)$ is all well and good, but how can we generate a recommendation list that is more calibrated becomes the next important and interesting question.
Different recommendation algorithm's objective function might be completely different, thus instead of going to hard-route of incorporating it into the objective function right off the bat and spend two weeks writing the customized algorithm in an efficient manner, we will start with an alternative approach of re-ranking the predicted list of a recommender system in a post-processing step.
To determine the optimal set $I^*$ of $N$ recommended items, we'll be using maximum marginal relevance.
\begin{align}
I^* = \underset{I, |I|=N}{\text{argmax}} \; (1 - \lambda) \cdot s(I) - \lambda \cdot C(p, q(I))
\end{align}
Where
$s(i)$ is the score of the items $i \in I$ predicted by the recommender system and $s(I) = \sum_{i \in I} s(i)$, i.e. the sum of all the items' score in the recommendation list.
$\lambda \in [0, 1]$ is a tuning parameter that determines the trade-off between the score generated by the recommender and the calibration score, notice that since the calibration score is measured by KL-divergence, which is a metric that's the lower the better we use its negative in the maximization formula.
Finding the optimal set $I^*$ is a combinatorial optimization problem and can be a topic by itself. We won't do a deep dive into it, but instead leverage a popular greedy submodular optimization to solve this problem. The process is as follows
Step16: In the code chunk above, we turned the $\lambda$ knob extremely high to generate the most calibrated recommendation list possible. Let's now compare the calibrated recommendation (which only optimizes for score, $s$), the original recommendation and the user's interaction distribution.
Step17: Printing out the genre distribution from the calibrated recommendation list shows that this list covers more genre and its distribution closely resembles the distribution of the user's past historical interaction and our quantitative calibration metric, KL-divergence also confirms this. i.e. the calibrated recommendation's KL-divergence is lower than the original recommendation's score.
Thankfully from the results above, it seems that the re-ranked recommendation list that aims to maximize calibration score does in fact generate a more calibrated list. But the question is at what cost? Does other ranking metrics that recommender system often optimize for drop? Let's take a look at precision_at_k. Here the number for k is the topn parameter that we've defined earlier. i.e. the number of recommendations to generate for the user.
Step18: Well ..., it's not a surprise that the calibrated recommendation list's precision score is a bit disappointing compared to the original recommendation. But let's see if we try a different value of $\lambda$, this time turning it down a bit to strike a balance between calibration and precision.
Step19: Well, well, well. It turns out calibration can be improved considerably while accuracy is reduced only slightly if we find the sweet spot for the tuning parameter $\lambda$.
The following code chunk curates all the code to generate the calibrated recommendation, the original recommendation and compare it with the user's historical interaction in one place for ease of tracking the flow. This process is outlined for 1 user, feel free to modify the code to perform this comparison across all users and due to the randomness in the recommendation algorithm, the results might differ across runs, but the underlying trend should remain the same. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix
from implicit.bpr import BayesianPersonalizedRanking
from implicit.evaluation import train_test_split, precision_at_k
%watermark -a 'Ethen' -d -t -v -p scipy,numpy,pandas,matplotlib,implicit
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Calibrated-Recommendations" data-toc-modified-id="Calibrated-Recommendations-1"><span class="toc-item-num">1 </span>Calibrated Recommendations</a></span><ul class="toc-item"><li><span><a href="#Preparation" data-toc-modified-id="Preparation-1.1"><span class="toc-item-num">1.1 </span>Preparation</a></span></li><li><span><a href="#Deep-Dive-Into-Calibrated-Recommendation" data-toc-modified-id="Deep-Dive-Into-Calibrated-Recommendation-1.2"><span class="toc-item-num">1.2 </span>Deep Dive Into Calibrated Recommendation</a></span><ul class="toc-item"><li><span><a href="#Calibration-Metric" data-toc-modified-id="Calibration-Metric-1.2.1"><span class="toc-item-num">1.2.1 </span>Calibration Metric</a></span></li><li><span><a href="#Generating-Calibrated-Recommendations" data-toc-modified-id="Generating-Calibrated-Recommendations-1.2.2"><span class="toc-item-num">1.2.2 </span>Generating Calibrated Recommendations</a></span></li></ul></li><li><span><a href="#End-Note" data-toc-modified-id="End-Note-1.3"><span class="toc-item-num">1.3 </span>End Note</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
data_dir = 'movielens-20m-dataset'
# we are working with movie data, but we'll name
# the movie as item to make it more generic to
# all use-cases
user_col = 'userId'
item_col = 'movieId'
value_col = 'rating'
time_col = 'timestamp'
rating_path = os.path.join(data_dir, 'rating.csv')
df_raw = pd.read_csv(rating_path)
print('dimension: ', df_raw.shape)
df_raw.head()
title_col = 'title'
genre_col = 'genres'
item_info_path = os.path.join(data_dir, 'movie.csv')
df_item = pd.read_csv(item_info_path)
df_item = df_item[df_item[genre_col] != '(no genres listed)']
print('dimension: ', df_item.shape)
df_item.head()
class Item:
Data holder for our item.
Parameters
----------
id : int
title : str
genre : dict[str, float]
The item/movie's genre distribution, where the key
represents the genre and value corresponds to the
ratio of that genre.
score : float
Score for the item, potentially generated by some
recommendation algorithm.
def __init__(self, _id, title, genres, score=None):
self.id = _id
self.title = title
self.score = score
self.genres = genres
def __repr__(self):
return self.title
def create_item_mapping(df_item, item_col, title_col, genre_col):
Create a dictionary of item id to Item lookup.
item_mapping = {}
for row in df_item.itertuples():
item_id = getattr(row, item_col)
item_title = getattr(row, title_col)
item_genre = getattr(row, genre_col)
splitted = item_genre.split('|')
genre_ratio = 1. / len(splitted)
item_genre = {genre: genre_ratio for genre in splitted}
item = Item(item_id, item_title, item_genre)
item_mapping[item_id] = item
return item_mapping
item_mapping = create_item_mapping(df_item, item_col, title_col, genre_col)
item_mapping[1]
# convert to implicit feedback data and filter out
# movies that doesn't have any genre
df_rating = df_raw[df_raw[value_col] >= 4.0].copy()
df_rating = df_rating.merge(df_item, on=item_col)
for col in (user_col, item_col):
df_rating[col] = df_rating[col].astype('category')
# the original id are converted to indices to create
# the sparse matrix, so we keep track of the mappings here
# e.g. a userId 1 will correspond to index 0 in our sparse matrix
index2user = df_rating[user_col].cat.categories
index2item = df_rating[item_col].cat.categories
print('dimension: ', df_rating.shape)
df_rating.head()
Explanation: Calibrated Recommendations
When a user has watched, say, 70% romance movies and 30% action movies in the past, then it is reasonable to expect the personalized list of recommended movies to be comprised of 70% romance and 30% action movies as well since we would like to cover the user's diverse set of interests. A recommendation that actually reflect most if not all of the user's interest is considered a Calibrated Recommendation. But the question is, does our recommendation exhibit this trait?
Recommendation algorithms provide a personalized user experience based on the user's past historical interaction with the product/system/website. However, when serving the recommendation, such as recommending the top 10 movies that we think the user might be interested in, a recommendation engine that is solely measured on ranking metrics can easily generate recommendations that focus on the main area of interest, causing the user's other areas of interest to be under-represented, or worse, absent from the final recommendation.
To hit the notion home, using the example above, given a user that has watched 70% romance movies and 30% action movies, if we were to solely measure the metric based on precision, we could achieve the best performance by predicting the majority genre, i.e. we would recommend 100% romance movies and could expect the user to interact with those recommendations 70% of the time. On the other hand, if we were to recommend 70% romance movies and 30% action movies, then we would expect our recommendation to only be correct 0.7 * 0.7 + 0.3 * 0.3 = 58% of the time.
Throughout the rest of this notebook, we will take a look at if the phenomenon of crowding out user's sub-interest occurs with our recommendation, develop a quantitative metric to measure how severe this issue is and implement a post-preprocessing logic that is agnostic of the underlying recommendation algorithm we decided to use to ensure the recommendation becomes more calibrated.
Preparation
We'll be using the publicly available movielens-20m dataset throughout this experiment. We can download it via the following link. There's multiple data under that folder, we can select download all to make things easier.
The algorithm we will be using to generate our recommendation is Bayesian Personalized Ranking, which is a matrix factorization based collaborative filtering algorithm. Readers don't need to be acquainted with this model per se to continue with this notebook as the discussion is model-agnostic and we'll be explaining the syntax. That said, this link contains some resources on this algorithm if it is of interest.
Preparation steps in the next few code chunks involve the following steps:
- rating.csv contains user's rating for each movie. Here, we will focus on implicit data, and follow the usual procedure of simulating binary implicit feedback data (i.e. whether the user enjoyed the movie) by retaining only ratings of 4 stars and higher, while dropping lower ratings.
- movie.csv contains each movies genre tag. We will also eliminate movies that had no genre information attached and create a mapping that stores each movies' genre distribution. In this dataset, each movie $i$ typically has several genres $g$ associated with it, thus we assign equal probabilities $p(g|i)$ to each genre such that $\sum_g p(g|i) = 1$ for each movie $i$. This genre distribution will play a strong role in determining whether our recommendation is well calibrated or not.
End of explanation
def create_user_item_csr_matrix(data, user_col, item_col, value_col):
rows = data[user_col].cat.codes
cols = data[item_col].cat.codes
values = data[value_col].astype(np.float32)
return csr_matrix((values, (rows, cols)))
user_item = create_user_item_csr_matrix(df_rating, user_col, item_col, value_col)
user_item
np.random.seed(1234)
user_item_train, user_item_test = train_test_split(user_item, train_percentage=0.8)
user_item_train
user_item_test
# the model expects item-user sparse matrix,
# i.e. the rows represents item and the column
# represents users
np.random.seed(1234)
bpr = BayesianPersonalizedRanking(iterations=70)
bpr.fit(user_item_train.T.tocsr())
Explanation: Given this dataframe we will use the userId, movieId and rating to construct a sparse matrix, perform the random train/test split (we can split based on the time if preferred) and feed the training set into a collaborative filtering based algorithm to train the model, so we can generate item recommendations for users.
End of explanation
precision = precision_at_k(bpr, user_item_train, user_item_test, K=10)
precision
Explanation: we will look at a precision_at_k metric just to make sure our recommender is reasonable, feel free to tune the model's hyperparameter to squeeze out performance, but that is not the focus here.
End of explanation
# look a the first user
user_id = 0
# find the index that the user interacted with,
# we can then map this to a list of Item, note that we need to first
# map the recommended index to the actual itemId/movieId first
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_items[:10]
Explanation: Deep Dive Into Calibrated Recommendation
We will take the first user as an example to see whether our recommendations are calibrated or not. Once we're familiar with the procedure for one user, we can repeat the process for all of the users if we'd like to.
Let's start off by defining the problem. We are given the distribution over genres $g$ for each movie $i$, $p(g|i)$; what we are interested in is whether $p(g|u)$ is similar to $q(g|u)$. Where:
$p(g|u)$ is the distribution over genre $g$ of the set of movies $H$ played by user $u$ in the past.
\begin{align}
p(g|u) = \sum_{i \in H} p(g|i)
\end{align}
$q(g|u)$ is the distribution over genre $g$ of the set of movies $I$ we recommended to user $u$.
\begin{align}
q(g|u) = \sum_{i \in I} p(g|i)
\end{align}
For these distributions, we can have a weighted version if we'd like to get sophisticated, e.g. $p(g|i)$ can be weighted by recency, saying something like the item/movie interaction matters more if it's a more recent interaction, indicating that item/movie's genre should also be weighted more, but let's not go there yet.
Let's first look at some code to generate this information.
End of explanation
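As a purely illustrative sketch of the recency-weighted variant mentioned above (the weighting scheme itself is an assumption, not something defined in this notebook), the genre distribution could take a per-item weight, for example one that decays with the age of the interaction:
def compute_weighted_genre_distr(items, weights):
    # same idea as compute_genre_distr below, but each item contributes
    # proportionally to its weight instead of equally
    distr = {}
    for item, weight in zip(items, weights):
        for genre, score in item.genres.items():
            distr[genre] = distr.get(genre, 0.) + weight * score
    total = sum(distr.values())
    return {genre: round(score / total, 3) for genre, score in distr.items()}

# e.g. weights = np.exp(-age_in_days / half_life) for some chosen half_life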
# it returns the recommended index and their corresponding score
topn = 20
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco[:10]
# map the index to Item
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_items[:10]
Explanation: For the same user, we can use the .recommend method to recommend the topn recommendation for him/her, note that we also passed in the original sparse matrix, and by default, the items/movies that the user has already played will be filtered from the list (controlled by a filter_already_liked_items argument, which defaults to True).
End of explanation
def compute_genre_distr(items):
Compute the genre distribution for a given list of Items.
distr = {}
for item in items:
for genre, score in item.genres.items():
genre_score = distr.get(genre, 0.)
distr[genre] = genre_score + score
# we normalize the summed up probability so it sums up to 1
# and round it to three decimal places, adding more precision
# doesn't add much value and clutters the output
for item, genre_score in distr.items():
normed_genre_score = round(genre_score / len(items), 3)
distr[item] = normed_genre_score
return distr
# we can check that the probability does in fact add up to 1
# np.array(list(interacted_distr.values())).sum()
interacted_distr = compute_genre_distr(interacted_items)
interacted_distr
reco_distr = compute_genre_distr(reco_items)
reco_distr
# change default style figure and font size
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['font.size'] = 12
def distr_comparison_plot(interacted_distr, reco_distr, width=0.3):
# the value will automatically be converted to a column with the
# column name of '0'
interacted = pd.DataFrame.from_dict(interacted_distr, orient='index')
reco = pd.DataFrame.from_dict(reco_distr, orient='index')
df = interacted.join(reco, how='outer', lsuffix='_interacted')
n = df.shape[0]
index = np.arange(n)
plt.barh(index, df['0_interacted'], height=width, label='interacted distr')
plt.barh(index + width, df['0'], height=width, label='reco distr')
plt.yticks(index, df.index)
plt.legend(bbox_to_anchor=(1, 0.5))
plt.title('Genre Distribution between User Historical Interaction v.s. Recommendation')
plt.ylabel('Genre')
plt.show()
distr_comparison_plot(interacted_distr, reco_distr)
Explanation: The next code chunk defines a function to obtain the genre distribution for a list of items. Given that we now have the list of interacted items and recommended items, we can pass it to the function to obtain the two genre distributions.
End of explanation
def compute_kl_divergence(interacted_distr, reco_distr, alpha=0.01):
KL (p || q), the lower the better.
alpha is not really a tuning parameter, it's just there to make the
computation more numerically stable.
kl_div = 0.
for genre, score in interacted_distr.items():
reco_score = reco_distr.get(genre, 0.)
reco_score = (1 - alpha) * reco_score + alpha * score
kl_div += score * np.log2(score / reco_score)
return kl_div
compute_kl_divergence(interacted_distr, reco_distr)
Explanation: Calibration Metric
Looking at the results above, we can see that according to $p(g|u)$, the user has interacted with genres such as War, Western, however, they are nowhere to be seen in the topn recommendation to the user, hence we can argue based on the output that our recommendation might not be that well calibrated to the user's past interaction.
To scale this type of comparison, we'll now define our calibration metric $C$. There are various methods to compare whether two distributions are similar to each other, and one popular choice is KL-divergence.
\begin{align}
C(p,q) = D_{KL}(p || q) = \sum_{g} p(g|u) \cdot \log \frac{p(g|u)}{\tilde{q}(g|u)}
\end{align}
The denominator in the formula should be $q(g|u)$, but given that the formula would be undefined if $q(g|u) = 0$ and $p(g|u) > 0$ for a genre $g$, we instead use:
\begin{align}
\tilde{q}(g|u) = (1 - \alpha) \cdot q(g|u) + \alpha \cdot p(g|u)
\end{align}
with a small $\alpha$ such as 0.01, so that $q(g|u) \approx \tilde{q}(g|u)$.
End of explanation
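KL-divergence is only one choice here; as an illustrative alternative (not used elsewhere in this notebook), a symmetric and bounded measure such as the Hellinger distance can be computed from the same two genre dictionaries, which can make scores easier to compare across users.
def compute_hellinger_distance(interacted_distr, reco_distr):
    # symmetric distance between two discrete distributions, bounded in [0, 1]
    genres = set(interacted_distr) | set(reco_distr)
    total = 0.0
    for genre in genres:
        p = interacted_distr.get(genre, 0.)
        q = reco_distr.get(genre, 0.)
        total += (np.sqrt(p) - np.sqrt(q)) ** 2
    return np.sqrt(total / 2.)

compute_hellinger_distance(interacted_distr, reco_distr)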
def generate_item_candidates(model, user_item, user_id, index2item, item_mapping,
filter_already_liked_items=True):
For a given user, generate the list of items that we can recommend, during this
step, we will also attach the recommender's score to each item.
n_items = user_item.shape[1]
# this is how implicit's matrix factorization generates
# the scores for each item for a given user, modify this
# part of the logic if we were to use a completely different
# algorithm to generate the ranked items
user_factor = model.user_factors[user_id]
scores = model.item_factors.dot(user_factor)
liked = set()
if filter_already_liked_items:
liked = set(user_item[user_id].indices)
item_ids = set(np.arange(n_items))
item_ids -= liked
items = []
for item_id in item_ids:
item = item_mapping[index2item[item_id]]
item.score = scores[item_id]
items.append(item)
return items
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
print('number of item candidates:', len(items))
items[:5]
def compute_utility(reco_items, interacted_distr, lmbda=0.5):
Our objective function for computing the utility score for
the list of recommended items.
lmbda : float, 0.0 ~ 1.0, default 0.5
Lambda term controls the score and calibration tradeoff,
the higher the lambda the higher the resulting recommendation
will be calibrated. Lambda is keyword in Python, so it's
lmbda instead ^^
reco_distr = compute_genre_distr(reco_items)
kl_div = compute_kl_divergence(interacted_distr, reco_distr)
total_score = 0.0
for item in reco_items:
total_score += item.score
# kl divergence is the lower the better, while score is
# the higher the better so remember to negate it in the calculation
utility = (1 - lmbda) * total_score - lmbda * kl_div
return utility
def calib_recommend(items, interacted_distr, topn, lmbda=0.5):
start with an empty recommendation list,
loop over the topn cardinality, during each iteration
update the list with the item that maximizes the utility function.
calib_reco = []
for _ in range(topn):
max_utility = -np.inf
for item in items:
if item in calib_reco:
continue
utility = compute_utility(calib_reco + [item], interacted_distr, lmbda)
if utility > max_utility:
max_utility = utility
best_item = item
calib_reco.append(best_item)
return calib_reco
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.99)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
Explanation: Generating Calibrated Recommendations
Being able to compute the calibration metric between $p(g|u)$ and $q(g|u)$ is all well and good, but how to generate a recommendation list that is more calibrated becomes the next important and interesting question.
Different recommendation algorithms' objective functions might be completely different, thus instead of going the hard route of incorporating calibration into the objective function right off the bat and spending two weeks writing a customized algorithm in an efficient manner, we will start with an alternative approach: re-ranking the predicted list of a recommender system in a post-processing step.
To determine the optimal set $I^*$ of $N$ recommended items, we'll be using maximum marginal relevance.
\begin{align}
I^* = \underset{I, |I|=N}{\text{argmax}} \; (1 - \lambda) \cdot s(I) - \lambda \cdot C(p, q(I))
\end{align}
Where
$s(i)$ is the score of the items $i \in I$ predicted by the recommender system and $s(I) = \sum_{i \in I} s(i)$, i.e. the sum of all the items' score in the recommendation list.
$\lambda \in [0, 1]$ is a tuning parameter that determines the trade-off between the score generated by the recommender and the calibration score, notice that since the calibration score is measured by KL-divergence, which is a metric that's the lower the better we use its negative in the maximization formula.
Finding the optimal set $I^*$ is a combinatorial optimization problem and can be a topic by itself. We won't do a deep dive into it, but instead leverage a popular greedy submodular optimization to solve this problem. The process is as follows:
We start out with the empty set.
Iteratively append one item $i$ at a time, and at step $n$, when we already have the set $I_{n-1}$ comprised of $n - 1$ items, the item $i$ that maximizes the objective function defined above for the set $I_{n-1} \cup {i}$ is added to obtain $I_n$
Repeat the process to generate the full $I^*$ of size $N$.
From a theoretical standpoint, this greedy procedure guarantees a solution whose score is within a factor of $1 - 1/e \approx 0.63$ of the optimal set.
With this information at hand, let's look at the implementation part:
End of explanation
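One practical note: each greedy step above scans every remaining candidate, so the cost grows with the catalog size. A common shortcut, shown here only as an illustrative sketch (the cutoff of 500 is arbitrary), is to re-rank just the highest-scored candidates:
top_candidates = sorted(items, key=lambda item: item.score, reverse=True)[:500]
calib_reco_fast = calib_recommend(top_candidates, interacted_distr, topn, lmbda=0.99)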
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
print('\noriginal reco kl-divergence score:', reco_kl_div)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
distr_comparison_plot(interacted_distr, calib_reco_distr)
Explanation: In the code chunk above, we turned the $\lambda$ knob extremely high to generate the most calibrated recommendation list possible. Let's now compare the calibrated recommendation, the original recommendation (which only optimizes for the score, $s$) and the user's interaction distribution.
End of explanation
def precision(user_item, user_id, reco_items, index2item):
indptr = user_item.indptr
indices = user_item.indices
reco_ids = {item.id for item in reco_items}
likes = {index2item[indices[i]] for i in range(indptr[user_id], indptr[user_id + 1])}
relevant = len(reco_ids & likes)
total = min(len(reco_items), len(likes))
return relevant / total
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('original reco precision score:', reco_precision)
print('calibrated reco precision score:', calib_reco_precision)
Explanation: Printing out the genre distribution from the calibrated recommendation list shows that this list covers more genres and that its distribution closely resembles the distribution of the user's past historical interaction; our quantitative calibration metric, KL-divergence, also confirms this, i.e. the calibrated recommendation's KL-divergence is lower than the original recommendation's score.
Thankfully, from the results above it seems that the re-ranked recommendation list that aims to maximize the calibration score does in fact generate a more calibrated list. But the question is at what cost? Do other ranking metrics that recommender systems often optimize for drop? Let's take a look at precision_at_k. Here the number for k is the topn parameter that we've defined earlier, i.e. the number of recommendations to generate for the user.
End of explanation
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.5)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
calib_reco_distr = compute_genre_distr(calib_reco_items)
distr_comparison_plot(interacted_distr, calib_reco_distr)
Explanation: Well ..., it's not a surprise that the calibrated recommendation list's precision score is a bit disappointing compared to the original recommendation. But let's see if we try a different value of $\lambda$, this time turning it down a bit to strike a balance between calibration and precision.
End of explanation
topn = 20
user_id = 0
lmbda = 0.99
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_distr = compute_genre_distr(reco_items)
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_distr = compute_genre_distr(interacted_items)
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda)
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
distr_comparison_plot(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
print('original reco kl-divergence score:', reco_kl_div)
print('original reco precision score:', reco_precision)
distr_comparison_plot(interacted_distr, reco_distr)
Explanation: Well, well, well. It turns out calibration can be improved considerably while accuracy is reduced only slightly if we find the sweet spot for the tuning parameter $\lambda$.
The following code chunk curates all the code to generate the calibrated recommendation and the original recommendation, and compares them with the user's historical interaction in one place, for ease of tracking the flow. This process is outlined for 1 user; a rough sketch of how to repeat the comparison across all users follows below. Due to the randomness in the recommendation algorithm, the results might differ across runs, but the underlying trend should remain the same.
End of explanation |
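For reference, here is a rough sketch of what the across-all-users comparison could look like, averaging both metrics over every user. It re-uses the helpers defined earlier (bpr.recommend, generate_item_candidates, calib_recommend, compute_genre_distr, compute_kl_divergence, precision); the user loop, the skip conditions and the averaging are the only additions, and looping over every user can be slow.
```python
import numpy as np

n_users = user_item_train.shape[0]
reco_scores, calib_scores = [], []
for uid in range(n_users):
    # skip users without training history or without test-set interactions
    interacted_ids = user_item_train[uid].nonzero()[1]
    if len(interacted_ids) == 0 or user_item_test[uid].nnz == 0:
        continue

    interacted = [item_mapping[index2item[i]] for i in interacted_ids]
    hist_distr = compute_genre_distr(interacted)

    reco = bpr.recommend(uid, user_item_train, N=topn)
    reco_list = [item_mapping[index2item[i]] for i, _ in reco]
    candidates = generate_item_candidates(bpr, user_item_train, uid, index2item, item_mapping)
    calib_list = calib_recommend(candidates, hist_distr, topn, lmbda=lmbda)

    reco_scores.append((compute_kl_divergence(hist_distr, compute_genre_distr(reco_list)),
                        precision(user_item_test, uid, reco_list, index2item)))
    calib_scores.append((compute_kl_divergence(hist_distr, compute_genre_distr(calib_list)),
                         precision(user_item_test, uid, calib_list, index2item)))

print('original   (kl, precision):', np.mean(reco_scores, axis=0))
print('calibrated (kl, precision):', np.mean(calib_scores, axis=0))
```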
2,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The Lines object provides the following features
Step1: Basic Line Chart
Step2: We can explore the different attributes by changing each of them for the plot above
Step3: In a similar way, we can also change any attribute after the plot has been displayed to change the plot. Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.
Step4: While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the markers attribute comes in.
Step5: The marker attribute accepts the values square, circle, cross, diamond, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks.
Plotting a Time-Series
The DateScale allows us to plot time series as a Lines plot conveniently with most date formats.
Step6: Plotting multiple sets of data
The Lines mark allows the user to plot multiple y-values for a single x-value. This can be done by passing an ndarray or a list of the different y-values as the y-attribute of the Lines as shown below.
Step7: We pass each data set as an element of a list
Step8: Similarly, we can also pass multiple x-values for multiple sets of y-values
Step9: Coloring Lines according to data
The color attribute of a Lines mark can also be used to encode one more dimension of data. Suppose we have a portfolio of securities and we would like to color them based on whether we have bought or sold them. We can use the color attribute to encode this information.
Step10: We can also reset the colors of the Line to their defaults by setting the color attribute to None.
Step11: Patches
The fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill | Python Code:
import numpy as np
from pandas import date_range
import bqplot.pyplot as plt
from bqplot import *
security_1 = np.cumsum(np.random.randn(150)) + 100.
security_2 = np.cumsum(np.random.randn(150)) + 100.
Explanation: Introduction
The Lines object provides the following features:
Ability to plot a single set or multiple sets of y-values as a function of a set or multiple sets of x-values
Ability to style the line object in different ways, by setting different attributes such as the colors, line_style, stroke_width etc.
Ability to specify a marker at each point passed to the line. The marker can be a shape which is at the data points between which the line is interpolated and can be set through the markers attribute
The Lines object has the following attributes
| Attribute | Description | Default Value |
|:-:|---|:-:|
| colors | Sets the color of each line, takes as input a list of any RGB, HEX, or HTML color name | CATEGORY10 |
| opacities | Controls the opacity of each line, takes as input a real number between 0 and 1 | 1.0 |
| stroke_width | Real number which sets the width of all paths | 2.0 |
| line_style | Specifies whether a line is solid, dashed, dotted or both dashed and dotted | 'solid' |
| interpolation | Sets the type of interpolation between two points | 'linear' |
| marker | Specifies the shape of the marker inserted at each data point | None |
| marker_size | Controls the size of the marker, takes as input a non-negative integer | 64 |
|close_path| Controls whether to close the paths or not | False |
|fill| Specifies in which way the paths are filled. Can be set to one of {'none', 'bottom', 'top', 'inside'}| None |
|fill_colors| List that specifies the fill colors of each path | [] |
| Data Attribute | Description | Default Value |
|:-:|---|:-:|
|x |abscissas of the data points | array([]) |
|y |ordinates of the data points | array([]) |
|color | Data according to which the Lines will be colored. Setting it to None defaults the choice of colors to the colors attribute | None |
pyplot's plot method can be used to plot lines with meaningful defaults
End of explanation
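For comparison with the pyplot API used throughout this notebook, here is a minimal sketch that builds a similar chart with bqplot's Object Model, exercising several of the Lines attributes listed in the table above. The data is just random noise generated for this example.
```python
import numpy as np
from bqplot import LinearScale, Axis, Lines, Figure

xs, ys = LinearScale(), LinearScale()
y_data = np.cumsum(np.random.randn(150)) + 100.

line = Lines(x=np.arange(len(y_data)), y=y_data,
             scales={'x': xs, 'y': ys},
             colors=['DarkOrange'],   # one entry per line
             line_style='dashed',
             stroke_width=1.5,
             marker='circle',
             marker_size=32)
ax_x = Axis(scale=xs, label='Index')
ax_y = Axis(scale=ys, orientation='vertical', label='Price')
Figure(marks=[line], axes=[ax_x, ax_y], title='Lines via the Object Model')
```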
fig = plt.figure(title='Security 1')
axes_options = {'x': {'label': 'Index'}, 'y': {'label': 'Price'}}
# x values default to range of values when not specified
line = plt.plot(security_1, axes_options=axes_options)
fig
Explanation: Basic Line Chart
End of explanation
line.colors = ['DarkOrange']
Explanation: We can explore the different attributes by changing each of them for the plot above:
End of explanation
# The opacity allows us to display the Line while featuring other Marks that may be on the Figure
line.opacities = [.5]
line.stroke_width = 2.5
line.line_style = 'dashed'
line.interpolation = 'basis'
Explanation: In a similar way, we can also change any attribute after the plot has been displayed to change the plot. Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.
End of explanation
line.marker = 'triangle-down'
Explanation: While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the markers attribute comes in.
End of explanation
# Here we define the dates we would like to use
dates = date_range(start='01-01-2007', periods=150)
fig = plt.figure(title='Time Series')
axes_options = {'x': {'label': 'Date'}, 'y': {'label': 'Security 1'}}
time_series = plt.plot(dates, security_1,
axes_options=axes_options)
fig
Explanation: The marker attributes accepts the values square, circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks.
Plotting a Time-Series
The DateScale allows us to plot time series as a Lines plot conveniently with most date formats.
End of explanation
dates_new = date_range(start='06-01-2007', periods=150)
Explanation: Plotting multiple sets of data
The Lines mark allows the user to plot multiple y-values for a single x-value. This can be done by passing an ndarray or a list of the different y-values as the y-attribute of the Lines as shown below.
End of explanation
fig = plt.figure()
axes_options = {'x': {'label': 'Date'}, 'y': {'label': 'Price'}}
line = plt.plot(dates, [security_1, security_2],
labels=['Security 1', 'Security 2'],
axes_options=axes_options,
display_legend=True)
fig
Explanation: We pass each data set as an element of a list
End of explanation
line.x, line.y = [dates, dates_new], [security_1, security_2]
Explanation: Similarly, we can also pass multiple x-values for multiple sets of y-values
End of explanation
fig = plt.figure()
axes_options = {'x': {'label': 'Date'},
'y': {'label': 'Security 1'},
'color' : {'visible': False}}
# add a custom color scale to color the lines
plt.scales(scales={'color': ColorScale(colors=['Red', 'Green'])})
dates_color = date_range(start='06-01-2007', periods=150)
securities = 100. + np.cumsum(np.random.randn(150, 10), axis=0)
# we generate 10 random price series and 10 random positions
positions = np.random.randint(0, 2, size=10)
# We pass the color scale and the color data to the plot method
line = plt.plot(dates_color, securities.T, color=positions,
axes_options=axes_options)
fig
Explanation: Coloring Lines according to data
The color attribute of a Lines mark can also be used to encode one more dimension of data. Suppose we have a portfolio of securities and we would like to color them based on whether we have bought or sold them. We can use the color attribute to encode this information.
End of explanation
line.color = None
Explanation: We can also reset the colors of the Line to their defaults by setting the color attribute to None.
End of explanation
fig = plt.figure(animation_duration=1000)
patch = plt.plot([],[],
fill_colors=['orange', 'blue', 'red'],
fill='inside',
axes_options={'x': {'visible': False}, 'y': {'visible': False}},
stroke_width=10,
close_path=True,
display_legend=True)
patch.x = [[0, 2, 1.2], [0.5, 2.5, 1.7], [4, 5, 6, 6, 5, 4, 3]]
patch.y = [[0, 0, 1], [0.5, 0.5, -0.5], [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0]]
fig
patch.fill = 'top'
patch.fill = 'bottom'
patch.opacities = [0.1, 0.2]
patch.x = [[2, 3, 3.2], [0.5, 2.5, 1.7], [4,5,6, 6, 5, 4, 3]]
patch.close_path = False
Explanation: Patches
The fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill
End of explanation |
2,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Zipline beginner tutorial
Basics
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at
Step1: As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in zipline.api. Here we are using order() which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, order() will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on order(), see the Quantopian docs.
Finally, the record() function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself
Step2: Note that you have to omit the preceding '!' when you call run_algo.py, this is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (-f) as well as parameters specifying which stock data to load from Yahoo! finance and the time-range (--start and --end). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the --output flag and will cause it to write the performance DataFrame in the pickle Python file format. Note that you can also define a configuration file with these parameters that you can then conveniently pass to the -c option so that you don't have to supply the command line args all the time (see the .conf files in the examples directory).
Thus, to execute our algorithm from above and save the results to buyapple_out.pickle we would call run_algo.py as follows
Step3: run_algo.py first outputs the algorithm contents. It then fetches historical price and volume data of Apple from Yahoo! finance in the desired time range, calls the initialize() function, and then streams the historical stock price day-by-day through handle_data(). After each call to handle_data() we instruct zipline to order 10 stocks of AAPL. After the call of the order() function, zipline enters the ordered stock and amount in the order book. After the handle_data() function has finished, zipline looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that zipline uses, see the Quantopian docs for more information).
Note that there is also an analyze() function printed. run_algo.py will try and look for a file with the ending with _analyze.py and the same name of the algorithm (so buyapple_analyze.py) or an analyze() function directly in the script. If an analyze() function is found it will be called after the simulation has finished and passed in the performance DataFrame. (The reason for allowing specification of an analyze() function in a separate file is that this way buyapple.py remains a valid Quantopian algorithm that you can copy&paste to the platform).
Lets take a quick look at the performance DataFrame. For this, we use pandas from inside the IPython Notebook and print the first ten rows. Note that zipline makes heavy usage of pandas, especially for data input and outputting so it's worth spending some time to learn it.
Step4: As you can see, there is a row for each trading day, starting on the first business day of 2000. In the columns you can find various information about the state of your algorithm. The very first column AAPL was placed there by the record() function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
Step5: As you can see, our algorithm performance as assessed by the portfolio_value closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
IPython Notebook
The IPython Notebook is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers zipline provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let zipline know that it is supposed to run this algorithm. This is done via the %%zipline IPython magic command that is available after you run %load_ext zipline in a separate cell. This magic takes the same arguments as the command line interface described above.
Step6: Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there.
Step7: Using Custom Bundles (Advanced)
If you want to use your own custom data bundles using yahoo finance data, you'll first need to find where your zipline root directory is.
Step8: Once we've found where our root directory is, we'll want to add a file called extension.py to the zipline root directory, which is where we will register our custom yahoo bundle. We'll edit that file with the following code
Step9: Now we'll check that our bundle was created by running zipline bundles, and then ingest our bundle for usage using zipline ingest.
Step10: And now we can re-run the code we wrote above using our custom yahoo bundle.
Step11: If you want to use your own custom bundle, that doesn't use yahoo finance data, check out our docs on writing a new bundle
Access to previous prices using data.history()
Working example | Python Code:
# assuming you're running this notebook in zipline/docs/notebooks
import os
if os.name == 'nt':
# windows doesn't have the cat command, but uses 'type' similarly
! type "..\..\zipline\examples\buyapple.py"
else:
! cat ../../zipline/examples/buyapple.py
Explanation: Zipline beginner tutorial
Basics
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at: https://github.com/quantopian/zipline
Some benefits include:
Realistic: slippage, transaction costs, order delays.
Stream-based: Process each event individually, avoids look-ahead bias.
Batteries included: Common transforms (moving average) as well as common risk calculations (Sharpe).
Developed and continuously updated by Quantopian which provides an easy-to-use web-interface to Zipline, 10 years of minute-resolution historical US stock data, and live-trading capabilities. This tutorial is directed at users wishing to use Zipline without using Quantopian. If you instead want to get started on Quantopian, see here.
This tutorial assumes that you have zipline correctly installed, see the installation instructions if you haven't set up zipline yet.
Every zipline algorithm consists of two functions you have to define:
* initialize(context)
* handle_data(context, data)
Before the start of the algorithm, zipline calls the initialize() function and passes in a context variable. context is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next.
After the algorithm has been initialized, zipline calls the handle_data() function once for each event. At every call, it passes the same context variable and an event-frame called data containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each stock in your universe. For more information on these functions, see the relevant part of the Quantopian docs.
My first algorithm
Lets take a look at a very simple algorithm from the examples directory, buyapple.py:
End of explanation
!zipline run --help
Explanation: As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in zipline.api. Here we are using order() which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, order() will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on order(), see the Quantopian docs.
Finally, the record() function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself: varname=var. After the algorithm finished running you will have access to each variable value you tracked with record() under the name you provided (we will see this further below). You also see how we can access the current price data of the AAPL stock in the data event frame (for more information see here.
Running the algorithm
To now test this algorithm on financial data, zipline provides two interfaces. A command-line interface and an IPython Notebook interface.
Command line interface
After you installed zipline you should be able to execute the following from your command line (e.g. cmd.exe on Windows, or the Terminal app on OSX):
End of explanation
!zipline run -f ../../zipline/examples/buyapple.py --start 2000-1-1 --end 2014-1-1 -o buyapple_out.pickle
Explanation: Note that you have to omit the preceding '!' when you call run_algo.py, this is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (-f) as well as parameters specifying which stock data to load from Yahoo! finance and the time-range (--start and --end). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the --output flag and will cause it to write the performance DataFrame in the pickle Python file format. Note that you can also define a configuration file with these parameters that you can then conveniently pass to the -c option so that you don't have to supply the command line args all the time (see the .conf files in the examples directory).
Thus, to execute our algorithm from above and save the results to buyapple_out.pickle we would call run_algo.py as follows:
End of explanation
import pandas as pd
perf = pd.read_pickle('buyapple_out.pickle') # read in perf DataFrame
perf.head()
Explanation: run_algo.py first outputs the algorithm contents. It then fetches historical price and volume data of Apple from Yahoo! finance in the desired time range, calls the initialize() function, and then streams the historical stock price day-by-day through handle_data(). After each call to handle_data() we instruct zipline to order 10 stocks of AAPL. After the call of the order() function, zipline enters the ordered stock and amount in the order book. After the handle_data() function has finished, zipline looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that zipline uses, see the Quantopian docs for more information).
Note that there is also an analyze() function printed. run_algo.py will try and look for a file with the ending with _analyze.py and the same name of the algorithm (so buyapple_analyze.py) or an analyze() function directly in the script. If an analyze() function is found it will be called after the simulation has finished and passed in the performance DataFrame. (The reason for allowing specification of an analyze() function in a separate file is that this way buyapple.py remains a valid Quantopian algorithm that you can copy&paste to the platform).
Lets take a quick look at the performance DataFrame. For this, we use pandas from inside the IPython Notebook and print the first ten rows. Note that zipline makes heavy usage of pandas, especially for data input and outputting so it's worth spending some time to learn it.
End of explanation
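As a small illustration of the commission and slippage point above, both models can be overridden inside initialize(). The sketch below shows the general pattern; the specific classes and parameter values are illustrative assumptions that may vary between zipline versions, so check the docs for the exact API of your release.
```python
from zipline.api import order, record, set_commission, set_slippage, symbol
from zipline.finance import commission, slippage

def initialize(context):
    context.asset = symbol('AAPL')
    # charge $0.0075 per share with a $1 minimum per trade (illustrative numbers)
    set_commission(commission.PerShare(cost=0.0075, min_trade_cost=1.0))
    # fill at most 2.5% of a bar's volume and apply a price-impact penalty
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025, price_impact=0.1))

def handle_data(context, data):
    order(context.asset, 10)
    record(AAPL=data.current(context.asset, 'price'))
```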
%pylab inline
figsize(12, 12)
import matplotlib.pyplot as plt
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value')
ax2 = plt.subplot(212, sharex=ax1)
perf.AAPL.plot(ax=ax2)
ax2.set_ylabel('AAPL stock price')
Explanation: As you can see, there is a row for each trading day, starting on the first business day of 2000. In the columns you can find various information about the state of your algorithm. The very first column AAPL was placed there by the record() function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
End of explanation
%load_ext zipline
%%zipline --start 2000-1-1 --end 2014-1-1 -o perf_ipython.pickle
from zipline.api import symbol, order, record
def initialize(context):
context.asset = symbol('AAPL')
def handle_data(context, data):
order(context.asset, 10)
record(AAPL=data.current(context.asset, 'price'))
Explanation: As you can see, our algorithm performance as assessed by the portfolio_value closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
IPython Notebook
The IPython Notebook is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers zipline provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let zipline know that it is supposed to run this algorithm. This is done via the %%zipline IPython magic command that is available after you run %load_ext zipline in a separate cell. This magic takes the same arguments as the command line interface described above.
End of explanation
pd.read_pickle('perf_ipython.pickle').head()
Explanation: Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there.
End of explanation
from zipline.utils.paths import zipline_root
root = zipline_root()
Explanation: Using Custom Bundles (Advanced)
If you want to use your own custom data bundles using yahoo finance data, you'll first need to find where your zipline root directory is.
End of explanation
ext_path = os.path.join(root, "extension.py")
%%writefile {ext_path}
from zipline.data.bundles import register, yahoo_equities
equities = (
'AAPL',
'IBM',
'MSFT',
)
register(
'my-bundle', # you can use any name you want
yahoo_equities(equities),
)
Explanation: Once we've found where our root directory is, we'll want to add a file called extension.py to the zipline root directory, which is where we will register our custom yahoo bundle. We'll edit that file with the following code:
End of explanation
! zipline bundles
! zipline ingest -b my-bundle
Explanation: Now we'll check that our bundle was created by running zipline bundles, and then ingest our bundle for usage using zipline ingest.
End of explanation
%%zipline --bundle my-bundle --start 2000-1-1 --end 2014-1-1 -o custom_bundle
from zipline.api import symbol, order, record
def initialize(context):
context.asset = symbol('AAPL')
def handle_data(context, data):
order(context.asset, 10)
record(AAPL=data.current(context.asset, 'price'))
Explanation: And now we can re-run the code we wrote above using our custom yahoo bundle.
End of explanation
%%zipline --start 2000-1-1 --end 2014-1-1 -o perf_dma.pickle
from zipline.api import order_target, record, symbol
import numpy as np
import matplotlib.pyplot as plt
def initialize(context):
context.i = 0
context.asset = symbol('AAPL')
def handle_data(context, data):
# Skip first 300 days to get full windows
context.i += 1
if context.i < 300:
return
# Compute averages
# data.history() has to be called with the same params
# from above and returns a pandas dataframe.
short_mavg = data.history(context.asset, 'price', bar_count=100, frequency="1d").mean()
long_mavg = data.history(context.asset, 'price', bar_count=300, frequency="1d").mean()
# Trading logic
if short_mavg > long_mavg:
# order_target orders as many shares as needed to
# achieve the desired number of shares.
order_target(context.asset, 100)
elif short_mavg < long_mavg:
order_target(context.asset, 0)
# Save values for later inspection
record(AAPL=data.current(context.asset, 'price'),
short_mavg=short_mavg,
long_mavg=long_mavg)
def analyze(context, perf):
fig = plt.figure()
ax1 = fig.add_subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value in $')
ax1.set_xlabel('time in years')
ax2 = fig.add_subplot(212)
perf['AAPL'].plot(ax=ax2)
perf[['short_mavg', 'long_mavg']].plot(ax=ax2)
perf_trans = perf.ix[[t != [] for t in perf.transactions]]
buys = perf_trans.ix[[t[0]['amount'] > 0 for t in perf_trans.transactions]]
sells = perf_trans.ix[[t[0]['amount'] < 0 for t in perf_trans.transactions]]
ax2.plot(buys.index, perf.short_mavg.ix[buys.index], '^', markersize=10, color='m')
ax2.plot(sells.index, perf.short_mavg.ix[sells.index],'v', markersize=10, color='k')
ax2.set_ylabel('price in $')
ax2.set_xlabel('time in years')
plt.legend(loc=0)
plt.show()
Explanation: If you want to use your own custom bundle, that doesn't use yahoo finance data, check out our docs on writing a new bundle
Access to previous prices using data.history()
Working example: Dual Moving Average Cross-Over
The Dual Moving Average (DMA) is a classic momentum strategy. It's probably not used by any serious trader anymore but is still very instructive. The basic idea is that we compute two rolling or moving averages (mavg) -- one with a longer window that is supposed to capture long-term trends and one shorter window that is supposed to capture short-term trends. Once the short-mavg crosses the long-mavg from below we assume that the stock price has upwards momentum and long the stock. If the short-mavg crosses from above we exit the positions as we assume the stock to go down further.
As we need to have access to previous prices to implement this strategy we need a new concept: History
data.history() is a convenience function that keeps a rolling window of data for you. The first argument is the asset or iterable of assets you're using, the second argument is the field you're looking for, i.e. price, open, volume, the third argument is the number of bars, and the fourth argument is your frequency (either '1d' or '1m', but note that you need to have minute-level data for using 1m).
For a more detailed description of data.history()'s features, see the Quantopian docs. Let's look at the strategy which should make this clear:
End of explanation |
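To make the data.history() call pattern above concrete, here is a small hedged snippet. It has to run inside handle_data during a simulation, and the exact shape of the returned objects (a Series for a single asset, a DataFrame with one column per asset for a list of assets) is worth verifying against your zipline version.
```python
from zipline.api import record, symbol

def initialize(context):
    context.assets = [symbol('AAPL'), symbol('MSFT')]

def handle_data(context, data):
    # single asset + single field -> a pandas Series of the last 20 daily prices
    prices = data.history(context.assets[0], 'price', bar_count=20, frequency='1d')
    # list of assets + single field -> a DataFrame with one column per asset
    panel = data.history(context.assets, 'price', bar_count=20, frequency='1d')
    record(AAPL_mavg=prices.mean(), n_assets=panel.shape[1])
```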
2,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow
Step1: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
Step2: Writing and running programs in TensorFlow has the following steps
Step3: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step4: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
Step6: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step15: Expected Output
Step16: Change the index below and run the cell to visualize some examples in the dataset.
Step17: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
Step19: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise
Step21: Expected Output
Step23: Expected Output
Step25: Expected Output
Step27: Expected Output
Step28: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
Step29: Expected Output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
Explanation: TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
Initialize variables
Start your own session
Train algorithms
Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
1 - Exploring the Tensorflow Library
To start, you will import the library:
End of explanation
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
Explanation: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
End of explanation
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
Explanation: Writing and running programs in TensorFlow has the following steps:
Create Tensors (variables) that are not yet executed/evaluated.
Write operations between those Tensors.
Initialize your Tensors.
Create a Session.
Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.
Now let us look at an easy example. Run the cell below:
End of explanation
sess = tf.Session()
print(sess.run(c))
Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
End of explanation
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
Explanation: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
End of explanation
# GRADED FUNCTION: linear_function
def linear_function():
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
Explanation: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
1.1 - Linear function
Lets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector.
Exercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
End of explanation
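One possible way to fill in the body of linear_function() above is sketched below. It relies on the numpy and tensorflow imports at the top of the notebook, and the exact numbers you get depend on the order in which the np.random.randn calls are made, so they may not match the expected output byte-for-byte.
```python
# sketch of the missing lines in linear_function()
X = tf.constant(np.random.randn(3, 1), name="X")
W = tf.constant(np.random.randn(4, 3), name="W")
b = tf.constant(np.random.randn(4, 1), name="b")
Y = tf.add(tf.matmul(W, X), b)   # Y = WX + b

sess = tf.Session()
result = sess.run(Y)
```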
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
### START CODE HERE ### ( approx. 4 lines of code)
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
Explanation: Expected Output :
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise lets compute the sigmoid function of an input.
You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.
Exercise : Implement the sigmoid function below. You should use the following:
tf.placeholder(tf.float32, name = "...")
tf.sigmoid(...)
sess.run(..., feed_dict = {x: z})
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
```
End of explanation
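A sketch of how the placeholder approach described above can fill in sigmoid(); it assumes the local argument z from the function signature, and uses Method 2 with a with-block so the session closes itself.
```python
# sketch of the missing lines in sigmoid()
x = tf.placeholder(tf.float32, name="x")
sigmoid = tf.sigmoid(x)

with tf.Session() as sess:
    result = sess.run(sigmoid, feed_dict={x: z})
```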
# GRADED FUNCTION: cost
def cost(logits, labels):
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
### START CODE HERE ###
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
cost = cost(logits, np.array([0, 0, 1, 1]))
print ("cost = " + str(cost))
Explanation: Expected Output :
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
Exercise: Implement the cross entropy loss. The function you will use is:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)
Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
End of explanation
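As a sketch, the body of cost() only needs placeholders for z and y plus the one-line cross-entropy call described above (logits and labels here are the function's own arguments):
```python
# sketch of the missing lines in cost()
z = tf.placeholder(tf.float32, name="z")
y = tf.placeholder(tf.float32, name="y")
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

sess = tf.Session()
cost = sess.run(cost, feed_dict={z: logits, y: labels})
sess.close()
```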
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
### START CODE HERE ###
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C=4)
print ("one_hot = " + str(one_hot))
Explanation: Expected Output :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
tf.one_hot(labels, depth, axis)
Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.
End of explanation
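A sketch of one way to complete one_hot_matrix() with tf.one_hot; axis=0 puts the classes along the rows, matching the figure above.
```python
# sketch of the missing lines in one_hot_matrix()
C = tf.constant(C, name="C")
one_hot_matrix = tf.one_hot(labels, C, axis=0)

sess = tf.Session()
one_hot = sess.run(one_hot_matrix)
sess.close()
```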
# GRADED FUNCTION: ones
def ones(shape):
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
### START CODE HERE ###
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
Explanation: Expected Output:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
Exercise: Implement the function below to take in a shape and to return an array (of the shape's dimension of ones).
tf.ones(shape)
End of explanation
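The corresponding sketch for ones() is a single graph op plus a session run:
```python
# sketch of the missing lines in ones()
ones = tf.ones(shape)

sess = tf.Session()
ones = sess.run(ones)
sess.close()
```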
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
Create the computation graph
Run the graph
Let's delve into the problem you'd like to solve!
2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
End of explanation
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: Change the index below and run the cell to visualize some examples in the dataset.
End of explanation
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten / 255.
X_test = X_test_flatten / 255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print("number of training examples = " + str(X_train.shape[1]))
print("number of test examples = " + str(X_test.shape[1]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
Explanation: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
    - You will use None because it lets us be flexible on the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
### START CODE HERE ### (approx. 2 lines)
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))
Explanation: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise: Implement the function below to create the placeholders in tensorflow.
End of explanation
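A sketch of the two placeholder definitions create_placeholders() needs, with None for the example dimension as the docstring suggests:
```python
# sketch of the missing lines in create_placeholders()
X = tf.placeholder(tf.float32, shape=[n_x, None], name="X")
Y = tf.placeholder(tf.float32, shape=[n_y, None], name="Y")
```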
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
Exercise: Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use seed = 1 to make sure your results match ours.
End of explanation
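Extending the W1/b1 example above to all six parameters gives a sketch like the following (Xavier initialization for the weights, zeros for the biases):
```python
# sketch of the missing lines in initialize_parameters()
W1 = tf.get_variable("W1", [25, 12288], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())
W2 = tf.get_variable("W2", [12, 25], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b2 = tf.get_variable("b2", [12, 1], initializer=tf.zeros_initializer())
W3 = tf.get_variable("W3", [6, 12], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b3 = tf.get_variable("b3", [6, 1], initializer=tf.zeros_initializer())
```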
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
tf.add(...,...) to do an addition
tf.matmul(...,...) to do a matrix multiplication
tf.nn.relu(...) to apply the ReLU activation
Question: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3!
End of explanation
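A sketch of the five forward-propagation lines; note that it stops at Z3, as explained above, because the softmax is folded into the loss function.
```python
# sketch of the missing lines in forward_propagation()
Z1 = tf.add(tf.matmul(W1, X), b1)    # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1)                  # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2)   # Z2 = np.dot(W2, A1) + b2
A2 = tf.nn.relu(Z2)                  # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3)   # Z3 = np.dot(W3, A2) + b3
```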
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
Explanation: Expected Output:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
Question: Implement the cost function below.
- It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, tf.reduce_mean basically does the summation over the examples.
End of explanation
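The missing line in compute_cost() is exactly the one-liner quoted above, applied to the transposed tensors:
```python
# sketch of the missing line in compute_cost()
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
```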
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
To make the optimization you would do:
python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.
Note When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable).
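Since the model() exercise above asks for an AdamOptimizer rather than gradient descent, the analogous one-liner (following the same pattern as the gradient descent example) would be:
python
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)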
2.6 - Building the model
Now, you will bring it all together!
Exercise: Implement the model. You will be calling the functions you had previously implemented.
End of explanation
parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
End of explanation
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64, 64)).reshape((1, 64 * 64 * 3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
Explanation: Expected Output:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
Insights:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
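Regarding the first insight above (adding L2 regularization), one possible hedged sketch in TF1 is to add a weight penalty to the cost; the names W1/W2/W3 are assumed to be the weight tensors returned by the earlier initialize_parameters() and l2_lambda is a hypothetical strength, not part of the original assignment:
python
# Hedged sketch -- W1/W2/W3 are assumed weight tensors; l2_lambda is a hypothetical value
l2_lambda = 0.01
l2_penalty = l2_lambda * (tf.nn.l2_loss(parameters['W1']) +
                          tf.nn.l2_loss(parameters['W2']) +
                          tf.nn.l2_loss(parameters['W3']))
cost_with_l2 = cost + l2_penalty   # minimize this instead of the plain cost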
2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation |
2,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Software Engineering for Data Scientists
Manipulating Data with Python
CSE 583
Today's Objectives
1. Opening & Navigating the Jupyter Notebook
2. Simple Math in the Jupyter Notebook
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas & matplotlib
1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course
Step1: uncomment this to download the data
Step2: Loading Data with Pandas
Step3: Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern
Step4: Now we can use the read_csv command to read the comma-separated-value data
Step5: Note
Step6: The shape attribute shows us the number of elements
Step7: The columns attribute gives us the column names
Step8: The index attribute gives us the index names
Step9: The dtypes attribute gives the data types of each column
Step10: 4. Manipulating data with pandas
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing
Step11: Mathematical operations on columns happen element-wise
Step12: Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
Step13: Note that this manipulation only modifies the data frame in memory; if you want to save the modified dataframe to CSV, you can use the to_csv() method
Step14: More complicated mathematical operations can be done with tools in the numpy package
Step15: Working with Times
One trick to know when working with columns of times is that Pandas DateTimeIndex provides a nice interface for working with columns of times.
For a dataset of this size, using pd.to_datetime and specifying the date format can make things much faster (from the strftime reference, we see that the pronto data has format "%m/%d/%Y %I
Step16: (Note
Step17: Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.
Step18: Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The pandas.value_counts returns statistics on the unique values within each column.
We can use it, for example, to break down rides by gender
Step19: Or to break down rides by age
Step20: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off
Step21: We can explore other things as well
Step22: Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook)
Step23: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
for example, we can group by gender and find the average of all numerical columns
Step24: It's also possible to index the grouped object like it is a dataframe
Step25: You can even group by multiple values
Step26: The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns
Step27: 5. Visualizing data with pandas
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots
Step28: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data
Step29: Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the 'ggplot' style.
I like the 'seaborn' style
Step30: Other plot types
Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method
Step31: If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object
Step32: Breakout
Step33: Split this plot by gender. Do you see any seasonal ridership patterns by gender?
Step34: Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
Step35: Repeat the above three steps, counting the number of rides by time of day rather thatn by month. | Python Code:
!ls
Explanation: Software Engineering for Data Scientists
Manipulating Data with Python
CSE 583
Today's Objectives
1. Opening & Navigating the Jupyter Notebook
2. Simple Math in the Jupyter Notebook
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas & matplotlib
1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course: the Jupyter Notebook.
We will walk through the following steps together:
Download miniconda (be sure to get Version 3.6) and install it on your system (hopefully you have done this before coming to class)
Use the conda command-line tool to update your package listing and install the IPython notebook:
Update conda's listing of packages for your system:
$ conda update conda
Install IPython notebook and all its requirements
$ conda install jupyter notebook
Navigate to the directory containing the course material. For example:
$ cd ~/courses/CSE583/
You should see a number of files in the directory, including these:
$ ls
...
Breakout-Simple-Math.ipynb
Lecture-Python-And-Data.ipynb
...
Type jupyter notebook in the terminal to start the notebook
$ jupyter notebook
If everything has worked correctly, it should automatically launch your default browser
Click on Lecture-Python-And-Data.ipynb to open the notebook containing the content for this lecture.
With that, you're set up to use the Jupyter notebook!
2. Simple Math in the Jupyter Notebook
Now that we have the Jupyter notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.
Please open Breakout-Simple-Math.ipynb, find a partner, and make your way through that notebook, typing and executing code along the way.
3. Loading data with pandas
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
Python's Data Science Ecosystem
In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are:
numpy: Numerical Python
Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data.
If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.
scipy: Scientific Python
Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.
We will not look closely at Scipy today, but we will use its functionality later in the course.
pandas: Labeled Data Manipulation in Python
Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame.
If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.
matplotlib: Visualization in Python
Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).
Installing Pandas & friends
Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run
$ conda install numpy scipy pandas matplotlib
and (so long as your conda setup is working) the packages will be downloaded and installed on your system.
Downloading the data
shell commands can be run from the notebook by preceding them with an exclamation point:
End of explanation
# !curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
Explanation: uncomment this to download the data:
End of explanation
import pandas
Explanation: Loading Data with Pandas
End of explanation
import pandas as pd
Explanation: Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern:
End of explanation
data = pd.read_csv('pronto.csv')
Explanation: Now we can use the read_csv command to read the comma-separated-value data:
End of explanation
data.head()
data.tail()
Explanation: Note: strings in Python can be defined either with double quotes or single quotes
Viewing Pandas Dataframes
The head() and tail() methods show us the first and last rows of the data
End of explanation
data.shape
Explanation: The shape attribute shows us the number of elements:
End of explanation
data.columns
Explanation: The columns attribute gives us the column names
End of explanation
data.index
Explanation: The index attribute gives us the index names
End of explanation
data.dtypes
Explanation: The dtypes attribute gives the data types of each column:
End of explanation
data.columns
data['tripduration']
Explanation: 4. Manipulating data with pandas
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing:
End of explanation
data['tripduration'] / 60
Explanation: Mathematical operations on columns happen element-wise:
End of explanation
data['tripminutes'] = data['tripduration'] / 60
data.head()
Explanation: Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
End of explanation
data.to_csv('pronto-new.csv')
!ls
Explanation: Note that this manipulation only modifies the data frame in memory; if you want to save the modified dataframe to CSV, you can use the to_csv() method:
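One common tweak (not shown in the original notebook) is to skip writing the row index, which to_csv supports via its index keyword:
python
data.to_csv('pronto-new.csv', index=False)   # omit the DataFrame index column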
End of explanation
import numpy as np
np.exp(data['tripminutes'])
Explanation: More complicated mathematical operations can be done with tools in the numpy package:
End of explanation
data['starttime'].head()
pd.to_datetime(data['starttime'].head())
times = pd.DatetimeIndex(data['starttime'].head())
times.dayofweek
data['starttime']
times = pd.DatetimeIndex(pd.to_datetime(data['starttime'], format="%m/%d/%Y %I:%M:%S %p"))
Explanation: Working with Times
One trick to know when working with columns of times is that Pandas DateTimeIndex provides a nice interface for working with columns of times.
For a dataset of this size, using pd.to_datetime and specifying the date format can make things much faster (from the strftime reference, we see that the pronto data has format "%m/%d/%Y %I:%M:%S %p"
End of explanation
times.dayofweek
times.month
times
Explanation: (Note: you can also use infer_datetime_format=True in most cases to automatically infer the correct format, though due to a bug it doesn't work when AM/PM are present)
With it, we can extract the hour of the day, the day of the week, the month, and a wide range of other views of the time:
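For instance, the hour of the day (which the breakout exercises below rely on) comes straight from the same index:
python
times.hour   # integer hour of the day (0-23) for each ride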
End of explanation
data.head()
Explanation: Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.
End of explanation
pd.value_counts(data['gender'])
Explanation: Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The pandas.value_counts returns statistics on the unique values within each column.
We can use it, for example, to break down rides by gender:
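As an aside (not in the original notebook), value_counts can also report proportions instead of raw counts via its normalize flag:
python
pd.value_counts(data['gender'], normalize=True)   # fraction of rides per gender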
End of explanation
pd.value_counts(data['birthyear'])
Explanation: Or to break down rides by age:
End of explanation
pd.value_counts(data['birthyear'], sort=False)
Explanation: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off:
End of explanation
pd.value_counts(times.dayofweek, sort=False)
pd.value_counts(times.month, sort=False)
pd.value_counts(data['gender'], dropna=False)
Explanation: We can explore other things as well: day of week, hour of day, etc.
End of explanation
from IPython.display import Image
Image('split_apply_combine.png')
Explanation: Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook)
End of explanation
data.groupby('gender').count()
data.groupby('gender').mean()
Explanation: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
for example, we can group by gender and find the average of all numerical columns:
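As another illustration of the same pattern (not part of the original notebook), the median trip length per user type would be:
python
data.groupby('usertype')['tripminutes'].median()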
End of explanation
data.groupby('gender')['tripminutes'].mean()
Explanation: It's also possible to index the grouped object like it is a dataframe:
End of explanation
data.groupby([times.hour, 'gender'])['tripminutes'].mean()
Explanation: You can even group by multiple values: for example we can look at the trip duration by time of day and by gender:
End of explanation
grouped = data.groupby([times.hour, 'gender'])['tripminutes'].mean().unstack()
grouped
Explanation: The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns:
End of explanation
%matplotlib inline
Explanation: 5. Visualizing data with pandas
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots:
End of explanation
grouped.plot()
Explanation: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:
End of explanation
import matplotlib.pyplot as plt
plt.style.use('seaborn')
grouped.plot()
Explanation: Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the 'ggplot' style.
I like the 'seaborn' style:
End of explanation
grouped.plot.bar()
Explanation: Other plot types
Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method:
For example, we can create a histogram of trip durations:
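The text mentions a histogram of trip durations; one way to draw it (a sketch, not the original cell, using the tripminutes column created earlier) is:
python
data['tripminutes'].plot.hist(bins=50)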
End of explanation
ax = grouped.plot.bar()
ax.set_xlim(-1, 10)
Explanation: If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object:
End of explanation
data['month'] = times.month
ax = data.groupby('month')['trip_id'].count().plot.bar();
Explanation: Breakout: Exploring the Data
Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group).
End of explanation
data.groupby(['month','gender'])['trip_id'].count().unstack().plot();
Explanation: Split this plot by gender. Do you see any seasonal ridership patterns by gender?
End of explanation
data.groupby(['month','usertype'])['trip_id'].count().unstack().plot();
Explanation: Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
End of explanation
data['hour'] = times.hour
ax = data.groupby('hour')['trip_id'].count().plot.bar();
data.groupby(['hour','gender'])['trip_id'].count().unstack().plot();
data.groupby(['hour','usertype'])['trip_id'].count().unstack().plot();
Explanation: Repeat the above three steps, counting the number of rides by time of day rather than by month.
End of explanation |
2,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Classify structured data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Create a dataframe with pandas
Pandas is a Python library with many helpful utilities for reading and manipulating structured data. We will use pandas to download the dataset from a URL and load it into a dataframe.
Step3: Split the dataframe into train, validation, and test sets
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
Step4: Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This lets us use feature columns as a bridge to map from the columns in the pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this tutorial.
Step5: Understand the input pipeline
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We use a small batch size to keep the output readable.
Step6: The dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Demonstrate several types of feature column
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns and demonstrate how they transform a column from the dataframe.
Step7: Numeric columns
The output of a feature column becomes the input to the model (using the demo function defined previously, we will be able to see exactly how each column from the dataframe is transformed). A numeric column is the simplest type of column. It is used to represent real-valued features. When using this column, the model receives the column value from the dataframe unchanged.
Step8: In the heart disease dataset, most columns from the dataframe are numeric.
Bucketized columns
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
Step9: Categorical columns
In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. Categorical columns provide a way to represent strings as a one-hot vector. The vocabulary can be passed as a list using categorical_column_with_vocabulary_list, or loaded from a file using categorical_column_with_vocabulary_file.
Step10: In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several other types of feature columns you can use with other datasets.
Embedding columns
Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8 in the example below) is a parameter that must be tuned.
Key point
Step11: Hashed feature columns
Another way to represent a categorical column with a large number of values is to use a categorical_column_with_hash_bucket. This feature column calculates a hash value of the input, then selects one of hash_bucket_size buckets to encode the string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
Key point
Step12: Crossed feature columns
Combining features into a single feature, better known as a feature cross, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that crossed_column does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a hashed_column, so you can choose how large the table is.
Step13: Choose which columns to use
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns, so a few columns have been selected arbitrarily to train the model.
Key point
Step14: Create a feature layer
Now that we have defined our feature columns, we will use a DenseFeatures layer to feed them into our Keras model.
Step15: Earlier, we used a small batch size to demonstrate how feature columns worked. Here we create a new input pipeline with a larger batch size.
Step16: 모델 생성, 컴파일, 훈련 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
Explanation: Classify structured data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/feature_columns">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/feature_columns.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/feature_columns.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/structured_data/feature_columns.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is
no guarantee that this is an exact and up-to-date reflection of the official English documentation.
If you have suggestions to improve this translation,
please send a pull request to the tensorflow/docs-l10n GitHub repository.
To volunteer to write or review translations,
please contact
[email protected].
This tutorial demonstrates how to classify structured data (e.g. tabular data read from a CSV). We will use Keras to define the model, and feature columns as a bridge to map from the columns in the CSV to the features used to train the model. This tutorial contains complete code to:
Load a CSV file using Pandas
Build an input pipeline to batch and shuffle the rows using tf.data
Map from columns in the CSV to features used to train the model using feature columns
Build, train, and evaluate a model using Keras
The Dataset
We will use a small dataset provided by the Cleveland Clinic Foundation for Heart Disease. There are a few hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.
Following is a description of this dataset. Notice there are both numeric and categorical columns.
Column | Description | Feature Type | Data Type
------------|--------------------|----------------------|-----------------
Age | Age in years | Numerical | integer
Sex | (1 = male; 0 = female) | Categorical | integer
CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer
Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer
Chol | Serum cholesterol in mg/dl | Numerical | integer
FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer
RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer
Thalach | Maximum heart rate achieved | Numerical | integer
Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer
Oldpeak | ST depression induced by exercise relative to rest | Numerical | integer
Slope | The slope of the peak exercise ST segment | Numerical | float
CA | Number of major vessels (0-3) colored by fluoroscopy | Numerical | integer
Thal | 3 = normal; 6 = fixed defect; 7 = reversible defect | Categorical | string
Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer
Import TensorFlow and other libraries
End of explanation
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
Explanation: Use Pandas to create a dataframe
Pandas is a Python library with many helpful utilities for reading and manipulating structured data. We will use Pandas to download the dataset from a URL and load it into a dataframe.
End of explanation
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
Explanation: Split the dataframe into train, validation, and test sets
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
End of explanation
# A utility method to create a tf.data dataset from a Pandas dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
Explanation: Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this tutorial.
End of explanation
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
Explanation: Understand the input pipeline
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We use a small batch size to keep the output readable.
End of explanation
# We will use this sample batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
Explanation: The dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Demonstrate several types of feature column
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns and demonstrate how they transform a column from the dataframe.
End of explanation
age = feature_column.numeric_column("age")
demo(age)
Explanation: Numeric columns
The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A numeric column is the simplest type of column. It is used to represent real-valued features. When using this column, your model will receive the column value from the dataframe unchanged.
End of explanation
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
Explanation: In the heart disease dataset, most columns from the dataframe are numeric.
Bucketized columns
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
End of explanation
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
Explanation: Categorical columns
In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. Categorical columns provide a way to represent strings as a one-hot vector. The vocabulary can be passed as a list using categorical_column_with_vocabulary_list, or loaded from a file using categorical_column_with_vocabulary_file.
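As an aside (not part of the original notebook), the file-based variant mentioned above might look like this; the file name thal_vocab.txt is purely hypothetical and would contain one category per line:
python
# Hypothetical example -- 'thal_vocab.txt' is assumed, not shipped with this dataset
thal_from_file = feature_column.categorical_column_with_vocabulary_file(
    'thal', vocabulary_file='thal_vocab.txt')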
End of explanation
# Notice the input to the embedding column is the categorical column we previously created
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
Explanation: In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several other types of feature columns you can use with other datasets.
Embedding columns
Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.
Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
End of explanation
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
Explanation: Hashed feature columns
Another way to represent a categorical column with a large number of values is to use a categorical_column_with_hash_bucket. This feature column calculates a hash value of the input, then selects one of the hash_bucket_size buckets to encode the string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
Key point: An important downside of this technique is that different strings may be mapped to the same bucket. In practice, this can still work well for some datasets.
End of explanation
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
Explanation: Crossed feature columns
Combining features into a single feature, better known as a feature cross, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that crossed_column does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a hashed_column, so you can choose how large the table is via the hash_bucket_size parameter.
End of explanation
feature_columns = []
# numeric columns
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized columns
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# categorical (indicator) columns
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding columns
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed columns
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
Explanation: Choose which columns to use
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns, so a few columns have been selected arbitrarily to train the model.
Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include and how they should be represented.
End of explanation
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
Explanation: Create a feature layer
Now that we have defined our feature columns, we will use a DenseFeatures layer to feed them into our Keras model.
End of explanation
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
Explanation: Earlier, we used a small batch size to demonstrate how feature columns worked. Here we create a new input pipeline with a larger batch size.
End of explanation
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("정확도", accuracy)
Explanation: Create, compile, and train the model
End of explanation |
2,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Data
Step2: Exercise 1
Step3: Exercise 2
Step4: Exercise 3
Step5: Exercise 4 | Python Code:
# Useful Libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
# Useful functions
def normal_test(X):
z, pval = stats.normaltest(X)
if pval < 0.05:
print('Values are not normally distributed.')
else:
print('Values are normally distributed.')
return
Explanation: Exercises: Comparing ETFs
By Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/statistical-moments
https://www.quantopian.com/lectures/hypothesis-testing
IMPORTANT NOTE:
This lecture corresponds to the statistical moments and hypothesis testing lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
When you feel comfortable with the topics presented here, see if you can create an algorithm that qualifies for the Quantopian Contest. Participants are evaluated on their ability to produce risk-constrained alpha and the top 10 contest participants are awarded cash prizes on a daily basis.
https://www.quantopian.com/contest
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Key Concepts
t-statistic formula for unequal variances : $ t = \frac{\bar{X}_1 - \bar{X}_2}{(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2})^{1/2}}$
Where $s_1$ and $s_2$ are the standard deviation of set 1 and set 2; and $n_1$ and $n_2$ are the number of observations we have.
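The formula above translates directly into a few lines of numpy. The snippet below is purely illustrative: x1 and x2 are stand-in arrays, not the exercise data, so it does not give away the exercises.
python
# Illustrative sketch only -- x1 and x2 are stand-in arrays, not the ETF returns
x1 = np.random.normal(0.0, 1.0, 100)
x2 = np.random.normal(0.1, 2.0, 100)
t_stat = (np.mean(x1) - np.mean(x2)) / np.sqrt(np.var(x1, ddof=1)/len(x1) + np.var(x2, ddof=1)/len(x2))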
End of explanation
# Get pricing data for an energy (XLE) and industrial (XLI) ETF
xle = get_pricing('XLE', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
xli = get_pricing('XLI', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
# Compute returns
xle_returns = xle.pct_change()[1:]
xli_returns = xli.pct_change()[1:]
Explanation: Data
End of explanation
# Histograms of XLE and XLI returns
## Your code goes here
# Checking for normality using function above.
## Your code goes here
# Use the levene or the F-test to check hypothesis of variance.
## Your code goes here
Explanation: Exercise 1 : Hypothesis Testing on Variance.
Plot the histogram of the returns of XLE and XLI
Check to see if each return stream is normally distributed
If the assets are normally distributed, use the F-test to perform a hypothesis test and decide whether the two assets have the same variance.
If the assets are not normally distributed, use the Levene test (in the scipy library) to perform a hypothesis test on variance.
End of explanation
# Manually calculating the t-statistic
# Note that the test also requires information about the degrees of freedom
# We will not compute that here
## Your code goes here
# Alternative form, using the scipy library on python.
## Your code goes here
Explanation: Exercise 2 : Hypothesis Testing on Mean.
Since we know that the variances are not equal, we must use Welch's t-test.
- Calculate the mean returns of XLE and XLI.
- Find the difference between the two means.
- Calculate the standard deviation of the returns of XLE and XLI
- Using the formula given above, calculate the t-test statistic (Using $\alpha = 0.05$) for Welch's t-test to test whether the mean returns of XLE and XLI are different.
- Consult the Hypothesis Testing Lecture to calculate the p-value for this test. Are the mean returns of XLE and XLI the same?
Now use the t-test function for two independent samples from the scipy library. Compare the results.
End of explanation
# Calculate the mean and median of xle and xli using the numpy library
## Your code goes here
# Print values of Skewness for xle and xli returns
## Your code goes here
Explanation: Exercise 3 : Skewness
Calculate the mean and median of the two assets
Calculate the skewness using the scipy library
End of explanation
# Print value of Kurtosis for xle and xli returns
## Your code goes here
# Distribution plot of XLE returns in red (for Kurtosis of 1.6).
# Distribution plot of XLI returns in blue (for Kurtosis of 2.0).
## Your code goes here
Explanation: Exercise 4 : Kurtosis
Check the kurtosis of the two assets, using the scipy library.
Using the seaborn library, plot the distribution of XLE and XLI returns.
Recall:
- Kurtosis > 3 is leptokurtic, a highly peaked, narrow deviation from the mean
- Kurtosis = 3 is mesokurtic. The most significant mesokurtic distribution is the normal distribution family.
- Kurtosis < 3 is platykurtic, a lower-peaked, broad deviation from the mean
End of explanation |
2,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Ray for Highly Parallelizable Tasks
While Ray can be used for very complex parallelization tasks,
often we just want to do something simple in parallel.
For example, we may have 100,000 time series to process with exactly the same algorithm,
and each one takes a minute of processing.
Clearly running it on a single processor is prohibitive
Step2: We use the @ray.remote decorator to create a Ray task.
A task is like a function, except the result is returned asynchronously.
It also may not run on the local machine, it may run elsewhere in the cluster.
This way you can run multiple tasks in parallel,
beyond the limit of the number of processors you can have in a single machine.
Step3: To get the result of a future, we use ray.get() which
blocks until the result is complete.
Step4: Now let's see how good our approximation is.
Step5: Meh. A little off -- that's barely 4 decimal places.
Why don't we do it a 100,000 times as much? Let's do 100 billion!
Step6: Notice that in the above, we generated a list with 100,000 futures.
Now all we have to do is wait for the result.
Depending on your ray cluster's size, this might take a few minutes.
But to give you some idea, if we were to do it on a single machine,
when I ran this it took 0.4 seconds.
On a single core, that means we're looking at 0.4 * 100000 = about 11 hours.
Here's what the Dashboard looks like | Python Code:
import ray
import random
import time
import math
from fractions import Fraction
# Let's start Ray
ray.init(address='auto')
Explanation: Using Ray for Highly Parallelizable Tasks
While Ray can be used for very complex parallelization tasks,
often we just want to do something simple in parallel.
For example, we may have 100,000 time series to process with exactly the same algorithm,
and each one takes a minute of processing.
Clearly running it on a single processor is prohibitive: this would take 70 days.
Even if we managed to use 8 processors on a single machine,
that would bring it down to 9 days. But if we can use 8 machines, each with 16 cores,
it can be done in about 12 hours.
How can we use Ray for these types of tasks?
We take the simple example of computing the digits of pi.
The algorithm is simple: generate random x and y, and if x^2 + y^2 < 1, the point is
inside the circle and we count it as in. The fraction of points that land inside turns out to be pi/4
(remembering your high school math).
The following code (and this notebook) assumes you have already set up your Ray cluster and that you are running on the head node. For more details on how to set up a Ray cluster please see the Ray Cluster Quickstart Guide.
End of explanation
@ray.remote
def pi4_sample(sample_count):
"""
pi4_sample runs sample_count experiments, and returns the
fraction of time it was inside the circle.
"""
in_count = 0
for i in range(sample_count):
x = random.random()
y = random.random()
if x*x + y*y <= 1:
in_count += 1
return Fraction(in_count, sample_count)
Explanation: We use the @ray.remote decorator to create a Ray task.
A task is like a function, except the result is returned asynchronously.
It also may not run on the local machine, it may run elsewhere in the cluster.
This way you can run multiple tasks in parallel,
beyond the limit of the number of processors you can have in a single machine.
End of explanation
SAMPLE_COUNT = 1000 * 1000
start = time.time()
future = pi4_sample.remote(sample_count = SAMPLE_COUNT)
pi4 = ray.get(future)
end = time.time()
dur = end - start
print(f'Running {SAMPLE_COUNT} tests took {dur} seconds')
Explanation: To get the result of a future, we use ray.get() which
blocks until the result is complete.
End of explanation
pi = pi4 * 4
float(pi)
abs(pi-math.pi)/pi
Explanation: Now let's see how good our approximation is.
End of explanation
FULL_SAMPLE_COUNT = 100 * 1000 * 1000 * 1000 # 100 billion samples!
BATCHES = int(FULL_SAMPLE_COUNT / SAMPLE_COUNT)
print(f'Doing {BATCHES} batches')
results = []
for _ in range(BATCHES):
results.append(pi4_sample.remote(sample_count=SAMPLE_COUNT))  # sample_count must be passed explicitly
output = ray.get(results)
Explanation: Meh. A little off -- that's barely 4 decimal places.
Why don't we do it 100,000 times as much? Let's do 100 billion!
End of explanation
pi = sum(output)*4/len(output)
float(pi)
abs(pi-math.pi)/pi
Explanation: Notice that in the above, we generated a list with 100,000 futures.
Now all we have to do is wait for the result.
Depending on your ray cluster's size, this might take a few minutes.
But to give you some idea, when I ran a single batch like the one above
on a single machine, it took about 0.4 seconds.
On a single core, that means we're looking at 0.4 * 100000 = about 11 hours.
Here's what the Dashboard looks like:
So now, rather than just a single core working on this,
I have 168 working on the task together. And it's ~80% efficient.
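If blocking on all 100,000 results in one ray.get is inconvenient, ray.wait lets you consume results as they finish. A brief sketch (not part of the original walkthrough):
python
# Sketch: process results incrementally instead of one big blocking ray.get
remaining = list(results)
while remaining:
    done, remaining = ray.wait(remaining, num_returns=min(1000, len(remaining)))
    partial = ray.get(done)   # each item is the Fraction returned by pi4_sample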
End of explanation |
2,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Transfer Learning Using Pretrained ConvNets
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data preprocessing
Download data - cats_and_dogs_filtered.zip
We will download a filtered version of Kaggle's Dogs vs Cats dataset. Then store the downloaded zip file to the "/tmp/" directory.
Step3: Prepare training and validation cats and dogs datasets
Create the training and validation directories for cats datasets and dog datasets.
Step4: Create Image Data Generator with Image Augmentation
We will use ImageDataGenerator to rescale the images.
To create the train generator, specify where the train dataset directory, image size, batch size and binary classification mode.
The validation generator is created the same way.
Step5: Create the base model from the pre-trained ConvNets
We will create the base model from the MobileNet V2 model developed at Google, and pre-trained on the ImageNet dataset, a large dataset of 1.4M images and 1000 classes of web images. This is a powerful model. Let's see what the features that it has learned can do for our cat vs. dog problem.
First, we need to pick which intermediate layer of MobileNet V2 we will use for feature extraction. A common practice is to use the output of the very last layer before the flatten operation, the so-called "bottleneck layer". The reasoning here is that the following fully-connected layers will be too specialized to the task the network was trained on, and thus the features learned by these layers won't be very useful for a new task. The bottleneck features, however, retain much generality.
Let's instantiate an MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, we load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
Step6: Feature extraction
We will freeze the convolutional base created from the previous step and use that as a feature extractor, add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It's important to freeze the convolutional base before we compile and train the model. By freezing (or setting layer.trainable = False), we prevent the weights in these layers from being updated during training.
Step7: Add a classification head
Now let's add a few layers on top of the base model
Step8: Compile the model
You must compile the model before training it.
Step9: These 1.2K trainable parameters are divided among 2 TensorFlow Variable objects: the weights and the bias of the dense classification layer
Step10: Train the model
After training for 10 epochs, we are able to get ~94% accuracy.
If you have more time, train it to convergence (50 epochs, ~96% accuracy)
Step11: Learning curves
Let's take a look at the learning curves of the training and validation accuracy / loss, when using the MobileNet V2 base model as a fixed feature extractor.
If you train to convergence (epochs=50) the resulting graph should look like this
Step12: Fine tuning
In our feature extraction experiment, we were only training a few layers on top of an MobileNet V2 base model. The weights of the pre-trained network were not updated during training. One way to increase performance even further is to "fine-tune" the weights of the top layers of the pre-trained model alongside the training of the top-level classifier. The training process will force the weights to be tuned from generic features maps to features associated specifically to our dataset.
Note
Step13: Compile the model
Compile the model using a much-lower training rate.
Step14: Continue Train the model
If you trained to convergence earlier, this will get you a few percent more accuracy.
Step15: Learning curves
Let's take a look at the learning curves of the training and validation accuracy / loss, when fine tuning the last few layers of the MobileNet V2 base model, as well as the classifier on top of it. Note the validation loss is much higher than the training loss, which means there may be some overfitting.
Note | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import os
import tensorflow.compat.v1 as tf
from tensorflow import keras
print("TensorFlow version is ", tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
Explanation: Transfer Learning Using Pretrained ConvNets
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
In this tutorial we will discuss how to classify cats vs dogs images by using transfer learning from a pre-trained network. This will allow us to get higher accuracies than we saw by training our network from scratch.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. We can either use the pretrained model as it is, or apply transfer learning using the pretrained ConvNets. The intuition behind transfer learning is that if this model was trained on a large and general enough dataset, it will effectively serve as a generic model of the visual world. We can leverage these learned feature maps without having to train a large model on a large dataset, by using these models as the basis of our own model specific to our task. There are 2 scenarios of transfer learning using a pretrained model:
Feature Extraction - use the representations learned by a previous network to extract meaningful features from new samples. We simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that we can repurpose the feature maps learned previously for our dataset. Do we use the entire pretrained model or just the convolutional base? - We use the feature extraction portion of these pretrained ConvNets (convolutional base) since they are likely to be generic features and learned concepts over a picture. However, the classification part of the pretrained model is often specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
Fine-Tuning - unfreezing a few of the top layers of a frozen model base used for feature extraction, and jointly training both the newly added classifier layers as well as the last layers of the frozen model. This allows us to "fine tune" the higher order feature representations in addition to our final classifier in order to make them more relevant for the specific task involved.
We will follow the general machine learning workflow:
Examine and understand data
Build an input pipeline - using Keras ImageDataGenerator as we did in the image classification tutorial
Compose our model
Load in our pretrained model (and pretrained weights)
Stack our classification layers on top
Train our model
Evaluate model
We will see an example of using the pre-trained ConvNet as the feature extraction and then fine-tune to train the last few layers of the base model.
Audience: This post is geared towards beginners with some Keras API and ML background. To get the most out of this post, you should have some basic ML background, know what CNNs are, and be familiar with the Keras Sequential API.
Time Estimated: 30 minutes
End of explanation
zip_file = tf.keras.utils.get_file(origin="https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip",
fname="cats_and_dogs_filtered.zip", extract=True)
base_dir, _ = os.path.splitext(zip_file)
Explanation: Data preprocessing
Download data - cats_and_dogs_filtered.zip
We will download a filtered version of Kaggle's Dogs vs Cats dataset. Then store the downloaded zip file to the "/tmp/" directory.
End of explanation
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
print ('Total training cat images:', len(os.listdir(train_cats_dir)))
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
print ('Total training dog images:', len(os.listdir(train_dogs_dir)))
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
print ('Total validation cat images:', len(os.listdir(validation_cats_dir)))
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
print ('Total validation dog images:', len(os.listdir(validation_dogs_dir)))
Explanation: Prepare training and validation cats and dogs datasets
Create the training and validation directories for cats datasets and dog datasets.
End of explanation
image_size = 160 # All images will be resized to 160x160
batch_size = 32
# Rescale all images by 1./255 and apply image augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1./255)
validation_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # Source directory for the training images
target_size=(image_size, image_size),
batch_size=batch_size,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = validation_datagen.flow_from_directory(
validation_dir, # Source directory for the validation images
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode='binary')
Explanation: Create Image Data Generator with Image Augmentation
We will use ImageDataGenerator to rescale the images.
To create the train generator, specify the train dataset directory, image size, batch size, and binary classification mode.
The validation generator is created the same way.
End of explanation
IMG_SHAPE = (image_size, image_size, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
Explanation: Create the base model from the pre-trained ConvNets
We will create the base model from the MobileNet V2 model developed at Google, and pre-trained on the ImageNet dataset, a large dataset of 1.4M images and 1000 classes of web images. This is a powerful model. Let's see what the features that it has learned can do for our cat vs. dog problem.
First, we need to pick which intermediate layer of MobileNet V2 we will use for feature extraction. A common practice is to use the output of the very last layer before the flatten operation, the so-called "bottleneck layer". The reasoning here is that the following fully-connected layers will be too specialized to the task the network was trained on, and thus the features learned by these layers won't be very useful for a new task. The bottleneck features, however, retain much generality.
Let's instantiate an MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, we load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
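As a quick aside (not in the original notebook), you can confirm what the bottleneck output looks like; for 160x160 inputs, MobileNet V2 should reduce each image to a 5x5x1280 block of features:
python
print(base_model.output_shape)   # expected to be (None, 5, 5, 1280) for 160x160 inputs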
End of explanation
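As an optional sanity check, you can pass one batch of images through the base model to see the shape of the bottleneck features it produces; for 160x160 inputs, MobileNet V2 yields a 5x5 feature map with 1280 channels per image:
# Inspect the bottleneck feature shape for one batch (sanity check)
image_batch, label_batch = next(train_generator)
feature_batch = base_model.predict(image_batch)
print(feature_batch.shape)  # expected: (batch_size, 5, 5, 1280) for 160x160 inputs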
base_model.trainable = False
# Let's take a look at the base model architecture
base_model.summary()
Explanation: Feature extraction
We will freeze the convolutional base created in the previous step, use it as a feature extractor, add a classifier on top of it, and train only that top-level classifier.
Freeze the convolutional base
It's important to freeze the convolutional base before we compile and train the model. By freezing (or setting layer.trainable = False), we prevent the weights in these layers from being updated during training.
End of explanation
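A quick way to confirm the base is frozen is to check that it no longer exposes any trainable weights:
# After setting trainable=False, the base model should report no trainable weights
print('Trainable weights in base model:', len(base_model.trainable_weights))
print('Non-trainable weights in base model:', len(base_model.non_trainable_weights))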
model = tf.keras.Sequential([
base_model,
keras.layers.GlobalAveragePooling2D(),
keras.layers.Dense(1, activation='sigmoid')
])
Explanation: Add a classification head
Now let's add a few layers on top of the base model:
End of explanation
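The GlobalAveragePooling2D layer collapses the spatial feature map into a single vector per image (1280 values for this MobileNet V2 configuration), and the Dense layer with a sigmoid maps that vector to one probability. A quick shape check:
# The head maps the pooled feature vector to a single sigmoid score per image
print('Model output shape:', model.output_shape)  # expect (None, 1)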
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
Explanation: Compile the model
You must compile the model before training it.
End of explanation
len(model.trainable_variables)
Explanation: These trainable parameters are divided among 2 TensorFlow Variable objects: the weights and bias of the single Dense layer we added on top:
End of explanation
epochs = 10
steps_per_epoch = train_generator.n // batch_size
validation_steps = validation_generator.n // batch_size
history = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)
Explanation: Train the model
After training for 10 epochs, we are able to get ~94% accuracy.
If you have more time, train it to convergence (50 epochs, ~96% accuracy)
End of explanation
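Note that fit_generator is deprecated in newer TensorFlow releases, where fit accepts generators directly; a hedged equivalent of the call above would be:
# Equivalent call with the newer API (TF >= 2.1); behaviour should match fit_generator
history = model.fit(train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=validation_generator,
                    validation_steps=validation_steps)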
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,max(plt.ylim())])
plt.title('Training and Validation Loss')
plt.show()
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy / loss, when using the MobileNet V2 base model as a fixed feature extractor.
If you train to convergence (epochs=50) the resulting graph should look like this:
End of explanation
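Before moving on to fine-tuning, it can be useful to checkpoint the feature-extraction weights so you can return to this state later (the file path below is arbitrary):
# Optional: save a checkpoint of the feature-extraction model before fine-tuning
model.save_weights('/tmp/feature_extraction_checkpoint.h5')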
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
Explanation: Fine tuning
In our feature extraction experiment, we were only training a few layers on top of a MobileNet V2 base model. The weights of the pre-trained network were not updated during training. One way to increase performance even further is to "fine-tune" the weights of the top layers of the pre-trained model alongside the training of the top-level classifier. The training process will force the weights to be tuned from generic feature maps to features associated specifically with our dataset.
Note: this should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will just forget everything it has learned.
Additionally, the reasoning behind fine-tuning the top layers of the pre-trained model rather than all layers of the pre-trained model is the following: in a ConvNet, the higher up a layer is, the more specialized it is. The first few layers in a ConvNet learned very simple and generic features, which generalize to almost all types of images. But as you go higher up, the features are increasingly more specific to the dataset that the model was trained on. The goal of fine-tuning is to adapt these specialized features to work with the new dataset.
Un-freeze the top layers of the model
All we need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, recompile the model (necessary for these changes to take effect) and resume training.
End of explanation
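To double-check which part of the network will actually be updated, you can count the layers that remain trainable after the partial unfreeze:
# Sanity check: layers of the base model that will be fine-tuned
trainable_layers = [layer.name for layer in base_model.layers if layer.trainable]
print('Trainable layers in base model:', len(trainable_layers))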
model.compile(optimizer = tf.keras.optimizers.RMSprop(learning_rate=2e-5),
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
Explanation: Compile the model
Compile the model using a much lower learning rate.
End of explanation
history_fine = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)
Explanation: Continue training the model
If you trained to convergence earlier, this will get you a few percent more accuracy.
End of explanation
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.9, 1])
plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 0.2])
plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
acc
loss
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy / loss when fine-tuning the last few layers of the MobileNet V2 base model, as well as the classifier on top of it. Note that the validation loss is much higher than the training loss, which means there may be some overfitting.
Note: the training dataset is fairly small, and is similar to the original datasets that MobileNet V2 was trained on, so fine-tuning may result in overfitting.
If you train to convergence (epochs=50) the resulting graph should look like this:
End of explanation |
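Once fine-tuned, the model can be used for prediction. A minimal sketch on one validation batch, thresholding the sigmoid output at 0.5 (the class-to-index mapping comes from the generator):
# Hedged example: run inference on a single validation batch
image_batch, label_batch = next(validation_generator)
predictions = model.predict(image_batch)
predicted_classes = (predictions.ravel() > 0.5).astype(int)
print('Class mapping:', validation_generator.class_indices)
print('Predicted:', predicted_classes[:10])
print('Actual:   ', label_batch[:10].astype(int))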
2,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Why Version Control?
Here's why.
Step1: If that hasn't convinced you, here are some other benefits | Python Code:
from IPython.display import Image
Image(url='http://www.phdcomics.com/comics/archive/phd101212s.gif')
Explanation: Why Version Control?
Here's why.
End of explanation
%%bash
git status
Explanation: If that hasn't convinced you, here are some other benefits:
http://stackoverflow.com/questions/1408450/why-should-i-use-version-control
Replace 'code' in the first answer with 'essay', 'thesis', 'homework' -- all stuff that a version control system such as git and GitHub can help you with!
Git for Scientists: A Tutorial (by John McDonnell)
http://nyuccl.org/pages/GitTutorial/
Go through the tutorial. You can either follow along in a terminal on the command line, or from within this very notebook using the %%bash magic:
End of explanation |
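For example, a first pass through the basic workflow from within the notebook might look like the sketch below (it assumes git is installed; the name, email, and file contents are placeholders):
%%bash
# Create a throwaway repository, stage a file, and record a snapshot
git init demo-repo
cd demo-repo
git config user.name "Your Name"
git config user.email "you@example.org"
echo "My thesis draft" > notes.txt
git add notes.txt
git commit -m "Start tracking my notes"
git log --oneline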
2,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-VOL
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
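For illustration only, a filled-in cell for an enumerated property such as this one would call DOC.set_value with one of the valid choices listed above; the value shown here is a placeholder, not a statement about which coupler EMAC actually uses:
# Hypothetical example only -- replace with the coupler actually used by this model
DOC.set_value("OASIS3-MCT")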
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
2,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example builds HMMs and MSMs on the alanine_dipeptide dataset using varying lag times
and numbers of states, and compares the relaxation timescales
Step1: First
Step2: Now sequences is our featurized data. | Python Code:
from __future__ import print_function
import os
%matplotlib inline
from matplotlib.pyplot import *
from msmbuilder.featurizer import SuperposeFeaturizer
from msmbuilder.example_datasets import AlanineDipeptide
from msmbuilder.hmm import GaussianFusionHMM
from msmbuilder.cluster import KCenters
from msmbuilder.msm import MarkovStateModel
Explanation: This example builds HMMs and MSMs on the alanine_dipeptide dataset using varying lag times
and numbers of states, and compares the relaxation timescales
End of explanation
print(AlanineDipeptide.description())
dataset = AlanineDipeptide().get()
trajectories = dataset.trajectories
topology = trajectories[0].topology
indices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]
featurizer = SuperposeFeaturizer(indices, trajectories[0][0])
sequences = featurizer.transform(trajectories)
Explanation: First: load and "featurize"
Featurization refers to the process of converting the conformational
snapshots from your MD trajectories into vectors in some space $\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent
mixture of multivariate Gaussians.
In general, the featurization is somewhat of an art. For this example, we're using Mixtape's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example) and then measures the distance from each
atom to its position in the reference conformation as the 'feature'.
End of explanation
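# Since featurization is a modeling choice, it is worth knowing there are alternatives.
# A minimal sketch of one -- backbone dihedral angles instead of superposed atom
# distances. This assumes your msmbuilder version ships DihedralFeaturizer; check
# your install before relying on it.
from msmbuilder.featurizer import DihedralFeaturizer
dihedral_featurizer = DihedralFeaturizer(types=['phi', 'psi'])  # phi/psi backbone torsions
dihedral_sequences = dihedral_featurizer.transform(trajectories)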
lag_times = [1, 10, 20, 30, 40]
hmm_ts0 = {}
hmm_ts1 = {}
n_states = [3, 5]
for n in n_states:
hmm_ts0[n] = []
hmm_ts1[n] = []
for lag_time in lag_times:
strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]
hmm = GaussianFusionHMM(n_states=n, n_features=sequences[0].shape[1], n_init=1).fit(strided_data)
timescales = hmm.timescales_ * lag_time
hmm_ts0[n].append(timescales[0])
hmm_ts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, hmm_ts0[n])
plot(lag_times, hmm_ts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show()
msmts0, msmts1 = {}, {}
lag_times = [1, 10, 20, 30, 40]
n_states = [4, 8, 16, 32, 64]
for n in n_states:
msmts0[n] = []
msmts1[n] = []
for lag_time in lag_times:
assignments = KCenters(n_clusters=n).fit_predict(sequences)
msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)
timescales = msm.timescales_
msmts0[n].append(timescales[0])
msmts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales[0:2]))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, msmts0[n])
plot(lag_times, msmts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show()
Explanation: Now sequences is our featurized data.
End of explanation |
2,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is to use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
Step1: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
Step2: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
Step3: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
Step4: ...but maybe we want to see more than a measly five results?
Step5: But maybe we want to make a basketball joke and see the final four?
Step6: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
Step7: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
Step8: NOTE
Step9: I want to know how many people are in each position. Luckily, pandas can tell me!
Step10: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers
Step11: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
Step12: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
Step13: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
Step14: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
Step15: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
Step16: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
Step17: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them, describe them! And we don't want to dunk on everyone, only the players above 7 feet tall.
First, we need to check out boolean things.
Step18: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
Step19: matplotlib is a graphing library. It's the Python way to make graphs!
Step20: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
Step21: That might look better with a little more customization. So let's customize it.
Step22: I want more graphics! Do tall people make more money?!?! | Python Code:
# import pandas, but call it pd. Why? Because that's What People Do.
import pandas as pd  # so that you don't have to type pandas later -- most people use pd instead of pandas
Explanation: An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is to use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
End of explanation
# We're going to call this df, which means "data frame"
# It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding
#saved on mac, therefore the encoding needs to be mac_roman!
#it will not open if the encoding is not set :()
df = pd.read_csv('NBA-Census-10.14.2013.csv', encoding='mac_roman')
# encoding, the most common are: mac_roman if saved on a mac, latin-1 if saved on PC or UTF-8
# 'pd.read_csv?' will give you more info about how read_csv works
Explanation: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
End of explanation
# Let's look at all of it
print(df)
Explanation: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
End of explanation
# Look at the first few rows
df.head() # shows header + first 5 rows!
Explanation: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
End of explanation
# Let's look at MORE of the first few rows
df.head(10) # shows the first 10 rows of the dataframe
Explanation: ...but maybe we want to see more than a measly five results?
End of explanation
# Let's look at the final few rows
df.tail(4) # shows the final four
Explanation: But maybe we want to make a basketball joke and see the final four?
End of explanation
# Show the 6th through the 8th rows
df[6:9]
Explanation: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
End of explanation
# Get the names of the columns, just because
df.columns #prints out the name of the columns - casing must match actually
# If we want to be "correct" we add .values on the end of it
df.columns.values
# Select only name and age
columns_we_want = ['Name', 'Age']
#passing the list of colums we want to data frame
df[columns_we_want]
# Combining that with .head() to see not-so-many rows
# We can also do this all in one line, even though it starts looking ugly
# (unlike the cute bears pandas looks ugly pretty often)
df[['Name', 'Age']] # brackets brackets
Explanation: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
End of explanation
df['POS'] # shows you each position
Explanation: NOTE: That was not df['Name', 'Age'], it was df[['Name', 'Age']]. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets.
Describing your data
A powerful tool of pandas is being able to select a portion of your data, because who ordered all that data anyway.
End of explanation
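# A quick way to see why the bracket count matters: single brackets hand back a
# pandas Series, double brackets a one-column DataFrame.
print(type(df['Name']))     # <class 'pandas.core.series.Series'>
print(type(df[['Name']]))   # <class 'pandas.core.frame.DataFrame'>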
# Grab the POS column, and count the different values in it.
df['POS'].value_counts() # counts the number of values that match each position
df['Race'].value_counts() # race of players
Explanation: I want to know how many people are in each position. Luckily, pandas can tell me!
End of explanation
# Summary statistics for Age
df['Age'].value_counts() # counts how many players there are at each age
df['Age'].describe() #statistics about NBA players and their ages
# That's pretty good. Does it work for everything? How about the money?
df.describe() # shows info for all of the numerical data
# EEEK minimum weight = 20 lbs -- seems incorrect
df['Ht (In.)'].describe()
#df.columns # look at column names again
df['2013 $'].describe() # this column is a string as opposed to an int -- so describe() can't give us numeric stats :(
Explanation: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers: how old is everyone? Maybe we could, I don't know, get some statistics about age? Some statistics to describe age?
End of explanation
# Doing more describing
Explanation: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
End of explanation
# Take another look at our inches, but only the first few
df['Ht (In.)'].head()
# Divide those inches by 12
df['Ht (In.)'].head()/12 #divides every single value by 12
# Let's divide ALL of them by 12
df['Ht (In.)']/12
# Can we get statistics on those?
height_in_feet = df['Ht (In.)']/12
height_in_feet.describe()
# Let's look at our original data again
df.head()
Explanation: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
End of explanation
# Store a new column
df['Ht (Ft.)'] = df['Ht (In.)']/12 # adds a new column with the height as feet
df.head()
df.sort_values('Ht (Ft.)') # automatically sorts from lowest to highest - ascending value
#shows the tallest players by height in feet
df.sort_values('Ht (Ft.)', ascending=False).head() # ascending=False flips it, so the tallest come first
# shows you who is/isn't above 6 ft.
above_or_below_six_five = df['Ht (Ft.)'] > 6
above_or_below_six_five.value_counts() # returns how many players are or are not above 6 ft.
Explanation: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
End of explanation
# Can't just use .replace
# Need to use this weird .str thing
# Can't just immediately replace the , either
# Need to use the .str thing before EVERY string method
# Describe still doesn't work.
# Let's convert it to an integer using .astype(int) before we describe it
# Maybe we can just make them millions?
# Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0
# Remove the .head() piece and save it back into the dataframe
Explanation: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
End of explanation
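# A rough sketch of the clean-up the comments above describe: strip the $ and the
# commas, turn the lone 'n/a' into 0, and cast to int. Overwriting the original
# column (and the regex=True keyword, which needs a reasonably recent pandas) are
# my assumptions, not the notebook's official answer.
df['2013 $'] = (df['2013 $']
                .str.replace('[$,]', '', regex=True)
                .replace('n/a', '0')
                .astype(int))
df['2013 $'].describe()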
# This is just the first few guys in the dataset. Can we order it?
# Let's try to sort them
Explanation: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
End of explanation
# It isn't descending = True, unfortunately
# We can use this to find the oldest guys in the league
#shows the oldest players
df.sort_values('Age', ascending=False).head() # ascending=False sorts from oldest to youngest
# Or the youngest, by taking out 'ascending=False'
#shows the youngest players
df.sort_values('Age').head() # automatically sorts from lowest to highest
Explanation: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
End of explanation
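# The same high-to-low trick pointed at the money -- assuming the cleaned-up numbers
# from the earlier sketch were saved back into df['2013 $']:
df.sort_values('2013 $', ascending=False).head()   # the real rich guys, biggest salaries first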
# Get a big long list of True and False for every single row.
# shows you who is/isn't above 6 ft.
above_or_below_six_five = df['Ht (Ft.)'] > 6
# print(above_or_below_six_five)
# We could use value counts if we wanted
above_or_below_six_five.value_counts() # returns how many players are or are not above 6 ft.
# But we can also apply this to every single row to say whether YES we want it or NO we don't
above_or_below_six_five = df['Ht (Ft.)'] > 7
# Instead of putting column names inside of the brackets, we instead
# put the True/False statements. It will only return the players above
# seven feet tall
df[df['Ht (Ft.)'] > 7]
# The filter can be any True/False test, e.g. only the Asian players
df[df['Race'] == 'Asian']
# Or only the guards
df[df['POS'] == 'G']
# Or only the guards who are under 6 feet tall
# are you a guard? AND are below 6 feet tall?
df[(df['POS'] == 'G') & (df['Ht (Ft.)'] < 6)]
# It might be easier to break down the booleans into separate variables
is_a_guard = df['POS'] == 'G'
is_below_six_feet = df['Ht (Ft.)'] < 6
df[is_a_guard & is_below_six_feet]
centers = df[df['POS'] == 'C']
guards = df[df['POS'] == 'G']
# We can save this stuff
centers['Ht (Ft.)'].describe()
guards['Ht (Ft.)'].describe()
# Maybe we can compare them to taller players?
Explanation: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them, describe them! And we don't want to dunk on everyone, only the players above 7 feet tall.
First, we need to check out boolean things.
End of explanation
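# One more way to run the centers-vs-guards comparison without slicing out each
# position by hand: group by position and describe every group at once.
df.groupby('POS')['Ht (Ft.)'].describe()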
!pip install matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# This will scream we don't have matplotlib.
df['Ht (Ft.)'].hist()
Explanation: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
End of explanation
# this will open up a weird window that won't do anything
# So instead you run this code
%matplotlib inline
# save things as .png and not .jpeg
plt.savefig('heights.png')
Explanation: matplotlib is a graphing library. It's the Python way to make graphs!
End of explanation
# Import matplotlib
# What's available?
# Use ggplot
# Make a histogram
# Try some other styles
Explanation: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
End of explanation
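# A sketch of what the comment stub above is pointing at: matplotlib ships a
# 'ggplot' style you can switch on globally. Style names depend on your matplotlib
# version, so peek at the list first.
print(plt.style.available)   # which styles this matplotlib knows about
plt.style.use('ggplot')      # make everything look like R's ggplot
df['Ht (Ft.)'].hist()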
# Pass in all sorts of stuff!
# Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
# .range() is a matplotlib thing
Explanation: That might look better with a little more customization. So let's customize it.
End of explanation
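# One possible customization pass, mostly keyword arguments documented for
# DataFrame.hist plus a few matplotlib labels. The specific numbers are just
# values to play with, not anything canonical.
df['Ht (Ft.)'].hist(bins=15, range=(5.5, 7.6))
plt.xlabel('Height (ft.)')
plt.ylabel('Number of players')
plt.title('How tall is the NBA?')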
# How does experience relate with the amount of money they're making?
# At least we can assume height and weight are related
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
# We can also use plt separately
# It's SIMILAR but TOTALLY DIFFERENT
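# A sketch of the scatter plots those comments describe. '2013 $' assumes the
# cleaned-up numeric salary from the earlier sketch; the weight column's exact
# name isn't shown here, so that pairing is left as an exercise.
df.plot(kind='scatter', x='Ht (In.)', y='2013 $')
df.plot(kind='scatter', x='Age', y='2013 $')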
Explanation: I want more graphics! Do tall people make more money?!?!
End of explanation |
2,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
Step1: First reload the data we generated in notMNIST.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
Step4: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
Step5: Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in notMNIST.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
batch_size = 128
hidden_layer_length = 1024
regularization_factor1 = 0.01
regularization_factor2 = 0.01
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
tf_regularization_factor1 = tf.constant(regularization_factor1)
tf_regularization_factor2 = tf.constant(regularization_factor2)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_layer_length]))
biases1 = tf.Variable(tf.zeros([hidden_layer_length]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_layer_length, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
train_logits1 = tf.matmul(tf_train_dataset, weights1) + biases1
train_activations1 = tf.nn.relu(train_logits1)
train_logits2 = tf.matmul(train_activations1, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(train_logits2, tf_train_labels)) + \
tf_regularization_factor1*tf.nn.l2_loss(weights1) + \
tf_regularization_factor2*tf.nn.l2_loss(weights2)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(train_logits2)
valid_logits1 = tf.matmul(tf_valid_dataset, weights1) + biases1
valid_activations1 = tf.nn.relu(valid_logits1)
valid_logits2 = tf.matmul(valid_activations1, weights2) + biases2
valid_prediction = tf.nn.softmax(valid_logits2)
test_logits1 = tf.matmul(tf_test_dataset, weights1) + biases1
test_activations1 = tf.nn.relu(test_logits1)
test_logits2 = tf.matmul(test_activations1, weights2) + biases2
test_prediction = tf.nn.softmax(test_logits2)
num_steps = 3000
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps+1):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Train accuracy: %.1f%%" % accuracy(train_prediction.eval(), train_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
End of explanation
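# For reference, nn.l2_loss(t) is sum(t ** 2) / 2 -- no square root -- so the
# regularization factors above scale exactly that quantity. A two-line sanity check:
with tf.Session() as check_session:
    print(check_session.run(tf.nn.l2_loss(tf.constant([3.0, 4.0]))))   # (9 + 16) / 2 = 12.5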
num_steps = 3000
furthest_training_example_idx = 1000
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps+1):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (furthest_training_example_idx - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Train accuracy: %.1f%%" % accuracy(train_prediction.eval(), train_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
End of explanation
batch_size = 128
hidden_layer_length = 1024
regularization_factor1 = 0.01
regularization_factor2 = 0.01
dropout_keep_prob = 0.2
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
tf_regularization_factor1 = tf.constant(regularization_factor1)
tf_regularization_factor2 = tf.constant(regularization_factor2)
tf_dropout_keep_prob = tf.constant(dropout_keep_prob)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_layer_length]))
biases1 = tf.Variable(tf.zeros([hidden_layer_length]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_layer_length, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
train_logits1 = tf.matmul(tf_train_dataset, weights1) + biases1
train_activations1 = tf.nn.relu(train_logits1)
dropped_train_activations1 = tf.nn.dropout(train_activations1, tf_dropout_keep_prob)
dropped_train_logits2 = tf.matmul(dropped_train_activations1, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(dropped_train_logits2, tf_train_labels)) + \
tf_regularization_factor1*tf.nn.l2_loss(weights1) + \
tf_regularization_factor2*tf.nn.l2_loss(weights2)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
# Predictions for the training, validation, and test data.
undropped_train_logits2 = tf.matmul(train_activations1, weights2) + biases2
train_prediction = tf.nn.softmax(undropped_train_logits2)
valid_logits1 = tf.matmul(tf_valid_dataset, weights1) + biases1
valid_activations1 = tf.nn.relu(valid_logits1)
valid_logits2 = tf.matmul(valid_activations1, weights2) + biases2
valid_prediction = tf.nn.softmax(valid_logits2)
test_logits1 = tf.matmul(tf_test_dataset, weights1) + biases1
test_activations1 = tf.nn.relu(test_logits1)
test_logits2 = tf.matmul(test_activations1, weights2) + biases2
test_prediction = tf.nn.softmax(test_logits2)
num_steps = 3000
furthest_training_example_idx = 1000
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps+1):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (furthest_training_example_idx - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Train accuracy: %.1f%%" % accuracy(train_prediction.eval(), train_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
End of explanation |
2,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter() # bag of words here
for idx, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word:index for index,word in enumerate(vocab)} ## create the word-to-index dictionary here
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vec = np.zeros(len(vocab), dtype=int)
for word in text.split(' '):
if word2idx.get(word, None):
word_vec[word2idx[word]] += 1
return word_vec
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
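# A tiny illustration of what to_categorical does to the 0/1 sentiment labels
# (the exact dtype of the output may vary by TFLearn version):
to_categorical([0, 1, 1], 2)   # 0 -> [1, 0] (negative), 1 -> [0, 1] (positive)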
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000])
net = tflearn.fully_connected(net, 10, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
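# To make the "you get to choose" point concrete, here is one possible variation on
# build_model() -- an extra hidden layer and a smaller learning rate. These numbers
# are illustrative knobs, not a recommended answer; whatever you pick still has to
# be judged against the validation set.
def build_wider_model():
    tf.reset_default_graph()
    net = tflearn.input_data([None, 10000])
    net = tflearn.fully_connected(net, 200, activation='ReLU')
    net = tflearn.fully_connected(net, 25, activation='ReLU')
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.05,
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)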
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
2,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The number of hidden nodes controls how complex a pattern the model can represent. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step11: OPTIONAL | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x: 1 / (1 + np.exp(-x))  # Sigmoid activation function
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(error, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1.0 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += (self.lr * delta_weights_h_o) / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += (self.lr * delta_weights_i_h) / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
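For reference, the output activation is the identity function, so its derivative is a constant:
$$f(x) = x \quad \Rightarrow \quad f'(x) = 1$$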
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.9
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The number of hidden nodes controls how complex a pattern the model can represent. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
The model predicts really well for "normal" days (the beginning of the month).
It doesn't perform as well at the end of December, where the behavior doesn't follow the usual trend because of the holidays.
I expect that with more data from this period in other years, the model would adjust and predict those days better.
Explanation: OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
End of explanation |
2,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the NYC Subway Dataset
Section 1. Statistical Test
1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?
Given random draws x from the population of people that ride the subway when it rains and y from the population of people that ride the subway when it does not rain, the standard two-tailed hypotheses are as follows
Step1: 1.3 What results did you get from this statistical test? These should include the following numerical values
Step2: 1.4 What is the significance and interpretation of these results?
The p-value is below the significance value ($\alpha = 0.05$). Thus, the results obtained reject the null hypothesis with a significance level of 0.05. This means that the number of passengers on rainy days is different from the number observed on non-rainy days.
The following statistics support our test
Step4: Section 2. Linear Regression
2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model
Step5: 2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?
I have used rain, precipi, Hour, meantempi and UNIT. UNIT was transformed into dummy variables.
2.3 Why did you select these features in your model? We are looking for specific reasons that lead you to believe that
the selected features will contribute to the predictive power of your model.
Your reasons might be based on intuition. For example, response for fog might be
Step6: 2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model?
Step7: 2.5 What is your model’s R2 (coefficients of determination) value?
Step8: 2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value?
The coefficient of determination, $R^2$, tells us the proportion of the variance in the dependent variable (Entries per hour) that is explained by the predictor features.
When $R^2$ is close to 1, it means that the model has very good fitness, while when it is close to 0, the model does not fit at all.
We have an $R^2$ of 0.46, which means that 46% of the variance in the data is explained by the regression model.
In addition, we should be evaluating our model with data that was not used to train the model. Even if we get a good score, our model might be overfitting.
If we look at our coefficients, we can see that rain and meantempi have a negative impact on Entries per hour, while precipi, Hour, and Fog have a positive impact.
In other words, the model explains 46% of the variance while attributing a negative effect to rain.
Section 3. Visualization
Please include two visualizations that show the relationships between two or more variables in the NYC subway data.
Remember to add appropriate titles and axes labels to your plots. Also, please add a short description below each figure commenting on the key insights depicted in the figure.
3.1 One visualization should contain two histograms
Step9: Although the maximum value of ENTRIESn_hourly is above 50000, from the histogram we see that most values are below 10000. Thus, let's generate a histogram limited to 10000 entries.
Step10: 3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like. Some suggestions are
Step11: Section 4. Conclusion
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
4.1 From your analysis and interpretation of the data, do more people ride the NYC subway when it is raining or when it is not raining?
The number of people that ride the NYC subway differs between rainy and non-rainy days, but the analysis does not make clear which type of day has more ridership.
4.2 What analyses lead you to this conclusion? You should use results from both your statistical tests and your linear regression to support your analysis.
The Mann-Whitney U-statistic was able to reject the null hypothesis with a significance level of 0.05.
When we look at the distributions, we see that the maximum value of ridership per hour is much higher on rainy days (51839 against 43199).
The histograms are not able to produce a good visualization to compare the distributions since there are more tuples for non-rainy days. Perhaps some normalization would help in further analysis.
Nevertheless, when we look at our linear regression model with $R^2=0.46$, the coefficient for rain has a negative value (-39.307), which means that ridership is negatively associated with the presence of rain. This might happen due to correlation or causal links between rain and other features. E.g., rain might have some correlation with fog, which might also affect ridership.
Section 5. Reflection
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
5.1 Please discuss potential shortcomings of the methods of your analysis, including | Python Code:
print ggplot(turnstile_weather, aes(x='ENTRIESn_hourly')) +\
geom_histogram(binwidth=1000,position="identity") +\
scale_x_continuous(breaks=range(0, 60001, 10000), labels = range(0, 60001, 10000))+\
facet_grid("rain")+\
ggtitle('Distribution of ENTRIESn_hourly in non-rainy days (0.0) and rainy days(1.0)')
Explanation: Analyzing the NYC Subway Dataset
Section 1. Statistical Test
1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?
Given random draws x from the population of people that ride the subway when it rains and y from the population of people that ride the subway when it does not rain, the standard two-tailed hypotheses are as follows:
$H0: P(x \gt y) = 0.5$
$H1: P(x \gt y) \neq 0.5$
The test used is Mann-Whitney U-statistic, and a two-tail P value is used.
The p-critical value is 0.05.
1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples.
Sample size is greater than 20
Distribution of samples is not normal (see histograms)
Samples are independent
End of explanation
### YOUR CODE HERE ###
df_with_rain = turnstile_weather[turnstile_weather['rain']==1]
df_without_rain = turnstile_weather[turnstile_weather['rain']==0]
with_rain_mean = df_with_rain['ENTRIESn_hourly'].mean()
without_rain_mean = df_without_rain['ENTRIESn_hourly'].mean()
U, p = scipy.stats.mannwhitneyu(df_with_rain['ENTRIESn_hourly'], df_without_rain['ENTRIESn_hourly'])
print "mean_with_rain=%f mean_without_rain=%f p-value=%.8f" %(with_rain_mean, without_rain_mean, p*2)
Explanation: 1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test.
End of explanation
print "Descriptive statistics for the ridership in rainy days"
df_with_rain['ENTRIESn_hourly'].describe()
print "Descriptive statistics for the ridership in non-rainy days"
df_without_rain['ENTRIESn_hourly'].describe()
Explanation: 1.4 What is the significance and interpretation of these results?
The p-value is below the significance value ($\alpha = 0.05$). Thus, the results obtained reject the null hypothesis with a significance level of 0.05. This means that the number of passengers on rainy days is different from the number observed on non-rainy days.
The following statistics support our test:
End of explanation
def linear_regression(features, values):
Perform linear regression given a data set with an arbitrary number of features.
This can be the same code as in the lesson #3 exercise.
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(features, values)
return regr.intercept_, regr.coef_
def predictions(dataframe):
'''
The NYC turnstile data is stored in a pandas dataframe called weather_turnstile.
Using the information stored in the dataframe, let's predict the ridership of
the NYC subway using linear regression with ordinary least squares.
You can download the complete turnstile weather dataframe here:
https://www.dropbox.com/s/meyki2wl9xfa7yk/turnstile_data_master_with_weather.csv
Your prediction should have a R^2 value of 0.40 or better.
You need to experiment using various input features contained in the dataframe.
We recommend that you don't use the EXITSn_hourly feature as an input to the
linear model because we cannot use it as a predictor: we cannot use exits
counts as a way to predict entry counts.
Note: Due to the memory and CPU limitation of our Amazon EC2 instance, we will
give you a random subet (~10%) of the data contained in
turnstile_data_master_with_weather.csv. You are encouraged to experiment with
this exercise on your own computer, locally. If you do, you may want to complete Exercise
8 using gradient descent, or limit your number of features to 10 or so, since ordinary
least squares can be very slow for a large number of features.
If you receive a "server has encountered an error" message, that means you are
hitting the 30-second limit that's placed on running your program. Try using a
smaller number of features.
'''
################################ MODIFY THIS SECTION #####################################
# Select features. You should modify this section to try different features! #
# We've selected rain, precipi, Hour, meantempi, and UNIT (as a dummy) to start you off. #
# See this page for more info about dummy variables: #
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html #
##########################################################################################
features = dataframe[['rain', 'precipi', 'Hour', 'meantempi', 'fog']]
dummy_units = pandas.get_dummies(dataframe['UNIT'], prefix='unit')
features = features.join(dummy_units)
# Values
values = dataframe['ENTRIESn_hourly']
# Perform linear regression
intercept, params = linear_regression(features, values)
predictions = intercept + numpy.dot(features, params)
return predictions, intercept, params
predicted, intercept, params = predictions(turnstile_weather)
values = turnstile_weather['ENTRIESn_hourly']
(turnstile_weather['ENTRIESn_hourly'] - predicted).hist(bins=20)
print "R2 Score=%f"%r2_score(values, predicted)
Explanation: Section 2. Linear Regression
2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model:
OLS using Scikit Learn
End of explanation
print "Correlation analysis"
turnstile_weather.corr()['ENTRIESn_hourly'].sort_values(inplace=False)
# plt.rcParams['figure.figsize'] = (12.0, 3.0)
# dtypes = turnstile_weather.dtypes
# for column in turnstile_weather.columns:
# if dtypes[column] in ['int64', 'float64']:
# plt.figure()
# turnstile_weather[column].hist(bins=20)
# #turnstile_weather.plot(kind='kde', x=column)
# plt.title(column)
# plt.rcParams['figure.figsize'] = (16.0, 8.0)
Explanation: 2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?
I have used rain, precipi, Hour, meantempi and UNIT. UNIT was transformed into dummy variables.
2.3 Why did you select these features in your model? We are looking for specific reasons that lead you to believe that
the selected features will contribute to the predictive power of your model.
Your reasons might be based on intuition. For example, response for fog might be: “I decided to use fog because I thought that when it is very foggy outside people might decide to use the subway more often.”
Your reasons might also be based on data exploration and experimentation, for example: “I used feature X because as soon as I included it in my model, it drastically improved my R2 value.”
We know that weather, namely precipitation, affects the $\mu_{passengers}$. Thus I have included rain, precipi, meantempi and fog. From the correlation analysis below we can also see that Hour is the most correlated valid feature. For this reason Hour was also included in the input features.
End of explanation
features=['rain', 'precipi', 'Hour', 'meantempi', 'fog']
print "== Non-dummy features coefficients =="
for i in range(5):
output_str = ("%s:"%features[i]).ljust(12)
output_str += "%.3f"%(params[i])
print output_str
Explanation: 2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model?
End of explanation
r_squared = 1 - ((values-predicted)**2).sum()/((values-values.mean())**2).sum()
assert(r_squared == r2_score(values, predicted))
print "R2 Score=%f"%r_squared
Explanation: 2.5 What is your model’s R2 (coefficients of determination) value?
End of explanation
print ggplot(aes(x='ENTRIESn_hourly', fill='rain'), data=turnstile_weather) +\
geom_histogram(binwidth=1000) +\
ggtitle('Ridership per hour distribution for rainy and non-rainy days') +\
ylab('Number of tuples')
print "ENTRIESn_hourly max value: %d"%turnstile_weather['ENTRIESn_hourly'].max()
Explanation: 2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value?
The coefficient of determination, $R^2$, tells us the proportion of the variance in the dependent variable (Entries per hour) that is explained by the predictor features.
When $R^2$ is close to 1, it means that the model has very good fitness, while when it is close to 0, the model does not fit at all.
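For reference, $R^2$ compares the residual sum of squares to the total sum of squares, which is exactly how it is computed by hand in the code cell above:
$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$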
We have an $R^2$ of 0.46, which means that 46% of the variance in the data is explained by the regression model.
In addition, we should be evaluating our model with data that was not used to train the model. Even if we get a good score, our model might be overfitting.
If we look at our coefficients, we can see that rain and meantempi have a negative impact on Entries per hour, while precipi, Hour, and Fog have a positive impact.
In other words, the model explains 46% of the variance while attributing a negative effect to rain.
Section 3. Visualization
Please include two visualizations that show the relationships between two or more variables in the NYC subway data.
Remember to add appropriate titles and axes labels to your plots. Also, please add a short description below each figure commenting on the key insights depicted in the figure.
3.1 One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days.
You can combine the two histograms in a single plot or you can use two separate plots.
If you decide to use two separate plots for the two histograms, please ensure that the x-axis limits for both of the plots are identical. It is much easier to compare the two in that case.
For the histograms, you should have intervals representing the volume of ridership (value of ENTRIESn_hourly) on the x-axis and the frequency of occurrence on the y-axis. For example, each interval (along the x-axis), the height of the bar for this interval will represent the number of records (rows in our data) that have ENTRIESn_hourly that falls in this interval.
Remember to increase the number of bins in the histogram (by having larger number of bars). The default bin width is not sufficient to capture the variability in the two samples.
R:
The following visualization has two histograms combined in a single plot. The histogram in red shows the ridership-per-hour distribution for non-rainy days, while the histogram in blue shows it for rainy days. We can see that non-rainy days have bigger bars for ENTRIESn_hourly below 10000. This doesn't mean rainy days have fewer passengers. It just means that we have less data for rainy days, which is natural since we have fewer rainy days.
End of explanation
print ggplot(aes(x='ENTRIESn_hourly', fill='rain'), data=turnstile_weather) +\
geom_histogram(binwidth=100) +\
xlim(0, 10000)+\
ggtitle('Ridership per hour distribution for rainy and non-rainy days limited to 10000') +\
ylab('Number of tuples')
# print ggplot(aes(x='ENTRIESn_hourly', color='rain'), data=turnstile_weather) +\
# geom_density() +\
# ggtitle('Ridership per hour distribution for rainy and non-rainy days limited to 10000') +\
# ylab('Number of tuples')
Explanation: Although the maximum value of ENTRIESn_hourly is above 50000, from the histogram we see that most values are below 10000. Thus, let's generate a histogram limited to 10000 entries.
End of explanation
print ggplot(turnstile_weather, aes(x='Hour', y='ENTRIESn_hourly'))+geom_bar(stat = "summary", fun_y=numpy.mean, fill='lightblue')+ggtitle('Average ridership by time-of-day')
Explanation: 3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like. Some suggestions are:
Ridership by time-of-day
Ridership by day-of-week
R:
The following plot shows the average number of passengers per hour in our dataset. We can see that, on average, 8pm, 12pm, and 4pm are the times of day with the most passengers.
End of explanation
print pandas.to_datetime(turnstile_weather['DATEn']).describe()
Explanation: Section 4. Conclusion
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
4.1 From your analysis and interpretation of the data, do more people ride the NYC subway when it is raining or when it is not raining?
The number of people that ride the NYC subway differs between rainy and non-rainy days, but the analysis does not make clear which type of day has more ridership.
4.2 What analyses lead you to this conclusion? You should use results from both your statistical tests and your linear regression to support your analysis.
The Mann-Whitney U-statistic was able to reject the null hypothesis with a significance level of 0.05.
When we look at the distributions, we see that the maximum value of ridership per hour is much higher on rainy days (51839 against 43199).
The histograms are not able to produce a good visualization to compare the distributions since there are more tuples for non-rainy days. Perhaps some normalization would help in further analysis.
Nevertheless, when we look at our linear regression model with $R^2=0.46$, the coefficient for rain has a negative value (-39.307), which means that ridership is negatively associated with the presence of rain. This might happen due to correlation or causal links between rain and other features. E.g., rain might have some correlation with fog, which might also affect ridership.
Section 5. Reflection
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
5.1 Please discuss potential shortcomings of the methods of your analysis, including: Dataset, Analysis, such as the linear regression model or statistical test.
Regarding the linear regression, this method is not robust against correlated features. The use of correlated features might be reducing the quality of our model and conclusions.
Although our test rejected the null hypothesis, we can't assume a causal relationship between rain and ridership. There may be another condition that affects both variables.
End of explanation |
2,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Samples and Results
Setup
Step1: Connection
s is a Session object representing a connection to the Ovation API
Step2: Organization id required for all calls.
Step3: Workflow samples
Collect all samples from a Workflow by ID (this would, for example give all samples in a sequencing workflow after batch creation).
Step4: Sample results
Collect all WorkflowSampleResults for samples in the batch of type library-dilution | Python Code:
import uuid
from pprint import pprint
from datetime import date
from ovation.session import connect_lab
Explanation: Samples and Results
Setup
End of explanation
s = connect_lab(input("Email: "), api='https://lab-services-staging.ovation.io')
Explanation: Connection
s is a Session object representing a connection to the Ovation API
End of explanation
organization_id = input('Organization id: ')
Explanation: Organization id required for all calls.
End of explanation
workflow_id = input('Workflow ID: ')
samples = s.get(s.path('samples'),
params={'workflow_id': workflow_id, 'organization_id': organization_id})
sample_ids = [s.id for s in samples.samples]
Explanation: Workflow samples
Collect all samples from a Workflow by ID (this would, for example give all samples in a sequencing workflow after batch creation).
End of explanation
for sample_id in sample_ids:
sample_results = s.get(s.path('workflow_sample_results'),
params={'sample_id': sample_id, 'result_type': 'library-dilution'})
sample_results
sample_ids
Explanation: Sample results
Collect all WorkflowSampleResults for samples in the batch of type library-dilution
End of explanation |
2,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting data from database
I set up a Postgres database on Amazon Web Services and used a pandasql engine to write SQL queries directly from Python. sqlalchemy_conn is script containing the connection details and includes my hostname and password to connect to the database.
I was most interested in goals 1, 5, 6, and 8.
Goal 1
Step1: Selecting targets to investigate & series to use
I was particularly interested in Goal 6 and Target 6.A
Step2: Selecting the data
The series associated with target ids 1, 7, 12, 13, 14, 15, 19, and 20 seemed like they may be helpful in answering my questions. I queried the database again and saved these as a dataframe.
Step3: I wanted to use data for all countries, but the former Sudan was giving me errors when I tried to unstack and reshape the data. I took a look at what series data was available for the former Sudan.
Step4: Exclude former Sudan data
No data was available for the main variable I planned to explore, the HIV incidence rate, so I excluded the former Sudan rows from my dataframe.
Step5: Simplify dataframe
I looked at the included columns and simplified the dataframe to include only variables I intended to use.
Step6: Unstack the dataframe
The dataframe included a column called 'value' which contained the value available for the many different series. In order to explore and make predictions I unstacked the dataframe, turning the value column into multiple columns associated with the corresponding series name.
I started by extracting the values for the series 'AIDS deaths' and then looped through a list of the other series, merging them with the reshaped dataframe.
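As an aside, a hypothetical one-step alternative to the merge loop would be pivot_table (a sketch only; note that pivot_table aggregates duplicate key/series combinations, taking the mean by default):
# Sketch of an equivalent reshape in one call (not the approach used below)
keys = ['countryname', 'iso3code', 'year', 'isdeveloped', 'mdgregions',
        'ismdgcountry', 'gdppc2012', 'population2012', 'isldc2014', 'islldc']
wide = unsimple.pivot_table(index=keys, columns='seriesname', values='value').reset_index()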
Step7: Add more series from World Bank MDG database
I found more variables which I wanted to include in my analysis on the World Bank database.
The years for this data were individual columns, so I needed to transform it into a shape similar to the original data in order to unstack and merge it with my reshaped dataframe.
Step8: Add columns to new dataframe
In order to properly merge the data provided by the UN with the data downloaded from the World Bank, the two dataframes need to have the same key columns. In order to add these columns to the new dataframe, I created a dictionary using the ISO code as a key and storing the values for the columns to add.
Step9: Rename and remove countries
A few countries in the World Bank data had ISO codes that differed from those in the original UN dataset, possibly because they are outdated codes. I replaced these with the codes used in the UN dataset.
A number of countries from the new World Bank data were not present in the original UN dataset. I removed these to avoid errors.
Step10: These loops add the columns 'countryname', 'isdeveloped', 'mdgregions', 'ismdgcountry', 'gdppc2012', 'population2012', 'isldc2014', and 'islldc' to the World Bank dataframe so that it can be properly merged with the UN dataframe.
Step11: Merge World Bank data with reshaped dataframe
Step12: Save processed data
Finally, I saved the reshaped and merged dataframe with pickle for use in exploration in another ipython notebook. | Python Code:
query = 'SELECT DISTINCT targetname FROM undata WHERE goalid = 1'
targets1 = pd.read_sql(query, engine)
for target in targets1['targetname']:
print target
query = 'SELECT DISTINCT targetname FROM undata WHERE goalid = 5'
targets5 = pd.read_sql(query, engine)
for target in targets5['targetname']:
print target
query = 'SELECT DISTINCT targetname FROM undata WHERE goalid = 6'
targets6 = pd.read_sql(query, engine)
for target in targets6['targetname']:
print target
query = 'SELECT DISTINCT targetname FROM undata WHERE goalid = 8'
targets8 = pd.read_sql(query, engine)
for target in targets8['targetname']:
print target
Explanation: Getting data from database
I set up a Postgres database on Amazon Web Services and used a pandasql engine to write SQL queries directly from Python. sqlalchemy_conn is script containing the connection details and includes my hostname and password to connect to the database.
I was most interested in goals 1, 5, 6, and 8.
Goal 1: Eradicate extreme poverty and hunger
Goal 5: Improve maternal health
Goal 6: Combat HIV/AIDS, malaria, and other diseases
Goal 8: Develop a global partnership for development
I first queried the database to see the specific targets associated with each of these goals.
End of explanation
# PRINT FULL SERIES NAMES FOR GOAL 1
query = "SELECT DISTINCT targetid, seriesrowid, seriesname FROM undata WHERE goalid = 1;"
series1 = pd.read_sql(query, engine)
for name in series1['seriesname']:
print name
# PRINT TABLE
series1
# PRINT FULL SERIES NAMES FOR GOAL 5
query = "SELECT DISTINCT targetid, seriesrowid, seriesname FROM undata WHERE goalid = 5;"
series5 = pd.read_sql(query, engine)
for name in series5['seriesname']:
print name
# PRINT TABLE
series5
# PRINT FULL SERIES NAMES FOR GOAL 6
query = "SELECT DISTINCT targetid, seriesrowid, seriesname FROM undata WHERE goalid = 6;"
series6 = pd.read_sql(query, engine)
for name in series6['seriesname']:
print name
# PRINT TABLE
series6
# PRINT FULL SERIES NAMES FOR GOAL 8
query = "SELECT DISTINCT targetid, seriesrowid, seriesname FROM undata WHERE goalid = 8;"
series8 = pd.read_sql(query, engine)
for name in series8['seriesname']:
print name
# PRINT TABLE
series8
Explanation: Selecting targets to investigate & series to use
I was particularly interested in Goal 6 and Target 6.A: Have halted by 2015 and begun to reverse the spread of HIV/AIDS. My questions:
What progress has been made in halting the spread of HIV/AIDS?
What factors may have contributed to this progress?
I felt that there may be data associated with goals 1, 5, and 8 which could also help me answer these questions. I wanted to examine the relationships between the reduction of HIV/AIDS with a reduction in poverty, promoting maternal health, and development assistance.
Next, I queried the database to see which series were available, keeping note of the target id for series which seemed like they might help answer my questions.
End of explanation
query = "SELECT * FROM undata WHERE targetid IN (1, 7, 12, 13, 14, 15, 19, 20);"
undata = pd.read_sql(query, engine)
undata.shape
Explanation: Selecting the data
The series associated with target ids 1, 7, 12, 13, 14, 15, 19, and 20 seemed like they may be helpful in answering my questions. I queried the database again and saved these as a dataframe.
End of explanation
print len(undata[undata.isformer == 1])
print len(set(undata[undata.isformer == 1]['seriesname']))
print set(undata[undata.isformer == 1]['seriesname'])
Explanation: I wanted to use data for all countries, but the former Sudan was giving me errors when I tried to unstack and reshape the data. I took a look at what series data was available for the former Sudan.
End of explanation
undata = undata[undata.isformer == 0]
undata = undata.drop(['isformer'], axis=1)
Explanation: Exclude former Sudan data
No data was available for the main variable I planned to explore, the HIV incidence rate, so I excluded the former Sudan rows from my dataframe.
End of explanation
columns = [column for column in undata.columns]
columns
unsimple = undata[['countryname', 'iso3code', 'year', 'isdeveloped', 'mdgregions',
'isldc2014', 'islldc', 'ismdgcountry', 'seriesname', 'gdppc2012',
'population2012', 'value']]
unsimple.describe()
unsimple.shape
Explanation: Simplify dataframe
I looked at the included columns and simplified the dataframe to include only variables I intended to use.
End of explanation
all_series = list(set(unsimple['seriesname']))
row = 'AIDS deaths'
all_series.remove(row)
unreshape = deepcopy(unsimple[unsimple['seriesname'] == row])
unreshape.rename(columns={'value': row}, inplace=True)
unreshape.shape
for series in all_series:
new_cols = deepcopy(unsimple[unsimple['seriesname'] == series])
new_cols.rename(columns={'value': series}, inplace=True)
new_cols = new_cols.drop(['seriesname'], axis=1)
keys = ['countryname', 'iso3code', 'year', 'isdeveloped', 'mdgregions',
'ismdgcountry', 'gdppc2012', 'population2012', 'isldc2014', 'islldc']
unreshape = pd.merge(unreshape, new_cols, how='outer', on=keys)
unreshape.shape
Explanation: Unstack the dataframe
The dataframe included a column called 'value' which contained the value available for the many different series. In order to explore and make predictions I unstacked the dataframe, turning the value column into multiple columns associated with the corresponding series name.
I started by extracting the values for the series 'AIDS deaths' and then looped through a list of the other series, merging them with the reshaped dataframe.
End of explanation
undata_new = pd.read_csv('data/Data_Extract_From_Millennium_Development_Goals_Data.csv')
list(set(undata_new['Series Name']))
undata_new.head()
# drop series code
undata_new = undata_new.drop(['Series Code'], axis = 1)
# rename columns
years = range(1990, 2015)
undata_new.columns = ['countryname', 'iso3code', 'seriesname'] + years
undata_new.head()
# set a multi-level index and stack the data
undata_new = undata_new.set_index(keys=['countryname', 'iso3code', 'seriesname'])
undata_new = pd.DataFrame(undata_new.stack(level=-1))
# rename value column
undata_new.columns = ['value']
undata_new.head()
# reset the index to turn these variables back into columns
undata_new = undata_new.reset_index()
# rename columns
undata_new.columns = ['countryname', 'iso3code', 'seriesname', 'year', 'value']
undata_new.head()
Explanation: Add more series from World Bank MDG database
I found more variables which I wanted to include in my analysis on the World Bank database.
The years for this data were individual columns, so I needed to transform it into a shape similar to the original data in order to unstack and merge it with my reshaped dataframe.
End of explanation
# columns to add
cols = ['countryname', 'isdeveloped', 'mdgregions', 'ismdgcountry', 'gdppc2012',
'population2012', 'isldc2014', 'islldc']
add_cols = {}
for code in set(unreshape['iso3code']):
country_cols = {}
for col in cols:
try:
temp = list(unreshape[unreshape['iso3code'] == code][col])[0]
country_cols[col] = temp
except IndexError:
country_cols[col] = np.NaN
add_cols[code] = country_cols
add_cols['AFG']
Explanation: Add columns to new dataframe
In order to properly merge the data provided by the UN with the data downloaded from the World Bank, the two dataframes need to have the same key columns. In order to add these columns to the new dataframe, I created a dictionary using the ISO code as a key and storing the values for the columns to add.
End of explanation
undata_new['iso3code'] = undata_new['iso3code'].replace('ZAR', 'COD')
undata_new['iso3code'] = undata_new['iso3code'].replace('TMP', 'TLS')
missing_data = list(set(undata_new['iso3code']).difference(set(unreshape['iso3code'])))
sorted(missing_data)
undata_new.shape
for country in missing_data:
undata_new = undata_new[undata_new['iso3code'] != country]
undata_new.shape
Explanation: Rename and remove countries
A few countries in the World Bank data had ISO codes that differed from those in the original UN dataset, possibly because they are outdated codes. I replaced these with the codes used in the UN dataset.
A number of countries from the new World Bank data were not present in the original UN dataset. I removed these to avoid errors.
End of explanation
for col in cols:
new_col = []
for code in undata_new['iso3code']:
new_col.append(add_cols[code][col])
undata_new[col] = new_col
for col in cols:
new_col = []
for code in unreshape['iso3code']:
new_col.append(add_cols[code][col])
unreshape[col] = new_col
undata_new.shape
undata_new.dtypes
Explanation: These loops add the columns 'countryname', 'isdeveloped', 'mdgregions', 'ismdgcountry', 'gdppc2012', 'population2012', 'isldc2014', and 'islldc' to the World Bank dataframe so that it can be properly merged with the UN dataframe.
End of explanation
for series in set(undata_new['seriesname']):
new_cols = deepcopy(undata_new[undata_new['seriesname'] == series])
new_cols.rename(columns={'value': series}, inplace=True)
new_cols = new_cols.drop(['seriesname'], axis=1)
keys = ['countryname', 'iso3code', 'year', 'isdeveloped', 'mdgregions',
'ismdgcountry', 'gdppc2012', 'population2012', 'isldc2014', 'islldc']
unreshape = pd.merge(unreshape, new_cols, how='outer', on=keys)
unreshape.shape
# replace '..' values with NaN
unreshape = unreshape.replace('..', np.NaN)
# inspect how many observations are available for each variable
for col in sorted(unreshape.columns):
print unreshape[col].count(), col
# drop rows with no value for countryname or region
unreshape = unreshape.drop(pd.isnull(unreshape[['mdgregions', 'countryname']]).any(1).nonzero()[0])
# now that series have been transformed into columns, seriesname can be dropped
unreshape = unreshape.drop(['seriesname'], axis = 1)
Explanation: Merge World Bank data with reshaped dataframe
End of explanation
with open('un_reshape.pkl', 'w') as picklefile:
pickle.dump(unreshape, picklefile)
Explanation: Save processed data
Finally, I saved the reshaped and merged dataframe with pickle for use in exploration in another ipython notebook.
End of explanation |
2,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using custom containers with AI Platform Training
Learning Objectives
Step1: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name ends with the -kubeflowpipelines-default suffix.
Step2: Importing the dataset into BigQuery
Step3: Explore the Covertype dataset
Step4: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
Step5: Create a validation split
Exercise
In the first cell below, create
a validation split that takes 10% of the data using the bq command and
export this split into the BigQuery table covertype_dataset.validation.
In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH.
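One hypothetical way to think about the split (a sketch only, using the Python BigQuery client rather than the bq CLI the exercise asks for; the choice of hash bucket 9 for the 10% validation slice is illustrative):
# Sketch: hash-based sampling of roughly 10% of rows into a validation table
client = bigquery.Client(project=PROJECT_ID)
sampling_query = """
CREATE OR REPLACE TABLE covertype_dataset.validation AS
SELECT *
FROM covertype_dataset.covertype AS c
WHERE MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(c))), 10) = 9
"""
client.query(sampling_query).result()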
Step6: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
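A minimal sketch of such a pipeline (the column selections below are placeholders, not the exact ones used in the training script):
# Sketch: preprocessing plus an SGD classifier in a single sklearn Pipeline
numeric_features = ['Elevation', 'Aspect', 'Slope']          # placeholder subset
categorical_features = ['Wilderness_Area', 'Soil_Type']      # placeholder subset
preprocessor = ColumnTransformer(transformers=[
    ('num', StandardScaler(), numeric_features),
    ('cat', OneHotEncoder(), categorical_features),
])
pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', SGDClassifier(loss='log', tol=1e-3)),
])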
Step7: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
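A one-line sketch of that conversion (the dataframe and column list names are placeholders):
# Cast numeric feature columns to float64 to avoid StandardScaler dtype warnings
df_train = df_train.astype({name: 'float64' for name in numeric_features})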
Step8: Run the pipeline locally.
Step9: Calculate the trained model's accuracy.
Step10: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
Step11: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
Exercise
Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize
the hyperparameters.
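For context (a sketch of typical usage, not the completed exercise), the cloudml-hypertune package reports the metric along these lines:
import hypertune

# Report the model's accuracy back to the AI Platform hyperparameter tuning service
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='accuracy',
    metric_value=accuracy,    # assumed to hold the validation accuracy computed earlier
    global_step=1000)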
Step12: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using version 1.15 of the AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Exercise
Complete the Dockerfile below so that it copies the 'train.py' file into the container
at /app and runs it when the container is started.
Step13: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
Step14: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier
Step15: Start the hyperparameter tuning job.
Exercise
Use the gcloud command to start the hyperparameter tuning job.
Step16: Monitor the job.
You can monitor the job using GCP console or from within the notebook using gcloud commands.
Step17: Retrieve HP-tuning results.
After the job completes, you can review the results using the GCP Console or programmatically by calling the AI Platform Training REST endpoint.
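A sketch of the programmatic route using the googleapiclient discovery client that this notebook imports (JOB_NAME is a placeholder for your tuning job's name):
# Fetch the completed job and inspect its hyperparameter tuning trials
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, 'JOB_NAME')
response = ml.projects().jobs().get(name=job_id).execute()
trials = response['trainingOutput']['trials']    # sorted by the optimization metric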
Step18: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
Step19: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset.
Configure and run the training job
Step20: Examine the training output
The training script saved the trained model as 'model.pkl' in the JOB_DIR folder on GCS.
Step21: Deploy the model to AI Platform Prediction
Create a model resource
Exercise
Complete the gcloud command below to create a model with
model_name in $REGION tagged with labels
Step22: Create a model version
Exercise
Complete the gcloud command below to create a version of the model
Step23: Serve predictions
Prepare the input file with JSON-formatted instances.
Step24: Invoke the model
Exercise
Using the gcloud command, send the data in $input_file to
your model deployed as a REST API | Python Code:
import json
import os
import pickle
import tempfile
import time
import uuid
from typing import NamedTuple
import numpy as np
import pandas as pd
from google.cloud import bigquery
from googleapiclient import discovery, errors
from jinja2 import Template
from kfp.components import func_to_container_op
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
Explanation: Using custom containers with AI Platform Training
Learning Objectives:
1. Learn how to create a train and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train it on CAIP
1. Learn how to use the hyperparameter tuning engine on GCP to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on GCP as a REST API and query it.
In this lab, you develop, package as a docker image, and run on AI Platform Training a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository.
The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with AI Platform hyperparameter tuning.
End of explanation
!gsutil ls
REGION = "us-central1"
ARTIFACT_STORE = "gs://hostedkfp-default-l2iv13wnek"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
os.environ["PROJECT_ID"] = PROJECT_ID
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = "{}/{}/{}".format(DATA_ROOT, "training", "dataset.csv")
VALIDATION_FILE_PATH = "{}/{}/{}".format(DATA_ROOT, "validation", "dataset.csv")
Explanation: Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update REGION and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name ends with the -kubeflowpipelines-default suffix.
End of explanation
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
Explanation: Importing the dataset into BigQuery
End of explanation
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
Explanation: Explore the Covertype dataset
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
End of explanation
# TODO: You code to create the BQ table validation split
# TODO: Your code to export the validation table to GCS
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
Explanation: Create a validation split
Exercise
In the first cell below, create
a validation split that takes 10% of the data using the bq command and
export this split into the BigQuery table covertype_dataset.validation.
In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH.
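One possible solution (a sketch, not the only valid one) simply mirrors the training-split cell above: it keeps the single hash bucket 8, which is one bucket out of ten and therefore roughly 10% of the data, and then exports the resulting table to $VALIDATION_FILE_PATH:
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH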
End of explanation
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses a stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler, all numeric features are converted to float64.
End of explanation
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
Explanation: Run the pipeline locally.
End of explanation
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
Explanation: Calculate the trained model's accuracy.
End of explanation
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TODO: Score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
Exercise
Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize
the hyperparameters.
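A minimal sketch of the missing branch is shown below. It scores the fitted pipeline on the validation split and reports the score through the cloudml-hypertune package under the tag accuracy, which is the tag the hptuning_config.yaml file later refers to; variable names follow the surrounding script.
if hptune:
    X_validation = df_validation.drop('Cover_Type', axis=1)
    y_validation = df_validation['Cover_Type']
    accuracy = pipeline.score(X_validation, y_validation)
    print('Model accuracy: {}'.format(accuracy))
    # Report the metric to the AI Platform hyperparameter tuning service
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='accuracy',
        metric_value=accuracy)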
End of explanation
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TODO
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Exercise
Complete the Dockerfile below so that it copies the 'train.py' file into the container
at /app and runs it when the container is started.
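One way to complete it (a sketch to append to the %%writefile cell above; adjust paths if your layout differs):
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]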
End of explanation
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
Explanation: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TODO: Your code goes here
Explanation: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures AI Platform hypertuning to run up to 4 trials, with up to 4 running in parallel, and to choose from two discrete values of max_iter and a linear range between 0.00001 and 0.001 for alpha.
Exercise
Complete the hptuning_config.yaml file below so that the hyperparameter
tuning engine tries the following parameter values:
* max_iter: the two values 200 and 300
* alpha: a linear range of values between 0.00001 and 0.001
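A possible params section (a sketch using the standard AI Platform hyperparameter spec fields) would be:
params:
- parameterName: max_iter
  type: DISCRETE
  discreteValues:
  - 200
  - 300
- parameterName: alpha
  type: DOUBLE
  minValue: 0.00001
  maxValue: 0.001
  scaleType: UNIT_LINEAR_SCALE
Indent the block in the file so that params sits under hyperparameters.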
End of explanation
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TODO\
--job-dir=# TODO \
--master-image-uri=# TODO \
--scale-tier=# TODO \
--config # TODO \
-- \
# TODO
Explanation: Start the hyperparameter tuning job.
Exercise
Use the gcloud command to start the hyperparameter tuning job.
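A sketch of one possible command is shown below. It reuses the variables defined in the cell above, points --config at the tuning file written earlier, and passes the arguments after -- straight to train.py (with --hptune enabling metric reporting):
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune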
End of explanation
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
Explanation: Monitor the job.
You can monitor the job using GCP console or from within the notebook using gcloud commands.
End of explanation
ml = discovery.build("ml", "v1")
job_id = f"projects/{PROJECT_ID}/jobs/{JOB_NAME}"
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
Explanation: Retrieve HP-tuning results.
After the job completes you can review the results using the GCP Console or programmatically by calling the AI Platform Training REST endpoint.
End of explanation
response["trainingOutput"]["trials"][0]
Explanation: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
End of explanation
alpha = response["trainingOutput"]["trials"][0]["hyperparameters"]["alpha"]
max_iter = response["trainingOutput"]["trials"][0]["hyperparameters"][
"max_iter"
]
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
!gsutil ls $JOB_DIR
Explanation: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
End of explanation
model_name = "forest_cover_classifier"
labels = "task=classifier,domain=forestry"
!gcloud # TODO: You code goes here
Explanation: Deploy the model to AI Platform Prediction
Create a model resource
Exercise
Complete the gcloud command below to create a model with
model_name in $REGION tagged with labels:
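A possible completion (a sketch assuming the standard gcloud ai-platform flags) is:
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels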
End of explanation
model_version = 'v01'
!gcloud # TODO \
--model=# TODO \
--origin=# TODO \
--runtime-version=# TODO \
--framework=# TODO \
--python-version=# TODO \
--region=global
Explanation: Create a model version
Exercise
Complete the gcloud command below to create a version of the model:
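A possible completion is sketched below. The runtime version matches the 1.15 serving runtime mentioned earlier; the Python version shown (3.7) is an assumption and may need to be adjusted for your environment:
!gcloud ai-platform versions create $model_version \
--model=$model_name \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7 \
--region=global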
End of explanation
input_file = "serving_instances.json"
with open(input_file, "w") as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write("\n")
!cat $input_file
Explanation: Serve predictions
Prepare the input file with JSON-formatted instances.
End of explanation
!gcloud # TODO: Complete the command
Explanation: Invoke the model
Exercise
Using the gcloud command, send the data in $input_file to
your model deployed as a REST API:
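A possible completion (a sketch using the standard gcloud ai-platform predict flags; drop --region=global if your gcloud version does not accept it) is:
!gcloud ai-platform predict \
--model=$model_name \
--version=$model_version \
--json-instances=$input_file \
--region=global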
End of explanation |
2,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the pregnancy file.
Step1: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
Step2: Display the CDF.
Step3: Find out how much you weighed at birth, if you can, and compute CDF(x).
Step4: If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
Step5: Compute the percentile rank of your birthweight
Compute the median birth weight by looking up the value associated with p=0.5.
Step6: Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
Step7: Make a random selection from <tt>cdf</tt>.
Step8: Draw a random sample from <tt>cdf</tt>.
Step9: Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
Step10: Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
Step11: Assuming that the PMF doesn't work very well, try plotting the CDF instead. | Python Code:
%matplotlib inline
import nsfg
preg = nsfg.ReadFemPreg()
import thinkstats2
import thinkplot
import numpy as np
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the pregnancy file.
End of explanation
live = preg[preg.outcome == 1]
print(live)
wgt_cdf = thinkstats2.Cdf(live.totalwgt_lb, label='')
Explanation: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
End of explanation
thinkplot.Cdf(wgt_cdf)
thinkplot.Show(xlabel='birthweight',
ylabel = 'CDF',
title = 'Cumulative Distribution of Birthweights')
Explanation: Display the CDF.
End of explanation
wgt_cdf.PercentileRank(8.2)
# wgt_cdf.PercentileRank(live.totalwgt_lb.mean())
Explanation: Find out how much you weighed at birth, if you can, and compute CDF(x).
End of explanation
others = live[live.pregordr > 1]
others_wgt_cdf = thinkstats2.Cdf(others.totalwgt_lb)
others_wgt_cdf.PercentileRank(8.2)
Explanation: If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
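For completeness, the first-child branch would look like the sketch below; it simply mirrors the cell above (same pregordr convention, same assumed 8.2 lb birthweight):
firsts = live[live.pregordr == 1]
firsts_wgt_cdf = thinkstats2.Cdf(firsts.totalwgt_lb)
firsts_wgt_cdf.PercentileRank(8.2)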
End of explanation
wgt_cdf.Value(0.5)
Explanation: Compute the percentile rank of your birthweight
Compute the median birth weight by looking up the value associated with p=0.5.
End of explanation
iqr = (wgt_cdf.Percentile(25), wgt_cdf.Percentile(75))
iqr
Explanation: Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
End of explanation
wgt_cdf.Random()
Explanation: Make a random selection from <tt>cdf</tt>.
End of explanation
wgt_cdf.Sample(10)
Explanation: Draw a random sample from <tt>cdf</tt>.
End of explanation
values = wgt_cdf.Sample(1000)
values_hist = thinkstats2.Hist(values, 'values')
ranks = [wgt_cdf.PercentileRank(v) for v in values]
ranks_hist = thinkstats2.Hist(ranks, 'ranks')
thinkplot.PrePlot(3, rows=3)
thinkplot.SubPlot(1)
thinkplot.Hist(values_hist, label='values Hist')
thinkplot.SubPlot(2)
values_cdf = thinkstats2.Cdf(values, label='values CDF')
thinkplot.Cdf(values_cdf)
thinkplot.SubPlot(3)
ranks_cdf = thinkstats2.Cdf(ranks, label='ranks CDF')
thinkplot.Cdf(ranks_cdf)
Explanation: Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
End of explanation
rand_vals = [np.random.random() for i in range(1000)]
rv_pmf = thinkstats2.Pmf(rand_vals, label="random values")
thinkplot.Hist(rv_pmf)
Explanation: Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
End of explanation
rv_cdf = thinkstats2.Cdf(rand_vals, label="random values")
thinkplot.Cdf(rv_cdf)
Explanation: Assuming that the PMF doesn't work very well, try plotting the CDF instead.
End of explanation |
2,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the Echo package
The "echo" subpackage of murraylab_tools is designed to automate the setup of reactions (mostly TX-TL reactions) with the Echo. The EchoRun class can produce picklists for the Echo, as well as instructions for loading a source plate to go with those instructions, when appropriate.
There are currently three main modes of operation of the Echo package, distinguished mainly by their inputs
Step1: ### More Information
The build_picklist_from_txtl_setup_csvs function is used for creating a picklist from a TX-TL setup spreadsheet (spreadsheet version 2.0 or later -- spreadsheets from before late 2016 will not work). You will need to feed it the names of two CSV files produced from the "recipe" sheet (the first sheet, containing explicit pipetting directions) and the "stocks" sheet (the second sheet, which details the concentrations of most of the materials used in the experiment). The standard workflow looks like
Step2: Note the warnings -- a lot of mistakes you might make will cause you to pipette 0 nL at a time, so the code will warn you if you do so. In this case, those 0 volume pipetting steps are normal -- you just added a material to 0 concentration. Similar warnings will appear if you under-fill a reaction.
The final write_picklist command produces two files -- an Echo picklist, and a plaintext experiment file to help you load the source plate. You can find both in the output folder:
The Echo picklist (2D_dilution_series/outputs/dilution_setup_example_EchoInput.csv; the rest of the dilution series examples output to the same folder)
Step3: ...then we can add, say, bananas, to a 2x2 square in the top-left corner of the reaction.
Step4: You can also add to a single well, if you really need to.
Step5: If you want something to be in the final reaction but want to add it to your destination plate by hand instead of having the echo pipette it (e.g., you might not want to echo the TX-TL master mix), you can set this in the add_material_to_block or add_material_to_well functions. If you set this flag, directions to add this material manually will be added to the experiment overview file.
Step6: Notice the warnings about underfilled reactions -- this is because we're only adding a little bit of stuff to the mix, and haven't added water or any other bulk solvent. You can fill up the remainder of a well with EchoRun.fill_well_with or EchoRun.fill_all_wells_with
Step7: More Information
The build_dilution_series function of EchoRun is useful for quickly building a grid of dilutions of two materials, one on each axis. This is useful for double titrations of, for example, plasmid vs. inducer, or titration of two inputs. If you want to do anything more complex, you'll probably need to move to a TX-TL setup spreadsheet.
This function will always output reactions in solid blocks, with the upper-left-most well specified by the starting well argument of build_dilution_series (last argument). The function will also add a negative control well one row below the last row of the dilution series block, aligned with its first column (if you don't want a negative control reaction, then call build_dilution_series with the flag negative_control = False). You will not get a positive control reaction -- you'll have to add that yourself, if you want it.
Note that you must manually define source materials for 2D dilution setup. An EchoSourceMaterial object has four attributes -- a name, a concentration, a length, and a plate object.
* The name can be whatever you want, but note that the names "water" and "txtl_mm" are reserved for water and TX-TL master mix, respectively. If you make one of your materials "water" or "txtl_mm", be aware that they're going to be assumed to actually be water and TX-TL master mix.
* Concentration and length attributes of an EchoSourceMaterial follow specific unit conventions. In brief, if the material is dsDNA, length is the number of base pairs and concentration is in units of ng/µL; otherwise, length is 0 and concentration is in units of nM. See "How it all works" above for more details.
* The EchoSourcePlate object should be the same EchoSourcePlate associated with the EchoRun object you're going to use.
If you want to set up multiple dilution series, you can call build_dilution_series multiple times on the same EchoRun object. All dilution series will be put on the same plate, and source wells for materials with the same names (including the master mix and water) will be combined. Just make sure to start each dilution series in a different well!
build_dilution_series requires that the EchoRun object have a source plate object associated with a source plate file. It will automatically assign wells to that source plate for all the materials required to run that reaction.
Association Spreadsheet
Quick Start Example
The following creates an Echo picklist for an automated PCR setup with three sets of primers to be applied individually to three different plasmids, as defined in a pair of CSV files (one defining the source plate, one describing what should go in the destination plates).
The source plate definition file (association_list/inputs/association_source_sheet.csv)
Step8: More Information
The build_picklist_from_association_spreadsheet function is used for arbitrary mappings of source plate wells to destination wells, using 1) a spreadsheet describing the contents of each source plate, and 2) a second spreadsheet describing what materials should be put in what wells, and at what concentration. You must always set up a source plate first. This should be done by calling load_source_plate with the name of the source plate spreadsheet and information about its organization. Alternatively, you could build your own manually. That is not recommended.
The first 'source' spreadsheet (describing the source plate) is a CSV-format spreadsheet with, at minimum, columns with the following information. You can add any number of additional columns to the source plate spreadsheet, which will be ignored by this function. This is the spreadsheet read in with the load_source_plate command.
* Location
Step9: To change the reaction volume
Step10: Make sure to run this before running build_picklist_from_association_spreadsheet or build_dilution_series. You almost certainly shouldn't do this at all when using build_picklist_from_txtl_setup_csvs, because that function will automatically extract a reaction volume from the setup spreadsheet.
To change the master mix composition/extract fraction
Step11: You can also add arbitrary components to the master mix. For example, the following code adds the dye DFHBI-1T to every well at a final concentration of 10 µM, from a 2 mM stock
Step12: To change buffer/extract aliquot size
Step13: To run a (dilution series) reaction without master mix
Step14: To change the source plate type/material type
Step15: To change the source plate name
Step16: To change the destination plate type
Step17: To change dead volume and max volume | Python Code:
import murraylab_tools.echo as mt_echo
import os.path
# Relevant input and output files. Check these out for examples of input file format.
txtl_inputs = os.path.join("txtl_setup", "inputs")
txtl_outputs = os.path.join("txtl_setup", "outputs")
stock_file = os.path.join(txtl_inputs, "TX-TL_setup_example_stocks.csv") # Source materials
recipe_file = os.path.join(txtl_inputs, "TX-TL_setup_example_recipe.csv") # Experimental setup
plate_file = os.path.join(txtl_inputs, "TX-TL_setup_example_plate.dat") # Keeps track of wells used
output_name = os.path.join(txtl_outputs, "TX-TL_setup_example") # Output (both a picklist and a
# small protocol for building the
# source plate)
# Build an EchoRun object
txtl_plate = mt_echo.SourcePlate(filename = plate_file)
txtl_echo_calculator = mt_echo.EchoRun(plate = txtl_plate)
# Describe the experiment
txtl_echo_calculator.build_picklist_from_txtl_setup_csvs(stock_file, recipe_file)
# Write results
txtl_echo_calculator.write_picklist(output_name)
Explanation: Welcome to the Echo package
The "echo" subpackage of murraylab_tools is designed to automate the setup of reactions (mostly TX-TL reactions) with the Echo. The EchoRun class can produce picklists for the Echo, as well as instructions for loading a source plate to go with those instructions, when appropriate.
There are currently three main modes of operation of the Echo package, distinguished mainly by their inputs:
- TX-TL Setup Spreadsheet: Takes a pair of CSV spreadsheets saved from a TX-TL setup spreadsheet (version 2.X). Use this for most TX-TL experiments.
- Programmatic Setup: You can build a TX-TL reaction (or, with a bit more difficulty, a non-TX-TL reaction) programmatically, without using a setup spreadsheet. There are a couple of functions for doing this, which can be combined:
- build_dilution_series: Takes two materials and an array of concentrations for each, and builds a 2D dilution series out of the two. Can also be used to set up 1D dilution series (by setting one of the materials to a dummy material and giving it the concentration list [0]).
- add_material_to_block: Adds a single ingredient to all wells in a rectangular block on the destination plate, at a fixed concentration.
- add_material_to_well: Like add_material_to_block, but to a single well.
- Association Spreadsheet: Takes one or more spreadsheets describing the contents of an Echo source plate, plus a simple spreadsheet describing final concentrations of the materials of those source plates in each destination. Use this for non-TX-TL experiments, or if you have a TX-TL experiment with more source materials than can be handled in a TX-TL setup spreadsheet.
A word of warning: The quick start examples will help you jump right into setting up an experiment, but they make a number of assumptions about your experiment. Some things you'll want to check before running your own:
- Reaction Size: Default is 5 µL.
- Buffer/Extract Fractions: Defaults are 0.42/0.33 (typical for French Press extracts).
- Buffer/Extract Aliquot Size: Defaults are 30µL/37µL.
- "Master Mix Excess: For accounting for pipetting loss. Default is 1.1 (10% excess).
- Master Mix Composition: Default is buffer and extract only. Really only relevant for dilution series.
- Source Plate Type: Default is 384_PP. You probably want this.
- Source Plate Material: Default is AQ_BP (buffer-like liquids).
- Destination Plate Type: Default is "Nunc_384_black_glassbottom".
- Destination Plate Size: The Echo package has no knowledge of the destination plate. There is no check to keep you from defining picks off the edge of the destination plate.
- Controls: TX-TL experiments come with a negative control. Most TX-TL setup spreadsheets also define a positive control.
- Dead Volume/Max Volume: Default dead volume is 21 µL, including loss to meniscus. Default max volume is 65 µL.
- Volume/Aliquot of Buffer and Extract: Default is 30 µL extract/aliquot and 37 µL buffer/aliquot.
Additionally, be aware that the TX-TL setup scripts keep track of which source plate wells they've used in a .dat file. If you run those experiments repeatedly, they'll fill up the file and eventually error out when they run out of wells on the plate.
You can read more about how the echo package works in the next section, "How it all works", or you can skip to the example usage sections below that. For more information on tweaking settings, see Tweaking Settings at the end of this notebook.
See the following notebooks for other Echo setup features:
* Reusable Plate Example: To re-use materials stored in a fixed position on a source plate.
* echo_dilution_series: To make 3D (or higher-dimension) dilution series.
How it all works
Every project begins with an EchoRun object. This object holds (nearly) all of the information about an experiment, and is ultimately responsible for coordinating plates and materials, and for actually writing picklists and experimental protocols.
Most EchoRun objects need a SourcePlate object. This object is responsible for keeping track of which wells have been used, and for assigning new wells on the source plate to materials that you'll want to transfer. It will try to assign wells in a way that keeps the same material in a contiguous block, buffered on either side by an empty well for ease of pipetting. SourcePlate objects are also typically associated with a .dat tracking file that lists which wells have been used on the plate. That way, you can use the same source plate over many experiments without having to manually program in which wells are forbidden. The SourcePlate object will read this file to learn what it has available to it, and will automatically write back to the same file whenever the EchoRun object controlling it writes a picklist.
Materials (DNA, chemicals, water, TX-TL, etc) on a source plate are represented by EchoSourceMaterial objects. An EchoSourceMaterial has a concentration, which is always stored in nM. However, because dsDNA is usually measured in ng/uL, and dsDNA is one of the most common materials used, EchoSourceMaterials by default assume that the concentration set in their constructors is in units of ng/uL. Any EchoSourceMaterial with length > 0 will convert that ng/uL concentration into an internal nM concentration. Only if the length of the EchoSourceMaterial is set to 0 will it use its set concentration value directly. This convention appears in several other contexts in the echo package.
An EchoSourceMaterial is always associated with a SourcePlate. The EchoSourceMaterial keeps track of how much of itself has been used, and will request that wells be allocated on the SourcePlate.
EchoRun objects can also have a MasterMix object, which defines what materials will be put into the master mix, and at what concentrations. Association lists don't currently support master mixes, and TX-TL reactions built from CSVs will always pull their master mix information from the CSV, so this is currently only useful if you're using the dilution series function(s).
To generate an Echo picklist, you will generally need to do three things:
- Build an EchoRun object. This usually just means setting the name of a source plate tracking file.
- Describe the experiment. This almost always means one or more calls to build_picklist_from_txtl_setup_csvs, build_dilution_series, or build_picklist_from_association_spreadsheet from your EchoRun object. What this entails depends on your experiment; TX-TL setup with a setup spreadsheet and association file setups are almost completely defined by external files, while 2D dilution series experiments require some definition in your script.
- Write the picklist, which is done with a call to write_picklist from your EchoRun object.
For more information, see the examples below.
TX-TL Setup Spreadsheet
Quick Start Example
The following creates an Echo picklist and experimental protocol for a variety of fluorescent protein mixes described in "inputs/TX-TL_setup_example.xlsx".
End of explanation
import murraylab_tools.echo as mt_echo
import os.path
# Relevant input and output files. Check these out for examples of input file format.
dilution_inputs = os.path.join("2D_dilution_series", "inputs")
dilution_outputs = os.path.join("2D_dilution_series", "outputs")
plate_file = os.path.join(dilution_inputs, "dilution_setup_example_plate.dat") # Keeps track of wells used
output_name = os.path.join(dilution_outputs, "dilution_setup_example") # Output (both a picklist and a
# small protocol for building the
# source plate)
# Build an EchoRun object
dilution_plate = mt_echo.SourcePlate(filename = plate_file)
default_master_mix = mt_echo.MasterMix(plate = dilution_plate)
dilution_echo_calculator = mt_echo.EchoRun(plate = dilution_plate, master_mix = default_master_mix)
# Set final concentrations of two materials
gfp_final_concentrations = range(0,6,1) # in nM
atc_final_concentrations = range(0,100,10) # in ng/uL
# Define reporter plasmid material
gfp_conc = 294 # Concentration in ng/uL
gfp_len = 3202 # Size of DNA in bp
gfp = mt_echo.EchoSourceMaterial('GFP Plasmid', gfp_conc, gfp_len, dilution_plate)
# Define inducer material
atc_conc = 1000 # Concentration in ng/uL (important that this matches the units of the final concentrations)
atc_len = 0 # This isn't dsDNA, so it has 0 length.
atc = mt_echo.EchoSourceMaterial("ATc", atc_conc, atc_len, dilution_plate)
# Plan out the experiment
starting_well = "D2"
dilution_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,
atc_final_concentrations, starting_well)
# Write results
dilution_echo_calculator.write_picklist(output_name)
Explanation: ### More Information
The build_picklist_from_txtl_setup_csvs function is used for creating a picklist from a TX-TL setup spreadsheet (spreadsheet version 2.0 or later -- spreadsheets from before late 2016 will not work). You will need to feed it the names of two CSV files produced from the "recipe" sheet (the first sheet, containing explicit pipetting directions) and the "stocks" sheet (the second sheet, which details the concentrations of most of the materials used in the experiment). The standard workflow looks like:
Edit your TX-TL setup Excel document (modifying the "Stocks" sheet and the "Layout" sheet, and the few cells shaded purple in the "Recipe" sheet).
Save the "Recipe" and "Stocks" sheets from the xls/xlsx file as CSVs.
Run build_picklist_from_txtl_setup_csvs, passing it the names of the two CSVs you just saved.
Reaction size and master mix excess ratio are read from the recipe spreadsheet. You probably should not mess with those settings
Note that you must give each reaction a plate location, i.e. "D4" or "E07". In the Excel spreadsheet, plate locations can be added in the "Layout" tab, and will be automatically propagated to the "Recipe" tab.
build_picklist_from_txtl_setup_csvs requires that the EchoRun object have a source plate object associated with a source plate file. It will automatically assign wells to that source plate for all the materials required to run that reaction.
Programmatic construction of TX-TL Reactions
Quick Start Example
The following creates an Echo picklist and simple experimental protocol for a two-way dilution series of a reporter plasmid and inducer.
End of explanation
# Build an EchoRun object
dilution_plate = mt_echo.SourcePlate(filename = plate_file)
# default_master_mix = mt_echo.MasterMix(plate = dilution_plate)
dilution_echo_calculator = mt_echo.EchoRun(plate = dilution_plate, master_mix = None)
# Define materials
# Note: Don't try to re-use an existing EchoSourceMaterial object if that object has been
# used already in an EchoRun object that has had its write_picklist function called,
# or you will get errors
gfp = mt_echo.EchoSourceMaterial('GFP Plasmid', gfp_conc, gfp_len, dilution_plate)
atc = mt_echo.EchoSourceMaterial("ATc", atc_conc, atc_len, dilution_plate)
# # Plan out a dilution series
# starting_well = "D2"
# dilution_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,
# atc_final_concentrations, starting_well)
Explanation: Note the warnings -- a lot of mistakes you might make will cause you to pipette 0 nL at a time, so the code will warn you if you do so. In this case, those 0 volume pipetting steps are normal -- you just added a material to 0 concentration. Similar warnings will appear if you under-fill a reaction.
The final write_picklist command produces two files -- an Echo picklist, and a plaintext experiment file to help you load the source plate. You can find both in the output folder:
* The Echo picklist (2D_dilution_series/outputs/dilution_setup_example_EchoInput.csv; the rest of the dilution series examples output to the same folder)
* The plaintext experiment file (2D_dilution_series/outputs/dilution_setup_example_experiment_overview.csv)
You can also manually add a single material to a well (add_material_to_well) or a rectangular block of wells (add_material_to_block) by specifying the material (an EchoSourceMaterial object), a final concentration, and a location. For example, if we set up a dilution series using the variables above...
End of explanation
# Bananas at 100 nM
bananas = mt_echo.EchoSourceMaterial('Cavendish', 100, 0)
dilution_echo_calculator.add_material_to_block(bananas, 1, 'D2', 'E3')
Explanation: ...then we can add, say, bananas, to a 2x2 square in the top-left corner of the reaction.
End of explanation
old_bananas = mt_echo.EchoSourceMaterial('Gros Michel', 100, 0)
dilution_echo_calculator.add_material_to_well(old_bananas, 3, 'F5')
Explanation: You can also add to a single well, if you really need to.
End of explanation
viscous_bananas = mt_echo.EchoSourceMaterial('Viscous', 100, 0)
dilution_echo_calculator.add_material_to_well(viscous_bananas, 5, "E4", pipette_by_hand = True)
dilution_echo_calculator.write_picklist(output_name + "_banana_case")
Explanation: If you want something to be in the final reaction but want to add it to your destination plate by hand instead of having the echo pipette it (e.g., you might not want to echo the TX-TL master mix), you can set this in the add_material_to_block or add_material_to_well functions. If you set this flag, directions to add this material manually will be added to the experiment overview file.
End of explanation
dilution_plate = mt_echo.SourcePlate(filename = plate_file)
dilution_echo_calculator = mt_echo.EchoRun(plate = dilution_plate, master_mix = None)
gfp = mt_echo.EchoSourceMaterial('GFP Plasmid', gfp_conc, gfp_len, dilution_plate)
atc = mt_echo.EchoSourceMaterial("ATc", atc_conc, atc_len, dilution_plate)
# Bananas at 100 nM
bananas = mt_echo.EchoSourceMaterial('Cavendish', 100, 0)
dilution_echo_calculator.add_material_to_block(bananas, 1, 'D2', 'E3')
old_bananas = mt_echo.EchoSourceMaterial('Gros Michel', 100, 0)
dilution_echo_calculator.add_material_to_well(old_bananas, 3, 'F5')
viscous_bananas = mt_echo.EchoSourceMaterial('Viscous', 100, 0)
dilution_echo_calculator.add_material_to_well(viscous_bananas, 5, "E4", pipette_by_hand = True)
# Here's where we fill with ingredients.
water = mt_echo.EchoSourceMaterial('Water', 1, 0)
saltwater = mt_echo.EchoSourceMaterial('Salt Water', 1, 0)
dilution_echo_calculator.fill_well_with('D2', water)
dilution_echo_calculator.fill_all_wells_with(saltwater)
# All reactions are filled! No warnings!
dilution_echo_calculator.write_picklist(output_name + "_fill_case")
Explanation: Notice the warnings about underfilled reactions -- this is because we're only adding a little bit of stuff to the mix, and haven't added water or any other bulk solvent. You can fill up the remainder of a well with EchoRun.fill_well_with or EchoRun.fill_all_wells_with:
End of explanation
import murraylab_tools.echo as mt_echo
import os.path
# Relevant input and output files. Check these out for examples of input file format.
assoc_inputs = os.path.join("association_list", "inputs")
assoc_outputs = os.path.join("association_list", "outputs")
stock_file = os.path.join(assoc_inputs, 'association_source_sheet.csv')
assoc_file = os.path.join(assoc_inputs, 'association_final_sheet.csv')
assoc_name = os.path.join(assoc_outputs, 'association_example')
# Build an EchoRun object
assoc_echo_calculator = mt_echo.EchoRun()
assoc_echo_calculator.rxn_vol = 50000 # PCR is large-volume!
# Define which column of the source file is what
name_col = 'B'
conc_col = 'C'
len_col = 'D'
well_col = 'A'
plate_col = 'E'
# Define the source plate based on the stock file.
assoc_echo_calculator.load_source_plate(stock_file, name_col, conc_col,
len_col, well_col, plate_col)
# Build a protocol, based on the association file.
assoc_echo_calculator.build_picklist_from_association_spreadsheet(assoc_file,
well_col)
# Write the picklist
assoc_echo_calculator.write_picklist(assoc_name)
Explanation: More Information
The build_dilution_series function of EchoRun is useful for quickly building a grid of dilutions of two materials, one on each axis. This is useful for double titrations of, for example, plasmid vs. inducer, or titration of two inputs. If you want to do anything more complex, you'll probably need to move to a TX-TL setup spreadsheet.
This function will always output reactions in solid blocks, with the upper-left-most well specified by the starting well argument of build_dilution_series (last argument). The function will also add a negative control well one row below the last row of the dilution series block, aligned with its first column (if you don't want a negative control reaction, then call build_dilution_series with the flag negative_control = False). You will not get a positive control reaction -- you'll have to add that yourself, if you want it.
Note that you must manually define source materials for 2D dilution setup. An EchoSourceMaterial object has four attributes -- a name, a concentration, a length, and a plate object.
* The name can be whatever you want, but note that the names "water" and "txtl_mm" are reserved for water and TX-TL master mix, respectively. If you make one of your materials "water" or "txtl_mm", be aware that they're going to be assumed to actually be water and TX-TL master mix.
* Concentration and length attributes of an EchoSourceMaterial follow specific unit conventions. In brief, if the material is dsDNA, length is the number of base pairs and concentration is in units of ng/µL; otherwise, length is 0 and concentration is in units of nM. See "How it all works" above for more details.
* The EchoSourcePlate object should be the same EchoSourcePlate associated with the EchoRun object you're going to use.
If you want to set up multiple dilution series, you can call build_dilution_series multiple times on the same EchoRun object. All dilution series will be put on the same plate, and source wells for materials with the same names (including the master mix and water) will be combined. Just make sure to start each dilution series in a different well!
build_dilution_series requires that the EchoRun object have a source plate object associated with a source plate file. It will automatically assign wells to that source plate for all the materials required to run that reaction.
Association Spreadsheet
Quick Start Example
The following creates an Echo picklist for an automated PCR setup with three sets of primers to be applied individually to three different plasmids, as defined in a pair of CSV files (one defining the source plate, one describing what should go in the destination plates).
The source plate definition file (association_list/inputs/association_source_sheet.csv):
The destination file (association_list/inputs/association_destination_sheet.csv):
End of explanation
import murraylab_tools.echo as mt_echo
import os.path
plate_file = os.path.join("tweaking", "plate_file_example.dat")
# Build an EchoRun object
example_plate = mt_echo.SourcePlate(filename = plate_file)
example_master_mix = mt_echo.MasterMix(example_plate)
example_echo_calculator = mt_echo.EchoRun(plate = example_plate, master_mix = example_master_mix)
Explanation: More Information
The build_picklist_from_association_spreadsheet function is used for arbitrary mappings of source plate wells to destination wells, using 1) a spreadsheet describing the contents of each source plate, and 2) a second spreadsheet describing what materials should be put in what wells, and at what concentration. You must always set up a source plate first. This should be done by calling load_source_plate with the name of the source plate spreadsheet and information about its organization. Alternatively, you could build your own manually. That is not recommended.
The first 'source' spreadsheet (describing the source plate) is a CSV-format spreadsheet with, at minimum, columns with the following information. You can add any number of additional columns to the source plate spreadsheet, which will be ignored by this function. This is the spreadsheet read in with the load_source_plate command.
* Location: This is the well number of the material, i.e. "C4" or "E08". If the same material is found in multiple wells, it will need one row for each well.
* Name: Brief string describing the material. This name will be used in the recipe output of EchoRun. It can be any string, but be aware that the names "water" and "txtl_mm" are reserved for describing water and TX-TL master mix, respectively. For this function, that won't matter, but if you try to combine other setup commands with this one, you should avoid using "water" and "txtl_mm" as material names.
* Concentration: If the material is dsDNA, this should be the concentration of the DNA in ng/µL. Otherwise, it should be the concentration of the material in whatever units you want (nM recommended).
* Length: If the material is dsDNA, this should be the number of base pairs in the DNA. Otherwise, length should be 0. This is important for correct unit usage.
* Plate: A single source plate spreadsheet can contain materials from different source plates, so a column is required to determine which plate the material is coming from. Name of the source plate. Put a number N here, and the plate will be auto-named "Plate[N]" (recommended usage). Alternatively, you can give the plate a custom name.
The second 'association' spreadsheet (describing what materials go together) is also a CSV-format spreadsheet. This spreadsheet determines what goes into the destination well. One column of the association spreadsheet determines that row's well on the destination plate. The EchoRun object will scan through every column, ignoring the well column, taking pairs of columns from left to right. There can be any number of pairs of columns; each one will cause one material to be moved from the source plate to the destination plate. The first column in each pair holds the name of a material. This name must exactly match one of the material names listed in the source spreadsheet, and determines where material will be taken from. The second column in each pair describes the final concentration of that material. If the material is dsDNA (has non-zero length), the units of final concentration are assumed to be nM. Otherwise, the units of final concentration are the same as the units of concentration used for that material in the source plate.
Note that unlike the other two experimental settings of EchoRun, build_picklist_from_association_spreadsheet does not require that its EchoRun object be associated with a source plate file or SourcePlate object prior to the function being called -- a new SourcePlate object will be manufactured from the input spreadsheet when you call load_source_plate.
Tweaking Settings
End of explanation
example_echo_calculator.rxn_vol = 10.5 * 1e3 # Volume IN nL!!!
Explanation: To change the reaction volume:
Reaction volume is a property of an EchoRun object.
End of explanation
new_master_mix = mt_echo.MasterMix(example_plate, extract_fraction = 0.40,
rxn_vol = example_echo_calculator.rxn_vol)
example_echo_calculator.add_master_mix(new_master_mix)
example_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,
atc_final_concentrations, starting_well)
Explanation: Make sure to run this before running build_picklist_from_association_spreadsheet or build_dilution_series. You almost certainly shouldn't do this at all when using build_picklist_from_txtl_setup_csvs, because that function will automatically extract a reaction volume from the setup spreadsheet.
To change the master mix composition/extract fraction:
This is really only relevant for the 2D dilution series TX-TL setup (build_dilution_series) -- TX-TL setup from a spreadsheet pulls the extract fraction from the spreadsheet, and the association spreadsheet method has no knowledge of TX-TL. Accordingly, the extract fraction is an optional argument in build_dilution_series. To modify the master mix of a reaction, you'll have to set its MasterMix object, which is most safely done with the add_master_mix function.
Changing the buffer/extract composition can be accomplished in the constructor of the new MasterMix object. Be sure to add the new MasterMix object before calling write_picklist! Also be sure that the new MasterMix object has the same reaction volume (rxn_vol) as the EchoRun object. Otherwise you'll get an error.
End of explanation
new_master_mix = mt_echo.MasterMix(example_plate,
rxn_vol = example_echo_calculator.rxn_vol)
dfhbi = mt_echo.EchoSourceMaterial("DFHBI-1T", 2000, 0, example_plate)
new_master_mix.add_material(dfhbi, 10)
example_echo_calculator.add_master_mix(new_master_mix)
example_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,
atc_final_concentrations, starting_well)
Explanation: You can also add arbitrary components to the master mix. For example, the following code adds the dye DFHBI-1T to every well at a final concentration of 10 µM, from a 2 mM stock:
End of explanation
new_master_mix = mt_echo.MasterMix(example_plate,
extract_per_aliquot = 50000,
buffer_per_aliquot = 70000,
rxn_vol = example_echo_calculator.rxn_vol)
example_echo_calculator.add_master_mix(new_master_mix)
example_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,
atc_final_concentrations, starting_well)
# ...
# calculator_with_odd_destination.write_picklist(...)
#...
Explanation: To change buffer/extract aliquot size:
Buffer and extract aliquot size are controlled by the MasterMix object. Like extract percentage, aliquot sizes can be changed in the MasterMix's constructor.
Note that both aliquot sizes are in units of nL, not uL.
End of explanation
dye1 = mt_echo.EchoSourceMaterial('A Dye', 100, 0, dilution_plate)
dye2 = mt_echo.EchoSourceMaterial('Another Dye', 122, 0, dilution_plate)
dye_concentrations = [x for x in range(10)]
example_echo_calculator.remove_master_mix()
example_echo_calculator.build_dilution_series(dye1, dye2, dye_concentrations,
dye_concentrations, starting_well)
Explanation: To run a (dilution series) reaction without master mix:
You can also make a dilution series without any master mix by either creating a new EchoRun object with None for its MasterMix, or by removing the MasterMix on an existing EchoRun object with a call to remove_master_mix.
End of explanation
plate_type = "384PP_AQ_BP" # This is actually the default plate value
retyped_example_plate = mt_echo.SourcePlate(filename = plate_file, SPtype = plate_type)
another_example_echo_calculator = mt_echo.EchoRun(plate = retyped_example_plate)
Explanation: To change the source plate type/material type:
Source plate types and material types are set as optional arguments in the constructor of a SourcePlate object. The type and material of a source plate are both set by a string, which can be any string that the Echo Plate Reformat software will recognize.
End of explanation
plate_name = "FirstPlate"
renamed_example_plate = mt_echo.SourcePlate(filename = plate_file, SPname = plate_name)
yet_another_example_echo_calculator = mt_echo.EchoRun(plate = renamed_example_plate)
Explanation: To change the source plate name:
Source plate names are set much like source plate types with the argument SPname. In addition, as a shorthand, you can set SPname to be a number N, in which case the plate will be named Source[N].
End of explanation
calculator_with_odd_destination = mt_echo.EchoRun(plate = example_plate, DPtype = "some_96_well_plate")
calculator_with_odd_destination.DPtype = "Nunc_384_black_glassbottom" # Just kidding!
# ...
# calculator_with_odd_destination.write_picklist(...)
#...
Explanation: To change the destination plate type:
Destination plate types are determined and stored directly in the EchoRun object. The destination plate type can be set by the optional argument DPtype in the constructor of an EchoRun object, or set manually any time before calling write_picklist on that EchoRun.
End of explanation
from murraylab_tools.echo.echo_functions import dead_volume, max_volume, usable_volume
dead_volume = 10000 # Volume in nL!
max_volume = 75000 # Volume in nL!
usable_volume = max_volume - dead_volume # Don't forget to re-calculate this!
Explanation: To change dead volume and max volume:
You probably shouldn't do this. If you absolutely must squeeze every last bit of efficiency out of your source wells (or if you want to use a low-dead-volume plate), you can set the dead_volume and max_volume variables, which are static variables in the murraylab_tools.echo package. If you change them, also make sure to set the static variable usable_volume, which defines the volume of material in a well that can actually be used by the Echo (this is normally calculated from dead_volume and max_volume at package import). Also, you should do this before running any experimental protocol function.
End of explanation |
2,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - The maximum-sum subsequence
This problem is known as the Maximum subarray problem. Concept covered
Step1: Problem statement
Suppose we have a list $L=(l_1, l_2, ..., l_n)$. We want to find the contiguous subsequence with the largest sum. In other words, we want to find $(i^*, j^*)$ such that
Step2: Line A (the line marked # ligne A in the code) contains the instruction meilleur = min(li). For a list in which every number is negative, the best sublist consists of the smallest element of the list. Replacing this instruction with meilleur = 0 would instead return an empty list in that particular case.
$cout(n) = \sum_{i=0}^n \sum_{j=i+1}^n j-i = \sum_{i=0}^n \sum_{j=0}^i j = \sum_{i=0}^n \frac{i(i+1)}{2} \sim O(n^3)$
Solution plus rapide
Il est possible de modifier cette fonction de telle sorte que le coût soit en $O(n^2)$ car on évite la répétition de certains calculs lors du calcul de la somme des sous-séquences. On peut écrire
Step3: Divide-and-conquer solution
There is one more, even faster version. Consider the list $L=(l_1, ..., l_n)$ from which we want to extract the maximum-sum subsequence, and suppose that $l_a$ belongs to this subsequence. We build the following function
Step4: The worst-case cost of this function is $O(n \ln n)$. At each iteration, three computations are performed
Step5: We assume that $f(2^n)=n2^n \Leftrightarrow f(n) = n \ln_2 n$. It suffices to check this with the recurrence
Step6: The curve becomes negative at the fourth number. The trick is that the two parts can be treated separately and that the best subsequence will not include this number in fourth position. If we look for the best subsequence ending at position $i$, it suffices to go backwards and find the minimum of the cumulated curve before position $i$. For $i=5$, the lowest preceding point is $k=3$. At point $i=3$, we know that there is no positive subsequence before $i=3$.
We split the curve into segments $[[i,j]]$ satisfying $l_{i-1} < 0 \leqslant l_i$ and $\sum_{k=1}^{j} l_k < 0$ and $l_{j+1} \geqslant 0$.
Step7: We walk through the cumulated series. Each time it goes below zero, we restart from zero at the next positive number. The maximum-sum subsequence is necessarily contained in one of these chunks; its starting point is the first element of the chunk and its end point is where the maximum is reached on that chunk
Step8: Comparing computation times
We do this on a list of random negative and positive numbers.
Step9: Cost in $O(n^3)$
Step10: Cost in $O(n^2)$
Step11: Cost in $O(n \ln^2 n)$
Step12: Cost in $O(n)$ | Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - The maximum-sum subsequence
This problem is known as the Maximum subarray problem. Concept covered: dynamic programming.
End of explanation
def somme_partielle(li, i, j):
r = 0
for a in range (i, j) :
r += li[a]
return r
# on pourrait juste écrire
# return sum(li[i:j])
def plus_grande_sous_liste(li):
meilleur = min(li) # ligne A
im, jm = -1, -1
for i in range(0, len(li)):
for j in range(i+1, len(li)+1): # ne pas oublier +1 car sinon
# le dernier élément n'est jamais pris en compte
s = somme_partielle(li, i, j)
if s > meilleur:
meilleur = s
im, jm = i, j
return li [im:jm]
# si li ne contient que des valeurs positives, la solution est évidemment la liste entière
# c'est pourquoi il faut tester cette fonction avec des valeurs négatives
li = [ 4,-6,7,-1,8,-50,3]
m = plus_grande_sous_liste(li)
m
Explanation: Problem statement
We have a list $L=(l_1, l_2, ..., l_n)$ and we want to find the subsequence with the largest sum. In other words, we want to find $(i^{*}, j^{*})$ such that:
$\sum_{k=i^{*}}^{j^{*}} l_k = \max_{i,j} \sum_{k=i}^{j} l_k$
Naive solution
The first solution computes the sums of all subsequences and keeps the $i,j$ that gave the best one. The program is split into two functions:
somme_partielle: computes the sum of the subsequence l[i:j] (cost of the function: $O(n)$)
plus_grande_sous_liste: goes through all subsequences and returns the best one (cost of the function: $O(n^2)$)
The overall cost of this algorithm is $O(n^3)$.
End of explanation
def plus_grande_sous_liste_n2(li):
meilleur = 0
im, jm = -1, -1
for i in range (0, len(li)):
s = 0
for j in range(i, len(li)):
s += li[j]
if s > meilleur:
meilleur = s
im, jm = i, j+1
return li[im:jm]
li = [ 4, -6, 7, -1, 8, -50, 3]
m = plus_grande_sous_liste_n2(li)
print(m)
li = [1, 2, 3, 4, 5, -98, 78, 9, 7, 7]
m = plus_grande_sous_liste_n2(li)
print(m)
li = [0, 2, 4, -7, -2, 7, -1, 8, -10, 3]
m = plus_grande_sous_liste_n2(li)
m
Explanation: Line A contains the instruction meilleur = min(li). For a list in which all numbers are negative, the best sublist consists of the smallest element of the list. Replacing this instruction with meilleur = 0 makes the function return an empty list in that particular case.
$cout(n) = \sum_{i=0}^n \sum_{j=i+1}^n j-i = \sum_{i=0}^n \sum_{j=0}^i j = \sum_{i=0}^n \frac{i(i+1)}{2} \sim O(n^3)$
Faster solution
The function can be modified so that the cost becomes $O(n^2)$, because we avoid repeating some computations when summing the subsequences. We can write:
$\sum_{k=i}^{j+1} l_k = l_{j+1} + \sum_{k=i}^{j} l_k$
In the second loop, it suffices to add the element li[j] to the previous sum.
End of explanation
def plus_grande_sous_liste_nlnn2_r(li, i, j):
if i == j:
        return 0, i, i  # return the same (sum, start, end) triple as the other branches (empty slice)
elif i+1 == j:
return li[i], i, i+1
milieu = (i+j) // 2
# on coupe le problème deux
ma, ia, ja = plus_grande_sous_liste_nlnn2_r(li, i, milieu)
mb, ib, jb = plus_grande_sous_liste_nlnn2_r(li, milieu, j)
# pour aller encore plus vite dans un cas précis
if ja == ib:
total = ma + mb
im, jm = ia, jb
else :
# on étudie la jonction
im, jm = milieu, milieu+1
meilleur = li[milieu]
s = meilleur
for k in range(milieu+1, j):
s += li[k]
if s > meilleur:
meilleur = s
jm = k + 1
total = meilleur
meilleur = li[milieu]
s = meilleur
for k in range(milieu-1, i-1, -1):
s += li[k]
if s > meilleur:
meilleur = s
im = k
total += meilleur - li[milieu]
if ma >= max(mb,total):
return ma, ia, ja
elif mb >= max(ma,total):
return mb, ib, jb
else:
return total, im, jm
def plus_grande_sous_liste_nlnn2(li):
m, i, j = plus_grande_sous_liste_nlnn2_r(li, 0, len(li))
return li[i:j]
li = [ 4, -6, 7, -1, 8, -50, 3]
m = plus_grande_sous_liste_nlnn2(li)
print(m)
li = [1, 2, 3, 4, 5, -98, 78, 9, 7, 7]
m = plus_grande_sous_liste_nlnn2(li)
print(m)
li = [0, 2, 4, -7, -2, 7, -1, 8, -10, 3]
m = plus_grande_sous_liste_nlnn2(li)
m
Explanation: Divide-and-conquer solution
There is one more, even faster version. Consider the list $L=(l_1, ..., l_n)$ from which we want to extract the maximum-sum subsequence, and suppose that $l_a$ belongs to this subsequence. We build the following function:
$f(a,k) = \left\{ \begin{array}{ll} \sum_{i=a}^{k} l_i & \text{if } k > a \\ \sum_{i=k}^{a} l_i & \text{if } k < a \end{array} \right.$
We then look for the values $k_1$ and $k_2$ such that:
$\begin{array}{rcl} f(a,k_1) &=& \max_{k<a} f(a,k) \\ f(a,k_2) &=& \max_{k>a} f(a,k) \end{array}$
The maximum-sum subsequence we are looking for is $[k_1,k_2]$, with $a$ inside this interval, and the cost of this search is $O(n)$. But this only holds if we know that $l_a$ belongs to the maximum-sum subsequence.
Another observation: for two lists $l_1$ and $l_2$, the maximal subsequence of their concatenation either belongs to one of them, or to the other, or it includes the junction point.
Putting these two ideas together gives the following algorithm, built recursively. We cut the list into two parts of equal length:
Compute the best subsequence on the first half.
Compute the best subsequence on the second half.
Compute the best subsequence assuming the middle element belongs to it.
The best subsequence is necessarily one of these three.
End of explanation
cout = lambda n: 0 if n == 1 else n + 2 * cout(n//2)
for i in range(1, 10):
print("f({0})={1} --> f({0})/{0} = {2}".format(2**i, cout(2**i), cout(2**i) / 2**i))
Explanation: The worst-case cost of this function is $O(n \ln n)$. At each iteration, three computations are performed:
best subsequence on the right: $f(n/2)$
best subsequence on the left: $f(n/2)$
best subsequence including $a$: $n$
The cost of iteration $n$ is $f(n)=n + 2f(n/2)$ with $f(1)=0$. We compute the first terms:
End of explanation
import matplotlib.pyplot as plt
li = [0, 2, 4, -7, -2, 7, -1, 8, -10, 3]
cumul = [li[0]]
for i in li[1:]:
cumul.append( cumul[-1] + i )
cumul2 = [li[0]]
for i in li[1:]:
cumul2.append(max(cumul2[-1] + i, 0))
plt.plot(cumul, label="cumul")
plt.plot(cumul2, label="cumul2")
plt.plot([0 for c in cumul])
plt.legend()
plt.title("somme cumulée");
Explanation: We assume that $f(2^n)=n2^n \Leftrightarrow f(n) = n \ln_2 n$. It suffices to check this with the recurrence:
$\begin{array}{rcl} f(n) &=& n + 2f(\frac{n}{2}) = n + 2 \frac{n}{2} \ln_2(\frac{n}{2}) \\ &=& n + n \ln_2(n) - n\ln_2(2) = n + n\ln_2(n) - n = n\ln_2 n\end{array}$
This is the cost of one iteration. Since the problem is cut in two at each iteration, the total cost of the algorithm is:
$\begin{array}{rcl} C(2^n) &=& f(2^n) + 2f(2^{n-1}) + 4f(2^{n-2}) + ... + 2^{n-1}f(2) = \sum_{k=1}^{n} 2^{n-k} f(2^k) \\ &=& \sum_{k=1}^n 2^{n-k} 2^k k = \sum_{k=1}^n 2^n k = 2^{n-1}n(n+1) \leqslant 2^{n} n^2 \end{array}$
Consequently, the cost is $C(n) \sim O(n \ln^2 n)$.
Linear solution
The last solution is the fastest one. It walks through the list in increasing index order and builds the cumulated series.
End of explanation
from IPython.core.display import Image
Image("sommemax.png")
Explanation: The curve becomes negative at the fourth number. The trick is that the two parts can be treated separately and that the best subsequence will not include this number in fourth position. If we look for the best subsequence ending at position $i$, it suffices to go backwards and find the minimum of the cumulated curve before position $i$. For $i=5$, the lowest preceding point is $k=3$. At point $i=3$, we know that there is no positive subsequence before $i=3$.
We split the curve into segments $[[i,j]]$ satisfying $l_{i-1} < 0 \leqslant l_i$ and $\sum_{k=1}^{j} l_k < 0$ and $l_{j+1} \geqslant 0$.
End of explanation
def plus_grande_sous_liste_n(li):
meilleur = [None for i in li]
somme = [None for i in li]
best = None
for i, el in enumerate(li):
if el >= 0:
if i > 0:
somme[i] = max(somme[i-1], 0) + el
meilleur[i] = meilleur[i-1] if somme[i-1] >= 0 else i
else:
somme[i] = el
meilleur[i] = i
if best is None or somme[i] > somme[best]:
best = i
else:
somme[i] = (somme[i-1] + el) if i > 0 else el
if somme[i] >= 0:
meilleur[i] = meilleur[i-1]
i, j = meilleur[best], best+1
return li [i:j]
li = [4, -6, 7, -1, 8, -10, 3]
m = plus_grande_sous_liste_n(li)
print(m) # affiche [7, -1, 8]
li = [1, 2, 3, 4, 5, -98, 78, 9, 7, 7]
m = plus_grande_sous_liste_n(li)
print(m)
li = [0, 2, 4, -7, -2, 7, -1, 8, -10, 3]
m = plus_grande_sous_liste_n(li)
m
Explanation: We walk through the cumulated series. Each time it goes below zero, we restart from zero at the next positive number. The maximum-sum subsequence is necessarily contained in one of these chunks; its starting point is the first element of the chunk and its end point is where the maximum is reached on that chunk: for every point $x, Cumul2(x)$ of a chunk, the minimum of the cumulated curve before $x$ is necessarily the first element of the chunk.
End of explanation
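As a cross-check of the linear implementation above, here is a compact sketch of the classic Kadane formulation of the same idea (illustrative only, not part of the original notebook):
def kadane(li):
    # best_* track the best subsequence seen so far, cur_* the best one ending at the current index
    best_sum, best_i, best_j = li[0], 0, 1
    cur_sum, cur_i = 0, 0
    for j, v in enumerate(li):
        if cur_sum <= 0:          # a non-positive prefix never helps: restart here
            cur_sum, cur_i = v, j
        else:
            cur_sum += v
        if cur_sum > best_sum:
            best_sum, best_i, best_j = cur_sum, cur_i, j + 1
    return li[best_i:best_j]

kadane([4, -6, 7, -1, 8, -50, 3])  # expected: [7, -1, 8], same as plus_grande_sous_liste_n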
import random
li100 = [random.randint(-10, 50) for i in range(0,100)]
Explanation: Comparing computation times
We do this on a list of random negative and positive numbers.
End of explanation
%timeit plus_grande_sous_liste(li100)
Explanation: Cost in $O(n^3)$:
End of explanation
%timeit plus_grande_sous_liste_n2(li100)
Explanation: Cost in $O(n^2)$:
End of explanation
%timeit plus_grande_sous_liste_nlnn2(li100)
Explanation: Cost in $O(n \ln^2 n)$:
End of explanation
%timeit plus_grande_sous_liste_n(li100)
Explanation: Cost in $O(n)$:
End of explanation |
2,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Data
Step2: Constant baseline
As a baseline, we use the empirical mean of y.
Step3: Kernel regression
Step4: We can visualize the kernel matrix to see which inputs are used to predict each output.
Step5: Implementation using learned attention
As an illustration of how to learn attention kernels, we make the bandwidth parameter adjustable, so we can optimize it by backprop.
The implementation uses batch matrix multiplication (torch.bmm).
This is defined as follows. Suppose the first batch contains n matrices Xi of size a x b, and the second batch contains n matrices Yi of size b x c. Then the output will have size (n, a, c).
Step6: To apply attention to kernel regression, we make a batch of size $N$, where $N$ is the number of training points. In batch $i$, the query is the $i$'th training point we are trying to predict, the keys are all the other inputs $x_{-i}$ and the values are all the other outputs $y_{-i}$.
Step7: Train using SGD.
Step8: Results of training
Not surprisingly, fitting the hyper-parameter 'w' (the bandwidth of the kernel) results in overfitting, as we show below. However, for parametric attention, this is less likely to occur. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(seed=1)
import math
import collections
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
from torch import nn
from torch.nn import functional as F
try:
from probml_utils import d2l
except ModuleNotFoundError:
%pip install git+https://github.com/probml/probml-utils.git
from probml_utils import d2l
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/kernel_regression_attention.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Nadaraya-Watson kernel regression in 1d using attention
We show how to interpret kernel regression as an attention mechanism.
Based on sec 10.2 of http://d2l.ai/chapter_attention-mechanisms/nadaraya-waston.html
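For reference (standard form, added here for clarity; not from the original notebook): with a Gaussian kernel, the Nadaraya-Watson estimator is a softmax-weighted average of the training outputs, which is exactly what the kernel-regression cell further down computes:
$$f(x) = \sum_{i=1}^{n} \frac{K(x - x_i)}{\sum_{j=1}^{n} K(x - x_j)}\, y_i = \sum_{i=1}^{n} \mathrm{softmax}\left(-\tfrac{1}{2}(x - x_i)^2\right)_i\, y_i$$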
End of explanation
torch.manual_seed(0)
n_train = 50 # No. of training examples
x_train, _ = torch.sort(torch.rand(n_train) * 5) # Training inputs
def f(x):
return 2 * torch.sin(x) + x**0.8
y_train = f(x_train) + torch.normal(0.0, 0.5, (n_train,)) # Training outputs
x_test = torch.arange(0, 5, 0.1) # Testing examples
y_truth = f(x_test) # Ground-truth outputs for the testing examples
n_test = len(x_test) # No. of testing examples
n_test
Explanation: Data
End of explanation
def plot_kernel_reg(y_hat):
d2l.plot(x_test, [y_truth, y_hat], "x", "y", legend=["Truth", "Pred"], xlim=[0, 5], ylim=[-1, 5])
d2l.plt.plot(x_train, y_train, "o", alpha=0.5);
y_hat = torch.repeat_interleave(y_train.mean(), n_test)
plot_kernel_reg(y_hat)
Explanation: Constant baseline
As a baseline, we use the empirical mean of y.
End of explanation
# Shape of `X_repeat`: (`n_test`, `n_train`), where each row contains the
# same testing inputs (i.e., same queries)
X_repeat = x_test.repeat_interleave(n_train).reshape((-1, n_train))
# Note that `x_train` contains the keys. Shape of `attention_weights`:
# (`n_test`, `n_train`), where each row contains attention weights to be
# assigned among the values (`y_train`) given each query
attention_weights = nn.functional.softmax(-((X_repeat - x_train) ** 2) / 2, dim=1)
# Each element of `y_hat` is weighted average of values, where weights are
# attention weights
y_hat = torch.matmul(attention_weights, y_train)
plot_kernel_reg(y_hat)
plt.savefig("kernelRegrAttenPlot.pdf", dpi=300)
Explanation: Kernel regression
End of explanation
d2l.show_heatmaps(
attention_weights.unsqueeze(0).unsqueeze(0), xlabel="Sorted training inputs", ylabel="Sorted testing inputs"
)
plt.savefig("kernelRegrAttenMat.pdf", dpi=300)
Explanation: We can visualize the kernel matrix to see which inputs are used to predict each output.
End of explanation
# 2 batches of weights over the 10 data points
weights = torch.ones((2, 10)) * 0.1
weights = weights.unsqueeze(1)
print(weights.shape) # (2,1,10)
# 2 batches of 10 scalar data points
values = torch.arange(20.0).reshape((2, 10))
values = values.unsqueeze(-1)
print(values.shape) # (2,10,1)
Y = torch.bmm(weights, values)
print(Y.shape)
print(Y)
class NWKernelRegression(nn.Module):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.w = nn.Parameter(torch.rand((1,), requires_grad=True))
def forward(self, queries, keys, values):
# Shape of the output `queries` and `attention_weights`:
# (no. of queries, no. of key-value pairs)
queries = queries.repeat_interleave(keys.shape[1]).reshape((-1, keys.shape[1]))
self.attention_weights = nn.functional.softmax(-(((queries - keys) * self.w) ** 2) / 2, dim=1)
# Shape of `values`: (no. of queries, no. of key-value pairs)
return torch.bmm(self.attention_weights.unsqueeze(1), values.unsqueeze(-1)).reshape(-1)
Explanation: Implementation using learned attention
As an illustration of how to learn attention kernels, we make the bandwidth parameter adjustable, so we can optimize it by backprop.
The implementation uses batch matrix multiplication (torch.bmm).
This is defined as follows. Suppose the first batch contains n matrices Xi of size a x b, and the second batch contains n matrices Yi of size b x c. Then the output will have size (n, a, c).
End of explanation
# Shape of `X_tile`: (`n_train`, `n_train`), where each column contains the
# same training inputs
X_tile = x_train.repeat((n_train, 1))
# Shape of `Y_tile`: (`n_train`, `n_train`), where each column contains the
# same training outputs
Y_tile = y_train.repeat((n_train, 1))
# Shape of `keys`: ('n_train', 'n_train' - 1)
keys = X_tile[(1 - torch.eye(n_train)).type(torch.bool)].reshape((n_train, -1))
# Shape of `values`: ('n_train', 'n_train' - 1)
values = Y_tile[(1 - torch.eye(n_train)).type(torch.bool)].reshape((n_train, -1))
print([x_train.shape, X_tile.shape, keys.shape, values.shape])
Explanation: To apply attention to kernel regression, we make a batch of size $N$, where $N$ is the number of training points. In batch $i$, the query is the $i$'th training point we are trying to predict, the keys are all the other inputs $x_{-i}$ and the values are all the other outputs $y_{-i}$.
End of explanation
net = NWKernelRegression()
loss = nn.MSELoss(reduction="none")
trainer = torch.optim.SGD(net.parameters(), lr=0.5)
animator = d2l.Animator(xlabel="epoch", ylabel="loss", xlim=[1, 5])
for epoch in range(5):
trainer.zero_grad()
# Note: L2 Loss = 1/2 * MSE Loss. PyTorch has MSE Loss which is slightly
# different from MXNet's L2Loss by a factor of 2. Hence we halve the loss
l = loss(net(x_train, keys, values), y_train) / 2
l.sum().backward()
trainer.step()
print(f"epoch {epoch + 1}, loss {float(l.sum()):.6f}")
animator.add(epoch + 1, float(l.sum()))
Explanation: Train using SGD.
End of explanation
# Shape of `keys`: (`n_test`, `n_train`), where each column contains the same
# training inputs (i.e., same keys)
keys = x_train.repeat((n_test, 1))
# Shape of `value`: (`n_test`, `n_train`)
values = y_train.repeat((n_test, 1))
y_hat = net(x_test, keys, values).unsqueeze(1).detach()
plot_kernel_reg(y_hat)
d2l.show_heatmaps(
net.attention_weights.unsqueeze(0).unsqueeze(0), xlabel="Sorted training inputs", ylabel="Sorted testing inputs"
)
print(net.w)
Explanation: Results of training
Not surprisingly, fitting the hyper-parameter 'w' (the bandwidth of the kernel) results in overfitting, as we show below. However, for parametric attention, this is less likely to occur.
End of explanation |
2,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Rock, Paper & Scissors with TensorFlow Hub - TFLite
Step2: Select the Hub/TF2 module to use
Hub modules for TF 1.x won't work here, please use one of the selections provided.
Step3: Data preprocessing
Use TensorFlow Datasets to load the rock, paper and scissors dataset.
This tfds package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see loading image data
Step4: The tfds.load method downloads and caches the data, and returns a tf.data.Dataset object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since "rock_paper_scissors" doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
Step5: Format the Data
Use the tf.image module to format the images for the task.
Resize the images to a fixed input size, and rescale the input channels
Step6: Now shuffle and batch the data
Step7: Inspect a batch
Step8: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
Step9: Training the model
Step10: Export the model
Step11: Export the SavedModel
Step12: Here you can verify the default signature of your exported SavedModel
Step13: Convert with TFLiteConverter
Step14: Run the following cells to check whether your TFLite model is working using the Python Interpreter
Step15: Download the model
NOTE
Step16: Prepare the test images for download (Optional)
This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Explanation: Rock, Paper & Scissors with TensorFlow Hub - TFLite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c05_exercise_rock_paper_scissors.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c05_exercise_rock_paper_scissors.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Setup
End of explanation
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
Explanation: Select the Hub/TF2 module to use
Hub modules for TF 1.x won't work here, please use one of the selections provided.
End of explanation
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
Explanation: Data preprocessing
Use TensorFlow Datasets to load the rock, paper and scissors dataset.
This tfds package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see loading image data
End of explanation
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
# Go to the TensorFlow Dataset's website and search for the Rock, Paper, Scissors dataset and load it here
splits, info = tfds.load('rock_paper_scissors', with_info=True, as_supervised=True, split=list(splits))  # one possible completion
# Save the dataset splits in a tuple
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
Explanation: The tfds.load method downloads and caches the data, and returns a tf.data.Dataset object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since "rock_paper_scissors" doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
End of explanation
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
Explanation: Format the Data
Use the tf.image module to format the images for the task.
Resize the images to a fixed input size, and rescale the input channels
End of explanation
BATCH_SIZE = 32 #@param {type:"integer"}
# Prepare the examples by preprocessing them and then batching them (and optionally prefetching them)
# If you wish you can shuffle train set here
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)  # one possible completion
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
Explanation: Now shuffle and batch the data
End of explanation
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
Explanation: Inspect a batch
End of explanation
do_fine_tuning = False #@param {type:"boolean"}
# Build the model with a TFHub KerasLayer and attach a classification head to it
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3, ),
output_shape=[FV_SIZE],
trainable=do_fine_tuning),
tf.keras.layers.Dense(num_classes)
])
model.summary()
Explanation: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
End of explanation
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
Explanation: Training the model
End of explanation
RPS_SAVED_MODEL = "exp_saved_model"
Explanation: Export the model
End of explanation
# Use TensorFlow's SavedModel API to export the SavedModel from the trained Keras model
tf.saved_model.save(model, RPS_SAVED_MODEL)  # one possible completion
Explanation: Export the SavedModel
End of explanation
%%bash -s $RPS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(RPS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
Explanation: Here you can verify the default signature of your exported SavedModel
End of explanation
# Initialize the TFLite converter to load the SavedModel (one possible completion)
converter = tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL)
# Set the optimization strategy for 'size' in the converter
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
# Use the tool to finally convert the model
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
f.write(tflite_model)
Explanation: Convert with TFLiteConverter
End of explanation
#@title Loading the converted TFLite model...
# Load TFLite model and allocate tensors.
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, 'rb') as fid:
tflite_model = fid.read()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
#@title Testing on random test examples...
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['rock', 'paper', 'scissors']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
#@title Visualize the outputs { run: "auto" }
index = 9 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
Explanation: Run the following cells to check whether your TFLite model is working using the Python Interpreter
End of explanation
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
Explanation: Download the model
NOTE: You might have to run to the cell below twice
End of explanation
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq rps_test_images.zip -r test_images/
try:
files.download('rps_test_images.zip')
except:
pass
Explanation: Prepare the test images for download (Optional)
This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
End of explanation |
2,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Video Segments
The following example shows how to read the video segments files
Step1: The duration of the segments has to be defined manually. It is 5s for all provided video segment files
Step2: Plot the video bit-rate for all quality levels based on the segment sizes | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
dfsegs = pd.read_csv("../data/videos/CRZbG73SX3s_segments.csv")
Explanation: Video Segments
The following example shows how to read the video segments files:
End of explanation
segment_duration = 5
Explanation: The duration of the segments has to be defined manually. It is 5s for all provided video segment files:
End of explanation
fig = plt.figure(figsize=(14, 9))
ax = fig.add_subplot(111)
cmap = plt.get_cmap('copper')
colors = iter(cmap(np.linspace(0,1,3)))
labels = ['low', 'medium', 'high']
ql_cols = reversed(["quality_%d" % i for i in range(1,6)])
ql_labels = ['720p', '480p', '360p', '240p', '144p']
for ql, ql_label in zip(ql_cols, ql_labels):
dfql_segs = dfsegs.loc[:,ql] / 1024 / 1024
dfql_segs = dfql_segs.repeat(segment_duration)
dfql_segs = dfql_segs.reset_index(drop=True)
ax.plot(dfql_segs.index, dfql_segs, label=ql_label)
ax.grid()
ax.legend()
ax.set_xlabel("Time (s)")
ax.set_ylabel("Bitrate (Mbps)")
ax.set_xlim([0, 550])
Explanation: Plot the video bit-rate for all quality levels based on the segment sizes:
End of explanation |
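Note (not part of the original notebook): the curves above are segment sizes scaled to MiB and repeated once per second, while the y-axis is labelled Mbps. If the CSV stores segment sizes in bytes (an assumption about the data), an explicit bit-rate per quality level would instead be:
# bitrate_mbps = dfsegs.loc[:, ql] * 8 / segment_duration / 1e6   # bits per second, in Mbit/s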
2,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Processing Robotic total station data
Upload total station data into sample_data folder
Unfortunately, in the space delimited text file, there are spaces in the time if the minutes or the seconds are less than '10'. Furthermore, sometimes the seconds are provided as '60', which Python identifies as an error. Using grep, let's display the first 10 records of these rows from the file
Step2: Let's write a short Python program to correct the aforementioned issues
Step3: Let's load the data into pandas data frame. Please correct the date!
Step4: Let's remove some test observation.
Step5: Let's plot the RTS data.
Step6: Do we have gaps in the data series?
Step7: Let's fill the gaps.
Step8: Now, let's display a chart containing the critical deflections. For that, the critical values are based on max deflection.
Step9: As it is possible to check, there is a decreasing trend in the data. Let's remove it.
Step10: Finally, let's try to estimate the bus passes, saving the results to a file.
Step11: Let's display the results
Step12: GNSS data processing
This processing is very similar to RTS data processing.
First, let's upload the GNSS data into the sample_data folder.
Step13: Based on the above, we should use only the lines starting with GPS_Auto. Furthermore, please note that the seconds are sometimes '60' as well.
Based on that, let's write a short Python program to select the lines and correct the '60' seconds issue
Step14: Now, let's load the data into a pandas data frame
Step15: Let's plot the GPS data.
Step16: Please note that at 11:32:40 we had to change the battery; after that there were some observations with huge errors until 11:38:57.
Step17: Do we have gaps in the data series?
Step18: As it is possible to check, there are no gaps.
Therefore, let's prepare the data and display a chart with the critical deflections, where the critical values are based on the max deflection.
Step19: This time, the decreasing trend data is not so clear as in the previous example. Nonetheless, let's remove it.
Step20: Finally, let's try to estimate the bus passes and save them to a file.
Step21: Let's display the results
Step22: Now, let's merge both the RTS and GPS result files, and try to find matches
Step24: Using cross correlation, let's check how close the relation is between the RTS and the GPS data.
Step25: With the help of the cross-correlation, we can check the time offset of the two series (in the example, a time offset of 2).
Based on that, let's check the crosscorrelation results by shifting the TPS observations
Step26: Processing camera observations of ArUco marker
In the field, Ulyxes was used to get the position of the ArUco marker. The image coordinates of the ArUco marker are stored in a SQLite database. Upload the aruco.db file to the sample_data folder.
Step27: Let's plot the camera data (north column and datetime)
Step28: The figure above shows that there are some false positions. Let's use only those positions where the quality is above 0.9, changing the direction, and the pixels to millimeters.
Step29: The figure above is less smooth when compared to the RTS and GNSS results.
Do we have gaps in the data series?
Step30: As it is possible to check, there are 13 gaps. Let's fill the gaps.
Step31: Now, let's prepare the data and display a chart with the critical deflections, where the critical values are based on the max deflection.
Step32: As it is possible to check, there is no significant trend in the data. Finally, let's try to estimate the bus passes and save them to a file. | Python Code:
import re # for regular expressions
import numpy as np # for vector/matrix operations
import matplotlib.pyplot as plt # for charts
import pandas as pd # for table like data structures
from datetime import datetime, timedelta # for time/date handling
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/bridge_observations_tps.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Processing Bridge Observations
Import necessary Python packages
End of explanation
!head sample_data/total_station.txt
Explanation: Processing Robotic total station data
Upload total station data into sample_data folder
Unfortunately, in the space delimited text file, there are spaces in the time if the minutes or the seconds are less than '10'. Furthermore, sometimes the seconds are provided as '60', which Python identifies as an error. Using grep, let's display the first 10 records of these rows from the file:
End of explanation
pattern = re.compile('[0-2][0-9]:[0-6][0-9]:60') # pattern for 60 seconds
with open('sample_data/total_station.txt') as fp: # input file
with open('sample_data/tps.txt', 'w') as fo: # output file
for line in fp:
if line.startswith('64'):
line = line.replace(': ', ':0') # space in minutes or seconds
p = pattern.search(line) # are there 60 seconds?
if p: # there is :60 seconds in string
orig = p.group() # save match part
tm = orig.replace(':60', ':59') # change to valid time decreasing with 1 second
t = datetime.strptime(tm, '%H:%M:%S') + timedelta(seconds=1) # add 1 second back
line = line.replace(orig, t.strftime('%H:%M:%S')) # convert back to string and replace original
print(line, end='', file=fo) # write to output file
Explanation: Let's write a short Python program to correct the aforementioned issues:
End of explanation
names = ["east", "north", "elev", "dummy", "std", "time"]
rts_data = pd.read_csv('sample_data/tps.txt', sep='\s+', names=names, skiprows=3)
rts_data['time'] = pd.to_datetime('2022-05-05 ' + rts_data['time']) #, format='%H:%M:%S')
print(f'{rts_data.shape[0]} records loaded')
Explanation: Let's load the data into pandas data frame. Please correct the date!
End of explanation
rts_data = rts_data.query('time > "2022-05-05 11:20:00"')
Explanation: Let's remove some test observation.
End of explanation
fig = plt.figure()
ax = fig.add_axes([0, 0, 4., 0.5]) # dimensions of the chart window
ax.plot(rts_data['time'], rts_data['elev'])
_ = ax.set_title('Elevations')
Explanation: Let's plot the RTS data.
End of explanation
deltas = rts_data['time'].diff()[1:] # time difference from previous record
dd = deltas[deltas > timedelta(seconds=1)] # select row where diff > 1 sec
print(f'{rts_data.shape[0]} records with {dd.size} gaps')
Explanation: Do we have gaps in the data series?
End of explanation
t0 = rts_data['time'].min()
t0 = datetime(t0.year, t0.month, t0.day, 0, 0, 0)
dt = (rts_data['time'] - t0).dt.total_seconds() # time difference from midnight
de = rts_data['elev'] - rts_data['elev'].max()
dt = dt.to_numpy().astype(int)
de = (de.to_numpy() * 1000).astype(int) # change to mm
rts_x = np.arange(dt[0], dt[-1]+1) # create serie of seconds
rts_y = np.interp(rts_x, dt, de) # interpolate missing deflection (linear)
print(f'max deflection: {rts_y.min():.1f} mm')
Explanation: Let's fill the gaps.
End of explanation
critical = -30 # critical deflection value
rts = np.c_[rts_x, rts_y] # numpy matrix from x, y vectors
rts_bus = rts[rts[:,1] < critical] # select below critical
print(f'{rts_bus.shape[0]} seconds under critical')
plt.plot(rts_x, rts_y) # plot all data
_ = plt.plot(rts_bus[:,0], rts_bus[:,1], 'ro') # plot critical points
Explanation: Now, let's display a chart containing the critical deflections. For that, the critical values are based on max deflection.
End of explanation
rts_y = rts_y - np.polyval(np.polyfit(rts_x, rts_y, 2), rts_x) # remove second order trend
rts_y = rts_y - rts_y.max() # move max deflection value to zero
print(f'max deflection: {rts_y.min():.1f} mm')
rts = np.c_[rts_x, rts_y] # numpy matrix from x, y vectors
rts_bus = rts[rts[:,1] < critical]
print(f'{rts_bus.shape[0]} seconds under critical')
plt.plot(rts_x, rts_y)
plt.plot(rts_bus[:,0], rts_bus[:,1], 'ro')
Explanation: As it is possible to check, there is a decreasing trend in the data. Let's remove it.
End of explanation
start = 0 # start row index of the first bus
bus = []
for i in range(1, rts_bus.shape[0]): # for each row except first
if rts_bus[i,0] - rts_bus[i-1,0] > 1: # gap -> new bus arrived
if i-start-1 > 2: # pass longer than 2 seconds?
bus.append((np.average(rts_bus[start:i,0]),
np.min(rts_bus[start:i,1]),
i-start-1)) # store time, max deflection and duration (sec)
start = i
# add last
if rts_bus.shape[0] - start > 2:
bus.append((np.average(rts_bus[start:,0]),
np.min(rts_bus[start:,1]),
rts_bus.shape[0]-start))
print(f'{len(bus)} busses found')
print(f'time max.defl. duration')
fo = open('sample_data/rts_bus.txt', 'w')
for b in bus:
print(f'{datetime.fromtimestamp(b[0]).strftime("%H:%M:%S"):8s} {b[1]:5.1f} {b[2]:5d}')
print(f'{b[0]:.1f} {b[1]:.1f} {b[2]}', file=fo)
fo.close()
Explanation: Finally, let's try to estimate the bus passes, saving the results to a file.
End of explanation
plt.plot(rts_x, rts_y)
plt.plot([rts_x.min(), rts_x.max()], [critical, critical])
plt.plot([b[0] for b in bus], [b[1] for b in bus], 'ro')
Explanation: Let's display the results:
End of explanation
! head sample_data/GNSS.txt
Explanation: GNSS data processing
This processing is very similar to RTS data processing.
First, let's upload the GNSS data into the sample_data folder.
End of explanation
pattern = re.compile('[0-2][0-9]:[0-6][0-9]:60') # pattern for 60 seconds
pattern1 = re.compile('GPS_Auto_[0-9]') # pattern for line selection
with open('sample_data/GNSS.txt') as fp: # input file
with open('sample_data/gps.txt', 'w') as fo: # output file
for line in fp:
if pattern1.match(line): # line pattern match
p = pattern.search(line) # are there 60 seconds?
if p: # there is :60 seconds in string
orig = p.group() # save match part
tm = orig.replace(':60', ':59') # change to valid time decreasing with 1 second
t = datetime.strptime(tm, '%H:%M:%S') + timedelta(seconds=1) # add 1 second back
line = line.replace(orig, t.strftime('%H:%M:%S')) # convert back to string and replace original
print(line, end='', file=fo) # write to output file
!head -10 sample_data/gps.txt
Explanation: Based on the above, we should use only the lines starting with GPS_Auto. Furthermore, please note that the seconds are sometimes '60' as well.
Based on that, let's write a short Python program to select the lines and correct the '60' seconds issue:
End of explanation
names = ['id', 'east', 'north', 'elev', 'dummy', 'date', 'time']
gps_data = pd.read_csv('sample_data/gps.txt', sep='\s+', names=names)
gps_data['time'] = pd.to_datetime(gps_data['date'] + ' ' + gps_data['time'])
gps_data.drop(columns=['dummy', 'date'], inplace=True)
gps_data[0:10]
Explanation: Now, let's load the data into a pandas data frame:
End of explanation
fig = plt.figure()
ax = fig.add_axes([0, 0, 4., 0.5]) # dimensions of the chart window
ax.plot(gps_data['time'], gps_data['elev'])
ax.set_title('Elevations')
Explanation: Let's plot the GPS data.
End of explanation
gps_data = gps_data.query('time < "2022-05-05 11:33:00" | time > "2022-05-05 11:38:56"')
fig = plt.figure()
ax = fig.add_axes([0, 0, 4., 0.5]) # dimensions of the chart window
ax.plot(gps_data['time'], gps_data['elev'])
_ = ax.set_title('Elevations')
Explanation: Please not thatat 11:32:40 we hadto change battery and after that there were some observations with huge errors till 11:38:57.
End of explanation
deltas = gps_data['time'].diff()[1:] # time difference from previous record
dd = deltas[deltas > timedelta(seconds=1)] # select row where diff > 1 sec
print(f'{gps_data.shape[0]} records with {dd.size} gaps')
Explanation: Do we have gaps in the data series?
End of explanation
t0 = gps_data['time'].min()
t0 = datetime(t0.year, t0.month, t0.day, 0, 0, 0) # keep day
dt = (gps_data['time'] - t0).dt.total_seconds() # time difference from midnight
de = gps_data['elev'] - gps_data['elev'].max()
dt = dt.to_numpy().astype(int)
de = (de.to_numpy() * 1000).astype(int) # change to mm
gps_x = dt
gps_y = de
print(f'max deflection: {gps_y.min():.1f} mm')
critical = -40
gps = np.c_[gps_x, gps_y] # numpy matrix from x, y vectors
gps_bus = gps[gps[:,1] < critical] # select below critical
print(f'{gps_bus.shape[0]} seconds under critical')
plt.plot(gps_x, gps_y) # plot all data
plt.plot(gps_bus[:,0], gps_bus[:,1], 'ro') # plot critical points
Explanation: As it is possible to check, there are no gaps.
Therefore, let's prepare the data and display a chart with the critical deflections, where the critical values are based on the max deflection.
End of explanation
gps_y = gps_y - np.polyval(np.polyfit(gps_x, gps_y, 2), gps_x) # remove second order trend
gps_y = gps_y - gps_y.max() # move max deflection value to zero
print(f'max deflection: {gps_y.min():.1f} mm')
gps = np.c_[gps_x, gps_y] # numpy matrix from x, y vectors
gps_bus = gps[gps[:,1] < critical]   # variable name fixed (was "gpss_bus") so the detrended selection is actually used below
print(f'{gps_bus.shape[0]} seconds under critical')
plt.plot(gps_x, gps_y)
plt.plot(gps_bus[:,0], gps_bus[:,1], 'ro')
Explanation: This time, the decreasing trend in the data is not as clear as in the previous example. Nonetheless, let's remove it.
End of explanation
start = 0 # start row index of the first bus
gbus = []
for i in range(1, gps_bus.shape[0]): # for each row except first
if gps_bus[i,0] - gps_bus[i-1,0] > 1: # gap -> new bus arrived
if i-start-1 > 2: # pass longer than 2 seconds?
gbus.append((np.average(gps_bus[start:i,0]),
np.min(gps_bus[start:i,1]),
i-start-1)) # store time, max deflection and duration (sec)
start = i
# add last
if gps_bus.shape[0] - start > 2:
gbus.append((np.average(gps_bus[start:,0]),
np.min(gps_bus[start:,1]),
gps_bus.shape[0]-start))
print(f'{len(gbus)} busses found')
print(f'time max.defl. duration')
fo = open('sample_data/gps_bus.txt', 'w')
for b in gbus:
print(f'{datetime.fromtimestamp(b[0]).strftime("%H:%M:%S"):8s} {b[1]:5.1f} {b[2]:5d}')
print(f'{b[0]:.1f} {b[1]:.1f} {b[2]}', file=fo)
fo.close()
Explanation: Finally, let's try to estimate the bus passes and save them to a file.
End of explanation
plt.plot(gps_x, gps_y)
plt.plot([gps_x.min(), gps_x.max()], [critical, critical])
plt.plot([b[0] for b in gbus], [b[1] for b in gbus], 'ro')
Explanation: Let's display the results:
End of explanation
i = 0 # index for rts bus
j = 0 # index for gps bus
ni = len(bus)
nj = len(gbus)
while True:
if i < ni and j < nj:
if bus[i][0] < gbus[j][0]:
print(f'{datetime.fromtimestamp(bus[i][0]).strftime("%H:%M:%S"):8s} {bus[i][1]:5.1f} {bus[i][2]:5d} TPS')
i += 1
else:
print(f'{datetime.fromtimestamp(gbus[j][0]).strftime("%H:%M:%S"):8s} {gbus[j][1]:5.1f} {gbus[j][2]:5d} GPS ***')
j += 1
elif i < ni:
print(f'{datetime.fromtimestamp(bus[i][0]).strftime("%H:%M:%S"):8s} {bus[i][1]:5.1f} {bus[i][2]:5d} TPS')
i += 1
elif j < nj:
print(f'{datetime.fromtimestamp(gbus[j][0]).strftime("%H:%M:%S"):8s} {gbus[j][1]:5.1f} {gbus[j][2]:5d} GPS ***')
j += 1
else:
break
Explanation: Now, let's merge both the RTS and GPS result files, and try to find matches:
End of explanation
def cross_corr(x1, y1, x2, y2):
    """calculate normalized cross correlation between two data series
    :param x1: first data x
    :param y1: first data y
    :param x2: second data x
    :param y2: second data y
    :return lags and correlations
    """
# find common range in x
xmin = max(x1[0], x2[0])
xmax = min(x1[-1], x2[-1])
n = int(xmax - xmin) # number of seconds
x = np.linspace(xmin, xmax, n)
yy1 = np.interp(x, x1, y1)
yy2 = np.interp(x, x2, y2)
yy1 = (yy1 - np.mean(yy1)) / (np.std(yy1))
yy2 = (yy2 - np.mean(yy2)) / np.std(yy2)
corr = np.correlate(yy1, yy2, "full")
lags = np.arange(-n + 1, n)
return lags, corr
lags, corr = cross_corr(rts_x, rts_y, gps_x, gps_y)
plt.plot(lags, corr)
plt.grid()
plt.title("Crosscorrelation RTS - GNSS")
plt.show()
print(f'The time offset between RTS and GNSS data: {lags[np.argmax(corr)]} seconds')
Explanation: Using cross correlation, let's check how close the relation is between the RTS and the GPS data.
End of explanation
shift_rts_x = rts_x - 200 # shift with 200 seconds
lags, corr = cross_corr(shift_rts_x[150:], rts_y[150:], gps_x[:-149], gps_y[:-149])
plt.plot(lags, corr)
plt.grid()
plt.title("Crosscorrelation RTS - GNSS")
plt.show()
print(f'The time offset between RTS-200 and GNSS data: {lags[np.argmax(corr)]} seconds')
Explanation: With the help of the cross-correlation, we can check the time offset of the two series (in the example, a time offset of 2).
Based on that, let's check the crosscorrelation results by shifting the TPS observations:
End of explanation
import sqlite3 as sql
conn = sql.connect('sample_data/aruco.db')
cam_data = pd.read_sql('SELECT * FROM template_coo', conn, parse_dates=['datetime'])
print(f'{cam_data.shape[0]} records loaded')
print(f'Columns: {cam_data.columns}')
Explanation: Processing camera observations of ArUco marker
In the field, Ulyxes was used to get the position of the ArUco marker. The image coordinates of the ArUco marker are stored in a SQLite database. Upload the aruco.db file to the sample_data folder.
End of explanation
fig = plt.figure()
ax = fig.add_axes([0, 0, 4., 0.5]) # dimensions of the chart window
ax.plot(cam_data['datetime'], cam_data['north'])
_ = ax.set_title('Elevations')
Explanation: Let's plot the camera data (north column and datetime)
End of explanation
cam_data = pd.read_sql('SELECT * FROM template_coo WHERE quality > 0.9', conn, parse_dates=['datetime'])
print(f'{cam_data.shape[0]} records loaded')
print(f'Columns: {cam_data.columns}')
cam_data['north'] = -2.5 * cam_data['north'] # 1 pixel is ~2.5 mm
fig = plt.figure()
ax = fig.add_axes([0, 0, 4., 0.5]) # dimensions of the chart window
ax.plot(cam_data['datetime'], cam_data['north'])
_ = ax.set_title('Elevations')
Explanation: The figure above shows that there are some false positions. Let's use only those positions where the quality is above 0.9, changing the direction, and the pixels to millimeters.
End of explanation
deltas = cam_data['datetime'].diff()[1:] # time difference from previous record
dd = deltas[deltas > timedelta(seconds=1)] # select row where diff > 1 sec
print(f'{cam_data.shape[0]} records with {dd.size} gaps')
Explanation: The figure above is less smooth when compared to the RTS and GNSS results.
Do we have gaps in the data series?
End of explanation
t0 = cam_data['datetime'].min()
t0 = datetime(t0.year, t0.month, t0.day, 0, 0, 0)
dt = (cam_data['datetime'] - t0).dt.total_seconds() # time difference from midnight
de = cam_data['north'] - cam_data['north'].max()
dt = dt.to_numpy().astype(int)
de = (de.to_numpy()).astype(int)
cam_x = np.arange(dt[0], dt[-1]+1) # create serie of seconds
cam_y = np.interp(cam_x, dt, de) # interpolate missing deflection (linear)
print(f'max deflection: {cam_y.min():.1f} mm')
Explanation: As it is possible to check, there are 13 gaps. Let's fill the gaps.
End of explanation
critical = -30 # critical deflection value
cam = np.c_[cam_x, cam_y] # numpy matrix from x, y vectors
cam_bus = cam[cam[:,1] < critical] # select below critical
print(f'{cam_bus.shape[0]} seconds under critical')
plt.plot(cam_x, cam_y) # plot all data
plt.plot(cam_bus[:,0], cam_bus[:,1], 'ro') # plot critical points
Explanation: Now, let's prepare the data and display a chart with the critical deflections, where the critical values are based on the max deflection.
End of explanation
start = 0 # start row index of the first bus
cbus = []
for i in range(1, cam_bus.shape[0]): # for each row except first
if cam_bus[i,0] - cam_bus[i-1,0] > 1: # gap -> new bus arrived
if i-start-1 > 2: # pass longer than 2 seconds?
cbus.append((np.average(cam_bus[start:i,0]),
np.min(cam_bus[start:i,1]),
i-start-1)) # store time, max deflection and duration (sec)
start = i
# add last
if cam_bus.shape[0] - start > 2:
cbus.append((np.average(cam_bus[start:,0]),
np.min(cam_bus[start:,1]),
cam_bus.shape[0]-start))
print(f'{len(cbus)} busses found')
print(f'time max.defl. duration')
fo = open('sample_data/cam_bus.txt', 'w')
for b in cbus:
print(f'{datetime.fromtimestamp(b[0]).strftime("%H:%M:%S"):8s} {b[1]:5.1f} {b[2]:5d}')
print(f'{b[0]:.1f} {b[1]:.1f} {b[2]}', file=fo)
fo.close()
Explanation: As it is possible to check, there is no significant trend in the data. Finally, let's try to estimate the bus passes and save them to a file.
End of explanation |
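Since the same grouping logic is repeated three times above (RTS, GNSS and camera), a reusable helper along these lines could replace the copy-pasted loops (sketch only, mirroring the loops above; not part of the original notebook):
def find_passes(xy, min_duration=2):
    """Group consecutive below-critical seconds into events.
    xy is an Nx2 array of (second, deflection) rows already filtered to below-critical values."""
    passes = []
    start = 0
    for i in range(1, xy.shape[0]):
        if xy[i, 0] - xy[i-1, 0] > 1:               # gap in the seconds -> a new event starts
            if i - start - 1 > min_duration:         # keep events longer than min_duration seconds
                passes.append((np.average(xy[start:i, 0]), np.min(xy[start:i, 1]), i - start - 1))
            start = i
    if xy.shape[0] - start > min_duration:           # flush the last event
        passes.append((np.average(xy[start:, 0]), np.min(xy[start:, 1]), xy.shape[0] - start))
    return passes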
2,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CAPM - Capital Asset Pricing Model
Watch the video for the full overview.
Portfolio Returns
Step1: Compare Cumulative Return
Step2: Get Daily Return
Step3: What if our stock was completely related to SP500? | Python Code:
# Model CAPM as a simple linear regression
from scipy import stats
help(stats.linregress)
import pandas as pd
import pandas_datareader as web
spy_etf = web.DataReader('SPY', 'google')
spy_etf.info()
spy_etf.head()
start = pd.to_datetime('2010-01-04')
end = pd.to_datetime('2017-07-18')
aapl = web.DataReader('AAPL', 'google', start, end)
aapl.head()
import matplotlib.pyplot as plt
%matplotlib inline
aapl['Close'].plot(label = 'AAPL',
figsize = (12, 8))
spy_etf['Close'].plot(label = 'SPY Index')
plt.legend()
Explanation: CAPM - Capital Asset Pricing Model
Watch the video for the full overview.
Portfolio Returns:
$r_p(t) = \sum\limits_{i}^{n}w_i r_i(t)$
Market Weights:
$ w_i = \frac{MarketCap_i}{\sum_{j}^{n}{MarketCap_j}} $
CAPM of a portfolio
$ r_p(t) = \beta_pr_m(t) + \sum\limits_{i}^{n}w_i \alpha_i(t)$
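A minimal numeric illustration of the two formulas above (hypothetical numbers, not from the video or this notebook):
import numpy as np
market_caps = np.array([800e9, 400e9, 200e9])      # assumed market caps of three stocks
weights = market_caps / market_caps.sum()          # w_i = MarketCap_i / sum_j MarketCap_j
daily_returns = np.array([0.010, -0.005, 0.002])   # assumed r_i(t) for one day
portfolio_return = weights @ daily_returns         # r_p(t) = sum_i w_i r_i(t)
print(portfolio_return)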
End of explanation
aapl['Cumulative'] = aapl['Close'] / aapl['Close'].iloc[0]
spy_etf['Cumulative'] = spy_etf['Close'] / spy_etf['Close'].iloc[0]
aapl['Cumulative'].plot(label = 'AAPL',
figsize = (10,8))
spy_etf['Cumulative'].plot(label = 'SPY Index')
plt.legend()
plt.title('Cumulative Return')
Explanation: Compare Cumulative Return
End of explanation
aapl['Daily Return'] = aapl['Close'].pct_change(1)
spy_etf['Daily Return'] = spy_etf['Close'].pct_change(1)
fig = plt.figure(figsize = (12, 8))
plt.scatter(aapl['Daily Return'], spy_etf['Daily Return'],
alpha = 0.3)
aapl['Daily Return'].hist(bins = 100, figsize = (12, 8))
spy_etf['Daily Return'].hist(bins = 100, figsize = (12, 8))
beta,alpha, r_value, p_value, std_err = stats.linregress(aapl['Daily Return'].iloc[1:],spy_etf['Daily Return'].iloc[1:])
beta
alpha
r_value
Explanation: Get Daily Return
End of explanation
spy_etf['Daily Return'].head()
import numpy as np
noise = np.random.normal(0, 0.001, len(spy_etf['Daily Return'].iloc[1:]))
noise
spy_etf['Daily Return'].iloc[1:] + noise
beta, alpha, r_value, p_value, std_err = stats.linregress(spy_etf['Daily Return'].iloc[1:]+noise,
spy_etf['Daily Return'].iloc[1:])
beta
alpha
Explanation: What if our stock was completely related to SP500?
End of explanation |
2,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measuring a Multiport Device with a 2-Port Network Analyzer
Introduction
In microwave measurements, one commonly needs to measure an n-port device with an m-port network analyzer ($m<n$ of course).
<img src="nports_with_2ports.svg"/>
This can be done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. With multiple measurements, it is then possible to reconstitute the original n-port. The first section of this example illustrates this method.
However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. Or, sometimes it is not possible to have matched loads for all ports. The second part of this example presents an elegant solution to this problem, using impedance renormalization. We'll call it Tippet's technique, because it has a good ring to it.
Step1: Matched Ports
Let's assume that you have a 2-ports VNA. In order to measure a n-port network, you will need at least $p=n(n-1)/2$ measurements between the different pair of ports (total number of unique pairs of a set of n).
For example, let's assume we want to measure a 3-ports network with a 2-ports VNA. One needs to perform at least 3 measurements
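A quick sanity check of that pair count, shown here as an illustrative snippet (not part of the original notebook):
from itertools import combinations
print(list(combinations([1, 2, 3], 2)))  # [(1, 2), (1, 3), (2, 3)] -> 3 pairwise measurements for a 3-port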
Step2: For the sake of the demonstration, we will "fake" the 3 distinct measurements by extracting 3 subsets of the original Network, i.e., 3 subnetworks
Step3: In reality of course, these three Networks come from three measurements with distinct pairs of ports, the non-used port being properly matched.
Before using the n_twoports_2_nport function, one must define the name of these subsets by setting the Network.name property, so that the function knows which corresponds to what
Step4: Now we can build the 3-ports Network from these three 2-port subnetworks
Step5: Tippet's Technique
This example demonstrates a numerical test of the technique described in "A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer" [1].
In Tippets technique, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$.
During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally, the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50$ ohm.
[1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique
Step6: Next, let's generate a random 4-port network which will be the DUT, that we are trying to measure with our 2-port network analyzer.
Step7: Now, we need to define the loads used to terminate each port when it is not being measured; note, as described in [1], not more than one can have full reflection, $|\Gamma| = 1$
Step8: Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = $n\choose 2$, for n-port DUT on a 2-port VNA.
Step9: Now to do it. Ok, we loop over the port combos and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composite network.
Step10: Results
Self-Consistency
Note that 6 measurements of 2-port subnetworks work out to 24 s-parameters, and we only need 16. This is because each reflect s-parameter is measured three times. As in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices; from this you can see that each has been renormalized to the appropriate z0.
Step11: Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots correspond to plots 5 and 7 of [1]
Step12: However, the renormalized measurements agree perfectly. These plots correspond to plots 6 and 8 of [1]
Step13: Test For Accuracy
Making sure our composite network is the same as our DUT
Step14: Nice! How close?
Step16: Dang!
Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then, using Tippet's technique, you can leave each port wide open while not being measured. This way you don't have to buy a bunch of loads. How sweet would that be?
More Complex Simulations | Python Code:
import skrf as rf
from itertools import combinations
%matplotlib inline
from pylab import *
rf.stylely()
Explanation: Measuring a Multiport Device with a 2-Port Network Analyzer
Introduction
In microwave measurements, one commonly needs to measure an n-port device with an m-port network analyzer ($m<n$ of course).
<img src="nports_with_2ports.svg"/>
This can be done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. With multiple measurements, it is then possible to reconstitute the original n-port. The first section of this example illustrates this method.
However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. Or, sometimes it is not possible to have matched loads for all ports. The second part of this example presents an elegant solution to this problem, using impedance renormalization. We'll call it Tippet's technique, because it has a good ring to it.
End of explanation
tee = rf.data.tee
print(tee)
Explanation: Matched Ports
Let's assume that you have a 2-ports VNA. In order to measure an n-port network, you will need at least $p=n(n-1)/2$ measurements between the different pairs of ports (the total number of unique pairs of a set of n).
For example, let's assume we want to measure a 3-ports network with a 2-ports VNA. One needs to perform at least 3 measurements: between ports 1 & 2, between ports 2 & 3 and between ports 1 & 3. We will assume these measurements are then converted into three 2-port Networks. To build the full 3-ports Network, one needs to provide a list of these 3 (sub)networks to the scikit-rf builtin function n_twoports_2_nport. While the order of the measurements in the list is not important, pay attention to define the Network.name properties of these subnetworks to contain the port index, for example p12 for the measurement between ports 1&2 or p23 between 2&3, etc.
Let's suppose we want to measure a tee:
End of explanation
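As a quick sanity check of the $p=n(n-1)/2$ count (an added snippet, not in the original example), the unique port pairs for a 3-port device can be listed with itertools:
from itertools import combinations
pairs = list(combinations(range(3), 2))
print(pairs)        # [(0, 1), (0, 2), (1, 2)]
print(len(pairs))   # 3 == 3*(3-1)//2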
# 2 port Networks as if one measures the tee with a 2 ports VNA
tee12 = rf.subnetwork(tee, [0, 1]) # 2 port Network btw ports 1 & 2, port 3 being matched
tee23 = rf.subnetwork(tee, [1, 2]) # 2 port Network btw ports 2 & 3, port 1 being matched
tee13 = rf.subnetwork(tee, [0, 2]) # 2 port Network btw ports 1 & 3, port 2 being matched
Explanation: For the sake of the demonstration, we will "fake" the 3 distinct measurements by extracting 3 subsets of the original Network, i.e., 3 subnetworks:
End of explanation
tee12.name = 'tee12'
tee23.name = 'tee23'
tee13.name = 'tee13'
Explanation: In reality of course, these three Networks come from three measurements with distinct pairs of ports, the non-used port being properly matched.
Before using the n_twoports_2_nport function, one must define the name of these subsets by setting the Network.name property, so that the function knows which measurement corresponds to which pair of ports:
End of explanation
ntw_list = [tee12, tee23, tee13]
tee_rebuilt = rf.n_twoports_2_nport(ntw_list, nports=3)
print(tee_rebuilt)
# this is an ideal example, both Network are thus identical
print(tee == tee_rebuilt)
Explanation: Now we can build the 3-ports Network from these three 2-port subnetworks:
End of explanation
wg = rf.wr10
wg.frequency.npoints = 101
Explanation: Tippet's Technique
This example demonstrates a numerical test of the technique described in "A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer" [1].
In Tippets technique, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$.
During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally, the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50$ ohm.
[1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique:
Calibrate 2-port network analyzer
Get four known terminations ($Z_1, Z_2, Z_3,Z_4$). No more than one can have $|\Gamma| = 1$
Measure all combinations of 2-port subnetworks (there are 6). Each port not currently being measured must be terminated with its corresponding load.
Renormalize each subnetwork to the impedances of the loads used to terminate it when not being measured.
Build composite 4-port, renormalize to VNA impedance.
Implementation
First, we create a Media object, which is used to generate networks for testing. We will use WR-10 Rectangular waveguide.
End of explanation
dut = wg.random(n_ports = 4,name= 'dut')
dut
Explanation: Next, let's generate a random 4-port network which will be the DUT, that we are trying to measure with our 2-port network analyzer.
End of explanation
loads = [wg.load(.1+.1j),
wg.load(.2-.2j),
wg.load(.3+.3j),
wg.load(.5),
]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
Explanation: Now, we need to define the loads used to terminate each port when it is not being measured; note, as described in [1], not more than one can have full reflection, $|\Gamma| = 1$
End of explanation
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
port_combos
Explanation: Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = $n\choose 2$, for n-port DUT on a 2-port VNA.
End of explanation
composite = wg.match(nports = 4) # composite network, to be filled.
measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize the composite network to the 50 ohm system impedance
composite.renormalize(50)
Explanation: Now to do it. Ok, we loop over the port combos and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composite network.
End of explanation
measured_renorm
Explanation: Results
Self-Consistency
Note that 6 measurements of 2-port subnetworks work out to 24 s-parameters, and we only need 16. This is because each reflect s-parameter is measured three times. As in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices; from this you can see that each has been renormalized to the appropriate z0.
End of explanation
s11_set = rf.NS([measured[k] for k in measured if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
Explanation: Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots correspond to plots 5 and 7 of [1]
End of explanation
s11_set = rf.NS([measured_renorm[k] for k in measured_renorm if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
Explanation: However, the renormalized measurements agree perfectly. These plots correspond to plots 6 and 8 of [1]
End of explanation
composite == dut
Explanation: Test For Accuracy
Making sure our composite network is the same as our DUT
End of explanation
sum((composite - dut).s_mag)
Explanation: Nice! How close?
End of explanation
def tippits(dut, gamma, noise=None):
"""simulate Tippet's technique on a 4-port dut."""
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
loads = [wg.load(gamma) for k in ports]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
composite = wg.match(nports = dut.nports) # composite network, to be filled.
#measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
if noise is not None:
two_port.add_noise_polar(*noise)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize from
composite.renormalize(50)
return composite
wg.frequency.npoints = 11
dut = wg.random(4)
#er = lambda gamma: mean((tippits(dut,gamma)-dut).s_mag)/mean(dut.s_mag)
def er(gamma, *args):
return max(abs(tippits(dut, rf.db_2_mag(gamma),*args).s_db-dut.s_db).flatten())
gammas = linspace(-80,0,11)
title('Error vs $|\Gamma|$')
plot(gammas, [er(k) for k in gammas])
plot(gammas, [er(k) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
figure()
#er = lambda gamma: max(abs(tippits(dut,gamma,(1e-5,.1)).s_db-dut.s_db).flatten())
noise = (1e-5,.1)
title('Error vs $|\Gamma|$ with reasonable noise')
plot(gammas, [er(k, noise) for k in gammas])
plot(gammas, [er(k,noise) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
Explanation: Dang!
Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then, using Tippet's technique, you can leave each port wide open while not being measured. This way you don't have to buy a bunch of loads. How sweet would that be?
More Complex Simulations
End of explanation |
2,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Concatenation
Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use pd.concat and pass in a list of DataFrames to concatenate together
Step2: Example DataFrames
Step3: Merging
The merge function allows you to merge DataFrames together using a similar logic as merging SQL Tables together. For example
Step4: Or to show a more complicated example
Step5: Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame. | Python Code:
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Merging, Joining, and Concatenating
There are 3 main ways of combining DataFrames together: Merging, Joining and Concatenating. In this lecture we will discuss these 3 methods with examples.
Example DataFrames
End of explanation
pd.concat([df1,df2,df3])
pd.concat([df1,df2,df3],axis=1)
Explanation: Concatenation
Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use pd.concat and pass in a list of DataFrames to concatenate together:
End of explanation
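As an optional aside (not part of the original lecture), pd.concat can also re-number the rows or record which piece each row came from; ignore_index and keys are standard pandas keywords:
pd.concat([df1, df2, df3], ignore_index=True)                  # re-number rows 0..11
pd.concat([df1, df2, df3], keys=['first', 'second', 'third'])  # hierarchical index by source frame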
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
Explanation: Example DataFrames
End of explanation
pd.merge(left,right,how='inner',on='key')
Explanation: Merging
The merge function allows you to merge DataFrames together using a similar logic as merging SQL Tables together. For example:
End of explanation
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2'])
Explanation: Or to show a more complicated example:
End of explanation
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right, how='outer')
Explanation: Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.
End of explanation |
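For completeness (an added sketch, not in the original notebook), join accepts the same how options as merge; the two variants not shown above are:
left.join(right, how='inner')
left.join(right, how='right')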
2,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plane-DEM intersections
First dated version
Step1: Test case 1
The first test case is illustrated in the image below. We have a horizontal topographic surface, at a height of 0, with 100 x 100 cells with a cell size of 1. The geological plane dips 45° towards East. The source point for the plane is located at (0, 50, 50).
The locations of the expected intersection points are (50, *, 0).
First, a horizontal plane was created with Saga GIS and saved in pygsf/example_data/horiz_plane.asc.
Step2: We read the data source with success. So we may unpack the result.
Step3: Hmmm, there is no projection info. In fact, there shouldn't..
Step4: A dictionary, as suspected. Try to see the content..
Step5: A very horizontal surface, we agree..
Step6: Given these data, we store them into a GeoArray
Step7: There is a single band provided in the geoarray, and represented by the data array.
The signature of the plane-DEM intersection function is
Step8: The source point is located at (0, 50, 50)
Step9: Now we try calculating the intersection
Step10: As expected, all the intersection points lie at (50, *, 0)
Plotting with Bokeh..
Step11: Test case 2
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 1 as geological plane. We should get no intersection.
Step12: The horizontal geological plane definition
Step13: The source point located at (0, 50, 1)
Step14: Ok, list is empty, as expected.
Test case 3
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 0 as geological plane. We should get all grid points as intersections.
The variables are the same as Case 2, apart from the point definition
Step15: They seem correct, just quite numerous..
We visualize them with Bokeh. | Python Code:
from pygsf.io.gdal.raster import try_read_raster_band
Explanation: Plane-DEM intersections
First dated version: 2019-06-11
Current version: 2021-04-24
Last run: 2021-04-24
A few simulated topographic surfaces were used to validate the routine for calculating the plane-DEM intersection.
Loading the dataset can be made with the following function:
End of explanation
source_data = "/home/mauro/Documents/projects/gsf/example_data/others/horiz_plane.asc"
success, cntnt = try_read_raster_band(raster_source=source_data)
print(success)
Explanation: Test case 1
The first test case is illustrated in the image below. We have a horizontal topographic surface, at a height of 0, with 100 x 100 cells with a cell size of 1. The geological plane dips 45° towards East. The source point for the plane is located at (0, 50, 50).
The locations of the expected intersection points are (50, *, 0).
First, a horizontal plane was created with Saga GIS and saved in pygsf/example_data/horiz_plane.asc.
End of explanation
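A quick back-of-the-envelope check of that expectation (an added note): the plane passes through (0, 50, 50) and dips 45° towards East, so its elevation drops by one unit per unit of x; it reaches the z = 0 surface where 50 - x * tan(45°) = 0, i.e. at x = 50 for every y:
import math
x_intersect = 50 / math.tan(math.radians(45))  # -> 50.0 (up to floating-point rounding)
x_intersect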
geotransform, projection, band_params, data = cntnt
type(geotransform)
print(geotransform)
type(projection)
print(projection)
Explanation: We read the data source with success. So we may unpack the result.
End of explanation
type(band_params)
Explanation: Hmmm, there is no projection info. In fact, there shouldn't..
End of explanation
print(band_params)
Explanation: A dictionary, as suspected. Try to see the content..
End of explanation
type(data)
data.shape
data.min()
data.max()
Explanation: A very horizontal surface, we agree..
End of explanation
from pygsf.georeferenced.rasters import GeoArray
ga = GeoArray(inGeotransform=geotransform, inLevels=[data])
Explanation: Given these data, we store them into a GeoArray:
End of explanation
from pygsf.orientations.orientations import Plane
gplane = Plane(azim=90.0, dip_ang=45.0)
print(gplane)
Explanation: There is a single band provided in the geoarray, and represented by the data array.
The signature of the plane-DEM intersection function is:
plane_dem_intersection (srcPlaneAttitude: Plane, srcPt: Point, geo_array: GeoArray, level_ndx: int=0)
-> Tuple[List[Point], List[Point]]:
We already have the geoarray; we need to define the source plane attitude and the source point.
The geoplane is East-dipping with a dip angle of 45°:
End of explanation
from pygsf.geometries.shapes.space3d import Point3D
pt = Point3D(0, 50, 50)
Explanation: The source point is located at (0, 50, 50)
End of explanation
from pygsf.georeferenced.rasters import plane_dem_intersection
inters_pts = plane_dem_intersection(
srcPlaneAttitude=gplane,
srcPt=pt,
geo_array=ga)
print(inters_pts)
Explanation: Now we try calculating the intersection:
End of explanation
from bokeh.plotting import figure, output_notebook, show
x = list(map(lambda pt: pt.x, inters_pts))
y = list(map(lambda pt: pt.y, inters_pts))
output_notebook()
p = figure()
p.circle(x, y, size=2, color="navy", alpha=0.5)
show(p)
Explanation: As expected, all the intersection points lie at (50, *, 0)
Plotting with Bokeh..
End of explanation
source_data = "/home/mauro/Documents/projects/gsf/example_data/others/horiz_plane.asc"
success, cntnt = try_read_raster_band(raster_source=source_data)
print(success)
geotransform, projection, band_params, data = cntnt
ga = GeoArray(inGeotransform=geotransform, inLevels=[data])
Explanation: Test case 2
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 1 as geological plane. We should get no intersection.
End of explanation
from pygsf.orientations.orientations import Plane
gplane = Plane(azim=90.0, dip_ang=0.0)
Explanation: The horizontal geological plane definition:
End of explanation
pt = Point3D(0, 50, 1)
inters_pts = plane_dem_intersection(
srcPlaneAttitude=gplane,
srcPt=pt,
geo_array=ga)
print(inters_pts)
Explanation: The source point located at (0, 50, 1)
End of explanation
pt = Point3D(0, 50, 0)
inters_pts = plane_dem_intersection(
srcPlaneAttitude=gplane,
srcPt=pt,
geo_array=ga)
print(inters_pts)
Explanation: Ok, list is empty, as expected.
Test case 3
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 0 as geological plane. We should get all grid points as intersections.
The variables are the same as Case 2, apart from the point definition:
End of explanation
from bokeh.plotting import figure, output_notebook, show
x = list(map(lambda pt: pt.x, inters_pts))
y = list(map(lambda pt: pt.y, inters_pts))
output_notebook()
p = figure()
p.circle(x, y, size=2, color="navy", alpha=0.5)
show(p)
Explanation: They seem correct, just quite numerous..
We visualize them with Bokeh.
End of explanation |
2,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scores (pyannote.core.scores.Scores)
Step1: Scores instances are used to describe classification scores.
For instance, one can use a Scores to store the result of speaker identification approach applied on an episode of The Big Bang Theory TV series.
Step2: For instance, to represent a dialogue between Penny and Leonard, we could do the following | Python Code:
from pyannote.core import Scores
Explanation: Scores (pyannote.core.scores.Scores)
End of explanation
scores = Scores(
uri='TheBigBangTheory.Season01.Episode01',
modality='speaker'
)
Explanation: Scores instances are used to describe classification scores.
For instance, one can use a Scores to store the result of speaker identification approach applied on an episode of The Big Bang Theory TV series.
End of explanation
from pyannote.core import Segment
scores[Segment(3, 5), '_', 'Penny'] = 8
scores[Segment(3, 5), '_', 'Leonard'] = 0.15
scores[Segment(3, 5), '_', 'Sheldon'] = 0.05
scores[Segment(5.5, 7), '_', 'Penny'] = 0.4
scores[Segment(5.5, 7), '_', 'Leonard'] = 0.5
scores[Segment(5.5, 7), '_', 'Sheldon'] = 0.1
scores[Segment(8, 10), '_', 'Penny'] = 0.4
scores[Segment(8, 10), '_', 'Leonard'] = 0.25
scores[Segment(8, 10), '_', 'Sheldon'] = -12
scores
scores.to_annotation()
from pyannote.core import notebook
subplot(211)
notebook(scores, time=False)
subplot(212)
notebook(scores.to_annotation(), legend=False)
Explanation: For instance, to represent a dialogue between Penny and Leonard, we could do the following:
End of explanation |
2,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
analyzing data from multiple files
software carpentry
Step1: if / else statements
syntax similar to that in loops, need
Step2: useful tools for strings
Step3: regular expressions
We have already seen easy ways to search/replace things in strings, with the string methods .split, .replace, .strip. See dir("") for string methods. To get access to manipulations using regular expressions
Step4: When there is no match, the matched object is None and interpreted as False in boolean context
Step5: by the way
Step6: Finding all instances
Step7: search and replace
Step8: next
Step9: split according to a regular expression
removes the matched substrings
returns an array
Step11: writing functions
syntax similar to if/else statements and for loops
Step13: key principle
Step14: function arguments
Step16: exercise | Python Code:
import glob
glob.glob("inflammation-??.csv")
filenames = glob.glob('inflammation*.csv')
for f in filenames:
data = numpy.loadtxt(fname=f, delimiter=',')
print("file",f[13:15],
", # rows and columns: ", data.shape, sep="")
import numpy
import matplotlib.pyplot
filenames = sorted(glob.glob('inflammation*.csv'))
filenames = filenames[0:3]
for f in filenames:
print(f)
data = numpy.loadtxt(fname=f, delimiter=',')
fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(numpy.mean(data, axis=0))
axes2.set_ylabel('max')
axes2.plot(numpy.max(data, axis=0))
axes3.set_ylabel('min')
axes3.plot(numpy.min(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
Explanation: analyzing data from multiple files
software carpentry
End of explanation
num = -3
print("num is",num)
print("first try: ")
if num > 0:
print(num, "is positive")
print("second try: ", end="")
if num > 0:
print(num, "is positive")
else:
print(num, "is 0 or negative")
print("third try: ", end="")
if num > 0:
print(num, "is positive")
elif num == 0:
print(num, "is zero")
else:
print(num, "is negative")
f = 'inflammation-01.csv'
data = numpy.loadtxt(fname=f, delimiter=',')
print("file",f,": ", sep="", end="")
if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
print('Suspicious looking maxima!')
elif numpy.sum(numpy.min(data, axis=0)) == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
a = None # try again with a = "abc", "", [1,2,3], [], True, False, 0, 1, 18
if a:
print("a was true")
else:
print("a was false")
Explanation: if / else statements
syntax similar to that in loops, need : after the conditions, and indentation is critical to tell where each block ends.
elif statements optional. else would be the final statement, optional too.
to combine conditions: and, or, not.
End of explanation
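A small added example of combining conditions with and, or, not (reusing the num defined above):
if num < 0 or num > 100:
    print(num, "is outside the range [0, 100]")
if not (num > 0 and num % 2 == 0):
    print(num, "is not a positive even number")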
a = "hello world"
print(a.startswith("h"))
print(a.startswith("he"))
print("h" in a)
print("low" in a)
print("lo w" in a)
Explanation: useful tools for strings: in, startswith
End of explanation
print(type(""))
print(dir(""))
print("aha".find("a"))
print("hohoho".find("oh"))
mylist = ["AA","BB","CC"]
"coolsep".join(mylist)
import re
filenames = glob.glob('*.csv')
print(filenames)
mo = re.search(r'i.*n',filenames[0]) # multiplier * is greedy
print(mo) # match object, stores much info. search: first instance only.
print(mo.group()) # what matched
print(mo.start()) # where match starts: indexing start at 0
print(mo.end()) # where it ends: index *after*!
mo = re.search(r'i.*?n',filenames[0])
print(mo)
print(mo.group())
print(mo.start())
print(mo.end())
Explanation: regular expressions
We have already seen easy ways to search/replace things in strings, with the string methods .split, .replace, .strip. See dir("") for string methods. To get access to manipulations using regular expressions:
re library
use r'' (raw strings) to write the regular expression pattern, so that \n is read as a backslash followed by an n, not as a newline character.
multipliers are greedy by default: *, +, ?. Add ? to make them non-greedy
re.search, re.findall, re.sub
info from match objects: .group, .start, .end
End of explanation
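A one-line illustration of the raw-string point above (added for clarity):
print(len('\n'), len(r'\n'))  # 1 2 : '\n' is a single newline, r'\n' is a backslash and an n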
sequences = ["ATCGGGGATCGAAGTTGAG", "ACGGCCAGUGUACN"]
for dna in sequences:
mo = re.search(r'[^ATCG]', dna)
if mo:
print("non-ACGT found in sequence",dna,": found", mo.group())
Explanation: When there is no match, the matched object is None and interpreted as False in boolean context:
End of explanation
for dna in sequences:
if re.search(r'[^ATCG]', dna):
mo = re.search(r'[^ATCG]', dna)
print("non-ACGT found in sequence",dna,": found", mo.group())
Explanation: by the way: compare with the less efficient code:
End of explanation
print(re.findall(r'i.*n',filenames[0])) # greedy. non-overlapping matches
mo = re.findall(r'i.*?n',filenames[0]) # non-greedy
print(mo)
mo
for f in filenames:
if not re.search(r'^i', f): # if no match: search object interpreted as False
print("file name",f,"does not start with i")
Explanation: Finding all instances:
End of explanation
re.sub(r'^(\w)\w+-(\d+)\.csv', r'\1\2.csv', filenames[0])
for i in range(0,len(filenames)):
filenames[i] = re.sub(r'^(\w)\w+-(\d+)\.csv', r'\1\2.csv', filenames[i])
print(filenames)
taxa = ["Drosophila melanogaster", "Homo sapiens"]
for taxon in taxa:
mo = re.search(r'^([^\s]+) ([^\s]+)$', taxon)
if mo:
genus = mo.group(1)
species = mo.group(2)
print("genus=" + genus + ", species=" + species)
print(taxon)
print(mo)
print(mo.start(1))
print(mo.start(2))
Explanation: search and replace: re.sub
capture with parentheses in the regular expression
recall captured elements with \1, \2 etc. in a regular expression, to use them in a replacement for example
End of explanation
taxa_abbrev = []
for taxon in taxa:
taxa_abbrev.append(
re.sub(r'^(\S).* ([^\s]+)$', r'\1_\2', taxon)
)
print(taxa_abbrev)
Explanation: next: abbreviate genus name to its first letter, and replace space by underscore:
End of explanation
coolstring = "Homo sapiens is pretty super"
re.split(r's.p', coolstring)
Explanation: split according to a regular expression
removes the matched substrings
returns an array
End of explanation
def startswithi(str):
"""returns True if the input string starts with 'i'.
Let me continue this very long explanation
and also give an example:
startswithi("hohoho")
note 2 things:
- the double and single quotes inside my triple double-quoted docstring
- in my text here the indentation adds 4 spaces on each line.
Those are ignored because it's a triple set of quotes.
"""
return(bool(re.search(r'^i', str)))
print(startswithi("ihihi"))
print(startswithi("hohoho"))
?startswithi
def fahr_to_kelvin(temp):
return ((temp - 32) * (5/9)) + 273.15
print('freezing point of water:', fahr_to_kelvin(32))
print('boiling point of water:', fahr_to_kelvin(212))
def kelvin_to_celsius(temp_k):
return temp_k - 273.15
print('absolute zero in Celsius:', kelvin_to_celsius(0.0))
def fahr_to_celsius(temp_f):
temp_k = fahr_to_kelvin(temp_f)
result = kelvin_to_celsius(temp_k)
return result
print('freezing point of water in Celsius:', fahr_to_celsius(32.0))
Explanation: writing functions
syntax similar to if/else statements and for loops: colon to end the function name/argument names, and indentation of the function body.
python
def function_name(arguments):
docstring here: to explain what the function does
command indented
indented command
return value
End of explanation
import numpy
import glob
def analyze(filename):
data = numpy.loadtxt(fname=filename, delimiter=',')
fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(numpy.mean(data, axis=0))
axes2.set_ylabel('max')
axes2.plot(numpy.max(data, axis=0))
axes3.set_ylabel('min')
axes3.plot(numpy.min(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
def detect_problems(filename):
data = numpy.loadtxt(fname=filename, delimiter=',')
if (numpy.max(data, axis=0)[0] == 0 and
numpy.max(data, axis=0)[20] == 20):
print('Suspicious looking maxima!')
elif numpy.sum(numpy.min(data, axis=0)) == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
filenames = glob.glob('inflammation*.csv')
for f in filenames[:3]:
print(f)
analyze(f)
detect_problems(f)
Explanation: key principle: break code down into small parts
write functions
if you do some "copy-paste" of your code: you need to write a function
functions make your code easier to debug, easier to read
use meaningful names for functions and for variables
End of explanation
numpy.loadtxt('inflammation-01.csv', delimiter=',')
numpy.loadtxt('inflammation-01.csv', ',')
help(numpy.loadtxt)
Explanation: function arguments: can be named, can have default values
End of explanation
import math
def logfactorial(n):
"""calculate the log of factorial n: log(1) + ... + log(n).
Examples:
>>> round(math.exp(logfactorial(5)),5)
120.0
"""
assert type(n)==int, "argument to logfactorial should be an integer"
res = 0
for i in range(0,n):
res += math.log(i+1)
return res
help(logfactorial)
?logfactorial
Explanation: exercise: binomial coefficients
Calculating binomial coefficients is not easy numerically.
The number of ways to choose k elements among n is
choose(n,k) = n! / (k! (n-k)!)
where factorial n: n! = 1*2*...*n becomes very big very fast.
But many terms cancel each other in choose(n,k), and it is a lot easier numerically to calculate the log of factorial numbers: log(n!)=log(1)+ ... + log(n).
Write a function "logfactorial" that calculates the log(n!) for any integer n>0.
Add a docstring
Add checks on the input n
Add tests as examples inside the docstring.
For the tests to be used, add a section using the doctest module.
Add an optional argument k to calculate log(n!/k!)=log((k+1)*...*n), with default k=0.
Add an associated test.
Write a function "choose" to calculate the log of the binomial log(choose(n,k)) for any integers n>=0 and 0 <= k <= n. Start with the docstring and with a test.
Add an optional argument to this choose function, to return the binomial coefficient itself or its log. Make the function return the binomial coefficient by default, not its log.
End of explanation |
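One possible sketch of the requested choose function (an addition, not the instructor's solution), building on the logfactorial defined above:
def choose(n, k, log=False):
    """binomial coefficient "n choose k" (the coefficient itself by default, its log if log=True).

    Example:
    >>> choose(5, 2)
    10
    """
    assert type(n) == int and type(k) == int, "n and k should be integers"
    assert 0 <= k <= n, "k should satisfy 0 <= k <= n"
    logchoose = logfactorial(n) - logfactorial(k) - logfactorial(n - k)
    return logchoose if log else round(math.exp(logchoose))

choose(5, 2)   # 10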
2,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loan Approval Model
Created with H2O Automatic Machine Learning
This notebook ingests a dataset, and trains many machine learning models intelligently searching the hyper-parameter space for optimal values. A leaderboard is maintained. Finally, an ensemble is created stacking together some of the base learners and the result is added to the leaderboard. The best model is deployed to production.
Step1: Leaderboard
Display the best models, sorted by descending AUC
Step2: Variable Importance - Best Model
Step3: Leaderboard ROC Curves
Step4: Confusion Matrix
Step5: Business Impact Matrix
Weighting Predictions With a Dollar Value
- Correctly predicting GOOD | Python Code:
%%capture
import h2o
from h2o.automl import H2OAutoML
import os
import plotly
import cufflinks
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.figure_factory as ff
plotly.offline.init_notebook_mode(connected=True)
myPlotlyKey = os.environ['SECRET_ENV_BRETTS_PLOTLY_KEY']
py.sign_in(username='bretto777',api_key=myPlotlyKey)
# Suppress unwatned warnings
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
%%capture
#h2o.init(nthreads=1, max_mem_size="256M")
h2o.connect(ip="35.199.178.30")
#h2o.no_progress()
# Import some data from Amazon S3
h2oDF = h2o.import_file("https://s3-us-west-1.amazonaws.com/dsclouddata/LendingClubData/LoansGoodBad.csv")
# Stratified Split into Train/Test
stratsplit = h2oDF["Bad_Loan"].stratified_split(test_frac=0.3, seed=12349453)
train = h2oDF[stratsplit=="train"]
test = h2oDF[stratsplit=="test"]
dfSum = h2oDF.group_by(by="State").sum().frame
dfMean = h2oDF.group_by(by="State").mean().frame
stateData = dfSum.merge(dfMean).as_data_frame(use_pandas=True, header=True)
stateData = stateData.iloc[1:]
train.head(10)
for col in stateData.columns:
stateData[col] = stateData[col].astype(str)
scl = [[0.0, 'rgb(164, 182, 216)'],[0.2, 'rgb(116, 141, 188)'],[0.4, 'rgb(69, 102, 165)'],\
[0.6, 'rgb(45, 82, 153)'],[0.8, 'rgb(26, 62, 132)'],[1.0, 'rgb(4, 37, 99)']]
stateData['text'] = 'Avg Interest_Rate '+stateData['mean_Interest_Rate']+ '<br>' +\
'Total Loan_Amount '+stateData['sum_Loan_Amount']+'<br>'+\
'Avg Term '+stateData['mean_Term']+ '<br>' +\
'Avg Income ' + stateData['mean_Annual_Income']
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = stateData['State'],
z = stateData['sum_Bad_Loan'].astype(float),
locationmode = 'USA-states',
text = stateData['text'],
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
) ),
colorbar = dict(
title = "# Bad Loans")
) ]
layout = dict(
title = 'Bad Loans by State<br>(Hover for breakdown)',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'),
)
fig = dict( data=data, layout=layout )
py.iplot( fig, filename='d3-cloropleth-map' )
# Identify predictors and response
x = train.columns
y = "Bad_Loan"
x.remove(y)
# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()
# Run AutoML, building 11 models
autoModel = H2OAutoML(max_models=11)
autoModel.train(x = x, y = y,
training_frame = train,
leaderboard_frame = test)
Explanation: Loan Approval Model
Created with H2O Automatic Machine Learning
This notebook ingests a dataset, and trains many machine learning models intelligently searching the hyper-parameter space for optimal values. A leaderboard is maintained. Finally, an ensemble is created stacking together some of the base learners and the result is added to the leaderboard. The best model is deployed to production.
End of explanation
leaders = autoModel.leaderboard
leaders
Explanation: Leaderboard
Display the best models, sorted by descending AUC
End of explanation
leaders[1, 0]
importances = h2o.get_model(leaders[2, 0]).varimp(use_pandas=True)
importances
importances = h2o.get_model(leaders[2, 0]).varimp(use_pandas=True)
importances = importances.loc[:,['variable','relative_importance']].groupby('variable').mean()
importances.sort_values(by="relative_importance", ascending=False).iplot(kind='bar', colors='#5AC4F2', theme='white')
Explanation: Variable Importance - Best Model
End of explanation
Model0 = np.array(h2o.get_model(leaders[0, 0]).roc(valid=True))
Model1 = np.array(h2o.get_model(leaders[1, 0]).roc(valid=True))
Model2 = np.array(h2o.get_model(leaders[2, 0]).roc(valid=True))
Model3 = np.array(h2o.get_model(leaders[3, 0]).roc(valid=True))
Model4 = np.array(h2o.get_model(leaders[4, 0]).roc(valid=True))
Model5 = np.array(h2o.get_model(leaders[5, 0]).roc(valid=True))
Model6 = np.array(h2o.get_model(leaders[6, 0]).roc(valid=True))
Model7 = np.array(h2o.get_model(leaders[7, 0]).roc(valid=True))
Model8 = np.array(h2o.get_model(leaders[8, 0]).roc(valid=True))
Model9 = np.array(h2o.get_model(leaders[9, 0]).roc(valid=True))
layout = go.Layout(autosize=False, width=725, height=575, xaxis=dict(title='False Positive Rate', titlefont=dict(family='Arial, sans-serif', size=15, color='grey')),
yaxis=dict(title='True Positive Rate', titlefont=dict(family='Arial, sans-serif', size=15, color='grey')))
Model0Trace = go.Scatter(x = Model0[0], y = Model0[1], mode = 'lines', name = 'Leader', line = dict(color = ('rgb(26, 58, 126)'), width = 3))
Model1Trace = go.Scatter(x = Model1[0], y = Model1[1], mode = 'lines', name = 'Model 1', line = dict(color = ('rgb(135, 160, 216)'), width = 3))
Model2Trace = go.Scatter(x = Model2[0], y = Model2[1], mode = 'lines', name = 'Model 2', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model3Trace = go.Scatter(x = Model3[0], y = Model3[1], mode = 'lines', name = 'Model 3', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model4Trace = go.Scatter(x = Model4[0], y = Model4[1], mode = 'lines', name = 'Model 4', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model5Trace = go.Scatter(x = Model5[0], y = Model5[1], mode = 'lines', name = 'Model 5', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model6Trace = go.Scatter(x = Model6[0], y = Model6[1], mode = 'lines', name = 'Model 6', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model7Trace = go.Scatter(x = Model7[0], y = Model7[1], mode = 'lines', name = 'Model 7', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model8Trace = go.Scatter(x = Model8[0], y = Model8[1], mode = 'lines', name = 'Model 8', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
Model9Trace = go.Scatter(x = Model9[0], y = Model9[1], mode = 'lines', name = 'Model 9', line = dict(color = ('rgb(156, 190, 241)'), width = 1))
traceChanceLine = go.Scatter(x = [0,1], y = [0,1], mode = 'lines+markers', name = 'chance', line = dict(color = ('rgb(136, 140, 150)'), width = 4, dash = 'dash'))
fig = go.Figure(data=[Model0Trace,Model1Trace,Model2Trace,Model3Trace,Model4Trace,Model5Trace,Model7Trace,Model8Trace,Model9Trace,traceChanceLine], layout=layout)
py.iplot(fig)
Explanation: Leaderboard ROC Curves
End of explanation
cm = autoModel.leader.confusion_matrix(xval=True)
cm = cm.table.as_data_frame()
cm
confusionMatrix = ff.create_table(cm)
confusionMatrix.layout.height=300
confusionMatrix.layout.width=800
confusionMatrix.layout.font.size=17
py.iplot(confusionMatrix)
Explanation: Confusion Matrix
End of explanation
CorrectPredictBad = cm.loc[0,'BAD']
CorrectPredictBadImpact = 500
cm1 = CorrectPredictBad*CorrectPredictBadImpact
IncorrectPredictBad = cm.loc[1,'BAD']
IncorrectPredictBadImpact = -100
cm2 = IncorrectPredictBad*IncorrectPredictBadImpact
IncorrectPredictGood = cm.loc[0,'GOOD']
IncorrectPredictGoodImpact = -1000
cm3 = IncorrectPredictGood*IncorrectPredictGoodImpact
CorrectPredictGood = cm.loc[1,'GOOD'] # actual-GOOD row is row 1 (row 0 is actual BAD)
CorrectPredictGoodImpact = 800
cm4 = CorrectPredictGood*CorrectPredictGoodImpact
data_matrix = [['Business Impact', '($) Predicted BAD', '($) Predicted GOOD', '($) Total'],
['($) Actual BAD', cm1, cm3, '' ],
['($) Actual GOOD', cm2, cm4, ''],
['($) Total', cm1+cm2, cm3+cm4, cm1+cm2+cm3+cm4]]
impactMatrix = ff.create_table(data_matrix, height_constant=20, hoverinfo='weight')
impactMatrix.layout.height=300
impactMatrix.layout.width=800
impactMatrix.layout.font.size=17
py.iplot(impactMatrix)
h2o.save_model(model=autoModel.leader)
def approve_loan(Loan_Amount,Term,Interest_Rate,Employment_Years,Home_Ownership,Annual_Income,Verification_Status,Loan_Purpose,State,
Debt_to_Income,Delinquent_2yr,Revolving_Cr_Util,Total_Accounts,Longest_Credit_Length):
# connect to the model scoring service
h2o.connect()
# open the downloaded model
ChurnPredictor = h2o.load_model(path='DRF_model_1496459915419_4')
# define a feature vector to evaluate with the model
newData = pd.DataFrame({'Loan_Amount' : Loan_Amount,
'Term' : Term,
'Interest_Rate' : Interest_Rate,
'Employment_Years' : Employment_Years,
'Home_Ownership' : Home_Ownership,
'Annual_Income' : Annual_Income,
'Verification_Status' : Verification_Status,
'Loan_Purpose' : Loan_Purpose,
'State' : State,
'Debt_to_Income' : Debt_to_Income,
'Delinquent_2yr' : Delinquent_2yr,
'Revolving_Cr_Util' : Revolving_Cr_Util,
'Total_Accounts' : Total_Accounts,
'Longest_Credit_Length' : Longest_Credit_Length}, index=[0])
# evaluate the feature vector using the model
predictions = ChurnPredictor.predict(h2o.H2OFrame(newData))
predictionsOut = h2o.as_list(predictions, use_pandas=False)
prediction = predictionsOut[1][0]
probabilityBad = predictionsOut[1][1]
probabilityGood = predictionsOut[1][2]
return "Prediction: " + str(prediction) + " |Probability of Bad Loan: " + str(probabilityBad) + " |Probability of Good Loan: " + str(probabilityGood)
Loan_Amount = 5000
Term = "60 months"
Interest_Rate=13
Employment_Years=5
Home_Ownership="RENT"
Annual_Income=75000
Verification_Status="VERIFIED - income"
Loan_Purpose="credit_card"
State="CA"
Debt_to_Income="16.12"
Delinquent_2yr="0"
Revolving_Cr_Util=37
Total_Accounts=6
Longest_Credit_Length=97
approve_loan(Loan_Amount,Term,Interest_Rate,Employment_Years,Home_Ownership,Annual_Income,Verification_Status,Loan_Purpose,State,Debt_to_Income,Delinquent_2yr,Revolving_Cr_Util,Total_Accounts,Longest_Credit_Length)
Explanation: Business Impact Matrix
Weighting Predictions With a Dollar Value
- Correctly predicting GOOD: +\$500
- Correctly predicting BAD: +\$800
- Incorrectly predicting GOOD: -\$1000
- Incorrectly predicting BAD: -\$100
End of explanation |
2,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
13 for and while loops
Step1: 13.1 The for loop
Syntax
Step2: 13.4 Updating a variable
Step3: 13.5 A few examples
Compute the sum of the elements of a list
Step4: Write a program that prints the first 20 terms of the multiplication table of 7.
Step5: Write a program that prints a sequence of 12 numbers in which each term is three times the previous one.
Step6: Write a program that prints the following sequence of symbols
Step7: Write a script that copies a string (into a new variable), inserting asterisks between the characters.
So for example, "gaston" should become "g*a*s*t*o*n"
Step8: 13.6 The while loop
The while loop can be useful when you do not know in advance how many iterations are needed.
Syntax
Step9: 13.7 Interrupting a loop with break
Write a program that prints the powers of two smaller than one billion and stops if a power ends with 88.
13.8 Skipping to the next iteration of a loop with continue
Write a program that computes the first 50 terms of the multiplication table of 13, but only prints those that are multiples of 7.
14 if conditions
Syntax
Step10: Write a script that counts the number of occurrences of the character "e" in a string.
Step11: What does the program below do, in the four cases where the variable a was set beforehand to 1, 2, 3 or 15?
if a != 2
Step12: Write a program that goes through all the elements of a list of numbers one by one to build two new lists. One will contain only the even numbers of the initial list, and the other the odd numbers.
Step13: 14.1 Compact syntax for a conditional assignment
15 def functions
Syntax
Step14: Write a function that prints the first 10 multiples of a chosen number.
Write a program that computes the volume of a rectangular box (parallelepiped) given its width, height and depth.
Step15: Write a program that computes the perimeter and the area of an arbitrary triangle whose 3 sides are provided by the user.
(Reminder
Step16: Modify this program slightly so that it adds the numbers that are multiples of 3 or of 5 between the bounds a and b. With the bounds 0 and 32, the result should then be | Python Code:
from IPython.display import Image
Image('../NotesDeCours/images/bart_simpson.jpg')
range(100, 109)
for i in range(100, 109):
print(i, "I will not do this again")
for i in range(10):
print(i)
Explanation: 13 for and while loops
End of explanation
s = 0
Explanation: 13.1 The for loop
Syntax:
for ELEMENT in LIST: # header line ending with ":"
INSTRUCTION #1 # block of instructions indented by 4 spaces
INSTRUCTION #2
INSTRUCTION #3
13.2 An example of a for loop with Sympy
13.3 Assigning a variable
End of explanation
s = s + 1
s
Explanation: 13.4 Updating a variable
End of explanation
L = range(10)
L
sum(L)
s = 0
for i in L:
s = s + i
t = 0
#t = t + 1
t += 1
t
Explanation: 13.5 A few examples
Compute the sum of the elements of a list
End of explanation
for i in range(20):
print(7, 'x', i, '=', 7 * i)
Explanation: Write a program that prints the first 20 terms of the multiplication table of 7.
End of explanation
terme = 1
for _ in range(12):
print(terme)
terme *= 3
Explanation: Write a program that prints a sequence of 12 numbers in which each term is three times the previous one.
End of explanation
a = 'cheval'
a * 10
for i in range(1,8):
print('*'*i)
Explanation: Write a program that prints the following sequence of symbols:
```
*
**
```
End of explanation
'cheval' + 'gaston'
s = 'gaston'
t = ''
for a in s:
t += a + '*'
print(t[:-1])
Explanation: Write a script that copies a string (into a new variable), inserting asterisks between the characters.
So for example, "gaston" should become "g*a*s*t*o*n"
End of explanation
k = 1
while k < 10**9:
print(k)
k = k * 2
Explanation: 13.6 The while loop
The while loop can be useful when you do not know in advance how many iterations are needed.
Syntax:
while CONDITION: # header line ending with ":"
INSTRUCTION #1 # block of instructions indented by 4 spaces
INSTRUCTION #2
INSTRUCTION #3
Write a program that prints the powers of two smaller than one billion.
End of explanation
x = -3
if x > 0:
print('la variable x est positive')
else:
if x == 0:
print('x est zéro')
else:
print('la variable x est négative')
x = -3
if x > 0:
print('la variable x est positive')
elif x == 0:
print('x est zéro')
elif x < 0:
print('la variable x est négative')
x = -3
if x > 0:
print('la variable x est positive')
elif x >= 0:
print('x est zéro')
else:
print('la variable x est négative')
Explanation: 13.7 Interrupting a loop with break
Write a program that prints the powers of two smaller than one billion and stops if a power ends with 88.
13.8 Skipping to the next iteration of a loop with continue
Write a program that computes the first 50 terms of the multiplication table of 13, but only prints those that are multiples of 7.
14 if conditions
Syntax:
if CONDITION_1: # header line ending with ":"
INSTRUCTION #1 # block of instructions indented by 4 spaces
INSTRUCTION #2
INSTRUCTION #3
elif CONDITION_2:
INSTRUCTION #4 # block of instructions indented by 4 spaces
INSTRUCTION #5
INSTRUCTION #6
else:
INSTRUCTION #7 # block of instructions indented by 4 spaces
INSTRUCTION #8
INSTRUCTION #9
Test whether a variable x is positive, negative or zero.
End of explanation
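The two exercises on break and continue are not solved in the cells above; a minimal sketch of each (added here) could look like this:
# break: powers of two below one billion, stopping early if one ends with 88
k = 1
while k < 10**9:
    print(k)
    if k % 100 == 88:
        break
    k = k * 2

# continue: first 50 terms of the table of 13, printing only the multiples of 7
for i in range(50):
    if (13 * i) % 7 != 0:
        continue
    print(13, 'x', i, '=', 13 * i)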
s = 'asldfjhalskjfqwlkjqwteeeeeeeeeas fasdfafds'
n = 0
for lettre in s:
print("je lis la lettre", lettre, '. Nombre de e vu:', n, end=' ')
if lettre == 'e':
n += 1
print("nombre de e vu apres lecture:", n)
print(n)
Explanation: Write a script that counts the number of occurrences of the character "e" in a string.
End of explanation
nombre = 8
for i in range(20):
produit = nombre * i
if produit % 3 == 0:
print(nombre, 'x', i, '=', produit, '*')
else:
print(nombre, 'x', i, '=', produit)
Explanation: What does the program below do, in the four cases where the variable a was set beforehand to 1, 2, 3 or 15?
if a != 2:
print('perdu')
elif a == 3:
print('un instant, s.v.p.')
else:
print('gagné')
Write a program that prints the first 20 terms of the multiplication table of 7, flagging along the way (with an asterisk) those that are multiples of 3.
Example: 7 14 21* 28 35 42* 49...
End of explanation
L = [123, 535, 74764, 14379, 56452546, 2356, 3, 4, 8]
Explanation: Write a program that goes through all the elements of a list of numbers one by one to build two new lists. One will contain only the even numbers of the initial list, and the other the odd numbers.
End of explanation
def table_7():
nombre = 7
for i in range(20):
produit = nombre * i
if produit % 3 == 0:
print(nombre, 'x', i, '=', produit, '*')
else:
print(nombre, 'x', i, '=', produit)
table_7()
table_7()
def table(nombre, stop=20):
for i in range(stop):
produit = nombre * i
if produit % 3 == 0:
print(nombre, 'x', i, '=', produit, '*')
else:
print(nombre, 'x', i, '=', produit)
table(5)
Explanation: 14.1 Compact syntax for a conditional assignment
15 def functions
Syntax:
def FUNCTION(PARAMETERS): # header line ending with ":"
INSTRUCTION #1 # block of instructions indented by 4 spaces
INSTRUCTION #2
INSTRUCTION #3
Write a function that prints the first 10 multiples of 7.
End of explanation
def volume_old(largeur, hauteur, profondeur):
print(largeur*hauteur*profondeur)
def volume(largeur, hauteur, profondeur):
return largeur*hauteur*profondeur
v = volume(2,3,4)
v + 1
Explanation: Write a function that prints the first 10 multiples of a chosen number.
Write a program that computes the volume of a rectangular box (parallelepiped) given its width, height and depth.
End of explanation
def ff_and(a, b):
s = 0
for i in range(a, b):
if i % 3 == 0 and i % 5 == 0 :
s += i
return s
ff_and(0, 32)
Explanation: Write a program that computes the perimeter and the area of an arbitrary triangle whose 3 sides are provided by the user.
(Reminder: the area of an arbitrary triangle can be computed with the formula:
$$
\sqrt{d(d-a)(d-b)(d-c)}
$$
where d is the half-perimeter and a, b, c are the lengths of the three sides.)
Write a program that, given two integer bounds a and b, adds the numbers that are multiples of both 3 and 5 between these bounds. Take for example a = 0, b = 32; the result should then be 0+15+30=45.
End of explanation
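The triangle exercise itself is not implemented above; a minimal sketch (an addition, using Heron's formula from the reminder) might be:
import math

def triangle(a, b, c):
    """return (perimeter, area) of a triangle with sides a, b, c, using Heron's formula."""
    perimeter = a + b + c
    d = perimeter / 2                      # half-perimeter
    area = math.sqrt(d * (d - a) * (d - b) * (d - c))
    return perimeter, area

triangle(3, 4, 5)   # (12, 6.0)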
def ff_or(a, b):
s = 0
for i in range(a, b):
print('i=',i,' s=',s)
if i % 3 == 0 or i % 5 == 0 :
print('youpi pour i = ', i)
s += i
return s
ff_or(0, 32)
Explanation: Modify this program slightly so that it adds the numbers that are multiples of 3 or of 5 between the bounds a and b. With the bounds 0 and 32, the result should then be: 0+3+5+6+9+10+12+15+18+20+21+24+25+27+30=225.
End of explanation |
2,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting a linear model fp1
Step1: Linear Function
Step2: Fitting the model with polynomial degree of 2
Step3: Trying to fit the model with a degree-53 polynomial
Step4: As we can see, between week 3 and week 4 there is a drastic change in the data behaviour, so instead of fitting one line to all the data, we are going to split the data into two parts based on that behaviour
Step5: Plotting the linear model with these 2 datasets | Python Code:
# starting with linear model where degree is 1
# polyfit() - fits the line that, when put into the chart, results in the smallest
# (squared) approximation error
fp1, residuals, rank, sv, rcond = sp.polyfit(X, y, 1, full=True)
fp1
Explanation: Fitting a linear model fp1
End of explanation
print(residuals)
print(rank, sv, rcond)
# fitting these value in a linear model
f1 = sp.poly1d(fp1)
f1
# checking put the error for fp1 model
rssErr(f1, X, y)
# plotting it
# setting the plot
plt.figure(figsize=(8,6), dpi=80)
# plot type and color
plt.scatter(X,y, color='green')
plt.plot(X, f1(X), color='blue', linewidth=3) #plotting the fucntions
# adding legends
plt.legend(["d=%i" % f1.order], loc="upper left")
# plot labels
plt.xlabel("X (week) ")
plt.ylabel("y (traffic in thousands) ")
plt.title("Weekly web traffic data")
# grid and autoscale
plt.grid(True, linestyle='-', color='0.75')
plt.autoscale(True)
# ticks renaming
# plt.xticks(x, labels, rotation='vertical')
v1 = [w*7*24 for w in range(10)]
lbl = ["week %i" % i for i in range(10) ]
plt.xticks(v1, lbl, rotation='vertical')
plt.yticks([i*1000 for i in range(10)], ["%i" % i for i in range(10)])
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
# display plot
plt.show()
Explanation: Linear function:
f(x) = 2.59619213 * x + 989.02487106
End of explanation
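As an illustrative, hypothetical use of the fitted model (not part of the original notebook), the poly1d object can be evaluated at any hour to extrapolate the traffic:
# hypothetical queries: predicted hits at hour 800 and at the end of week 5
print(f1(800))
print(f1(5 * 7 * 24))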
fp2 = sp.polyfit(X, y, 2)
fp2
# fitting the model
f2 = sp.poly1d(fp2)
f2
# checking put the error for fp1 model
rssErr(f2, X, y)
# plotting it
# setting the plot
plt.figure(figsize=(8,6), dpi=80)
# plot type and color
plt.scatter(X,y, color='green')
l1, = plt.plot(X, f1(X), color='blue', linewidth=3) #plotting the fucntions
l2, = plt.plot(X, f2(X), color='red', linewidth=3) #plotting the fucntions
# adding legends
plt.legend([l1, l2], ["d=%i" % f1.order, "d=%i" % f2.order], loc="upper left")
#plt.legend(["d=%i" % f1.order], loc="upper left")
#plt.legend(["d=%i" % f2.order], loc="upper left")
# plot labels
plt.xlabel("X (week) ")
plt.ylabel("y (traffic in thousands) ")
plt.title("Weekly web traffic data")
# grid and autoscale
plt.grid(True, linestyle='-', color='0.75')
plt.autoscale(True)
# ticks renaming
# plt.xticks(x, labels, rotation='vertical')
v1 = [w*7*24 for w in range(10)]
lbl = ["week %i" % i for i in range(10) ]
plt.xticks(v1, lbl, rotation='vertical')
plt.yticks([i*1000 for i in range(10)], ["%i" % i for i in range(10)])
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
# display plot
plt.show()
Explanation: Fitting the model with a polynomial of degree 2
End of explanation
fp53 = sp.polyfit(X, y, 53)
fp53
# fitting the model
f53 = sp.poly1d(fp53)
f53
# checking put the error for fp1 model
rssErr(f53, X, y)
# plotting it
# setting the plot
plt.figure(figsize=(12,9), dpi=80)
# plot type and color
plt.scatter(X,y, color='green')
l1, = plt.plot(X, f1(X), color='blue', linewidth=3) #plotting the fucntions
l53, = plt.plot(X, f53(X), color='red', linewidth=3) #plotting the fucntions
# adding legends
plt.legend([l1, l53],["d=%i" % f1.order, "d=%i" % f53.order], loc="upper left")
# plot labels
plt.xlabel("X (week) ")
plt.ylabel("y (traffic in thousands) ")
plt.title("Weekly web traffic data")
# grid and autoscale
plt.grid(True, linestyle='-', color='0.75')
plt.autoscale(True)
# ticks renaming
# plt.xticks(x, labels, rotation='vertical')
v1 = [w*7*24 for w in range(10)]
lbl = ["week %i" % i for i in range(10) ]
plt.xticks(v1, lbl, rotation='vertical')
plt.yticks([i*1000 for i in range(10)], ["%i" % i for i in range(10)])
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
# display plot
plt.show()
Explanation: Trying to fit the model with a degree-53 polynomial
End of explanation
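A quick, optional comparison (assuming the models f1, f2, f53 and the rssErr helper defined above) of the training error of the fits so far:
for f in [f1, f2, f53]:
    print("degree %i: error %.2f" % (f.order, rssErr(f, X, y)))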
# taking n=3.5
split=int(3.5*7*24)
X1 = X[:split]
X2 = X[split:]
y1 = y[:split]
y2 = y[split:]
Explanation: As we can see, between week 3 and week 4 there is a drastic change in the data's behaviour, so instead of fitting one line to all the data, we are going to split the data into two parts based on that behaviour
End of explanation
fps1 = sp.polyfit(X1, y1, 1)
fs1 = sp.poly1d(fps1)
fps2 = sp.polyfit(X2, y2, 1)
fs2 = sp.poly1d(fps2)
# error in both
print(rssErr(fs1, X1, y1))
print(rssErr(fs2, X2, y2))
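# optional hedged check (not in the original notebook): combined residual error of the
# two-segment model, for comparison with the single-line fits above
print(rssErr(fs1, X1, y1) + rssErr(fs2, X2, y2))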
# plotting the data together with the two piecewise linear models
plt.figure(figsize=(12,9), dpi=120)
# grid style and autoscale properties
plt.grid(True)
plt.autoscale(True)
plt.margins(0.15)
# assigning labels
plt.title("Web traffic data")
plt.xlabel("Week")
plt.ylabel("Hits")
# re-marking ticks
plt.xticks([w*7*24 for w in range(0, 9)], ["week %d" %i for i in range(0, 9)])
# plotting the data
plt.scatter(X, y, color='green')
ls1, = plt.plot(X1, fs1(X1), color='blue', linewidth=3)
plt.plot(X2, fs1(X2), color='blue', linewidth=3, linestyle='--')
ls2, = plt.plot(X2, fs2(X2), color='red', linewidth=3)
plt.plot(X1[split-40:split], fs2(X1[split-40:split]), color='red', linewidth=3, linestyle='--')
# adding legeds
plt.legend([ls1, ls2], ["sub-dataset1 d=%d " %fs1.order, "subdataset2 d=%d" %fs2.order], loc="upper left")
# display plot
plt.show()
Explanation: Plotting the linear model with these 2 datasets
End of explanation |
2,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time to this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except the "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
If you would like to get the most out of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
one_hot = np.zeros((len(x), 10))
for i in range(len(x)):
one_hot[i][x[i]] = 1
return one_hot
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
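Following the hint, a hedged alternative sketch using scikit-learn's LabelBinarizer (assuming scikit-learn is available) instead of filling the array by hand:
from sklearn import preprocessing

label_binarizer = preprocessing.LabelBinarizer()
label_binarizer.fit(list(range(10)))  # the ten CIFAR-10 classes

def one_hot_encode_lb(x):
    # returns the same (len(x), 10) one-hot array as the manual implementation above
    return label_binarizer.transform(x)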
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape = [None, image_shape[0], image_shape[1], image_shape[2]],
name = 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape = [None, n_classes],
name = 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name = 'keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time to this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except the "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
If you would like to get the most out of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
# Input/Image
input = x_tensor
# Weight and bias
weight = tf.Variable(tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
# Apply Convolution
conv_layer = tf.nn.conv2d(input, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
# Add bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
# Apply activation function
conv_layer = tf.nn.relu(conv_layer)
# Apply maxpooling
conv_layer = tf.nn.max_pool(
conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
End of explanation
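An optional, hypothetical sanity check (values assumed, not required by the project) showing how SAME padding with 2x2 strides and 2x2 pooling shrinks a 32x32x3 input:
check_input = tf.placeholder(tf.float32, [None, 32, 32, 3])
check_output = conv2d_maxpool(check_input, 10, (2, 2), (2, 2), (2, 2), (2, 2))
print(check_output.get_shape())  # expected: (?, 8, 8, 10)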
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
# tf.contrib.layers.flatten already infers (Batch Size, Flattened Image Size) from x_tensor,
# so no manual shape computation (or extra argument) is needed
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs]))
mul = tf.matmul(x_tensor, weights, name='mul')
bias = tf.Variable(tf.zeros(num_outputs))
return tf.add(mul, bias)
# y = tf.add(mul, bias)
# fc = tf.contrib.layers.fully_connected(y, num_outputs)
# return fc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note: Activation, softmax, or cross entropy shouldn't be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 10
conv_ksize = [2, 2]
conv_strides = [2, 2]
pool_ksize = [2, 2]
pool_strides = [2, 2]
conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
conv_layer = flatten(conv_layer)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
num_outputs = 10
conv_layer = fully_conn(conv_layer, num_outputs)
# apply dropout through the keep_prob placeholder so the model actually uses it,
# as required by the instructions above
conv_layer = tf.nn.dropout(conv_layer, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
num_classes = 10
conv_layer = output(conv_layer, num_classes)
# TODO: return output
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.2
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
2,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Better performance with the tf.data API
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Throughout this guide, you will iterate across a dataset and measure the performance.
Making reproducible performance benchmarks can be difficult. Different factors affecting reproducibility include
Step3: This dataset is similar to the tf.data.Dataset.range one, adding a fixed delay at the beginning of and in-between each sample.
The training loop
Next, write a dummy training loop that measures how long it takes to iterate over a dataset.
Training time is simulated.
Step4: Optimize performance
To exhibit how performance can be optimized, you will improve the performance of the ArtificialDataset.
The naive approach
Start with a naive pipeline using no tricks, iterating over the dataset as-is.
Step5: Under the hood, this is how your execution time was spent
Step6: Now, as the data execution time plot shows, while the training step is running for sample 0, the input pipeline is reading the data for sample 1, and so on.
Parallelizing data extraction
In a real-world setting, the input data may be stored remotely (for example, on Google Cloud Storage or HDFS).
A dataset pipeline that works well when reading data locally might become bottlenecked on I/O when reading data remotely because of the following differences between local and remote storage
Step7: This data execution time plot makes it possible to exhibit the behavior of the interleave transformation, fetching samples alternately from the two datasets available.
However, no performance improvement is involved here.
Parallel interleave
Now, use the num_parallel_calls argument of the interleave transformation.
This loads multiple datasets in parallel, reducing the time waiting for the files to be opened.
Step8: This time, as the data execution time plot shows, the reading of the two datasets is parallelized, reducing the global data processing time.
Parallelizing data transformation
When preparing data, input elements may need to be pre-processed.
To this end, the tf.data API offers the tf.data.Dataset.map transformation, which applies a user-defined function to each element of the input dataset.
Because input elements are independent of one another, the pre-processing can be parallelized across multiple CPU cores.
To make this possible, similarly to the prefetch and interleave transformations, the map transformation provides the num_parallel_calls argument to specify the level of parallelism.
Choosing the best value for the num_parallel_calls argument depends on your hardware, characteristics of your training data (such as its size and shape), the cost of your map function, and what other processing is happening on the CPU at the same time.
A simple heuristic is to use the number of available CPU cores.
However, as for the prefetch and interleave transformation, the map transformation supports tf.data.AUTOTUNE which will delegate the decision about what level of parallelism to use to the tf.data runtime.
Step9: Sequential mapping
Start by using the map transformation without parallelism as a baseline example.
Step10: As for the naive approach, here, as the plot shows, the times spent for opening, reading, pre-processing (mapping) and training steps sum together for a single iteration.
Parallel mapping
Now, use the same pre-processing function but apply it in parallel on multiple samples.
Step11: As the data plot demonstrates, the pre-processing steps overlap, reducing the overall time for a single iteration.
Caching
The tf.data.Dataset.cache transformation can cache a dataset, either in memory or on local storage.
This will save some operations (like file opening and data reading) from being executed during each epoch.
Step12: Here, the data execution time plot shows that when you cache a dataset, the transformations before the cache one (like the file opening and data reading) are executed only during the first epoch.
The next epochs will reuse the data cached by the cache transformation.
If the user-defined function passed into the map transformation is expensive, apply the cache transformation after the map transformation as long as the resulting dataset can still fit into memory or local storage.
If the user-defined function increases the space required to store the dataset beyond the cache capacity, either apply it after the cache transformation or consider pre-processing your data before your training job to reduce resource usage.
Vectorizing mapping
Invoking a user-defined function passed into the map transformation has overhead related to scheduling and executing the user-defined function.
Vectorize the user-defined function (that is, have it operate over a batch of inputs at once) and apply the batch transformation before the map transformation.
To illustrate this good practice, your artificial dataset is not suitable.
The scheduling delay is around 10 microseconds (10e-6 seconds), far less than the tens of milliseconds used in the ArtificialDataset, and thus its impact is hard to see.
For this example, use the base tf.data.Dataset.range function and simplify the training loop to its simplest form.
Step13: Scalar mapping
Step14: The plot above illustrates what is going on (with less samples) using the scalar mapping method.
It shows that the mapped function is applied for each sample.
While this function is very fast, it has some overhead that impact the time performance.
Vectorized mapping
Step15: This time, the mapped function is called once and applied to a batch of samples.
As the data execution time plot shows, while the function could take more time to execute, the overhead appears only once, improving the overall time performance.
Reducing memory footprint
A number of transformations, including interleave, prefetch, and shuffle, maintain an internal buffer of elements. If the user-defined function passed into the map transformation changes the size of the elements, then the ordering of the map transformation and the transformations that buffer elements affects the memory usage. In general, choose the order that results in lower memory footprint, unless different ordering is desirable for performance.
Caching partial computations
It is recommended to cache the dataset after the map transformation except if this transformation makes the data too big to fit in memory.
A trade-off can be achieved if your mapped function can be split in two parts
Step16: The dataset
Similar to the ArtificialDataset you can build a dataset returning the time spent in each step.
Step17: This dataset provides samples of shape [[2, 1], [2, 2], [2, 3]] and of type [tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32].
Each sample is
Step18: The plotting method
Finally, define a function able to plot a timeline given the values returned by the timelined_benchmark function.
Step19: Use wrappers for mapped function
To run mapped functions in an eager context, you have to wrap them inside a tf.py_function call.
Step20: Pipelines comparison
Step21: Naive
Step22: Optimized | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import time
Explanation: Better performance with the tf.data API
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/data_performance"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data_performance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/data_performance.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/data_performance.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
GPUs and TPUs can radically reduce the time required to execute a single training step.
Achieving peak performance requires an efficient input pipeline that delivers data for the next step before the current step has finished.
The tf.data API helps to build flexible and efficient input pipelines.
This document demonstrates how to use the tf.data API to build highly performant TensorFlow input pipelines.
Before you continue, check the Build TensorFlow input pipelines guide to learn how to use the tf.data API.
Resources
Build TensorFlow input pipelines
tf.data.Dataset API
Analyze tf.data performance with the TF Profiler
Setup
End of explanation
class ArtificialDataset(tf.data.Dataset):
def _generator(num_samples):
# Opening the file
time.sleep(0.03)
for sample_idx in range(num_samples):
# Reading data (line, record) from the file
time.sleep(0.015)
yield (sample_idx,)
def __new__(cls, num_samples=3):
return tf.data.Dataset.from_generator(
cls._generator,
output_signature = tf.TensorSpec(shape = (1,), dtype = tf.int64),
args=(num_samples,)
)
Explanation: Throughout this guide, you will iterate across a dataset and measure the performance.
Making reproducible performance benchmarks can be difficult. Different factors affecting reproducibility include:
The current CPU load
The network traffic
Complex mechanisms, such as cache
To get a reproducible benchmark, you will build an artificial example.
The dataset
Start with defining a class inheriting from tf.data.Dataset called ArtificialDataset.
This dataset:
Generates num_samples samples (default is 3)
Sleeps for some time before the first item to simulate opening a file
Sleeps for some time before producing each item to simulate reading data from a file
End of explanation
def benchmark(dataset, num_epochs=2):
start_time = time.perf_counter()
for epoch_num in range(num_epochs):
for sample in dataset:
# Performing a training step
time.sleep(0.01)
print("Execution time:", time.perf_counter() - start_time)
Explanation: This dataset is similar to the tf.data.Dataset.range one, adding a fixed delay at the beginning of and in-between each sample.
The training loop
Next, write a dummy training loop that measures how long it takes to iterate over a dataset.
Training time is simulated.
End of explanation
benchmark(ArtificialDataset())
Explanation: Optimize performance
To exhibit how performance can be optimized, you will improve the performance of the ArtificialDataset.
The naive approach
Start with a naive pipeline using no tricks, iterating over the dataset as-is.
End of explanation
benchmark(
ArtificialDataset()
.prefetch(tf.data.AUTOTUNE)
)
Explanation: Under the hood, this is how your execution time was spent:
The plot shows that performing a training step involves:
Opening a file if it hasn't been opened yet
Fetching a data entry from the file
Using the data for training
However, in a naive synchronous implementation like here, while your pipeline is fetching the data, your model is sitting idle.
Conversely, while your model is training, the input pipeline is sitting idle.
The training step time is thus the sum of opening, reading and training times.
The next sections build on this input pipeline, illustrating best practices for designing performant TensorFlow input pipelines.
Prefetching
Prefetching overlaps the preprocessing and model execution of a training step.
While the model is executing training step s, the input pipeline is reading the data for step s+1.
Doing so reduces the step time to the maximum (as opposed to the sum) of the training and the time it takes to extract the data.
The tf.data API provides the tf.data.Dataset.prefetch transformation.
It can be used to decouple the time when data is produced from the time when data is consumed.
In particular, the transformation uses a background thread and an internal buffer to prefetch elements from the input dataset ahead of the time they are requested.
The number of elements to prefetch should be equal to (or possibly greater than) the number of batches consumed by a single training step.
You could either manually tune this value, or set it to tf.data.AUTOTUNE, which will prompt the
tf.data runtime to tune the value dynamically at runtime.
Note that the prefetch transformation provides benefits any time there is an opportunity to overlap the work of a "producer" with the work of a "consumer."
End of explanation
benchmark(
tf.data.Dataset.range(2)
.interleave(lambda _: ArtificialDataset())
)
Explanation: Now, as the data execution time plot shows, while the training step is running for sample 0, the input pipeline is reading the data for sample 1, and so on.
Parallelizing data extraction
In a real-world setting, the input data may be stored remotely (for example, on Google Cloud Storage or HDFS).
A dataset pipeline that works well when reading data locally might become bottlenecked on I/O when reading data remotely because of the following differences between local and remote storage:
Time-to-first-byte: Reading the first byte of a file from remote storage can take orders of magnitude longer than from local storage.
Read throughput: While remote storage typically offers large aggregate bandwidth, reading a single file might only be able to utilize a small fraction of this bandwidth.
In addition, once the raw bytes are loaded into memory, it may also be necessary to deserialize and/or decrypt the data (e.g. protobuf), which requires additional computation.
This overhead is present irrespective of whether the data is stored locally or remotely, but can be worse in the remote case if data is not prefetched effectively.
To mitigate the impact of the various data extraction overheads, the tf.data.Dataset.interleave transformation can be used to parallelize the data loading step, interleaving the contents of other datasets (such as data file
readers).
The number of datasets to overlap can be specified by the cycle_length argument, while the level of parallelism can be specified by the num_parallel_calls argument. Similar to the prefetch transformation, the interleave transformation supports tf.data.AUTOTUNE, which will delegate the decision about what level of parallelism to use to the tf.data runtime.
Sequential interleave
The default arguments of the tf.data.Dataset.interleave transformation make it interleave single samples from two datasets sequentially.
End of explanation
benchmark(
tf.data.Dataset.range(2)
.interleave(
lambda _: ArtificialDataset(),
num_parallel_calls=tf.data.AUTOTUNE
)
)
Explanation: This data execution time plot makes it possible to exhibit the behavior of the interleave transformation, fetching samples alternately from the two datasets available.
However, no performance improvement is involved here.
Parallel interleave
Now, use the num_parallel_calls argument of the interleave transformation.
This loads multiple datasets in parallel, reducing the time waiting for the files to be opened.
End of explanation
def mapped_function(s):
# Do some hard pre-processing
tf.py_function(lambda: time.sleep(0.03), [], ())
return s
Explanation: This time, as the data execution time plot shows, the reading of the two datasets is parallelized, reducing the global data processing time.
Parallelizing data transformation
When preparing data, input elements may need to be pre-processed.
To this end, the tf.data API offers the tf.data.Dataset.map transformation, which applies a user-defined function to each element of the input dataset.
Because input elements are independent of one another, the pre-processing can be parallelized across multiple CPU cores.
To make this possible, similarly to the prefetch and interleave transformations, the map transformation provides the num_parallel_calls argument to specify the level of parallelism.
Choosing the best value for the num_parallel_calls argument depends on your hardware, characteristics of your training data (such as its size and shape), the cost of your map function, and what other processing is happening on the CPU at the same time.
A simple heuristic is to use the number of available CPU cores.
However, as for the prefetch and interleave transformation, the map transformation supports tf.data.AUTOTUNE which will delegate the decision about what level of parallelism to use to the tf.data runtime.
End of explanation
benchmark(
ArtificialDataset()
.map(mapped_function)
)
Explanation: Sequential mapping
Start by using the map transformation without parallelism as a baseline example.
End of explanation
benchmark(
ArtificialDataset()
.map(
mapped_function,
num_parallel_calls=tf.data.AUTOTUNE
)
)
Explanation: As for the naive approach, here, as the plot shows, the times spent for opening, reading, pre-processing (mapping) and training steps sum together for a single iteration.
Parallel mapping
Now, use the same pre-processing function but apply it in parallel on multiple samples.
End of explanation
benchmark(
ArtificialDataset()
.map( # Apply time consuming operations before cache
mapped_function
).cache(
),
5
)
Explanation: As the data plot demonstrates, the pre-processing steps overlap, reducing the overall time for a single iteration.
Caching
The tf.data.Dataset.cache transformation can cache a dataset, either in memory or on local storage.
This will save some operations (like file opening and data reading) from being executed during each epoch.
End of explanation
fast_dataset = tf.data.Dataset.range(10000)
def fast_benchmark(dataset, num_epochs=2):
start_time = time.perf_counter()
for _ in tf.data.Dataset.range(num_epochs):
for _ in dataset:
pass
tf.print("Execution time:", time.perf_counter() - start_time)
def increment(x):
return x+1
Explanation: Here, the data execution time plot shows that when you cache a dataset, the transformations before the cache one (like the file opening and data reading) are executed only during the first epoch.
The next epochs will reuse the data cached by the cache transformation.
If the user-defined function passed into the map transformation is expensive, apply the cache transformation after the map transformation as long as the resulting dataset can still fit into memory or local storage.
If the user-defined function increases the space required to store the dataset beyond the cache capacity, either apply it after the cache transformation or consider pre-processing your data before your training job to reduce resource usage.
Vectorizing mapping
Invoking a user-defined function passed into the map transformation has overhead related to scheduling and executing the user-defined function.
Vectorize the user-defined function (that is, have it operate over a batch of inputs at once) and apply the batch transformation before the map transformation.
To illustrate this good practice, your artificial dataset is not suitable.
The scheduling delay is around 10 microseconds (10e-6 seconds), far less than the tens of milliseconds used in the ArtificialDataset, and thus its impact is hard to see.
For this example, use the base tf.data.Dataset.range function and simplify the training loop to its simplest form.
End of explanation
fast_benchmark(
fast_dataset
# Apply function one item at a time
.map(increment)
# Batch
.batch(256)
)
Explanation: Scalar mapping
End of explanation
fast_benchmark(
fast_dataset
.batch(256)
# Apply function on a batch of items
# The tf.Tensor.__add__ method already handle batches
.map(increment)
)
Explanation: The plot above illustrates what is going on (with fewer samples) using the scalar mapping method.
It shows that the mapped function is applied to each sample.
While this function is very fast, it has some overhead that impacts the time performance.
Vectorized mapping
End of explanation
import itertools
from collections import defaultdict
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: This time, the mapped function is called once and applied to a batch of samples.
As the data execution time plot shows, while the function could take more time to execute, the overhead appears only once, improving the overall time performance.
Reducing memory footprint
A number of transformations, including interleave, prefetch, and shuffle, maintain an internal buffer of elements. If the user-defined function passed into the map transformation changes the size of the elements, then the ordering of the map transformation and the transformations that buffer elements affects the memory usage. In general, choose the order that results in lower memory footprint, unless different ordering is desirable for performance.
Caching partial computations
It is recommended to cache the dataset after the map transformation except if this transformation makes the data too big to fit in memory.
A trade-off can be achieved if your mapped function can be split in two parts: a time consuming one and a memory consuming part.
In this case, you can chain your transformations like below:
python
dataset.map(time_consuming_mapping).cache().map(memory_consuming_mapping)
This way, the time consuming part is only executed during the first epoch, and you avoid using too much cache space.
Best practice summary
Here is a summary of the best practices for designing performant TensorFlow
input pipelines:
Use the prefetch transformation to overlap the work of a producer and consumer
Parallelize the data reading transformation using the interleave transformation
Parallelize the map transformation by setting the num_parallel_calls argument
Use the cache transformation to cache data in memory during the first epoch
Vectorize user-defined functions passed in to the map transformation
Reduce memory usage when applying the interleave, prefetch, and shuffle transformations
Reproducing the figures
Note: The rest of this notebook is about how to reproduce the above figures. Feel free to play around with this code, but understanding it is not an essential part of this tutorial.
To go deeper in the tf.data.Dataset API understanding, you can play with your own pipelines.
Below is the code used to plot the images from this guide.
It can be a good starting point, showing some workarounds for common difficulties such as:
Execution time reproducibility
Mapped functions eager execution
interleave transformation callable
End of explanation
class TimeMeasuredDataset(tf.data.Dataset):
# OUTPUT: (steps, timings, counters)
OUTPUT_TYPES = (tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32)
OUTPUT_SHAPES = ((2, 1), (2, 2), (2, 3))
_INSTANCES_COUNTER = itertools.count() # Number of datasets generated
_EPOCHS_COUNTER = defaultdict(itertools.count) # Number of epochs done for each dataset
def _generator(instance_idx, num_samples):
epoch_idx = next(TimeMeasuredDataset._EPOCHS_COUNTER[instance_idx])
# Opening the file
open_enter = time.perf_counter()
time.sleep(0.03)
open_elapsed = time.perf_counter() - open_enter
for sample_idx in range(num_samples):
# Reading data (line, record) from the file
read_enter = time.perf_counter()
time.sleep(0.015)
read_elapsed = time.perf_counter() - read_enter
yield (
[("Open",), ("Read",)],
[(open_enter, open_elapsed), (read_enter, read_elapsed)],
[(instance_idx, epoch_idx, -1), (instance_idx, epoch_idx, sample_idx)]
)
open_enter, open_elapsed = -1., -1. # Negative values will be filtered
def __new__(cls, num_samples=3):
return tf.data.Dataset.from_generator(
cls._generator,
output_types=cls.OUTPUT_TYPES,
output_shapes=cls.OUTPUT_SHAPES,
args=(next(cls._INSTANCES_COUNTER), num_samples)
)
Explanation: The dataset
Similar to the ArtificialDataset you can build a dataset returning the time spent in each step.
End of explanation
def timelined_benchmark(dataset, num_epochs=2):
# Initialize accumulators
steps_acc = tf.zeros([0, 1], dtype=tf.dtypes.string)
times_acc = tf.zeros([0, 2], dtype=tf.dtypes.float32)
values_acc = tf.zeros([0, 3], dtype=tf.dtypes.int32)
start_time = time.perf_counter()
for epoch_num in range(num_epochs):
epoch_enter = time.perf_counter()
for (steps, times, values) in dataset:
# Record dataset preparation informations
steps_acc = tf.concat((steps_acc, steps), axis=0)
times_acc = tf.concat((times_acc, times), axis=0)
values_acc = tf.concat((values_acc, values), axis=0)
# Simulate training time
train_enter = time.perf_counter()
time.sleep(0.01)
train_elapsed = time.perf_counter() - train_enter
# Record training informations
steps_acc = tf.concat((steps_acc, [["Train"]]), axis=0)
times_acc = tf.concat((times_acc, [(train_enter, train_elapsed)]), axis=0)
values_acc = tf.concat((values_acc, [values[-1]]), axis=0)
epoch_elapsed = time.perf_counter() - epoch_enter
# Record epoch informations
steps_acc = tf.concat((steps_acc, [["Epoch"]]), axis=0)
times_acc = tf.concat((times_acc, [(epoch_enter, epoch_elapsed)]), axis=0)
values_acc = tf.concat((values_acc, [[-1, epoch_num, -1]]), axis=0)
time.sleep(0.001)
tf.print("Execution time:", time.perf_counter() - start_time)
return {"steps": steps_acc, "times": times_acc, "values": values_acc}
Explanation: This dataset provides samples of shape [[2, 1], [2, 2], [2, 3]] and of type [tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32].
Each sample is:
(
[("Open"), ("Read")],
[(t0, d), (t0, d)],
[(i, e, -1), (i, e, s)]
)
Where:
Open and Read are steps identifiers
t0 is the timestamp when the corresponding step started
d is the time spent in the corresponding step
i is the instance index
e is the epoch index (number of times the dataset has been iterated)
s is the sample index
The iteration loop
Make the iteration loop a little bit more complicated to aggregate all timings.
This will only work with datasets generating samples as detailed above.
End of explanation
def draw_timeline(timeline, title, width=0.5, annotate=False, save=False):
# Remove invalid entries (negative times, or empty steps) from the timelines
valid_mask = np.logical_and(timeline['times'] > 0, timeline['steps'] != b'')[:,0]
steps = timeline['steps'][valid_mask].numpy()
times = timeline['times'][valid_mask].numpy()
values = timeline['values'][valid_mask].numpy()
# Get a set of different steps, ordered by the first time they are encountered
step_ids, indices = np.stack(np.unique(steps, return_index=True))
step_ids = step_ids[np.argsort(indices)]
# Shift the starting time to 0 and compute the maximal time value
min_time = times[:,0].min()
times[:,0] = (times[:,0] - min_time)
end = max(width, (times[:,0]+times[:,1]).max() + 0.01)
cmap = mpl.cm.get_cmap("plasma")
plt.close()
fig, axs = plt.subplots(len(step_ids), sharex=True, gridspec_kw={'hspace': 0})
fig.suptitle(title)
fig.set_size_inches(17.0, len(step_ids))
plt.xlim(-0.01, end)
for i, step in enumerate(step_ids):
step_name = step.decode()
ax = axs[i]
ax.set_ylabel(step_name)
ax.set_ylim(0, 1)
ax.set_yticks([])
ax.set_xlabel("time (s)")
ax.set_xticklabels([])
ax.grid(which="both", axis="x", color="k", linestyle=":")
# Get timings and annotation for the given step
entries_mask = np.squeeze(steps==step)
serie = np.unique(times[entries_mask], axis=0)
annotations = values[entries_mask]
ax.broken_barh(serie, (0, 1), color=cmap(i / len(step_ids)), linewidth=1, alpha=0.66)
if annotate:
for j, (start, width) in enumerate(serie):
annotation = "\n".join([f"{l}: {v}" for l,v in zip(("i", "e", "s"), annotations[j])])
ax.text(start + 0.001 + (0.001 * (j % 2)), 0.55 - (0.1 * (j % 2)), annotation,
horizontalalignment='left', verticalalignment='center')
if save:
plt.savefig(title.lower().translate(str.maketrans(" ", "_")) + ".svg")
Explanation: The plotting method
Finally, define a function able to plot a timeline given the values returned by the timelined_benchmark function.
End of explanation
def map_decorator(func):
def wrapper(steps, times, values):
# Use a tf.py_function to prevent auto-graph from compiling the method
return tf.py_function(
func,
inp=(steps, times, values),
Tout=(steps.dtype, times.dtype, values.dtype)
)
return wrapper
Explanation: Use wrappers for mapped functions
To run mapped functions in an eager context, you have to wrap them inside a tf.py_function call.
End of explanation
_batch_map_num_items = 50
def dataset_generator_fun(*args):
return TimeMeasuredDataset(num_samples=_batch_map_num_items)
Explanation: Pipelines comparison
End of explanation
@map_decorator
def naive_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.001) # Time consuming step
time.sleep(0.0001) # Memory consuming step
map_elapsed = time.perf_counter() - map_enter
return (
tf.concat((steps, [["Map"]]), axis=0),
tf.concat((times, [[map_enter, map_elapsed]]), axis=0),
tf.concat((values, [values[-1]]), axis=0)
)
naive_timeline = timelined_benchmark(
tf.data.Dataset.range(2)
.flat_map(dataset_generator_fun)
.map(naive_map)
.batch(_batch_map_num_items, drop_remainder=True)
.unbatch(),
5
)
Explanation: Naive
End of explanation
@map_decorator
def time_consuming_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.001 * values.shape[0]) # Time consuming step
map_elapsed = time.perf_counter() - map_enter
return (
tf.concat((steps, tf.tile([[["1st map"]]], [steps.shape[0], 1, 1])), axis=1),
tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),
tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)
)
@map_decorator
def memory_consuming_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.0001 * values.shape[0]) # Memory consuming step
map_elapsed = time.perf_counter() - map_enter
# Use tf.tile to handle batch dimension
return (
tf.concat((steps, tf.tile([[["2nd map"]]], [steps.shape[0], 1, 1])), axis=1),
tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),
tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)
)
optimized_timeline = timelined_benchmark(
tf.data.Dataset.range(2)
.interleave( # Parallelize data reading
dataset_generator_fun,
num_parallel_calls=tf.data.AUTOTUNE
)
.batch( # Vectorize your mapped function
_batch_map_num_items,
drop_remainder=True)
.map( # Parallelize map transformation
time_consuming_map,
num_parallel_calls=tf.data.AUTOTUNE
)
.cache() # Cache data
.map( # Reduce memory usage
memory_consuming_map,
num_parallel_calls=tf.data.AUTOTUNE
)
.prefetch( # Overlap producer and consumer works
tf.data.AUTOTUNE
)
.unbatch(),
5
)
draw_timeline(naive_timeline, "Naive", 15)
draw_timeline(optimized_timeline, "Optimized", 15)
Explanation: Optimized
End of explanation |
2,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cat vs coherent states in a Kerr resonator, and the role of measurement
$\newcommand{\ket}[1]{| #1 \rangle}$
$\newcommand{\bra}[1]{\langle #1 |}$
$\newcommand{\braket}[1]{\langle #1 \rangle}$
$\newcommand{\CC}{\mathcal{C}}$
Author
Step1: The two-photon Kerr Resontator
Let us consider a single nonlinear Kerr resonator subject
to a parametric two-photon driving.
In a frame rotating at the pump frequency, the Hamiltonian reads
\begin{equation}\label{Eq
Step2: This model can be solved exactly for its steady state [2,3].
The corresponding density matrix $\hat{\rho}_{\rm ss}$ is well approximated by the statistical mixture of two orthogonal states
Step3: Correctly, the two states have opposite parity. Indeed, for sufficiently intense pumping ($G> U,\gamma,\eta$ and $|\alpha|\gg1$), it was shown in [2] that $p^+\simeq p^- \simeq 1/2$.
However, in this strong-pumping regime, the steady-state can be recast as
\begin{equation}\label{Eq
Step4: Quantum Trajectories
From a theoretical point of view, the Lindblad master equation describes the out-of-equilibrium dynamics of a system coupled to a Markovian (i.e., memoryless) environment.
Indeed, the density matrix $\hat{\rho}(t)$ solving the Lindblad equation encodes the average evolution of the system when no information is collected about environment state.
However, one can imagine keeping track of the system state by continuously probing the environment.
Doing so, the time evolution of the system would change at each realisation.
However, $\hat{\rho}(t)$ can be retrieved by averaging over an infinite number of such "monitored" realisations.
The Monte Carlo wavefunction method has been developed relying exactly on this idea.
It is based on the stochastic simulation of the system evolution when one continuously gathers information from the environment.
Each simulation of the stochastic evolution of the system gives a single quantum trajectory.
The results obtained by solving the master equation are recovered by averaging over many trajectories.
In order to simulate the quantum trajectories, it is necessary to explicitly model how an observer measures the environment, thus affecting the system evolution itself (a detailed discussion on this subject is given in [5]).
Interestingly, several different measures can be associated with the same master equation.
Depending on the chosen measurement, contrasting results and interpretations can emerge.
Those incompatibilities are, however, harmonized once the mean value over many trajectories is taken.
$\newcommand{\ket}[1]{| #1 \rangle}$
$\newcommand{\bra}[1]{\langle #1 |}$
$\newcommand{\CC}{\mathcal{C}}$
Photon counting
The most natural way to observe the exchanges between the Kerr resonator and the environment is to just detect every leaked photon (both individually and in pairs).
This mechanism is described via the action of the one-photon jump operator $\hat{J}_1=\sqrt{\gamma}\, \hat{a}$ and the two-photon one $\hat{J}_2=\sqrt{\eta}\, \hat{a}^2$, which describe the absorption of one or two photons by an ideal photodetector (details in e.g. [6]).
Indeed, in typical realisations (e.g. [4]) the one- and two-photon dissipation channels are discernible.
Hence, we can assume that the photodetector is capable of distinguishing between one- and two-photon losses.
The photon-counting trajectory is then obtained by using the "mcsolve" function of qutip.
In conclusion, a photon-counting trajectory is characterised by abrupt jumps corresponding to the projective measure associated with the detection of one or two photons.
Step5: As shown in [2], the Hamiltonian $\hat{H}$ and the two-photon dissipation tend to stabilize photonic cat states.
On the other hand, the annihilation operator switches from the even (odd) cat to the odd (even) one
Step6: We see that the mean parity $\braket{\hat{\mathcal{P}}}$
is confined around zero along a single homodyne trajectory, in spite of the
"switching cat" picture.
These fluctuations are due to the diffusive nature of the homodyne trajectory, which rules
the stochastic time evolution of the system wave function under homodyne detection.
The bimodal behaviour, instead, is clear in the time evolution of $\braket{\hat{x}}$ and $\braket{\hat{p}}$.
This appears compatible with the picture given by
$\hat{\rho}_{\rm ss}\simeq
\frac{1}{2}\ket{\alpha}\!\bra{\alpha}
+\frac{1}{2}\ket{-\alpha}\!\bra{-\alpha}$
Step7: References
[1] N. Bartolo, F. Minganti, J. Lolli, and C. Ciuti,
Homodyne versus photon-counting quantum trajectories for dissipative Kerr resonators
with two-photon driving, The European Physical Journal Special Topics 226, 2705 (2017).
[2] F. Minganti, N. Bartolo, J. Lolli, W. Casteels, and C. Ciuti,
Exact results for Schrödinger cats in driven-dissipative systems and their feedback control, Scientific Reports 6, 26987 (2016).
[3] N. Bartolo, F. Minganti, W. Casteels, and C. Ciuti,
Exact steady state of a Kerr resonator with one- and two-photon driving and dissipation | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from IPython.display import display, Math, Latex
Explanation: Cat vs coherent states in a Kerr resonator, and the role of measurement
$\newcommand{\ket}[1]{| #1 \rangle}$
$\newcommand{\bra}[1]{\langle #1 |}$
$\newcommand{\braket}[1]{\langle #1 \rangle}$
$\newcommand{\CC}{\mathcal{C}}$
Author: F. Minganti ([email protected])
In this notebook we show how the same system can produce extremely different results according to the way an observer collects the emitted field of a resonator. This notebook closely follows the results obtained in Refs. [1-3].
End of explanation
font_size=20
label_size=30
title_font=35
a=destroy(20)
U=1
G=4
gamma=1
eta=1
H=U*a.dag()*a.dag()*a*a + G*(a*a + a.dag()*a.dag())
c_ops=[np.sqrt(gamma)*a,np.sqrt(eta)*a*a]
parity=1.j*np.pi*a.dag()*a
parity=parity.expm()
rho_ss=steadystate(H, c_ops)
Explanation: The two-photon Kerr Resonator
Let us consider a single nonlinear Kerr resonator subject
to a parametric two-photon driving.
In a frame rotating at the pump frequency, the Hamiltonian reads
\begin{equation}\label{Eq:Hamiltonian}
\hat{H}
=\frac{U}{2}\,\hat{a}^\dagger\hat{a}^\dagger\hat{a}\hat{a}
+\frac{G}{2}\left(\hat{a}^\dagger\hat{a}^\dagger+\hat{a}\hat{a}\right),
\end{equation}
where $U$ is the Kerr photon-photon interaction strength, $G$ is the two-photon driving amplitude, and $\hat{a}^\dagger$ ($\hat{a}$) is the bosonic creation (annihilation) operator.
The time dynamics of the density matrix $\hat{\rho}$ of this system is given by a Lindblad master equation $i \partial_t \hat{\rho} = \mathcal{L} \hat{\rho}$, where $\mathcal{L}$ is the Liouvillian superoperator.
The superoperator $\mathcal{L}$ is made of a Hamiltonian part and
a non-Hermitian contribution, which describes the dissipation of energy, particles and information into the environment, as detailed
in e.g. [5].
Given the parametric drive, the dissipation processes include one- and two-photon dissipation, and the Lindblad superoperator becomes
\begin{equation}\label{Eq:Lindblad}
\mathcal{L} \hat{\rho} = - i \left[\hat{H},\hat{\rho}\right]
+\frac{\gamma}{2} \left(2\hat{a}\hat{\rho}\hat{a}^\dagger
-\hat{a}^\dagger\hat{a}\hat{\rho}
-\hat{\rho}\hat{a}^\dagger\hat{a}\right)
+ \, \frac{\eta}{2} \left(2\hat{a}\hat{a}\hat{\rho}\hat{a}^\dagger\hat{a}^\dagger
-\hat{a}^\dagger\hat{a}^\dagger\hat{a}\hat{a}\hat{\rho}
-\hat{\rho}\hat{a}^\dagger\hat{a}^\dagger\hat{a}\hat{a}\right),
\end{equation}
where $\gamma$ and $\eta$ are, respectively, the one- and two-photon dissipation rates.
We define the system parameters in the following cells.
End of explanation
vals, vecs = rho_ss.eigenstates(sort='high')
print("The mean number of photon is " + str(expect(a.dag()*a, rho_ss)))
plt.figure(figsize=(8, 6))
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=font_size)
plt.semilogy(range(1,7),vals[0:6], 'rx')
plt.xlabel('Eigenvalue', fontsize=label_size)
plt.ylabel('Probability', fontsize=label_size)
plt.title('Distribution of the eigenvalues',fontsize=title_font)
plt.show()
state_zero=vecs[0].full()
state_one=vecs[1].full()
plt.figure(figsize=(8, 6))
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=font_size)
plt.plot(range(0,20), [abs(i)**2 for i in state_zero[0:20]], 'rx', label='First state')
plt.plot(range(0,20), [abs(i)**2 for i in state_one[0:20]], 'bo', label='Second state')
plt.legend()
plt.xlabel('Eigenvalue', fontsize=label_size)
plt.ylabel('Probability', fontsize=label_size)
plt.show()
Explanation: This model can be solved exactly for its steady state [2,3].
The corresponding density matrix $\hat{\rho}_{\rm ss}$ is well approximated by the statistical mixture of two orthogonal states:
\begin{equation}\label{Eq:MixtureCats}
\hat{\rho}_{\rm ss}\simeq
p^+\,\ket{\CC^+_\alpha}\!\bra{\CC^+_\alpha}
+p^-\,\ket{\CC^-_\alpha}\!\bra{\CC^-_\alpha},
\end{equation}
where $\ket{\CC^\pm_\alpha}\propto\ket{\alpha}\pm\ket{-\alpha}$ are photonic Schrödinger cat states whose complex amplitude $\alpha$ is determined by the system parameters [2-4].
We recall that the coherent state $\ket{\alpha}$ is the eigenstate of the destruction operator: $\hat{a} \ket{\alpha}=\alpha \ket{\alpha}$.
The state $\ket{\CC^+_\alpha}$ is called the even cat, since it can be written as a superposition of solely even Fock states, while $\ket{\CC^-_\alpha}$ is the odd cat.
In the previous equation, the coefficients $p^\pm$ can be interpreted as the probabilities of the system being found in the corresponding cat state.
Below, we demonstrate this feature by diagonalising the steady-state density matrix, and by plotting the photon-number probability for the two most probable states.
End of explanation
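As an additional illustration (not in the original notebook), one can compare the two leading eigenstates with ideal cat states built from coherent states. Here the amplitude is only estimated from the steady-state photon number, so the overlaps are approximate, and which eigenstate is even or odd may be swapped depending on the parameters:
alpha_est = np.sqrt(expect(a.dag()*a, rho_ss))  # rough estimate of the cat amplitude
cat_even = (coherent(20, alpha_est) + coherent(20, -alpha_est)).unit()
cat_odd = (coherent(20, alpha_est) - coherent(20, -alpha_est)).unit()
print("Overlap of first state with even cat: ", abs(cat_even.overlap(vecs[0]))**2)
print("Overlap of second state with odd cat: ", abs(cat_odd.overlap(vecs[1]))**2)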
xvec=np.linspace(-4,4, 500)
W_even=wigner(vecs[0], xvec, xvec, g=2)
W_odd=wigner(vecs[1], xvec, xvec, g=2)
W_ss=wigner(rho_ss, xvec, xvec, g=2)
W_ss=np.around(W_ss, decimals=2)
plt.figure(figsize=(10, 8))
plt.contourf(xvec,xvec, W_ss, cmap='RdBu', levels=np.linspace(-1, 1, 20))
plt.colorbar()
plt.xlabel(r'Re$(\alpha)$', fontsize=label_size)
plt.ylabel(r'Im$(\alpha)$', fontsize=label_size)
plt.title("Steady state", fontsize=title_font)
plt.show()
W_even=np.around(W_even, decimals=2)
plt.figure(figsize=(10, 8))
plt.contourf(xvec,xvec, W_even, cmap='RdBu', levels=np.linspace(-1, 1, 20))
plt.colorbar()
plt.xlabel(r'Re$(\alpha)$', fontsize=label_size)
plt.ylabel(r'Im$(\alpha)$', fontsize=label_size)
plt.title("First state: even cat-like", fontsize=title_font)
plt.show()
W_odd=np.around(W_odd, decimals=2)
plt.figure(figsize=(10, 8))
plt.contourf(xvec,xvec, W_odd, cmap='RdBu', levels=np.linspace(-1, 1, 20))
plt.colorbar()
plt.xlabel(r'Re$(\alpha)$', fontsize=label_size)
plt.ylabel(r'Im$(\alpha)$', fontsize=label_size)
plt.title("Second state: odd cat-like", fontsize=title_font)
plt.show()
Explanation: Correctly, the two states have opposite parity. Indeed, for sufficiently intense pumping ($G> U,\gamma,\eta$ and $|\alpha|\gg1$), it was shown in [2] that $p^+\simeq p^- \simeq 1/2$.
However, in this strong-pumping regime, the steady-state can be recast as
\begin{equation}\label{Eq:MixtureCoherent}
\hat{\rho}_{\rm ss}\simeq
\frac{1}{2}\ket{\alpha}\!\bra{\alpha}
+\frac{1}{2}\ket{-\alpha}\!\bra{-\alpha}.
\end{equation}
Hence, the steady state can be seen as well as a statistical mixture of two coherent states of opposite phase.
Since $\hat{\rho}_{\rm ss}$ is anyhow a mixture of two (quasi-)orthogonal states, the steady state is bimodal.
Such a bimodality can be visualised, for instance, through the Wigner function [2,3].
Now, the pivotal question is: if one monitors the evolution of the system, in which states can it be observed?
The orthogonal cat states, the two coherent states with opposite phases, or none of them in particular?
As we will show in the following, the answer dramatically depends on the type of measurement scheme employed to monitor the trajectory of the system.
End of explanation
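A quick numerical check of the statement above (added for illustration, using the parity operator and eigenvectors already defined in this notebook):
print("Parity of first state: ", expect(parity, vecs[0]))
print("Parity of second state:", expect(parity, vecs[1]))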
tlist=np.linspace(0,20,2000)
sol_mc=mcsolve(H, fock(20,0), tlist, c_ops, [a.dag()*a, (a+a.dag())/2, -1.j*(a-a.dag())/2, parity], ntraj=1)
plt.figure(figsize=(18, 8))
plt.subplot(311)
plt.plot(tlist, sol_mc.expect[0])
plt.ylabel(r'$\langle \hat{a}^\dagger \hat{a} \rangle$', fontsize=label_size)
plt.xlim([0,20])
plt.subplot(312)
plt.plot(tlist, sol_mc.expect[3])
plt.ylabel(r'$\langle \hat{P} \rangle$', fontsize=label_size)
plt.xlim([0,20])
plt.subplot(313)
plt.plot(tlist, sol_mc.expect[1], label=r'$\langle \hat{x} \rangle$')
plt.plot(tlist, sol_mc.expect[2], label=r'$\langle \hat{p} \rangle$')
plt.xlabel(r'$\gamma t$', fontsize=label_size)
plt.xlim([0,20])
plt.ylim([-3,3])
plt.legend()
plt.show()
Explanation: Quantum Trajectories
From a theoretical point of view, the Lindblad master equation describes the out-of-equilibrium dynamics of a system coupled to a Markovian (i.e., memoryless) environment.
Indeed, the density matrix $\hat{\rho}(t)$ solving the Lindblad equation encodes the average evolution of the system when no information is collected about environment state.
However, one can imagine keeping track of the system state by continuously probing the environment.
Doing so, the time evolution of the system would change at each realisation.
However, $\hat{\rho}(t)$ can be retrieved by averaging over an infinite number of such "monitored" realisations.
The Monte Carlo wavefunction method has been developed relying exactly on this idea.
It is based on the stochastic simulation of the system evolution when one continuously gathers information from the environment.
Each simulation of the stochastic evolution of the system gives a single quantum trajectory.
The results obtained by solving the master equation are recovered by averaging over many trajectories.
In order to simulate the quantum trajectories, it is necessary to explicitly model how an observer measures the environment, thus affecting the system evolution itself (a detailed discussion on this subject is given in [5]).
Interestingly, several different measures can be associated with the same master equation.
Depending on the chosen measurement, contrasting results and interpretations can emerge.
Those incompatibilities are, however, harmonized once the mean value over many trajectories is taken.
$\newcommand{\ket}[1]{| #1 \rangle}$
$\newcommand{\bra}[1]{\langle #1 |}$
$\newcommand{\CC}{\mathcal{C}}$
Photon counting
The most natural way to observe the exchanges between the Kerr resonator and the environment is to just detect every leaked photon (both individually and in pairs).
This mechanism is described via the action of the one-photon jump operator $\hat{J}_1=\sqrt{\gamma}\, \hat{a}$ and the two-photon one $\hat{J}_2=\sqrt{\eta}\, \hat{a}^2$, which describe the absorption of one or two photons by an ideal photodetector (details in e.g. [6]).
Indeed, in typical realisations (e.g. [4]) the one- and two-photon dissipation channels are discernible.
Hence, we can assume that the photodetector is capable of distinguishing between one- and two-photon losses.
The photon-counting trajectory is then obtained by using the "mcsolve" function of qutip.
In conclusion, a photon-counting trajectory is characterised by abrupt jumps corresponding to the projective measure associated with the detection of one or two photons.
End of explanation
tlist=np.linspace(0,8000,800)
sol_hom=ssesolve(H, fock(20,0), tlist, c_ops, [a.dag()*a, (a+a.dag())/2, -1.j*(a-a.dag())/2, parity],ntraj=1,nsubsteps=9500, store_measurement=False, method='homodyne')
plt.figure(figsize=(18, 8))
plt.subplot(311)
plt.plot(tlist, sol_hom.expect[0])
plt.ylabel(r'$\langle \hat{a}^\dagger \hat{a} \rangle$', fontsize=label_size)
plt.xlim([0,8000])
plt.subplot(312)
plt.plot(tlist, sol_hom.expect[3])
plt.ylabel(r'$\langle \hat{P} \rangle$', fontsize=label_size)
plt.xlim([0,8000])
plt.subplot(313)
plt.plot(tlist, sol_hom.expect[1], label=r'$\langle \hat{x} \rangle$')
plt.plot(tlist, sol_hom.expect[2], label=r'$\langle \hat{p} \rangle$')
plt.xlabel(r'$\gamma t$', fontsize=label_size)
plt.xlim([0,8000])
plt.ylim([-3,3])
plt.legend()
plt.show()
Explanation: As shown in [2], the Hamiltonian $\hat{H}$ and the two-photon dissipation tend to stabilize photonic cat states.
On the other hand, the annihilation operator switches from the even (odd) cat to the odd (even) one: $\hat{a}\ket{\CC^\pm_\alpha} \propto \alpha \ket{\CC^\mp_\alpha}$.
The operator $\hat{J}_1$ thus induces jumps between the two cat states at a rate proportional to $\gamma \braket{\hat{a}^\dagger \hat{a}}$.
This picture is very well captured in the framework of photon-counting trajectories, an example of which is given in the previous figure.
The cat states are, indeed, orthogonal eigenstates of the parity operator $\hat{\mathcal{P}}=e^{i \pi \hat{a}^\dagger \hat{a}}$ with eigenvalues $\pm1$.
As we can see, along a single trajectory the state intermittently and randomly switches between the two cat states.
We stress that, instead, the mean values of the field quadratures $\hat{x}=\left(\hat{a}^\dagger+\hat{a}\right)/2$ and $\hat{p}=i\left(\hat{a}^\dagger-\hat{a}\right)/2$ are practically zero along the trajectory, as expected for any cat state.
The parity, hence, appears to be the appropriate observable to detect a bimodal behaviour in a photon-counting environment.
Thus, we may interpret
$$\hat{\rho}_{\rm ss}\simeq
p^+\,\ket{\CC^+_\alpha}\!\bra{\CC^+_\alpha}
+p^-\,\ket{\CC^-_\alpha}\!\bra{\CC^-_\alpha}$$
as the steady-state probabilities to find the system in one of the two cat states.
The previous analysis seems to point in the direction of privileging the cat states over the coherent ones as the more truthful picture of the steady state.
Homodyne
Another possible way to monitor a quantum-optical system is through homodyne detection, a widely-used experimental technique which gives access to the field quadratures [5-6].
To implement this kind of measurement, the cavity output field is mixed with the coherent field of a reference laser through a beam splitter (here assumed to have perfect transmittance).
Then, the mixed fields are probed via (perfect) photodetectors, whose measurements are described by new jump operators.
We stress that both the coherent and the cavity fields are measured simultaneously.
In our case, we want to probe independently the two dissipation channels.
To distinguish between one- and two-photon losses, one can exploit a nonlinear element acting on the cavity output field.
Indeed, in experimental realisations such as [4], a nonlinear element is already part of the system and is the key ingredient to realise two-photon processes.
More specifically, one-photon losses are due to the finite quality factor of the resonator.
They can be probed by directly mixing the output field of the cavity with a coherent beam of amplitude $\beta_1$ acting as local oscillator.
Therefore, the homodyne jump operator for one-photon losses can be cast as $\hat{K}_1=\hat{J}_1 +\beta_1 \hat{1}$.
Two-photon losses are, instead, mediated by a nonlinear element (a Josephson junction in [4]), which converts two cavity photons of frequency $\omega_c$ into one photon of frequency $\omega_{nl}$. Hence, the field coming out of the nonlinear element can be probed by a second independent oscillator.
This whole process can be seen as the action of a nonlinear beam splitter which mixes pairs of dissipated photons with a reference oscillator of amplitude $\beta_2$.
Therefore, the homodyne two-photon jump operator takes the form $\hat{K}_2=\hat{J}_2 +\beta_2 \hat{1}$.
Without loss of generality, in the following, we assume the amplitudes $\beta_{1,2}$ to be real [6].
In the ideal limit $\beta_{1,2}\to\infty$, the system evolves diffusively according to a homodyne stochastic Schrödinger equation.
Using the ssesolve function with option "method='homodyne'", one can simulate the trajectory.
End of explanation
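The jump operators described above can be written down directly; the sketch below is added only to make the construction explicit, with hypothetical finite amplitudes beta_1 and beta_2 (the ideal limit sends them to infinity):
beta_1, beta_2 = 5.0, 5.0  # hypothetical local-oscillator amplitudes, for illustration only
K_1 = np.sqrt(gamma) * a + beta_1 * qeye(20)    # homodyne one-photon jump operator
K_2 = np.sqrt(eta) * a * a + beta_2 * qeye(20)  # homodyne two-photon jump operator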
tlist=np.linspace(0,3,100)
sol_mc_mean=mcsolve(H, fock(20,0), tlist, c_ops, [a.dag()*a, (a+a.dag())/2, -1.j*(a-a.dag())/2, parity], ntraj=50)
sol_hom_mean=ssesolve(H, fock(20,0), tlist, c_ops, [a.dag()*a, (a+a.dag())/2, -1.j*(a-a.dag())/2, parity],ntraj=50,nsubsteps=350, store_measurement=False, method='homodyne')
plt.figure(figsize=(18, 8))
plt.subplot(311)
plt.plot(tlist, sol_mc_mean.expect[0], 'r', label='Conunting')
plt.plot(tlist, sol_hom_mean.expect[0], 'b', label='Homodyne')
plt.ylabel(r'$\langle \hat{a}^\dagger \hat{a} \rangle$', fontsize=label_size)
plt.xlim([0,3])
plt.legend()
plt.subplot(312)
plt.plot(tlist, sol_mc_mean.expect[3], 'r')
plt.plot(tlist, sol_hom_mean.expect[3], 'b')
plt.ylabel(r'$\langle \hat{P} \rangle$', fontsize=label_size)
plt.xlim([0,3])
plt.subplot(313)
plt.plot(tlist, sol_mc_mean.expect[2], 'r')
plt.plot(tlist, sol_hom_mean.expect[2], 'b')
plt.ylabel(r'$\langle \hat{p} \rangle$', fontsize=label_size)
plt.xlim([0,3])
plt.ylim([-2,2])
Explanation: We see that the mean parity $\braket{\hat{\mathcal{P}}}$
is confined around zero along a single homodyne trajectory, in spite of the
"switching cat" picture.
These fluctuations are due to the diffusive nature of the homodyne trajectory, which rules
the stochastic time evolution of the system wave function under homodyne detection.
The bimodal behaviour, instead, is clear in the time evolution of $\braket{\hat{x}}$ and $\braket{\hat{p}}$.
This appears compatible with the picture given by
$\hat{\rho}_{\rm ss}\simeq
\frac{1}{2}\ket{\alpha}\!\bra{\alpha}
+\frac{1}{2}\ket{-\alpha}\!\bra{-\alpha}$: at the steady state the system switches between the coherent states $\ket{\pm\alpha}$.
We point out that the phase switches observed for homodyne trajectories have a much smaller rate than parity
switches in photon-counting trajectories. This is a consequence of the metastable nature of the
coherent states $\ket{\pm\alpha}$ [1-4].
Reconciling the two points of view
Summing up, we have shown that the behaviour of the system along a single quantum trajectory dramatically depends on the measurement protocol adopted.
For photon-counting measurements on the environment, the system switches between the parity-defined cat states, while under homodyne detection, the states explored along a single quantum trajectory are the coherent ones.
In other words, one may assign a physical meaning to the probabilities appearing in the mixed-state representation of $\hat{\rho}_{\rm ss}$ only upon specification of the single-trajectory protocol.
However, any possible controversy at the single-trajectory level is washed out by averaging over many of them.
End of explanation
qutip.about()
Explanation: References
[1] N. Bartolo, F. Minganti, J. Lolli, and C. Ciuti,
Homodyne versus photon-counting quantum trajectories for dissipative Kerr resonators
with two-photon driving, The European Physical Journal Special Topics 226, 2705 (2017).
[2] F. Minganti, N. Bartolo, J. Lolli, W. Casteels, and C. Ciuti,
Exact results for Schrödinger cats in driven-dissipative systems and their feedback control, Scientific Reports 6, 26987 (2016).
[3] N. Bartolo, F. Minganti, W. Casteels, and C. Ciuti,
Exact steady state of a Kerr resonator with one- and two-photon driving and dissipation: Controllable Wigner-function multimodality and dissipative phase transitions, Physical Review A 94, 033841 (2016).
[4] Z. Leghtas et al., Confining the state of light to a quantum manifold by
engineered two-photon loss, Science 347, 853 (2015).
[5] S. Haroche and J. M. Raimond, Exploring the Quantum: Atoms, Cavities, and Photons
(Oxford University Press, 2006).
[6] H. Wiseman and G. Milburn, Quantum Measurement and Control (Cambridge University Press, 2010).
End of explanation |
2,282 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Scale Data Using Standard Scaler in Sklearn
| Python Code:
from sklearn.preprocessing import StandardScaler
#Initialise standard scaler
scaler = StandardScaler()
#Fit the scaler using X_train data
scaler.fit(X_train)
#Transform X_train and X_test using the scaler and convert back to DataFrame
X_train = pd.DataFrame(scaler.transform(X_train), columns = X_train.columns)
X_test = pd.DataFrame(scaler.transform(X_test), columns = X_test.columns)
|
2,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to parameter tuning
Hyper-parameters
A machine learning model is a mathematical formula with a number of parameters that are learnt from the data. That is the crux of machine learning
Step1: Basic checks
Check if the columns are the same in train and test.
What else will you check? [Discuss]
Step2: The categorical data should be encoded.
We saw LabelEncoder earlier. Now, we will use one-hot encoding
One-hot encoding
Step3: Exercise
Apply one-hot encoding to the test dataset and store it in X_test_updated
Step4: Let's do cross validation and see what the generalization error is
Cross-validation
Step5: Exercise
Step6: grid-search
The above was for some arbitrarily chosen parameter value.
How do we run the model on various choices of hyper-parameters?
Step7: Exercise
For max_depth include - 6, 10
Add min_samples_split, min_samples_leaf to the grid search
In addition to roc_auc, add precision and recall
Challenges with grid_search
Discuss
Randomized Search | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
#Read the data
df = pd.read_csv("data/historical_loan.csv")
# refine the data
df.years = df.years.fillna(np.mean(df.years))
# Setup the features and target
X = df.iloc[:,1:]
y = df.iloc[:,0]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Explanation: Introduction to parameter tuning
Hyper-parameters
A machine learning model is a mathematical formula with a number of parameters that are learnt from the data. That is the crux of machine learning: fitting a model to the data.
However, there is another kind of parameter that cannot be directly learned from the regular training process. These parameters express “higher-level” properties of the model such as its complexity or how fast it should learn. They are called hyperparameters. Hyperparameters are usually fixed before the actual training process begins.
So, how are hyperparameters decided?
Broadly speaking, this is done by setting different values for those hyperparameters, training different models, and deciding which ones work best by testing them.
So, to summarize, hyperparameters:
Define higher level concepts about the model such as complexity, or capacity to learn.
Cannot be learned directly from the data in the standard model training process and need to be predefined.
Can be decided by setting different values, training different models, and choosing the values that test better
Some examples of hyperparameters:
Number of leaves or depth of a tree
Number of latent factors in a matrix factorization
Learning rate (in many models)
Number of hidden layers in a deep neural network
Number of clusters in a k-means clustering
source: Quora
End of explanation
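As a small self-contained illustration (added; the toy data and estimator below are not part of this dataset), hyperparameters are fixed when the estimator is created, while the model parameters are learnt by fit():
from sklearn.tree import DecisionTreeClassifier
toy_X = [[0], [1], [2], [3]]
toy_y = [0, 0, 1, 1]
toy_model = DecisionTreeClassifier(max_depth=2)  # max_depth is a hyperparameter, chosen up front
toy_model.fit(toy_X, toy_y)                      # split thresholds are parameters learnt from the data
print(toy_model.get_depth())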
X_train.columns
X_test.columns
print(X_train.shape, X_test.shape)
print("train")
print(X_train.dtypes)
print()
print("test")
print(X_test.dtypes)
Explanation: Basic checks
Check if the columns are the same in train and test.
What else will you check? [Discuss]
End of explanation
X_train_updated = pd.get_dummies(X_train)
X_train.shape
X_train_updated.shape
#print the first record
X_train_updated.iloc[0]
Explanation: The categorical data should be encoded.
We saw LabelEncoder earlier. Now, we will use one-hot encoding
One-hot encoding
End of explanation
#Code here
X_test_updated = pd.get_dummies(X_test)
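# Added note (optional safety step, not in the original): when train and test are encoded
# separately, pd.get_dummies can produce different columns if a category is missing in one
# of them. Reindexing the test frame to the training columns guards against that mismatch.
X_test_updated = X_test_updated.reindex(columns=X_train_updated.columns, fill_value=0)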
print(X_test.shape, X_test_updated.shape)
#print the first record
X_test_updated.iloc[0]
print(X_train_updated.shape, y_train.shape)
#Let's build random forest model
from sklearn.ensemble import RandomForestClassifier
model_rf = RandomForestClassifier(n_estimators=100,
criterion="gini",
max_depth=5,
min_samples_split=2,
min_samples_leaf= 1,
oob_score=True,
n_jobs=-1
)
model_rf.fit(X_train_updated, y_train)
model_rf.oob_score_
Explanation: Exercise
Apply one-hot encoding to the test dataset and store it in X_test_updated
End of explanation
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve, auc
model_rf = RandomForestClassifier(n_estimators=100,
criterion="gini",
max_depth=5,
min_samples_split=2,
min_samples_leaf= 1,
oob_score=True,
n_jobs=-1
)
%%time
#Or use %%timeit -n1 -r1 to time the cell
cross_val_score_rf = cross_val_score(model_rf,
X_train_updated,
y_train, scoring="roc_auc",
cv=5,
n_jobs=-1
)
cross_val_score_rf
Explanation: Let's do cross validation and see what the generalization error is
Cross-validation
End of explanation
#What is the average cross validation score?
np.mean(cross_val_score_rf)
Explanation: Exercise
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
%%timeit -n1 -r1
# Set the parameters by cross-validation
tuned_parameters = [{'n_estimators': [50,100],
'max_depth': [3, 4, 5, 6]
}]
scores = ['roc_auc']
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print()
clf = GridSearchCV(RandomForestClassifier(n_jobs=-1),
tuned_parameters, cv=5,
scoring='%s' % score)
clf
clf.fit(X_train_updated, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test_updated)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_true, y_pred)
roc_auc = auc(false_positive_rate, true_positive_rate)
print("AUC:", roc_auc)
print(classification_report(y_true, y_pred))
print()
Explanation: grid-search
The above was for some arbitrarily chosen parameter value.
How do we run the model on various choices of hyper-parameters?
End of explanation
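A short usage note (added): once a search has finished, the best configuration and the refit model are available on the fitted search object, for example:
print(clf.best_params_, clf.best_score_)   # best hyper-parameters and their mean cross-validated score
best_model = clf.best_estimator_           # estimator refit on the full training data
holdout_predictions = best_model.predict(X_test_updated)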
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
%%timeit -n1 -r1
# Set the parameters by cross-validation
tuned_parameters = { "n_estimators": [50,100],
"max_depth": [3, 4, 6, None],
"max_features": sp_randint(1, 11),
"min_samples_split": sp_randint(2, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]
}
scores = ['roc_auc']
n_iter_search = 20
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print()
clf = RandomizedSearchCV(RandomForestClassifier(n_jobs=-1),
param_distributions = tuned_parameters,
n_iter = n_iter_search,
n_jobs=-1,
cv=5,
scoring='%s' % score)
clf.fit(X_train_updated, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test_updated)
#false_positive_rate, true_positive_rate, thresholds = roc_curve(y_true, y_pred)
#roc_auc = auc(false_positive_rate, true_positive_rate)
#print("AUC:", roc_auc)
#print(classification_report(y_true, y_pred))
#print()
Explanation: Exercise
For max_depth include - 6, 10
Add min_samples_split, min_samples_leaf to the grid search
In addition to roc_auc, add precision and recall
Challenges with grid_search
Discuss
Randomized Search
End of explanation |
2,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 잘라내기 종합 가이드
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 모델 정의하기
전체 모델 잘라내기(순차 및 함수형)
모델 정확성의 향상을 위한 팁
Step3: 일부 레이어 잘라내기(순차 및 함수형)
모델을 잘라내면 정확성에 부정적인 영향을 미칠 수 있습니다. 모델의 레이어를 선택적으로 잘라내어 정확성, 속도 및 모델 크기 간의 균형을 탐색할 수 있습니다.
모델 정확성의 향상을 위한 팁
Step4: 이 예에서는 레이어 유형을 사용하여 잘라낼 레이어를 결정했지만, 특정 레이어를 잘라내는 가장 쉬운 방법은 name 속성을 설정하고 clone_function에서 해당 내용을 찾는 것입니다.
Step5: 읽기 더 쉽지만 잠재적으로 모델 정확성이 낮음
잘라내기를 사용한 미세 조정과 호환되지 않으므로 미세 조정을 지원하는 위의 예보다 정확성이 떨어질 수 있습니다.
초기 모델을 정의하는 동안 prune_low_magnitude를 적용할 수 있지만, 이후에 가중치를 로드하면 아래 예에서 동작하지 않습니다.
함수형 예
Step6: 순차 예
Step7: 사용자 정의 Keras 레이어를 잘라내거나 잘라낼 레이어의 일부를 수정합니다.
일반적인 실수
Step8: 모델 훈련하기
Model.fit
훈련 중에 tfmot.sparsity.keras.UpdatePruningStep 콜백을 호출합니다.
훈련 디버깅에 tfmot.sparsity.keras.PruningSummaries 콜백을 사용합니다.
Step9: Colab이 아닌 사용자의 경우, TensorBoard.dev에서 이 코드 블록의 이전 실행의 결과를 볼 수 있습니다.
사용자 정의 훈련 루프
훈련 중에 tfmot.sparsity.keras.UpdatePruningStep 콜백을 호출합니다.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
Step10: Colab이 아닌 사용자의 경우, TensorBoard.dev에서 이 코드 블록의 이전 실행의 결과를 볼 수 있습니다.
잘라낸 모델의 정확성 향상하기
먼저, tfmot.sparsity.keras.prune_low_magnitude API 문서를 보고 잘라내기 일정이 무엇인지, 그리고 각 잘라내기 일정 유형의 수학을 이해합니다.
팁
Step11: 위의 코드가 일반적으로 적용됩니다. 아래 코드는 HDF5 모델 형식(HDF5 가중치 및 기타 형식이 아님)에만 필요합니다.
Step12: 잘라낸 모델 배포하기
크기 압축으로 모델 내보내기
일반적인 실수
Step13: 하드웨어별 최적화
여러 백엔드에서 잘라내기를 사용하여 지연 시간을 개선하면, 블록 희소성을 사용하여 특정 하드웨어의 지연 시간을 개선할 수 있습니다.
블록 크기를 늘리면 대상 모델의 정확성에 대해 달성할 수 있는 최대 희소성이 감소합니다. 그럼에도 불구하고, 지연 시간은 여전히 개선될 수 있습니다.
블록 희소성에 지원되는 항목에 대한 자세한 내용은 tfmot.sparsity.keras.prune_low_magnitude API 문서를 참조하세요. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
%load_ext tensorboard
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model = setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def get_gzipped_model_size(model):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
Explanation: Pruning comprehensive guide
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Welcome to the comprehensive guide for Keras weight pruning.
This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the API docs.
To see the benefits of pruning and what's supported, see the overview.
For a single end-to-end example, see the pruning example.
The following use cases are covered:
Define and train a pruned model.
Sequential and Functional
Keras model.fit and custom training loops
Checkpoint and deserialize a pruned model.
Deploy a pruned model and see the compression benefits.
For configuration of the pruning algorithm, refer to the tfmot.sparsity.keras.prune_low_magnitude API docs.
Setup
For finding the APIs you need and for understanding purposes, you can run but skip reading this section.
End of explanation
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended.
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
model_for_pruning.summary()
Explanation: Define model
Prune whole model (Sequential and Functional)
Tips for better model accuracy:
Try "Prune some layers" to skip pruning the layers that reduce accuracy the most.
It's generally better to fine-tune with pruning than to train from scratch.
To make the whole model train with pruning, apply tfmot.sparsity.keras.prune_low_magnitude to the model.
End of explanation
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `prune_low_magnitude` to make only the
# Dense layers train with pruning.
def apply_pruning_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.sparsity.keras.prune_low_magnitude(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense`
# to the layers of the model.
model_for_pruning = tf.keras.models.clone_model(
base_model,
clone_function=apply_pruning_to_dense,
)
model_for_pruning.summary()
Explanation: Prune some layers (Sequential and Functional)
Pruning a model can have a negative effect on accuracy. You can selectively prune layers of a model to explore the trade-off between accuracy, speed, and model size.
Tips for better model accuracy:
It's generally better to fine-tune with pruning than to train from scratch.
Try pruning the later layers instead of the first layers.
Avoid pruning critical layers (e.g. attention mechanism).
More:
The tfmot.sparsity.keras.prune_low_magnitude API docs provide details on how to vary the pruning configuration per layer.
In the example below, prune only the Dense layers.
End of explanation
print(base_model.layers[0].name)
Explanation: While this example used the type of the layer to decide what to prune, the easiest way to prune a particular layer is to set its name property and look for that name in the clone_function.
End of explanation
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
i = tf.keras.Input(shape=(20,))
x = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
model_for_pruning = tf.keras.Model(inputs=i, outputs=o)
model_for_pruning.summary()
Explanation: More readable but potentially lower model accuracy
This is not compatible with fine-tuning with pruning, which is why it may be less accurate than the examples above, which support fine-tuning.
While prune_low_magnitude can be applied while defining the initial model, loading the weights afterwards does not work in the examples below.
Functional example
End of explanation
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
Explanation: Sequential example
End of explanation
class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):
def get_prunable_weights(self):
# Prune bias also, though that usually harms model accuracy too much.
return [self.kernel, self.bias]
# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
Explanation: Prune a custom Keras layer or modify parts of a layer to prune
Common mistake: pruning the bias usually harms model accuracy too much.
tfmot.sparsity.keras.PrunableLayer serves two use cases:
Prune a custom Keras layer.
Modify parts of a built-in Keras layer to prune.
For example, the API defaults to only pruning the kernel of the Dense layer. The example below prunes the bias also.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
log_dir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# Log sparsity and other metrics in Tensorboard.
tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)
]
model_for_pruning.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model_for_pruning.fit(
x_train,
y_train,
callbacks=callbacks,
epochs=2,
)
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
Explanation: Train model
Model.fit
Call the tfmot.sparsity.keras.UpdatePruningStep callback during training.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Boilerplate
loss = tf.keras.losses.categorical_crossentropy
optimizer = tf.keras.optimizers.Adam()
log_dir = tempfile.mkdtemp()
unused_arg = -1
epochs = 2
batches = 1 # example is hardcoded so that the number of batches cannot change.
# Non-boilerplate.
model_for_pruning.optimizer = optimizer
step_callback = tfmot.sparsity.keras.UpdatePruningStep()
step_callback.set_model(model_for_pruning)
log_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.
log_callback.set_model(model_for_pruning)
step_callback.on_train_begin() # run pruning callback
for _ in range(epochs):
log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback
for _ in range(batches):
step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback
with tf.GradientTape() as tape:
logits = model_for_pruning(x_train, training=True)
loss_value = loss(y_train, logits)
grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)
optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))
step_callback.on_epoch_end(batch=unused_arg) # run pruning callback
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
Explanation: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Custom training loop
Call the tfmot.sparsity.keras.UpdatePruningStep callback during training.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
_, keras_model_file = tempfile.mkstemp('.h5')
# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).
model_for_pruning.save(keras_model_file, include_optimizer=True)
Explanation: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Improve pruned model accuracy
First, look at the tfmot.sparsity.keras.prune_low_magnitude API docs to understand what a pruning schedule is and the math of each type of pruning schedule.
Tips:
Have a learning rate that is neither too high nor too low while the model is pruning. Consider the pruning schedule to be a hyperparameter.
As a quick test, try experimenting with pruning a model to the final sparsity at the beginning of training by setting begin_step to 0 with a tfmot.sparsity.keras.ConstantSparsity schedule. You might get lucky with good results.
Do not prune very frequently, to give the model time to recover. The pruning schedule provides a decent default frequency.
For general ideas to improve model accuracy, look for tips for your use case(s) under "Define model".
Checkpoint and deserialize
You must preserve the optimizer step during checkpointing. This means that while you can use Keras HDF5 models for checkpointing, you cannot use Keras HDF5 weights.
End of explanation
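A minimal sketch of the quick test mentioned in the tips above (added; the 0.5 target sparsity is an arbitrary value chosen for illustration):
pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0)
}
model_pruned_from_start = tfmot.sparsity.keras.prune_low_magnitude(setup_model(), **pruning_params)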
# Deserialize model.
with tfmot.sparsity.keras.prune_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
Explanation: The above applies generally. The code below is only needed for the HDF5 model format (not HDF5 weights and other formats).
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Typically you train the model here.
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
print("final model")
model_for_export.summary()
print("\n")
print("Size of gzipped pruned model without stripping: %.2f bytes" % (get_gzipped_model_size(model_for_pruning)))
print("Size of gzipped pruned model with stripping: %.2f bytes" % (get_gzipped_model_size(model_for_export)))
Explanation: Deploy pruned model
Export model with size compression
Common mistake: both strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of pruning.
End of explanation
base_model = setup_model()
# For using intrinsics on a CPU with 128-bit registers, together with 8-bit
# quantized weights, a 1x16 block size is nice because the block perfectly
# fits into the register.
pruning_params = {'block_size': [1, 16]}
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)
model_for_pruning.summary()
Explanation: Hardware-specific optimizations
Once different backends enable pruning to improve latency, using block sparsity can improve latency for certain hardware.
Increasing the block size will decrease the peak sparsity that is achievable for a target model accuracy. Despite this, latency can still improve.
For details on what's supported for block sparsity, see the tfmot.sparsity.keras.prune_low_magnitude API docs.
End of explanation |
2,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a Simple TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Setup Environment
Install Dependencies
Step2: Import Dependencies
Step3: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
Step4: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph
Step5: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing) It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows
Step6: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note
Step7: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. In each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete
Step8: 3. Plot Metrics
1. Loss (or Mean Squared Error)
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form
Step9: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs
Step10: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot and are sometimes even higher than the training loss.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers
Step11: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier
Step12: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle
Step13: 2. Train the Model
We'll now train and save the new model.
Step14: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ)
Step15: Great results! From these graphs, we can see several exciting things
Step16: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend against overfitting.
However, an important part of machine learning is knowing when to stop. This model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of a model is called quantization. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
In the following cell, we'll convert the model twice
Step17: 2. Compare Model Performance
To prove these models are accurate even after conversion and quantization, we'll compare their predictions and loss on our test dataset.
Helper functions
We define the predict (for predictions) and evaluate (for loss) functions for TFLite models. Note
Step18: 1. Predictions
Step19: 2. Loss (MSE/Mean Squared Error)
Step20: 3. Size
Step21: Summary
We can see from the predictions (graph) and loss (table) that the original TF model, the TFLite model, and the quantized TFLite model are all close enough to be indistinguishable - even though they differ in size (table). This implies that the quantized (smallest) model is ready to use!
Note
Step22: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model | Python Code:
# Define paths to model files
import os
MODELS_DIR = 'models/'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model'
MODEL_NO_QUANT_TFLITE = MODELS_DIR + 'model_no_quant.tflite'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
Explanation: Train a Simple TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Configure Defaults
End of explanation
! pip install tensorflow==2.4.0
Explanation: Setup Environment
Install Dependencies
End of explanation
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Keras is TensorFlow's high-level API for deep learning
from tensorflow import keras
# Numpy is a math library
import numpy as np
# Pandas is a data manipulation library
import pandas as pd
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# Math is Python's math library
import math
# Set seed for experiment reproducibility
seed = 1
np.random.seed(seed)
tf.random.set_seed(seed)
Explanation: Import Dependencies
End of explanation
# Number of sample datapoints
SAMPLES = 1000
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(
low=0, high=2*math.pi, size=SAMPLES).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
End of explanation
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
Explanation: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows:
1. Training: 60%
2. Validation: 20%
3. Testing: 20%
The following code will split our data and then plots each set as a different color:
End of explanation
# We'll use Keras to create a simple model architecture
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 8 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(keras.layers.Dense(8, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model_1.compile(optimizer='adam', loss='mse', metrics=['mae'])
Explanation: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
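As a quick, optional sanity check (not part of the original notebook), Keras can report the resulting layer shapes and parameter count of the model defined here:

```python
# The first Dense layer has 1*8 weights + 8 biases = 16 parameters, and the
# output layer has 8 weights + 1 bias = 9, so 25 trainable parameters in total.
model_1.summary()
```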
End of explanation
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
Explanation: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. In each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
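(As an illustrative aside, not in the original notebook, the number of update steps per epoch follows directly from these two settings:)

```python
# With 600 training samples and a batch size of 64, each epoch runs
# ceil(600 / 64) = 10 update steps -- the "10/10" shown in Keras' progress bars.
print(math.ceil(x_train.size / 64))
```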
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: 3. Plot Metrics
1. Loss (or Mean Squared Error)
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
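As an aside (a hedged sketch, not used in this notebook), Keras can automate this stopping rule with an EarlyStopping callback; the patience value below is purely illustrative:

```python
# Stop once validation loss has not improved for 20 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
                                              restore_best_weights=True)
# model_1.fit(..., callbacks=[early_stop])  # would be passed to fit() to enable it
```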
To make the flatter part of the graph more readable, let's skip the first 50 epochs:
End of explanation
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
Explanation: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot and are sometimes even higher than the training loss.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
# Calculate and print the loss on our test dataset
test_loss, test_mae = model_1.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model_1.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predictions')
plt.legend()
plt.show()
Explanation: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier:
End of explanation
model = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model.add(keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second and third layer will help the network learn more complex representations
model.add(keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model.compile(optimizer='adam', loss="mse", metrics=["mae"])
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle:
End of explanation
# Train the model
history = model.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
# Save the model to disk
model.save(MODEL_TF)
Explanation: 2. Train the Model
We'll now train and save the new model.
End of explanation
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(train_loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history.history['mae']
val_mae = history.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
Explanation: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 500/500
10/10 [==============================] - 0s 10ms/step - loss: 0.0121 - mae: 0.0882 - val_loss: 0.0115 - val_mae: 0.0865
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.01, and validation MAE has dropped from 0.33 to 0.08.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
# Calculate and print the loss on our test dataset
test_loss, test_mae = model.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predicted')
plt.legend()
plt.show()
Explanation: Great results! From these graphs, we can see several exciting things:
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_TF)
model_no_quant_tflite = converter.convert()
# Save the model to disk
open(MODEL_NO_QUANT_TFLITE, "wb").write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
for i in range(500):
yield([x_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open(MODEL_TFLITE, "wb").write(model_tflite)
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend against overfitting.
However, an important part of machine learning is knowing when to stop. This model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of a model is called quantization. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
In the following cell, we'll convert the model twice: once with quantization, once without.
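As an optional check (illustrative, not part of the original notebook), the TFLite interpreter can confirm that the quantized model's input and output tensors really are int8:

```python
interpreter = tf.lite.Interpreter(model_content=model_tflite)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])   # expected: <class 'numpy.int8'>
print(interpreter.get_output_details()[0]['dtype'])  # expected: <class 'numpy.int8'>
```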
End of explanation
def predict_tflite(tflite_model, x_test):
# Prepare the test data
x_test_ = x_test.copy()
x_test_ = x_test_.reshape((x_test.size, 1))
x_test_ = x_test_.astype(np.float32)
# Initialize the TFLite interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# If required, quantize the input layer (from float to integer)
input_scale, input_zero_point = input_details["quantization"]
if (input_scale, input_zero_point) != (0.0, 0):
x_test_ = x_test_ / input_scale + input_zero_point
x_test_ = x_test_.astype(input_details["dtype"])
# Invoke the interpreter
y_pred = np.empty(x_test_.size, dtype=output_details["dtype"])
for i in range(len(x_test_)):
interpreter.set_tensor(input_details["index"], [x_test_[i]])
interpreter.invoke()
y_pred[i] = interpreter.get_tensor(output_details["index"])[0]
# If required, dequantize the output layer (from integer to float)
output_scale, output_zero_point = output_details["quantization"]
if (output_scale, output_zero_point) != (0.0, 0):
y_pred = y_pred.astype(np.float32)
y_pred = (y_pred - output_zero_point) * output_scale
return y_pred
def evaluate_tflite(tflite_model, x_test, y_true):
global model
y_pred = predict_tflite(tflite_model, x_test)
loss_function = tf.keras.losses.get(model.loss)
loss = loss_function(y_true, y_pred).numpy()
return loss
Explanation: 2. Compare Model Performance
To prove these models are accurate even after conversion and quantization, we'll compare their predictions and loss on our test dataset.
Helper functions
We define the predict (for predictions) and evaluate (for loss) functions for TFLite models. Note: These are already included in a TF model, but not in a TFLite model.
End of explanation
# Calculate predictions
y_test_pred_tf = model.predict(x_test)
y_test_pred_no_quant_tflite = predict_tflite(model_no_quant_tflite, x_test)
y_test_pred_tflite = predict_tflite(model_tflite, x_test)
# Compare predictions
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual values')
plt.plot(x_test, y_test_pred_tf, 'ro', label='TF predictions')
plt.plot(x_test, y_test_pred_no_quant_tflite, 'bx', label='TFLite predictions')
plt.plot(x_test, y_test_pred_tflite, 'gx', label='TFLite quantized predictions')
plt.legend()
plt.show()
Explanation: 1. Predictions
End of explanation
# Calculate loss
loss_tf, _ = model.evaluate(x_test, y_test, verbose=0)
loss_no_quant_tflite = evaluate_tflite(model_no_quant_tflite, x_test, y_test)
loss_tflite = evaluate_tflite(model_tflite, x_test, y_test)
# Compare loss
df = pd.DataFrame.from_records(
[["TensorFlow", loss_tf],
["TensorFlow Lite", loss_no_quant_tflite],
["TensorFlow Lite Quantized", loss_tflite]],
columns = ["Model", "Loss/MSE"], index="Model").round(4)
df
Explanation: 2. Loss (MSE/Mean Squared Error)
End of explanation
# Calculate size
size_tf = os.path.getsize(MODEL_TF)
size_no_quant_tflite = os.path.getsize(MODEL_NO_QUANT_TFLITE)
size_tflite = os.path.getsize(MODEL_TFLITE)
# Compare size
pd.DataFrame.from_records(
[["TensorFlow", f"{size_tf} bytes", ""],
["TensorFlow Lite", f"{size_no_quant_tflite} bytes ", f"(reduced by {size_tf - size_no_quant_tflite} bytes)"],
["TensorFlow Lite Quantized", f"{size_tflite} bytes", f"(reduced by {size_no_quant_tflite - size_tflite} bytes)"]],
columns = ["Model", "Size", ""], index="Model")
Explanation: 3. Size
End of explanation
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file, i.e, a TensorFlow Lite for Microcontrollers model
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
Explanation: Summary
We can see from the predictions (graph) and loss (table) that the original TF model, the TFLite model, and the quantized TFLite model are all close enough to be indistinguishable - even though they differ in size (table). This implies that the quantized (smallest) model is ready to use!
Note: The quantized (integer) TFLite model is just 300 bytes smaller than the original (float) TFLite model - a tiny reduction in size! This is because the model is already so small that quantization has little effect. Complex models with more weights can have up to a 4x reduction in size!
Generate a TensorFlow Lite for Microcontrollers Model
Convert the TensorFlow Lite quantized model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
End of explanation
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
Explanation: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the hello_world/train/models directory to access the models generated in this notebook.
New Model: If you have generated a new model, then update the values assigned to the variables defined in hello_world/model.cc with values displayed after running the following cell.
End of explanation |
2,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='http
Step1: First algorithm
Step2: Note 1
Step3: Representing Cities and Distance
Now for the notion of distance. We define total_distance(tour) as the sum of the distances between consecutive cities in the tour; that part is shown below and is easy (with one Python-specific trick
Step4: Distance between cities
Before we can define distance(A, B), the distance between two cities, we have to make a choice. In the fully general version of the TSP problem, the distance between two cities could be anything
Step5: A cool thing is to be able to plot a tour
Step6: We are ready to test our algorithm
Step7: Improving the algorithm
Step8: Results of the improvement
Step9: It takes a few seconds on my machine to solve this problem. In general, the function exact_non_redundant_TSP() looks at $(n-1)!$ tours for an $n$-city problem, and each tour has $n$ cities, so the time for $n$ cities should be roughly proportional to $n!$. This means that the time grows rapidly with the number of cities; we'd need longer than the age of the Universe to run exact_non_redundant_TSP() on just 24 cities
Step10: (In Python, as in the formal mathematical theory of computability, lambda is the symbol for function, so "lambda x
Step11: greedy_TSP() can handle bigger problems
Step12: But... don't be greedy!
A greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
For many problems greedy algorithms fail to produce the optimal solution, and may even produce the unique worst possible solution. One example is the traveling salesman problem mentioned above
Step13: Elements to take into account solving problems with genetic algorithms
Individual representation (binary, floating-point, etc.);
evaluation and fitness assignment;
selection, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree at which individuals in the population will take part in the generation of new (offspring) individuals.
variation, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.
stopping criterion, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing.
Hence a 'general' evolutionary algorithm can be described as
```
def evolutionary_algorithm()
Step14: The toolbox stores the setup of the algorithm. It describes the different elements to take into account.
Step15: Individual representation and evaluation
Individuals represent possible solutions to the problem.
In the TSP case, it looks like the tour itself can be a suitable representation.
For simplicity, an individual can be a list with the indexes corresponding to each city.
This will simplify the crossover and mutation operators.
We can rely on the total_distance() function for evaluation and set the fitness assignment as to minimize it.
Step16: Let's now define that our individuals are composed of indexes that refer to elements of cities and, correspondingly, the population is composed of individuals.
Step17: Defining the crossover and mutation operators can be a challenging task.
There are various <a href='http
Step18: Evaluation can be easily defined from the total_distance() definition.
Step19: We will employ tournament selection with size 3.
Step20: Let's run the algorithm with a population of 100 individuals and 400 generations.
Step21: We can now review the results
The best individual of the last population
Step22: It is interesting to assess how the fitness of the population changed as the evolution process took place.
We can prepare a deap.tools.Statistics instance to specify what data to collect.
Step23: We are all set now, but let's run the genetic algorithm again, configured to collect the statistics that we want to gather
Step24: Plotting mean and minimum fitness as evolution took place.
Step25: How has the population evolved?
Ok, but how did the population evolve? As TSP solutions are easy to visualize, we can plot the individuals of each population as the evolution progressed. We need a new Statistics instance prepared for that.
Step26: Note
Step27: Plotting the individuals and their fitness (color-coded)
Step28: We can now plot the population as the evolutionary process progressed. Darker blue colors imply better fitness.
Step29: Comparison with greedy_TSP()
Step30: The genetic algorithm outperformed the greedy approach at a viable computational cost.
Note 1
Step31: The next step takes some time to execute. Use the video controls to see the evolution in animated form.
Step32: Embedding the previous animation in the online notebook makes it really big. I have removed the result of the previous cell and created a .gif version of the animation for online viewing.
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
import random, operator
import time
import itertools
import numpy
import math
%matplotlib inline
random.seed(time.time()) # planting a random seed
Explanation: <img src='http://www.puc-rio.br/sobrepuc/admin/vrd/brasao/download/ass_vertpb_reduz4.jpg' align='left'/>
Demonstration Class 03
Using genetic algorithms to solve the traveling salesperson problem
Luis Martí, LIRA/DEE/PUC-Rio
http://lmarti.com; [email protected]
Advanced Evolutionary Computation: Theory and Practice
The notebook is better viewed rendered as slides. You can convert it to slides and view them by:
- using nbconvert with a command like:
bash
$ ipython nbconvert --to slides --post serve <this-notebook-name.ipynb>
- installing Reveal.js - Jupyter/IPython Slideshow Extension
- using the online IPython notebook slide viewer (some slides of the notebook might not be properly rendered).
This and other related IPython notebooks can be found at the course github repository:
* https://github.com/lmarti/evolutionary-computation-course
Traveling Salesperson Problem (TSP):
Given a set of cities, and the distances between each pair of cities, find a tour of the cities with the minimum total distance. A tour means you start at one city, visit every other city exactly once, and then return to the starting city.
This notebook relies on Peter Norvig's IPython notebook on the traveling salesperson problem.
I will be showing how to apply evolutionary algorithms to solve the TSP.
This is a well-known [intractable](http://en.wikipedia.org/wiki/Intractability_%28complexity%29) problem, meaning that there are no efficient solutions that work for a large number of cities.
We can create an inefficient algorithm that works fine for a small number of cities (about a dozen).
We can also find a nearly-shortest tour over thousands of cities.
Actually, the fact there is no efficient algorithm is liberating:
This means that we can use a very simple, inefficient algorithm and not feel too bad about it.
The vocabulary of the problem:
City: For the purpose of this exercise, a city is "atomic" in the sense that we don't have to know anything about the components or attributes of a city, just how far it is from other cities.
Cities: We will need to represent a set of cities; Python's set datatype might be appropriate for that.
Distance: We will need the distance between two cities. If A and B are cities, this could be done with a function, distance(A, B), or with a dict, distance[A][B] or distance[A, B], or with an array if A and B are integer indexes. The resulting distance will be a real number (which Python calls a float).
Tour: A tour is an ordered list of cities; Python's list or tuple datatypes would work.
Total distance: The sum of the distances of adjacent cities in the tour. We will probably have a function, total_distance(tour).
We are doing this demonstration as an IPython notebook. Therefore, we need to perform some initialization.
End of explanation
def exact_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(alltours(cities))
def shortest(tours):
"Return the tour with the minimum total distance."
return min(tours, key=total_distance)
Explanation: First algorithm: find the tour with shortest total distance fro all possible tours
Generate all the possible tours of the cities, and choose the shortest one (the tour with the minimum total distance).
We can implement this as the Python function exact_TSP (TSP is the standard abbreviation for Traveling Salesperson Problem, and "exact" means that it finds the shortest tour, exactly, not just an approximation to the shortest tour). Here's the design philosophy we will use:
Write Python code that closely mirrors the English description of the algorithm. This will probably require
some auxiliary functions and data structures; just assume we will be able to define them as well, using the same design philosophy.
End of explanation
alltours = itertools.permutations # The permutation function is already defined in the itertools module
cities = {1, 2, 3}
list(alltours(cities))
Explanation: Note 1: We have not yet defined the function total_distance, nor alltours.
Note 2: In Python min(collection,key=function) means to find the element x that is a member of collection such that function(x) is minimized. So shortest finds the tour whose total_distance in the minimal among the tours. So our Python code implements (and closely mimics) our English description of the algorithm. Now we need to define what a tour is, and how to measure total distance.
Representing Tours
A tour starts in one city, and then visits each of the other cities in order, before finally returning to the start.
A natural representation of the set of available cities is a Python set, and a natural representation of a tour is a sequence that is a permutation of the set.
The tuple (1, 2, 3), for example, represents a tour that starts in city 1, moves to 2, then 3, and then returns to 1 to finish the tour.
End of explanation
def total_distance(tour):
"The total distance between each pair of consecutive cities in the tour."
return sum(distance(tour[i], tour[i-1])
for i in range(len(tour)))
Explanation: Representing Cities and Distance
Now for the notion of distance. We define total_distance(tour) as the sum of the distances between consecutive cities in the tour; that part is shown below and is easy (with one Python-specific trick: when i is 0, then distance(tour[0], tour[-1]) gives us the wrap-around distance between the first and last cities, because tour[-1] is the last element of tour).
End of explanation
City = complex # Constructor for new cities, e.g. City(300, 400)
def distance(A, B):
"The Euclidean distance between two cities."
return abs(A - B)
A = City(300, 0)
B = City(0, 400)
distance(A, B)
def generate_cities(n):
"Make a set of n cities, each with random coordinates."
return set(City(random.randrange(10, 890),
random.randrange(10, 590))
for c in range(n))
cities8, cities10, cities100, cities1000 = generate_cities(8), generate_cities(10), generate_cities(100), generate_cities(1000)
cities8
Explanation: Distance between cities
Before we can define distance(A, B), the distance between two cities, we have to make a choice. In the fully general version of the TSP problem, the distance between two cities could be anything: it could be the amount of time it takes to travel between cities, the number of dollars it costs, or anything else.
How will we represent a two-dimensional point? Here are some choices, with their pros and cons:
Tuple: A point (or city) is a two-tuple of (x, y) coordinates, for example, (300, 0).
Pro: Very simple, easy to break a point down into components. Reasonably efficient.
Con: doesn't distinguish points from other two-tuples. If p is a point, can't do p.x or p.y.
class: Define City as a custom class with x and y fields.
Pro: explicit, gives us p.x accessors.
Con: less efficient because of the overhead of creating user-defined objects.
Distance between cities (contd)
complex: Python already has the two-dimensional point as a built-in numeric data type, but in a non-obvious way: as complex numbers, which inhabit the two-dimensional (real × complex) plane. We can make this use more explicit by defining "City = complex", meaning that we can construct the representation of a city using the same constructor that makes complex numbers.
Pro: most efficient, because it uses a builtin type that is already a pair of numbers. The distance between two points is simple: the absolute value of their difference.
Con: it may seem confusing to bring complex numbers into play; can't say p.x.
subclass: Define "class Point(complex): pass", meaning that points are a subclass of complex numbers.
Pro: All the pros of using complex directly, with the added protection of making it more explicit that these are treated as points, not as complex numbers.
Con: less efficient than using complex directly; still can't do p.x or p.y.
subclass with properties: Define "class Point(complex): x, y = property(lambda p: p.real), property(lambda p: p.imag)".
Pro: All the pros of previous approach, and we can finally say p.x.
Con: less efficient than using complex directly.
From possible alternatives Peter chose to go with complex numbers:
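For instance (a small illustrative check, not in the original notebook), three cities forming a 3-4-5 right triangle give a tour of length 12, and the coordinates can still be read back through the complex number's parts:

```python
triangle = [City(0, 0), City(3, 0), City(0, 4)]
print(total_distance(triangle))            # 4 + 3 + 5 = 12.0
print(triangle[2].real, triangle[2].imag)  # -> 0.0 4.0 (no p.x / p.y, but .real / .imag work)
```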
End of explanation
def plot_tour(tour, alpha=1, color=None):
# Plot the tour as blue lines between blue circles, and the starting city as a red square.
plotline(list(tour) + [tour[0]], alpha=alpha, color=color)
plotline([tour[0]], 'rs', alpha=alpha)
# plt.show()
def plotline(points, style='bo-', alpha=1, color=None):
"Plot a list of points (complex numbers) in the 2-D plane."
X, Y = XY(points)
if color:
plt.plot(X, Y, style, alpha=alpha, color=color)
else:
plt.plot(X, Y, style, alpha=alpha)
def XY(points):
"Given a list of points, return two lists: X coordinates, and Y coordinates."
return [p.real for p in points], [p.imag for p in points]
Explanation: A cool thing is to be able to plot a tour
End of explanation
tour = exact_TSP(cities8)
plot_tour(tour)
Explanation: We are ready to test our algorithm
End of explanation
def all_non_redundant_tours(cities):
"Return a list of tours, each a permutation of cities, but each one starting with the same city."
start = first(cities)
return [[start] + list(tour)
for tour in itertools.permutations(cities - {start})]
def first(collection):
"Start iterating over collection, and return the first element."
for x in collection: return x
def exact_non_redundant_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(all_non_redundant_tours(cities))
all_non_redundant_tours({1, 2, 3})
Explanation: Improving the algorithm: Try All Non-Redundant Tours
The permutation (1, 2, 3) represents the tour that goes from 1 to 2 to 3 and back to 1. You may have noticed that there aren't really six different tours of three cities: the cities 1, 2, and 3 form a triangle; any tour must connect the three points of the triangle; and there are really only two ways to do this: clockwise or counterclockwise. In general, with $n$ cities, there are $n!$ (that is, $n$ factorial) permutations, but only $(n-1)!$, tours that are distinct: the tours 123, 231, and 312 are three ways of representing the same tour.
So we can make our TSP program $n$ times faster by never considering redundant tours. Arbitrarily, we will say that all tours must start with the "first" city in the set of cities. We don't have to change the definition of TSP—just by making alltours return only nonredundant tours, the whole program gets faster.
(While we're at it, we'll make tours be represented as lists, rather than the tuples that are returned by permutations. It doesn't matter now, but later on we will want to represent partial tours, to which we will want to append cities one by one; that can only be done to lists, not tuples.)
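A quick illustrative count (using the functions defined above) makes the $n!$ versus $(n-1)!$ difference concrete:

```python
sample = set(range(6))                        # 6 "cities"; any hashable labels work here
print(len(list(alltours(sample))),            # 6! = 720 redundant tours
      len(all_non_redundant_tours(sample)))   # 5! = 120 non-redundant tours
```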
End of explanation
%timeit exact_TSP(cities8)
%timeit exact_non_redundant_TSP(cities8)
%timeit exact_non_redundant_TSP(cities10)
Explanation: Results of the improvement
End of explanation
def greedy_TSP(cities):
"At each step, visit the nearest neighbor that is still unvisited."
start = first(cities)
tour = [start]
unvisited = cities - {start}
while unvisited:
C = nearest_neighbor(tour[-1], unvisited)
tour.append(C)
unvisited.remove(C)
return tour
def nearest_neighbor(A, cities):
"Find the city in cities that is nearest to city A."
return min(cities, key=lambda x: distance(x, A))
Explanation: It takes a few seconds on my machine to solve this problem. In general, the function exact_non_redundant_TSP() looks at $(n-1)!$ tours for an $n$-city problem, and each tour has $n$ cities, so the time for $n$ cities should be roughly proportional to $n!$. This means that the time grows rapidly with the number of cities; we'd need longer than the age of the Universe to run exact_non_redundant_TSP() on just 24 cities:
<table>
<tr><th>n cities<th>time
<tr><td>10<td>3 secs
<tr><td>12<td>3 secs × 12 × 11 = 6.6 mins
<tr><td>14<td>6.6 mins × 13 × 14 = 20 hours
<tr><td>24<td>3 secs × 24! / 10! = <a href="https://www.google.com/search?q=3+seconds+*+24!+%2F+10!+in+years">16 billion years</a>
</table>
There must be a better way... or at least we need to look for it until quantum computing comes around.
Approximate (Heuristic) Algorithms
The general, exact Traveling Salesperson Problem is intractable;
there is no efficient algorithm to find the tour with minimum total distance.
But if we restrict ourselves to Euclidean distance and if we are willing to settle for a tour that is reasonably short but not the shortest, then the news is much better.
We will consider several approximate algorithms, which find tours that are usually within 10 or 20% of the shortest possible and can handle thousands of cities in a few seconds.
Greedy Nearest Neighbor (greedy_TSP)
Here is our first approximate algorithm:
Start at any city; at each step extend the tour by moving from the previous city to its nearest neighbor that has not yet been visited.
This is called a greedy algorithm, because it greedily takes what looks best in the short term (the nearest neighbor) even when that won't always be the best in the long term.
To implement the algorithm I need to represent all the noun phrases in the English description:
* start: a city which is arbitrarily the first city;
* the tour: a list of cities, initialy just the start city);
* previous city: the last element of tour, that is, tour[-1]);
* nearest neighbor: a function that, when given a city, A, and a list of other cities, finds the one with minimal distance from A); and
* not yet visited: we will keep a set of unvisited cities; initially all cities but the start city are unvisited).
Once these are initialized, we repeatedly find the nearest unvisited neighbor, C, and add it to the tour and remove it from unvisited.
End of explanation
cities = generate_cities(9)
%timeit exact_non_redundant_TSP(cities)
plot_tour(exact_non_redundant_TSP(cities))
%timeit greedy_TSP(cities)
plot_tour(greedy_TSP(cities))
Explanation: (In Python, as in the formal mathematical theory of computability, lambda is the symbol for function, so "lambda x: distance(x, A)" means the function of x that computes the distance from x to the city A. The name lambda comes from the Greek letter λ.)
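(For readers less familiar with lambda, an equivalent formulation with an explicit def — purely illustrative — would be:)

```python
def nearest_neighbor_verbose(A, cities):
    "Same as nearest_neighbor, but with the key function written out as a def."
    def distance_to_A(x):
        return distance(x, A)
    return min(cities, key=distance_to_A)
```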
We can compare the fast approximate greedy_TSP algorithm to the slow exact_TSP algorithm on a small map, as shown below. (If you have this page in an IPython notebook you can repeatedly run the cell, and see how the algorithms compare. generate_cities(9) will return a different set of cities each time. I ran it 20 times, and only once did the greedy algorithm find the optimal solution, but half the time it was within 10% of optimal, and it was never more than 25% worse than optimal.)
End of explanation
%timeit greedy_TSP(cities100)
plot_tour(greedy_TSP(cities100))
%timeit greedy_TSP(cities1000)
plot_tour(greedy_TSP(cities1000))
Explanation: greedy_TSP() can handle bigger problems
End of explanation
from deap import algorithms, base, creator, tools
Explanation: But... don't be greedy!
A greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
For many problems greedy algorithms fail to produce the optimal solution, and may even produce the unique worst possible solution. One example is the traveling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest neighbor heuristic produces the unique worst possible tour.
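A quick, illustrative comparison on a small random instance (using the functions defined above) shows the gap; greedy tours are usually somewhat longer than the exact optimum, and occasionally much worse:

```python
sample = generate_cities(8)
print('exact :', total_distance(exact_non_redundant_TSP(sample)))
print('greedy:', total_distance(greedy_TSP(sample)))
```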
A thought on computational complexity
<img src='http://imgs.xkcd.com/comics/travelling_salesman_problem.png' align='center' width='65%'/>
from XKCD
Check out Peter Norvig's IPython notebook on the traveling salesperson problem on more alternatives for the TSP.
Nature-inspired metaheuristics
We have seen in class some examples of nature-inspired metaheuristics.
They are an option in which we dedicate a little more computational effort in order to produce better solutions than greedy_TSP().
We will be using the DEAP library to code this tackle this problem using a genetic algorithm.
<img src='https://raw.githubusercontent.com/DEAP/deap/master/doc/_static/deap_long.png' width='29%' align='center'/>
End of explanation
num_cities = 30
cities = generate_cities(num_cities)
Explanation: Elements to take into account solving problems with genetic algorithms
Individual representation (binary, floating-point, etc.);
evaluation and fitness assignment;
selection, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree at which individuals in the population will take part in the generation of new (offspring) individuals.
variation, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.
stopping criterion, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing.
Hence a 'general' evolutionary algorithm can be described as
```
def evolutionary_algorithm():
'Pseudocode of an evolutionary algorithm'
populations = [] # a list with all the populations
populations[0] = initialize_population(pop_size)
t = 0
while not stop_criterion(populations[t]):
fitnesses = evaluate(populations[t])
offspring = mating_and_variation(populations[t],
fitnesses)
populations[t+1] = environmental_selection(
populations[t],
offspring)
t = t+1
```
Some preliminaries for the experiment
We will carry out our tests with a 30-city problem.
End of explanation
toolbox = base.Toolbox()
Explanation: The toolbox stores the setup of the algorithm. It describes the different elements to take into account.
End of explanation
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)
Explanation: Individual representation and evaluation
Individuals represent possible solutions to the problem.
In the TSP case, it looks like the tour itself can be a suitable representation.
For simplicity, an individual can be a list with the indexes corresponding to each city.
This will simplify the crossover and mutation operators.
We can rely on the total_distance() function for evaluation and set the fitness assignment as to minimize it.
End of explanation
toolbox.register("indices", numpy.random.permutation, len(cities))
toolbox.register("individual", tools.initIterate, creator.Individual,
toolbox.indices)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
Explanation: Let's now define that our individuals are composed of indexes that refer to elements of cities and, correspondingly, the population is composed of individuals.
End of explanation
toolbox.register("mate", tools.cxOrdered)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
Explanation: Defining the crossover and mutation operators can be a challenging task.
There are various <a href='http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#Crossover_for_Ordered_Chromosomes'>crossover operators</a> that have been devised to deal with ordered individuals like ours.
We will be using DEAP's deap.tools.cxOrdered() crossover.
For mutation we will swap elements from two points of the individual.
This is performed by deap.tools.mutShuffleIndexes().
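To make their behavior concrete, here is a small illustrative sketch (not part of the original notebook) of what these two operators do to permutation-encoded individuals:

```python
parent1 = creator.Individual(range(8))
parent2 = creator.Individual(reversed(range(8)))
child1, child2 = toolbox.clone(parent1), toolbox.clone(parent2)
tools.cxOrdered(child1, child2)             # ordered crossover keeps children valid permutations
print(list(child1), list(child2))
mutant = toolbox.clone(parent1)
tools.mutShuffleIndexes(mutant, indpb=0.3)  # randomly swaps a few positions
print(list(mutant))
```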
End of explanation
def create_tour(individual):
return [list(cities)[e] for e in individual]
def evaluation(individual):
'''Evaluates an individual by converting it into
a list of cities and passing that list to total_distance'''
return (total_distance(create_tour(individual)),)
toolbox.register("evaluate", evaluation)
Explanation: Evaluation can be easily defined from the total_distance() definition.
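As a small sanity check (illustrative; the printed distance depends on the randomly generated cities), the fitness of a random tour is simply its total length, returned as a one-element tuple as DEAP expects:
```
random_individual = creator.Individual(numpy.random.permutation(num_cities))
print(evaluation(random_individual))
```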
End of explanation
toolbox.register("select", tools.selTournament, tournsize=3)
Explanation: We will employ tournament selection with size 3.
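Conceptually, tournament selection repeatedly picks tournsize random individuals and keeps the fittest of each group. A hand-rolled sketch follows (this is not the DEAP implementation and ignores DEAP's fitness/weights machinery; here a lower distance is better):
```
import random

def tournament_selection(population, distances, k, tournsize=3):
    selected = []
    for _ in range(k):
        contenders = random.sample(range(len(population)), tournsize)
        winner = min(contenders, key=lambda i: distances[i])  # minimize tour length
        selected.append(population[winner])
    return selected
```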
End of explanation
pop = toolbox.population(n=100)
%%time
result, log = algorithms.eaSimple(pop, toolbox,
cxpb=0.8, mutpb=0.2,
ngen=400, verbose=False)
Explanation: Let's run the algorithm with a population of 100 individuals and 400 generations.
End of explanation
best_individual = tools.selBest(result, k=1)[0]
print('Fitness of the best individual: ', evaluation(best_individual)[0])
plot_tour(create_tour(best_individual))
Explanation: We can now review the results
The best individual of the last population:
End of explanation
fit_stats = tools.Statistics(key=operator.attrgetter("fitness.values"))
fit_stats.register('mean', numpy.mean)
fit_stats.register('min', numpy.min)
Explanation: It is interesting to assess how the fitness of the population changed as the evolution process took place.
We can prepare a deap.tools.Statistics instance to specify what data to collect.
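Other aggregates can be registered on the same object in exactly the same way; for example (illustrative additions, not required below):
```
fit_stats.register('std', numpy.std)
fit_stats.register('max', numpy.max)
```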
End of explanation
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=fit_stats)
Explanation: We are all set now, but let's run the genetic algorithm again, this time configured to collect the statistics that we want to gather:
End of explanation
plt.figure(1, figsize=(11, 4), dpi=500)
plots = plt.plot(log.select('min'),'c-', log.select('mean'), 'b-', antialiased=True)
plt.legend(plots, ('Minimum fitness', 'Mean fitness'))
plt.ylabel('Fitness')
plt.xlabel('Iterations')
Explanation: Plotting mean and minimum fitness as evolution took place.
End of explanation
pop_stats = tools.Statistics(key=numpy.copy)
pop_stats.register('pop', numpy.copy) # -- copies the populations themselves
pop_stats.register('fitness', # -- computes and stores the fitnesses
lambda x : [evaluation(a) for a in x])
Explanation: How has the population evolved?
OK, but how did the population evolve? As TSP solutions are easy to visualize, we can plot the individuals of each population as the evolution progressed. We need a new Statistics instance prepared for that.
End of explanation
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=pop_stats)
Explanation: Note: I am aware that this could be done in a more efficient way.
End of explanation
def plot_population(record, min_fitness, max_fitness):
'''
Plots all individuals in a population.
Darker individuals have a better fitness.
'''
pop = record['pop']
fits = record['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for i in range(len(index)):
color = sm.to_rgba(max_fitness - fits[index[i]][0])
plot_tour(create_tour(pop[index[i]]), alpha=0.5, color=color)
min_fitness = numpy.min(log.select('fitness'))
max_fitness = numpy.max(log.select('fitness'))
Explanation: Plotting the individuals and their fitness (color-coded)
End of explanation
plt.figure(1, figsize=(11,11), dpi=500)
for i in range(0, 12):
plt.subplot(4,3,i+1)
it = int(math.ceil((len(log)-1.)/15))
plt.title('t='+str(it*i))
plot_population(log[it*i], min_fitness, max_fitness)
Explanation: We can now plot the population as the evolutionary process progressed. Darker blue colors imply better fitness.
End of explanation
%timeit total_distance(greedy_TSP(cities))
print('greedy_TSP() distance: ', total_distance(greedy_TSP(cities)))
print('Genetic algorithm best distance: ', evaluation(best_individual)[0])
Explanation: Comparison with greedy_TSP()
End of explanation
from JSAnimation import IPython_display
from matplotlib import animation
def update_plot_tour(plot, points, alpha=1, color='blue'):
'A function for updating a plot with an individual'
X, Y = XY(list(points) + [points[0]])
plot.set_data(X, Y)
plot.set_color(color)
return plot
def init():
'Initialization of all plots to empty data'
for p in list(tour_plots):
p.set_data([], [])
return tour_plots
def animate(i):
'Updates all plots to match frame _i_ of the animation'
pop = log[i]['pop']
fits = log[i]['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for j in range(len(tour_plots)):
color = sm.to_rgba(max_fitness - fits[index[j]][0])
update_plot_tour(tour_plots[j],
create_tour(pop[index[j]]),
alpha=0.5, color=color)
return tour_plots
Explanation: The genetic algorithm outperformed the greedy approach at a viable computational cost.
Note 1: Viable depends on the particular problem, of course.
Note 2: These results depend on the cities that were randomly generated. Your mileage may vary.
Homework
We have just performed one run of the experiment, but genetic algorithms are stochastic and their performance should be assessed in statistical terms. Modify the genetic algorithm code in order to be able to report the comparison with greedy_TSP() in statistically sound terms (a possible starting point is sketched after this list).
Population size should have an impact on the performance of the algorithm. Design an experiment to study that.
What is the influence of the mutation and crossover probabilities in the performance of the genetic algorithm?
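A possible starting point for the first exercise is sketched below. It is only a minimal sketch: it reuses the toolbox, evaluation() and greedy_TSP() defined above, fixes the number of repetitions arbitrarily at 30, and leaves the choice of a proper statistical test to you.
```
import numpy

def repeated_runs(n_runs=30):
    'Run the GA n_runs times and summarize the best tour lengths.'
    distances = []
    for _ in range(n_runs):
        pop = toolbox.population(n=100)
        result, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.8, mutpb=0.2,
                                        ngen=400, verbose=False)
        best = tools.selBest(result, k=1)[0]
        distances.append(evaluation(best)[0])
    return numpy.mean(distances), numpy.std(distances)

# mean_d, std_d = repeated_runs()
# print(mean_d, std_d, total_distance(greedy_TSP(cities)))
```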
Extra credit
The population of the previous experiment can be better appreciated in animated form. We are going to use matplotlib.animation and the JSAnimation library (you need to install it if you plan to run this notebook locally). Note that this functionality also needs an HTML5-capable browser.
Part of this code has also been inspired by A Simple Animation: The Magic Triangle.
End of explanation
fig = plt.figure()
ax = plt.axes(xlim=(0, 900), ylim=(0, 600))
tour_plots = [ax.plot([], [], 'bo-', alpha=0.1) for i in range(len(log[0]['pop']))]
tour_plots = [p[0] for p in tour_plots]
animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
Explanation: The next step takes some time to execute. Use the video controls to see the evolution in animated form.
End of explanation
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
anim.save('tsp-populations.gif', writer='imagemagick')
Explanation: Embedding the previous animation in the online notebook makes it really big. I have removed the result of the previous cell and created a .gif version of the animation for online viewing.
End of explanation |
2,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize the Architecture of a Neural Network
Step1: The function $\texttt{generateNN}(\texttt{Topology})$ takes a
network topology Topology as its argument and draws a graph of the
resulting fully connected feed-forward neural net. A network topology is a list of numbers specifying the number of neurons of each layer.
For example, the network topology [3, 8, 6, 2] specifies a neural network with three layers of neurons.
The network has $3$ input nodes, the first hidden layer has $8$ neurons, the second hidden layer has $6$ neurons, and
the output layer has $2$ neurons. | Python Code:
import graphviz as gv
Explanation: Visualize the Architecture of a Neural Network
End of explanation
def generateNN(Topology):
L = len(Topology)
input_layer = ['i' + str(i) for i in range(1, Topology[0]+1)]
hidden_layers = [['h' + str(k+1) + ',' + str(i) for i in range(1, s+1)]
for (k, s) in enumerate(Topology[1:-1])]
output_layer = ['o' + str(i) for i in range(1, Topology[-1]+1)]
nng = gv.Graph()
nng.attr(rankdir='LR', splines='false')
# create nodes for input layer
for n in input_layer:
nng.node(n, label='', shape='point', width='0.05')
# create nodes for hidden layers
for NodeList in hidden_layers:
for n in NodeList:
nng.node(n, label='', shape='circle', width='0.1')
# create nodes for output layer
for n in output_layer:
nng.node(n, label='', shape='circle', width='0.1')
# connect input layer to first hidden layer
for n1 in input_layer:
for n2 in hidden_layers[0]:
nng.edge(n1, n2)
# connect hidden layers d to hidden layer d+1
for d in range(0, L-3):
for n1 in hidden_layers[d]:
for n2 in hidden_layers[d+1]:
nng.edge(n1, n2)
# connect output layer
for n1 in hidden_layers[L-3]:
for n2 in output_layer:
nng.edge(n1, n2)
return nng
Topology = [3, 6, 4, 2]
nn1 = generateNN(Topology)
nn1
Topology = [8, 12, 8, 6, 3]
nn2 = generateNN(Topology)
nn2
Topology = [12, 9, 10, 8, 7, 8, 6, 5, 4, 8, 5, 6, 7, 5, 4, 4, 4, 7, 8, 9]
nn3 = generateNN(Topology)
nn3
Explanation: The function $\texttt{generateNN}(\texttt{Topology})$ takes a
network topology Topology as its argument and draws a graph of the
resulting fully connected feed-forward neural net. A network topology is a list of numbers specifying the number of neurons of each layer.
For example, the network topology [3, 8, 6, 2] specifies a neural network with three layers of neurons.
The network has $3$ input nodes, the first hidden layer has $8$ neurons, the second hidden layer has $6$ neurons, and
the output layer has $2$ neurons.
End of explanation |
2,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (10, 20)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
def to_id(text, vocab, add_eos=False):
ids = []
for sentence in text.split('\n'):
sw_ids = [] # sentence words ids
for word in sentence.split():
sw_ids.append(vocab[word])
if add_eos:
sw_ids.append(vocab['<EOS>'])
ids.append(sw_ids)
return ids
source_id_text = to_id(source_text, source_vocab_to_int)
target_id_text = to_id(target_text, target_vocab_to_int, True)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
inputs = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
tsl = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
mtl = tf.reduce_max(tsl, name='max_target_len')
ssl = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return inputs, targets, lr, keep_prob, tsl, mtl, ssl
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
go_id = target_vocab_to_int['<GO>']
# Ref: udacity/deep-learning.git:seq2seq/sequence_to_sequence_implementation.ipynb
target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], go_id), target_data], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
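A tiny illustration of the intended effect with made-up word ids (here 1 stands for <GO> and 2 for <EOS>):
```
batch = [[12, 7, 3, 2], [5, 6, 8, 2]]        # two target sequences ending in <EOS>
shifted = [[1] + row[:-1] for row in batch]  # drop the last id, prepend <GO>
print(shifted)                               # [[1, 12, 7, 3], [1, 5, 6, 8]]
```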
End of explanation
from imp import reload
reload(tests)
# RNN cell
def make_cell(rnn_size, seed=42):
initializer = tf.random_uniform_initializer(-0.1, 0.1, seed=seed)
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=initializer)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
inputs = tf.contrib.layers.embed_sequence(rnn_inputs,
source_vocab_size,
encoding_embedding_size)
cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
return tf.nn.dynamic_rnn(cell, inputs,
sequence_length=source_sequence_length,
dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
s2s = tf.contrib.seq2seq
# Apply dropout
drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
# Create the decoder
helper = s2s.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = s2s.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
# Perform dynamic decoding
return s2s.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_summary_length)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
s2s = tf.contrib.seq2seq
# vocab_size is not in use?
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name='start_tokens')
helper = s2s.GreedyEmbeddingHelper(dec_embeddings, start_tokens,
end_of_sequence_id)
drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
decoder = s2s.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
return s2s.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
out_kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)
output_layer = Dense(target_vocab_size, kernel_initializer=out_kernel_initializer)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
with tf.variable_scope("decoding") as decoding_scope:
train_logits = decoding_layer_train(encoder_state,
dec_cell,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
inference_logits = decoding_layer_infer(encoder_state,
dec_cell,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, encoder_state = encoding_layer(input_data, rnn_size,
num_layers, keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data,
target_vocab_to_int,
batch_size)
return decoding_layer(dec_input, encoder_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 80
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [
vocab_to_int[x] if x in vocab_to_int else vocab_to_int['<UNK>']
for x in sentence.lower().split()
]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
2,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pull political boundary data from OpenStreetMap as shapefile
This notebook pulls political boundary data from the OpenStreetMap database and creates a shapefile containing the query results.
Required packages
<a href="https
Step1: Import statements
Step2: Utility functions
function to see what results were returned from the Overpass API query
Step3: Query OpenStreetMap using OverpassAPI via overpy python package
setup Overpass api
Step4: define bounding box from a 1km-buffered envelope around the study area boundary
Step7: define query
Step9: execute query
Step10: Write OpenStreetMap data to a shapefile | Python Code:
bounding_box_file = ""
result_shapefile_filepath = ""
p1 = pyproj.Proj("+init=epsg:31254")
p2 = pyproj.Proj("+init=epsg:4326")
p3 = pyproj.Proj("+init=epsg:3857")
p4 = pyproj.Proj("+init=epsg:25832")
Explanation: Pull political boundary data from OpenStreetMap as shapefile
This notebook pulls political boundary data from the OpenStreetMap database and creates a shapefile containing the query results.
Required packages
<a href="https://github.com/DinoTools/python-overpy">overpy</a> <br />
<a href="https://github.com/Toblerity/Fiona">Fiona</a>
Variable settings
bounding_box_file — path to the shapefile that defines the desired bounding box used to query the OpenStreetMap database <br />
result_shapefile_filepath — path to export shapefile containing the query results
End of explanation
import overpy
import fiona
import numpy
import geopandas
from shapely.ops import polygonize
from shapely.geometry import LineString
from database.models import Site
import pyproj
from matplotlib import pyplot
%matplotlib inline
Explanation: Import statements
End of explanation
def print_results(results):
    for way in results.ways:
print("Name: %s" % way.tags.get("name", "n/a"))
print(" Highway: %s" % way.tags.get("highway", "n/a"))
print(" Nodes:")
for node in way.nodes:
print(" Lat: %f, Lon: %f" % (node.lat, node.lon))
Explanation: Utility functions
function to see what results were returned from the Overpass API query
End of explanation
api = overpy.Overpass()
Explanation: Query OpenStreetMap using OverpassAPI via overpy python package
setup Overpass api
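For orientation, a minimal illustrative Overpass QL query (not used in this notebook; the tag and the bounding-box coordinates are arbitrary examples):
```
example_query = '''
node["place"="city"](46.9,10.9,47.5,11.8);
out;
'''
# example_result = api.query(example_query)
```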
End of explanation
with fiona.open(bounding_box_file, mode='r') as bounding_box:
bounds = bounding_box.bounds
bounding_box.close()
print(bounds)
Explanation: define bounding box from a 1km-buffered envelope around the study area boundary
End of explanation
query = way({bottom},{left},{top},{right}) ["highway"]; (._;>;); out body;.format(bottom=bounds[1],
left=bounds[0],
top=bounds[3],
right=bounds[2])
query =
[out:json];
relation
["boundary"="administrative"]
["admin_level"="2"]
["name:en"="Austria"];
(._;>;);
out;
.replace("\n", "").replace(" ", "")
query
Explanation: define query
End of explanation
result = api.query(query)
ways = numpy.empty(len(result.ways), dtype=numpy.object)
for i, way in enumerate(result.ways):
ways[i] = LineString([ (node.lon, node.lat) for node in way.nodes ])
boundaries = list(polygonize(ways))
boundaries = geopandas.GeoDataFrame(geometry=boundaries, crs="+init=epsg:4326")
boundaries
boundaries.plot(facecolor='white', edgecolor='red')
bbox = boundaries.bounds.iloc[0]
bbox
query =
relation({s}, {w}, {n}, {e})
["boundary"="administrative"]
["admin_level"="2"];
(._;>;);
out;
.format(s=bbox['miny'], w=bbox['minx'], n=bbox['maxy'], e=bbox['maxx']).replace("\n", "").replace(" ", "")
query
result = api.query(query)
ways = numpy.empty(len(result.ways), dtype=numpy.object)
for i, way in enumerate(result.ways):
ways[i] = LineString([ (node.lon, node.lat) for node in way.nodes ]).simplify(0.01, preserve_topology=False)
boundaries = list(polygonize(ways))
boundaries = geopandas.GeoDataFrame(geometry=boundaries, crs="+init=epsg:4326")
boundaries = boundaries.to_crs(crs="+init=epsg:25832")
center = Site.objects.get(name='Hofgarten')
x, y = center.geometry.coords
x, y = pyproj.transform(p1, p4, x, y)
x
y
geopandas.datasets.available
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
area = world[world.name.isin(['Austria', 'Germany', 'Switzerland', 'Italy'])]
plt = area.plot(facecolor='white', edgecolor='black')
plt.set_frame_on(False)
Explanation: execute query
End of explanation
from fiona.crs import from_epsg
schema = {'geometry': 'LineString', 'properties': {'Name':'str:80', 'Type':'str:80'}}
with fiona.open(result_shapefile_filepath, 'w', crs=from_epsg(4326), driver='ESRI Shapefile', schema=schema) as output:
for way in result.ways:
# the shapefile geometry use (lon,lat)
line = {'type': 'LineString', 'coordinates':[(node.lon, node.lat) for node in way.nodes]}
prop = {'Name': way.tags.get("name", "n/a"), 'Type': way.tags.get("highway", "n/a")}
output.write({'geometry': line, 'properties':prop})
output.close()
Explanation: Write OpenStreetMap data to a shapefile
End of explanation |
2,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Scikit-learn
Step1: Let's start again with our text-classification problem, but for now we will only use a reduced number of instances. We will work only with 3,000 instances.
Step2: Then import the set of stop words and create a pipeline that compounds the TF-IDF vectorizer and the Naïve Bayes algorithms (recall that we had a stopwords_en.txt file with a list of stop words).
Step3: If we evaluate our algorithm with a three-fold cross-validation, we obtain a mean score of around 0.81.
Step4: It looks like we should train the algorithm with a list of different parameter values and keep the parameter value that achieves the best results. Let's implement a helper function to do that. This function will train the algorithm with a list of values, each time obtaining an accuracy score calculated by performing k-fold cross-validation
on the training instances. After that, it will plot the training and testing scores as a function of the parameter values.
Step5: Let's call this function; we will use numpy's logspace function to generate a list of alpha values spaced evenly on a log scale.
Step6: As expected, the training accuracy is always greater than the testing accuracy. The best results are obtained with an alpha value of 0.1 (accuracy of 0.81)
Step7: We created a very useful function to graph and obtain the best parameter value for a classifier. Let's use it to adjust another classifier that uses a Support Vector Machines (SVM) instead of MultinomialNB
Step8: For gamma < 1 we have underfitting. For gamma > 1 we have overfitting. So here, the best result is for gamma = 1 where we obtain a training an accuracy of 0.999 and a testing accuracy of about 0.75
Grid Search
If you take a closer look at the SVC class constructor parameters, we have other parameters, apart from gamma, that may also affect classifier performance. If we only adjust the gamma value, we implicitly state that the optimal C value is 1.0 (the default value that we did not explicitly set). Perhaps we could obtain better results with a new combination of C and gamma values. This opens a new degree of complexity; we should try all the parameter combinations and keep the better one.
With GridSearchCV, we can specify a grid of any number of parameters and parameter values to traverse. It will train the classifier for each combination and obtain a cross-validation accuracy to evaluate each one.
Step9: Let's execute our grid search and print the best parameter values and scores.
Step11: With the grid search we obtained a better combination of C and gamma parameters, for values 10.0 and 0.10 respectively, we obtained a 3-fold cross validation accuracy of 0.828 much better than the best value we obtained (0.76) in the previous experiment by only adjusting gamma and keeeping C value at 1.0.
We could continue trying to improve the results by also adjusting the vectorizer parameters in the grid search.
Parallelizing
Grid search calculation grows exponentially with each parameter and its possible values we want to tune. We could reduce our response time if we calculate each of the combinations in parallel instead of sequentially, as we have done. In our previous example, we had four different values for gamma and three different values for C, summing up 12 parameter combinations. Additionally, we also needed to train each combination three times (in a three-fold cross-validation), so we summed up
36 trainings and evaluations. We could try to run these 36 tasks in parallel, since the tasks are independent.
Most modern computers have multiple cores that can be used to run tasks in parallel. We also have a very useful tool within IPython, called IPython parallel, that allows us to run independent tasks in parallel, each task in a different core of our machine. Let's do that with our text classifier example.
First we will declare a function that will persist all the K folds for the cross validation in different files. These files will be loaded by a process that will execute the corresponding fold
Step12: The following function loads a particular fold and fits the classifier with the specified parameters set. Finally returns the testing score. This function will be called by each of the parallel processes
Step14: This function executes the grid search in parallel processes. For each of the parameter combination (returned by the IterGrid iterator), it iterates over the K folds and creates a process to compute the evaluation. It returns the parameter combinations alongside with the tasks list | Python Code:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
Explanation: Learning Scikit-learn: Machine Learning in Python
IPython Notebook for Chapter 4: Advanced Features - Model Selection
In the previous section we worked on ways to preprocess the data and select the most promising features. As we stated, selecting a good set of features is a crucial step to obtain good results. Now we will focus on another important step: selecting the algorithm parameters, known as hyperparameters to distinguish them from the parameters that are adjusted within the machine learning algorithm. Many machine learning algorithms include hyperparameters (from now on we will simply call them parameters) that guide certain aspects of the underlying method and have great impact on the results. In this section we will review some methods to help us obtain the best parameter configuration, a process known as model selection.
We will look back at the text-classification problem we addressed in Chapter 2, Supervised Learning. In that example, we compounded a TF-IDF vectorizer alongside a multinomial Naïve Bayes (NB) algorithm to classify a set of newsgroup messages into a discrete number of categories. The MultinomialNB algorithm has one important parameter, named alpha, that adjusts the smoothing. We initially used the class with its default parameter values (alpha = 1.0) and obtained an accuracy of 0.89. But when we set alpha to 0.01, we obtained a noticeable accuracy improvement to 0.92. Clearly, the configuration of the alpha parameter has great impact on the performance of the algorithm. How can we be sure 0.01 is the best value? Perhaps if we try other possible values, we could still obtain better results.
Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter. Show the versions we will be using (in case you have problems running the notebooks).
End of explanation
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
n_samples = 3000
X = news.data[:n_samples]
y = news.target[:n_samples]
Explanation: Let's start again with our text-classification problem, but for now we will only use a reduced number of instances. We will work only with 3,000 instances.
End of explanation
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
def get_stop_words():
result = set()
for line in open('data/stopwords_en.txt', 'r').readlines():
result.add(line.strip())
return result
stop_words = get_stop_words()
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('nb', MultinomialNB(alpha=0.01)),
])
Explanation: Then import the set of stop words and create a pipeline that compounds the TF-IDF vectorizer and the Naïve Bayes algorithms (recall that we had a stopwords_en.txt file with a list of stop words).
End of explanation
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
def evaluate_cross_validation(clf, X, y, K):
# create a k-fold croos validation iterator of k=5 folds
cv = KFold(len(y), K, shuffle=True, random_state=0)
# by default the score used is the one returned by score method of the estimator (accuracy)
scores = cross_val_score(clf, X, y, cv=cv)
print scores
print ("Mean score: {0:.3f} (+/-{1:.3f})").format(
np.mean(scores), sem(scores))
evaluate_cross_validation(clf, X, y, 3)
Explanation: If we evaluate our algorithm with a three-fold cross-validation, we obtain a mean score of around 0.81.
End of explanation
def calc_params(X, y, clf, param_values, param_name, K):
# initialize training and testing scores with zeros
train_scores = np.zeros(len(param_values))
test_scores = np.zeros(len(param_values))
# iterate over the different parameter values
for i, param_value in enumerate(param_values):
print param_name, ' = ', param_value
# set classifier parameters
clf.set_params(**{param_name:param_value})
# initialize the K scores obtained for each fold
k_train_scores = np.zeros(K)
k_test_scores = np.zeros(K)
# create KFold cross validation
cv = KFold(n_samples, K, shuffle=True, random_state=0)
# iterate over the K folds
for j, (train, test) in enumerate(cv):
# fit the classifier in the corresponding fold
# and obtain the corresponding accuracy scores on train and test sets
clf.fit([X[k] for k in train], y[train])
k_train_scores[j] = clf.score([X[k] for k in train], y[train])
k_test_scores[j] = clf.score([X[k] for k in test], y[test])
# store the mean of the K fold scores
train_scores[i] = np.mean(k_train_scores)
test_scores[i] = np.mean(k_test_scores)
# plot the training and testing scores in a log scale
plt.semilogx(param_values, train_scores, alpha=0.4, lw=2, c='b')
plt.semilogx(param_values, test_scores, alpha=0.4, lw=2, c='g')
plt.xlabel(param_name + " values")
plt.ylabel("Mean cross validation accuracy")
# return the training and testing scores on each parameter value
return train_scores, test_scores
Explanation: It looks like we should train the algorithm with a list of different parameter values and keep the parameter value that achieves the best results. Let's implement a helper function to do that. This function will train the algorithm with a list of values, each time obtaining an accuracy score calculated by performing k-fold cross-validation
on the training instances. After that, it will plot the training and testing scores as a function of the parameter values.
End of explanation
alphas = np.logspace(-7, 0, 8)
print alphas
train_scores, test_scores = calc_params(X, y, clf, alphas, 'nb__alpha', 3)
Explanation: Let's call this function; we will use numpy's logspace function to generate a list of alpha values spaced evenly on a log scale.
End of explanation
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
Explanation: As expected, the training accuracy is always greater than the testing accuracy. The best results are obtained with an alpha value of 0.1 (accuracy of 0.81):
End of explanation
from sklearn.svm import SVC
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gammas = np.logspace(-2, 1, 4)
train_scores, test_scores = calc_params(X, y, clf, gammas, 'svc__gamma', 3)
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
Explanation: We created a very useful function to graph and obtain the best parameter value for a classifier. Let's use it to adjust another classifier that uses a Support Vector Machine (SVM) instead of MultinomialNB:
End of explanation
from sklearn.grid_search import GridSearchCV
parameters = {
'svc__gamma': np.logspace(-2, 1, 4),
'svc__C': np.logspace(-1, 1, 3),
}
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gs = GridSearchCV(clf, parameters, verbose=2, refit=False, cv=3)
Explanation: For gamma < 1 we have underfitting. For gamma > 1 we have overfitting. So here, the best result is for gamma = 1, where we obtain a training accuracy of 0.999 and a testing accuracy of about 0.75
Grid Search
If you take a closer look at the SVC class constructor parameters, we have other parameters, apart from gamma, that may also affect classifier performance. If we only adjust the gamma value, we implicitly state that the optimal C value is 1.0 (the default value that we did not explicitly set). Perhaps we could obtain better results with a new combination of C and gamma values. This opens a new degree of complexity; we should try all the parameter combinations and keep the better one.
With GridSearchCV, we can specify a grid of any number of parameters and parameter values to traverse. It will train the classifier for each combination and obtain a cross-validation accuracy to evaluate each one.
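As a quick sanity check on the size of this search (a sketch; the factor of 3 is the number of cross-validation folds requested above):
```
# 4 gamma values x 3 C values x 3 folds = 36 fits in total
n_fits = len(parameters['svc__gamma']) * len(parameters['svc__C']) * 3
print(n_fits)
```
Note that GridSearchCV also accepts an n_jobs parameter to run these fits in parallel on a single machine.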
End of explanation
%time _ = gs.fit(X, y)
gs.best_params_, gs.best_score_
Explanation: Let's execute our grid search and print the best parameter values and scores.
End of explanation
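# An optional follow-up (an addition to the original text): because we passed refit=False,
# GridSearchCV does not refit the pipeline for us, but we can do it ourselves with the best
# parameter combination it found.
clf.set_params(**gs.best_params_)
clf.fit(X, y)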
from sklearn.externals import joblib
from sklearn.cross_validation import KFold, ShuffleSplit
import os
def persist_cv_splits(X, y, K=3, name='data', suffix="_cv_%03d.pkl"):
"""Dump K folds to filesystem."""
cv_split_filenames = []
# create KFold cross validation
cv = KFold(len(X), K, shuffle=True, random_state=0)
# iterate over the K folds
for i, (train, test) in enumerate(cv):
cv_fold = ([X[k] for k in train], y[train], [X[k] for k in test], y[test])
cv_split_filename = name + suffix % i
cv_split_filename = os.path.abspath(cv_split_filename)
joblib.dump(cv_fold, cv_split_filename)
cv_split_filenames.append(cv_split_filename)
return cv_split_filenames
cv_filenames = persist_cv_splits(X, y, name='news')
Explanation: With the grid search we obtained a better combination of C and gamma parameters: with values of 10.0 and 0.10 respectively, we obtained a 3-fold cross-validation accuracy of 0.828, much better than the best value we obtained (0.76) in the previous experiment by only adjusting gamma and keeping the C value at 1.0.
We could continue trying to improve the results by also adjusting the vectorizer parameters in the grid search.
Parallelizing
The grid search computation grows exponentially with the number of parameters and the number of values we want to try for each. We could reduce our response time if we calculated each of the combinations in parallel instead of sequentially, as we have done so far. In our previous example, we had four different values for gamma and three different values for C, giving 12 parameter combinations. Additionally, we also needed to train each combination three times (in a three-fold cross-validation), so we performed
36 trainings and evaluations. We could try to run these 36 tasks in parallel, since the tasks are independent.
Most modern computers have multiple cores that can be used to run tasks in parallel. We also have a very useful tool within IPython, called IPython parallel, that allows us to run independent tasks in parallel, each task in a different core of our machine. Let's do that with our text classifier example.
First we will declare a function that will persist all the K folds for the cross validation in different files. These files will be loaded by a process that will execute the corresponding fold:
End of explanation
def compute_evaluation(cv_split_filename, clf, params):
# All module imports should be executed in the worker namespace
from sklearn.externals import joblib
# load the fold training and testing partitions from the filesystem
X_train, y_train, X_test, y_test = joblib.load(
cv_split_filename, mmap_mode='c')
clf.set_params(**params)
clf.fit(X_train, y_train)
test_score = clf.score(X_test, y_test)
return test_score
Explanation: The following function loads a particular fold and fits the classifier with the specified parameter set. Finally, it returns the testing score. This function will be called by each of the parallel processes:
End of explanation
from sklearn.grid_search import ParameterGrid
def parallel_grid_search(lb_view, clf, cv_split_filenames, param_grid):
all_tasks = []
all_parameters = list(ParameterGrid(param_grid))
# iterate over parameter combinations
for i, params in enumerate(all_parameters):
task_for_params = []
# iterate over the K folds
for j, cv_split_filename in enumerate(cv_split_filenames):
t = lb_view.apply(
compute_evaluation, cv_split_filename, clf, params)
task_for_params.append(t)
all_tasks.append(task_for_params)
return all_parameters, all_tasks
from sklearn.svm import SVC
from IPython.parallel import Client
client = Client()
lb_view = client.load_balanced_view()
all_parameters, all_tasks = parallel_grid_search(
lb_view, clf, cv_filenames, parameters)
def print_progress(tasks):
progress = np.mean([task.ready() for task_group in tasks
for task in task_group])
print "Tasks completed: {0}%".format(100 * progress)
print_progress(all_tasks)
def find_bests(all_parameters, all_tasks, n_top=5):
"""Compute the mean score of the completed tasks."""
mean_scores = []
for param, task_group in zip(all_parameters, all_tasks):
scores = [t.get() for t in task_group if t.ready()]
if len(scores) == 0:
continue
mean_scores.append((np.mean(scores), param))
return sorted(mean_scores, reverse=True)[:n_top]
print find_bests(all_parameters, all_tasks)
Explanation: This function executes the grid search in parallel. For each of the parameter combinations (returned by the ParameterGrid iterator), it iterates over the K folds and submits a task to compute the evaluation. It returns the parameter combinations along with the tasks list:
End of explanation |
2,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple linear regression
Step1: Fitting multiple linear regression to the training set
Step2: Building the optimal model using Backward elimination
Previously, to build the multiple regression model we used all the independent variables.
Out of these, some independent variables are highly statistically significant and some are not | Python Code:
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#importing the datset
dataset = pd.read_csv('datasets/50_Startups.csv')
dataset.head()
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 4].values
X
Y
# But there is categorical variable. i.e. independent varaible State
# We will use one hot encoder
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder_X = LabelEncoder()
X[:, 3] = labelencoder_X.fit_transform(X[:, 3])
onehotencoder = OneHotEncoder(categorical_features = [3])
X = onehotencoder.fit_transform(X).toarray()
# Here index column 3 has categorical variable
X
# avoiding the dummy variable trap
X = X[:, 1:]
# It doesn't contain the 1st column which was for california
X
# Split the dataset into train and test
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 0)
# We dont need feature scaling for multiple linear regression
# The library will takle care of that.
Explanation: Multiple linear regression
End of explanation
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train,Y_train)
# Predicting the test set results
Y_pred = regressor.predict(X_test)
Explanation: Fitting multiple linear regression to the training set
End of explanation
import statsmodels.formula.api as sm
# The multiple linear regression equation is
# y = b0 + b1*x1 + b2*x2 + ... + bn*xn
# statsmodels' OLS does not add an intercept automatically, so b0 must be written as b0*x0
# where x0 is a column vector of 1's
# we will add this vector of 1's to X
X = np.append(arr = np.ones((50, 1)).astype(int), values = X, axis = 1)
# the append method adds a new row or column
# np.ones adds a column of 1's if axis=1
# We want to keep column of 1's as first column so, np.ones is first argument and then append X to it
# X has 50 rows
# astype(int) is required to convert the 1's to int, otherwise
# you will get a type error
X[1]
# Backward elimination steps
# Step 1: Select a significance level to stay in the model(e.g. SL = 0.05)
X_opt = X[:, [0,1,2,3,4,5]]
# Step 2: Fit the full model with all possible predictors
regressor_OLS = sm.OLS(endog = Y, exog = X_opt).fit()
# Step 3: Consider the predictor with the highest P-value.
# If P>SL, go to STEP4, otherwise go to Finish(model is final)
regressor_OLS.summary()
# P-value of X2 is highest, so we remove X2
# Step4: Remove the predictor whose p-value is highest and more than SL = 0.05
# here we have to remove index 2 column
# Step 5: Fit the model without this variable
X_opt = X[:, [0,1,3,4,5]]
regressor_OLS = sm.OLS(endog = Y, exog = X_opt).fit()
regressor_OLS.summary()
# here X1 has highest p-value. So we will remove column index
# 1
X_opt = X[:, [0,3,4,5]]
regressor_OLS = sm.OLS(endog = Y, exog = X_opt).fit()
regressor_OLS.summary()
# here X2 has highest p-value. So we will remove column index
# 4
X_opt = X[:, [0,3,5]]
regressor_OLS = sm.OLS(endog = Y, exog = X_opt).fit()
regressor_OLS.summary()
# here X2 has highest p-value. So we will remove column index
# 5
X_opt = X[:, [0,3]]
regressor_OLS = sm.OLS(endog = Y, exog = X_opt).fit()
regressor_OLS.summary()
X[1]
# So column R&D spent has the max impact on profit
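# An optional automation of the manual procedure above (an addition, not part of the original
# walkthrough): keep dropping the predictor with the highest p-value until every remaining
# p-value is below the chosen significance level. Note that this naive version treats the
# constant column like any other predictor.
SL = 0.05
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
while True:
    regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
    if regressor_OLS.pvalues.max() <= SL:
        break
    X_opt = np.delete(X_opt, regressor_OLS.pvalues.argmax(), axis=1)
regressor_OLS.summary()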
Explanation: Building the optimal model using Backward elimination
Previously, to build the multiple regression model we used all the independent variables.
Out of these, some independent variables are highly statistically significant and some are not
End of explanation |
2,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Slice in volumetric data, via Plotly
A volume included in a parallelepiped is described by the values of a scalar field, $f(x,y,z)$, with $x\in[a,b]$, $y\in [c,d]$, $z\in[e,f]$.
A slice in this volume is visualized by coloring the surface of the slice, according to the values of the function f, restricted to that surface.
In order to plot a planar or a nonlinear slice of equation z=s(x,y) one proceeds as follows
Step1: Define a function that returns a slice as a Plotly Surface
Step2: Let us plot the slices z=0 and y=-0.5 in the volume defined by
Step3: In order to be able to compare the two slices, we choose a unique interval of values to be mapped to the colorscale
Step4: Oblique slice in volumetric data
As an example we plot comparatively two slices | Python Code:
import numpy as np
import plotly.graph_objects as go
from IPython.display import IFrame
Explanation: Slice in volumetric data, via Plotly
A volume included in a parallelepiped is described by the values of a scalar field, $f(x,y,z)$, with $x\in[a,b]$, $y\in [c,d]$, $z\in[e,f]$.
A slice in this volume is visualized by coloring the surface of the slice, according to the values of the function f, restricted to that surface.
In order to plot a planar or a nonlinear slice of equation z=s(x,y) one proceeds as follows:
define a meshgrid in x,y;
evaluate z=s(x,y)
define an instance of the Plotly Surface class, that represents the surface z=s(x,y)
this surface is colored according to the values, f(x,y,z), at its points. More precisely, the normalized values of the function f are mapped to a colormap/colorscale.
With obvious modifications we get slices of equation $x=s(y,z), y=s(z,x)$.
End of explanation
def get_the_slice(x,y,z, surfacecolor):
return go.Surface(x=x,
y=y,
z=z,
surfacecolor=surfacecolor,
coloraxis='coloraxis')
def get_lims_colors(surfacecolor):# color limits for a slice
return np.min(surfacecolor), np.max(surfacecolor)
Explanation: Define a function that returns a slice as a Plotly Surface:
End of explanation
scalar_f = lambda x,y,z: x*np.exp(-x**2-y**2-z**2)
x = np.linspace(-2,2, 50)
y = np.linspace(-2,2, 50)
x, y = np.meshgrid(x,y)
z = np.zeros(x.shape)
surfcolor_z = scalar_f(x,y,z)
sminz, smaxz = get_lims_colors(surfcolor_z)
slice_z = get_the_slice(x, y, z, surfcolor_z)
x = np.linspace(-2,2, 50)
z = np.linspace(-2,2, 50)
x, z = np.meshgrid(x, z)
y = -0.5 * np.ones(x.shape)
surfcolor_y = scalar_f(x,y,z)
sminy, smaxy = get_lims_colors(surfcolor_y)
vmin = min([sminz, sminy])
vmax = max([smaxz, smaxy])
slice_y = get_the_slice(x, y, z, surfcolor_y)
Explanation: Let us plot the slices z=0 and y=-0.5 in the volume defined by:
End of explanation
def colorax(vmin, vmax):
return dict(cmin=vmin,
cmax=vmax)
fig1 = go.Figure(data=[slice_z, slice_y])
fig1.update_layout(
title_text='Slices in volumetric data',
title_x=0.5,
width=700,
height=700,
scene_zaxis_range=[-2,2],
coloraxis=dict(colorscale='BrBG',
colorbar_thickness=25,
colorbar_len=0.75,
**colorax(vmin, vmax)))
#fig1.show()
from IPython.display import IFrame
IFrame('https://chart-studio.plotly.com/~empet/13862', width=700, height=700)
Explanation: In order to be able to compare the two slices, we choose a unique interval of values to be mapped to the colorscale:
End of explanation
alpha = np.pi/4
x = np.linspace(-2, 2, 50)
y = np.linspace(-2, 2, 50)
x, y = np.meshgrid(x,y)
z = -x * np.tan(alpha)
surfcolor_obl = scalar_f(x,y,z)
smino, smaxo = get_lims_colors(surfcolor_obl)
vmin = min([sminz, smino])
vmax = max([smaxz, smaxo])
slice_obl = get_the_slice(x,y,z, surfcolor_obl)
fig2 = go.Figure(data=[slice_z, slice_obl], layout=fig1.layout)
fig2.update_layout( coloraxis=colorax(vmin, vmax))
#fig2.show()
IFrame('https://chart-studio.plotly.com/~empet/13864', width=700, height=700)
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Oblique slice in volumetric data
As an example we plot two slices side by side: a slice through $z=0$ and an oblique planar slice, defined by rotating the plane $z=0$ by $\alpha=\pi/4$ about Oy.
Rotating the plane $z=c$ about Oy (from Oz towards Ox) by $\alpha$ radians, we get the plane of equation
$z=c/\cos(\alpha)-x\tan(\alpha)$
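A quick way to verify this formula (an added note, not in the original text): the rotation takes the unit normal $(0,0,1)$ of the plane $z=c$ to $(\sin\alpha, 0, \cos\alpha)$ and the point $(0,0,c)$ to $(c\sin\alpha, 0, c\cos\alpha)$, so the rotated plane satisfies $x\sin\alpha + z\cos\alpha = c$, which rearranges to the equation above.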
End of explanation |
2,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Generate Synthetic Data for Regression Analysis
Scikit-learn's datasets subpackage provides make_regression(), a command that generates synthetic data for testing regression analysis.
http
Step1: The linear model above is as follows.
$$
y = 100 + 79.1725 x
$$
Increasing the noise argument increases $\text{Var}[e]$, and increasing the bias argument increases the y-intercept.
Step2: This time we set n_features so that there are two independent variables, generate sample data, and draw a scatter plot as follows. The value of the dependent variable is indicated by the shading of the points.
Step3: If only one of the independent variables actually affects the y value, the data is generated as follows.
Step4: If the two independent variables are correlated, the data is generated as follows, and the correlation can also be seen in the scatter plot. | Python Code:
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
X, y, c = make_regression(n_samples=10, n_features=1, bias=0, noise=0, coef=True, random_state=0)
print("X\n", X)
print("y\n", y)
print("c\n", c)
plt.scatter(X, y, s=100)
plt.show()
Explanation: How to Generate Synthetic Data for Regression Analysis
Scikit-learn's datasets subpackage provides make_regression(), a command that generates synthetic data for testing regression analysis.
http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_regression.html
Inputs and outputs
make_regression() has the following inputs and outputs.
Inputs
n_samples : integer (optional, default 100)
number of samples
n_features : integer (optional, default 100)
number (dimension) of independent variables (features)
n_targets : integer (optional, default 1)
number (dimension) of dependent variables (targets)
n_informative : integer (optional, default 10)
number of independent variables (features) that are actually correlated with the dependent variable
effective_rank: integer or None (optional, default None)
number of independent variables (features) that are mutually independent; if None, all are independent
tail_strength : real number between 0 and 1 (optional, default 0.5)
determines the form of the correlation among the independent variables when effective_rank is not None
bias : real number (optional, default 0.0)
the intercept
noise : real number (optional, default 0.0)
standard deviation of the normally distributed noise added to the output, i.e. the dependent variable
coef : boolean (optional, default False)
if True, the coefficients of the linear model are also returned
random_state : integer (optional, default None)
seed for random number generation
Outputs
X : 2-dimensional array with shape [n_samples, n_features]
sample data for the independent variables
y : 1-dimensional array with shape [n_samples] or 2-dimensional array with shape [n_samples, n_targets]
sample data for the dependent variable
coef : 1-dimensional array with shape [n_features] or 2-dimensional array with shape [n_features, n_targets] (optional)
coefficients of the linear model, returned only when the input argument coef is True
For example, with one independent variable and one dependent variable, i.e. a linear model of the form
$$ y = C_0 + C_1 x + e $$
sample data satisfying this relationship can be generated as follows.
End of explanation
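# A quick sanity check (an addition, not part of the original notebook): fitting an ordinary
# least-squares model on the noiseless data generated above recovers an intercept close to the
# bias and a slope close to the coefficient c returned by make_regression.
from sklearn.linear_model import LinearRegression
check_model = LinearRegression().fit(X, y)
print(check_model.intercept_, check_model.coef_, c)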
X, y, c = make_regression(n_samples=50, n_features=1, bias=100, noise=10, coef=True, random_state=0)
plt.scatter(X, y, s=100)
plt.show()
Explanation: The linear model above is as follows.
$$
y = 100 + 79.1725 x
$$
Increasing the noise argument increases $\text{Var}[e]$, and increasing the bias argument increases the y-intercept.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, noise=10, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: This time we set n_features so that there are two independent variables, generate sample data, and draw a scatter plot as follows. The value of the dependent variable is indicated by the shading of the points.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, n_informative=1, noise=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: If only one of the independent variables actually affects the y value, the data is generated as follows.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=1, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: If the two independent variables are correlated, the data is generated as follows, and the correlation can also be seen in the scatter plot.
End of explanation |
2,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align='center' style="margin-bottom
Step1: Here is a broad outline of technical steps to be done for data collection
Sign up for TMDB (themoviedatabase.org), and set up API to scrape movie posters for above movies.
Set up and work with TMDb to get movie information from their database
Do the same for IMDb
Compare the entries of IMDb and TMDb for a movie
Get a listing and information of a few movies
Think and ponder over the potential challenges that may come our way, and think about interesting questions we can answer given the API's we have in our hands.
Get data from the TMDb
Let's go over each one of these one by one.
Signing up for TMDB and getting set up for getting movie metadata.
Step 1. Head over to [tmdb.org] (https
Step2: While the above functions have been made to make it easy to get genres, posters and ID, all the information that can be accessed can be seen by calling the function get_movie_info() as shown below
Step3: So, to get the tagline of the movie we can use the above dictionary key -
Step4: Getting movie information from IMDB
Now that we know how to get information from TMDB, here's how we can get information about the same movie from IMDB. This makes it possible for us to combine more information, and get a richer dataset. I urge you to try and see what dataset you can make, and go above and beyond the basic things I've done in this tutorial. Due to the differences between the two datasets, you will have to do some cleaning, however both of these datasets are extremely clean and it will be minimal.
Step5: A small comparison of IMDB and TMDB
Now that we have both systems running, let's do a very short comparison for the same movie?
Step6: As we can see, both the systems are correct, but the way they package information is different. TMDB calls it "Science Fiction" and has an ID for every genre. While IMDB calls it "Sci-Fi". Thus, it is important to keep track of these things when making use of both the datasets simultaneously.
Now that we know how to scrape information for one movie, let's take a bigger step towards scraping multiple movies?
Working with multiple movies
Step7: Let's look at one of these movies. It's the same format as above, as we had information on the movie "The Matrix", as you can see below. It's a dictionary which can be queried for specific information on that movie
Step8: Let's print out top 5 movie's titles!
Step9: Yes, I know. I'm a little upset too seeing Beauty and the Beast above Logan in the list!
Moving on, we can get their genres the same way.
Step10: So, TMDB doesn't want to make your job as easy as you thought. Why these random numbers? Want to see their genre names? Well, there's the Genre() class for it. Let's get this done!
Step11: Let's convert this list into a nice dictionary to look up genre names from genre IDs!
Step12: Now, let's re-print the genres of top 20 movies?
Step13: Section 4 - Building a dataset to work with
Step14: Pairwise analysis of Movie Genres
As our dataset is multi label, simply looking at the distribution of genres is not sufficient. It might be beneficial to see which genres co-occur, as it might shed some light on inherent biases in our dataset. For example, it would make sense if romance and comedy occur together more often than documentary and comedy. Such inherent biases tell us that the underlying population we are sampling from itself is skewed and not balanced. We may then take steps to account for such problems. Even if we don't take such steps, it is important to be aware that we are making the assumption that an unbalanced dataset is not hurting our performance and if need be, we can come back to address this assumption. Good old scientific method, eh?
So for the top 1000 movies let's do some pairwise analysis for genre distributions. Our main purpose is to see which genres occur together in the same movie. So, we first define a function which takes a list and makes all possible pairs from it. Then, we pull the list of genres for a movie and run this function on the list of genres to get all pairs of genres which occur together
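A possible shape for such a pairing helper (a sketch, not necessarily the exact function used later in the code; itertools gives us the unordered pairs) -
~~~~
import itertools

def list2pairs(genre_list):
    # all unordered pairs of distinct genres for one movie
    pairs = list(itertools.combinations(genre_list, 2))
    # also pair each genre with itself, so the diagonal counts total occurrences
    for genre in genre_list:
        pairs.append((genre, genre))
    return pairs
~~~~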
Step15: As mentioned, now we will pull genres for each movie, and use above function to count occurrences of when two genres occurred together
Step16: Let's take a look at the structure we just made. It is a 19X19 structure, as shown below. Also, see that we had 19 Genres. Needless to say, this structure counts the number of simultaneous occurrences of genres in same movie.
Step17: The above image shows how often the genres occur together, as a heatmap
Important thing to notice in the above plot is the diagonal. The diagonal corresponds to self-pairs, i.e. number of times a genre, say Drama occurred with Drama. Which is basically just a count of the total times that genre occurred!
As we can see there are a lot of dramas in the data set, it is also a very unspecific label. There are nearly no documentaries or TV Movies. Horror is a very distinct label, and romance is also not too widely spread.
To account for this unbalanced data, there are multiple things we can try to explore what interesting relationships can be found.
Delving Deeper into co-occurrence of genres
What we want to do now is to look for nice groups of genres that co-occur, and see if it makes sense to us logically? Intuitively speaking, wouldn't it be fun if we saw nice boxes on the above plot - boxes of high intensity i.e. genres that occur together and don't occur much with other genres. In some ways, that would isolate the co-occurrence of some genres, and heighten the co-occurrence of others.
While the data may not show that directly, we can play with the numbers to see if that's possible. The technique used for that is called biclustering.
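One way to do this with scikit-learn (a sketch, not the exact code used; `genre_matrix` is an illustrative name for the 19x19 co-occurrence counts built above, and in older scikit-learn versions the class lives in sklearn.cluster.bicluster) -
~~~~
import numpy as np
from sklearn.cluster.bicluster import SpectralCoclustering

model = SpectralCoclustering(n_clusters=5)
model.fit(genre_matrix + 1e-6)                 # small offset avoids all-zero rows
order = np.argsort(model.row_labels_)
reordered = genre_matrix[order][:, order]      # rows/columns regrouped into "boxes"
~~~~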
Step18: Looking at the above figure, "boxes" or groups of movie genres automatically emerge!
Intuitively - Crime, Sci-Fi, Mystery, Action, Horror, Drama, Thriller, etc co-occur.
AND, Romance, Fantasy, Family, Music, Adventure, etc co-occur.
That makes a lot of intuitive sense, right?
One challenge is the broad range of the drama genre. It makes the two clusters highly overlapping. If we merge it together with action, thriller, etc., we will end up with nearly all movies just having that label.
Based on playing around with the stuff above, we can sort the data into the following genre categories - "Drama, Action, ScienceFiction, exciting(thriller, crime, mystery), uplifting(adventure, fantasy, animation, comedy, romance, family), Horror, History"
Note
Step19: Let's remove any duplicates that we have in the list of movies
Step20: Also, let's remove movies for which we have no posters!
Step21: Congratulations, we are done scraping!
Building a dataset out of the scraped information!
This task is simple, but extremely important. It's basically what will set the stage for the whole project. Given that you have the freedom to cast your own project within the framework I am providing, there are many decisions that you must make to finalize your own version of the project.
As we are working on a classification problem, we need to make two decisions given the data at hand -
* What do we want to predict, i.e. what's our Y?
* What features to use for predicting this Y, i.e. what X should we use?
There are many different options possible, and it comes down to you to decide what's most exciting. I will be picking my own version for the example, but it is imperative that you think this through, and come up with a version which excites you!
As an example, here are some possible ways to frame Y, while still sticking to the problem of genre prediction -
Assume every movie can have multiple genres, and then it becomes a multi-label classification problem. For example, a movie can be Action, Horror and Adventure simultaneously. Thus, every movie can be more than one genre.
Make clusters of genres as we did in Milestone 1 using biclustering, and then every movie can have only 1 genre. This way, the problem becomes a simpler, multi-class problem. For example, a movie could have the class - Uplifting (refer Milestone 1), or Horror or History. No movie gets more than one class.
For the purposes of this implementation, I'm going with the first case explained above - i.e. a multi-label classification problem.
Similarly, for designing our input features i.e. X, you may pick any features you think make sense, for example, the Director of a movie may be a good predictor for genre. OR, you may choose any features you design using algorithms like PCA. Given the richness of IMDB, TMDB and alternate sources like Wikipedia, there is a plethora of options available. Be creative here!
Another important thing to note is that in doing so, we must also make many more small implementation decisions on the way. For example, what genres are we going to include? what movies are we going to include? All these are open ended!
My Implementation
Implementation decisions made -
* The problem is framed here as a multi-label problem explained above.
* We will try to predict multiple genres associated with a movie. This will be our Y.
* We will use 2 different kinds of X - text and images.
* For the text part - Input features being used to predict the genre is a form of the movie's plot available from TMDB using the property 'overview'. This will be our X.
* For the image part - we will use the scraped poster images as our X.
NOTE
Step22: Now let's store the genres for these movies in a list that we will later transform into a binarized vector.
Binarized vector representation is a very common and important way data is stored/represented in ML. Essentially, it's a way to reduce a categorical variable with n possible values to n binary indicator variables. What does that mean? For example, let [(1,3),(4)] be the list saying that sample A has two labels 1 and 3, and sample B has one label 4. For every sample, for every possible label, the representation is simply 1 if it has that label, and 0 if it doesn't have that label. So the binarized version of the above list will be -
~~~~~
[[1, 0, 1, 0],
 [0, 0, 0, 1]]
~~~~~
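scikit-learn can do this bookkeeping for us; here is a tiny illustration (a sketch, not necessarily the exact code used below) -
~~~~
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer(classes=[1, 2, 3, 4])
print(mlb.fit_transform([(1, 3), (4,)]))   # -> [[1 0 1 0]
                                           #     [0 0 0 1]]
~~~~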
Step23: This is interesting. We started with only 19 genre labels if you remember. But the shape for Y is 1666,20 while it should be 1666,19 as there are only 19 genres? Let's explore.
Let's find genre IDs that are not present in our original list of genres!
Step24: Well, this genre ID wasn't given to us by TMDB when we asked it for all possible genres. How do we go about this now? We can either neglect all samples that have this genre. But if you look up you'll see there's too many of these samples. So, I googled more and went into their documentation and found that this ID corresponds to the genre "Foreign". So, we add it to the dictionary of genre names ourselves. Such problems are ubiquitous in machine learning, and it is up to us to diagnose and correct them. We must always make a decision about what to keep, how to store data and so on.
Step25: Now, we turn to building the X matrix i.e. the input features! As described earlier, we will be using the overview of movies as our input vector! Let's look at a movie's overview for example!
Step26: So, how do we store this movie overview in a matrix?
Do we just store the whole string? We know that we need to work with numbers, but this is all text. What do we do?!
The way we will be storing the X matrix is called a "Bag of words" representation. The basic idea of this representation in our context is that we can think of all the distinct words that are possible in the movies' reviews as a distinct object. And then every movie overview can be thought as a "Bag" containing a bunch of these possible objects.
For example, in the case of Zootopia the movie above - The "Bag" contains the words ("Determined", "to", "prove", "herself"......"the", "mystery"). We make such lists for all movie overviews. Finally, we binarize again like we did above for Y. scikit-learn makes our job easy here by simply using a function CountVectorizer() because this representation is so often used in Machine Learning.
What this means is that, for all the movies that we have the data on, we will first count all the unique words. Say, there's 30,000 unique words. Then we can represent every movie overview as a 30000x1 vector, where each position in the vector corresponds to the presence or absence of a particular word. If the word corresponding to that position is present in the overview, that position will have 1, otherwise it will be 0.
Ex - if our vocabulary was 5 words - "I","am","a","good","boy", then the representation for the sentence "I am a boy" would be [1 1 1 0 1], and for the sentence "I am good" would be [1 1 0 1 0].
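A tiny illustration of this representation with CountVectorizer (a sketch using the two toy sentences above; note that the default tokenizer drops single-character words like "I" and "a") -
~~~~
from sklearn.feature_extraction.text import CountVectorizer

vectorize = CountVectorizer(binary=True)
X_toy = vectorize.fit_transform(["I am a boy", "I am good"])
print(vectorize.get_feature_names())   # the learned vocabulary
print(X_toy.toarray())                 # one binary row per sentence
~~~~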
Step27: Are all words equally important?
At the cost of sounding "Animal Farm" inspired, I would say not all words are equally important.
For example, let's consider the overview for the Matrix -
Step28: For "The Matrix" a word like "computer" is a stronger indicators of it being a Sci-Fi movie, than words like "who" or "powerful" or "vast". One way computer scientists working with natural language tackled this problem in the past (and it is still used very popularly) is what we call TF-IDF i.e. Term Frequence, Inverse Document Frequency. The basic idea here is that words that are strongly indicative of the content of a single document (every movie overview is a document in our case) are words that occur very frequently in that document, and very infrequently in all other documents. For example, "Computer" occurs twice here but probably will not in most other movie overviews. Hence, it is indicative. On the other hand, generic words like "a","and","the" will occur very often in all documents. Hence, they are not indicative.
So, can we use this information to reduce our insanely high 30,000 dimensional vector representation to a smaller, more handle-able number? But first up, why should we even care? The answer is probably one of the most used phrases in ML - "The Curse of Dimensionality".
The Curse of Dimensionality
This section is strongly borrowing from one of the greatest <a href="https
Step29: We are excluding all words that occur in too many or too few documents, as these are very unlikely to be discriminative. Words that only occur in one document most probably are names, and words that occur in nearly all documents are probably stop words. Note that the values here were not tuned using a validation set. They are just guesses. It is ok to do, because we didn't evaluate the performance of these parameters. In a strict case, for example for a publication, it would be better to tune these as well.
Step30: So, each movie's overview gets represented by a 1x1365 dimensional vector.
Now, we are ready for the kill. Our data is cleaned, hypothesis is set (Overview can predict movie genre), and the feature/output vectors are prepped. Let's train some models!
Step31: Congratulations, we have our data set ready!
A note
Step32: Let's divide our X and Y matrices into train and test split. We train the model on the train split, and report the performance on the test split. Think of this like the questions you do in the problem sets v/s the exam. Of course, they are both (assumed to be) from the same population of questions. And doing well on Problem Sets is a good indicator that you'll do well in exams, but really, you must test before you can make any claims about you knowing the subject.
Step33: As you can see, the performance is by and large poorer for movies which are less represented like War and animation, and better for categories like Drama.
Numbers aside, let's look at our model's predictions for a small sample of movies from our test set.
Step34: Let's try our second model? The naive bayes model.
Step35: As can be seen above, the results seem promising, but how do we really compare the two models? We need to quantify our performance so that we can say which one's better. Takes us back to what we discussed right in the beginning - we're learning a function $g$ which can approximate the original unknown function $f$. For some values of $x_i$, the predictions will be wrong for sure, and we want to minimize it.
For multi label systems, we often keep track of performance using "Precision" and "Recall". These are standard metrics, and you can google to read up more about them if you're new to these terms.
Evaluation Metrics
We will use the standard precision recall metrics for evaluating our system.
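For reference, a sketch of how these can be computed for multi-label predictions with scikit-learn (Y_test and predictions are placeholder names here for the binarized true and predicted label matrices) -
~~~~
from sklearn.metrics import precision_score, recall_score

precision = precision_score(Y_test, predictions, average='micro')
recall = recall_score(Y_test, predictions, average='micro')
print(precision, recall)
~~~~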
Step36: The average precision and recall scores for our samples are pretty good! Models seem to be working! Also, we can see that Naive Bayes outperforms the SVM. I strongly suggest you go read about Multinomial Bayes and think about why it works so well for "Document Classification", which is very similar to our case as every movie overview can be thought of as a document we are assigning labels to.
Section 6 - Deep Learning
Step37: Training a simple neural network model using these VGG features.
Step38: Let's first get the labels on our 1342 samples first! As image download fails on a few instances, the best way to work with the right model is to read the poster names downloaded, and working from there. These posters cannot be uploaded to Github as they are too large, and so are being downloaded and read from my local computer. If you do re-do it, you might have to check and edit the paths in the code to make sure it runs.
Step39: This looks odd, why are we re-running the loop we ran above again below? The reason is simple, the most important thing to know about numpy is that using vstack() and hstack() are highly sub-optimal. Numpy arrays when created, a fixed size is allocated in the memory and when we stack, a new one is copied and created in a new location. This makes the code really, really slow. The best way to do it (and this remains the same with MATLAB matrices if you work with them), is to create a numpy array of zeros, and over-write it row by row. The above code was just to see what size numpy array we will need!
The final movie poster set for which we have all the information we need, is 1265 movies. In the above code we are making an X numpy array containing the visual features of one image per row. So, the VGG features are reshaped to be in the shape (1,25088) and we finally obtain a matrix of shape (1265,25088)
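The pattern looks roughly like this (a sketch with illustrative variable names such as `vgg_feature_list`, not the exact code) -
~~~~
import numpy as np

X_visual = np.zeros((1265, 25088))
for i, feat in enumerate(vgg_feature_list):       # one stored VGG output per poster
    X_visual[i, :] = feat.reshape((1, 25088))     # overwrite row i in place instead of vstack-ing
~~~~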
Step40: Our binarized Y numpy array contains the binarized labels corresponding to the genre IDs of the 1277 movies
Step41: Now, we create our own keras neural network to use the VGG features and then classify movie genres. Keras makes this super easy.
Neural network architectures have gotten complex over the years. But the simplest ones contain very standard computations organized in layers, as described above. Given the popularity of some of these, Keras makes it as easy as writing out the names of these operations in a sequential order. This way you can make a network while completely avoiding the Mathematics (HIGHLY RECOMMENDED SPENDING MORE TIME ON THE MATH THOUGH)
Sequential() allows us to make models that follow this sequential order of layers. Different kinds of layers like Dense, Conv2D etc. can be used, and many activation functions like ReLU, Linear etc. are also available.
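A minimal example of that style (a sketch with made-up layer sizes, not necessarily the exact model used below) -
~~~~
from keras.models import Sequential
from keras.layers import Dense

model_visual = Sequential()
model_visual.add(Dense(1024, input_dim=25088, activation='relu'))
model_visual.add(Dense(20, activation='sigmoid'))   # one sigmoid output per genre label
model_visual.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
~~~~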
Important Question
Step42: We train the model using the fit() function. The parameters it takes are - training features and training labels, epochs, batch_size and verbose.
Simplest one - verbose. 0="dont print anything as you work", 1="Inform me as you go".
Often the data set is too large to be loaded into the RAM. So, we load data in batches. For batch_size=32 and epochs=10, the model starts loading rows from X in batches of 32 everytime it calculates the loss and updates the model. It keeps on going till it has covered all the samples 10 times.
So, the no. of times model is updated = (Total Samples/Batch Size) * (Epochs)
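For example, with roughly 1265 training samples, batch_size=32 and epochs=10, that is about 1265/32 ≈ 40 batches per epoch, i.e. roughly 400 parameter updates in total.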
Step43: For the first 10 epochs I trained the model in a verbose fashion to show you what's happening. After that, in the below cell you can see I turned off the verbosity to keep the code cleaner.
Step44: Let's look at some of our predictions?
Step45: So, even with just the poster i.e. visual features we are able to make great predictions! Sure, text outperforms the visual features, but the important thing is that it still works. In more complicated models, we can combine the two to make even better predictions. That is precisely what I work on in my research.
These models were trained on CPU's, and a simple 1 layer model was used to show that there is a lot of information in this data that the models can extract. With a larger dataset, and more training I was able to bring these numbers to as high as 70%, which is the similar to textual features. Some teams in my class outperformed this even more. More data is the first thing you should try if you want better results. Then, you can start playing with training on GPUs, learning rate schedules and other hyperparameters. Finally, you can consider using ResNet, a much more powerful neural network model than VGG. All of these can be tried once you have a working knowledge of machine learning.
Section 8 - Deep Learning to get Textual Features
Let's do the same thing as above with text now?
We will use an off the shelf representation for words - Word2Vec model. Just like VGGnet before, this is a model made available to get a meaningful representation. As the total number of words is small, we don't even need to forward propagate our sample through a network. Even that has been done for us, and the result is stored in the form of a dictionary. We can simply look up the word in the dictionary and get the Word2Vec features for the word.
You can download the dictionary from here - https
Step46: Now, we can simply look up for a word in the above loaded model. For example, to get the Word2Vec representation of the word "King" we just do - model2['king']
Step47: This way, we can represent the words in our overviews using this word2vec model. And then, we can use that as our X representations. So, instead of count of words, we are using a representation which is based on the semantic representation of the word. Mathematically, each word went from 3-4 dimensional (the length) to 300 dimensions!
For the same set of movies above, let's try and predict the genres from the deep representation of their overviews!
Step48: Text needs some pre-processing before we can train the model. The only preprocessing we do here is - we delete commonly occurring words which we know are not informative about the genre. Think of it as the clutter in some sense. These words are often removed and are referred to as "stop words". You can look them up online. These include simple words like "a", "and", "but", "how", "or" and so on. They can be easily removed using the python package NLTK.
From the above dataset, movies with overviews which contain only stop words, or movies with overviews containing no words with a word2vec representation, are neglected. Others are used to build our mean word2vec representation. Simply put, for every movie overview -
Take movie overview
Throw out stop words
For non stop words: look up each word's word2vec vector and take the mean of these vectors as the overview's representation (a short sketch follows below)
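~~~~
# A sketch of this mean-word2vec step (model2 is the loaded word2vec model mentioned above;
# `stop_words` stands for a list of English stop words, e.g. from NLTK):
import numpy as np

def mean_word2vec(overview, model2, stop_words):
    words = [w for w in overview.lower().split() if w not in stop_words and w in model2]
    if len(words) == 0:
        return None                                     # overview has no usable words
    return np.mean([model2[w] for w in words], axis=0)  # a single 300-d vector
~~~~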
Step49: Once again, we use a very similar, super simple architecture as before. | Python Code:
import torchvision
import urllib2
import requests
import json
import imdb
import time
import itertools
import wget
import os
import tmdbsimple as tmdb
import numpy as np
import random
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pickle
Explanation: <h1 align='center' style="margin-bottom: 0px"> An end to end implementation of a Machine Learning pipeline </h1>
<h4 align='center' style="margin-top: 0px"> SPANDAN MADAN</h4>
<h4 align='center' style="margin-top: 0px"> Visual Computing Group, Harvard University</h4>
<h4 align='center' style="margin-top: 0px"> Computer Science and Artificial Intelligence Laboratory, MIT</h4>
<h2 align='center' style="margin-top: 0px"><a href='https://github.com/Spandan-Madan/DeepLearningProject'>Link to Github Repo</a></h2>
Section 1. Introduction
Background
In the fall of 2016, I was a Teaching Fellow (Harvard's version of TA) for the graduate class on "Advanced Topics in Data Science (CS209/109)" at Harvard University. I was in-charge of designing the class project given to the students, and this tutorial has been built on top of the project I designed for the class.
Why write yet another Tutorial on Machine Learning and Deep Learning?
As a researcher on Computer Vision, I come across new blogs and tutorials on ML (Machine Learning) every day. However, most of them are just focussing on introducing the syntax and the terminology relevant to the field. For example - a 15 minute tutorial on Tensorflow using MNIST dataset, or a 10 minute intro to Deep Learning in Keras on Imagenet.
While people are able to copy paste and run the code in these tutorials and feel that working in ML is really not that hard, it doesn't help them at all in using ML for their own purposes. For example, they never introduce you to how you can run the same algorithm on your own dataset. Or, how do you get the dataset if you want to solve a problem. Or, which algorithms do you use - Conventional ML, or Deep Learning? How do you evaluate your models performance? How do you write your own model, as opposed to choosing a ready made architecture? All these form fundamental steps in any Machine Learning pipeline, and it is these steps that take most of our time as ML practitioners.
This tutorial breaks down the whole pipeline, and leads the reader through it step by step in an hope to empower you to actually use ML, and not just feel that it was not too hard. Needless to say, this will take much longer than 15-30 minutes. I believe a weekend would be a good enough estimate.
About the Author
I am <a href="http://spandanmadan.com/">Spandan Madan</a>, a graduate student at Harvard University working on Computer Vision. My research work is supervised collaboratively by Professor Hanspeter Pfister at Harvard, and Professor Aude Oliva at MIT. My current research focusses on using Computer Vision and Natural Language Techniques in tandem to build systems capable of reasoning using text and visual elements simultaneusly.
Section 2. Project Outline : Multi-Modal Genre Classification for Movies
Wow, that title sounds like a handful, right? Let's break it down step by step.
Q.1. what do we mean by Classification?
In machine learning, the task of classification means to use the available data to learn a <i>function</i> which can assign a category to a data point. For example, assign a genre to a movie, like "Romantic Comedy", "Action", "Thriller". Another example could be automatically assigning a category to news articles, like "Sports" and "Politics".
More Formally
Given:
A data point $x_i$
A set of categories $y_1,y_2...y_n$ that $x_i$ can belong to. <br>
Task :
Predict the correct category $y_k$ for a new data point $x_k$ not present in the given dataset.
Problem :
We don't know how the $x$ and $y$ are related mathematically.
Assumption :
We assume there exists a function $f$ relating $x$ and $y$ i.e. $f(x_i)=y_i$
Approach :
Since $f$ is not known, we learn a function $g$, which approximates $f$.
Important consideration :
If $f(x_i)=g(x_i)=y_i$ for all $x_i$, then the two functions $f$ and $g$ are exactly equal. Needless to say, this won't realistically ever happen, and we'll only be able to approximate the true function $f$ using $g$. This means, sometimes the prediction $g(x_i)$ will not be correct. And essentially, our whole goal is to find a $g$ which makes a really low number of such errors. That's basically all that we're trying to do.
For the sake of completeness, I should mention that this is a specific kind of learning problem which we call "Supervised Learning". Also, the idea that $g$ approximates $f$ well for data not present in our dataset is called "Generalization". It is absolutely paramount that our model generalizes, or else all our claims will only be true about data we already have and our predictions will not be correct.
We will look into generalization a little bit more a little ahead in the tutorial.
Finally, There are several other kinds, but supervised learning is the most popular and well studied kind.
Q.2. What's Multi-Modal Classification then?
In the machine learning community, the term Multi-Modal is used to refer to multiple <i>kinds</i> of data. For example, consider a YouTube video. It can be thought to contain 3 different modalities -
The video frames (visual modality)
The audio clip of what's being spoken (audio modality)
Some videos also come with the transcription of the words spoken in the form of subtitles (textual modality)
Consider, that I'm interested in classifying a song on YouTube as pop or rock. You can use any of the above 3 modalities to predict the genre - The video, the song itself, or the lyrics. But, needless to say, you can predict it much better if you could use all three simultaneously. This is what we mean by multi-modal classification.
For this project, we will be using visual and textual data to classify movie genres.
Project Outline
Scraping a dataset : The first step is to build a rich data set. We will collect textual and visual data for each movie.
Data pre-processing
Non-deep Machine Learning models : Probabilistic and Max-Margin Classifiers.
Intuitive theory behind Deep Learning
Deep Models for Visual Data
Deep Models for Text
Potential Extensions
Food for Thought
Section 3. Building your very own DataSet.
For any machine learning algorithm to work, it is imperative that we collect data which is "representative". Now, let's take a moment to discuss what the word representative mean.
What data is good data? OR What do you mean by data being "representative"?
Let's look at this from first principles. Mathematically, the premise of machine learning (to be precise, the strand of machine learning we'll be working with here) is that given input variable X, and an output variable y, IF there is a function such that g(X)=y, then if g is unknown, we can "learn" a function f which approximates g. At the very heart, its not at all different from what you may have earlier studied as "curve fitting". For example, if you're trying to predict someone's movie preferences then X can be information about the person's gender, age, nationality and so on, while y can be the genre they most like to listen to!
Let's do a thought experiment. Consider the same example - I'm trying to predict people's movie preferences. I walk into a classroom today, and collect information about some students and their movie preferences. Now, I use that data to build a model. How well do you think I can predict my father's movie preferences? The answer is - probably not very well. Why? Intuitively, there was probably no one in the classroom who was my father's age. My model can tell me that as people go from age 18 to 30, they have a higher preference for documentaries over superhero movies. But does this trend continue at 55? Probably, they may start liking family dramas more. Probably they don't. In a nutshell, we cannot say with certainty, as our data tells us nothing about it. So, if the task was to make predictions about ANYONE's movie preferences, then the data collected from just undergraduates is NOT representative.
Now, let's see why this makes sense Mathematically. Look at the graph below.
<img src="files/contour.png">
<center>Fig.1: Plot of a function we are trying to approximate(<a href="http://www.jzy3d.org/js/slider/images/ContourPlotsDemo.png">source</a>)</center>
If we consider that the variable plotted on the vertical axis is $y$, and the values of the 2 variables on the horizontal axes make the input vector $X$, then, our hope is that we are able to find a function $f$ which can approximate the function plotted here. If all the data I collect is such that $x_1$ belongs to (80,100) and $x_2$ belongs to (80,100), the learned function will only be able to learn the "yellow-green dipping bellow" part of the function. Our function will never be able to predict the behavior in the "red" regions of the true function. So, in order to be able to learn a good function, we need data sampled from a diverse set of values of $x_1$ and x2. That would be representative data to learn this contour.
Therefore, we want to collect data which is representative of all possible movies that we want to make predictions about. Or else (which is often the case), we need to be aware of the limitations of the model we have trained, and the predictions we can make with confidence. The easiest way to do this is to only make predictions about the domain of data we collected the training data from. For example, in our case, let us start by assuming that our model will predict genres for only English movies. Now, the task is to collect data about a diverse collection of movies.
So how do we get this data then? Neither google, nor any university has released such a dataset. We want to collect visual and textual data about these movies. The simple answer is to scrape it from the internet to build our own dataset. For the purpose of this project, we will use movie posters as our visual data, and movie plots as textual data. Using these, we will build a model that can predict movie genres!
We will be scraping data from 2 different movie sources - IMDB and TMDB
<h3>IMDB:http://www.imdb.com/</h3>
For those unaware, IMDB is the primary source of information about movies on the internet. It is immensely rich with posters, reviews, synopsis, ratings and many other information on every movie. We will use this as our primary data source.
<h3>TMDB:https://www.themoviedb.org/</h3>
TMDB, or The Movie DataBase, is an open source version of IMDB, with a free to use API that can be used to collect information. You do need an API key, but it can be obtained for free by just making a request after making a free account.
Note -
IMDB gives some information for free through the API, but doesn't release other information about movies. Here, we will keep it legal and only use information given to us for free and legally. However, scraping does reside on the moral fence, so to say. People often scrape data which isn't exactly publicly available for use from websites.
End of explanation
# set here the path where you want the scraped folders to be saved!
poster_folder='posters_final/'
if poster_folder.split('/')[0] in os.listdir('./'):
print('Folder already exists')
else:
os.mkdir('./'+poster_folder)
poster_folder
# For the purpose of this example, i will be working with the 1999 Sci-Fi movie - "The Matrix"!
api_key = 'a237bfff7e08d0e6902c623978183be0' #Enter your own API key here to run the code below.
# Generate your own API key as explained above :)
tmdb.API_KEY = api_key #This sets the API key setting for the tmdb object
search = tmdb.Search() #this instantiates a tmdb "search" object which allows your to search for the movie
import os.path
# These functions take in a string movie name i.e. like "The Matrix" or "Interstellar"
# What they return is pretty much clear in the name - Poster, ID , Info or genre of the Movie!
def grab_poster_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
posterp=movie.info()['poster_path']
title=movie.info()['original_title']
url='image.tmdb.org/t/p/original'+posterp
title='_'.join(title.split(' '))
strcmd='wget -O '+poster_folder+title+'.jpg '+url
os.system(strcmd)
def get_movie_id_tmdb(movie):
response = search.movie(query=movie)
movie_id=response['results'][0]['id']
return movie_id
def get_movie_info_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
info=movie.info()
return info
def get_movie_genres_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
genres=movie.info()['genres']
return genres
Explanation: Here is a broad outline of technical steps to be done for data collection
Sign up for TMDB (themoviedatabase.org), and set up API to scrape movie posters for above movies.
Set up and work with TMDb to get movie information from their database
Do the same for IMDb
Compare the entries of IMDb and TMDb for a movie
Get a listing and information of a few movies
Think and ponder over the potential challenges that may come our way, and think about interesting questions we can answer given the API's we have in our hands.
Get data from the TMDb
Let's go over each one of these one by one.
Signing up for TMDB and getting set up for getting movie metadata.
Step 1. Head over to [tmdb.org] (https://www.themoviedb.org/?language=en) and create a new account there by signing up.
Step 2. Click on your account icon on the top right, then from drop down menu select "Settings".
Step 3. On the settings page, you will see the option "API" on the left pane. Click on that.
Step 4. Apply for a new developer key. Fill out the form as required. The fields "Application Name" and "Application URL" are not important. Fill anything there.
Step 5. It should generate a new API key for you and you should also receive a mail.
Now that you have the API key for TMDB, you can query using TMDB. Remember, it allows only 40 queries per 10 seconds.
An easy way to respect this is to just have a call to <i>time.sleep(1)</i> after each iteration. This is also being very nice to the server.
If you want to try and maximize your throughput you can embed every TMDB request in a nested try except block. If the first try fails, the second try first uses python's sleep function to give it a little rest, and then try again to make a request. Something like this -
~~~~
try:
search.movie(query=movie) #An API request
except:
try:
time.sleep(10) #sleep for a bit, to give API requests a rest.
search.movie(query=movie) #Make second API request
except:
print "Failed second attempt too, check if there's any error in request"
~~~~
Using TMDB using the obtained API Key to get movie information
I have made these functions which make things easy. Basically, I'm making use of a library called tmdbsimple which makes TMDB using even easier. This library was installed at the time of setup.
However, if you want to avoid the library, it is also easy enough to load the API output directly into a dictionary like this without using tmdbsimple:
~~~
url = 'https://api.themoviedb.org/3/movie/1581?api_key=' + api_key
data = urllib2.urlopen(url).read()
# create dictionary from JSON
dataDict = json.loads(data)
~~~
End of explanation
print get_movie_genres_tmdb("The Matrix")
info=get_movie_info_tmdb("The Matrix")
print "All the Movie information from TMDB gets stored in a dictionary with the following keys for easy access -"
info.keys()
Explanation: While the above functions have been made to make it easy to get genres, posters and ID, all the information that can be accessed can be seen by calling the function get_movie_info() as shown below
End of explanation
info=get_movie_info_tmdb("The Matrix")
print info['tagline']
Explanation: So, to get the tagline of the movie we can use the above dictionary key -
End of explanation
# Create the IMDB object that will be used to access the IMDb's database.
imbd_object = imdb.IMDb() # by default access the web.
# Search for a movie (get a list of Movie objects).
results = imbd_object.search_movie('The Matrix')
# As this returns a list of all movies containing the word "The Matrix", we pick the first element
movie = results[0]
imbd_object.update(movie)
print "All the information we can get about this movie from IMDB-"
movie.keys()
print "The genres associated with the movie are - ",movie['genres']
Explanation: Getting movie information from IMDB
Now that we know how to get information from TMDB, here's how we can get information about the same movie from IMDB. This makes it possible for us to combine more information, and get a richer dataset. I urge you to try and see what dataset you can make, and go above and beyond the basic things I've done in this tutorial. Due to the differences between the two datasets, you will have to do some cleaning, however both of these datasets are extremely clean and it will be minimal.
End of explanation
print "The genres for The Matrix pulled from IMDB are -",movie['genres']
print "The genres for The Matrix pulled from TMDB are -",get_movie_genres_tmdb("The Matrix")
Explanation: A small comparison of IMDB and TMDB
Now that we have both systems running, let's do a very short comparison for the same movie?
End of explanation
all_movies=tmdb.Movies()
top_movies=all_movies.popular()
# This is a dictionary, and to access results we use the key 'results' which returns info on 20 movies
print(len(top_movies['results']))
top20_movs=top_movies['results']
Explanation: As we can see, both the systems are correct, but the way they package information is different. TMDB calls it "Science Fiction" and has an ID for every genre, while IMDB calls it "Sci-Fi". Thus, it is important to keep track of these things when making use of both the datasets simultaneously.
Now that we know how to scrape information for one movie, let's take a bigger step towards scraping multiple movies?
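If you do combine the two sources, a small alias table is usually enough to reconcile such naming differences. The mapping below is just a hypothetical sketch with the one mismatch we spotted; extend it as you find more:
~~~~
GENRE_ALIASES = {'Sci-Fi': 'Science Fiction'}  # hypothetical mapping, extend as needed
imdb_genres_normalized = [GENRE_ALIASES.get(g, g) for g in movie['genres']]
print imdb_genres_normalized
~~~~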
Working with multiple movies : Obtaining Top 20 movies from TMDB
We first instantiate an object that inherits from class Movies from TMDB. Then We use the popular() class method (i.e. function) to get top movies. To get more than one page of results, the optional page argument lets us see movies from any specified page number.
End of explanation
first_movie=top20_movs[0]
print "Here is all the information you can get on this movie - "
print first_movie
print "\n\nThe title of the first movie is - ", first_movie['title']
Explanation: Let's look at one of these movies. As you can see below, it's in the same format as the information we had on "The Matrix" above: a dictionary which can be queried for specific information on that movie
End of explanation
for i in range(len(top20_movs)):
mov=top20_movs[i]
title=mov['title']
print title
if i==4:
break
Explanation: Let's print out the top 5 movies' titles!
End of explanation
for i in range(len(top20_movs)):
mov=top20_movs[i]
genres=mov['genre_ids']
print genres
if i==4:
break
Explanation: Yes, I know. I'm a little upset too seeing Beauty and the Beast above Logan in the list!
Moving on, we can get their genres the same way.
End of explanation
# Create a tmdb genre object!
genres=tmdb.Genres()
# the list() method of the Genres() class returns a listing of all genres in the form of a dictionary.
list_of_genres=genres.list()['genres']
Explanation: So, TMDB doesn't want to make your job as easy as you thought. Why these random numbers? Want to see their genre names? Well, there's the Genre() class for it. Let's get this done!
End of explanation
Genre_ID_to_name={}
for i in range(len(list_of_genres)):
genre_id=list_of_genres[i]['id']
genre_name=list_of_genres[i]['name']
Genre_ID_to_name[genre_id]=genre_name
Explanation: Let's convert this list into a nice dictionary to look up genre names from genre IDs!
End of explanation
for i in range(len(top20_movs)):
mov=top20_movs[i]
title=mov['title']
genre_ids=mov['genre_ids']
genre_names=[]
for id in genre_ids:
genre_name=Genre_ID_to_name[id]
genre_names.append(genre_name)
print title,genre_names
if i==4:
break
Explanation: Now, let's re-print the genres of top 20 movies?
End of explanation
# Comment out this cell once the data is saved into pickle file.
all_movies=tmdb.Movies()
top1000_movies=[]
print('Pulling movie list, Please wait...')
for i in range(1,51):
if i%15==0:
time.sleep(7)
movies_on_this_page=all_movies.popular(page=i)['results']
top1000_movies.extend(movies_on_this_page)
len(top1000_movies)
f3=open('movie_list.pckl','wb')
pickle.dump(top1000_movies,f3)
f3.close()
print('Done!')
f3=open('movie_list.pckl','rb')
top1000_movies=pickle.load(f3)
f3.close()
Explanation: Section 4 - Building a dataset to work with : Let's take a look at the top 1000 movies from the database
Making use of the same api as before, we will just pull results from the top 50 pages. As mentioned earlier, the "page" attribute of the command top_movies=all_movies.popular() can be used for this purpose.
Please note: Some of the code below will store the data into python "pickle" files so that it can be read directly from disk later, as opposed to being downloaded every time. Once done, you should comment out any code which generated an object that was pickled and is no longer needed.
End of explanation
# This function just generates all possible pairs of movies
def list2pairs(l):
# itertools.combinations(l,2) makes all pairs of length 2 from list l.
pairs = list(itertools.combinations(l, 2))
    # then add the self-pairs (i, i), since itertools.combinations does not include them
for i in l:
pairs.append([i,i])
return pairs
Explanation: Pairwise analysis of Movie Genres
As our dataset is multi label, simply looking at the distribution of genres is not sufficient. It might be beneficial to see which genres co-occur, as it might shed some light on inherent biases in our dataset. For example, it would make sense if romance and comedy occur together more often than documentary and comedy. Such inherent biases tell us that the underlying population we are sampling from itself is skewed and not balanced. We may then take steps to account for such problems. Even if we don't take such steps, it is important to be aware that we are making the assumption that an unbalanced dataset is not hurting our performance and if need be, we can come back to address this assumption. Good old scientific method, eh?
So for the top 1000 movies let's do some pairwise analysis for genre distributions. Our main purpose is to see which genres occur together in the same movie. So, we first define a function which takes a list and makes all possible pairs from it. Then, we pull the list of genres for a movie and run this function on the list of genres to get all pairs of genres which occur together
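As a quick sanity check on the helper above, here is what it returns for a toy list of three genre IDs:
~~~~
print list2pairs([1, 2, 3])
# [(1, 2), (1, 3), (2, 3), [1, 1], [2, 2], [3, 3]]
~~~~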
End of explanation
# get all genre lists pairs from all movies
allPairs = []
for movie in top1000_movies:
allPairs.extend(list2pairs(movie['genre_ids']))
nr_ids = np.unique(allPairs)
visGrid = np.zeros((len(nr_ids), len(nr_ids)))
for p in allPairs:
visGrid[np.argwhere(nr_ids==p[0]), np.argwhere(nr_ids==p[1])]+=1
if p[1] != p[0]:
visGrid[np.argwhere(nr_ids==p[1]), np.argwhere(nr_ids==p[0])]+=1
Explanation: As mentioned, now we will pull genres for each movie, and use above function to count occurrences of when two genres occurred together
End of explanation
print visGrid.shape
print len(Genre_ID_to_name.keys())
annot_lookup = []
for i in xrange(len(nr_ids)):
annot_lookup.append(Genre_ID_to_name[nr_ids[i]])
sns.heatmap(visGrid, xticklabels=annot_lookup, yticklabels=annot_lookup)
Explanation: Let's take a look at the structure we just made. It is a 19X19 structure, as shown below. Also, see that we had 19 Genres. Needless to say, this structure counts the number of simultaneous occurrences of genres in same movie.
End of explanation
from sklearn.cluster import SpectralCoclustering
model = SpectralCoclustering(n_clusters=5)
model.fit(visGrid)
fit_data = visGrid[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]
annot_lookup_sorted = []
for i in np.argsort(model.row_labels_):
annot_lookup_sorted.append(Genre_ID_to_name[nr_ids[i]])
sns.heatmap(fit_data, xticklabels=annot_lookup_sorted, yticklabels=annot_lookup_sorted, annot=False)
plt.title("After biclustering; rearranged to show biclusters")
plt.show()
Explanation: The above image shows how often the genres occur together, as a heatmap
Important thing to notice in the above plot is the diagonal. The diagonal corresponds to self-pairs, i.e. number of times a genre, say Drama occurred with Drama. Which is basically just a count of the total times that genre occurred!
As we can see, there are a lot of dramas in the data set; Drama is also a very unspecific label. There are nearly no documentaries or TV Movies. Horror is a very distinct label, and romance is also not too widely spread.
To account for this unbalanced data, there are multiple things we can try to explore what interesting relationships can be found.
Delving Deeper into co-occurrence of genres
What we want to do now is to look for nice groups of genres that co-occur, and see if it makes sense to us logically? Intuitively speaking, wouldn't it be fun if we saw nice boxes on the above plot - boxes of high intensity i.e. genres that occur together and don't occur much with other genres. In some ways, that would isolate the co-occurrence of some genres, and heighten the co-occurrence of others.
While the data may not show that directly, we can play with the numbers to see if that's possible. The technique used for that is called biclustering.
End of explanation
# Done before, reading from pickle file now to maintain consistency of data!
# We now sample 100 movies per genre. Problem is that the sorting is by popular movies, so they will overlap.
# Need to exclude movies that were already sampled.
movies = []
baseyear = 2017
print('Starting pulling movies from TMDB. If you want to debug, uncomment the print command. This will take a while, please wait...')
done_ids=[]
for g_id in nr_ids:
#print('Pulling movies for genre ID '+g_id)
baseyear -= 1
for page in xrange(1,6,1):
time.sleep(0.5)
url = 'https://api.themoviedb.org/3/discover/movie?api_key=' + api_key
url += '&language=en-US&sort_by=popularity.desc&year=' + str(baseyear)
url += '&with_genres=' + str(g_id) + '&page=' + str(page)
data = urllib2.urlopen(url).read()
dataDict = json.loads(data)
movies.extend(dataDict["results"])
done_ids.append(str(g_id))
print("Pulled movies for genres - "+','.join(done_ids))
# f6=open("movies_for_posters",'wb')
# pickle.dump(movies,f6)
# f6.close()
f6=open("movies_for_posters",'rb')
movies=pickle.load(f6)
f6.close()
Explanation: Looking at the above figure, "boxes" or groups of movie genres automatically emerge!
Intuitively - Crime, Sci-Fi, Mystery, Action, Horror, Drama, Thriller, etc co-occur.
AND, Romance, Fantasy, Family, Music, Adventure, etc co-occur.
That makes a lot of intuitive sense, right?
One challenge is the broad range of the drama genre. It makes the two clusters highly overlapping. If we merge it together with action, thriller, etc., we will end up with nearly all movies having just that label.
Based on playing around with the stuff above, we can sort the data into the following genre categories - "Drama, Action, ScienceFiction, exciting(thriller, crime, mystery), uplifting(adventure, fantasy, animation, comedy, romance, family), Horror, History"
Note that this categorization is subjective and by no means the only right solution. One could also just stay with the original labels and only exclude the ones with not enough data. Such tricks are important to balance the dataset; they allow us to increase or decrease the strength of certain signals, making it possible to improve our inferences :)
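If you do go with such a grouping, a simple lookup table is enough to encode it. The dictionary below is only a sketch of the categorization described above (it is not used in the rest of this tutorial):
~~~~
genre_group = {
    'Drama': 'Drama', 'Action': 'Action', 'Science Fiction': 'ScienceFiction',
    'Thriller': 'Exciting', 'Crime': 'Exciting', 'Mystery': 'Exciting',
    'Adventure': 'Uplifting', 'Fantasy': 'Uplifting', 'Animation': 'Uplifting',
    'Comedy': 'Uplifting', 'Romance': 'Uplifting', 'Family': 'Uplifting',
    'Horror': 'Horror', 'History': 'History',
}
# any genre not listed above is kept as-is
grouped = [genre_group.get(name, name) for name in ['Comedy', 'Crime', 'Drama']]
~~~~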
Interesting Questions
This really should be a place for you to get creative and hopefully come up with better questions than me.
Here are some of my thoughts:
- Which actors are bound to a genre, and which can easily hop genres?
- Is there a trend in genre popularity over the years?
- Can you use sound tracks to identify the genre of a movie?
- Are top romance actors higher paid than top action actors?
- If you look at release date vs popularity score, which movie genres have a longer shelf life?
Ideas to explore specifically for feature correlations:
- Is title length correlated with movie genre?
- Are movie posters darker for horror than for romance and comedy?
- Are some genres specifically released more often at a certain time of year?
- Is the MPAA rating correlated with the genre?
Based on this new category set, we will now pull posters from TMDB as our training data!
End of explanation
movie_ids = [m['id'] for m in movies]
print "originally we had ",len(movie_ids)," movies"
movie_ids=np.unique(movie_ids)
print len(movie_ids)
seen_before=[]
no_duplicate_movies=[]
for i in range(len(movies)):
movie=movies[i]
id=movie['id']
if id in seen_before:
continue
# print "Seen before"
else:
seen_before.append(id)
no_duplicate_movies.append(movie)
print "After removing duplicates we have ",len(no_duplicate_movies), " movies"
Explanation: Let's remove any duplicates that we have in the list of movies
End of explanation
poster_movies=[]
counter=0
movies_no_poster=[]
print("Total movies : ",len(movies))
print("Started downloading posters...")
for movie in movies:
id=movie['id']
title=movie['title']
if counter==1:
print('Downloaded first. Code is working fine. Please wait, this will take quite some time...')
if counter%300==0 and counter!=0:
print "Done with ",counter," movies!"
print "Trying to get poster for ",title
try:
#grab_poster_tmdb(title)
poster_movies.append(movie)
except:
try:
time.sleep(7)
grab_poster_tmdb(title)
poster_movies.append(movie)
except:
movies_no_poster.append(movie)
counter+=1
print("Done with all the posters!")
print len(movies_no_poster)
print len(poster_movies)
# f=open('poster_movies.pckl','w')
# pickle.dump(poster_movies,f)
# f.close()
f=open('poster_movies.pckl','r')
poster_movies=pickle.load(f)
f.close()
# f=open('no_poster_movies.pckl','w')
# pickle.dump(movies_no_poster,f)
# f.close()
f=open('no_poster_movies.pckl','r')
movies_no_poster=pickle.load(f)
f.close()
Explanation: Also, let's remove movies for which we have no posters!
End of explanation
movies_with_overviews=[]
for i in range(len(no_duplicate_movies)):
movie=no_duplicate_movies[i]
id=movie['id']
overview=movie['overview']
if len(overview)==0:
continue
else:
movies_with_overviews.append(movie)
len(movies_with_overviews)
Explanation: Congratulations, we are done scraping!
Building a dataset out of the scraped information!
This task is simple, but extremely important. It's basically what will set the stage for the whole project. Given that you have the freedom to cast your own project within the framework I am providing, there are many decisions that you must make to finalize your own version of the project.
As we are working on a classification problem, we need to make two decisions given the data at hand -
* What do we want to predict, i.e. what's our Y?
* What features to use for predicting this Y, i.e. what X should we use?
There are many different options possible, and it comes down to you to decide what's most exciting. I will be picking my own version for the example, but it is imperative that you think this through, and come up with a version which excites you!
As an example, here are some possible ways to frame Y, while still sticking to the problem of genre prediction -
Assume every movie can have multiple genres, and then it becomes a multi-label classification problem. For example, a movie can be Action, Horror and Adventure simultaneously. Thus, every movie can be more than one genre.
Make clusters of genres as we did in Milestone 1 using biclustering, and then every movie can have only 1 genre. This way, the problem becomes a simpler, multi-class problem. For example, a movie could have the class - Uplifting (refer Milestone 1), or Horror or History. No movie gets more than one class.
For the purposes of this implementation, I'm going with the first case explained above - i.e. a multi-label classification problem.
Similarly, for designing our input features i.e. X, you may pick any features you think make sense, for example, the Director of a movie may be a good predictor for genre. OR, you may choose any features you design using algorithms like PCA. Given the richness of IMDB, TMDB and alternate sources like Wikipedia, there is a plethora of options available. Be creative here!
Another important thing to note is that in doing so, we must also make many more small implementation decisions on the way. For example, what genres are we going to include? what movies are we going to include? All these are open ended!
My Implementation
Implementation decisions made -
* The problem is framed here as a multi-label problem explained above.
* We will try to predict multiple genres associated with a movie. This will be our Y.
* We will use 2 different kinds of X - text and images.
* For the text part - Input features being used to predict the genre is a form of the movie's plot available from TMDB using the property 'overview'. This will be our X.
* For the image part - we will use the scraped poster images as our X.
NOTE : We will first look at some conventional machine learning models, which were popular before the recent rise of neural networks and deep learning. For the poster image to genre prediction, I have avoided using this for the reason that conventional ML models are simply not used anymore without using deep learning for feature extraction (all discussed in detail ahead, don't be scared by the jargon). For the movie overview to genre prediction problem we will look at both conventional models and deep learning models.
Now, let's build our X and Y!
First, let's identify movies that have overviews. The next few steps are going to be a good example of why data cleaning is important!
End of explanation
# genres=np.zeros((len(top1000_movies),3))
genres=[]
all_ids=[]
for i in range(len(movies_with_overviews)):
movie=movies_with_overviews[i]
id=movie['id']
genre_ids=movie['genre_ids']
genres.append(genre_ids)
all_ids.extend(genre_ids)
from sklearn.preprocessing import MultiLabelBinarizer
mlb=MultiLabelBinarizer()
Y=mlb.fit_transform(genres)
genres[1]
print Y.shape
print np.sum(Y, axis=0)
len(list_of_genres)
Explanation: Now let's store the genres for these movies in a list that we will later transform into a binarized vector.
Binarized vector representation is a very common and important way data is stored/represented in ML. Essentially, it's a way to represent a categorical variable with n possible values as n binary indicator variables. What does that mean? For example, let [(1,3),(4)] be the list saying that sample A has two labels 1 and 3, and sample B has one label 4. For every sample, for every possible label, the representation is simply 1 if it has that label, and 0 if it doesn't have that label. So the binarized version of the above list will be -
~~~~~
[[1,0,1,0],
 [0,0,0,1]]
~~~~~
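scikit-learn's MultiLabelBinarizer, which we use in the code cell above, produces exactly this representation. A tiny standalone example, assuming the four possible labels 1-4 from the illustration:
~~~~
from sklearn.preprocessing import MultiLabelBinarizer
mlb_demo = MultiLabelBinarizer(classes=[1, 2, 3, 4])
print mlb_demo.fit_transform([(1, 3), (4,)])
# [[1 0 1 0]
#  [0 0 0 1]]
~~~~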
End of explanation
# Create a tmdb genre object!
genres=tmdb.Genres()
# the list() method of the Genres() class returns a listing of all genres in the form of a dictionary.
list_of_genres=genres.list()['genres']
Genre_ID_to_name={}
for i in range(len(list_of_genres)):
genre_id=list_of_genres[i]['id']
genre_name=list_of_genres[i]['name']
Genre_ID_to_name[genre_id]=genre_name
for i in set(all_ids):
if i not in Genre_ID_to_name.keys():
print i
Explanation: This is interesting. We started with only 19 genre labels if you remember. But the shape for Y is 1666,20 while it should be 1666,19 as there are only 19 genres? Let's explore.
Let's find genre IDs that are not present in our original list of genres!
End of explanation
Genre_ID_to_name[10769]="Foreign" #Adding it to the dictionary
len(Genre_ID_to_name.keys())
Explanation: Well, this genre ID wasn't given to us by TMDB when we asked it for all possible genres. How do we go about this now? We could neglect all samples that have this genre, but if you look above you'll see there are too many such samples. So, I googled more and went into their documentation and found that this ID corresponds to the genre "Foreign". So, we add it to the dictionary of genre names ourselves. Such problems are ubiquitous in machine learning, and it is up to us to diagnose and correct them. We must always make a decision about what to keep, how to store data and so on.
End of explanation
sample_movie=movies_with_overviews[5]
sample_overview=sample_movie['overview']
sample_title=sample_movie['title']
print "The overview for the movie",sample_title," is - \n\n"
print sample_overview
Explanation: Now, we turn to building the X matrix i.e. the input features! As described earlier, we will be using the overview of movies as our input vector! Let's look at a movie's overview for example!
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
import re
content=[]
for i in range(len(movies_with_overviews)):
movie=movies_with_overviews[i]
id=movie['id']
overview=movie['overview']
overview=overview.replace(',','')
overview=overview.replace('.','')
content.append(overview)
print content[0]
print len(content)
Explanation: So, how do we store this movie overview in a matrix?
Do we just store the whole string? We know that we need to work with numbers, but this is all text. What do we do?!
The way we will be storing the X matrix is called a "Bag of words" representation. The basic idea of this representation in our context is that we can think of all the distinct words that are possible in the movies' reviews as a distinct object. And then every movie overview can be thought as a "Bag" containing a bunch of these possible objects.
For example, in the case of Zootopia the movie above - The "Bag" contains the words ("Determined", "to", "prove", "herself"......"the", "mystery"). We make such lists for all movie overviews. Finally, we binarize again like we did above for Y. scikit-learn makes our job easy here by simply using a function CountVectorizer() because this representation is so often used in Machine Learning.
What this means is that, for all the movies that we have the data on, we will first count all the unique words. Say, there's 30,000 unique words. Then we can represent every movie overview as a 30000x1 vector, where each position in the vector corresponds to the presence or absence of a particular word. If the word corresponding to that position is present in the overview, that position will have 1, otherwise it will be 0.
Ex - if our vocabulary was 5 words - "I","am","a","good","boy", then the representation for the sentence "I am a boy" would be [1 1 1 0 1], and for the sentence "I am good" would be [1 1 0 1 0].
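To make this concrete, here is a toy run of CountVectorizer on those two sentences. The binary=True flag and the relaxed token_pattern (so that one-letter words like "I" and "a" are kept) are only for this illustration - the CountVectorizer call we use on our real data further below keeps the defaults - and note that the vocabulary comes out in alphabetical order:
~~~~
from sklearn.feature_extraction.text import CountVectorizer
toy_vectorizer = CountVectorizer(binary=True, token_pattern=r'(?u)\b\w+\b')
toy_X = toy_vectorizer.fit_transform(["I am a boy", "I am good"])
print toy_vectorizer.get_feature_names()   # ['a', 'am', 'boy', 'good', 'i']
print toy_X.toarray()
# [[1 1 1 0 1]
#  [0 1 0 1 1]]
~~~~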
End of explanation
get_movie_info_tmdb('The Matrix')['overview']
Explanation: Are all words equally important?
At the cost of sounding "Animal Farm" inspired, I would say not all words are equally important.
For example, let's consider the overview for the Matrix -
End of explanation
# The min_df parameter makes sure we exclude words that only occur very rarely
# The default also is to exclude any words that occur in every movie description
vectorize=CountVectorizer(max_df=0.95, min_df=0.005)
X=vectorize.fit_transform(content)
Explanation: For "The Matrix" a word like "computer" is a stronger indicators of it being a Sci-Fi movie, than words like "who" or "powerful" or "vast". One way computer scientists working with natural language tackled this problem in the past (and it is still used very popularly) is what we call TF-IDF i.e. Term Frequence, Inverse Document Frequency. The basic idea here is that words that are strongly indicative of the content of a single document (every movie overview is a document in our case) are words that occur very frequently in that document, and very infrequently in all other documents. For example, "Computer" occurs twice here but probably will not in most other movie overviews. Hence, it is indicative. On the other hand, generic words like "a","and","the" will occur very often in all documents. Hence, they are not indicative.
So, can we use this information to reduce our insanely high 30,000 dimensional vector representation to a smaller, more manageable number? But first up, why should we even care? The answer is probably one of the most used phrases in ML - "The Curse of Dimensionality".
The Curse of Dimensionality
This section is strongly borrowing from one of the greatest <a href="https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf">ML papers I've ever read.</a>
This expression was coined by Bellman in 1961 to refer to the fact that many algorithms that work fine in low dimensions become intractable when the input is high-dimensional. The reason for them not working in high dimensions is very strongly linked to what we discussed earlier - having a representative dataset. Consider this: you have a function $f$ that depends on only one variable $x$, and $x$ can only take integer values from 1 to 100. Since it's one dimensional, it can be plotted on a line. To get a representative sample, you'd need to sample something like - $f(1),f(20),f(40),f(60),f(80),f(100)$
Now, let's increase the dimensionality i.e. number of variables and see what happens. Say, we have 2 variables $x_1$ and $x_2$, with the same possible values as before - integers between 1 and 100. Now, instead of a line, we'll have a plane with $x_1$ and $x_2$ on the two axes. The interesting bit is that instead of 100 possible inputs like before, we now have 10,000 possible inputs! Basically, we can make a 100x100 table of possible values of $x_1$ and $x_2$. Wow, that increased exponentially. Not just figuratively, but mathematically exponentially. Needless to say, to cover 5% of the space like we did before, we'd need to sample $f$ at 500 values.
For 3 variables, it would be 1,000,000 possible inputs, and we'd need to sample at 50,000 points. That's already more than the number of data points we have for many training problems we will come across.
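The arithmetic is easy to verify - both the number of possible inputs and the samples needed for a fixed 5% coverage grow exponentially with the number of variables:
~~~~
for d in [1, 2, 3]:
    grid_points = 100 ** d
    print d, grid_points, int(0.05 * grid_points)  # dimensions, possible inputs, samples for 5% coverage
# 1 100 5
# 2 10000 500
# 3 1000000 50000
~~~~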
Basically, generalizing correctly becomes exponentially harder as the dimensionality (number of features) of the examples grows, because a fixed-size training set covers a dwindling fraction of the input space. Even with a moderate dimension of 100 and a huge training set of a trillion examples, the latter covers only a fraction of about $10^{-18}$ of the input space. This is what makes machine learning
both necessary and hard.
So, yes, if some words are unimportant, we want to get rid of them and reduce the dimensionality of our X matrix. And the way we will do it is by using TF-IDF to identify un-important words. Python lets us do this with just one line of code (and this is why you should spend more time reading maths than coding!)
End of explanation
X.shape
Explanation: We are excluding all words that occur in too many or too few documents, as these are very unlikely to be discriminative. Words that only occur in one document most probably are names, and words that occur in nearly all documents are probably stop words. Note that the values here were not tuned using a validation set - they are just guesses. That is ok here, because we are not formally evaluating the effect of these parameters. In a strict case, for example for a publication, it would be better to tune these as well.
End of explanation
import pickle
f4=open('X.pckl','wb')
f5=open('Y.pckl','wb')
pickle.dump(X,f4)
pickle.dump(Y,f5)
f6=open('Genredict.pckl','wb')
pickle.dump(Genre_ID_to_name,f6)
f4.close()
f5.close()
f6.close()
Explanation: So, each movie's overview gets represented by a 1x1365 dimensional vector.
Now, we are ready for the kill. Our data is cleaned, hypothesis is set (Overview can predict movie genre), and the feature/output vectors are prepped. Let's train some models!
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X)
X_tfidf.shape
Explanation: Congratulations, we have our data set ready!
A note : As we are building our own dataset, and I didn't want you to spend all your time waiting for poster image downloads to finish, I am working with an EXTREMELY small dataset. That is why, the results we will see for the deep learning portion will not be spectacular as compared to conventional machine learning methods. If you want to see the real power, you should spend some more time scraping something of the order of 100,000 images, as opposed to 1000 odd like I am doing here. Quoting the paper I mentioned above - MORE DATA BEATS A CLEVERER ALGORITHM.
As the TA, I saw that most teams working on the project had data of the order of 100,000 movies. So, if you want to extract the power of these models, consider scraping a larger dataset than me.
Section 5 - Non-deep, Conventional ML models with above data
Here is a layout of what we will be doing -
We will implement two different models
We will decide a performance metric i.e. a quantitative method to be sure about how well difference models are doing.
Discussion of the differences between the models, their strengths, weaknesses, etc.
As discussed earlier, there are a LOT of implementation decisions to be made. Between feature engineering, hyper-parameter tuning, model selection and how interpretable do you want your model to be (Read : Bayesian vs Non-Bayesian approaches) a lot is to be decided. For example, some of these models could be:
Generalized Linear Models
SVM
Shallow (1 Layer, i.e. not deep) Neural Network
Random Forest
Boosting
Decision Tree
Or go more bayesian:
- Naive Bayes
- Linear or Quadratic Discriminant Analysis
- Bayesian Hierarchical models
The list is endless, and not all models will make sense for the kind of problem you have framed for yourself. Think about which model best fits for your purpose.
For our purposes here, I will be showing the example of 2 very simple models, one picked from each category above -
SVM
Multinomial Naive Bayes
A quick overview of the whole pipeline coming below:
A little bit of feature engineering
2 different Models
Evaluation Metrics chosen
Model comparisons
Let's start with some feature engineering.
Engineering the right features depends on 2 key ideas. Firstly, what is it that you are trying to solve? For example, if you want to guess my music preferences and you try to train a super awesome model while giving it what my height is as input features, you're going to have no luck. On the other hand, giving it my Spotify playlist will solve the problem with any model. So, CONTEXT of the problem plays a role.
Second, you can only represent based on the data at hand. Meaning, if you didn't have access to my Spotify playlist, but to my Facebook statuses - You know all my statuses about Harvard may not be useful. But if you represent me as my Facebook statuses which are YouTube links, that would also solve the problem. So, AVAILABILITY OF DATA at hand is the second factor.
A nice way to think of it is to think that you start with the problem at hand, but design features constrained by the data you have available. If you have many independent features that each correlate well with the class, learning is easy. On the other hand, if the class is a very complex function of the features, you may not be able to learn it.
In the context of this problem, we would like to predict the genre of a movie. what we have access to - movie overviews, which are text descriptions of the movie plot. The hypothesis makes sense, overview is a short description of the story and the story is clearly important in assigning genres to movies.
So, let's improve our features by playing with the words in the overviews in our data. One interesting way is to go back to what we discussed earlier - TF-IDF. We originally used it to filter words, but we can also assign the tf-idf values as "importance" values to words, as opposed to treating them all equally. Tf-idf simply tries to assign a weightage to each word in the bag of words.
Once again, the way it works is - Most movie descriptions have the word "The" in it. Obviously, it doesn't tell you anything special about it. So weightage should be inversely proportional to how many movies have the word in their description. This is the IDF part.
On the other hand, for the movie interstellar, if the description has the word Space 5 times, and wormhole 2 times, then it's probably more about Space than about wormhole. Thus, space should have a high weightage. This is the TF part.
We simply use TF-IDF to assign a weightage to every word in the bag of words. Which makes sense, right? :)
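A tiny illustration of this weighting on toy documents (not our movie data):
~~~~
from sklearn.feature_extraction.text import TfidfVectorizer
toy_docs = ["the matrix is a computer simulation",
            "the dog chased the cat",
            "the cat sat on the mat"]
toy_tfidf = TfidfVectorizer()
toy_tfidf.fit(toy_docs)
# "the" appears in every document, so it gets the smallest idf (weight multiplier),
# while words that appear in only one document, like "computer", get the largest.
for word, idf in sorted(zip(toy_tfidf.get_feature_names(), toy_tfidf.idf_), key=lambda p: p[1]):
    print word, idf
~~~~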
End of explanation
msk = np.random.rand(X_tfidf.shape[0]) < 0.8
X_train_tfidf=X_tfidf[msk]
X_test_tfidf=X_tfidf[~msk]
Y_train=Y[msk]
Y_test=Y[~msk]
positions=range(len(movies_with_overviews))
# print positions
test_movies=np.asarray(positions)[~msk]
# test_movies
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import f1_score
from sklearn.metrics import make_scorer
from sklearn.metrics import classification_report
parameters = {'kernel':['linear'], 'C':[0.01, 0.1, 1.0]}
gridCV = GridSearchCV(SVC(class_weight='balanced'), parameters, scoring=make_scorer(f1_score, average='micro'))
classif = OneVsRestClassifier(gridCV)
classif.fit(X_train_tfidf, Y_train)
predstfidf=classif.predict(X_test_tfidf)
print classification_report(Y_test, predstfidf)
Explanation: Let's divide our X and Y matrices into train and test split. We train the model on the train split, and report the performance on the test split. Think of this like the questions you do in the problem sets v/s the exam. Of course, they are both (assumed to be) from the same population of questions. And doing well on Problem Sets is a good indicator that you'll do well in exams, but really, you must test before you can make any claims about you knowing the subject.
End of explanation
genre_list=sorted(list(Genre_ID_to_name.keys()))
predictions=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predstfidf[i]
# print movie_label_scores
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictions.append(pred_genres)
import pickle
f=open('classifer_svc','wb')
pickle.dump(classif,f)
f.close()
for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
print 'MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictions[i])
Explanation: As you can see, the performance is by and large poorer for movies which are less represented like War and animation, and better for categories like Drama.
Numbers aside, let's look at our model's predictions for a small sample of movies from our test set.
End of explanation
from sklearn.naive_bayes import MultinomialNB
classifnb = OneVsRestClassifier(MultinomialNB())
classifnb.fit(X[msk].toarray(), Y_train)
predsnb=classifnb.predict(X[~msk].toarray())
import pickle
f2=open('classifer_nb','wb')
pickle.dump(classifnb,f2)
f2.close()
predictionsnb=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predsnb[i]
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictionsnb.append(pred_genres)
for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
print 'MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictionsnb[i])
Explanation: Let's try our second model? The naive bayes model.
End of explanation
def precision_recall(gt,preds):
TP=0
FP=0
FN=0
for t in gt:
if t in preds:
TP+=1
else:
FN+=1
for p in preds:
if p not in gt:
FP+=1
if TP+FP==0:
precision=0
else:
precision=TP/float(TP+FP)
if TP+FN==0:
recall=0
else:
recall=TP/float(TP+FN)
return precision,recall
precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictions[i])
precs.append(a)
recs.append(b)
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictionsnb[i])
precs.append(a)
recs.append(b)
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
Explanation: As can be seen above, the results seem promising, but how do we really compare the two models? We need to quantify our performance so that we can say which one's better. This takes us back to what we discussed right in the beginning - we're learning a function $g$ which can approximate the original unknown function $f$. For some values of $x_i$, the predictions will be wrong for sure, and we want to minimize the number of such mistakes.
For multi label systems, we often keep track of performance using "Precision" and "Recall". These are standard metrics, and you can google to read up more about them if you're new to these terms.
Evaluation Metrics
We will use the standard precision recall metrics for evaluating our system.
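As a quick worked example of what the precision_recall() helper above computes: if the ground-truth genres of a movie are ['Action', 'Drama'] and we predict ['Action', 'Comedy', 'Thriller'], exactly one of the three predictions is correct and one of the two true genres is recovered, so precision = 1/3 and recall = 1/2.
~~~~
print precision_recall(['Action', 'Drama'], ['Action', 'Comedy', 'Thriller'])
# (0.333..., 0.5)
~~~~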
End of explanation
# Loading the list of movies we had downloaded posters for eariler -
f=open('poster_movies.pckl','r')
poster_movies=pickle.load(f)
f.close()
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
import pickle
model = VGG16(weights='imagenet', include_top=False)
allnames=os.listdir(poster_folder)
imnames=[j for j in allnames if j.endswith('.jpg')]
feature_list=[]
genre_list=[]
file_order=[]
print "Starting extracting VGG features for scraped images. This will take time, Please be patient..."
print "Total images = ",len(imnames)
failed_files=[]
succesful_files=[]
i=0
for mov in poster_movies:
i+=1
mov_name=mov['original_title']
mov_name1=mov_name.replace(':','/')
poster_name=mov_name.replace(' ','_')+'.jpg'
if poster_name in imnames:
img_path=poster_folder+poster_name
#try:
img = image.load_img(img_path, target_size=(224, 224))
succesful_files.append(poster_name)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x)
        print features.shape
file_order.append(img_path)
feature_list.append(features)
genre_list.append(mov['genre_ids'])
if np.max(np.asarray(feature_list))==0.0:
print('problematic',i)
if i%250==0 or i==1:
print "Working on Image : ",i
# except:
# failed_files.append(poster_name)
# continue
else:
continue
print "Done with all features, please pickle for future use!"
len(genre_list)
len(feature_list)
print type(feature_list[0])
feature_list[0].shape
# Reading from pickle below, this code is not to be run.
list_pickled=(feature_list,file_order,failed_files,succesful_files,genre_list)
f=open('posters_new_features.pckl','wb')
pickle.dump(list_pickled,f)
f.close()
print("Features dumped to pickle file")
f7=open('posters_new_features.pckl','rb')
list_pickled=pickle.load(f7)
f7.close()
# (feature_list2,file_order2)=list_pickled
Explanation: The average precision and recall scores for our samples are pretty good! Models seem to be working! Also, we can see that Naive Bayes outperforms the SVM. I strongly suggest you go read about Multinomial Naive Bayes and think about why it works so well for "Document Classification", which is very similar to our case, as every movie overview can be thought of as a document we are assigning labels to.
Section 6 - Deep Learning : an intuitive overview
The above results were good, but it's time to bring out the big guns. So first and foremost, let's get a very short idea about what deep learning is. This is for people who don't have a background in this - it's high level and gives just the intuition.
As described above, the two most important concepts in doing good classification (or regression) are 1) using the right representation, which captures the right information about the data that is relevant to the problem at hand, and 2) using the right model, which has the capability of making sense of the representation fed to it.
While for the second part we have complicated and powerful models that we have studied at length, we don't seem to have a principled, mathematical way of doing the first part - i.e. representation. What we did above was to see "What makes sense", and go from there. That is not a good approach for complex data/ complex problems. Is there some way to automate this? Deep Learning, does just this.
To just emphasize the importance of representation in the complex tasks we usually attempt with Deep Learning, let me talk about the original problem which made it famous. The paper is often referred to as the "Imagenet Challenge Paper", and it was basically working on object recognition in images. Let's try to think about an algorithm that tries to detect a chair.
If I ask you to "Define" a chair, how would you? - Something with 4 legs?
<img src="files/chair1.png" height="400" width="400">
<h3><center>All are chairs, none with 4 legs. (Pic Credit: Zoya Bylinskii)</center></h3>
How about some surface that we sit on then?
<img src="files/chair2.png" height="400" width="400">
<h3><center>All are surfaces we sit on, none are chairs. (Pic Credit: Zoya Bylinskii)</center></h3>
Clearly, these definitions won't work and we need something more complicated. Sadly, we can't come up with a simple text rule that our computer can search for! And we take a more principled approach.
The "Deep" in the deep learning comes from the fact that it was conventionally applied to Neural Networks. Neural Networks, as we all know, are structures organized in layers. Layers of computations. Why do we need layers? Because these layers can be seen as sub-tasks that we do in the complicated task of identifying a chair. It can be thought as a heirarchical break down of a complicated job into smalled sub-tasks.
Mathematically, each layer acts like a space transformation which takes the pixel values to a high dimensional space. When we start out, every pixel in the image is given equal importance in our matrix. With each layer, convolution operations give some parts more importance, and some lesser importance. In doing so, we transform our images to a space in which similar looking objects/object parts are closer (We are basically learning this space transformation in deep learning, nothing else)
What exactly was learnt by these neural networks is hard to know, and an active area of research. But one very crude way to visualize what it does is to think like this - It starts by learning very generic features in the first layer. Something as simple as vertical and horizontal lines. In the next layer, it learns that if you combine the vectors representing vertical and horizontal lines in different ratios, you can make all possible slanted lines. The next layer learns to combine lines to form curves - say, something like the outline of a face. These curves come together to form 3D objects. And so on. Building sub-modules, combining them in the right way which can give it semantics.
So, in a nutshell, the first few layers of a "Deep" network learn the right representation of the data, given the problem (which is mathematically described by your objective function trying to minimize difference between ground truth and predicted labels). The last layer simply looks how close or far apart things are in this high dimensional space.
Hence, we can give any kind of data a high dimensional representation using neural networks. Below we will see high dimensional representations of both words in overviews (text) and posters (image). Let's get started with the posters i.e. extracting visual features from posters using deep learning.
Section 7 - Deep Learning for predicting genre from poster
Once again, we must make an implementation decision. This time, it has more to do with how much time we are willing to spend in return for added accuracy. We are going to use here a technique that is commonly referred to as Pre-Training in Machine Learning Literature.
Instead of me trying to re-invent the wheel here, I am going to borrow this short section on pre-training from Stanford University's lecture on <a href='http://cs231n.github.io/transfer-learning/'> CNN's</a>. To quote -
''In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest. ''
There are three broad ways in which transfer learning or pre-training can be done. (The 2 concepts are different, and to understand the difference clearly, I suggest you read the linked lecture thoroughly). The way we are going to go about it is by using a pre-trained, released ConvNet as a feature extractor. Take a ConvNet pretrained on ImageNet (a popular object detection dataset), and remove the last fully-connected layer. After removing the last layer, what we have is just another neural network i.e. a stack of space transformations. But, originally the output of this stack can be pumped into a single layer which can classify the image into categories like Car, Dog, Cat and so on.
What this means, is that in the space this stack transforms the images to, all images which contain a "dog" are closer to each other, and all images containing a "cat" are closer. Thus, it is a meaningful space where images with similar objects are closer.
Think about it, now if we pump our posters through this stack, it will embed them in a space where posters which contain similar objects are closer. This is a very meaningful feature engineering method! While this may not be ideal for genre prediction, it might be quite meaningful. For example, all posters with a gun or a car are probably action. While a smiling couple would point to romance or drama. The alternative would be to train the CNN from scratch which is fairly computationally intensive and involves a lot of tricks to get the CNN training to converge to the optimal space tranformation.
This way, we can start off with something strong, and then build on top. We pump our images through the pre-trained network to extract the visual features from the posters. Then, using these features as descriptors for the image, and genres as the labels, we train a simpler neural network from scratch which learns to do simply classification on this dataset. These 2 steps are exactly what we are going to do for predicting genres from movie posters.
Deep Learning to extract visual features from posters
The basic question we are answering here is: can we use the posters to predict genre? First check - does this hypothesis make sense? Yes. Because that's what graphic designers do for a living. They leave visual cues to semantics. They make sure that when we look at the poster of a horror movie, we know it's not a happy image. Things like that. Can our deep learning system infer such subtleties? Let's find out!
For Visual features, either we can train a deep neural network ourselves from scratch, or we can use a pre-trained one made available to us from the Visual Geometry Group at Oxford University, one of the most popular methods. This is called the VGG-net. Or as they call it, we will extract the VGG features of an image. Mathematically, as mentioned, it's just a space transformation in the form of layers. So, we simply need to perform this chain of transformations on our image, right? Keras is a library that makes it very easy for us to do this. Some other common ones are Tensorflow and PyTorch. While the latter two are very powerful and customizable and used more often in practice, Keras makes it easy to prototype by keeping the syntax simple.
We will be working with Keras to keep things simple in code, so that we can spend more time understanding and less time coding. Some common ways people refer to this step are - "Getting the VGG features of an image", or "Forward Propogating the image through VGG and chopping off the last layer". In keras, this is as easy as writing 4 lines.
End of explanation
(feature_list,files,failed,succesful,genre_list)=list_pickled
Explanation: Training a simple neural network model using these VGG features.
End of explanation
(a,b,c,d)=feature_list[0].shape
feature_size=a*b*c*d
feature_size
Explanation: Let's first get the labels for our 1342 samples! As the image download fails in a few instances, the best way to work with the right data is to read the poster names that were actually downloaded, and work from there. These posters cannot be uploaded to Github as they are too large, and so are being downloaded and read from my local computer. If you do re-do it, you might have to check and edit the paths in the code to make sure it runs.
End of explanation
np_features=np.zeros((len(feature_list),feature_size))
for i in range(len(feature_list)):
feat=feature_list[i]
reshaped_feat=feat.reshape(1,-1)
np_features[i]=reshaped_feat
# np_features[-1]
X=np_features
from sklearn.preprocessing import MultiLabelBinarizer
mlb=MultiLabelBinarizer()
Y=mlb.fit_transform(genre_list)
Y.shape
Explanation: This looks odd - why are we re-running the loop we ran above again below? The reason is simple: one of the most important things to know about numpy is that stacking with vstack() and hstack() repeatedly is highly sub-optimal. When a numpy array is created, a fixed-size block of memory is allocated for it, and every stack operation copies the whole array into a newly allocated location. This makes the code really, really slow. The best way to do it (and this remains the same with MATLAB matrices if you work with them) is to create a numpy array of zeros, and over-write it row by row. The above code was just to see what size numpy array we will need!
The final movie poster set for which we have all the information we need, is 1265 movies. In the above code we are making an X numpy array containing the visual features of one image per row. So, the VGG features are reshaped to be in the shape (1,25088) and we finally obtain a matrix of shape (1265,25088)
End of explanation
visual_problem_data=(X,Y)
f8=open('visual_problem_data_clean.pckl','wb')
pickle.dump(visual_problem_data,f8)
f8.close()
f8=open('visual_problem_data_clean.pckl','rb')
visual_features=pickle.load(f8)
f8.close()
(X,Y)=visual_features
X.shape
mask = np.random.rand(len(X)) < 0.8
X_train=X[mask]
X_test=X[~mask]
Y_train=Y[mask]
Y_test=Y[~mask]
X_test.shape
Y_test.shape
Explanation: Our binarized Y numpy array contains the binarized labels corresponding to the genre IDs of the 1277 movies
End of explanation
# Y_train[115]
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
model_visual = Sequential([
Dense(1024, input_shape=(25088,)),
Activation('relu'),
Dense(256),
Activation('relu'),
Dense(19),
Activation('sigmoid'),
])
opt = optimizers.rmsprop(lr=0.0001, decay=1e-6)
#sgd = optimizers.SGD(lr=0.05, decay=1e-6, momentum=0.4, nesterov=False)
model_visual.compile(optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: Now, we create our own keras neural network to use the VGG features and then classify movie genres. Keras makes this super easy.
Neural network architectures have gotten complex over the years. But the simplest ones contain very standard computations organized in layers, as described above. Given the popularity of some of these, Keras makes it as easy as writing out the names of these operations in a sequential order. This way you can make a network while completely avoiding the Mathematics (HIGHLY RECOMMENDED SPENDING MORE TIME ON THE MATH THOUGH)
Sequential() allows us to make models the follow this sequential order of layers. Different kinds of layers like Dense, Conv2D etc can be used, and many activation functions like RELU, Linear etc are also available.
Important Question : Why do we need activation functions?
Copy pasting the answer I wrote for this question on <a href='https://www.quora.com/Why-do-neural-networks-need-an-activation-function/answer/Spandan-Madan?srid=5ydm'>Quora</a> Feel free to leave comments there.
""Sometimes, we tend to get lost in the jargon and confuse things easily, so the best way to go about this is getting back to our basics.
Don’t forget what the original premise of machine learning (and thus deep learning) is - IF the input and output are related by a function y=f(x), then if we have x, there is no way to exactly know f unless we know the process itself. However, machine learning gives you the ability to approximate f with a function g, and the process of trying out multiple candidates to identify the function g best approximating f is called machine learning.
Ok, that was machine learning, and how is deep learning different? Deep learning simply tries to expand the possible kind of functions that can be approximated using the above mentioned machine learning paradigm. Roughly speaking, if the previous model could learn say 10,000 kinds of functions, now it will be able to learn say 100,000 kinds (in actuality both are infinite spaces but one is larger than the other, because maths is cool that ways.)
If you want to know the mathematics of it, go read about VC dimension and how more layers in a network affect it. But I will avoid the mathematics here and rely on your intuition to believe me when I say that not all data can be classified correctly into categories using a linear function. So, we need our deep learning model to be able to approximate more complex functions than just a linear function.
Now, let’s come to your non linearity bit. Imagine a linear function y=2x+3, and another one y=4x+7. What happens if I pool them and take an average? I get another linear function y= 3x+5. So instead of doing those two computations separately and then averaging it out, I could have just used the single linear function y=3x+5. Obviously, this logic holds good if I have more than 2 such linear functions. This is exactly what will happen if you don’t have have non-linearities in your nodes, and also what others have written in their answers.
It simply follows from the definition of a linear function -
(i) If you take two linear functions, AND
(ii)Take a linear combination of them (which is how we combine the outputs of multiple nodes of a network)
You are BOUND to get a linear function, because f(x) + g(x) = (mx + b) + (nx + c) = (m+n)x + (b+c) = h(x), say.
And you could in essence replace your whole network by a simple matrix transformation which accounts for all linear combinations and up/downsamplings.
In a nutshell, you’ll only be trying to learn a linear approximation for original function f relating the input and the output. Which as we discussed above, is not always the best approximation. Adding non-linearities ensures that you can learn more complex functions by approximating every non-linear function as a LINEAR combination of a large number of non-linear functions.
Still new to the field, so if there’s something wrong here please comment below! Hope it helps""
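A tiny numeric illustration of that argument (toy numbers, unrelated to our model) - stacking two linear layers collapses into a single linear map, while putting a ReLU between them does not:
~~~~
import numpy as np
x = np.array([-3.0, 0.0, 3.0])
layer1 = 2 * x + 3                           # first "layer": y = 2x + 3
stacked = 4 * layer1 + 7                     # second linear layer on top: identical to the single map 8x + 19
with_relu = 4 * np.maximum(0, layer1) + 7    # ReLU in between: no single mx + b reproduces this
print stacked      # [-5. 19. 43.]
print with_relu    # [ 7. 19. 43.]
~~~~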
Let's train our model then, using the features we extracted from VGG net
The model we will use has just 2 small hidden layers between the VGG features and the final output layer - about the simplest neural network you can get. An image goes into this network with the dimensions (1,25088); the first hidden layer's output is 1024 dimensional and the second's is 256 dimensional, each followed by a pointwise RELU activation. This then gets transformed into the output layer of 19 dimensions, which goes through a sigmoid.
The sigmoid, or the squashing function as it is often called, is a function which squashes numbers between 0 and 1. What are you reminded of when you think of numebers between 0 and 1? Right, probability.
By squashing the score of each of the 19 output labels between 0 and 1, sigmoid lets us interpret their scores as probabilities. Then, we can just pick the classes with the top 3 or 5 probability scores as the predicted genres for the movie poster! Simple!
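Here is a short sketch of that last step; the scores below are made up, just to show the mechanics of picking the top 3 genres from the sigmoid outputs:
~~~~
import numpy as np
scores = np.array([0.05, 0.81, 0.10, 0.64, 0.33])   # hypothetical sigmoid outputs for 5 genres
top_3 = np.argsort(scores)[-3:]                      # indices of the 3 highest-scoring genres
print top_3                                          # [4 3 1]
~~~~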
End of explanation
model_visual.fit(X_train, Y_train, epochs=10, batch_size=64,verbose=1)
model_visual.fit(X_train, Y_train, epochs=50, batch_size=64,verbose=0)
Explanation: We train the model using the fit() function. The parameters it takes are - training features and training labels, epochs, batch_size and verbose.
Simplest one - verbose. 0 = "don't print anything as you work", 1 = "inform me as you go".
Often the data set is too large to be loaded into the RAM. So, we load data in batches. For batch_size=32 and epochs=10, the model starts loading rows from X in batches of 32 everytime it calculates the loss and updates the model. It keeps on going till it has covered all the samples 10 times.
So, the no. of times model is updated = (Total Samples/Batch Size) * (Epochs)
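For example (hypothetical numbers): with 1000 training samples, batch_size=64 and epochs=10, the weights get updated roughly 160 times:
~~~~
samples, batch_size, epochs = 1000, 64, 10
updates_per_epoch = (samples + batch_size - 1) // batch_size   # 16 batches per epoch (ceiling division)
total_updates = updates_per_epoch * epochs                     # 160 weight updates in total
print total_updates
~~~~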
End of explanation
Y_preds=model_visual.predict(X_test)
sum(sum(Y_preds))
Explanation: For the first 10 epochs I trained the model in a verbose fashion to show you what's happening. After that, in the below cell you can see I turned off the verbosity to keep the code cleaner.
End of explanation
f6=open('Genredict.pckl','rb')
Genre_ID_to_name=pickle.load(f6)
f6.close()
sum(Y_preds[1])
sum(Y_preds[2])
genre_list=sorted(list(Genre_ID_to_name.keys()))
precs=[]
recs=[]
for i in range(len(Y_preds)):
row=Y_preds[i]
gt_genres=Y_test[i]
gt_genre_names=[]
for j in range(19):
if gt_genres[j]==1:
gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
top_3=np.argsort(row)[-3:]
predicted_genres=[]
for genre in top_3:
predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
(precision,recall)=precision_recall(gt_genre_names,predicted_genres)
precs.append(precision)
recs.append(recall)
if i%50==0:
print "Predicted: ",','.join(predicted_genres)," Actual: ",','.join(gt_genre_names)
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
Explanation: Let's look at some of our predictions?
End of explanation
from gensim import models
# model2 = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model2 = models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
Explanation: So, even with just the poster i.e. visual features we are able to make great predictions! Sure, text outperforms the visual features, but the important thing is that it still works. In more complicated models, we can combine the two to make even better predictions. That is precisely what I work on in my research.
These models were trained on CPUs, and a simple 1 layer model was used to show that there is a lot of information in this data that the models can extract. With a larger dataset and more training, I was able to bring these numbers to as high as 70%, which is similar to the textual features. Some teams in my class outperformed this even more. More data is the first thing you should try if you want better results. Then, you can start playing with training on GPUs, learning rate schedules and other hyperparameters. Finally, you can consider using ResNet, a much more powerful neural network model than VGG. All of these can be tried once you have a working knowledge of machine learning.
Section 8 - Deep Learning to get Textual Features
Let's do the same thing as above with text now?
We will use an off the shelf representation for words - Word2Vec model. Just like VGGnet before, this is a model made available to get a meaningful representation. As the total number of words is small, we don't even need to forward propagate our sample through a network. Even that has been done for us, and the result is stored in the form of a dictionary. We can simply look up the word in the dictionary and get the Word2Vec features for the word.
You can download the dictionary from here - https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit <br>
Download it to the directory of this tutorial i.e. in the same folder as this ipython notebook.
End of explanation
print model2['king'].shape
print model2['dog'].shape
Explanation: Now, we can simply look up a word in the loaded model. For example, to get the Word2Vec representation of the word "king" we just do - model2['king']
End of explanation
final_movies_set = movies_with_overviews
len(final_movies_set)
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')
movie_mean_wordvec=np.zeros((len(final_movies_set),300))
movie_mean_wordvec.shape
Explanation: This way, we can represent the words in our overviews using this word2vec model, and then use that as our X representation. So, instead of counts of words, we are using a representation based on the semantics of each word. Mathematically, each word goes from being just a few characters long to a dense 300-dimensional vector!
For the same set of movies above, let's try and predict the genres from the deep representation of their overviews!
End of explanation
genres=[]
rows_to_delete=[]
for i in range(len(final_movies_set)):
mov=final_movies_set[i]
movie_genres=mov['genre_ids']
genres.append(movie_genres)
overview=mov['overview']
tokens = tokenizer.tokenize(overview)
stopped_tokens = [k for k in tokens if not k in en_stop]
count_in_vocab=0
s=0
if len(stopped_tokens)==0:
rows_to_delete.append(i)
genres.pop(-1)
# print overview
# print "sample ",i,"had no nonstops"
else:
for tok in stopped_tokens:
if tok.lower() in model2.vocab:
count_in_vocab+=1
s+=model2[tok.lower()]
if count_in_vocab!=0:
movie_mean_wordvec[i]=s/float(count_in_vocab)
else:
rows_to_delete.append(i)
genres.pop(-1)
# print overview
# print "sample ",i,"had no word2vec"
len(genres)
mask2=[]
for row in range(len(movie_mean_wordvec)):
if row in rows_to_delete:
mask2.append(False)
else:
mask2.append(True)
X=movie_mean_wordvec[mask2]
X.shape
Y=mlb.fit_transform(genres)
Y.shape
textual_features=(X,Y)
f9=open('textual_features.pckl','wb')
pickle.dump(textual_features,f9)
f9.close()
# textual_features=(X,Y)
f9=open('textual_features.pckl','rb')
textual_features=pickle.load(f9)
f9.close()
(X,Y)=textual_features
X.shape
Y.shape
mask_text=np.random.rand(len(X))<0.8
X_train=X[mask_text]
Y_train=Y[mask_text]
X_test=X[~mask_text]
Y_test=Y[~mask_text]
Explanation: Text needs some pre-processing before we can train the model. The only preprocessing we do here is - we delete commonly occurring words which we know are not informative about the genre. Think of it as the clutter in some sense. These words are often removed and are referred to as "stop words". You can look them up online. These include simple words like "a", "and", "but", "how", "or" and so on. They can be easily removed using the python package NLTK.
From the above dataset, movies whose overviews contain only stop words, or contain no words with a word2vec representation, are discarded. The rest are used to build our mean word2vec representation. Simply put, for every movie overview -
Take movie overview
Throw out stop words
For non stop words:
If in word2vec - take its word2vec representation which is 300 dimensional
If not - throw word
For each movie, calculate the arithmetic mean of the 300 dimensional vector representations for all words in the overview which weren't thrown out
This mean becomes the 300 dimensional representation for the movie. For all movies, these are stored in a numpy array. So the X matrix becomes (1263,300). And, Y is (1263,20) i.e. binarized 20 genres, as before
Why do we take the arithmetic mean?
If you feel that we should have kept all the words separately - then you're thinking correctly, but sadly we're limited by the way current-day neural networks work. I will not dwell on this for fear of stressing too much on an otherwise irrelevant detail. But if you're interested, read this awesome paper -
https://jiajunwu.com/papers/dmil_cvpr.pdf
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Activation
model_textual = Sequential([
Dense(300, input_shape=(300,)),
Activation('relu'),
Dense(19),
Activation('softmax'),
])
model_textual.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model_textual.fit(X_train, Y_train, epochs=10, batch_size=500)
model_textual.fit(X_train, Y_train, epochs=10000, batch_size=500,verbose=0)
score = model_textual.evaluate(X_test, Y_test, batch_size=249)
print("%s: %.2f%%" % (model_textual.metrics_names[1], score[1]*100))
Y_preds=model_textual.predict(X_test)
genre_list.append(10769)
print "Our predictions for the movies are - \n"
precs=[]
recs=[]
for i in range(len(Y_preds)):
row=Y_preds[i]
gt_genres=Y_test[i]
gt_genre_names=[]
for j in range(19):
if gt_genres[j]==1:
gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
top_3=np.argsort(row)[-3:]
predicted_genres=[]
for genre in top_3:
predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
(precision,recall)=precision_recall(gt_genre_names,predicted_genres)
precs.append(precision)
recs.append(recall)
if i%50==0:
print "Predicted: ",predicted_genres," Actual: ",gt_genre_names
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
Explanation: Once again, we use a very similar, super simple architecture as before.
End of explanation |
2,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Updated TOC trends analysis (part 2)
This notebook begins to explore trends in the broader water chemistry dataset collated by Heleen, Don and John for the updated TOC analysis. Note that the results presented here should be considered preliminary as there are still some outstanding database "cleaning" tasks that I haven't found time for yet. The reason I'm jumping ahead of myself here is because I suspect the best way to identify any further data issues is to attempt the trend analysis and see what problems crop up.
My original plan - as agreed with Heleen, Don and John in May - was to tidy up the data as much as possible and then simply re-run Tore's code for the trends calculations. Unfortunately, although I've managed to find most of the necessary code (either within RESA2 or as VBA projects within linked Access databases), I have been unable to locate some of the crucial subroutines, including the ones for the bulk of the statistical analysis. In addition, as far as I can tell, the code relies on an old, third-party Excel macro for the statistical calculations, and using this involves a lot of data shuffling (first from RESA2 to Excel, then into Access and finally back into RESA2), which is quite slow and difficult to keep track of.
What I have found in RESA2 is a table called ICPW_STATISTICS3, which appears to store the output of Tore's analysis. I think my best option is therefore to recode the whole thing from scratch, and then compare my results with those in Tore's table to make sure what I'm doing is more-or-less compatible with what's been done before. This will involve digging into the internals of RESA2, which will hopefully be a good learning experience for me as well.
1. Trend analysis code
This section provides an overview of the code I've written for the updated trends analysis. The code was initially developed in an earlier iPython notebook and later moved into a .py file to allow the trends functionality to be imported into other notebooks. This Python file is here
Step1: 2.2. Read previous results from RESA2
As an example, we'll extract some of Tore's results for one of the Czech sites (station_id = 23499) for the period from 1990 to 2004.
Step2: 2.3. Run new code
The above table shows output from the previous analysis for the period from 1990 to 2004. The code below runs my new trend analysis for all of the Czech sites (project_name = 'ICPW_TOCTRENDS_2015_CZ') for the same period and then prints just the results for site 23499 for comparison.
Step3: The results from my code are not exactly the same as the output produced by Tore, but for all practical purposes I'd say the differences are negligible. This is actually pretty surprising given the amount of second-guessing and reverse-engineering that went into developing my code.
3. Analysis of the full TOC trends dataset
The next big question is whether my code will upscale effectively to the full TOC dataset. The RESA2 application usually crashes if a user tries to extract data for more than one large project at a time, but I'm hoping that by bypassing RESA2 and communicating directly with the underlying Oracle instance, my code will not be affected by this issue.
This section attempts to run the trends analysis on the full TOC dataset. Initially I'll use the data for all years and I'll also generate plots for each series (which I'll return to later). The projects to be included have been previously agreed with Heleen - see [section 3 of this notebook](http
Step4: 3.1. Results
It's good to see the algorithm manages to process all the sites in one go. The table below shows the first 10 rows of the output, but it's the warning messages that I'm most interested in at the moment.
Step5: 3.2. Data issues
3.2.1. Limited Al data
Many of the sites have limited Al data. This isn't a database error, but it could have implications for our ability to detect trends for this parameter.
3.2.2. Sites with no data
Step6: So there are 178 Swedish sites (out of 261 in total) that have no data whatsoever for the parameters $SO_4$, $Cl$, $Ca$, $Mg$, $NO_3$, $TOC$ or $Al$. It seems strange to have these included in a project focussing on TOC trends! Perhaps they're included due to having data for other useful parameters? Let's check.
Step7: So it appears that more than two-thirds of the Swedish sites in the database have no data at all. Is this expected, or has something gone wrong here?
3.2.3. Duplicated values
Some duplicate values are expected, because at some sites samples are taken at a variety of depths. My algorithm currently only selects samples from the upper 1 m of the water column, but this occasionally spans two depth measurements (usually 0 and 0.5 m). For the moment, it seems reasonable to average these values (and in most cases they are near-identical anyway), but check this with Heleen and modify the code if necessary.
Step8: Not all the duplicates can be explained by measuring at multiple depths, though, and occasionally the repeated values are significantly different. As an example, consider records for site 28970 (which corresponds to site code NF02YH0013, in Newfoundland, Canada).
Step9: There are lots of duplciated records here, and the values are not the same. Let's have a look at the database results for the first water sample in this series (the one from 29/06/1990), where we apparently have repeated measurements for NO3-N and TOC.
Step10: Note that all these records have the same sample_id, which means the problem is not related to samples being entered into the database more than once. Instead, the problem seems to be due to entering multiple methods for the same parameter
Step11: In section 3, I generated a very large number of plots. Only some of them are relevant, so to save storage space online I'll delete the ones that aren't relevant to the data in vis_df.
Step12: I've uploaded the remaining plots to a location online, which means I can link to them from the map visualisation. To do this, I need to insert the link to each plot as a new column in vis_df. I also want to add a column to specify the correct Google Fusion Tables marker symbol according to the nature of the trend and, finally, I'd like to capitalise the trend column to improve the formating on my finsihed map. The whole thing then needs to be saved as a CSV. | Python Code:
# Import custom functions
# Connect to db
resa2_basic_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template'
r'\useful_resa2_code.py')
resa2_basic = imp.load_source('useful_resa2_code', resa2_basic_path)
engine, conn = resa2_basic.connect_to_resa2()
# Import code for trends analysis
resa2_trends_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Python\toc_trends_analysis.py')
resa2_trends = imp.load_source('toc_trends_analysis', resa2_trends_path)
Explanation: Updated TOC trends analysis (part 2)
This notebook begins to explore trends in the broader water chemistry dataset collated by Heleen, Don and John for the updated TOC analysis. Note that the results presented here should be considered preliminary as there are still some outstanding database "cleaning" tasks that I haven't found time for yet. The reason I'm jumping ahead of myself here is because I suspect the best way to identify any further data issues is to attempt the trend analysis and see what problems crop up.
My original plan - as agreed with Heleen, Don and John in May - was to tidy up the data as much as possible and then simply re-run Tore's code for the trends calculations. Unfortunately, although I've managed to find most of the necessary code (either within RESA2 or as VBA projects within linked Access databases), I have been unable to locate some of the crucial subroutines, including the ones for the bulk of the statistical analysis. In addition, as far as I can tell, the code relies on an old, third-party Excel macro for the statistical calculations, and using this involves a lot of data shuffling (first from RESA2 to Excel, then into Access and finally back into RESA2), which is quite slow and difficult to keep track of.
What I have found in RESA2 is a table called ICPW_STATISTICS3, which appears to store the output of Tore's analysis. I think my best option is therefore to recode the whole thing from scratch, and then compare my results with those in Tore's table to make sure what I'm doing is more-or-less compatible with what's been done before. This will involve digging into the internlas of RESA2, which will hopefuily be a good learning experience for me as well.
1. Trend analysis code
This section provides an overview of the code I've written for the updated trends analysis. The code was initially developed in an earlier iPython notebook and later moved into a .py file to allow the trends functionality to be imported into other notebooks. This Python file is here:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Python\toc_trends_analysis.py
NB: The code in the Python file is slightly different and more sophisticated than in the original notebook. Use the Python file (not the notebook) as the basis for any further developments.
To run an analysis, the user must provide a list of valid RESA2 project names, plus start and end years for the period of interest. Optionally, it is possible to specify a folder where trend plots for each parameter will be stored. The code then performs the following operations:
Identifies all the monitoring stations associated with the specified projects. <br><br>
Generates a list of water samples and sampling dates for all these stations within the period of interest. If some stations have no samples within the specified period, a warning message is printed and the list of stations with no data is included in the output.
Note that RESA2 sometimes includes measurements for samples collected at several different water depths. The code currently only selects near-surface water samples (depth $\leq 1 \; m$). Check this with Heleen. <br><br>
Extracts the data for the key trends parameters (currently 'SO4', 'Cl', 'Ca', 'Mg', 'NO3-N', 'TOC' and 'Al', but this need amending - see below) for the selected water samples and converts the raw values from the units and methods originally specified into the RESA2's "standard" units (as specified in the RESA2.PARAMETER_DEFINITIONS table). <br><br>
Checks for duplicate measurements of the same parameter at the same sampling point and date. If duplicates are found, the values are averaged and a warning message is printed. A list of duplicated records is then included in the output to facilitate data checking. <br><br>
Sampled values are aggregated from the native data collection interval to annual frequency by taking medians. <br><br>
For the parameters 'SO4', 'Cl', 'Mg', 'Ca' and 'NO3-N', concentrations are recalculated as $\mu eq/l$ (denoted by the prefix 'E' e.g. ESO4).
$$EPAR \; (\mu eq/l) = \frac{10^6 * valency}{molar \; mass \; (g/mol)} * PAR \; (g/l)$$ <br><br>
Sea-salt corrections are then applied for the parameters 'ESO4', 'EMg' and 'ECa' (denoted by the suffix 'X', e.g. ESO4X).
$$EPARX = EPAR_{sample} - \left[ \left( \frac{EPAR}{ECl} \right)_{ref} * ECl_{sample} \right]$$
**NB:** I'm not sure what reference ratios were used in the original analysis. I've attempted to back-calculate them from RESA2, and it looks as though a single value has been assumed worldwide for each parameter? **Check this with Heleen**. Also, **see section 4.1.2 of the [code development notebook](http://nbviewer.jupyter.org/url/www.googledrive.com/host/0BximeC_RweaeUy1jd2k3Nm1kdms/updated_toc_trends_analysis.ipynb) for more details**. <br><br>
Calculates combined parameters e.g. ESO4_ECl is calculated as $(ESO4 + ECl)$ etc.
At present, the code generates annual time series for the following parameters:
ESO4 ($\mu eq/l$)
ESO4X ($\mu eq/l$)
ECl ($\mu eq/l$)
ESO4_ECl ($\mu eq/l$)
ECa_EMg ($\mu eq/l$)
ECaX_EMgX ($\mu eq/l$)
ENO3 ($\mu eq/l$)
ESO4_ECl_ENO3 ($\mu eq/l$)
TOC ($mg C/l$)
Al ($mg/l$)
NB: It looks as though Tore's code calculates a few additional parameters as well:
ANC
ALK
HPLUS
ENO3_DIV_ENO3_ESO4X
These will be easy to add, but I'd like to check exactly which parameters are of interest and how they should be calculated before including them. <br><br>
Performs statistical analysis for each parameter at each site for the period specified. The output includes the following summary statistics:
Period over which data are available (i.e. start and end years, which are not always the same as the period originally specified)
Number of non-missing values
Median of data values
Mean of data values
Standard deviation of data values
Standard deviation expected under the null hypothesis of the Mann-Kendall (M-K) test
M-K statistic
Normalised M-K statistic $\left(= \frac{M-K \; statistic}{Expected \; standard \; deviation} \right)$
M-K p-value
Sen's slope (a.k.a. the Theil-Sen slope). Optionally, a plot of the trend line can be produced.
Note that the algorithm uses the normal approximation to estimate the variance of the S parameter in the M-K test (and thereby the significance of any trends). This approximation is only robust when the number of non-null data points in the time series is $\geq 10$. If the number of non-missing values is fewer than 10, the code prints a warning message to indicate significance estimates may be unreliable. <br><br>
The output from the algorithm consists of (i) a dataframe of summary statistics for each parameter at each site over the period of interest; (ii) a list of stations with no data in the period specified and (iii) a list of duplicated values (where the database contains two measurements of the same parameter at the same location on the same day). If the plot option is set to True when the function is called, the code will also output (iv) plots of the Theil-Sen regression line for each parameter, saved to the specified output folder.
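To make the statistical core of the above more tangible, here is a minimal, self-contained sketch of the Mann-Kendall test (normal approximation, ignoring the tie correction) and the Sen's slope for a single annual series. It is illustrative only and is not the code in toc_trends_analysis.py, which also handles ties, missing years and the other bookkeeping described above.
import numpy as np
from scipy.stats import norm
def mann_kendall_sen(y):
    """Toy Mann-Kendall test and Sen's slope for a 1-D series of annual values."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # M-K statistic: sum of signs over all pairs (later value minus earlier value)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the null hypothesis of no trend (no tie correction)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected normalised statistic and two-sided p-value
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    # Sen's slope: median of all pairwise slopes
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return s, z, p, np.median(slopes)
# e.g. a gently increasing annual series
print(mann_kendall_sen([4.1, 4.3, 4.2, 4.6, 4.5, 4.9, 5.0, 4.8, 5.2, 5.3]))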
2. Illustrative example and testing
This section illustrates how the code can be used to estimate trends. As a basic test, I'll start by reading some results from Tore's ICPW_STATISTICS3 table, which I believe stores the output from his statistical analyses. I'll then run my code for the same stations and time periods to make sure the results are comparable.
NB: My earlier notebook focussing on the code development includes some more rigorous testing.
2.1. Import modules and connect to database
End of explanation
# Get results for test sites from RESA2
sql = ("SELECT * FROM resa2.icpw_statistics3 "
"WHERE station_id = 23499 "
"AND period = '1990-2004'")
old_df = pd.read_sql(sql, engine)
# Get just the cols to compare to my output
old_df = old_df[['station_id', 'parameter', 'period', 'nonmiss',
'average', 'median', 'stdev', 'test_stat',
'mk_stat', 'mkp', 'senslope']]
old_df.head(14).sort_values(by='parameter')
Explanation: 2.2. Read previous results from RESA2
As an example, we'll extract some of Tore's results for one of the Czech sites (station_id = 23499) for the period from 1990 to 2004.
End of explanation
# Specify projects of interest
proj_list = ['ICPW_TOCTRENDS_2015_CZ',]
# Run analysis for the period 1990 - 2004
new_df, dup_df, no_data_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=1990, end_yr=2004)
# Delete mk_std_dev col as not relevant here
del new_df['mk_std_dev']
new_df.head(14).sort_values(by='par_id')
Explanation: 2.3. Run new code
The above table shows output from the previous analysis for the period from 1990 to 2004. The code below runs my new trend analysis for all of the Czech sites (project_name = 'ICPW_TOCTRENDS_2015_CZ') for the same period and then prints just the results for site 23499 for comparison.
End of explanation
# Specify projects of interest
proj_list = ['ICPW_TOCTRENDS_2015_CA_ATL',
'ICPW_TOCTRENDS_2015_CA_DO',
'ICPW_TOCTRENDS_2015_CA_ICPW',
'ICPW_TOCTRENDS_2015_CA_NF',
'ICPW_TOCTRENDS_2015_CA_QU',
'ICPW_TOCTRENDS_2015_CZ',
'ICPW_TOCTRENDS_2015_Cz2',
'ICPW_TOCTRENDS_2015_FI',
'ICPW_TOCTRENDS_2015_NO',
'ICPW_TOCTRENDS_2015_SE',
'ICPW_TOCTRENDS_2015_UK',
'ICPW_TOCTRENDS_2015_US_LTM',
'ICPWaters Ca']
# Folder for saving PNG plots
plot_fold=r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Trends_Plots'
# Run analysis
res_df, dup_df, no_data_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=None, end_yr=None,
plot=True, fold=plot_fold)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
Explanation: The results from my code are not exactly the same as the output produced by Tore, but for all practical purposes I'd say the differences are negligible. This is actually pretty surprising given the amount of second-guessing and reverse-engineering that went into developing my code.
3. Analysis of the full TOC trends dataset
The next big question is whether my code will upscale effectively to the full TOC dataset. The RESA2 application usually crashes if a user tries to extract data for more than one large project at a time, but I'm hoping that by bypassing RESA2 and communicating directly with the underlying Oracle instance, my code will not be affected by this issue.
This section attempts to run the trends analysis on the full TOC dataset. Initially I'll use the data for all years and I'll also generate plots for each series (which I'll return to later). The projects to be included have been previously agreed with Heleen - see [section 3 of this notebook](http://nbviewer.jupyter.org/url/www.googledrive.com/host/0BximeC_RweaeUy1jd2k3Nm1kdms/toc_trends_2015_data_cleaning.ipynb#3.-Site-properties-(location,-land-use-and-elevation) for details.
End of explanation
res_df.head(10)
Explanation: 3.1. Results
It's good to see the algorithm manages to process all the sites in one go. The table below shows the first 10 rows of the output, but it's the warning messages that I'm most interested in at the moment.
End of explanation
# Get properties for sites with no data
# Basic station properties
sql = ('SELECT station_id, station_code, station_name '
'FROM resa2.stations '
'WHERE station_id in %s'
% str(tuple(no_data_df['station_id'].values)))
na_stns = pd.read_sql(sql, engine)
# Get country for each station
sql = ('SELECT station_id, value '
'FROM resa2.stations_par_values '
'WHERE station_id in %s '
'AND var_id = 261'
% str(tuple(no_data_df['station_id'].values)))
co_df = pd.read_sql(sql, engine)
# Decode special characters from `windows-1252` encoding to unicode
na_stns['station_name'] = na_stns['station_name'].str.decode('windows-1252')
# Join
na_stns = pd.merge(na_stns, co_df, how='left',
on='station_id')
na_stns.columns = ['station_id', 'station_code', 'station_name', 'country']
print 'Number of sites with no data:', len(no_data_df)
print 'Sites with no data come from the following countries:'
print list(na_stns.country.unique())
na_stns.head()
Explanation: 3.2. Data issues
3.2.1. Limited Al data
Many of the sites have limited Al data. This isn't a database error, but it could have implications for our ability to detect trends for this parameter.
3.2.2. Sites with no data
End of explanation
# Find ANY parameters associated with the 178 Swedish sites
sql = ('SELECT * '
'FROM resa2.water_samples '
'WHERE station_id in %s'
% str(tuple(no_data_df['station_id'].values)))
swed_df = pd.read_sql(sql, engine)
print 'Number of samples associated with the 178 Swedish sites (for ALL parameters):', len(swed_df)
Explanation: So there are 178 Swedish sites (out of 261 in total) that have no data whatsoever for the parameters $SO_4$, $Cl$, $Ca$, $Mg$, $NO_3$, $TOC$ or $Al$. It seems strange to have these included in a project focussing on TOC trends! Perhaps they're included due to having data for other useful parameters? Let's check.
End of explanation
print 'Total number of duplicated records:', len(dup_df)
dup_df.head(10)
Explanation: So it appears that more than two-thirds of the Swedish sites in the database have no data at all. Is this expected, or has something gone wrong here?
3.2.3. Duplicated values
Some duplicate values are expected, because at some sites samples are taken at a variety of depths. My algorithm currently only selects samples from the upper 1 m of the water column, but this occasionally spans two depth measurements (usually 0 and 0.5 m). For the moment, it seems reasonable to average these values (and in most cases they are near-identical anyway), but check this with Heleen and modify the code if necessary.
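Purely as an illustration (a hypothetical dataframe, not the real extraction code), collapsing such near-surface duplicates by averaging could look like this:
import pandas as pd
# Hypothetical duplicate: the same station, date and parameter sampled at 0 m and 0.5 m
df = pd.DataFrame({'station_id': [23499, 23499],
                   'sample_date': ['1995-06-01', '1995-06-01'],
                   'par': ['TOC', 'TOC'],
                   'value': [4.2, 4.3]})
# Keep a single (averaged) value per station/date/parameter combination
deduped = df.groupby(['station_id', 'sample_date', 'par'], as_index=False)['value'].mean()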
End of explanation
# Get the duplicated values for this site
dup_df.query('station_id == 28970').head(10)
Explanation: Not all the duplicates can be explained by measuring at multiple depths, though, and occasionally the repeated values are significantly different. As an example, consider records for site 28970 (which corresponds to site code NF02YH0013, in Newfoundland, Canada).
End of explanation
# Get all methods applied to sample(s) from 29/06/1990 at station 28970
sql = ("SELECT value_id, sample_id, method_id, value, flag1 "
"FROM resa2.water_chemistry_values2 "
"WHERE sample_id IN (SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id = 28970 "
"AND sample_date = DATE '1990-06-29')")
df = pd.read_sql(sql, engine)
df
Explanation: There are lots of duplicated records here, and the values are not the same. Let's have a look at the database results for the first water sample in this series (the one from 29/06/1990), where we apparently have repeated measurements for NO3-N and TOC.
End of explanation
# Extract subset of results to visualise
par_str = str(('ESO4', 'ESO4X', 'ESO4X',
'ECl', 'ESO4_ECl', 'ECa_EMg',
'ECaX_EMgX', 'ENO3', 'ESO4_ECl_ENO3',
'TOC', 'Al'))
vis_df = res_df.query("(par_id in %s) and (non_missing >= 10)" % par_str)
print 'Total number of stations to visualise:', len(vis_df['station_id'].unique())
vis_df.head()
# Join in station details
# Basic station properties
sql = ('SELECT station_id, station_code, station_name, latitude, longitude '
'FROM resa2.stations '
'WHERE station_id in %s'
% str(tuple(vis_df['station_id'].unique())))
vis_stns = pd.read_sql(sql, engine)
# Get country for each station
sql = ('SELECT station_id, value '
'FROM resa2.stations_par_values '
'WHERE station_id in %s '
'AND var_id = 261'
% str(tuple(vis_df['station_id'].unique())))
co_df = pd.read_sql(sql, engine)
# Decode special characters fro `windows-1252` encoding to unicode
vis_stns['station_name'] = vis_stns['station_name'].str.decode('windows-1252')
# Join
vis_stns = pd.merge(vis_stns, co_df, how='left',
on='station_id')
vis_stns.columns = ['station_id', 'station_code', 'station_name',
'latitude', 'longitude', 'country']
# Join to stats output
vis_df = pd.merge(vis_df, vis_stns, how='left',
on='station_id')
# Reorder
vis_df = vis_df[['station_id', 'station_code', 'station_name', 'country',
'latitude', 'longitude', 'par_id', 'period', 'non_missing',
'mean', 'median', 'std_dev', 'mk_stat', 'norm_mk_stat',
'mk_p_val', 'trend', 'sen_slp']]
vis_df.head()
Explanation: Note that all these records have the same sample_id, which means the problem is not related to samples being entered into the database more than once. Instead, the problem seems to be due to entering multiple methods for the same parameter: method_id = 10265 corresponds to NO3-N concentrations measured in $\mu g/l$, whereas method_id = 10308 refers to NO3-N in $mg/l$. As the data is extracted for the trends analysis, my code automatically converts all NO3-N measurements into $\mu g/l$, which is why we end up with duplicate values of 25 and 110 in the trends results (see above). The puzzle is where do these values come from in the first place? $110 \; \mu g/l$ is, after all, a lot more than $25 \; \mu g/l$.
Looking at the database log, the value of $0.110 \; mg/l$ was uploaded on 16/02/2006, whereas the value of $25 \; \mu g/l$ was added on 16/11/2015. The only raw data I can find on the network for this location is here:
K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Database\2015 DOC analysis\data delivery\CA\Couture\ICP Waters form for water chemistry_ATL_NF.xlsx
which gives the nitrate concentration as $25 \; \mu g/l$ (i.e. consistent with the most recent database upload). At present, I can't identify where the older values have come from, and I'm reluctant to delete them without further clarification. Ask Heleen about this, but for now I'll just stick with averaging duplicated values because I have no obvious way of choosing which one(s) are correct.
4. Data visualisation
I'd like to try producing a Google map to visualise the results of the statistical analysis. If I automate this as much as possible, it should be easy to re-run the code once I'm happy with the basic input data.
I'll start off by limiting the dataset so that we're only visualising the following parameters (and only in cases where there are more than 10 data values in the series):
ESO4 (μeq/l)
ESO4X (μeq/l)
ESO4X (μeq/l)
ECl (μeq/l)
ESO4_ECl (μeq/l)
ECa_EMg (μeq/l)
ECaX_EMgX (μeq/l)
ENO3 (μeq/l)
ESO4_ECl_ENO3 (μeq/l)
TOC (mgC/l)
Al (mg/l)
End of explanation
# Delete unnecessary plots
# Folder path
png_fold = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Trends_Plots'
# Paths to keep
keep_list = []
for rec in vis_df.itertuples():
stn_id = rec.station_id
par_id = rec.par_id
period = rec.period
keep_path = os.path.join(png_fold,
'%s_%s_%s.png' % (int(stn_id), par_id, period))
keep_list.append(keep_path)
# Get a list of files in folder
search_path = os.path.join(png_fold, '*.png')
file_list = glob.glob(search_path)
# Loop over files and delete where necessary
del_list = []
for file_path in file_list:
if file_path not in keep_list:
os.remove(file_path)
Explanation: In section 3, I generated a very large number of plots. Only some of them are relevant, so to save storage space online I'll delete the ones that aren't relevant to the data in vis_df.
End of explanation
# Capitalise 'trend' column
vis_df['trend'] = vis_df['trend'].str.capitalize()
# Join in text for GFT markers
mark_df = pd.DataFrame({'trend':['Increasing', 'No trend', 'Decreasing'],
'symbol':['small_red', 'small_yellow', 'small_green']})
vis_df = pd.merge(vis_df, mark_df, how='left',
on='trend')
# Build the link path
link_stem = r'http://www.googledrive.com/host/0BximeC_RweaebnFoa2VWcGtRWms/'
vis_df['link'] = (link_stem + vis_df['station_id'].map(int).map(str) + '_' +
vis_df['par_id'] + '_' + vis_df['period'] + '.png')
# Save
out_csv = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Trends_Plots\vis_data.csv'
vis_df.to_csv(out_csv, index=False, encoding='windows-1252')
Explanation: I've uploaded the remaining plots to a location online, which means I can link to them from the map visualisation. To do this, I need to insert the link to each plot as a new column in vis_df. I also want to add a column to specify the correct Google Fusion Tables marker symbol according to the nature of the trend and, finally, I'd like to capitalise the trend column to improve the formatting on my finished map. The whole thing then needs to be saved as a CSV.
End of explanation |
2,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Tutorial
Now you are ready to start creating your own AutoML text entity extraction model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step11: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step12: Now save the unique dataset identifier for the Dataset resource instance you created.
Step13: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text entity extraction data.
Text examples must be stored in a JSONL file. Unlike text classification and sentiment analysis, a CSV index file is not supported.
The examples must be either inline text or reference text files that are in Cloud Storage buckets.
JSONL
For text entity extraction, the JSONL file has a few requirements
Step14: Quick peek at your data
You will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
Step15: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following
Step16: Train the model
Now train an AutoML text entity extraction model using your Vertex Dataset resource. To train the model, do the following steps
Step17: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step18: Now save the unique identifier of the training pipeline you created.
Step19: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter
Step20: Deployment
Training the above model may take upwards of 120 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step21: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step22: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step23: Now get the unique identifier for the Endpoint resource you created.
Step24: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step25: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step26: Make a online prediction request
Now do a online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
Step27: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step28: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters
Step29: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML text entity extraction model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create text entity extraction models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_extraction_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_extraction_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML text entity extraction model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("biomedical-" + TIMESTAMP, DATA_SCHEMA)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
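Purely for illustration (the helper above simply blocks on result()), polling the operation by hand with the methods listed above could look like this:
# Hypothetical polling loop; `dataset` is an aip.Dataset instance as constructed in the helper above
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
while not operation.done():
    print("still running ...")
    time.sleep(10)
result = operation.result()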
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/ucaip_ten_dataset.jsonl"
Explanation: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text entity extraction data.
Text examples must be stored in a JSONL file. Unlike text classification and sentiment analysis, a CSV index file is not supported.
The examples must be either inline text or reference text files that are in Cloud Storage buckets.
JSONL
For text entity extraction, the JSONL file has a few requirements:
Each data item is a separate JSON object, on a separate line.
The key/value pair text_segment_annotations is a list of character start/end positions in the text per entity with the corresponding label.
display_name: The label.
start_offset/end_offset: The character offsets of the start/end of the entity.
The key/value pair text_content is the text.
{'text_segment_annotations': [{'end_offset': value, 'start_offset': value, 'display_name': label}, ...], 'text_content': text}
Note: The dictionary key fields may alternatively be in camelCase. For example, 'display_name' can also be 'displayName'.
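For example, a single made-up line (not taken from the NCBI dataset) annotating the word "anemia" as a Disease entity might look like the following; the offsets here are assumed to be zero-based with an exclusive end offset, so double-check the schema for the exact convention:
{"text_segment_annotations": [{"start_offset": 31, "end_offset": 37, "display_name": "Disease"}], "text_content": "The patient was diagnosed with anemia last year."}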
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
You will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
End of explanation
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The Vertex fully qualified identifier of the Dataset resource to import the data into.
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
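For instance, if the annotations were split across several JSONL files, gcs_sources would simply list them all (the bucket paths below are made up):
gcs_sources = [
    "gs://my-bucket/annotations/part-1.jsonl",  # made-up paths for illustration
    "gs://my-bucket/annotations/part-2.jsonl",
]
import_data(dataset_id, gcs_sources, LABEL_SCHEMA)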
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML text entity extraction model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the pipeline client service's method create_training_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
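For example, to hold out a little more data for validation and testing than the fractions used below, the input_data_config portion could look like this (a sketch):
input_data_config = {
    "dataset_id": dataset_short_id,  # the short numeric ID saved earlier
    "fraction_split": {
        "training_fraction": 0.7,
        "validation_fraction": 0.15,
        "test_fraction": 0.15,
    },
}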
End of explanation
PIPE_NAME = "biomedical_pipe-" + TIMESTAMP
MODEL_NAME = "biomedical_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"multi_label": False,
"budget_milli_node_hours": 8000,
"model_type": "CLOUD",
"disable_early_stopping": False,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
multi_label: Whether True/False this is a multi-label (vs single) classification.
budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
model_type: The type of deployed model:
CLOUD: For deploying to Google Cloud.
disable_early_stopping: Whether (True/False) to disable early stopping, that is, whether to make AutoML train for the entire budget rather than stop early once it judges the model will no longer improve.
Finally, you create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the information for just this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 120 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance, using the field model_to_upload.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confusionMatrix", metrics["confusionMatrix"])
print("confidenceMetrics", metrics["confidenceMetrics"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably only have one -- the function prints all the key names for each metric in the evaluation, and for a small set (confusionMatrix and confidenceMetrics) it prints the result.
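As a rough post-processing sketch (the exact keys depend on the metrics schema, so treat the field names here as assumptions), you could pull the per-threshold values out of confidenceMetrics like this:
metrics = json_format.MessageToDict(evaluation._pb.metrics)  # evaluation as in the loop above
for cm in metrics.get("confidenceMetrics", []):
    # each entry typically carries a confidence threshold with precision/recall values
    print(cm.get("confidenceThreshold"), cm.get("precision"), cm.get("recall"))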
End of explanation
ENDPOINT_NAME = "biomedical_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the endpoint, the fixed number of compute instances is provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
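For example, the scaling choice made here feeds into the later deployment request as a dictionary like this (a sketch of the mapping only):
automatic_resources = {
    "min_replica_count": MIN_NODES,  # compute instances kept provisioned
    "max_replica_count": MAX_NODES,  # upper bound under load
}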
End of explanation
DEPLOYED_NAME = "biomedical_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deployed_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the id of an existing model deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
automatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary, and it can be a bit confusing at first. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it receive only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
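For instance, a 90/10 split between the existing v1 and the newly deployed v2 could be expressed like this (the deployed model id below is a made-up value):
traffic_split = {
    "0": 10,                    # "0" is the model being deployed in this request (v2)
    "4321987654321098765": 90,  # deployed model id of the existing v1 (made up)
}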
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
Explanation: Make a online prediction request
Now do a online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": data}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
return response
response = predict_item(test_item, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
data: The text content of the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (text items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters.
Request
The format of each instance is:
{ 'content': text_item }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
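For example, to request predictions for two text items in a single call, the instances list would simply contain two dictionaries (second_item is hypothetical):
instances_list = [{"content": test_item}, {"content": second_item}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]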
Response
The response object returns a list, where each element in the list corresponds to the corresponding data item in the request. You will see in the output for each prediction -- in our case there is just one:
prediction: A list of IDs assigned to each entity extracted from the text.
confidences: The confidence level between 0 and 1 for each entity.
display_names: The label name for each entity.
textSegmentStartOffsets: The character start location of the entity in the text.
textSegmentEndOffsets: The character end location of the entity in the text.
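As a rough post-processing sketch (assuming the keys appear as listed above; adjust the indexing if the end offset turns out to be exclusive), you could pair each extracted entity with its text span:
prediction = dict(response.predictions[0])  # response from predict_item above
labels = prediction.get("displayNames", [])
starts = prediction.get("textSegmentStartOffsets", [])
ends = prediction.get("textSegmentEndOffsets", [])
scores = prediction.get("confidences", [])
for label, start, end, score in zip(labels, starts, ends, scores):
    print(label, score, test_item[int(start):int(end) + 1])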
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
2,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 25
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: In the previous chapter we modeled a system with constant angular
velocity.
In this chapter we take the next step, modeling a system with constant angular acceleration and deceleration.
Angular acceleration
Just as linear acceleration is the derivative of velocity, angular
acceleration is the derivative of angular velocity. And just as linear acceleration is caused by force, angular acceleration is caused by the rotational version of force, torque. If you are not familiar with torque, you can read about it at http
Step2: theta_push is the angle where I stop pushing on the turntable.
theta_test is how far the table turns during my test push.
theta_target is where we want the table to be after the second push.
We can use these parameters to compute the moment of inertia of the turntable, using the formula for a horizontal disk revolving around a vertical axis through its center
Step3: We can also compute the moment of inertia of the teapot, treating it as a point mass
Step4: The total moment of inertia is the sum of these parts
Step5: Friction in the bearings probably depends on the weight of the turntable and its contents, but probably does not depend on angular velocity.
So we'll assume that it is a constant.
We don't know what it is, so I will start with a guess, and we will use root_scalar to improve it.
Step6: For this problem we'll treat friction as a torque.
The state variables we'll use are theta, which is the angle of the table in rad, and omega, which is angular velocity in rad/s.
Step7: Now we can make a System with the initial state, init, the maximum duration of the simulation, t_end, and the parameters we are going to vary, force and torque_friction.
Step8: Here's a slope function that takes the current state, which contains angle and angular velocity, and returns the derivatives, angular velocity and angular acceleration
Step9: In this scenario, the force I apply to the turntable is always
perpendicular to the lever arm, so $\sin \theta = 1$ and the torque due
to force is $\tau = r F$.
torque_friction represents the torque due to friction. Because the
turntable is rotating in the direction of positive theta, friction
acts in the direction of negative theta.
We can test the slope function with the initial conditions
Step10: We are almost ready to run the simulation, but first there's a problem we have to address.
Two Phase Simulation
When I stop pushing on the turntable, the angular acceleration changes
abruptly. We could implement the slope function with an if statement
that checks the value of theta and sets force accordingly. And for a coarse model like this one, that might be fine. But a more robust approach is to simulate the system in two phases
Step11: We can test it with the initial conditions.
Step12: And run the first phase of the simulation.
Step13: Here are the last few time steps.
Step14: It takes a little more than a second for me to rotate the table 0.5 rad.
When I release the table, the angular velocity is about 0.87 rad / s.
Before we run the second phase, we have to extract the final time and
state of the first phase.
Step15: Now we can make a System object for Phase 2 with the initial state
from Phase 1 and with force=0.
Step16: For the second phase, we need an event function that stops when the
turntable stops; that is, when angular velocity is 0.
Step17: We'll test it with the initial conditions for Phase 2.
Step18: Now we can run the second phase.
Step19: DataFrame provides append, which appends results2 to the end of
results1.
Step20: Here are the last few time steps.
Step21: At the end, angular velocity is close to 0, and the total rotation is about 1.7 rad, a little farther than we were aiming for.
We can plot theta for both phases.
Step22: And omega.
Step23: Angular velocity, omega, increases linearly while I am pushing, and decreases linearly after I let go. The angle, theta, is the integral of angular velocity, so it forms a parabola during each phase.
In the next section, we'll use this simulation to estimate the torque
due to friction.
Estimating Friction
Let's take the code from the previous section and wrap it in a function.
Step24: I'll test it with the same parameters.
Step25: These results are the same as in the previous section.
We can use run_two_phases to write an error function we can use, with root_scalar, to find the torque due to friction that yields the
observed results from the first push, a total rotation of 1.5 rad.
Step26: With torque_friction=0.3, the table rotates a bit too far
Step27: With torque_friction=0.4, it doesn't go far enough.
Step28: So we can use those two values as a bracket for root_scalar.
Step29: The result is 0.333 N m, a little less than the initial guess.
Step30: Now that we know the torque due to friction, we can compute the force
needed to rotate the turntable through the remaining angle, that is,
from 1.5 rad to 3.14 rad.
But first, let's animate the results.
Animation
Here's a function that takes the state of the system and draws it.
Step31: This function uses a few features we have not seen before, but you can read about them in the Matplotlib documentation.
Here's what the initial condition looks like.
Step32: And here's an animation of the first push.
Step33: Summary
The example in this chapter demonstrates the concepts of torque, angular acceleration, and moment of inertia.
We used these concepts to simulate a turntable, using a hypothetical observation to estimate torque due to friction.
As an exercise, you can finish off the example, estimating the force needed to rotate the table to a given target angle.
The next chapter describes several case studies you can work on to practice the tools from the last few chapters, including projectiles, rotating objects, root_scalar, and maximize_scalar.
Exercises
Exercise
Step34: Write an error function that takes force and system, simulates the system, and returns the difference between theta_final and the remaining angle after the first push.
Step36: Use your error function and root_scalar to find the force needed for the second push.
Run the simulation with the force you computed and confirm that the table stops at the target angle after both pushes. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 25
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from numpy import pi
radius_disk = 0.5 # m
mass_disk = 7 # kg
radius_pot = 0.4 # m
mass_pot = 0.3 # kg
force = 2 # N
theta_push = 0.5 # radian
theta_test = 1.5 # radian
theta_target = pi # radian
Explanation: In the previous chapter we modeled a system with constant angular
velocity.
In this chapter we take the next step, modeling a system with constant angular acceleration and deceleration.
Angular acceleration
Just as linear acceleration is the derivative of velocity, angular
acceleration is the derivative of angular velocity. And just as linear acceleration is caused by force, angular acceleration is caused by the rotational version of force, torque. If you are not familiar with torque, you can read about it at http://modsimpy.com/torque.
In general, torque is a vector quantity, defined as the cross
product of $\vec{r}$ and $\vec{F}$, where $\vec{r}$ is the lever
arm, a vector from the center of rotation to the point where the force is applied, and $\vec{F}$ is the vector that represents the magnitude and direction of the force.
However, for the problems in this chapter, we only need the magnitude of torque; we don't care about the direction. In that case, we can compute
$$\tau = r F \sin \theta$$
where $\tau$ is torque, $r$ is the length of the lever arm, $F$ is the magnitude of force, and $\theta$ is the angle between $\vec{r}$ and $\vec{F}$.
Since torque is the product of a length and a force, it is expressed in newton meters (Nm).
Moment of inertia
In the same way that linear acceleration is related to force by Newton's second law of motion, $F=ma$, angular acceleration is related to torque by another form of Newton's law:
$$\tau = I \alpha$$
where $\alpha$ is angular acceleration and $I$ is moment of inertia. Just as mass is what makes it hard to accelerate an object, moment of inertia is what makes it hard to spin an object.
In the most general case, a 3-D object rotating around an arbitrary
axis, moment of inertia is a tensor, which is a function that takes a
vector as a parameter and returns a vector as a result.
Fortunately, in a system where all rotation and torque happens around a single axis, we don't have to deal with the most general case. We can treat moment of inertia as a scalar quantity.
For a small object with mass $m$, rotating around a point at distance
$r$, the moment of inertia is $I = m r^2$. For more complex objects, we can compute $I$ by dividing the object into small masses, computing
moments of inertia for each mass, and adding them up.
However, for most simple shapes, people have already done the
calculations; you can just look up the answers. For example, see
http://modsimpy.com/moment.
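As a tiny illustration of that summation idea (the masses and radii below are made up and are not part of this chapter's model):
import numpy as np
masses = np.array([0.1, 0.2, 0.3])  # kg, hypothetical point masses
radii = np.array([0.1, 0.3, 0.5])   # m, distances from the axis of rotation
I_example = np.sum(masses * radii**2)  # total moment of inertia in kg*m**2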
Teapots and turntables
Tables in Chinese restaurants often have a rotating tray or turntable
that makes it easy for customers to share dishes. These turntables are
supported by low-friction bearings that allow them to turn easily and
glide. However, they can be heavy, especially when they are loaded with food, so they have a high moment of inertia.
Suppose I am sitting at a table with a pot of tea on the turntable
directly in front of me, and the person sitting directly opposite asks
me to pass the tea. I push on the edge of the turntable with 2 N of
force until it has turned 0.5 rad, then let go. The turntable glides
until it comes to a stop 1.5 rad from the starting position. How much
force should I apply for a second push so the teapot glides to a stop
directly opposite me?
We'll answer this question in these steps:
I'll use the results from the first push to estimate the coefficient of friction for the turntable.
As an exercise, you'll use that coefficient of friction to estimate the force needed to rotate the turntable through the remaining angle.
Our simulation will use the following parameters:
The radius of the turntable is 0.5 m, and its weight is 7 kg.
The teapot weighs 0.3 kg, and it sits 0.4 m from the center of the turntable.
The following figure shows the scenario, where $F$ is the force I apply to the turntable at the perimeter, perpendicular to the lever arm, $r$, and $\tau$ is the resulting torque. The blue circle near the bottom is the teapot.
Here are the parameters from the statement of the problem:
End of explanation
I_disk = mass_disk * radius_disk**2 / 2
Explanation: theta_push is the angle where I stop pushing on the turntable.
theta_test is how far the table turns during my test push.
theta_target is where we want the table to be after the second push.
We can use these parameters to compute the moment of inertia of the turntable, using the formula for a horizontal disk revolving around a vertical axis through its center:
End of explanation
I_pot = mass_pot * radius_pot**2
Explanation: We can also compute the moment of inertia of the teapot, treating it as a point mass:
End of explanation
I_total = I_disk + I_pot
Explanation: The total moment of inertia is the sum of these parts:
End of explanation
torque_friction = 0.3 # N*m
Explanation: Friction in the bearings probably depends on the weight of the turntable and its contents, but probably does not depend on angular velocity.
So we'll assume that it is a constant.
We don't know what it is, so I will start with a guess, and we will use root_scalar to improve it.
End of explanation
init = State(theta=0, omega=0)
Explanation: For this problem we'll treat friction as a torque.
The state variables we'll use are theta, which is the angle of the table in rad, and omega, which is angular velocity in rad/s.
End of explanation
system = System(init=init,
force=force,
torque_friction=torque_friction,
t_end=20)
Explanation: Now we can make a System with the initial state, init, the maximum duration of the simulation, t_end, and the parameters we are going to vary, force and torque_friction.
End of explanation
def slope_func(t, state, system):
theta, omega = state
force = system.force
torque_friction = system.torque_friction
torque = radius_disk * force - torque_friction
alpha = torque / I_total
return omega, alpha
Explanation: Here's a slope function that takes the current state, which contains angle and angular velocity, and returns the derivatives, angular velocity and angular acceleration:
End of explanation
slope_func(0, system.init, system)
Explanation: In this scenario, the force I apply to the turntable is always
perpendicular to the lever arm, so $\sin \theta = 1$ and the torque due
to force is $\tau = r F$.
torque_friction represents the torque due to friction. Because the
turntable is rotating in the direction of positive theta, friction
acts in the direction of negative theta.
We can test the slope function with the initial conditions:
End of explanation
def event_func1(t, state, system):
theta, omega = state
return theta - theta_push
Explanation: We are almost ready to run the simulation, but first there's a problem we have to address.
Two Phase Simulation
When I stop pushing on the turntable, the angular acceleration changes
abruptly. We could implement the slope function with an if statement
that checks the value of theta and sets force accordingly. And for a coarse model like this one, that might be fine. But a more robust approach is to simulate the system in two phases:
During the first phase, force is constant, and we run until theta is 0.5 radians.
During the second phase, force is 0, and we run until omega is 0.
Then we can combine the results of the two phases into a single
TimeFrame.
Here's the event function I'll use for Phase 1; it stops the simulation when theta reaches theta_push, which is when I stop pushing:
End of explanation
event_func1(0, system.init, system)
Explanation: We can test it with the initial conditions.
End of explanation
results1, details1 = run_solve_ivp(system, slope_func,
events=event_func1)
details1.message
Explanation: And run the first phase of the simulation.
End of explanation
results1.tail()
Explanation: Here are the last few time steps.
End of explanation
t_2 = results1.index[-1]
init2 = results1.iloc[-1]
Explanation: It takes a little more than a second for me to rotate the table 0.5 rad.
When I release the table, the angular velocity is about 0.87 rad / s.
Before we run the second phase, we have to extract the final time and
state of the first phase.
End of explanation
system2 = system.set(t_0=t_2, init=init2, force=0)
Explanation: Now we can make a System object for Phase 2 with the initial state
from Phase 1 and with force=0.
End of explanation
def event_func2(t, state, system):
theta, omega = state
return omega
Explanation: For the second phase, we need an event function that stops when the
turntable stops; that is, when angular velocity is 0.
End of explanation
event_func2(system2.t_0, system2.init, system2)
Explanation: We'll test it with the initial conditions for Phase 2.
End of explanation
results2, details2 = run_solve_ivp(system2, slope_func,
events=event_func2)
details2.message
Explanation: Now we can run the second phase.
End of explanation
results = results1.append(results2)
Explanation: DataFrame provides append, which appends results2 to the end of
results1.
End of explanation
results.tail()
Explanation: Here are the last few time steps.
End of explanation
results.theta.plot(label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: At the end, angular velocity is close to 0, and the total rotation is about 1.7 rad, a little farther than we were aiming for.
We can plot theta for both phases.
End of explanation
results.omega.plot(label='omega', color='C1')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
Explanation: And omega.
End of explanation
def run_two_phases(force, torque_friction, system):
# put the specified parameters into the System object
system1 = system.set(force=force,
torque_friction=torque_friction)
# run phase 1
results1, details1 = run_solve_ivp(system1, slope_func,
events=event_func1)
# get the final state from phase 1
t_2 = results1.index[-1]
init2 = results1.iloc[-1]
# run phase 2
system2 = system1.set(t_0=t_2, init=init2, force=0)
results2, details2 = run_solve_ivp(system2, slope_func,
events=event_func2)
# combine and return the results
results = results1.append(results2)
return results
Explanation: Angular velocity, omega, increases linearly while I am pushing, and decreases linearly after I let go. The angle, theta, is the integral of angular velocity, so it forms a parabola during each phase.
In the next section, we'll use this simulation to estimate the torque
due to friction.
Estimating Friction
Let's take the code from the previous section and wrap it in a function.
End of explanation
force = 2
torque_friction = 0.3
results = run_two_phases(force, torque_friction, system)
results.tail()
Explanation: I'll test it with the same parameters.
End of explanation
def error_func1(torque_friction, system):
force = system.force
results = run_two_phases(force, torque_friction, system)
theta_final = results.iloc[-1].theta
print(torque_friction, theta_final)
return theta_final - theta_test
Explanation: These results are the same as in the previous section.
We can use run_two_phases to write an error function we can use, with root_scalar, to find the torque due to friction that yields the
observed results from the first push, a total rotation of 1.5 rad.
End of explanation
guess1 = 0.3
error_func1(guess1, system)
Explanation: With torque_friction=0.3, the table rotates a bit too far:
End of explanation
guess2 = 0.4
error_func1(guess2, system)
Explanation: With torque_friction=0.4, it doesn't go far enough.
End of explanation
res = root_scalar(error_func1, system, bracket=[guess1, guess2])
Explanation: So we can use those two values as a bracket for root_scalar.
End of explanation
actual_friction = res.root
actual_friction
Explanation: The result is 0.333 N m, a little less than the initial guess.
End of explanation
from matplotlib.patches import Circle
from matplotlib.pyplot import gca, axis
def draw_func(t, state):
theta, omega = state
# draw a circle for the table
circle1 = Circle([0, 0], radius_disk)
gca().add_patch(circle1)
# draw a circle for the teapot
center = pol2cart(theta, radius_pot)
circle2 = Circle(center, 0.05, color='C1')
gca().add_patch(circle2)
axis('equal')
Explanation: Now that we know the torque due to friction, we can compute the force
needed to rotate the turntable through the remaining angle, that is,
from 1.5 rad to 3.14 rad.
But first, let's animate the results.
Animation
Here's a function that takes the state of the system and draws it.
End of explanation
state = results.iloc[0]
draw_func(0, state)
Explanation: This function uses a few features we have not seen before, but you can read about them in the Matplotlib documentation.
Here's what the initial condition looks like.
End of explanation
# animate(results, draw_func)
Explanation: And here's an animation of the first push.
End of explanation
system3 = system.set(torque_friction=actual_friction)
Explanation: Summary
The example in this chapter demonstrates the concepts of torque, angular acceleration, and moment of inertia.
We used these concepts to simulate a turntable, using a hypothetical observation to estimate torque due to friction.
As an exercise, you can finish off the example, estimating the force needed to rotate the table to a given target angle.
The next chapter describes several case studies you can work on to practice the tools from the last few chapters, including projectiles, rotating objects, root_scalar, and maximize_scalar.
Exercises
Exercise: Continuing the example from this chapter, estimate the force that delivers the teapot to the desired position.
Use this System object, with the friction we computed in the previous section.
End of explanation
remaining_angle = theta_target - theta_test
remaining_angle
Explanation: Write an error function that takes force and system, simulates the system, and returns the difference between theta_final and the remaining angle after the first push.
End of explanation
# Solution
def error_func2(force, system):
    """Error function for root_scalar.
    force: hypothetical value
    returns: offset from target value
    """
results = run_two_phases(force, system.torque_friction, system)
theta_final = results.iloc[-1].theta
print(force, theta_final)
return theta_final - remaining_angle
# Solution
guess1 = 2.0
error_func2(guess1, system3)
# Solution
guess2 = 3.0
error_func2(guess2, system3)
# Solution
res = root_scalar(error_func2, system3, bracket=[guess1, guess2])
# Solution
force = res.root
results = run_two_phases(force, actual_friction, system3)
theta_final = results.iloc[-1].theta
theta_final + theta_test
# Solution
theta_target
Explanation: Use your error function and root_scalar to find the force needed for the second push.
Run the simulation with the force you computed and confirm that the table stops at the target angle after both pushes.
End of explanation |
2,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="clearfix" style="padding
Step1: Advanced usage
Step2: Model fitting process
N.B. Model-fitting is optional.
Fitting the three-state model takes approximately 30 seconds and the six-state model takes around 15 minutes.
Step3: Command-based NEURON simulation
Step4: Brian simulation
Step6: Custom stimulus protocol | Python Code:
%matplotlib notebook
from pyrho import *
loadGUI()
Explanation: <div class="clearfix" style="padding: 10px; padding-left: 0px">
<a href="https://github.com/ProjectPyRhO/PyRhO", target="_blank"><img src="https://raw.githubusercontent.com/ProjectPyRhO/Prometheus/master/resources/images/PyRhO_logo_H_crop.png" alt="PyRhO logo" title="PyRhO: A Multiscale Optogenetics Simulation Platform. Logo by Pepe Herrero" width="140px" style="display: inline-block; margin-top: 5px;"></a>
<a href="https://jupyter.org/" target="_blank"><img src="https://raw.githubusercontent.com/jupyter/nature-demo/master/images/jupyter-logo.png" alt="Jupyter logo" title="Jupyter" width="150px" class="pull-right" style="display: inline-block; margin: 0px;"></a>
</div>
Welcome to Prometheus: Modelling as a Service!
Prometheus is a web portal for modelling and simulating rhodopsins. It's a temporary way for you to try out PyRhO (computational tools for optogenetics) in an IPython/Jupyter notebook with absolutely no installation or set-up required. You can find more information about PyRhO in our open-access paper.
This project was conceived of and developed by Benjamin Evans and Konstantin Nikolic in the Bio-modelling group at Imperial College London. The work was kindly supported by the UK BBSRC grant: BB/L018268/1, the BBSRC Impact Acceleration Award and the UK EPSRC grant: EP/N002474/1.
If you find this service useful or have ideas for improvements, all comments are welcome at <a href="mailto:[email protected]?subject=I just tried Prometheus!">[email protected]</a>. For the latest updates, follow us on twitter @ProjectPyRhO!
Run some example Python code or use the web-based GUI!
This Notebook Server was launched just for you with tmpnb! It's a temporary "sandbox" environment where everything is configured and ready to go. Try launching the GUI below or running some of the code examples to see what PyRhO can offer you.
<div class="alert alert-warning" role="alert" style="margin: 10px">
<p>**WARNING**</p>
<p>Don't rely on this server for anything you want to last - your server will be *deleted after 10 minutes of inactivity*.</p>
<p>This is a pilot study running on a server with limited resources so if performance is slow, please try again later.</p>
</div>
If you would like to keep your work, in the File menu select Download as and then choose a format of your choice to save the notebook. If you select Notebook (.ipynb), you can install PyRhO on your computer and continue where you left off.
Quickstart -- Graphical User Interface
To run the code below:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
Feel free to create new cells using the plus button (<button class='fa fa-plus icon-plus btn btn-xs btn-default'></button>), or pressing SHIFT+ENTER while this cell is selected.
End of explanation
# Import the module and set figure rendering to be inline. (You will not see any output yet.)
# If you already used the GUI, this cell does not need to be run.
%matplotlib notebook
from pyrho import *
Explanation: Advanced usage
End of explanation
# Three-state model parameter fitting
initParams = Parameters()
initParams.add_many(
# Name Value Vary Min Max Expr
('g0', 1e5, True, 0.001, 1e6, None),
('phi_m',1e18, True, 1e15, 1e19, None),
('k_a', 5, True, 0.001, 1000, None),
('k_r', 0.1, True, 0.001, 1000, None),
('p', 0.8, True, 0.1, 5, None),
('q', 0.25, True, 0.1, 5, None),
('Gd', 0.1, True, 0.0001, 1, None),
('Gr0', 0.0002, True, 0.0001, 0.1, None),
('E', 0, True, -1000, 1000, None),
('v0', 43, True, -1e15, 1e15, None),
('v1', 17.1, True, -1e15, 1e15, None))
saveData(initParams, 'initParams')
from pyrho.datasets import *
ChR2data = loadChR2()
fitParams = fitModels(ChR2data, nStates=3, params=initParams, postFitOpt=True, relaxFact=2)
# Parameters automatically saved as 'fitted{}sParams'.format(nStates)
# Six-state model parameter fitting. N.B. fitting the six-state model takes around 15 minutes.
initParams = Parameters()
initParams.add_many(
# Name Value Vary Min Max Expr
('g0', 2.5e4, True, 0.0, 1e15, None),
('gam', 0.05, True, 0.00, 1, None),
('phi_m',3.5e17, True, 1e15, 1e19, None),
('k1', 10, True, 0.0, 1000, None),
('k2', 3, True, 0.0, 1000, None),
('p', 1, True, 0.1, 5, None),
('Gf0', 0.04, True, 0.0, 1000, None),
('k_f', 0.1, True, 0.0, 1000, None),
('Gb0', 0.02, True, 0.0, 1000, None),
('k_b', 0.15, True, 0.0, 1000, None),
('q', 1, True, 0.1, 5, None),
('Go1', 2, True, 0.0, 1000, None),
('Go2', 2, True, 0.0, 1000, None),
('Gd1', 0.1, True, 0.0, 1000, None),
('Gd2', 0.01, True, 0.0, 1000, None),
('Gr0', 3.3e-4, True, 0.0, 1000, None),
('E', 0, True, -1000, 1000, None),
('v0', 43, True, -1e15, 1e15, None),
('v1', 17.1, True, -1e15, 1e15, None))
saveData(initParams, 'initParams')
from pyrho.datasets import *
ChR2data = loadChR2()
fitParams = fitModels(ChR2data, nStates=6, params=initParams, postFitOpt=True, relaxFact=2)
# Parameters automatically saved as 'fitted{}sParams'.format(nStates)
Explanation: Model fitting process
N.B. Model-fitting is optional.
Fitting the three-state model takes approximately 30 seconds and the six-state model takes around 15 minutes.
End of explanation
RhO = models['6']()
Prot = protocols['step']()
Prot.phis = [1e16, 1e15, 1e14]
Sim = simulators['NEURON'](Prot, RhO)
Sim.run()
Sim.plot()
Explanation: Command-based NEURON simulation
End of explanation
import brian2 as br
### Define the rhodopsin model
nStates = '6'
origParams = modelFits[nStates]['ChR2'] # Load from pre-fit models
# Alternatively uncomment the line below to use fitted params from above
#origParams = loadData('fitted{}sParams'.format(nStates))
RhO = models[nStates](origParams)
### Define the stimulation protocol
Prot = protocols['step']()
Prot.phis = [1e18]
Prot.Vs = [None]
Prot.cycles = [[150, 100], [200, 75]]
### Define network parameters
from brian2.units import Mohm
from brian2.units.stdunits import ms, mV
N = 80
pConnect = 0.2
psp = 'v_post += 1.5*mV'
delay = 3*ms
netParams = {'tau_m':10*ms, 'R_m':70*Mohm, 'E_m':-70*mV, 'v_t0':-50*mV, 'sigma':10*mV, 't_ref':4*ms}
### Define neuron model
eqRhO = '''dv/dt = ((-I*R_m)+E_m-v)/tau_m + sigma*xi*tau_m**-0.5 : volt''' + RhO.brian_phi_t
eqLIF = '''dv/dt = (E_m-v)/tau_m + sigma*xi*tau_m**-0.5 : volt'''
### Create neuron groups - use Euler-Maruyama method for stochasticity
G0 = br.NeuronGroup(N, eqRhO, threshold='v>v_t0', reset='v=E_m', refractory='t_ref',
namespace=netParams, name='Inputs', method='euler')
G0.v = 'rand()*(v_t0-E_m)+E_m'
G1 = br.NeuronGroup(N/2, eqLIF, threshold='v>v_t0', reset='v=E_m', refractory='t_ref',
namespace=netParams, method='euler')
G1.v = 'rand()*(v_t0-E_m)+E_m'
G2 = br.NeuronGroup(N/4, eqLIF, threshold='v>v_t0', reset='v=E_m', refractory='t_ref',
namespace=netParams, method='euler')
G2.v = 'rand()*(v_t0-E_m)+E_m'
### Create synapses
S1 = br.Synapses(G0, G1, on_pre=psp, delay=delay)
S1.connect(True, p=pConnect)
S2 = br.Synapses(G1, G2, on_pre=psp, delay=delay)
S2.connect(True, p=pConnect)
### Create monitors
monitors = {'states' : br.StateMonitor(G0, RhO.brianStateVars, record=0), # Record states
'I' : br.StateMonitor(G0, 'I', record=0), # Record current
'V' : br.StateMonitor(G0, 'v', record=0), # Record voltage
'spikes' : [br.SpikeMonitor(G0, name='Retina'), # Record spikes
br.SpikeMonitor(G1, name='LGN'),
br.SpikeMonitor(G2, name='V1')]}
### Build the network
net = br.Network(br.collect())
net.add(monitors)
### Run the simulation
Sim = simulators['Brian'](Prot, RhO, simParams['Brian'], net, netParams, monitors)
Sim.run()
Sim.plot()
Explanation: Brian simulation
End of explanation
from scipy.interpolate import InterpolatedUnivariateSpline as spline
RhO = models['6']()
Prot = protocols['custom']()
Prot.phis = [1e15, 1e16]
Sim = simulators['Python'](Prot, RhO)
def pulseGenerator(run, phi, pulse):
    """Custom interpolation function for a step pulse with damped sinusoidal oscillations."""
pStart, pEnd = pulse
t = np.linspace(0, pEnd-pStart, 1001, endpoint=True)
x = phi + 0.5*phi*np.sin(0.2*np.pi*t)*np.exp(-.05*t)
return spline(pStart + t, x, ext=1, k=5)
Prot.phi_ft = pulseGenerator
Sim.run()
Sim.plot()
Explanation: Custom stimulus protocol
End of explanation |
2,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
COVID-19: Modeling the Spread and Analyzing the Data
The goal of this lab is to build a model of COVID-19 infection in the population with the best possible predictive power. Each student may choose, at their own discretion, which factors the model takes into account; at the end of the semester a competition for the best model is held.
Work plan
Study the theory and the available data.
Formulate hypotheses about the relationships between quantities and the laws governing their change.
Build a mathematical model, find analytical solutions, and write simulation code.
Qualitatively compare the model's results with real data.
Estimate the model parameters from real data.
Quantitatively compare the model's predictions with historical data.
Extract practically useful information, for example estimates of the duration of the illness, the reproduction index, the rate at which immunity wanes, trends in these parameters, and scenarios for the further development of the COVID-19 situation.
Propose control parameters to add, such as the introduction of quarantines, and develop an algorithm that makes it possible to bring the spread of COVID-19 under control.
Source data on COVID-19
COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University. Data visualization.
Kouprianov, A. (2021). COVID-19.SPb.monitoring. Monitoring COVID-19 epidemic in St. Petersburg, Russia
Step1: Obtaining the source data
Step2: Since the available data describe only the changes in the sizes of large classes of people, we will analyze them in the spirit of the SIR model.
We group all people into classes
Step3: Preliminary analysis
Some conclusions about the dependencies in the data can be drawn simply by examining the plots.
Let's see what we can find.
Step4: The data we have contain no transitions from class R to class S, which would correspond to the loss of immunity over time.
According to immunologists' estimates, immunity should persist for at least 6 months, and an immune person falls ill with a considerably lower probability, so the R -> S transitions can be neglected during the first wave of COVID-19 infections.
Step5: Модель SIR
Получив общее представление о связи числа болеющих с размером популяции, числом переболевших и другими величинами из анализа исторических данных, графиков и нашего общего понимания развития эпидемий, мы готовы сформулировать математическую модель, воспроизводящую эти зависимости.
Модель фиксирует качественно характер зависимостей, однако может содержать параметры, задающие, например, заразность инфекции, продолжительность инкубационного периода и т.п.
Эти параметры мы позже попробуем востановить по историческим данным.
Для понимания происходящего и предсказания будущего нам нужна модель, описывающая связь между наблюдаемыми величинами.
Всплески заболеваемости и последующее уменьшение числа болеющих можно описать простейшей SIR моделью, в рамках которой вся попопуляция разбивается на группы
Step9: Нерабочие дни вводились с 30 марта по 30 апреля. Именно на это время приходится скачок индекса репродукции. Имея данные за продолжительный промежуток времени, мы можем оценить скорость распространения достаточно точно, однако в первые месяцы эпидемии число заражений было малым, и как мы видим, наивные оценки скорости распространения инфекции дают очень грубый результат, что делает трудным принятие решений. Для более точного оценивания параметров можно воспользоваться вероятностной модель, что однако требует значительно больших усилий.
Согласно грубой оценке, скорость заражения падала с мая по июль, и где-то в середине июня базовый индекс репродукции упал ниже 1, на эти даты приходится конец первой волны эпидемии.
В июле уже заметны признаки начала второй волны.
Количество переболевших не провосходит долей процента от всего населения, поэтому эффективный индекс репродукции практически совпадает с базовым, а значит затухании эпидемии произошло не из-за исчерпания подверженных эпидемии людей, а из-за изменения условий.
Если считать, что вирус значительно не мутировал, то значит скорость распространения вируса уменьшилась из-за ограничения контактов между людьми.
Step10: На графике четко виден период экспоненциального роста числа больных (выглядит как прямая линия, если число больных построить на логарифмической оси).
Когда почти все люди заболевают, период роста сменяется периодом экспоненциального убывания числа больных.
Интересно, что некоторое число людей не успевает заболеть, и в рамках этой модели не заболеет уже никогда.
Step15: Оценка параметров модели
The source data are very noisy, so our estimates of the model parameters from a couple of points are quite rough.
If we assume that the constants did not change over the whole interval under consideration, the parameters can be estimated from the entire data set, which reduces the noise considerably.
For a correct estimate we need some model of the noise.
Consider the simplest model, in which the rate of change of the number of people in each group is given by the SIR model, but at every moment deviations from the model are possible, which are zero on average and mutually independent.
$$
\begin{cases}
\dot S=\frac{dS}{dt} = -\frac{\beta I S}{N}+W_S,\\
\dot I=\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I+W_I,\\
\dot R=\frac{dR}{dt} = \gamma I+W_R.
\end{cases}
$$
For each moment of time $t$ the equations contain their own random variables $W_S$, $W_I$ and $W_R$,
which are independent of each other and of themselves at other values of $t$.
The expectation of all the noises $W_\cdot$ is zero, and for simplicity we assume that
they are all normally distributed with standard deviation $1$.
Then the log-likelihood function equals | Python Code:
# Install the libraries if this has not been done before.
# ! pip3 install seaborn matplotlib numpy pandas
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import urllib.request
import pandas as pd
import seaborn as sns
# Use the default seaborn settings, only set the figure size
sns.set(rc={'figure.figsize':(11, 4)})
Explanation: COVID-19: modeling the spread and analysis
The goal of this lab is to build a model of COVID-19 infection in the population with the best possible predictive power. Each student may choose which factors the model accounts for; at the end of the semester a competition for the best model is held.
Work plan
Study the theory and the available data.
Formulate hypotheses about the relationships between the quantities and the laws governing their change.
Build a mathematical model, find analytical solutions, write simulation code.
Qualitatively compare the model results with real data.
Estimate the model parameters from real data.
Quantitatively compare the model predictions with historical data.
Extract practically useful information, for example estimates of the disease duration, the reproduction number, the rate at which immunity wanes, trends in these parameters, and scenarios for how the COVID-19 situation may develop further.
Propose control parameters, such as the introduction of quarantines, and develop an algorithm that makes it possible to bring the spread of COVID under control.
Source data on COVID-19
COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University Data visualization
Kouprianov, A. (2021). COVID-19.SPb.monitoring. Monitoring COVID-19 epidemic in St. Petersburg, Russia: Data and scripts. Data for the whole of Russia.
Coronavirus (Covid-19) Data in the United States
Our World in Data Monthly excess-mortality data. Data visualization
Oxford Covid-19 Government Response Tracker
Yandex. Coronavirus: dashboard. Data on a map.
Excess mortality during the COVID-19 pandemic
Publications about the data
What the citizens of St. Petersburg should know about the coronavirus epidemic. COVID-19 in St. Petersburg, a summary infographic report. Fontanka: Is the wave rising? What the official COVID statistics in St. Petersburg do not say
Data on the age pyramid
PopulationPyramid.net
Rosstat: Population of the Russian Federation by municipality
End of explanation
# Load the data aggregated by A. Kouprianov.
# URL = 'https://raw.githubusercontent.com/alexei-kouprianov/COVID-19.SPb.monitoring/main/data/SPb.COVID-19.united.csv'
# urllib.request.urlretrieve(URL, 'data/SPb.COVID-19.united.csv')
# Read a local copy of the file
data = pd.read_csv('data/SPb.COVID-19.united.csv', na_values='NA', parse_dates=[0,5], index_col=0)
# Print the column names and types
print(data.dtypes)
# Print the table shape
print(data.shape)
# Print the date range.
print(f"{data.index[0]} -- {data.index[-1]}")
# Visually check that the loaded data are correct
# data
Explanation: Obtaining the source data
End of explanation
(data['ACTIVE.sk']+data['CONFIRMED.sk']-data['RECOVERED.sk']-data['DEATHS.sk']).plot(style='-r', label='Calculated')
(data['ACTIVE.sk'].shift(-1)).plot(style='k', label='Historical')
plt.legend()
plt.show()
# Look at how the numbers of infections and deaths changed over time.
data[['CONFIRMED.sk','CONFIRMED.spb']].plot(subplots=False)
plt.show()
data['DEATHS.sk'].plot(subplots=True)
plt.show()
# The statistics oscillate over the week because of how the data are collected.
# Sum the data over each week, which reduces the noise.
data7 = data.resample('7D').sum()
data7[["CONFIRMED.sk","CONFIRMED.spb"]].plot(subplots=False)
plt.show()
data7["DEATHS.sk"].plot(subplots=False)
plt.show()
# Load the data on total mortality in Russia (measured more accurately than COVID deaths).
# URL = "https://raw.githubusercontent.com/dkobak/excess-mortality/main/russian-data/raw/russia-monthly-deaths-allregions-final.csv"
# urllib.request.urlretrieve(URL, 'data/russia-monthly-deaths.csv')
# Read a local copy of the file
deaths_data = pd.read_csv('data/russia-monthly-deaths.csv', skiprows=0, header=None, ).T
deaths_data.columns = ['year','month'] + list( deaths_data.iloc[0,2:] )
deaths_data.drop(0,inplace=True)
deaths_data['day'] = 15
months = {'январь':1, 'февраль':2, 'март':3, 'апрель':4, 'май':5, 'июнь':6, 'июль':7, 'август':8, 'сентябрь':9, 'октябрь':10, 'ноябрь':11, 'декабрь':12}
deaths_data['month'] = deaths_data['month'].apply(lambda x: months[x])
index = pd.to_datetime( deaths_data[['year','month','day']] )
deaths_data.set_index(index, inplace=True)
deaths_data.drop(columns=['year','month','day'], inplace=True)
for n in deaths_data:
deaths_data[n] = deaths_data[n].astype(np.float32)
# Print the column names and types
# print(deaths_data.dtypes)
# Print the table shape
# print(deaths_data.shape)
# Extract the mortality for St. Petersburg and plot it
spb_deaths_data = deaths_data['г. Санкт-Петербург']
print(spb_deaths_data)
spb_deaths_data.plot()
plt.show()
# spb_deaths_data.groupby(spb_deaths_data.index.year)
Explanation: Since the available data only describe how the sizes of large classes of people change, we will analyse them in the spirit of the SIR model.
We group all people into classes: S - susceptible to the disease, I - infected/ill, R - immune/recovered/deceased.
The number of ill people I is directly available in the historical data in the field ACTIVE.sk; all data are given with a one-day step. The numbers S and R are not directly available, but we do have data on the transitions between classes:
- The field CONFIRMED.sk contains the number of newly infected, i.e. people who moved from class S to class I.
- The field RECOVERED.sk contains the number of recovered people, and the field DEATHS.sk the number of deaths. Their sum equals the number of people who moved from class I to class R.
The value of ACTIVE.sk can in theory be computed from the fields CONFIRMED.sk, RECOVERED.sk and DEATHS.sk; in practice the stored and the computed values may differ slightly.
End of explanation
# Look at how the number of infected relates to mortality.
data7.plot(x="CONFIRMED.sk",y="DEATHS.sk", style='.r')
plt.show()
Explanation: Preliminary analysis
Some conclusions about the dependencies in the data can be drawn just by looking at the plots.
Let us see what we can find.
End of explanation
# Extract the data for the first wave
first_wave = data7.loc[:'2020-09-01']
# Visually check that the death data indeed show a single wave.
first_wave['DEATHS.sk'].plot()
plt.show()
Explanation: The data available to us contain no transitions from class R to class S, which would correspond to loss of immunity over time.
According to immunologists, immunity should last at least 6 months, and an immune person falls ill with a much lower probability, so during the first wave of COVID infections the R -> S transitions can be neglected.
End of explanation
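A note for later: the SIR class defined further below accepts an alpha parameter for exactly this R -> S transition (loss of immunity at rate $\alpha$), in which case the system it integrates becomes
$$
\begin{cases}
\frac{dS}{dt} = -\frac{\beta I S}{N}+\alpha R,\\
\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I,\\
\frac{dR}{dt} = \gamma I-\alpha R.
\end{cases}
$$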
# Extract the quantities described by the SIR model from the available historical data.
# Number of immune people.
R = (first_wave['RECOVERED.sk']+first_wave['DEATHS.sk']).cumsum()
# Total number of people
N = 5384342
# Number of susceptible people
S = N - first_wave['CONFIRMED.sk'].cumsum()
# Number of infected people
I = N - S - R
# Number of deaths per day.
dD = first_wave['DEATHS.sk']/7
# Only a small fraction of the city's residents got infected during the first wave, so S barely changes.
plt.semilogy(S/N, 'y', label='Susceptible')
plt.semilogy(I/N, 'r', label='Infectious')
plt.semilogy(R/N, 'g', label='Recovered')
plt.semilogy(dD/N, 'b', label='Deaths/day')
plt.legend()
plt.show()
plt.semilogx(S/N, R/N)
plt.xlabel('S')
plt.ylabel('R')
plt.show()
# Replacing the derivatives in the SIR equations with finite differences, we can estimate
# the model constants at every moment of time.
# We use central finite differences to estimate the derivative,
# so the new quantities are defined at the midpoints of the intervals between the old samples.
index = first_wave.index.shift(periods=3)[:-1]
# Compute the derivatives.
dS = pd.Series( np.diff(S.to_numpy(), 1)/7, index=index)
dI = pd.Series( np.diff(I.to_numpy(), 1)/7, index=index)
dR = pd.Series( np.diff(R.to_numpy(), 1)/7, index=index)
# Compute the mean values on the intervals.
def midpoint(x): return pd.Series( (x[1:]+x[:-1])/2, index=index)
mS = midpoint(S.to_numpy())
mI = midpoint(I.to_numpy())
mR = midpoint(R.to_numpy())
# Estimate the constants at every moment of time, assuming the data contain no noise.
beta = -dS/mS/mI*N
gamma = dR/mI
rho0 = beta/gamma # Basic reproduction number
rho = rho0*mS/N # Effective reproduction number
R0 = -np.log(S/N)/(R/N)
fig, ax = plt.subplots(1, figsize=(15,5))
ax.plot(beta, 'b', label='beta')
ax.plot(gamma, 'g', label='gamma')
ax.semilogy(rho0, 'k', label='rho0')
ax.semilogy(rho, 'r', label='rho')
ax.semilogy(R0, 'y', label='R0')
ax.semilogy(1+0*rho, '--k', label='threshold=1')
ax.set_xlabel("Time")
ax.legend()
plt.show()
# There certainly was noise, especially in the initial interval when the number of infected was very small.
# In our model even a single patient should recover at a rate of roughly 1/30 per day, so the derivative dI/dt
# is never zero; in practice, however, a recovered patient enters the statistics once, when discharged from hospital,
# or even later.
# If there were 30 patients, then on average one patient would be discharged per day, matching the model prediction:
# 30 times 1/30 of a patient per day = 1 patient per day.
# With a large number of patients the estimate is quite reasonable, but with rare cases the parameter estimates are wild.
Explanation: The SIR model
Having formed a general picture of how the number of infected people relates to the population size, the number of recovered people and other quantities from the analysis of the historical data, the plots and our general understanding of how epidemics develop, we are ready to formulate a mathematical model that reproduces these dependencies.
The model captures the qualitative character of the dependencies, but may contain parameters describing, for example, how contagious the infection is, the length of the incubation period, and so on.
We will later try to recover these parameters from the historical data.
To understand what is happening and to predict the future we need a model describing the relationship between the observed quantities.
Surges of infections followed by a decrease in the number of sick people can be described by the simplest SIR model, in which the whole population is split into groups: S - susceptible to the infection, I - infected, R - immune to the infection.
Within each group people are assumed to react to the infection identically, and it is also assumed that anyone can infect anyone else, i.e. the population is homogeneous.
These assumptions do not quite match reality, but they allow us to formulate a simple model whose predictive accuracy we will try to check.
In the beginning the number of infected people I is very small and everyone belongs to the group S of people at risk of falling ill.
As the infection spreads, people gradually flow from group S into group I, and the flow rate grows both with the number of infected I and with the number of people S who can still be infected.
To a first approximation the infection rate can be taken proportional to the fraction of infected $I/N$ and of those not yet infected $S/N$, with proportionality coefficient $\beta$.
Infected people recover over time and acquire immunity, and in the model we assume that people with immunity do not fall ill.
People with immunity are assigned to group R; the deceased also end up in this group.
In the model we assume, approximately, that a fixed fraction $\gamma$ of the infected recovers per unit of time.
The SIR model is described by the system of differential equations:
$$
\begin{cases}
\frac{dS}{dt} = -\frac{\beta I S}{N},\\
\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I,\\
\frac{dR}{dt} = \gamma I.
\end{cases}
$$
The total number of people $N=S+I+R$ is constant in this model.
According to Rosstat, the population of St. Petersburg on 1 January 2021 was $N=5384342$ people.
The direction in which the number of infected changes is determined by the basic reproduction number $\rho_0=\frac{\beta}{\gamma}$.
From the second equation
$$\frac{dI}{dt} = \gamma I \left(\rho_0\frac{S}{N}-1\right).$$
The quantity $\rho=\rho_0(1-\frac{R+I}{N})=\rho_0\frac{S}{N}$ is called the effective reproduction number
and equals the number of people infected by a single infectious person.
If $\rho<1$, the number of infected decreases and the epidemic subsides.
If $\rho>1$, the number of infected grows exponentially.
Since the effective reproduction number depends on the number of uninfected people,
we can compute the minimal number of immune people needed to prevent the number of infections from growing,
i.e. to reach herd immunity:
$$
\rho<1\quad\Leftrightarrow\quad
1-\frac{1}{\rho_0}<\frac{R+I}{N}.
$$
End of explanation
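To make the threshold condition above concrete, here is a minimal sketch that evaluates the herd-immunity fraction $1-\frac{1}{\rho_0}$ for a few values of the basic reproduction number; the chosen values of rho0 are purely illustrative, not estimates from the data.
# Illustrative sketch: herd-immunity threshold from the condition rho < 1.
# The rho0 values below are assumptions for demonstration, not estimates from the data.
for rho0 in (1.5, 2.5, 4.0):
    threshold = 1.0 - 1.0/rho0
    print(f"rho0 = {rho0:.1f}: at least {100*threshold:.0f}% of the population must be immune ((R+I)/N)")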
# Use the classical Runge-Kutta method RK4 to solve the system of ODEs, see
# https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods
class RungeKutta:
def __init__(self):
pass
def integrate(self, *x, rhs=None, dt=0.01, period=1.):
        """Numerically integrate the system of differential equations:
        dx_1/dt = rhs(x_1, ..., x_n)_1, ...
        dx_n/dt = rhs(x_1, ..., x_n)_n.
        The vector of initial values is passed in the positional arguments.
        The right-hand side is computed by the function `rhs`, which must take `n` values
        and return the same number of values.
        The method returns the vector of values x_1, ..., x_n after `period` time units.
        The length of the intermediate integration steps equals `dt`.
        """
while period>dt:
x = self.single_step(*x, rhs=rhs, dt=dt)
period = period - dt
return self.single_step(*x, rhs=rhs, dt=period)
def single_step(self, *x0, rhs=None, dt=0.01):
        dx0 = rhs(*x0) # Compute the derivatives
        x1 = (xn+0.5*dt*dxn for xn, dxn in zip(x0, dx0)) # Take an intermediate step of length dt/2
        dx1 = rhs(*x1) # Take the next step...
x2 = (xn+0.5*dt*dxn for xn, dxn in zip(x0, dx1))
dx2 = rhs(*x2)
x3 = (xn+dt*dxn for xn, dxn in zip(x0, dx2))
dx3 = rhs(*x3)
        # Sum the results, take the final step.
return tuple(xn+dt/6*dxn0+dt/3*dxn1+dt/3*dxn2+dt/6*dxn3 for xn, dxn0, dxn1, dxn2, dxn3 in zip(x0, dx0, dx1, dx2, dx3))
# A small test. The equation x'(t)=-x(t), x(0)=1 has the solution x(t)=exp(-t), which equals 1/e at t=1.
integrator = RungeKutta()
print( "RK4: ", integrator.integrate(1., rhs=lambda x: (-x,), period=1) )
print( "Exact solution:", np.exp(-1) )
# Write the code for the model and see what prediction the model gives us.
class GaussianNoise:
def __init__(self, sigma):
self._sigma = sigma
def __call__(self, *args):
return self.generate(*args)
def generate(self, *shape):
return self._sigma*np.random.randn(*shape)
class SIR:
def __init__(self, noise=0):
        """Arguments:
        WS, WI = noise assumption for the individual components.
        """
self.WS, self.WI = GaussianNoise(noise), GaussianNoise(noise)
self.integrator = RungeKutta()
def generate(self, beta, gamma, alpha=0., periods=180,
S0=5384342, I0=1, R0=0,
dt=1, t0="2020-03-02"
):
        """Generate one realisation of the epidemic with the same initial conditions.
        Arguments:
        S, I, R = initial number of people without immunity, infected, immune.
        t = initial moment of time,
        dt = time step in days,
        periods = number of simulation steps,
        beta = infection rate,
        gamma = recovery rate.
        """
index = pd.date_range(t0, periods=periods, freq=pd.DateOffset(days=dt))
S = np.zeros(periods)
I, R = np.zeros_like(S), np.zeros_like(S)
S[0], I[0], R[0] = S0, I0, R0
N = S0+I0+R0
for n in range(0, periods-1):
S[n+1], I[n+1], R[n+1] = self.integrator.integrate(
S[n], I[n], R[n],
rhs = lambda S, I, R: (-beta*S*I/N+alpha*R,beta*S*I/N-gamma*I,gamma*I-alpha*R),
period = dt,
dt = 0.1
)
WS, WI = self.WS(1)*np.sqrt(dt), self.WI(1)*np.sqrt(dt)
S[n+1] += WS
I[n+1] += WI
R[n+1] += -WS-WI
return pd.DataFrame(
data={ 'S': S, 'I': I, 'R': R },
index=index
)
def inspect_SIR(simdata):
plt.semilogy(simdata['S'], label='S')
plt.semilogy(simdata['I'], label='I')
plt.semilogy(simdata['R'], label='R')
plt.legend()
plt.ylim((1, None))
plt.show()
# Create the model.
model = SIR(noise=1)
# Run the simulation.
simdata = model.generate(beta=0.9, gamma=0.1)
inspect_SIR(simdata)
Explanation: Non-working days were in effect from 30 March to 30 April. The jump in the reproduction number falls exactly on this period. With data covering a long time span we can estimate the spread rate fairly accurately, but in the first months of the epidemic the number of infections was small, and as we can see, naive estimates of the infection spread rate are very crude, which makes decision making difficult. For more accurate parameter estimation one can use a probabilistic model, which however requires considerably more effort.
According to the rough estimate, the infection rate was falling from May to July, and somewhere in mid June the basic reproduction number dropped below 1; these dates mark the end of the first wave of the epidemic.
In July the first signs of a second wave are already visible.
The number of people who have had the disease does not exceed a fraction of a percent of the whole population, so the effective reproduction number practically coincides with the basic one, which means the epidemic died down not because the pool of susceptible people was exhausted, but because the conditions changed.
If we assume the virus did not mutate significantly, this means the spread rate decreased because contacts between people were restricted.
End of explanation
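One simple way to tame the noisy point-by-point estimate discussed above is to smooth it. The sketch below assumes the pandas Series rho computed earlier in this notebook is available; the 3-point (three-week) centred window is an arbitrary illustrative choice.
# Illustrative sketch: smooth the noisy effective reproduction number estimate.
# Assumes the pandas Series `rho` computed earlier; the 3-point window is an arbitrary choice.
rho_smooth = rho.rolling(window=3, center=True, min_periods=1).mean()
ax = rho.plot(style='.r', label='raw estimate')
rho_smooth.plot(ax=ax, style='-k', label='3-week rolling mean')
ax.axhline(1.0, linestyle='--', color='gray', label='threshold=1')
ax.legend()
plt.show()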
# If we allow infected people to gradually lose immunity, the course of the epidemic changes.
model = SIR(noise=1)
simdata = model.generate(beta=0.9, gamma=0.1, alpha=0.001)
inspect_SIR(simdata)
# With the chosen parameters the number of infected levels off at a constant over time but never drops to zero.
# How do seasonal diseases arise? Can we obtain a periodic change in the number of infected? What determines the period?
# Let us try to pick parameters that give a number of infected close to the historical data.
model = SIR(noise=0)
simdata = model.generate(beta=0.15, gamma=0.02)
inspect_SIR(simdata)
Explanation: The plot clearly shows a period of exponential growth in the number of infected (it looks like a straight line when the number of infected is plotted on a logarithmic axis).
Once almost everyone has been infected, the growth period is replaced by a period of exponential decay in the number of infected.
Interestingly, some people never get infected in time, and within this model they never will.
End of explanation
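The observation that some people are never infected can be made quantitative: for the deterministic SIR model with a negligible initial infected fraction, the final susceptible fraction $s_\infty = S(\infty)/N$ satisfies $s_\infty = e^{-\rho_0(1-s_\infty)}$. Below is a minimal sketch solving this relation by fixed-point iteration; the value rho0 = 0.9/0.1 matches the simulation parameters used above, and the iteration count is an arbitrary choice.
# Illustrative sketch: final size of the deterministic SIR epidemic.
# s_inf = S(infinity)/N solves s_inf = exp(-rho0*(1 - s_inf)); rho0 = beta/gamma.
rho0_sim = 0.9/0.1          # matches beta=0.9, gamma=0.1 used in the simulation above
s_inf = 0.5                 # initial guess
for _ in range(100):        # fixed-point iteration; 100 steps is an arbitrary choice
    s_inf = np.exp(-rho0_sim*(1.0 - s_inf))
print(f"Fraction never infected: {s_inf:.2e}, fraction eventually in R: {1 - s_inf:.4f}")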
# Use automatic differentiation to make our life easier.
import autograd
# Inside the functions we want to differentiate we must use a special version of numpy.
import autograd.numpy as jnp
def sir_log_likelihood(betagamma, mS, mI, mR, dS, dI, dR, N):
    """Compute the log-likelihood function as described above.
    Arguments:
    betagamma - array of the two parameters [beta, gamma].
    mS, mI, mR - arrays with the number of people in each category at certain moments of time.
    dS, dI, dR - change in the number of people in each category per day.
    N - total number of people.
    """
beta,gamma = betagamma
vS=beta/N*mI*mS
vR=gamma*mI
WS = dS+vS
WI = dI-vS+vR
WR = dR-vR
return -( jnp.sum(WS**2) + jnp.sum(WI**2) + jnp.sum(WR**2) ) / mS.shape[0]
# Store the required time series in Numpy arrays.
np_dS = np.diff(S.to_numpy(), 1)/7
np_dI = np.diff(I.to_numpy(), 1)/7
np_dR = np.diff(R.to_numpy(), 1)/7
# Compute the mean values on the intervals.
def np_midpoint(x): return (x[1:]+x[:-1])/2
np_mS = np_midpoint(S.to_numpy())
np_mI = np_midpoint(I.to_numpy())
np_mR = np_midpoint(R.to_numpy())
# Fix the indices for the training and test sets.
trainingset = slice(0,None,2) # all even samples
testset = slice(1,None,2) # all odd samples.
def loss(betagamma, indices):
    """Loss function."""
return -sir_log_likelihood(betagamma,
np_mS[indices], np_mI[indices], np_mR[indices],
np_dS[indices], np_dI[indices], np_dR[indices],
N)
# Gradient of the loss function with respect to the parameters.
d_loss = autograd.grad(loss, 0)
betagamma = np.random.rand(2)
print(f"""Parameters {betagamma}
Loss {loss(betagamma, trainingset)}
Derivative {d_loss(betagamma, trainingset)}""")
# To fit the parameters we will minimise the error functional.
# Use a ready-made optimiser implementation.
from scipy.optimize import minimize
def train(betagamma, indices):
    """Take the initial parameters and improve them by minimising the loss function."""
def trace(xk):
print('.',end='\n')
res = minimize(fun=loss, x0=betagamma, args=(indices,), jac=d_loss, callback=trace, tol=1e-1)
print(res.message)
return res.x
betagamma0 = np.random.rand(2)
print(f"Initial parameters {betagamma0}")
print(f"Initial loss {loss(betagamma0, trainingset)}")
betagamma = train(betagamma0, trainingset)
print(f"Optimized parameters {betagamma}")
print(f"Optimized loss {loss(betagamma, trainingset)}")
print(f"Loss on test set {loss(betagamma, testset)}")
# Generate the dynamics according to the model with the fitted parameters and plot the result.
model = SIR(noise=0)
simdata = model.generate(beta=betagamma[0], gamma=betagamma[1])
inspect_SIR(simdata)
# As we can see, the optimiser finds the maximum of the likelihood function in a few iterations.
# The dynamics, however, differ considerably.
# With the estimated parameters the epidemic would develop more slowly, and the first wave would not even reach its peak within half a year.
# This is easy to explain: when roughly estimating the parameters above we saw that the model parameters changed over time,
# whereas in the approach used here the parameters were assumed constant over the whole interval.
# Try considering shorter time spans during which the parameters should not have changed much:
# during the lockdown, after the lockdown, and so on.
Explanation: Estimating the model parameters
The source data are very noisy, so our estimates of the model parameters from a couple of points are quite rough.
If we assume that the constants did not change over the whole interval under consideration, the parameters can be estimated from the entire data set, which reduces the noise considerably.
For a correct estimate we need some model of the noise.
Consider the simplest model, in which the rate of change of the number of people in each group is given by the SIR model, but at every moment deviations from the model are possible, which are zero on average and mutually independent.
$$
\begin{cases}
\dot S=\frac{dS}{dt} = -\frac{\beta I S}{N}+W_S,\\
\dot I=\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I+W_I,\\
\dot R=\frac{dR}{dt} = \gamma I+W_R.
\end{cases}
$$
For each moment of time $t$ the equations contain their own random variables $W_S$, $W_I$ and $W_R$,
which are independent of each other and of themselves at other values of $t$.
The expectation of all the noises $W_\cdot$ is zero, and for simplicity we assume that
they are all normally distributed with standard deviation $1$.
Then the log-likelihood function equals:
$$
\log L[\beta,\gamma]=-\int_{T_1}^{T_2}[(\dot S+\beta IS/N)^2 +(\dot I-\beta IS/N+\gamma I)^2+(\dot R-\gamma I)^2]dt.
$$
According to the maximum likelihood principle
the parameters can be found as the point of maximum of the likelihood function:
$$
\beta,\gamma = \mathrm{argmax}_{\beta,\gamma} \log L[\beta,\gamma],
$$
where the functions $S$, $I$ and $R$ are taken from the historical data.
For our choice of noise distribution the problem reduces to the method of least squares.
End of explanation |
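Since the log-likelihood above is quadratic in beta and gamma, its maximiser can also be obtained in closed form as an ordinary least-squares solution instead of running an iterative optimiser. The minimal sketch below is an illustrative cross-check built on the numpy arrays np_mS, np_mI, np_dS, np_dI, np_dR and N from the cells above.
# Illustrative sketch: closed-form least-squares estimate of (beta, gamma).
# Stack the three residual equations of the noise model into one linear system A @ [beta, gamma] ~= b.
v = np_mI*np_mS/N
zero = np.zeros_like(v)
A = np.vstack([np.column_stack([-v, zero]),       # dS ~= -beta*v
               np.column_stack([ v, -np_mI]),     # dI ~=  beta*v - gamma*I
               np.column_stack([ zero, np_mI])])  # dR ~=  gamma*I
b = np.concatenate([np_dS, np_dI, np_dR])
beta_ls, gamma_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"Closed-form estimate: beta = {beta_ls:.4f}, gamma = {gamma_ls:.4f}")
Because the objective is the same quadratic as the one minimised above, this should reproduce the optimiser's result up to numerical tolerance.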