| Unnamed: 0 | text_prompt | code_prompt |
|---|---|---|
| int64 (0 – 16k) | string (110 – 62.1k chars) | string (37 – 152k chars) |
7,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
STA 208
Step3: The response variable is quality.
Step4: Exercise 2.1 (5 pts) Compare the leave-one-out risk with the empirical risk for linear regression, on this dataset.
Step5: Exercise 2.2 (10 pts) Perform kNN regression and compare the leave-one-out risk with the empirical risk for k from 1 to 50. Remark on the tradeoff between bias and variance for this dataset and compare against linear regression.
Step6: Conclusion Comparing the performance of kNN and linear regression, we see that 16-nearest neighbors achieves a LOO risk of 233.2 which is lower than that for linear regression (243.5).
Exercise 2.3 (10 pts) Implement forward stepwise regression (ESL section 3.3.2) for the linear model and compare the LOO risk for each stage. Recall that at each step forward stepwise regression will select a new variable that most improves the empirical risk and include that in the model (starting with the intercept). | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import LeaveOneOut
from sklearn import linear_model, neighbors
%matplotlib inline
plt.style.use('ggplot')
# dataset path
data_dir = "."
sample_data = pd.read_csv(data_dir+"/hw1.csv", delimiter=',')
sample_data.head()
Explanation: STA 208: Homework 1
This is based on the material in Chapters 2, 3 of 'Elements of Statistical Learning' (ESL), in addition to lectures 1-4.
Instructions
We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you
MUST add cells in between the exercise statements and add answers within them and
MUST NOT modify the existing cells, particularly not the problem statement
To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax
1. Conceptual Exercises
In the following exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in
$$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$
for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html
Exercise 1.1. (5 pts) Recall that the Hamming loss for binary classification ($y \in \{0,1\}$) is
$$l(y,\hat y) = 1\{y \ne \hat y\} = (y - \hat y)^2$$
as long as $\hat y \in \{0,1\}$.
This loss can be extended to multiclass classification where there are $K$ possible values that $y$ can take (for example 'dog','cat','squirrel' or 1-5 stars). Explain how you can re-encode $y$ and $\hat y$ to be a $K-1$ dimensional vector that generalizes binary classification, and rewrite the loss using vector operations.
If we encode $\hat{y}$ as a $K$-dimensional vector, then $\hat{y} = e_i$, where $e_i$ is a vector with $K-1$ zeros and a 1 at the $i$th index. The corresponding loss function is $l(y,\hat{y}) = \|y - \hat{y}\|_2^2$.
It is also possible to encode $\hat{y}$ with a $(K-1)$-dimensional vector and still use quadratic loss: for the first $K-1$ classes, we still encode $\hat{y} = e_i$; for class $K$, we encode $\hat{y} = [\alpha, \alpha, \ldots, \alpha]$, a vector whose elements are all $\alpha$, where $\alpha$ is the solution of $(1-\alpha)^2 + (K-2)\alpha^2 = 4$.
Exercise 1.2 (5 pts) Ex. 2.7 in ESL
(a)
For convenience, we denote $[1, x]$ as $x$.
For linear regression, $\hat{f}(x_0) = x_0^T \hat{\beta}$, where $\beta = (X^TX)^{-1}X^TY$
so $\hat{f}(x_0) = x_0^T (X^TX)^{-1}X^TY = \sum_{i=1}^n x_0^T (X^TX)^{-1} x_i y_i$
$l_i(x_0; X) = x_0^T (X^TX)^{-1} x_i$
For k-nearest-neighbour regression, $\hat{f}(x_0) = \frac{1}{k} \sum_{i \in N_k(x_0)}y_i = \sum_{i=1}^n \frac{1}{k}I(x_i \in N_k(x_0)) y_i$
$l_i(x_0; X) = \frac{1}{k} I(x_i \in N_k(x_0))$
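A small numerical sanity check of these weights (a sketch on synthetic data; the X, y, x0 below are made up purely for illustration and are not the homework data):
import numpy as np
np.random.seed(0)
n, p, k = 20, 3, 5
X = np.column_stack([np.ones(n), np.random.randn(n, p)])   # design matrix with an intercept column
y = np.random.randn(n)
x0 = np.concatenate([[1.0], np.random.randn(p)])            # a query point
# OLS weights: l_i(x0; X) = x0^T (X^T X)^{-1} x_i
l_ols = X.dot(np.linalg.solve(X.T.dot(X), x0))
assert np.isclose(l_ols.dot(y), x0.dot(np.linalg.solve(X.T.dot(X), X.T.dot(y))))   # equals x0^T beta_hat
# kNN weights: l_i(x0; X) = (1/k) * I(x_i in N_k(x0))
nearest = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
l_knn = np.zeros(n)
l_knn[nearest] = 1.0 / k
assert np.isclose(l_knn.dot(y), y[nearest].mean())           # equals the kNN fit at x0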
(b)
$\mathbb{E}_{Y|X}(f(x_0)-\hat{f}(x_0))^2 = \mathbb{E}_{Y|X}\big(f(x_0) - \mathbb{E}_{Y|X}\hat{f}(x_0) + \mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)\big)^2 = [f(x_0)-\mathbb{E}_{Y|X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)]^2 + 2[f(x_0)-\mathbb{E}_{Y|X}\hat{f}(x_0)]\,\mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)] = [f(x_0)-\mathbb{E}_{Y|X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)]^2 = \text{bias}_{Y|X}^2 + \text{var}_{Y|X}(\hat{f}(x_0))$, where the cross term vanishes because $\mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)] = 0$.
(c)
$\mathbb{E}_{Y,X}(f(x_0)-\hat{f}(x_0))^2 = \mathbb{E}_{Y,X}\big(f(x_0) - \mathbb{E}_{Y,X}\hat{f}(x_0) + \mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)\big)^2 = [f(x_0)-\mathbb{E}_{Y,X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y,X}[\mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)]^2 + 2[f(x_0)-\mathbb{E}_{Y,X}\hat{f}(x_0)]\,\mathbb{E}_{Y,X}[\mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)] = [f(x_0)-\mathbb{E}_{Y,X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y,X}[\mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)]^2 = \text{bias}_{Y,X}^2 + \text{var}_{Y,X}(\hat{f}(x_0))$, where again the cross term is zero.
(d)
By Adam's law.
$\mathbb{E}_{X}[\text{bias}_{Y|X}^2] = \mathbb{E}_{X} [f(x_0) - \mathbb{E}_{Y,X}(\hat{f}(x_0)) + \mathbb{E}_{Y,X}(\hat{f}(x_0)) - \mathbb{E}_{Y|X}(\hat{f}(x_0))]^2 = \text{bias}_{Y,X}^2 + \text{var}_X\big( \mathbb{E}_{Y|X}(\hat{f}(x_0)) \big)$
By Eve's law.
$\text{var}_{Y,X}(\hat{f}(x_0)) = \mathbb{E}_{X}\big(\text{var}_{Y|X}(\hat{f}(x_0))\big) + \text{var}_{X} \big(\mathbb{E}_{Y|X}(\hat{f}(x_0))\big)$
Exercise 1.3 (5 pts, 1 for each part) Recall that the true risk for a prediction function, $f$, a loss function, $\ell$, and a joint distribution for $Y,X$ is
$$R(f) = E \ell(y,f(x))$$
For a training set $\{x_i,y_i\}_{i=1}^n$, the empirical risk is
$$R_n = \frac{1}{n} \sum_{i=1}^n \ell(y_i,f(x_i)).$$
Let $y = x^\top \beta + \epsilon$ be a linear model for $Y|X$, where $x,\beta$ are $p$-dimensional such that $\epsilon$ is Gaussian with mean 0 and variance $\sigma^2$ (independent of X).
Let $\ell(y,\hat y) = (y - \hat y)^2$ be square error loss.
Show that $f^\star(x) = x^\top \beta$ gives the smallest true risk (also known as the Bayes rule).
Why can't we use this prediction in practice?
Recall that OLS is the empirical risk minimizer for linear functions. Why does this tell us the following:
$$ E R_n (\hat f) \le R(f^\star)$$
How do we know that $E R_n (\hat f) \le R(\hat f)$? and use this to answer Ex. 2.9 in ESL.
What about this was specific to OLS and least squares loss (can this be generalized)? What is the most general statement that you can think of that you can prove in this way?
(1)
We know that $\arg \min_{\hat{Y}} \mathbb{E}(Y-\hat{Y})^2 = \mathbb{E}(Y|X)$
$f^\star(x)$ is the minimizer of the true risk $\mathbb{E}(Y-\hat{Y})^2$.
So $f^\star(x) = \mathbb{E}(Y|X=x) = \mathbb{E}(x^T \beta + \epsilon \mid X = x) = x^T\beta$, since $\epsilon$ has mean 0 and is independent of $X$.
(2)
We don't know $\beta$ in practice.
(3) Solution 1
$R_n(\hat f) \le R_n(f^\star)$ and
$$E R_n(f^\star) = E \left( \frac 1n \sum_{i=1}^n \ell(y_i,f^\star(x_i)) \right) = \frac 1n \sum_{i=1}^n R(f^\star) = R(f^\star)$$
Hence, $E R_n(\hat f) \le R(f^\star)$.
Solution 2
(Note that $R(f^\star) = \mathbb{E}(Y - x^T\beta)^2 = \mathbb{E}(\epsilon^2) = \sigma^2$.)
$\mathbb{E}[l(y_i, \hat{f}(x_i))] = \mathbb{E}\big[\, \mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] \,\big]$
We calculate the conditional expectation first (treating $X$ as a fixed matrix):
$\mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] = \mathbb{E}(x_i^T\beta + \epsilon_i - x_i^T (X^TX)^{-1}X^TY)^2 = \mathbb{E}(x_i^T\beta + \epsilon_i - x_i^T (X^TX)^{-1}X^T(X\beta + \mathbf{\epsilon}))^2 = \mathbb{E}\big(\epsilon_i - x_i^T(X^TX)^{-1}X^T\mathbf{\epsilon}\big)^2 = \mathbb{E}\big((e_i - X(X^TX)^{-1}x_i)^T \mathbf{\epsilon}\big)^2 = (e_i - X(X^TX)^{-1}x_i)^T \,\sigma^2 I\, (e_i - X(X^TX)^{-1}x_i) = \sigma^2\big( 1 + x_i^T(X^TX)^{-1}x_i - 2e_i^TX(X^TX)^{-1}x_i \big) = \sigma^2\big(1-x_i^T(X^TX)^{-1}x_i\big) \leq \sigma^2$
(the inequality holds because $(X^TX)^{-1}$ is positive definite, so $x_i^T(X^TX)^{-1}x_i \geq 0$; note also that $e_i^TX(X^TX)^{-1}x_i = x_i^T(X^TX)^{-1}x_i$)
We have shown that $\mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] \leq \sigma^2$ holds for every $X$ such that $X^TX$ has full rank. So $\mathbb{E}[l(y_i, \hat{f}(x_i))] = \mathbb{E}\big[\, \mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] \,\big] \leq \sigma^2$
So $\mathbb{E}(R_n(\hat{f})) = E(\frac{1}{n}\sum_{i=1}^nl(y_i,\hat{f}(x_i))) = \frac{1}{n}\sum_{i=1}^n \mathbb{E}(l(y_i, \hat{f}(x_i))) \leq \sigma^2 = R(f^*)$
(4) Solution 1
Based on (1) we have that $R(f^\star) \le R(\hat f)$. Hence, we have that
$$E R_n (\hat f) \le R(\hat f).$$
Therefore, the expected test error is greater than or equal to the expected training error.
Solution 2
For a newly observed $x_0$ and $y_0$, denote by $\mathbf{\epsilon}_t$ the noise vector of the training set. We know that $\epsilon_0$ and $\mathbf{\epsilon}_t$ are independent.
$R(\hat{f}) = \mathbb{E}(y_0 - x_0^T(X^TX)^{-1}X^TY)^2 = \mathbb{E}(x_0^T\beta + \epsilon_0 - x_0^T(X^TX)^{-1}X^T(X\beta + \mathbf{\epsilon}_t))^2 = \mathbb{E}(\epsilon_0 - x_0^T(X^TX)^{-1}X^T\mathbf{\epsilon}_t)^2 = \sigma^2 + \mathbb{E}(x_0^T(X^TX)^{-1}X^T\mathbf{\epsilon}_t)^2 - 2\,\mathbb{E}(\epsilon_0)\,\mathbb{E}(x_0^T(X^TX)^{-1}X^T\mathbf{\epsilon}_t) = \sigma^2 + \mathbb{E}(x_0^T(X^TX)^{-1}X^T\mathbf{\epsilon}_t)^2 + 0 \geq \sigma^2 = R(f^\star)$
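The ordering $\mathbb{E} R_n(\hat f) \le \sigma^2 = R(f^\star) \le R(\hat f)$ can also be seen in a quick simulation (a sketch only; the choices of $n$, $p$, $\beta$ and $\sigma$ below are arbitrary and not part of the assignment):
import numpy as np
rng = np.random.RandomState(0)
n, p, sigma, reps = 50, 5, 1.0, 2000
beta = rng.randn(p)
train_risks, test_risks = [], []
for _ in range(reps):
    X = rng.randn(n, p)
    y = X.dot(beta) + sigma * rng.randn(n)
    beta_hat = np.linalg.solve(X.T.dot(X), X.T.dot(y))    # OLS fit via the normal equations
    train_risks.append(np.mean((y - X.dot(beta_hat)) ** 2))
    X_new = rng.randn(n, p)                               # fresh test data from the same model
    y_new = X_new.dot(beta) + sigma * rng.randn(n)
    test_risks.append(np.mean((y_new - X_new.dot(beta_hat)) ** 2))
print(np.mean(train_risks), sigma ** 2, np.mean(test_risks))
# expect roughly: mean training risk < sigma^2 < mean test risk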
(5) If we refer to Solution 1, we see that the only place where the Gaussian model was used was in (1). So, the most general statement is...
Let $f^\star$ be the minimizer of $R(f)$ the true risk, and let $\hat f$ be the minimizer of $R_n$. Then $E R_n(\hat f) \le R(\hat f)$.
Exercise 1.4 Ex. 3.5 in ESL
$$\min_{\beta^c}\left\{ \sum_{i=1}^N \Big[y_i - \beta_0^c - \sum_{j=1}^p (x_{ij} - \bar{x}_j)\beta_j^c\Big]^2 + \lambda \sum_{j=1}^p (\beta_j^{c})^2 \right\} = \min_{\beta^c}\left\{ \sum_{i=1}^N \Big[y_i - \Big(\beta_0^c - \sum_{j=1}^p\bar{x}_j\beta_j^c\Big) - \sum_{j=1}^p x_{ij}\beta_j^c\Big]^2 + \lambda \sum_{j=1}^p (\beta_j^{c})^2\right\} = \min_{\beta}\left\{ \sum_{i=1}^N \Big[y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j\Big]^2 + \lambda \sum_{j=1}^p \beta_j^{2}\right\} $$
Where $\beta_j = \beta_j^c$ for $1 \leq j \leq p$ and $\beta_0 = \beta_0^c - \sum_{j=1}^p\bar{x}_j\beta_j^c$
For Lasso, the proof is similar.
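A quick numerical check of the ridge part of this claim (a sketch using scikit-learn's Ridge, which does not penalize the intercept, on made-up data):
import numpy as np
from sklearn.linear_model import Ridge
rng = np.random.RandomState(1)
X = rng.randn(40, 3)
y = X.dot(np.array([1.0, -2.0, 0.5])) + 5 + rng.randn(40)
raw = Ridge(alpha=1.0, fit_intercept=True).fit(X, y)
Xc = X - X.mean(axis=0)                                    # centered inputs
cen = Ridge(alpha=1.0, fit_intercept=True).fit(Xc, y)
print(np.allclose(raw.coef_, cen.coef_))                   # slopes beta_j are unchanged by centering
print(np.isclose(raw.intercept_, cen.intercept_ - X.mean(axis=0).dot(cen.coef_)))  # beta_0 = beta_0^c - sum_j xbar_j beta_j^c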
Exercise 1.5 Ex 3.9 in ESL
$X = QR$, where $Q$ has orthonormal columns and $R$ is an upper triangular matrix; then $\hat{y} = QQ^T y$.
We add a new feature $X_{new}$, denote $Q_{new} = [Q, q]$
$RSS = y^T(I - Q_{new} Q_{new}^T) y = y^T(I - QQ^T - qq^T)y = \|r\|_2^2 - (y^Tq)^2$
So we want to find the candidate $q$ that maximizes $(y^Tq)^2$.
To keep the algorithm efficient, we do not want to recompute the QR decomposition from scratch for every candidate variable.
The detailed algorithm:
for i = p+1,p+2,...,q:
$\qquad$ Calculate $q_i$ by one Gram–Schmidt step: $q_i = x_i - \sum_{j=1}^p (x_i^T q_j)\, q_j$, and normalize it by $q_i = q_i/\|q_i\|_2$
$\qquad$ Calculate $(y^Tq_i)^2$
Output the $i$ that maximizes $(y^Tq_i)^2$.
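A minimal sketch of this selection step (assuming Q holds the current orthonormal basis as columns and x_cands the remaining candidate columns; the names are illustrative, not part of the assignment):
import numpy as np
def next_variable(Q, x_cands, y):
    """One forward-stepwise step: orthogonalize each candidate column against Q
    (a single Gram-Schmidt step) and return the index maximizing (y^T q)^2,
    i.e., the largest reduction in RSS."""
    best_i, best_score = None, -np.inf
    for i in range(x_cands.shape[1]):
        x = x_cands[:, i]
        q = x - Q.dot(Q.T.dot(x))          # remove components along the current basis
        nrm = np.linalg.norm(q)
        if nrm < 1e-12:                    # candidate already in span(Q); skip it
            continue
        q = q / nrm
        score = (y.dot(q)) ** 2
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score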
HW1 Wine Data Analysis
Instructions
You will be graded based on several criteria, and each is on a 5 point scale (5 is excellent - A - 1 is poor - C - 0 is not answered - D/F). You should strive to 'impress us' if you want a 5. This means excellent code, well explained conclusions, well annotated plots, correct answers, etc.
We will be grading you on several criteria:
Conclusions: Conclusions should be consistent with the evidence provided, the conclusion should be well justified, the principles of machine learning that you have learned should be respected (such as overfitting and underfitting etc.)
Correctness of calculations: code should be correct and reflect the principles learned in this course, the logic should be sound, the methods should match the setting and context, you should try many applicable methods that you have learned as long as they apply.
Code, Figures, and Text: Code should be annotated and easy to follow, with docstrings on the functions; captions, titles, for figures
Exercise 2 You should run the following code cells to import the code and reduce the variable set. Address the questions after the code.
End of explanation
X = np.array(sample_data.iloc[:,range(1,5)])
y = np.array(sample_data.iloc[:,0])
def loo_risk(X,y,regmod):
    """Construct the leave-one-out square error risk for a regression model.
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar LOO risk
    """
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
    """Return the empirical risk for square error loss.
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar empirical risk
    """
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
Explanation: The response variable is quality.
End of explanation
lin1 = linear_model.LinearRegression(fit_intercept=True)
print('LOO Risk: '+ str(loo_risk(X,y,lin1)))
print('Emp Risk: ' + str(emp_risk(X,y,lin1)))
Explanation: Exercise 2.1 (5 pts) Compare the leave-one-out risk with the empirical risk for linear regression, on this dataset.
End of explanation
LOOs = []
MSEs = []
K=60
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X,y,knn))
MSEs.append(emp_risk(X,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
min(LOOs)
print('optimal k: ' + str(LOOs.index(min(LOOs)) + 1))  # +1 because the list index is 0-based while k starts at 1
Explanation: Exercise 2.2 (10 pts) Perform kNN regression and compare the leave-one-out risk with the empirical risk for k from 1 to 50. Remark on the tradeoff between bias and variance for this dataset and compare against linear regression.
End of explanation
n,p = X.shape
rem = set(range(p))
supp = []
LOOs = []
while len(supp) < p:
rem = list(set(range(p)) - set(supp))
ERMs = [emp_risk(X[:,supp+[j]],y,linear_model.LinearRegression(fit_intercept=True)) for j in rem]
jmin = rem[np.argmin(ERMs)]
supp.append(jmin)
LOOs.append(loo_risk(X[:,supp],y,linear_model.LinearRegression(fit_intercept=True)))
for i,s,loo in zip(range(p),supp,LOOs):
print("Step {} added variable {} with LOO: {}".format(i,s,loo))
Explanation: Conclusion Comparing the performance of kNN and linear regression, we see that 16-nearest neighbors achieves a LOO risk of 233.2 which is lower than that for linear regression (243.5).
Exercise 2.3 (10 pts) Implement forward stepwise regression (ESL section 3.3.2) for the linear model and compare the LOO risk for each stage. Recall that at each step forward stepwise regression will select a new variable that most improves the empirical risk and include that in the model (starting with the intercept).
End of explanation |
7,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dataframes ( Pandas ) and Plotting ( Matplotlib/Seaborn )
Written by Jin Cheong & Luke Chang
In this lab we are going to learn how to load and manipulate datasets in a dataframe format using Pandas
and create beautiful plots using Matplotlib and Seaborn. Pandas is akin to a data frame in R and provides an intuitive way to interact with data in a 2D data frame. Matplotlib is a standard plotting library that is similar in functionality to Matlab's object oriented plotting. Seaborn is also a plotting library built on the Matplotlib framework which carries useful pre-configured plotting schemes.
After the tutorial you will have the chance to apply the methods to a new set of data.
Also, here is a great set of notebooks that also covers the topic
First we load the basic packages we will be using in this tutorial. Notice how we import the modules using an abbreviated name. This is to reduce the amount of text we type when we use the functions.
Step1: Pandas
Loading Data
We use the pd.read_csv() to load a .csv file into a dataframe.
Note that read_csv() has many options that can be used to make sure you load the data correctly.
Step2: Ways to check the dataframe
There are many ways to examine your dataframe.
One easy way is to execute the dataframe itself.
Step3: However, often the dataframes can be large and we may be only interested in seeing the first few rows. df.head() is useful for this purpose. shape is another useful method for getting the dimensions of the matrix. We will print the number of rows and columns in this data set by using output formatting. Use the % sign to indicate the type of data (e.g., %i or %d = integer, %f = float, %s = string), then use the % followed by a tuple of the values you would like to insert into the text. See here for more info about formatting text.
Step4: On the top row, you have column names, that can be called like a dictionary (a dataframe can be essentially thought of as a dictionary with column names as the keys). The left most column (0,1,2,3,4...) is called the index of the dataframe. The default index is sequential integers, but it can be set to anything as long as each row is unique (e.g., subject IDs)
Step5: You can access the values of a column by calling it directly. Double bracket returns a dataframe
Step6: Single bracket returns a Series
Step7: You can also call a column like an attribute if the column name is a string
Step8: You can create new columns to fit your needs.
For instance, you can initialize a new column with zeros.
Step9: Here we can create a new column pubperyear, which is the ratio of the number of papers published per year
Step10: Indexing and slicing
Indexing in Pandas can be tricky. There are four ways to index
Step11: Next we will try .iloc. This method references the implicit Python index (starting from 0, exclusive of the last number). You can think of this like row by column indexing using integers.
Step12: There is also an older method called .ix, which will likely eventually be phased out of pandas. It can be useful to combine explicit and implicit indexing.
Step13: Let's make a new data frame with just Males and another for just Females. Notice, how we added the .reset_index(drop=True) method? This is because assigning a new dataframe based on indexing another dataframe will retain the original index. We need to explicitly tell pandas to reset the index if we want it to start from zero.
Step14: Boolean or logical indexing is useful if you need to sort the data based on some True or False value.
For instance, who are the people with salaries greater than 90K but lower than 100K ?
Step15: Dealing with missing values
It is easy to quickly count the number of missing values for each column in the dataset using the isnull() method. One thing that is nice about Python is that you can chain commands, which means that the output of one method can be the input into the next method. This allows us to write intuitive and concise code. Notice how we take the sum() of all of the null cases.
The isnull() method will return a dataframe with True/False values on whether a datapoint is null or not a number (nan).
Step16: We can chain the .isnull() and .sum() methods to see how many null values are added up.
Step17: You can use the boolean indexing once again to see the datapoints that have missing values. We chained the method .any() which will check if there are any True values for a given axis. Axis=0 indicates rows, while Axis=1 indicates columns. So here we are creating a boolean index for row where any column has a missing value.
Step18: There are different techniques for dealing with missing data. An easy one is to simply remove rows that have any missing values using the dropna() method.
Step19: Now we can check to make sure the missing rows are removed. Let's also check the new dimensions of the dataframe.
Step20: Describing the data
We can use the .describe() method to get a quick summary of the continuous values of the data frame. We will .transpose() the output to make it slightly easier to read.
Step21: We can also get quick summary of a pandas series, or specific column of a pandas dataframe.
Step22: Manipulating data in Groups
One manipulation we often do is look at variables in groups.
One way to do this is to use the .groupby(key) method.
The key is a column that is used to group the variables together.
For instance, if we want to group the data by gender and get group means, we perform the following.
Step23: Other default aggregation methods include .count(), .mean(), .median(), .min(), .max(), .std(), .var(), and .sum()
Before we move on, it looks like there were more than 2 genders specified in our data.
This is likely an error in the data collection process, so let's recap how we might remove this data point.
Step24: replace original dataframe without the miscoded data
Step25: Now we have a corrected dataframe!
Step26: Another powerful tool in Pandas is the split-apply-combine method.
For instance, let's say we also want to look at how much each professor is earning in respect to the department.
Let's say we want to subtract the departmental mean from professor and divide it by the departmental standard deviation.
We can do this by using the groupby(key) method chained with the .transform(function) method.
It will group the dataframe by the key column, perform the "function" transformation of the data and return data in same format.
To learn more, see link here
Step27: Now we have salary_in_departm column showing standardized salary per department.
Step28: Combining datasets
Step29: We can reset the index to start at zero using the .reset_index() method
Step30: Plotting in pandas
Before we move into Matplotlib, here are a few plotting methods already implemented in Pandas.
Boxplot
Step31: Scatterplot
Step32: Plotting Categorical Variables. Replacing variables with .map
If we want to plot department on the x-axis, Pandas plotting functions won't know what to do
because they don't know where to put bio or chem on a numerical x-axis.
Therefore one needs to change them to numerical variables to plot them with basic functionalities (we will later see how Seaborn solves this).
Step33: Generating bar - errorbar plots in Pandas
Step34: Matplotlib
Learn other matplotlib tutorials here
create a basic lineplot
Step35: create a basic scatterplot
Step36: Modify different aspects of the plot
Step37: Create multiple plots
Step38: Seaborn
Seaborn is a plotting library built on Matplotlib that has many pre-configured plots that are often used for visualization.
Other great tutorials about seaborn are here
Step39: Factor plots
Factor plots allow you to visualize the distribution of parameters in different forms such as point, bar, or violin graphs.
Here are some possible values for kind
Step40: Heatmap plots
Heatmap plots allow you to visualize matrices such as correlation matrices that show relationships across multiple variables | Python Code:
# matplotlib inline is an example of 'cell magic' and
# enables plotting IN the notebook and not opening another window.
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Dataframes ( Pandas ) and Plotting ( Matplotlib/Seaborn )
Written by Jin Cheong & Luke Chang
In this lab we are going to learn how to load and manipulate datasets in a dataframe format using Pandas
and create beautiful plots using Matplotlib and Seaborn. Pandas is akin to a data frame in R and provides an intuitive way to interact with data in a 2D data frame. Matplotlib is a standard plotting library that is similar in functionality to Matlab's object oriented plotting. Seaborn is also a plotting library built on the Matplotlib framework which carries useful pre-configured plotting schemes.
After the tutorial you will have the chance to apply the methods to a new set of data.
Also, here is a great set of notebooks that also covers the topic
First we load the basic packages we will be using in this tutorial. Notice how we import the modules using an abbreviated name. This is to reduce the amount of text we type when we use the functions.
End of explanation
# Import data
df = pd.read_csv('../Data/salary.csv',sep = ',', header='infer')
# recap on how to look for Docstrings.
pd.read_csv?
Explanation: Pandas
Loading Data
We use the pd.read_csv() to load a .csv file into a dataframe.
Note that read_csv() has many options that can be used to make sure you load the data correctly.
End of explanation
df
Explanation: Ways to check the dataframe
There are many ways to examine your dataframe.
One easy way is to execute the dataframe itself.
End of explanation
print('There are %i rows and %i columns in this data set' % df.shape)
df.head()
Explanation: However, often the dataframes can be large and we may be only interested in seeing the first few rows. df.head() is useful for this purpose. shape is another useful method for getting the dimensions of the matrix. We will print the number of rows and columns in this data set by using output formatting. Use the % sign to indicate the type of data (e.g., %i or %d = integer, %f = float, %s = string), then use the % followed by a tuple of the values you would like to insert into the text. See here for more info about formatting text.
End of explanation
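A couple more illustrative format codes (the values below are made up purely to show the syntax):
n_rows, mean_salary, col = 77, 93876.5, 'salary'
print('%i rows; mean %s = %.1f' % (n_rows, col, mean_salary))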
print("Indexes")
print(df.index)
print("Columns")
print(df.columns)
print("Columns are like keys of a dictionary")
print(df.keys())
Explanation: On the top row, you have column names, that can be called like a dictionary (a dataframe can be essentially thought of as a dictionary with column names as the keys). The left most column (0,1,2,3,4...) is called the index of the dataframe. The default index is sequential integers, but it can be set to anything as long as each row is unique (e.g., subject IDs)
End of explanation
df[['salary']]
Explanation: You can access the values of a column by calling it directly. Double bracket returns a dataframe
End of explanation
df['salary']
Explanation: Single bracket returns a Series
End of explanation
df.salary
Explanation: You can also call a column like an attribute if the column name is a string
End of explanation
df['pubperyear'] = 0
Explanation: You can create new columns to fit your needs.
For instance, you can initialize a new column with zeros.
End of explanation
df['pubperyear'] = df['publications']/df['years']
df.head()
Explanation: Here we can create a new column pubperyear, which is the ratio of the number of papers published per year
End of explanation
df.loc[0,['salary']]
Explanation: Indexing and slicing
Indexing in Pandas can be tricky. There are four ways to index: loc, iloc, ix, and explicit boolean indexing (useful for logical masks).
First, we will try using .loc. This method references the explicit index. It works for both index names and also column names.
End of explanation
df.iloc[0:3,0:3]
Explanation: Next we will try .iloc. This method references the implicit Python index (starting from 0, exclusive of the last number). You can think of this like row by column indexing using integers.
End of explanation
df.ix[0:3,0:3]
Explanation: There is also an older method called .ix, which will likely eventually be phased out of pandas. It can be useful to combine explicit and implicit indexing.
End of explanation
maledf = df[df.gender==0].reset_index(drop=True)
femaledf = df[df.gender==1].reset_index(drop=True)
Explanation: Let's make a new data frame with just Males and another for just Females. Notice, how we added the .reset_index(drop=True) method? This is because assigning a new dataframe based on indexing another dataframe will retain the original index. We need to explicitly tell pandas to reset the index if we want it to start from zero.
End of explanation
df[ (df.salary > 90000) & (df.salary < 100000)]
Explanation: Boolean or logical indexing is useful if you need to sort the data based on some True or False value.
For instance, who are the people with salaries greater than 90K but lower than 100K ?
End of explanation
df.isnull()
Explanation: Dealing with missing values
It is easy to quickly count the number of missing values for each column in the dataset using the isnull() method. One thing that is nice about Python is that you can chain commands, which means that the output of one method can be the input into the next method. This allows us to write intuitive and concise code. Notice how we take the sum() of all of the null cases.
The isnull() method will return a dataframe with True/False values on whether a datapoint is null or not a number (nan).
End of explanation
df.isnull().sum()
Explanation: We can chain the .isnull() and .sum() methods to see how many null values are added up.
End of explanation
df[df.isnull().any(axis=1)]
# you may look at where the values are not null
# Note that indexes 18, and 24 are missing.
df[~df.isnull().any(axis=1)]
Explanation: You can use the boolean indexing once again to see the datapoints that have missing values. We chained the method .any() which will check if there are any True values for a given axis. Axis=0 indicates rows, while Axis=1 indicates columns. So here we are creating a boolean index for row where any column has a missing value.
End of explanation
df = df.dropna()
Explanation: There are different techniques for dealing with missing data. An easy one is to simply remove rows that have any missing values using the dropna() method.
End of explanation
print('There are %i rows and %i columns in this data set' % df.shape)
df.isnull().sum()
Explanation: Now we can check to make sure the missing rows are removed. Let's also check the new dimensions of the dataframe.
End of explanation
df.describe().transpose()
Explanation: Describing the data
We can use the .describe() method to get a quick summary of the continuous values of the data frame. We will .transpose() the output to make it slightly easier to read.
End of explanation
df.departm.describe()
Explanation: We can also get quick summary of a pandas series, or specific column of a pandas dataframe.
End of explanation
df.groupby('gender').mean()
Explanation: Manipulating data in Groups
One manipulation we often do is look at variables in groups.
One way to do this is to usethe .groupby(key) method.
The key is a column that is used to group the variables together.
For instance, if we want to group the data by gender and get group means, we perform the following.
End of explanation
df[df['gender']==2]
Explanation: Other default aggregation methods include .count(), .mean(), .median(), .min(), .max(), .std(), .var(), and .sum()
Before we move on, it looks like there were more than 2 genders specified in our data.
This is likely an error in the data collection process, so let's recap how we might remove this data point.
End of explanation
df = df[df['gender']!=2]
Explanation: replace original dataframe without the miscoded data
End of explanation
df.groupby('gender').mean()
Explanation: Now we have a corrected dataframe!
End of explanation
# key: We use the departm as the grouping factor.
key = df['departm']
# Let's create an anonmyous function for calculating zscores using lambda:
# We want to standardize salary for each department.
zscore = lambda x: (x - x.mean()) / x.std()
# Now let's calculate zscores separately within each department
transformed = df.groupby(key).transform(zscore)
df['salary_in_departm'] = transformed['salary']
Explanation: Another powerful tool in Pandas is the split-apply-combine method.
For instance, let's say we also want to look at how much each professor is earning in respect to the department.
Let's say we want to subtract the departmental mean from professor and divide it by the departmental standard deviation.
We can do this by using the groupby(key) method chained with the .transform(function) method.
It will group the dataframe by the key column, perform the "function" transformation of the data and return data in same format.
To learn more, see link here
End of explanation
df.head()
Explanation: Now we have salary_in_departm column showing standardized salary per department.
End of explanation
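As a quick sanity check of the transform (using the df built above), the standardized salaries should have mean approximately 0 and standard deviation approximately 1 within each department:
df.groupby('departm')['salary_in_departm'].agg(['mean', 'std'])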
pd.concat([femaledf,maledf],axis = 0)
Explanation: Combining datasets : pd.concat
Recall that we sliced the dataframes into male and female dataframe in 2.3 Indexing and Slicing. Now we will learn how to put dataframes together which is done by the pd.concat method. Note how the index of this output retains the old index.
End of explanation
pd.concat([maledf,femaledf],axis = 0).reset_index(drop=True)
Explanation: We can reset the index to start at zero using the .reset_index() method
End of explanation
df[['salary','gender']].boxplot(by='gender')
Explanation: Plotting in pandas
Before we move into Matplotlib, here are a few plotting methods already implemented in Pandas.
Boxplot
End of explanation
df[['salary','years']].plot(kind='scatter', x='years', y='salary')
Explanation: Scatterplot
End of explanation
# create a new numericalSeries called dept_num for visualization.
df['dept_num'] = 0
df.loc[:,['dept_num']] = df.departm.map({'bio':0, 'chem':1,'geol':2,'neuro':3,'stat':4,'physics':5,'math':6})
df.tail()
## Now plot all four categories
f, axs = plt.subplots(1, 4, sharey=True)
f.suptitle('Salary in relation to other variables')
df.plot(kind='scatter', x='gender', y='salary', ax=axs[0], figsize=(15, 4))
df.plot(kind='scatter', x='dept_num', y='salary', ax=axs[1])
df.plot(kind='scatter', x='years', y='salary', ax=axs[2])
df.plot(kind='scatter', x='age', y='salary', ax=axs[3])
# The problem is that it treats department as a continuous variable.
Explanation: Plotting Categorical Variables. Replacing variables with .map
If we want to plot department on the x-axis, Pandas plotting functions won't know what to do
because they don't know where to put bio or chem on a numerical x-axis.
Therefore one needs to change them to numerical variables to plot them with basic functionalities (we will later see how Seaborn solves this).
End of explanation
means = df.groupby('gender').mean()['salary']
errors = df.groupby('gender').std()['salary'] / np.sqrt(df.groupby('gender').count()['salary'])
ax = means.plot.bar(yerr=errors,figsize=(5,3))
Explanation: Generating bar - errorbar plots in Pandas
End of explanation
plt.figure(figsize=(2,2))
plt.plot(range(0,10),np.sqrt(range(0,10)))
plt.show()
Explanation: Matplotlib
Learn other matplotlib tutorials here
create a basic lineplot
End of explanation
plt.figure(figsize=(2,2))
plt.scatter(df.salary,df.age,color='b',marker='*')
plt.show()
Explanation: create a basic scatterplot
End of explanation
# plt.subplots allows you to control different aspects of multiple plots
f,ax = plt.subplots(1,1,figsize=(4,2))
ax.scatter(df.salary,df.age,color='k',marker='o')
# Setting limits on axes
ax.set_xlim([40000,120000])
ax.set_ylim([20,70])
# Changing tick labels
ax.set_xticklabels([str(int(tick)/1000)+'K' for tick in ax.get_xticks()])
# changing label names
ax.set_xlabel('salary')
ax.set_ylabel('age')
# changing the title
ax.set_title('Scatterplot of age and salary')
plt.show()
# save figure
f.savefig('MyFirstPlot.png')
Explanation: Modify different aspects of the plot
End of explanation
f,axs = plt.subplots(1,2,figsize=(15,5)) # create a plot figure, specify the size and number of figures.
axs[0].scatter(df.age,df.salary,color='k',marker='o')
axs[0].set_ylim([40000,120000])
axs[0].set_xlim([20,70])
axs[0].set_yticklabels([str(int(tick)/1000)+'K' for tick in axs[0].get_yticks()])
axs[0].set_ylabel('salary')
axs[0].set_xlabel('age')
axs[0].set_title('Scatterplot of age and salary')
axs[1].scatter(df.publications,df.salary,color='k',marker='o')
axs[1].set_ylim([40000,120000])
axs[1].set_xlim([20,70])
axs[1].set_yticklabels([str(int(tick)/1000)+'K' for tick in axs[1].get_yticks()])
axs[1].set_ylabel('salary')
axs[1].set_xlabel('publications')
axs[1].set_title('Scatterplot of publication and salary')
f.suptitle('Scatterplots of salary and other factors')
plt.show()
Explanation: Create multiple plots
End of explanation
ax = sns.regplot(df.age,df.salary)
ax.set_title('Salary and age')
plt.show()
sns.jointplot("age", "salary", data=df, kind='reg');
Explanation: Seaborn
Seaborn is a plotting library built on Matplotlib that has many pre-configured plots that are often used for visualization.
Other great tutorials about seaborn are here
End of explanation
sns.catplot(x='departm',y='salary',hue='gender',data=df,ci=68,kind='bar')
plt.show()
Explanation: Factor plots
Factor plots allow you to visualize the distribution of parameters in different forms such as point, bar, or violin graphs.
Here are some possible values for kind : {point, bar, count, box, violin, strip}
End of explanation
sns.heatmap(df[['salary','years','age','publications']].corr(),annot=True,linewidths=.5)
Explanation: Heatmap plots
Heatmap plots allow you to visualize matrices such as correlation matrices that show relationships across multiple variables
End of explanation |
7,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style='float
Step1: Skill 1
Step2: Skill 2
Step3: Skill 3
Step4: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observations' standard deviation,
the azimuth is the arc-cosine of the cross-correlation (R), and the distance to the point (1, 0)
on the abscissa is the centered RMS.
import os
try:
import cPickle as pickle
except ImportError:
import pickle
run_name = '2015-08-17'
fname = os.path.join(run_name, 'config.pkl')
with open(fname, 'rb') as f:
config = pickle.load(f)
import numpy as np
from pandas import DataFrame, read_csv
from utilities import to_html, save_html, apply_skill
fname = '{}-all_obs.csv'.format(run_name)
all_obs = read_csv(os.path.join(run_name, fname), index_col='name')
def rename_cols(df):
columns = dict()
for station in df.columns:
mask = all_obs['station'].astype(str) == station
name = all_obs['station'][mask].index[0]
columns.update({station: name})
return df.rename(columns=columns)
from glob import glob
from pandas import Panel
from utilities import nc2df
def load_ncs(run_name):
fname = '{}-{}.nc'.format
ALL_OBS_DATA = nc2df(os.path.join(run_name,
fname(run_name, 'OBS_DATA')))
index = ALL_OBS_DATA.index
dfs = dict(OBS_DATA=ALL_OBS_DATA)
for fname in glob(os.path.join(run_name, "*.nc")):
if 'OBS_DATA' in fname:
continue
else:
model = fname.split('.')[0].split('-')[-1]
df = nc2df(fname)
# FIXME: Horrible work around duplicate times.
if len(df.index.values) != len(np.unique(df.index.values)):
kw = dict(subset='index', take_last=True)
df = df.reset_index().drop_duplicates(**kw).set_index('index')
kw = dict(method='time', limit=30)
df = df.reindex(index).interpolate(**kw).ix[index]
dfs.update({model: df})
return Panel.fromDict(dfs).swapaxes(0, 2)
Explanation: <img style='float: left' width="150px" src="http://bostonlightswim.org/wp/wp-content/uploads/2011/08/BLS-front_4-color.jpg">
<br><br>
The Boston Light Swim
Sea Surface Temperature time-series model skill
Load configuration
End of explanation
from utilities import mean_bias
dfs = load_ncs(run_name)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
df = rename_cols(df)
skill_score = dict(mean_bias=df.copy())
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'mean_bias.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 1: Model Bias (or Mean Bias)
The bias skill compares the model mean temperature against the observations.
It is possible to introduce a Mean Bias in the model due to a mismatch of the
boundary forcing and the model interior.
$$ \text{MB} = \mathbf{\overline{m}} - \mathbf{\overline{o}}$$
End of explanation
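The mean_bias helper imported above comes from the repository's utilities module and is not shown in this notebook; a minimal sketch consistent with the formula above might look like this (the real implementation may differ):
import numpy as np
def mean_bias_sketch(obs, model):
    """Mean bias MB = mean(model) - mean(obs), ignoring NaNs (illustrative only)."""
    obs, model = np.asarray(obs, dtype=float), np.asarray(model, dtype=float)
    return np.nanmean(model) - np.nanmean(obs)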
from utilities import rmse
dfs = load_ncs(run_name)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['rmse'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'rmse.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 2: Central Root Mean Squared Error
Root Mean Squared Error of the deviations from the mean.
$$ \text{CRMS} = \sqrt{\overline{\left(\mathbf{m'} - \mathbf{o'}\right)^2}}$$
where: $\mathbf{m'} = \mathbf{m} - \mathbf{\overline{m}}$ and $\mathbf{o'} = \mathbf{o} - \mathbf{\overline{o}}$
End of explanation
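Likewise, utilities.rmse (imported above) is not shown here; a minimal centered-RMSE sketch consistent with the definition above is (the actual helper may differ):
import numpy as np
def crmse_sketch(obs, model):
    """Centered RMSE: RMS of the deviations from the respective means (illustrative only)."""
    obs, model = np.asarray(obs, dtype=float), np.asarray(model, dtype=float)
    o_prime = obs - np.nanmean(obs)
    m_prime = model - np.nanmean(model)
    return np.sqrt(np.nanmean((m_prime - o_prime) ** 2))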
from utilities import r2
dfs = load_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'r2.html'.format(run_name))
save_html(fname, html)
html
fname = os.path.join(run_name, 'skill_score.pkl')
with open(fname,'wb') as f:
pickle.dump(skill_score, f)
Explanation: Skill 3: R$^2$
https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from utilities.taylor_diagram import TaylorDiagram
def make_taylor(samples):
fig = plt.figure(figsize=(9, 9))
dia = TaylorDiagram(samples['std']['OBS_DATA'],
fig=fig,
label="Observation")
colors = plt.matplotlib.cm.jet(np.linspace(0, 1, len(samples)))
# Add samples to Taylor diagram.
samples.drop('OBS_DATA', inplace=True)
for model, row in samples.iterrows():
dia.add_sample(row['std'], row['corr'], marker='s', ls='',
label=model)
# Add RMS contours, and label them.
contours = dia.add_contours(colors='0.5')
plt.clabel(contours, inline=1, fontsize=10)
# Add a figure legend.
kw = dict(prop=dict(size='small'), loc='upper right')
leg = fig.legend(dia.samplePoints,
[p.get_label() for p in dia.samplePoints],
numpoints=1, **kw)
return fig
dfs = load_ncs(run_name)
# Bin and interpolate all series to 1 hour.
freq = '30min'
for station, df in list(dfs.iteritems()):
df = df.resample(freq).interpolate().dropna(axis=1)
if 'OBS_DATA' in df:
samples = DataFrame.from_dict(dict(std=df.std(),
corr=df.corr()['OBS_DATA']))
else:
continue
samples[samples < 0] = np.NaN
samples.dropna(inplace=True)
if len(samples) <= 2: # 1 obs 1 model.
continue
fig = make_taylor(samples)
fig.savefig(os.path.join(run_name, '{}.png'.format(station)))
plt.close(fig)
Explanation: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observations' standard deviation,
the azimuth is the arc-cosine of the cross-correlation (R), and the distance to the point (1, 0)
on the abscissa is the centered RMS.
End of explanation |
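The geometry behind the diagram is the law-of-cosines identity (Taylor, 2001) relating these statistics, with $\sigma_m$ and $\sigma_o$ the model and observation standard deviations and $R$ the correlation:
$$\text{CRMS}^2 = \sigma_m^2 + \sigma_o^2 - 2\,\sigma_m\sigma_o R,$$
so after normalizing by $\sigma_o$ (radius $\hat\sigma = \sigma_m/\sigma_o$), $\widehat{\text{CRMS}}^2 = 1 + \hat\sigma^2 - 2\hat\sigma R$, which is the squared distance from the point $(1, 0)$.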
7,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Microbiome machine learning analysis
Setup
Import the calour module
Step1: Regression
Loading the data
We will use the data from Qiita study 103 (https
Step2: Process the data
Get rid of the features (bacteria) with small amount of reads
We throw away all features with total reads (over all samples) < 1 (after each sample was normalized to 100 reads/sample). Note alternatively we could filter based on mean reads/sample or fraction of samples where the feature is present. Each method filters away slightly different bacteria. See filtering notebook for details on the filtering functions.
Step3: Use soil microbiome to predict its pH
Let's look at the distribution of pH for all the samples
Step4: We can then run regression analysis
Step5: This function returns a generator, which yields the prediction result for each parameter set specified in params. Here we would like to see how the number of trees (named n_estimators) in the model impacts its performance. The result with n_estimators = 3 is
Step6: We can plot out the result as follows. Each dot is a sample with observed and predicted pH, colored by the fold of cross-validation the sample is from. The diagonal line indicates perfect prediction. The correlation coefficient and its p-value between the prediction and observation are also annotated around the top of the plot.
Step7: Let's look at the result for n_estimators = 500
Step8: From the plot, you can see that with more trees in the Random Forest model, the prediction is much better, with a higher correlation coefficient.
Classification
We can do a similar analysis for classification. Let's show it with another data set that was introduced in a previous notebook.
Step9: Let's see if we can distinguish patient samples from control samples with classification
Step10: We can plot out the result as ROC curve or confusion matrix.
Step11: You can also plot a confusion matrix
Step12: Let's look at the result for n_estimators = 500
Step13: You can also plot a confusion matrix | Python Code:
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold
from calour.training import plot_scatter, plot_roc, plot_cm
import calour as ca
%matplotlib notebook
Explanation: Microbiome machine learning analysis
Setup
Import the calour module
End of explanation
dat=ca.read_amplicon('data/88-soil.biom',
'data/88-soil.sample.txt',
normalize=100,min_reads=10)
print(dat)
Explanation: Regression
Loading the data
We will use the data from Qiita study 103 (https://qiita.ucsd.edu/study/description/103#)
End of explanation
dat=dat.filter_abundance(1)
dat
Explanation: Process the data
Get rid of the features (bacteria) with small amount of reads
We throw away all features with total reads (over all samples) < 1 (after each sample was normalized to 100 reads/sample). Note alternatively we could filter based on mean reads/sample or fraction of samples where the feature is present. Each method filters away slightly different bacteria. See filtering notebook for details on the filtering functions.
End of explanation
dat.sample_metadata['ph'].hist()
dat.sort_samples('ph').sort_centroid(n=0.001).plot(sample_field='ph', gui='jupyter')
Explanation: Use soil microbiome to predict its pH
Let's look at the distribution of pH for all the samples:
End of explanation
it = dat.regress('ph', RandomForestRegressor(random_state=0), cv=5, params=[{'n_estimators':3}, {'n_estimators': 500}])
Explanation: We can then run regression analysis:
End of explanation
res1 = next(it)
res1.head()
Explanation: This function returns a generator, which yields the prediction result for each parameter set specified in params. Here we would like to see how the number of trees (named n_estimators) in the model impacts its performance. The result with n_estimators = 3 is:
End of explanation
plot_scatter(res1, cv=True)
Explanation: We can plot out the result as follows. Each dot is a sample with observed and predicted pH, colored by the fold of cross-validation the sample is from. The diagonal line indicates perfect prediction. The correlation coefficient and its p-value between the prediction and observation are also annotated around the top of the plot.
End of explanation
res2 = next(it)
res2.head()
plot_scatter(res2, cv=True)
Explanation: Let's look at the result for n_estimators = 500:
End of explanation
dat=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
print(dat)
Explanation: From the plot, you can see that with more trees in the Random Forest model, the prediction is much better, with a higher correlation coefficient.
Classification
We can do a similar analysis for classification. Let's show it with another data set that was introduced in a previous notebook.
End of explanation
dat.sample_metadata['Subject'].value_counts()
it = dat.classify('Subject', RandomForestClassifier(random_state=0), cv=RepeatedStratifiedKFold(5, 3), params=[{'n_estimators':3}, {'n_estimators': 500}])
res1 = next(it)
res1.head()
Explanation: Let's see if we can distinguish patient samples from control samples with classification:
End of explanation
plot_roc(res1, classes=['Patient'])
Explanation: We can plot out the result as ROC curve or confusion matrix.
End of explanation
plot_cm(res1)
Explanation: You can also plot a confusion matrix:
End of explanation
res2 = next(it)
res2.head()
plot_roc(res2, classes=['Patient'])
Explanation: Let's look at the result for n_estimators = 500:
End of explanation
plot_cm(res2, normalize=True)
Explanation: You can also plot a confusion matrix:
End of explanation |
7,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Models (主题模型)
王成军
[email protected]
计算传播网 (Computational Communication) http
Step1: Download data
http
Step2: Build the topic model
Step3: We can see the list of topics a document refers to
by using the model[doc] syntax
Step4: We can see that about 150 documents have 5 topics,
while the majority deal with around 10 to 12 of them.
No document talks about more than 20 topics.
Step5: Visualizing the topic model with pyLDAvis
http | Python Code:
%matplotlib inline
from __future__ import print_function
from wordcloud import WordCloud
from gensim import corpora, models, similarities, matutils
import matplotlib.pyplot as plt
import numpy as np
Explanation: Topic Models (主题模型)
王成军
[email protected]
计算传播网 (Computational Communication) http://computational-communication.com
On the eve of the 2014 college entrance exam (gaokao), Baidu "used a massive collection of sample essays together with search data, applying a probabilistic topic model to predict the likely directions of the 2014 gaokao essay prompts." As shown in the figure above, the predictions were grouped into six topics: time, life, nation, education, the mind, and development. Each topic in turn contained a set of concrete keywords; for example, the topic "life" corresponded to: ordinariness, freedom, beauty, dreams, striving, youth, happiness, and loneliness.
Read more
latent Dirichlet allocation (LDA)
The simplest topic model (on which all others are based) is latent Dirichlet allocation (LDA).
- LDA is a generative model that infers unobserved meanings from a large set of observations.
Reference
Blei DM, Ng J, Jordan MI. Latent dirichlet allocation. J Mach Learn Res. 2003; 3: 993–1022.
Blei DM, Lafferty JD. Correction: a correlated topic model of science. Ann Appl Stat. 2007; 1: 634.
Blei DM. Probabilistic topic models. Commun ACM. 2012; 55: 55–65.
Chandra Y, Jiang LC, Wang C-J (2016) Mining Social Entrepreneurship Strategies Using Topic Modeling. PLoS ONE 11(3): e0151342. doi:10.1371/journal.pone.0151342
<img src = './img/topic.png' width = 1000>
Topic models assume that each document contains a mixture of topics
Topics are considered latent/unobserved variables that stand between the documents and terms
It is impossible to directly assess the relationships between topics and documents and between topics and terms.
- What can be directly observed is the distribution of terms over documents, which is known as the document term matrix (DTM).
Topic models algorithmically identify the best set of latent variables (topics) that can best explain the observed distribution of terms in the documents.
The DTM is further decomposed into two matrices:
- a term-topic matrix (TTM)
- a topic-document matrix (TDM)
Each document can be assigned to a primary topic that demonstrates the highest topic-document probability and can then be linked to other topics with declining probabilities.
Assume K topics are in D documents, and each topic is denoted with $\phi_{1:K}$.
Each topic $\phi_K$ is a distribution of fixed words in the given documents.
The topic proportion in the document is denoted as $\theta_d$.
- e.g., the kth topic's proportion in document d is $\theta_{d, k}$.
Let $w_{d,n}$ denote the nth term in document d.
Further, topic models assign topics to a document and its terms.
- For example, the topic assigned to document d is denoted as $z_d$,
- and the topic assigned to the nth term in document d is denoted as $z_{d,n}$.
According to Blei et al. the joint distribution of $\phi_{1:K}$,$\theta_{1:D}$, $z_{1:D}$ and $w_{d, n}$ plus the generative process for LDA can be expressed as:
$$ p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d, n}) = \prod_{i=1}^{K} p(\phi_i) \prod_{d =1}^D p(\theta_d)\left(\prod_{n=1}^N p(z_{d,n} \mid \theta_d) \times p(w_{d, n} \mid \phi_{1:K}, z_{d, n}) \right) $$
Note that $\phi_{1:k},\theta_{1:D},and z_{1:D}$ are latent, unobservable variables. Thus, the computational challenge of LDA is to compute the conditional distribution of them given the observable specific words in the documents $w_{d, n}$.
Accordingly, the posterior distribution of LDA can be expressed as:
$p(\phi_{1:K}, \theta_{1:D}, z_{1:D} \mid w_{d, n}) = \frac{p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d, n})}{p(w_{1:D})}$
Because the number of possible topic structures is exponentially large, it is impossible to compute the posterior of LDA. Topic models aim to develop efficient algorithms to approximate the posterior of LDA.
- There are two categories of algorithms:
- sampling-based algorithms
- variational algorithms
Using the Gibbs sampling method, we can build a Markov chain for the sequence of random variables (see Eq 1). The sampling algorithm is applied to the chain to sample from the limited distribution, and it approximates the posterior.
Gensim
Unfortunately, scikit-learn does not support latent Dirichlet allocation.
Therefore, we are going to use the gensim package in Python.
Gensim is developed by Radim Řehůřek,who is a machine learning researcher and consultant in the Czech Republic. We must start by installing it. We can achieve this by running one of the following commands:
pip install gensim
End of explanation
# Load the data
corpus = corpora.BleiCorpus('/Users/chengjun/bigdata/ap/ap.dat', '/Users/chengjun/bigdata/ap/vocab.txt')
' '.join(dir(corpus))
corpus.id2word.items()[:3]
Explanation: Download data
http://www.cs.princeton.edu/~blei/lda-c/ap.tgz
Unzip the data and put them into /Users/chengjun/bigdata/ap/
End of explanation
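If the archive is not already on disk (the corpus loading above assumes it is), a small download-and-extract helper could look like this (a sketch; it assumes the same data directory used in the notebook and a working network connection):
import os
import tarfile
try:
    from urllib.request import urlretrieve   # Python 3
except ImportError:
    from urllib import urlretrieve            # Python 2

data_dir = '/Users/chengjun/bigdata/'
if not os.path.exists(os.path.join(data_dir, 'ap', 'ap.dat')):
    archive, _ = urlretrieve('http://www.cs.princeton.edu/~blei/lda-c/ap.tgz',
                             os.path.join(data_dir, 'ap.tgz'))
    with tarfile.open(archive) as tar:
        tar.extractall(data_dir)              # creates the ap/ folder with ap.dat and vocab.txt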
NUM_TOPICS = 100
model = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=None)
' '.join(dir(model))
Explanation: Build the topic model
End of explanation
document_topics = [model[c] for c in corpus]
# how many topics does one document cover?
document_topics[2]
# The first topic
# format: weight, term
model.show_topic(0, 10)
# The 100 topic
# format: weight, term
model.show_topic(99, 10)
words = model.show_topic(0, 5)
words
model.show_topics(4)
for f, w in words[:10]:
print(f, w)
# write out topcis with 10 terms with weights
for ti in range(model.num_topics):
words = model.show_topic(ti, 10)
tf = sum(f for f, w in words)
with open('/Users/chengjun/github/cjc2016/data/topics_term_weight.txt', 'a') as output:
for f, w in words:
line = str(ti) + '\t' + w + '\t' + str(f/tf)
output.write(line + '\n')
# We first identify the most discussed topic, i.e., the one with the
# highest total weight
topics = matutils.corpus2dense(model[corpus], num_terms=model.num_topics)
weight = topics.sum(1)
max_topic = weight.argmax()
# Get the top 64 words for this topic
# Without the argument, show_topic would return only 10 words
words = model.show_topic(max_topic, 64)
words = np.array(words).T
words_freq=[float(i)*10000000 for i in words[0]]
words = zip(words[1], words_freq)
wordcloud = WordCloud().generate_from_frequencies(words)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
num_topics_used = [len(model[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist(num_topics_used, np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
fig.tight_layout()
#fig.savefig('Figure_04_01.png')
Explanation: We can see the list of topics a document refers to
by using the model[doc] syntax:
End of explanation
# Now, repeat the same exercise using alpha=1.0
# You can edit the constant below to play around with this parameter
ALPHA = 1.0
model1 = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=ALPHA)
num_topics_used1 = [len(model1[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist([num_topics_used, num_topics_used1], np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
# The coordinates below were fit by trial and error to look good
plt.text(9, 223, r'default alpha')
plt.text(26, 156, 'alpha=1.0')
fig.tight_layout()
Explanation: We can see that about 150 documents have 5 topics,
while the majority deal with around 10 to 12 of them.
No document talks about more than 20 topics.
End of explanation
with open('/Users/chengjun/bigdata/ap/ap.txt', 'r') as f:
dat = f.readlines()
dat[:6]
dat[4].strip()[0]
docs = []
for i in dat[:100]:
if i.strip()[0] != '<':
docs.append(i)
def clean_doc(doc):
doc = doc.replace('.', '').replace(',', '')
doc = doc.replace('``', '').replace('"', '')
doc = doc.replace('_', '').replace("'", '')
doc = doc.replace('!', '')
return doc
docs = [clean_doc(doc) for doc in docs]
texts = [[i for i in doc.lower().split()] for doc in docs]
import nltk
nltk.download()
# This opens a downloader window: select 'book', click Download, and once it finishes the corpora can be used.
from nltk.corpus import stopwords
stop = stopwords.words('english') # If this raises an error, run the nltk.download() block above first.
' '.join(stop)
stop.append('said')
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1 and token not in stop]
for text in texts]
docs[8]
' '.join(texts[9])
dictionary = corpora.Dictionary(texts)
lda_corpus = [dictionary.doc2bow(text) for text in texts]
#The function doc2bow() simply counts the number of occurences of each distinct word,
# converts the word to its integer word id and returns the result as a sparse vector.
lda_model = models.ldamodel.LdaModel(
lda_corpus, num_topics=NUM_TOPICS, id2word=dictionary, alpha=None)
import pyLDAvis.gensim
ap_data = pyLDAvis.gensim.prepare(lda_model, lda_corpus, dictionary)
pyLDAvis.enable_notebook()
pyLDAvis.display(ap_data)
pyLDAvis.save_html(ap_data, '/Users/chengjun/github/cjc2016/vis/ap_ldavis.html')
Explanation: Visualizing the topic model with pyLDAvis
http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb
pip install pyldavis
Load and clean the data
End of explanation |
7,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-indexing
When dealing with Series data, it is often useful to index each element of the series with multiple labels and then select and aggregrate data based on these indices. For example, for a collection of time series data, each point in time might be identified by trial number, block number, one or more experimental conditions, etc. This tutorial shows how to create and leverage such "multi-indices" when working with Series objects.
Creating a toy data set
Let's start by building a simple Series with only a single record.
Step1: By default, the index on the series will label the elements with ascending integers.
Step2: For the sake of example, let's assume that these data represent two independent trials. Thus we might have one index describing the trial structure.
Step3: Furthermore, let's assume that each trial is broken into three blocks. This can be described with a second index.
Step4: Finally, in this simple example, we have two time points within each block.
Step5: A multi-index for this data can then be created as list of lists, where each sub-list contains one value from each of the individual indices.
Step6: To inspect the index, we look at the transpose so it lines up with the Series.
Step7: As a useful piece of terminology, we would say that the resulting multi-index has three levels
Step8: As we see above, once a single value has been selected from a certain level, the index values at that level become redundant and we might desire to discard them. This can be accomplished with the "squeeze" option.
Step9: We can also select multiple values at a given level by passing a list of values. Here we select data from blocks 2 and 3 (level = 1; value = 2 or 3).
Step10: In the most general case, we can select multiple values at multiple levels. Let's combine the previous two examples and get the 2nd and 3rd blocks (level = 1; value = 2 or 3), but only for the 1st trial (level = 0; value = 1).
Step11: Finally, we can reverse the process of "selection" (keeping only the elements that match the values) to that of "filtering" (keeping all elements except those that match the values). This is accomplished with the "filter" keyword. To demonstrate, let's get all of the blocks except for the 2nd (level = 1; value = 2).
Step12: Aggregation
The second major multi-index operation is aggregation. Aggregation can be thought of as a two-step process. First, a level is selected and the series is partitioned into pieces that share the index value at that level. Second, an aggregating function is applied to each of these partitions, and a new series is reconstituted with one element for the aggregate value computed on each piece. The aggregating function should take an array as input and return a single numeric value as output.
As a simple initial demonstration, let's find the average value of our series for each trial (level = 0).
Step13: The same operation can be called through the convenience function seriesMeanByIndex
Step14: As a more complex example, we might want aggregation with respect to the values on multiple levels. For example, we might want to examine how the maximum value at each time point (level = 2) is different across the different trials (level = 0). | Python Code:
from thunder import Series
from numpy import arange, array
data = tsc.loadSeriesFromArray(arange(12))
data.first()
Explanation: Multi-indexing
When dealing with Series data, it is often useful to index each element of the series with multiple labels and then select and aggregrate data based on these indices. For example, for a collection of time series data, each point in time might be identified by trial number, block number, one or more experimental conditions, etc. This tutorial shows how to create and leverage such "multi-indices" when working with Series objects.
Creating a toy data set
Let's start by building a simple Series with only a single record.
End of explanation
data.index
Explanation: By default, the index on the series will label the elements with ascending integers.
End of explanation
trial = array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2])
Explanation: For the sake of example, let's assume that these data represent two independent trials. Thus we might have one index describing the trial structure.
End of explanation
block = array([1, 1, 2, 2, 3, 3, 1, 1, 2, 2, 3, 3])
Explanation: Furthermore, let's assume that each trial is broken into three blocks. This can be described with a second index.
End of explanation
point = array([1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2])
Explanation: Finally, in this simple example, we have two time points within each block.
End of explanation
index = array([trial, block, point]).T
data.index = index
Explanation: A multi-index for this data can then be created as list of lists, where each sub-list contains one value from each of the individual indices.
End of explanation
data.index.T
Explanation: To inspect the index, we look at the transpose so it lines up with the Series.
End of explanation
selected = data.selectByIndex(1, level=0)
def displaySeries(series):
print "index"
print "-----"
print series.index.T
print "series"
print "------"
print series.values().first()
displaySeries(selected)
Explanation: As a useful piece of terminology, we would say that the resulting multi-index has three levels: level 0 (trial); level 1 (block); and level 2 (time point).
Selecting
There are two major pieces of multi-index functionality. The first is selection. To select a subset of the Series based on the multi-index, we choose a value and a level and then only elements where that level of the index matches the value will be retained. For instance, we could select only the data data points from the first trial (level = 0; value = 1).
End of explanation
selected = data.selectByIndex(1, level=0, squeeze=True)
displaySeries(selected)
Explanation: As we see above, once a single value has been selected from a certain level, the index values at that level become redundant and we might desire to discard them. This can be accomplished with the "squeeze" option.
End of explanation
selected = data.selectByIndex([2, 3], level=1)
displaySeries(selected)
Explanation: We can also select multiple values at a given level by passing a list of values. Here we select data from blocks 2 and 3 (level = 1; value = 2 or 3).
End of explanation
selected = data.selectByIndex([1, [2, 3]], level=[0, 1])
displaySeries(selected)
Explanation: In the most general case, we can select multiple values at multiple levels. Let's combine the previous two examples and get the 2nd and 3rd blocks (level = 1; value = 2 or 3), but only for the 1st trial (level = 0; value = 1).
End of explanation
selected = data.selectByIndex(2, level=1, filter=True)
displaySeries(selected)
Explanation: Finally, we can reverse the process of "selection" (keeping only the elements that match the values) to that of "filtering" (keeping all elements except those that match the values). This is accomplished with the "filter" keyword. To demonstrate, let's get all of the blocks except for the 2nd (level = 1; value = 2).
End of explanation
from numpy import mean
aggregated = data.seriesAggregateByIndex(mean, level=0)
displaySeries(aggregated)
Explanation: Aggregation
The second major multi-index operation is aggregation. Aggregation can be thought of as a two-step process. First, a level is selected and the series is partitioned into pieces that share the index value at that level. Second, an aggregating function is applied to each of these partitions, and a new series is reconstituted with one element for the aggregate value computed on each piece. The aggregating function should take an array as input and return a single numeric value as output.
As a simple initial demonstration, let's find the average value of our series for each trial (level = 0).
End of explanation
aggregated = data.seriesMeanByIndex(level=0)
displaySeries(aggregated)
Explanation: The same operation can be called through the convenience function seriesMeanByIndex
End of explanation
aggregated = data.seriesMaxByIndex(level=[0, 2])
displaySeries(aggregated)
Explanation: As a more complex example, we might want aggregation with respect to the values on multiple levels. For example, we might want to examine how the maximum value at each time point (level = 2) is different across the different trials (level = 0).
End of explanation |
7,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import cPickle as pickle
import numpy as np
import tensorflow as tf
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this exercise is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
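To make the describe-then-run split concrete before training the real model, here is a tiny, hypothetical sketch (the names toy_graph and toy_session are ours, not part of the exercise); it builds a trivial graph and only computes the value when run() is called.
toy_graph = tf.Graph()
with toy_graph.as_default():
  a = tf.constant(3.0)
  b = tf.constant(4.0)
  total = a + b                    # only describes the computation, nothing runs yet
with tf.Session(graph=toy_graph) as toy_session:
  print toy_session.run(total)     # executes the graph and prints 7.0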
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print 'Initialized'
for step in xrange(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print 'Loss at step', step, ':', l
print 'Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :])
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print 'Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels)
print 'Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print "Initialized"
for step in xrange(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print "Minibatch loss at step", step, ":", l
print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)
print "Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels)
print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)
Explanation: Let's run it:
End of explanation |
7,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje 2018/2019
http
Step1: 1. Klasifikator stroja potpornih vektora (SVM)
(a)
Upoznajte se s razredom svm.SVC, koja ustvari implementira sučelje prema implementaciji libsvm. Primijenite model SVC s linearnom jezgrenom funkcijom (tj. bez preslikavanja primjera u prostor značajki) na skup podataka seven (dan niže) s $N=7$ primjera. Ispišite koeficijente $w_0$ i $\mathbf{w}$. Ispišite dualne koeficijente i potporne vektore. Završno, koristeći funkciju mlutils.plot_2d_svc_problem iscrtajte podatke, decizijsku granicu i marginu. Funkcija prima podatke, oznake i klasifikator (objekt klase SVC).
Izračunajte širinu dobivene margine (prisjetite se geometrije linearnih modela).
Step2: Q
Step3: (c)
Vratit ćemo se na skupove podataka outlier ($N=8$) i unsep ($N=8$) iz prošle laboratorijske vježbe (dani niže) i pogledati kako se model SVM-a nosi s njima. Naučite ugrađeni model SVM-a (s linearnom jezgrom) na ovim podatcima i iscrtajte decizijsku granicu (skupa s marginom). Također ispišite točnost modela korištenjem funkcije metrics.accuracy_score.
Step4: Q
Step5: 3. Optimizacija hiperparametara SVM-a
Pored hiperparametra $C$, model SVM s jezgrenom funkcijom RBF ima i dodatni hiperparametar $\gamma=\frac{1}{2\sigma^2}$ (preciznost). Taj parametar također određuje složenost modela
Step6: (b)
Pomoću funkcije datasets.make_classification generirajte dva skupa podataka od $N=200$ primjera
Step7: Q
Step8: (a)
Proučite funkciju za iscrtavanje histograma hist. Prikažite histograme vrijednosti značajki $x_0$ i $x_1$ (ovdje i u sljedećim zadatcima koristite bins=50).
Step9: (b)
Proučite razred preprocessing.MinMaxScaler. Prikažite histograme vrijednosti značajki $x_0$ i $x_1$ ako su iste skalirane min-max skaliranjem (ukupno dva histograma).
Step10: Q
Step11: Q
Step12: Q
Step13: (b)
Kako biste se uvjerili da je Vaša implementacija ispravna, usporedite ju s onom u razredu neighbors.KNeighborsClassifier. Budući da spomenuti razred koristi razne optimizacijske trikove pri pronalasku najboljih susjeda, obavezno postavite parametar algorithm=brute, jer bi se u protivnom moglo dogoditi da Vam se predikcije razlikuju. Usporedite modele na danom (umjetnom) skupu podataka (prisjetite se kako se uspoređuju polja; numpy.all).
Step14: 6. Analiza algoritma k-najbližih susjeda
Algoritam k-nn ima hiperparametar $k$ (broj susjeda). Taj hiperparametar izravno utječe na složenost algoritma, pa je stoga izrazito važno dobro odabrati njegovu vrijednost. Kao i kod mnogih drugih algoritama, tako i kod algoritma k-nn optimalna vrijednost hiperametra $k$ ovisi o konkretnom problemu, uključivo broju primjera $N$, broju značajki (dimenzija) $n$ te broju klasa $K$.
Kako bismo dobili pouzdanije rezultate, potrebno je neke od eksperimenata ponoviti na različitim skupovima podataka i zatim uprosječiti dobivene vrijednosti pogrešaka. Koristite funkciju
Step15: Q
Step16: Q
Step17: Q | Python Code:
import numpy as np
import scipy as sp
import pandas as pd
import mlutils
import matplotlib.pyplot as plt
%pylab inline
Explanation: Sveučilište u Zagrebu (University of Zagreb)
Fakultet elektrotehnike i računarstva (Faculty of Electrical Engineering and Computing)
Machine Learning 2018/2019
http://www.fer.unizg.hr/predmet/su
Lab assignment 3: Support vector machines and the k-nearest neighbours algorithm
Version: 0.3
Last updated: 9 November 2018
(c) 2015-2017 Jan Šnajder, Domagoj Alagić
Published: 9 November 2018
Submission deadline: 3 December 2018 at 07:00
Instructions
The third lab assignment consists of seven exercises. Follow the instructions given in the text cells below. Completing the assignment comes down to filling in this notebook: inserting one or more cells below the text of each exercise, writing the corresponding code, and evaluating the cells.
Make sure you fully understand the code you have written. When handing in the assignment, you must be able to modify and re-evaluate your code at the request of the teaching assistant (or demonstrator). Furthermore, you must understand the theoretical basis of what you are doing, within the scope of what was covered in the lectures. Below some exercises you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). So do not limit yourself to just solving the exercise; feel free to experiment. That is precisely the purpose of these assignments.
You must work on the assignments on your own. You may consult others about the general approach to a solution, but in the end you must complete the assignment yourself. Otherwise the assignment is pointless.
End of explanation
from sklearn.svm import SVC
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, -1, -1, -1])
fit = SVC(gamma = 'scale', kernel = 'linear').fit(seven_X, seven_y)
mlutils.plot_2d_svc_problem(seven_X, seven_y, fit)
Explanation: 1. Support vector machine (SVM) classifier
(a)
Familiarize yourself with the svm.SVC class, which essentially provides an interface to the libsvm implementation. Apply an SVC model with a linear kernel (i.e., without mapping the examples into a feature space) to the seven dataset (given below) with $N=7$ examples. Print the coefficients $w_0$ and $\mathbf{w}$. Print the dual coefficients and the support vectors. Finally, using the function mlutils.plot_2d_svc_problem, plot the data, the decision boundary, and the margin. The function takes the data, the labels, and the classifier (an SVC object).
Compute the width of the obtained margin (recall the geometry of linear models).
End of explanation
from sklearn.metrics import hinge_loss
def hinge(model, x, y):
return max(0, 1-y*model.decision_function(x))
x1b = np.array([[3,2], [3.5, 2], [4,2]])
y1b = np.array([1, 1, -1])
# hinge losses for the three individual points x^(1), x^(2), x^(3) from the task
for xi, yi in zip(x1b, y1b):
    print('hinge loss for ' + str(xi) + ': ' + str(np.asarray(hinge(fit, xi.reshape(1, -1), yi)).ravel()[0]))
suma = 0
for i in range(0, len(seven_X)):
suma += hinge(fit, seven_X[i].reshape(1,-1), seven_y[i])
my_hinge_loss = suma / len(seven_X)
print('my hinge loss: ' + str(my_hinge_loss[0]))
print('built-in hinge loss: ' + str(hinge_loss(seven_y, fit.decision_function(seven_X))))
Explanation: Q: What is the width of the margin and why?
Q: Which examples are the support vectors and why?
(b)
Define a function hinge(model, x, y) that computes the hinge loss of the SVM model on the example x. Compute the losses of the model trained on the seven dataset for the examples $\mathbf{x}^{(2)}=(3,2)$ and $\mathbf{x}^{(1)}=(3.5,2)$, which are labelled positive ($y=1$), and for $\mathbf{x}^{(3)}=(4,2)$, which is labelled negative ($y=-1$). Also compute the average SVM loss on the seven dataset. Convince yourself that the result is identical to what you would get with the built-in function metrics.hinge_loss.
End of explanation
from sklearn.metrics import accuracy_score
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, -1)
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, -1)
outlier = SVC(kernel = 'linear').fit(outlier_X, outlier_y)
outlier_accuracy = accuracy_score(outlier_y, outlier.predict(outlier_X))
print('outlier acc:')
print(outlier_accuracy)
unsep = SVC(kernel = 'linear').fit(unsep_X, unsep_y)
unsep_accuracy = accuracy_score(unsep_y, unsep.predict(unsep_X))
print('unsep acc:')
print(unsep_accuracy)
figure(figsize(12,4))
subplot(1,2,1)
mlutils.plot_2d_svc_problem(outlier_X, outlier_y, outlier)
subplot(1,2,2)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, unsep)
Explanation: (c)
We return to the outlier ($N=8$) and unsep ($N=8$) datasets from the previous lab assignment (given below) and look at how the SVM model copes with them. Train the built-in SVM model (with a linear kernel) on these data and plot the decision boundary (together with the margin). Also print the accuracy of the model using the function metrics.accuracy_score.
End of explanation
C = [10**(-2), 1, 10**2]
jezgra = ['linear', 'poly', 'rbf']
k = 1
figure(figsize(12, 10))
subplots_adjust(wspace=0.1, hspace = 0.2)
for i in C:
for j in jezgra:
uns = SVC(C = i, kernel = j, gamma='scale').fit(unsep_X, unsep_y)
h = uns.predict
subplot(3,3,k)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, uns)
title(str(i) + ', ' + j);
k+=1
Explanation: Q: How does the outlier affect the SVM?
Q: How does a linear SVM cope with a linearly inseparable dataset?
2. Nonlinear SVM
This exercise shows how the choice of kernel affects the capacity of the SVM. On the unsep dataset from the previous exercise, train three SVM models with different kernel functions: linear, polynomial, and radial basis function (RBF). Vary the hyperparameter $C$ over the values $C\in{10^{-2},1,10^2}$, while using the default values for the other hyperparameters (the degree of the polynomial for the polynomial kernel and the hyperparameter $\gamma$ for the RBF kernel). Show the boundaries between the classes (and the margins) on a plot organized as a $3x3$ grid, where the columns are the different kernels and the rows are the different values of the parameter $C$.
End of explanation
from sklearn.metrics import accuracy_score, zero_one_loss
def grid_search(X_train, X_validate, y_train, y_validate, c_range=(0,5), g_range=(0,5), error_surface=False):
    # Your code here
pass
Explanation: 3. Optimizing the SVM hyperparameters
In addition to the hyperparameter $C$, an SVM model with an RBF kernel has an additional hyperparameter $\gamma=\frac{1}{2\sigma^2}$ (the precision). This parameter also determines the complexity of the model: a large value of $\gamma$ means that the RBF will be narrow, so the examples will be mapped into a space in which they are (in terms of the scalar product) very different from one another, which results in more complex models. Conversely, a small value of $\gamma$ means that the RBF will be wide and the examples will be more similar to one another, which results in simpler models. This also means that if we choose a larger $\gamma$, we need to regularize the model more strongly, i.e., choose a smaller $C$, to prevent overfitting. For this reason the hyperparameters $C$ and $\gamma$ need to be optimized jointly, which is typically done by exhaustive grid search. This approach is used for all models that have more than one hyperparameter.
(a)
Define a function
grid_search(X_train, X_validate, y_train, y_validate, c_range=(c1,c2), g_range=(g1,g2), error_surface=False)
that optimizes the parameters $C$ and $\gamma$ by grid search. The function should search over the hyperparameters $C\in{2^{c_1},2^{c_1+1},\dots,2^{c_2}}$ and $\gamma\in{2^{g_1},2^{g_1+1},\dots,2^{g_2}}$. The function should return the optimal hyperparameters $(C^*,\gamma^*)$, i.e., those for which the model achieves the smallest error on the validation set. Additionally, if error_surface=True, the function should return matrices (of type ndarray) with the model's error (expected 0-1 loss) on the training set and on the validation set. Each matrix has dimensions $(c_2-c_1+1)\times(g_2-g_1+1)$ (rows correspond to different values of $C$, columns to different values of $\gamma$).
End of explanation
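Below is a minimal sketch of one way to implement the function described above, not the official solution: the name grid_search_sketch, the use of zero_one_loss as the error measure, and the exact return format when error_surface=True are our own choices for illustration.
def grid_search_sketch(X_train, X_validate, y_train, y_validate,
                       c_range=(0, 5), g_range=(0, 5), error_surface=False):
    cs = list(range(c_range[0], c_range[1] + 1))
    gs = list(range(g_range[0], g_range[1] + 1))
    train_err = np.zeros((len(cs), len(gs)))
    valid_err = np.zeros((len(cs), len(gs)))
    best_c, best_g, best_err = None, None, np.inf
    for i, c in enumerate(cs):
        for j, g in enumerate(gs):
            clf = SVC(C=2.0**c, gamma=2.0**g, kernel='rbf').fit(X_train, y_train)
            train_err[i, j] = zero_one_loss(y_train, clf.predict(X_train))
            valid_err[i, j] = zero_one_loss(y_validate, clf.predict(X_validate))
            if valid_err[i, j] < best_err:
                best_c, best_g, best_err = 2.0**c, 2.0**g, valid_err[i, j]
    if error_surface:
        return best_c, best_g, train_err, valid_err
    return best_c, best_g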
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# Your code here
Explanation: (b)
Using the function datasets.make_classification, generate two datasets of $N=200$ examples each: one with $n=2$ dimensions and one with $n=1000$ dimensions. The examples should come from two classes, with two clusters per class (n_clusters_per_class=2), so that the problem is somewhat more complex, i.e., more nonlinear. Let all features be informative. Split the set of examples into a training set and a test set in a 1:1 ratio.
On both datasets, optimize an SVM with an RBF kernel over the grid $C\in{2^{-5},2^{-4},\dots,2^{15}}$ and $\gamma\in{2^{-15},2^{-14},\dots,2^{3}}$. Show the error surface of the model on the training set and on the validation set, for both datasets (four plots in total), and print the optimal combinations of hyperparameters. You can use the function mlutils.plot_error_surface to display the model's error surface.
End of explanation
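A hedged sketch of how the two datasets could be generated and fed to a grid search such as grid_search_sketch above (or your own grid_search once implemented); the random_state values are arbitrary, plotting with mlutils.plot_error_surface is omitted because its exact signature is course-specific, and the full grid is large, so this can take a while.
for n_dim in (2, 1000):
    Xg, yg = make_classification(n_samples=200, n_features=n_dim, n_informative=n_dim,
                                 n_redundant=0, n_repeated=0,
                                 n_clusters_per_class=2, random_state=42)
    Xg_tr, Xg_va, yg_tr, yg_va = train_test_split(Xg, yg, test_size=0.5, random_state=42)
    print(n_dim, grid_search_sketch(Xg_tr, Xg_va, yg_tr, yg_va,
                                    c_range=(-5, 15), g_range=(-15, 3)))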
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=500,n_features=2,n_classes=2,n_redundant=0,n_clusters_per_class=1, random_state=69)
X[:,1] = X[:,1]*100+1000
X[0,1] = 3000
mlutils.plot_2d_svc_problem(X, y)
Explanation: Q: Does the error surface differ between the training set and the test set? Why?
Q: In the error-surface plot, which part of the surface corresponds to overfitting and which to underfitting? Why?
Q: How does the number of dimensions $n$ affect the error surface, i.e., the optimal hyperparameters $(C^*, \gamma^*)$?
Q: The recommendation is that an increase in the value of $\gamma$ should be accompanied by a decrease in the value of $C$. Do your results support this recommendation? Explain.
4. The effect of feature standardization on the SVM
In the first lab assignment we showed how features on different scales can make the learned linear regression model impossible to interpret. However, this problem arises with many models, so it is almost always important to scale the features before training, to prevent features with larger numeric ranges from dominating those with smaller numeric ranges. This also holds for the SVM, where scaling can often substantially improve the results. The purpose of this exercise is to experimentally determine the effect of feature scaling on the accuracy of the SVM.
We will generate a two-class dataset of $N=500$ examples with $n=2$ features, such that dimension $x_1$ has larger values and a larger range than dimension $x_0$, and we will add one example whose value of feature $x_1$ stands out from the other examples:
End of explanation
figure(figsize(12, 4))
subplot(1,2,1)
hist(X[:,0], bins = 50);
subplot(1,2,2)
hist(X[:,1], bins = 50);
Explanation: (a)
Study the histogram-plotting function hist. Show the histograms of the values of features $x_0$ and $x_1$ (here and in the following exercises use bins=50).
End of explanation
from sklearn.preprocessing import MinMaxScaler
x0b = MinMaxScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = MinMaxScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(12, 4))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
Explanation: (b)
Study the class preprocessing.MinMaxScaler. Show the histograms of the values of features $x_0$ and $x_1$ when they are scaled with min-max scaling (two histograms in total).
End of explanation
from sklearn.preprocessing import StandardScaler
x0b = StandardScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = StandardScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(12, 4))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
Explanation: Q: How does this scaling work? <br>
Q: The resulting histograms are very similar. What is the difference? <br>
(c)
Study the class preprocessing.StandardScaler. Show the histograms of the values of features $x_0$ and $x_1$ when they are scaled with standard scaling (two histograms in total).
End of explanation
from sklearn.model_selection import train_test_split
err_unscaled = []
err_std = []
err_minmax = []
for i in range(0, 30):
X_train, X_validate, y_train, y_validate = train_test_split(X, y, test_size = 0.5)
model_unscaled = SVC(gamma = 'scale').fit(X_train, y_train)
prediction_unscaled = model_unscaled.predict(X_validate)
err_unscaled.append(accuracy_score(y_validate, prediction_unscaled))
std_scale = StandardScaler()
X_std_train = std_scale.fit_transform(X_train) # fit the scaler on the training data and scale it
X_std_valid = std_scale.transform(X_validate) # scale the validation data using the training-set parameters
model2 = SVC(gamma = 'scale').fit(X_std_train, y_train) # train the SVM on the scaled training set
h2 = model2.predict(X_std_valid)
err_std.append(accuracy_score(y_validate, h2))
minmax_scale = MinMaxScaler()
X_minmax_train = minmax_scale.fit_transform(X_train)
X_minmax_valid = minmax_scale.transform(X_validate)
model3 = SVC(gamma = 'scale').fit(X_minmax_train, y_train)
h3 = model3.predict(X_minmax_valid)
err_minmax.append(accuracy_score(y_validate, h3))
print('Unscaled')
print(mean(err_unscaled))
print('Std')
print(mean(err_std))
print('Min max')
print(mean(err_minmax))
Explanation: Q: How does this scaling work? <br>
Q: The resulting histograms are very similar. What is the difference? <br>
(d)
Split the set of examples into a training set and a test set in a 1:1 ratio. Train an SVM with an RBF kernel on the training set and test the accuracy of the model on the test set, using three variants of the above dataset: unscaled features, standardized features, and min-max scaling. Use the default values for $C$ and $\gamma$. Measure the accuracy of each of the three models on the training set and on the test set. Repeat the procedure several times (e.g., 30) and average the results (in each repetition, generate the data as given at the beginning of this exercise).
NB: On the training set you should first compute the scaling parameters and then apply the scaling (the fit_transform function), while on the test set you should only apply the scaling with the parameters obtained on the training set (the transform function).
End of explanation
from numpy.linalg import norm
class KNN(object):
    def __init__(self, n_neighbors=3):
        # Your code here
        pass
    def fit(self, X_train, y_train):
        # Your code here
        pass
    def predict(self, X_test):
        # Your code here
        pass
Explanation: Q: Are the results as expected? Explain. <br>
Q: Would it be a good idea to apply the fit_transform function on the whole dataset? Why? Would it be a good idea to apply that function separately on the training set and separately on the test set? Why?
5. The k-nearest neighbours algorithm
In this exercise we consider a simple classification model called the k-nearest neighbours algorithm. First you will implement it yourself, to become thoroughly familiar with how the model works, and then you will move on to analysing its hyperparameters (using the built-in class, for efficiency).
(a)
Implement a class KNN that implements the $k$ nearest neighbours algorithm. The optional constructor parameter is the number of neighbours n_neighbours ($k$), whose default value is 3. Define the methods fit(X, y) and predict(X), which are used for training the model and for prediction, respectively. Use the Euclidean distance as the distance measure (numpy.linalg.norm; pay attention to the axis parameter). It is not necessary to implement any weighting function.
End of explanation
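A minimal sketch of a possible implementation (majority vote over Euclidean distances, no weighting); the class name KNNSketch is ours so that it does not replace the KNN stub above.
class KNNSketch(object):
    def __init__(self, n_neighbors=3):
        self.n_neighbors = n_neighbors
    def fit(self, X_train, y_train):
        self.X_train = np.asarray(X_train)
        self.y_train = np.asarray(y_train)
        return self
    def predict(self, X_test):
        predictions = []
        for x in np.asarray(X_test):
            dists = norm(self.X_train - x, axis=1)              # Euclidean distances to all training points
            nearest = np.argsort(dists)[:self.n_neighbors]      # indices of the k closest points
            values, counts = np.unique(self.y_train[nearest], return_counts=True)
            predictions.append(values[np.argmax(counts)])       # majority vote
        return np.array(predictions)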
from sklearn.datasets import make_classification
X_art, y_art = make_classification(n_samples=100, n_features=2, n_classes=2,
n_redundant=0, n_clusters_per_class=2,
random_state=69)
mlutils.plot_2d_clf_problem(X_art, y_art)
from sklearn.neighbors import KNeighborsClassifier
# Your code here
Explanation: (b)
To convince yourself that your implementation is correct, compare it with the one in the class neighbors.KNeighborsClassifier. Since that class uses various optimization tricks for finding the nearest neighbours, be sure to set the parameter algorithm=brute, otherwise your predictions might differ. Compare the models on the given (artificial) dataset (recall how arrays are compared; numpy.all).
End of explanation
# Your code here
Explanation: 6. Analysis of the k-nearest neighbours algorithm
The k-nn algorithm has a hyperparameter $k$ (the number of neighbours). This hyperparameter directly affects the complexity of the algorithm, so it is extremely important to choose its value well. As with many other algorithms, the optimal value of the hyperparameter $k$ for k-nn depends on the specific problem, including the number of examples $N$, the number of features (dimensions) $n$, and the number of classes $K$.
To obtain more reliable results, some of the experiments need to be repeated on different datasets and the obtained error values then averaged. Use the function: mlutils.knn_eval, which trains and tests a k-nearest neighbours model on a total of n_instances examples, in such a way that for each hyperparameter value from the given interval k_range it repeats n_trials measurements, generating a new dataset for each of them and splitting it into a training set and a test set. The proportion of the test set is defined by the parameter test_size. The return value of the function is a quadruple (ks, best_k, train_errors, test_errors). The value best_k is the optimal value of the hyperparameter $k$ (the value for which the error on the test set is smallest). The values train_errors and test_errors are lists of errors on the training set and on the test set, respectively, for all considered values of the hyperparameter $k$, while ks stores all the considered values of the hyperparameter $k$.
(a)
On the data from exercise 5, use the function mlutils.plot_2d_clf_problem to plot the example space and the regions corresponding to the first and the second class. Repeat this for $k\in[1, 5, 20, 100]$.
NB: The implementation of the KNeighborsClassifier algorithm from the scikit-learn package will probably run faster than your implementation, so use it in the remaining experiments.
End of explanation
# Your code here
Explanation: Q: How does $k$ affect the shape of the boundary between the classes?
Q: How does the algorithm behave in the extreme cases $k=1$ and $k=100$?
(b)
Using the function mlutils.knn_eval, plot the training and test errors as functions of the hyperparameter $k\in{1,\dots,20}$, for $N={100, 500, 1000, 3000}$ examples. Make 4 separate plots (generate them in a 2x2 grid). In each iteration, print the optimal value of the hyperparameter $k$ (most easily as the title of the plot; see plt.title).
End of explanation
# Your code here
Explanation: Q: How does the optimal value of the hyperparameter $k$ change with respect to the number of examples $N$? Why?
Q: Which region corresponds to overfitting and which to underfitting of the model? Why?
Q: Is it always possible to reach an error of 0 on the training set?
(c)
To check how sensitive the k-nearest neighbours algorithm is to the presence of irrelevant features, we can use the function datasets.make_classification to generate a set of examples in which some of the features are irrelevant. Specifically, the parameter n_informative determines the number of relevant features, while the parameter n_features determines the total number of features. If n_features > n_informative, then some of the features will be irrelevant. Instead of using the function make_classification directly, we will use the function mlutils.knn_eval, which simply passes these parameters on but allows us to obtain more reliable estimates.
Use the function mlutils.knn_eval in two ways. In both, use $N=1000$ examples, $n=10$ features, and $K=5$ classes, but in the first let all 10 features be relevant, and in the second let only 5 of the 10 features be relevant. Print the training and test errors of both models for the optimal value of $k$ (the value for which the test error is smallest).
End of explanation
from sklearn.metrics.pairwise import pairwise_distances
from numpy.random import random
# Your code here
Explanation: Q: Is the k-nearest neighbours algorithm sensitive to irrelevant features? Why?
Q: Is this problem also pronounced in the other models we have covered so far (e.g., logistic regression)?
Q: How would the k-nearest neighbours model behave on a dataset with features on different scales? Explain in detail.
7. "The curse of dimensionality"
"The curse of dimensionality" is a collective name for a number of phenomena associated with high-dimensional spaces. These phenomena, which mostly go against our intuition, in most cases cause the accuracy of a model to decrease as the number of dimensions (features) grows.
In general, increasing the number of dimensions causes all points in the input space to become (in terms of Euclidean distance) ever more distant from one another and, consequently, the differences between the distances between points are lost. We will check experimentally that this is indeed the case. Study the function metrics.pairwise.pairwise_distances. Generate 100 random vectors in different dimensions $n\in[1,2,\ldots,50]$ and compute the average Euclidean distance between all pairs of those vectors. Use the function numpy.random.random to generate the random vectors. On the same plot, sketch the curves of the average distances for the Euclidean and the cosine distance (the metric parameter).
End of explanation |
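One possible sketch of the experiment described above (variable names are ours); the zero self-distances on the diagonal slightly lower the averages but do not change the trend.
dims = list(range(1, 51))
avg_euclidean, avg_cosine = [], []
for n in dims:
    V = random((100, n))                                   # 100 random vectors in n dimensions
    avg_euclidean.append(pairwise_distances(V, metric='euclidean').mean())
    avg_cosine.append(pairwise_distances(V, metric='cosine').mean())
plt.figure(figsize=(10, 5))
plt.plot(dims, avg_euclidean, label='euclidean')
plt.plot(dims, avg_cosine, label='cosine')
plt.xlabel('number of dimensions n')
plt.ylabel('average pairwise distance')
plt.legend()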
7,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Chukwuemeka Mba-Kalu </center> <center> Joseph Onwughalu </center>
<center> An Analysis of the Brazilian Economy between 2000 and 2012 </center>
<center> Final Project In Partial Fulfillment of the Course Requirements </center> <center> Data Bootcamp </center>
<center> Stern School of Business, NYU Spring 2017 </center> <center> May 12, 2017 </center>
The Brazilian Economy
In this project we examine in detail different complexities of Brazil’s growth between the years 2000-2012. During this period, Brazil set an example for many of the major emerging economies in Latin America, Africa, and Asia.
From the years 2000-2012, Brazil was one of the fastest growing major economies in the world. It is the 8th largest economy in the world, with its GDP totalling 2.2 trillion dollars and GDP per Capita being at 10,308 dollars. While designing this project, we were interested to find out more about the main drivers of the Brazilian economy. Specifically, we aim to look at specific trends and indicators that directly affect economic growth, especially in fast-growing countries such as Brazil. Certain trends include household consumption and its effects on the GDP, bilateral aid and investment flows and its effects on the GDP per capita growth. We also aim to view the effects of economic growth on climate change and public health by observing the carbon emissions percentage changes and specific indicators like the mortality rate.
We will be looking at generally accepted economic concepts and trends, making some hypotheses, and comparing our hypotheses to the Brazil data we have. Did Brazil follow these trends on its path to economic growth?
Methodology - Data Acquisition
All the data we are using in this project was acquired from the World Bank and can be accessed and downloaded from the website. By going on the website and searching for “World data report” we were given access to information that has to be submitted by the respective countries on the site. By clicking “Brazil,” we’re shown the information of several economic indicators and their respective data over a time period of 2000-2012 that we downloaded as an excel file. We picked more than 20 metrics to include in our data, such as
Step1: Reading in and Cleaning up the Data
We downloaded our data in xlxs, retained and renamed the important columns, and deleted rows without enough data. We alse transposed the table to make it easier to plot diagrams.
Step2: GDP Growth and GDP Growth Rate in Brazil
To demonstrate Brazil's strong economic growht between 2000 and 2012, here are a few charts illustrating Brazil's GDP growth.
Gross domestic product (GDP) is the monetary value of all the finished goods and services produced within a country's borders in a specific time period. Though GDP is usually calculated on an annual basis, it can be calculated on a quarterly basis as well. GDP includes all private and public consumption, government outlays, investments and exports minus imports that occur within a defined territory. Put simply, GDP is a broad measurement of a nation’s overall economic activity.
GDP per Capita is a measure of the total output of a country that takes gross domestic product (GDP) and divides it by the number of people in the country.
Read more on Investopedia
Step3: GDP Growth vs. GDP Growth Rate
While Brazil's GDP was growing quite consistently over the 12 years, its GDP growth-rate was not steady with negative growth during the 2008 financial crisis.
Hypothesis
Step4: Actual
Step5: Actual
Step6: Actual
Step7: Actual
Step8: Actual
Step9: Population Growth
The population growth rate has a negative correlation with GDP per capita. Our explanation is that, as economies advance, the birth rate is expected to decrease. This generally causes population growth rate to fall and GDP per Capita to rise.
Hypothesis
Step10: Actual | Python Code:
# Important Packages
import pandas as pd
import matplotlib.pyplot as plt
import sys
import datetime as dt
print('Python version is:', sys.version)
print('Pandas version:', pd.__version__)
print('Date:', dt.date.today())
Explanation: <center> Chukwuemeka Mba-Kalu </center> <center> Joseph Onwughalu </center>
<center> An Analysis of the Brazilian Economy between 2000 and 2012 </center>
<center> Final Project In Partial Fulfillment of the Course Requirements </center> <center> Data Bootcamp </center>
<center> Stern School of Business, NYU Spring 2017 </center> <center> May 12, 2017 </center>
The Brazilian Economy
In this project we examine in detail different complexities of Brazil’s growth between the years 2000-2012. During this period, Brazil set an example for many of the major emerging economies in Latin America, Africa, and Asia.
From the years 2000-2012, Brazil was one of the fastest growing major economies in the world. It is the 8th largest economy in the world, with its GDP totalling 2.2 trillion dollars and GDP per Capita being at 10,308 dollars. While designing this project, we were interested to find out more about the main drivers of the Brazilian economy. Specifically, we aim to look at specific trends and indicators that directly affect economic growth, especially in fast-growing countries such as Brazil. Certain trends include household consumption and its effects on the GDP, bilateral aid and investment flows and its effects on the GDP per capita growth. We also aim to view the effects of economic growth on climate change and public health by observing the carbon emissions percentage changes and specific indicators like the mortality rate.
We will be looking at generally accepted economic concepts and trends, making some hypotheses, and comparing our hypotheses to the Brazil data we have. Did Brazil follow these trends on its path to economic growth?
Methodology - Data Acquisition
All the data we are using in this project was acquired from the World Bank and can be accessed and downloaded from the website. By going on the website and searching for “World data report” we were given access to information that has to be submitted by the respective countries on the site. By clicking “Brazil,” we’re shown the information of several economic indicators and their respective data over a time period of 2000-2012 that we downloaded as an excel file. We picked more than 20 metrics to include in our data, such as:
* Population
* GDP (current US Dollars)
* Household final consumption expenditure, etc. (% of GDP)
* General government final consumption expenditure (current US Dollars)
* Life expectancy at birth, total (years)
For all of our analysis and data we will be looking at the 2000-2012 time period and have filtered the spreadsheets accordingly to reflect this information.
End of explanation
path = 'C:\\Users\\emeka_000\\Desktop\\Bootcamp_Emeka.xlsx'
odata = pd.read_excel(path,
usecols = ['Series Name','2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #retained only the necessary columns
odata.columns = ['Metric', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #easier column names
odata = odata.drop([20, 21, 22, 23, 24]) ##delete NaN values
odata = odata.transpose() #transpose to make diagram easier
odata #data with metrics description for the chart below
data = pd.read_excel(path,
usecols = ['2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #same data but modified for pandas edits
data.columns = ['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #all columns are now string
data = data.transpose() #data used for the rest of the project
Explanation: Reading in and Cleaning up the Data
We downloaded our data in xlxs, retained and renamed the important columns, and deleted rows without enough data. We alse transposed the table to make it easier to plot diagrams.
End of explanation
data[4].plot(kind = 'line', #line plot
title = 'Brazil Yearly GDP (2000-2012) (current US$)', #title
fontsize=15,
color='Green',
linewidth=4, #width of plot line
figsize=(20,5),).title.set_size(20) #set figure size and title size
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP (current US$) * 1e12").set_size(15) #set x and y axis, with their sizes
data[6].plot(kind = 'line',
title = 'Brazil Yearly GDP Per Capita (2000-2012) (current US$)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP per capita (current US$)").set_size(15)
data[5].plot(kind = 'line',
title = 'Brazil Yearly GDP Growth (2000-2012) (%)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP Growth (%)").set_size(15)
Explanation: GDP Growth and GDP Growth Rate in Brazil
To demonstrate Brazil's strong economic growth between 2000 and 2012, here are a few charts illustrating Brazil's GDP growth.
Gross domestic product (GDP) is the monetary value of all the finished goods and services produced within a country's borders in a specific time period. Though GDP is usually calculated on an annual basis, it can be calculated on a quarterly basis as well. GDP includes all private and public consumption, government outlays, investments and exports minus imports that occur within a defined territory. Put simply, GDP is a broad measurement of a nation’s overall economic activity.
GDP per Capita is a measure of the total output of a country that takes gross domestic product (GDP) and divides it by the number of people in the country.
Read more on Investopedia
End of explanation
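A rough sanity check of the quoted figures (our own arithmetic, not World Bank data): dividing the quoted GDP by the quoted GDP per capita should approximately recover the population.
implied_population = 2.2e12 / 10308          # quoted GDP / quoted GDP per capita
print('Implied population: %.0f million' % (implied_population / 1e6))
# roughly 213 million, the same order as Brazil's actual population (~200 million);
# the gap suggests the two quoted figures refer to slightly different years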
fig, ax1 = plt.subplots(figsize = (20,5))
y1 = data[8]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'green') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Household Consumption (% of GDP) vs. GDP").set_size(20)
Explanation: GDP Growth vs. GDP Growth Rate
While Brazil's GDP was growing quite consistently over the 12 years, its GDP growth-rate was not steady with negative growth during the 2008 financial crisis.
Hypothesis: Household Consumption vs. Foreign Aid
Our hypothesis is that household consumption is a bigger driver of the Brazilian economy than foreign aid. With their rising incomes, Brazilians are expected to be empowered with larger disposable incomes to spend on goods and services. Foreign aid, on the other hand, might not filter down to the masses for spending.
End of explanation
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[11]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'red') #Net official development assistance
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Aid vs. GDP").set_size(20)
Explanation: Actual: Household Consumption
GDP comprises of household consumption, net investments, government spending and net exports;increases or decreases in any of these areas would affect the overall GDP respectively. The data shows that despite household consumption decreasing as a % of GDP, the GDP was growing. We found this a little strange and difficult to understand. One explanation for this phenomenon could be that as emerging market economies continue to expand, there is an increased shift towards investments and government spending.
The blue line represents GDP growth and the green line represents Household Consumption.
End of explanation
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[2]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'yellow') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Direct Investment (Inflows) (% of GDP) vs. GDP").set_size(20)
Explanation: Actual: Foreign Aid
Regarding foreign aid, it should be the case that with decreases in aid there will be reduced economic growth because many developing countries do rely on that as a crucial resource. The data shows a positive corellation for Brazil. While household spending was not a major driver of the Brazil's GDP growth, foreign aid played a big role. We will now explore how foreign direct investment and government spending can affect economic growth.
The blue line represents GDP growth and the red line represents Foreign Aid.
Hypothesis: Foreign Direct Investment vs. Government Spending
For emerging market economies, the general trend is that Governments contribute a significant proportion to the GDP. Given that Brazil experienced growth between the years 2000-2012, it is expected that a consequence was increased foreign direct investment. Naturally, we’d like to compare the increases in Government Spending versus this foreign direct investment and see who generally contributed more to the GDP growth of the country.
Our hypothesis is that the increased foreign direct investment was a bigger contributor to the GDP growth than government spending. With increased globalisation, we expect many multinationals and investors started business operations in Brazil due to its large, fast-growing market.
End of explanation
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[14]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'purple') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Government Spending vs. GDP").set_size(20)
Explanation: Actual: Foreign Direct Investment
Contrary to popular belief and economic concepts, increased foreign direct investment did not act as a major contributor to the GDP growth Brazil experienced. There is no clear general trend or correlation between FDI and GDP growth.
End of explanation
data.plot.scatter(x = 5, y = 0,
title = 'Population Growth vs. GDP Growth',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP Growth Rate").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
Explanation: Actual: Government Spending
It is clear that government spending is positively corellated with the total GDP growth Brazil experienced. We believe that this was the major driver for Brazil's growth.
Hypothesis: Population Growth and GDP per capita
Brazil’s population growth continued to increase during this time period of 2000-2012. As mentioned earlier, Brazil’s GDP growth was also growing during the same time period. Given that GDP per capita is a nice economic indicator to highlight standard of living in a country, we wanted to see if the increasing population was negating the effects of increased economic growth.
Our hypothesis is that even though the population was growing, GDP per capita generally increased at a higher rate over the years and, all else being equal, this points to rising living standards in Brazil. Such a finding would show that GDP was growing at a faster rate than the population.
End of explanation
data.plot.scatter(x = 6, y = 0,
title = 'Population Growth vs. GDP per Capita',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP per Capita").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
Explanation: Actual: Population Growth
There is no clear correlation between the population growth rate and the overall GDP growth rate; the aggregate GDP growth figure already absorbs increases and decreases in population.
End of explanation
data[15].plot(kind = 'bar',
title = 'Renewable energy consumption (% of total) (2000-2012)',
fontsize=15,
color='green',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[12].plot(kind = 'bar',
title = 'CO2 emissions from liquid fuel consumption (2000-2012)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[13].plot(kind = 'bar',
title = 'CO2 emissions from gaseous fuel consumption (2000-2012)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
Explanation: Population Growth
The population growth rate has a negative correlation with GDP per capita. Our explanation is that, as economies advance, the birth rate is expected to decrease. This generally causes population growth rate to fall and GDP per Capita to rise.
Hypothesis: Renewable Energy Expenditures and CO2 Emissions
What one would expect is that as a country’s economy grows, its investments in renewable energy methods would increase as well. Such actions should lead to a decrease in CO2 emissions as cleaner energy processes are being applied. Our hypothesis disagrees with this.
We believe that despite there being significant increases in renewable energy expenditures due to increased incomes and a larger, more diversified economy, there will still be more than proportionate increases in CO2 emissions. By testing this hypothesis we will begin to understand why this may or may not be the case.
End of explanation
data.plot.scatter(x = 7, y = 19, #scatter plot
title = 'Health Expenditures vs. Life Expectancy',
figsize=(20,5)).title.set_size(20)
plt.xlabel("Health Expenditures").set_size(15)
plt.ylabel("Life Expectancy").set_size(15)
Explanation: Actual: Renewable Energy Consumption vs. CO2 Emissions
As countries continue to grow their economies, it is expected that people’s incomes will continue to rise. Increased disposable incomes should lead to cleaner energy consumption, but as our hypothesis states, CO2 emissions still continue to rise. This could be due to the increase in population, as more people are consuming carbon-intensive goods and products.
Hypothesis: Health Expenditures and Life Expectancy
There should be a positive correlation between health expenditures and life expectancy. Naturally, the more a country spends on healthcare, the higher the life expectancy ought to be. Our hypothesis agrees with this, and we’d like to test it. If it turns out that increased health expenditure positively affects life expectancy, then we can attribute the increase to an improved economy that allows for more health spending by individuals, organisations and institutions.
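A quick way to check this numerically, assuming (as in the scatter-plot code above) that column 7 of the data frame holds health expenditure and column 19 holds life expectancy:
# Pearson correlation between health expenditure and life expectancy
health_life_corr = data[7].corr(data[19])
print(health_life_corr)  # a value close to +1 would support the hypothesis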
End of explanation |
7,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test external lookup table (multi-dimensional np array) code for rule 18
Step1: External transduction of CA.get_spacetime() | Python Code:
# Imports needed by the cells below (harmless if already imported earlier); the CA helpers
# (lookup_table, ECA, CA, transducer, ...) are assumed to come from the notebook's supporting module.
import numpy as np
from itertools import product

A = 2
r = 1
table = lookup_table(18, 2, 1)
R = 2*r + 1
scan = tuple(np.arange(0,A)[::-1])
for a in product(scan, repeat = R):
print a, table[a]
x = [0,1,1,0]
print neighborhood(x, 1)
for item in neighborhood(x, 1):
print item
print max_rule(2,1)
print example.current_state()
example.evolve(1)
print example.current_state()
example.reset([0,]*5 + [1] + [0,]*5)
print example.current_state()
example.evolve(1)
print example.current_state()
print np.random.choice([0,1], 10 )
start = n_ones(20, 1)
test = ECA(18, start)
test.evolve(10)
print test.get_spacetime()
test.rewind(5)
print 'break'
print test.get_spacetime()
domain_state = domain_18(20)
print domain_state
t = transducer(*transducer_18())
print t.scan(domain_state)
dim = 400
start = random_state(dim, 2)
test = CA(18, 2, 1, start)
test.evolve(dim)
diagram(test.get_spacetime(), edgecolors='none')
Explanation: Test external lookup table (multi-dimensional np array) code for rule 18
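The lookup_table helper itself is not shown in this excerpt. As a minimal sketch (assuming the standard Wolfram rule-number convention, which may differ from the actual helper), such a table could be built like this:
import numpy as np
from itertools import product

def lookup_table_sketch(rule, A=2, r=1):
    # Enumerate neighborhoods from (A-1,...,A-1) down to (0,...,0); the i-th
    # neighborhood gets the i-th most significant base-A digit of `rule`.
    R = 2 * r + 1
    table = np.zeros((A,) * R, dtype=int)
    for i, hood in enumerate(product(range(A - 1, -1, -1), repeat=R)):
        table[hood] = (rule // A ** (A ** R - 1 - i)) % A
    return table

rule18 = lookup_table_sketch(18)
# rule18[(1, 0, 0)] == 1 and rule18[(0, 0, 1)] == 1; all other neighborhoods map to 0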
End of explanation
t = transducer(*transducer_18())
transduced = t.spacetime_scan(test.get_spacetime(), direction ='left')
diagram(transduced, colors = plt.cm.bone, edgecolors = 'none')
masked = t.spacetime_mask(test.get_spacetime(), 2)
diagram(masked, colors = plt.cm.bone, edgecolors = 'none')
filtered = t.spacetime_filter(test.get_spacetime(), fill=True)
diagram(filtered, colors = plt.cm.Blues, edgecolors = 'none')
A = 3
R = 2
print max_rule(A, R)
start = random_state(500, A)
test = CA(68513215896546121325651496846315418, A, R, start)
test.evolve(500)
diagram(test.get_spacetime(), colors = plt.cm.spring, edgecolors = 'none')
Explanation: External transduction of CA.get_spacetime()
End of explanation |
7,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Precipitation in the Meteorology component
Goal
Step1: Programmatically create a file holding the precipitation rate time series. This will mimic what I'll need to do in WMT, where I'll have access to the model time step and run duration. Start by defining the precipitation rate values
Step2: Next, write the values to a file to the input directory, where it's expected by the cfg file
Step3: Check the file
Step4: BMI component
Import the BMI Meteorology component and create an instance
Step5: Initialize the model. A value of snow depth h_snow is needed for the model to update.
Step6: Unlike when P is a scalar, the initial model precipitation volume flux is the first value from precip_rates.txt
Step7: Advance the model by one time step
Step8: Unlike the scalar case, there's an output volume flux of precipitation
Step9: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step
Step10: Check the time and flux values
Step11: Result
Step12: Initialize the model.
Step13: The initial model precipitation volume flux is the first value from precip_rates.txt
Step14: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step
Step15: Check the time and flux values (noting that I've included the time = 0.0 value here) | Python Code:
mps_to_mmph = 1000 * 3600
Explanation: Precipitation in the Meteorology component
Goal: In this example, I give the Meteorology component a time series of precipitation values and check whether it produces output when the model state is updated.
Define a helpful constant:
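For reference, the factor is just the unit conversion from metres per second to millimetres per hour:
$$1~\text{m s}^{-1} = 1000~\text{mm s}^{-1} \times 3600~\text{s hr}^{-1} = 3.6\times10^{6}~\text{mm hr}^{-1}$$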
End of explanation
import numpy as np
n_steps = 10 # can get from cfg file
precip_rates = np.linspace(5, 20, num=n_steps, endpoint=False)
precip_rates
Explanation: Programmatically create a file holding the precipitation rate time series. This will mimic what I'll need to do in WMT, where I'll have access to the model time step and run duration. Start by defining the precipitation rate values:
End of explanation
np.savetxt('./input/precip_rates.txt', precip_rates, fmt='%6.2f')
Explanation: Next, write the values to a file to the input directory, where it's expected by the cfg file:
End of explanation
cat input/precip_rates.txt
Explanation: Check the file:
End of explanation
from topoflow.components.met_base import met_component
m = met_component()
Explanation: BMI component
Import the BMI Meteorology component and create an instance:
End of explanation
m.initialize('./input/meteorology-1.cfg')
m.h_snow = 0.0 # Needed for update
Explanation: Initialize the model. A value of snow depth h_snow is needed for the model to update.
End of explanation
precip = m.get_value('atmosphere_water__precipitation_leq-volume_flux') # `P` internally
print type(precip)
print precip.size
precip * mps_to_mmph
Explanation: Unlike when P is a scalar, the initial model precipitation volume flux is the first value from precip_rates.txt:
End of explanation
m.update()
print '\nCurrent time: {} s'.format(m.get_current_time())
Explanation: Advance the model by one time step:
End of explanation
print precip * mps_to_mmph # note that this is a reference, so it'll take the current value of `P`
Explanation: Unlike the scalar case, there's an output volume flux of precipitation:
End of explanation
time = [m.get_current_time().copy()]
flux = [precip.copy() * mps_to_mmph]
while m.get_current_time() < m.get_end_time():
m.update()
time.append(m.get_current_time().copy())
flux.append(m.get_value('atmosphere_water__precipitation_leq-volume_flux').copy() * mps_to_mmph)
Explanation: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step:
End of explanation
time
flux
Explanation: Check the time and flux values:
End of explanation
from cmt.components import Meteorology
met = Meteorology()
Explanation: Result: Fails. Input precipitation rates do not match the output precipitation volume flux because of changes we made to the TopoFlow source.
Babel-wrapped component
Import the Babel-wrapped Meteorology component and create an instance:
End of explanation
%cd input
met.initialize('meteorology-1.cfg')
Explanation: Initialize the model.
End of explanation
bprecip = met.get_value('atmosphere_water__precipitation_leq-volume_flux')
print type(bprecip)
print bprecip.size
print bprecip.shape
bprecip * mps_to_mmph
Explanation: The initial model precipitation volume flux is the first value from precip_rates.txt:
End of explanation
time = [met.get_current_time()]
flux = [bprecip.max() * mps_to_mmph]
count = 1
while met.get_current_time() < met.get_end_time():
met.update(met.get_time_step()*count)
time.append(met.get_current_time())
flux.append(met.get_value('atmosphere_water__precipitation_leq-volume_flux').max() * mps_to_mmph)
count += 1
Explanation: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step:
End of explanation
time
flux
Explanation: Check the time and flux values (noting that I've included the time = 0.0 value here):
End of explanation |
7,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lower Star Filtrations
Step1: Overview of 0D Persistence for Point Clouds
Step2: Piecewise Linear Lower Star Filtrations
First, we define a lower star time series filtration function. Then, we launch into an example
Step3: Sinusoid Example 1
Step4: Warped Sinusoid Example
Step6: Lower Star Images
First, we define a function to perform the lower star filtration
Step7: Lower Star Images
Step8: Wood Cells Example
Creative commons image obtained from <a href = "https
Step9: Black Hole
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Using persistent homology we can mathematically confirm that a black has a hole! 🕳️ <a href="https | Python Code:
%matplotlib notebook
import numpy as np
from scipy import ndimage
from ripser import ripser
from persim import plot_diagrams as plot_dgms
import matplotlib.pyplot as plt
from scipy import sparse
import time
import PIL
from mpl_toolkits.mplot3d import Axes3D
import sys
import ipywidgets as widgets
from IPython.display import display
Explanation: Lower Star Filtrations
End of explanation
# First setup point cloud
np.random.seed(0)
NClusters = 3
PPC = 10 #Points per cluster
N = NClusters*PPC
X = np.zeros((N, 2))
for c in range(NClusters):
X[c*PPC:(c+1)*PPC, :] = np.random.randn(PPC, 2) + 10*np.random.randn(2)[None, :]
# Compute persistence diagram
res = ripser(X, maxdim=0)
H0 = res['dgms'][0]
D = res['dperm2all']
def on_value_change(change):
execute_computation1()
Tauslider = widgets.FloatSlider(min=0.0, max = np.max(D), x=5,step=0.1,value=1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(Tauslider)
fig = plt.figure(figsize=(9.5, 3))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
def execute_computation1():
ax1.clear()
ax2.clear()
# Get slider values
cutoff = Tauslider.value
ax1.scatter(X[:, 0], X[:, 1], 50)
for i in range(N):
for j in range(i+1, N):
if D[i, j] < cutoff:
ax1.plot(X[[i, j], 0], X[[i, j], 1], 'k', linestyle='--')
plt.sca(ax2)
plot_dgms(H0)
ax2.plot([-0.1*np.max(D), np.max(D)], [cutoff, cutoff], linestyle='--')
execute_computation1()
Explanation: Overview of 0D Persistence for Point Clouds
End of explanation
def lower_star_filtration(x):
N = len(x)
# Add edges between adjacent points in the time series, with the "distance"
# along the edge equal to the max value of the points it connects
I = np.arange(N-1)
J = np.arange(1, N)
V = np.maximum(x[0:-1], x[1::])
# Add vertex birth times along the diagonal of the distance matrix
I = np.concatenate((I, np.arange(N)))
J = np.concatenate((J, np.arange(N)))
V = np.concatenate((V, x))
#Create the sparse distance matrix
D = sparse.coo_matrix((V, (I, J)), shape=(N, N)).tocsr()
return ripser(D, maxdim=0, distance_matrix=True)['dgms'][0]
np.random.seed(2)
x = np.random.randn(10)*2
x = np.round(x*5)/5.0
H0 = lower_star_filtration(x)
def on_value_change(change):
execute_computation1()
Tauslider = widgets.FloatSlider(min=np.min(x)-0.1, max = np.max(x)+0.1, x=5,step=0.05,value=np.min(x)-0.1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(Tauslider)
fig = plt.figure(figsize=(9.5, 3))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
def execute_computation1():
ax1.clear()
ax2.clear()
# Get slider values
cutoff = Tauslider.value
ax1.plot(x)
ax1.plot([0, len(x)], [cutoff, cutoff])
ax1.scatter(np.arange(len(x)), x)
for i in range(len(x)):
if x[i] <= cutoff:
ax1.scatter([i]*2, [x[i]]*2, c='C1')
if i < len(x)-1:
if x[i] <= cutoff and x[i+1] <= cutoff:
ax1.plot([i, i+1], x[i:i+2], c='C1')
plt.sca(ax2)
plot_dgms(H0)
ax2.plot([np.min(x)-0.1, cutoff], [cutoff, cutoff], linestyle='--', c='C1')
ax2.plot([cutoff, cutoff], [cutoff, np.max(x)+0.6], linestyle='--', c='C1')
execute_computation1()
Explanation: Piecewise Linear Lower Star Filtrations
First, we define a lower star time series filtration function. Then, we launch into an example
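As a tiny sanity check of the function above (the expected values below are worked out by hand): for x = [1, 3, 2] the vertices are born at 1, 3 and 2 and both edges at max(...) = 3, so H0 should contain one essential class born at 1 and one finite class (2, 3).
# hand-checked toy example for lower_star_filtration
print(lower_star_filtration(np.array([1.0, 3.0, 2.0])))
# expected, up to ordering: [[2., 3.], [1., inf]]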
End of explanation
np.random.seed(1)
NPeriods = 5
NSamples = 100
t = np.linspace(-0.5, NPeriods, NSamples)
x = np.sin(2*np.pi*t) + t
H0 = lower_star_filtration(x)
def on_value_change(change):
execute_computation1()
Tauslider = widgets.FloatSlider(min=np.min(x)-0.1, max = np.max(x)+0.1, x=5,step=0.05,value=np.min(x)-0.1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(Tauslider)
fig = plt.figure(figsize=(9.5, 3))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
def execute_computation1():
ax1.clear()
ax2.clear()
# Get slider values
cutoff = Tauslider.value
ax1.plot(x)
ax1.plot([0, len(x)], [cutoff, cutoff])
ax1.scatter(np.arange(len(x)), x)
for i in range(len(x)):
if x[i] <= cutoff:
ax1.scatter([i]*2, [x[i]]*2, c='C1')
if i < len(x)-1:
if x[i] <= cutoff and x[i+1] <= cutoff:
ax1.plot([i, i+1], x[i:i+2], c='C1')
plt.sca(ax2)
plot_dgms(H0)
ax2.plot([np.min(x)-0.1, cutoff], [cutoff, cutoff], linestyle='--', c='C1')
ax2.plot([cutoff, cutoff], [cutoff, np.max(x)+0.6], linestyle='--', c='C1')
execute_computation1()
Explanation: Sinusoid Example 1
End of explanation
np.random.seed(1)
NPeriods = 5
NSamples = 400
t = np.linspace(-0.5, NPeriods, NSamples)
x = np.sin(2*np.pi*t) + t
def on_value_change(change):
execute_computation1()
WarpSlider = widgets.FloatSlider(min=0.1, max = 5,step=0.05,value=1,description=r'warp' ,continuous_update=False)
WarpSlider.observe(on_value_change, names='value')
display(WarpSlider)
fig = plt.figure(figsize=(9.5, 3))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
def execute_computation1():
ax1.clear()
ax2.clear()
# Get slider values
p = WarpSlider.value
t = np.linspace(0, 1, NSamples)**p
t = (t*(NPeriods+0.5))-0.5
x = np.sin(2*np.pi*t) + t
ax1.plot(x)
ax1.scatter(np.arange(len(x)), x)
H0 = lower_star_filtration(x)
plt.sca(ax2)
plot_dgms(H0)
execute_computation1()
Explanation: Warped Sinusoid Example
End of explanation
def lower_star_image(D):
Construct a lower star filtration on an image
Parameters
----------
D: ndarray (M, N)
An array of image data
Returns
-------
I: ndarray (K, 2)
A 0D persistence diagram corresponding to the sublevelset filtration
idxs = np.arange(D.shape[0]*D.shape[1])
idxs = np.reshape(idxs, D.shape)
I = idxs.flatten()
J = idxs.flatten()
V = D.flatten()
# Do 8 spatial neighbors
tidxs = np.nan*np.ones((D.shape[0]+2, D.shape[1]+2), dtype=np.int64)
tidxs[1:-1, 1:-1] = idxs
tD = np.nan*np.ones_like(tidxs)
tD[1:-1, 1:-1] = D
for di in [-1, 0, 1]:
for dj in [-1, 0, 1]:
if di == 0 and dj == 0:
continue
thisJ = np.roll(tidxs, di, axis=0)
thisJ = np.roll(thisJ, dj, axis=1)
thisD = np.roll(tD, di, axis=0)
thisD = np.roll(thisD, dj, axis=1)
thisD = np.maximum(thisD, tD)
# Deal with boundaries
thisI = tidxs[np.isnan(thisD)==0]
thisJ = thisJ[np.isnan(thisD)==0]
thisD = thisD[np.isnan(thisD)==0]
I = np.concatenate((I, thisI.flatten()))
J = np.concatenate((J, thisJ.flatten()))
V = np.concatenate((V, thisD.flatten()))
sparseDM = sparse.coo_matrix((V, (I, J)), shape=(idxs.size, idxs.size))
return ripser(sparseDM, distance_matrix=True, maxdim=0)['dgms'][0]
Explanation: Lower Star Images
First, we define a function to perform the lower star filtration
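A quick hand-checked sanity test before moving to real images: a 2x2 image is fully connected under the 8-neighborhood, so there should be a single H0 class, born at the global minimum and never dying.
print(lower_star_image(np.array([[0.0, 1.0], [2.0, 3.0]])))
# expected: [[ 0., inf]]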
End of explanation
plt.figure()
ts = np.linspace(-1, 1, 100)
x1 = np.exp(-ts**2/(0.1**2))
ts -= 0.4
x2 = np.exp(-ts**2/(0.1**2))
# Define depths of Gaussian blobs
h1 = -1
h2 = -2
h3 = -3
img = h1*x1[None, :]*x1[:, None] + h2*x1[None, :]*x2[:, None] + h3*x2[None, :]*x2[:, None]
plt.imshow(img, cmap = 'afmhot')
plt.colorbar()
plt.show()
I = lower_star_image(img)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plt.imshow(img, cmap = 'afmhot')
plt.colorbar()
plt.title("Test Image")
plt.subplot(122)
plot_dgms(I)
plt.title("0D Persistence Diagram")
plt.tight_layout()
plt.show()
Explanation: Lower Star Images: Gaussian Blobs Example
End of explanation
cells_original = plt.imread("Cells.jpg")
cells_grey = np.asarray(PIL.Image.fromarray(cells_original).convert('L'))
# Normalize to the range [0, 1]
cells_grey = -ndimage.uniform_filter(cells_grey, size=10)
cells_grey = cells_grey - np.min(cells_grey)
cells_grey = cells_grey/np.max(cells_grey)
# Do lower star filtration after adding a little bit of noise
# The noise is a hack to help find representatives for the classes
F = cells_grey + 0.001*np.random.randn(cells_grey.shape[0], cells_grey.shape[1])
I = lower_star_image(F)
I = I[I[:, 1]-I[:, 0] > 0.001, :] # Filter out low persistence values
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.title(cells_original.shape)
plt.imshow(cells_original)
plt.axis('off')
plt.subplot(132)
plt.title(cells_grey.shape)
plt.imshow(cells_grey, cmap='afmhot')
plt.colorbar()
plt.axis('off')
plt.subplot(133)
plot_dgms(I)
plt.tight_layout()
plt.show()
def on_value_change(change):
execute_computation1()
Tauslider = widgets.FloatSlider(min=0, max = 1, x=5,step=0.01,value=0.0,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(Tauslider)
fig = plt.figure(figsize=(10, 6))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
X, Y = np.meshgrid(np.arange(F.shape[1]), np.arange(F.shape[0]))
X = X.flatten()
Y = Y.flatten()
def execute_computation1():
ax1.clear()
ax2.clear()
# Get slider values
cutoff = Tauslider.value
idxs = np.arange(I.shape[0])
idxs = idxs[np.abs(I[:, 1] - I[:, 0]) > cutoff]
ax1.imshow(cells_original)
ax1.set_xticks([])
ax1.set_yticks([])
for idx in idxs:
bidx = np.argmin(np.abs(F - I[idx, 0]))
ax1.scatter(X[bidx], Y[bidx], 20, 'k')
plt.sca(ax2)
plot_dgms(I, lifetime=True)
ax2.plot([0, 1], [cutoff, cutoff], linestyle='--', c='C1')
plt.legend(bbox_to_anchor=(0.6, 0.6), loc=2, borderaxespad=0.)
plt.tight_layout()
execute_computation1()
Explanation: Wood Cells Example
Creative commons image obtained from <a href = "https://www.flickr.com/photos/146824358@N03/35486476174">https://www.flickr.com/photos/146824358@N03/35486476174</a>
<img src = "Cells.jpg">
End of explanation
blackhole_original = plt.imread("BlackHole.jpg")
blackhole_grey = np.asarray(PIL.Image.fromarray(blackhole_original).convert('L'))
I = lower_star_image(blackhole_grey)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.imshow(blackhole_grey)
plt.colorbar()
plt.subplot(122)
plot_dgms(I)
plt.tight_layout()
plt.figure()
plt.imshow(blackhole_grey < 60)
Explanation: Black Hole
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Using persistent homology we can mathematically confirm that a black has a hole! 🕳️ <a href="https://twitter.com/hashtag/topology?src=hash&ref_src=twsrc%5Etfw">#topology</a> <a href="https://twitter.com/hashtag/blackhole?src=hash&ref_src=twsrc%5Etfw">#blackhole</a> <a href="https://t.co/n6QeyZwwgh">pic.twitter.com/n6QeyZwwgh</a></p>— Mitchell Eithun (@mitchelleithun) <a href="https://twitter.com/mitchelleithun/status/1116443279992729602?ref_src=twsrc%5Etfw">April 11, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
End of explanation |
7,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TLS handshake overview
This is the standard, modern TLS 1.2 handshake
Step1: (C) ---> (S) ClientHello
Step2: (C) <--- (S) ServerHello
Step3: (C) <--- (S) Certificate
Step4: (C) <--- (S) CertificateStatus, ServerKeyExchange, ServerHelloDone
Step5: (C) ---> (S) ClientKeyExchange, ChangeCipherSpec, Finished
Step6: (C) <--- (S) NewSessionTicket, ChangeCipherSpec, Finished
Step7: (C) ---> (S) ApplicationData | Python Code:
# We're going to parse several successive records from the passive listening of a standard TLS handshake
from scapy.all import *
load_layer('tls')
Explanation: TLS handshake overview
This is the standard, modern TLS 1.2 handshake:
<img src="images/handshake_tls12.png" alt="Handshake TLS 1.2" width="400"/>
End of explanation
record1 = TLS(open('raw_data/tls_session_protected/01_cli.raw', 'rb').read())
record1.show()
for extension in record1.msg[0].ext:
print('')
extension.show()
Explanation: (C) ---> (S) ClientHello
End of explanation
record2 = TLS(open('raw_data/tls_session_protected/02_srv.raw', 'rb').read())
record2.show()
Explanation: (C) <--- (S) ServerHello
End of explanation
record3 = TLS(open('raw_data/tls_session_protected/03_srv.raw', 'rb').read())
record3.show()
# The Certificate message actually contains a *chain* of certificates
for cert in record3.msg[0].certs:
print(type(cert[1]))
cert[1].show()
print('')
# Let's recall the domain that the client wants to access
record1.msg[0].ext[0].show()
# Indeed the certificate may be used with other domains than its CN 'www.github.com'
x509c = record3.msg[0].certs[0][1].x509Cert
print(type(x509c))
x509c.tbsCertificate.extensions[2].show()
Explanation: (C) <--- (S) Certificate
End of explanation
# Here the server sent three TLS records in the same TCP segment
record4 = TLS(open('raw_data/tls_session_protected/04_srv.raw', 'rb').read())
record4.show()
# Let's verify the signature in the ServerKeyExchange
# First, we need to assemble the whole data being signed
cli_random = pkcs_i2osp(record1.msg[0].gmt_unix_time, 4) + record1.msg[0].random_bytes
srv_random = pkcs_i2osp(record2.msg[0].gmt_unix_time, 4) + record2.msg[0].random_bytes
ecdh_params = bytes(record4[TLSServerKeyExchange].params)
# Then we retrieve the server's Cert and verify the signature
cert_srv = record3.msg[0].certs[0][1]
cert_srv.verify(cli_random + srv_random + ecdh_params, record4[TLSServerKeyExchange].sig.sig_val, h='sha512')
Explanation: (C) <--- (S) CertificateStatus, ServerKeyExchange, ServerHelloDone
End of explanation
record5_str = open('raw_data/tls_session_protected/05_cli.raw', 'rb').read()
record5 = TLS(record5_str)
record5.show()
# Every record has a 'tls_session' context which may enhance the parsing of later records
record5 = TLS(record5_str, tls_session=record2.tls_session.mirror())
record5.show()
Explanation: (C) ---> (S) ClientKeyExchange, ChangeCipherSpec, Finished
End of explanation
record6_str = open('raw_data/tls_session_protected/06_srv.raw', 'rb').read()
record6 = TLS(record6_str, tls_session=record5.tls_session.mirror())
record6.show()
Explanation: (C) <--- (S) NewSessionTicket, ChangeCipherSpec, Finished
End of explanation
record7_str = open('raw_data/tls_session_protected/07_cli.raw', 'rb').read()
record7 = TLS(record7_str, tls_session=record6.tls_session.mirror())
record7.show()
Explanation: (C) ---> (S) ApplicationData
End of explanation |
7,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Life
Step1: <h2>What is a Random Forest? </h2>
As with all the important questions in life, this is best deferred to the Wikipedia page. A random forest is an ensemble of decision trees which will output a prediction value, in this case survival. Each decision tree is constructed by using a random subset of the training data. After you have trained your forest, you can then pass each test row through it, in order to output a prediction. Simple! Well not quite! <b>This particular python function requires floats for the input variables, so all strings need to be converted, and any missing data needs to be filled.</b>
Replace Male=2 female=1 & child=0
Step2: Decision Tree Classification
Step3: # Export Result in CSV | Python Code:
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import normalize
import random
test=pd.read_csv("test.csv")
test.head()
mData=pd.read_csv("train.csv")
mData.head()
mData = mData.drop(["PassengerId","Name","Ticket"],axis=1)
test = test.drop(["Name","Ticket"],axis=1)
mData.head()
# Family
# Instead of keeping the two separate columns Parch & SibSp,
# we can use a single column indicating whether the passenger had any family member aboard,
# and check whether having any family member (parent, child, sibling, spouse, etc.) increases the chance of survival.
mData['Family'] = mData['Parch'] + mData['SibSp']
mData['Family'].loc[mData['Family'] > 0] = 1
mData['Family'].loc[mData['Family'] == 0] = 0
test['Family'] = test['Parch'] + test['SibSp']
test['Family'].loc[test['Family'] > 0] = 1
test['Family'].loc[test['Family'] == 0] = 0
# drop Parch & SibSp
mData = mData.drop(['SibSp','Parch'], axis=1)
test = test.drop(['SibSp','Parch'], axis=1)
mData.head()
# Sex
# As we can see, children (age < ~16) aboard seem to have a higher chance of survival.
# So we classify passengers as male, female, or child.
def get_person(passenger):
age,sex = passenger
return 'child' if age < 16 else sex
mData['Person'] = mData[['Age','Sex']].apply(get_person,axis=1)
test['Person'] = test[['Age','Sex']].apply(get_person,axis=1)
# No need to use Sex column since we created Person column
mData.drop(['Sex'],axis=1,inplace=True)
test.drop(['Sex'],axis=1,inplace=True)
mData[:10]
Explanation: Predicting Life
End of explanation
import random
def PersonFunc(x):
if x=="male":
return 2
elif x=="female":
return 1
elif x=="child":
return 0
else:
return x
def EmbarkedFunc(x):
if x=="S":
return 1
elif x=="C":
return 2
elif x=="Q":
return 3
elif np.isnan(x):
return random.choice([1,2,3])
else:
return x
mData["Person"] = mData["Person"].apply(PersonFunc)
test["Person"] = test["Person"].apply(PersonFunc)
mData["Embarked"] = mData["Embarked"].apply(EmbarkedFunc)
test["Embarked"] = test["Embarked"].apply(EmbarkedFunc)
mData[:8]
print "Nan Entries in Cabin :",mData["Cabin"].isnull().sum()
def CabinFun(x):
if type(x)==str:
return ord(x[0])-ord('A')
elif np.isnan(x):
return 0
else:
return x
mData["Cabin"] = mData["Cabin"].apply(CabinFun)
test["Cabin"] = test["Cabin"].apply(CabinFun)
mData.head()
avgAge=mData["Age"].mean()
stander_dev = mData["Age"].std()
avgAge_test = test["Age"].mean()
satnder_dev_test = test["Age"].std()
#select Randint between avg-std,avg+std
train_replacement = np.random.randint(avgAge-stander_dev,avgAge+stander_dev, size=mData["Age"].isnull().sum())
test_replacement = np.random.randint(avgAge_test-satnder_dev_test,avgAge_test+satnder_dev_test,size=test["Age"].isnull().sum())
train_replacement
mData["Age"][np.isnan(mData["Age"])] = train_replacement
test["Age"][np.isnan(test["Age"])] = test_replacement
mData.head()
Explanation: <h2>What is a Random Forest? </h2>
As with all the important questions in life, this is best deferred to the Wikipedia page. A random forest is an ensemble of decision trees which will output a prediction value, in this case survival. Each decision tree is constructed by using a random subset of the training data. After you have trained your forest, you can then pass each test row through it, in order to output a prediction. Simple! Well not quite! <b>This particular python function requires floats for the input variables, so all strings need to be converted, and any missing data needs to be filled.</b>
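As a rough illustration of the ensemble idea (a toy sketch, not sklearn's internals), the forest's prediction is essentially a majority vote over its trees:
import numpy as np
# hypothetical votes from 5 trees for 3 passengers (1 = survived, 0 = did not)
tree_votes = np.array([[1, 0, 1, 1, 0],
                       [0, 0, 1, 0, 0],
                       [1, 1, 1, 1, 1]])
majority_vote = (tree_votes.mean(axis=1) >= 0.5).astype(int)  # -> [1 0 1]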
Replace Male=2 female=1 & child=0
End of explanation
train_data = mData.values[0::,1::] # drop the label (Survived) column: all rows,
# all columns starting from the 1st (i.e. everything except column 0)
train_label = mData.values[0::,0] # all rows, column 0 (the Survived label)
test_data=test.values # we will drop PassengerId when required, i.e. during prediction
train_data=normalize(train_data, norm='l2', axis=1, copy=True)
n_test_data=normalize(test_data, norm='l2', axis=1, copy=True)
clsf = RandomForestClassifier(n_estimators=100)
clsf=clsf.fit(train_data,train_label)
pred_train=clsf.predict(train_data)
print "Accuracy Score for trainig Data :",accuracy_score(pred_train,train_label)*100,"%"
pred_test=clsf.predict(n_test_data[0::,1::])
#Convert to dataframe
result=pd.DataFrame({"PassengerId" :test_data[0::,0],"Survived" : pred_test})
result["Survived"] = result["Survived"].astype(int)
result["PassengerId"] = result["PassengerId"].astype(int)
result
Explanation: Decision Tree Classification
End of explanation
result.to_csv("Result.csv")
print "Exported"
Explanation: # Export Result in CSV
End of explanation |
7,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works.
Stark problem
We'll start be adding two particles, the Sun and an Earth-like planet to REBOUND.
Step1: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force that acting on the planet and is pointing in one direction $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In python we can describe this with the following function
Step2: Next, we need to tell REBOUND about this function.
Step3: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
Step4: And let's plot the result.
Step5: You can see that the eccentricity is oscillating between 0 and almost 1.
Non-conservative forces
The previous example assumed a conservative force, i.e. we could describe it as a potential as it is velocity independent. Now, let's assume we have a velocity dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
Step6: But we change the additional force to be
Step7: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
Step8: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity. | Python Code:
import rebound
rebound.reset()
rebound.integrator = "whfast"
rebound.add(m=1.)
rebound.add(m=1e-6,a=1.)
rebound.move_to_com() # Moves to the center of momentum frame
Explanation: Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works.
Stark problem
We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND.
End of explanation
ps = rebound.particles
c = 0.01
def starkForce():
ps[1].ax += c
Explanation: We could integrate this system and the planet would go around the star on a fixed orbit with $a=1$ forever. Let's add a constant additional force acting on the planet, pointing in one direction: $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ is a constant. This is called the Stark problem. In Python we can describe this with the following function
End of explanation
rebound.additional_forces = starkForce
Explanation: Next, we need to tell REBOUND about this function.
End of explanation
import numpy as np
Nout = 1000
es = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
rebound.integrate(time)
es[i] = rebound.calculate_orbits()[0].e
Explanation: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.plot(times, es);
Explanation: And let's plot the result.
End of explanation
rebound.reset()
rebound.integrator = "ias15"
rebound.add(m=1.)
rebound.add(m=1e-6,a=1.)
rebound.move_to_com() # Moves to the center of momentum frame
Explanation: You can see that the eccentricity is oscillating between 0 and almost 1.
Non-conservative forces
The previous example assumed a conservative force, i.e. one that can be derived from a potential because it is velocity independent. Now, let's assume we have a velocity-dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
End of explanation
ps = rebound.particles
tau = 1000.
def migrationForce():
ps[1].ax -= ps[1].vx/tau
ps[1].ay -= ps[1].vy/tau
ps[1].az -= ps[1].vz/tau
Explanation: But we change the additional force to a simple drag force, $\vec{F} = -m\,\vec{v}/\tau$, applied per unit mass as the acceleration $-\vec{v}/\tau$ in the function above, with damping timescale $\tau$.
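As a quick back-of-the-envelope estimate of what to expect (assuming a roughly circular orbit), the drag removes orbital energy at the rate $\dot E = \vec F\cdot\vec v = -m v^2/\tau$; with $E = -GMm/2a$ and $v^2 \simeq GM/a$ this gives
$$\dot a \simeq -\frac{2a}{\tau} \quad\Rightarrow\quad a(t) \simeq a_0\, e^{-2t/\tau},$$
i.e. a smooth exponential decay of the semi-major axis on a timescale of order $\tau/2$, which is what the plot below should show.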
End of explanation
rebound.additional_forces = migrationForce
rebound.force_is_velocity_dependent = 1
Explanation: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
End of explanation
Nout = 1000
a_s = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
rebound.integrate(time)
a_s[i] = rebound.calculate_orbits()[0].a
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a_s);
Explanation: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity.
End of explanation |
7,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: DDSP Processor Demo
This notebook provides an introduction to the signal Processor() object. The main object type in the DDSP library, it is the base class used for Synthesizers and Effects, which share the methods
Step2: Example
Step3: get_controls()
The outputs of a neural network are often not properly scaled and constrained. The get_controls method gives a dictionary of valid control parameters based on neural network outputs.
3 inputs (amps, hd, f0)
* amplitude
Step4: Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations
Step5: Notice that
* Amplitudes are now all positive
* The harmonic distribution sums to 1.0
* All harmonics that are above the Nyquist frequency now have an amplitude of 0.
The amplitudes and harmonic distribution are scaled by an "exponentiated sigmoid" function (ddsp.core.exp_sigmoid). There is nothing particularly special about this function (other functions can be specified as scale_fn= during construction), but it has several nice properties
Step6: get_signal()
Synthesizes audio from controls.
Step7: __call__()
Synthesizes audio directly from the raw inputs. get_controls() is called internally to turn them into valid control parameters.
Step8: Example | Python Code:
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/tutorials/0_processor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import ddsp
import ddsp.training
from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
sample_rate = DEFAULT_SAMPLE_RATE # 16000
Explanation: DDSP Processor Demo
This notebook provides an introduction to the signal Processor() object. The main object type in the DDSP library, it is the base class used for Synthesizers and Effects, which share the methods:
get_controls(): inputs -> controls.
get_signal(): controls -> signal.
__call__(): inputs -> signal. (i.e. get_signal(**get_controls()))
Where:
* inputs is a variable number of tensor arguments (depending on processor). Often the outputs of a neural network.
* controls is a dictionary of tensors scaled and constrained specifically for the processor
* signal is an output tensor (usually audio or control signal for another processor)
Let's see why this is a helpful approach by looking at the specific example of the Harmonic() synthesizer processor.
End of explanation
n_frames = 1000
hop_size = 64
n_samples = n_frames * hop_size
# Create a synthesizer object.
harmonic_synth = ddsp.synths.Harmonic(n_samples=n_samples,
sample_rate=sample_rate)
Explanation: Example: harmonic synthesizer
The harmonic synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples.
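Concretely, up to the windowing details mentioned above, the synthesized signal is (roughly) of the form
$$x(n) = A(n)\sum_{k=1}^{K} c_k(n)\,\sin\big(\phi_k(n)\big), \qquad \phi_k(n) = 2\pi k \sum_{m=0}^{n} \frac{f_0(m)}{f_s},$$
where $A(n)$ is the amplitude envelope, $c_k(n)$ the normalized harmonic distribution, $f_0$ the fundamental frequency, and $f_s$ the sample rate.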
__init__()
All member variables are initialized in the constructor, which makes it easy to change them as hyperparameters using the gin dependency injection library. All processors also have a name that is used by ProcessorGroup().
End of explanation
# Generate some arbitrary inputs.
# Amplitude [batch, n_frames, 1].
# Make amplitude linearly decay over time.
amps = np.linspace(1.0, -3.0, n_frames)
amps = amps[np.newaxis, :, np.newaxis]
# Harmonic Distribution [batch, n_frames, n_harmonics].
# Make harmonics decrease linearly with frequency.
n_harmonics = 30
harmonic_distribution = (np.linspace(-2.0, 2.0, n_frames)[:, np.newaxis] +
np.linspace(3.0, -3.0, n_harmonics)[np.newaxis, :])
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32)
# Plot it!
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, amps[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, harmonic_distribution[0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, f0_hz[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
Explanation: get_controls()
The outputs of a neural network are often not properly scaled and constrained. The get_controls method gives a dictionary of valid control parameters based on neural network outputs.
3 inputs (amps, hd, f0)
* amplitude: Amplitude envelope of the synthesizer output.
* harmonic_distribution: Normalized amplitudes of each harmonic.
* fundamental_frequency: Frequency in Hz of base oscillator
End of explanation
controls = harmonic_synth.get_controls(amps, harmonic_distribution, f0_hz)
print(controls.keys())
# Now let's see what they look like...
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
Explanation: Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations:
* Amplitude is not >= 0 (avoids phase shifts)
* Harmonic distribution is not normalized (factorizes timbre and amplitude)
* Fundamental frequency * n_harmonics > nyquist frequency (440 * 30 > 8000), which will lead to aliasing.
End of explanation
x = tf.linspace(-10.0, 10.0, 1000)
y = ddsp.core.exp_sigmoid(x)
plt.figure(figsize=(18, 4))
plt.subplot(121)
plt.plot(x, y)
plt.subplot(122)
_ = plt.semilogy(x, y)
Explanation: Notice that
* Amplitudes are now all positive
* The harmonic distribution sums to 1.0
* All harmonics that are above the Nyquist frequency now have an amplitude of 0.
The amplitudes and harmonic distribution are scaled by an "exponentiated sigmoid" function (ddsp.core.exp_sigmoid). There is nothing particularly special about this function (other functions can be specified as scale_fn= during construction), but it has several nice properties:
* Output scales logarithmically with input (as does human perception of loudness).
* Centered at 0, with max and min in reasonable range for normalized neural network outputs.
* Max value of 2.0 to prevent signal getting too loud.
* Threshold value of 1e-7 for numerical stability during training.
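For reference, with its default arguments this function is essentially $2.0\,\sigma(x)^{\ln 10} + 10^{-7}$, i.e. a sigmoid raised to a power, scaled by the max value and offset by the threshold (the exact defaults may differ between DDSP versions).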
End of explanation
audio = harmonic_synth.get_signal(**controls)
play(audio)
specplot(audio)
Explanation: get_signal()
Synthesizes audio from controls.
End of explanation
audio = harmonic_synth(amps, harmonic_distribution, f0_hz)
play(audio)
specplot(audio)
Explanation: __call__()
Synthesizes audio directly from the raw inputs. get_controls() is called internally to turn them into valid control parameters.
End of explanation
## Some weird control envelopes...
# Amplitude [batch, n_frames, 1].
amps = np.ones([n_frames]) * -5.0
amps[:50] += np.linspace(0, 7.0, 50)
amps[50:200] += 7.0
amps[200:900] += (7.0 - np.linspace(0.0, 7.0, 700))
amps *= np.abs(np.cos(np.linspace(0, 2*np.pi * 10.0, n_frames)))
amps = amps[np.newaxis, :, np.newaxis]
# Harmonic Distribution [batch, n_frames, n_harmonics].
n_harmonics = 20
harmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :]
for i in range(n_harmonics):
harmonic_distribution[:, i] = 1.0 - np.linspace(i * 0.09, 2.0, 1000)
harmonic_distribution[:, i] *= 5.0 * np.abs(np.cos(np.linspace(0, 2*np.pi * 0.1 * i, n_frames)))
if i % 2 != 0:
harmonic_distribution[:, i] = -3
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = np.ones([n_frames]) * 200.0
f0_hz[:100] *= np.linspace(2, 1, 100)**2
f0_hz[200:1000] += 20 * np.sin(np.linspace(0, 8.0, 800) * 2 * np.pi * np.linspace(0, 1.0, 800)) * np.linspace(0, 1.0, 800)
f0_hz = f0_hz[np.newaxis, :, np.newaxis]
# Get valid controls
controls = harmonic_synth.get_controls(amps, harmonic_distribution, f0_hz)
# Plot!
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
audio = harmonic_synth.get_signal(**controls)
play(audio)
specplot(audio)
Explanation: Example: Just for fun...
Let's run another example where we tweak some of the controls...
End of explanation |
7,216 | Given the following text description, write Python code to implement the functionality described.
Description:
Program to implement Simpson's 3/8 rule
Given function to be integrated ; Function to perform calculations ; Calculates value till integral limit ; driver function
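For reference, the composite Simpson's 3/8 rule being implemented is
$$\int_a^b f(x)\,dx \approx \frac{3h}{8}\Big[f(x_0) + 3f(x_1) + 3f(x_2) + 2f(x_3) + 3f(x_4) + \cdots + f(x_n)\Big], \qquad h = \frac{b-a}{n},$$
with weight 2 at interior points whose index is a multiple of 3 and weight 3 elsewhere; strictly speaking the rule assumes $n$ is a multiple of 3.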
| Python Code:
def func(x):
    return float(1) / (1 + x * x)

def calculate(lower_limit, upper_limit, interval_limit):
    interval_size = float(upper_limit - lower_limit) / interval_limit
    sum = func(lower_limit) + func(upper_limit)
    for i in range(1, interval_limit):
        if i % 3 == 0:
            sum = sum + 2 * func(lower_limit + i * interval_size)
        else:
            sum = sum + 3 * func(lower_limit + i * interval_size)
    return (float(3 * interval_size) / 8) * sum

interval_limit = 10
lower_limit = 1
upper_limit = 10
integral_res = calculate(lower_limit, upper_limit, interval_limit)
print(round(integral_res, 6))
|
7,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My playing with the Kaggle titanic challenge.
I got lots of the ideas for this first Kaggle advanture from here
Step1: Let's see where they got on
Step2: OK, so clearly there were more people who got on at S, and it seems their survival is disproportional. Let's check that.
Step3: Interesting, actually those from C had higher rate of survival. So, knowing more people from your home town didn't help.
Next, did how much they paid have an effect?
Step6: Before digging into how the ages factor in, let's take the advice of others and replace NaN's with random values | Python Code:
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
train_df = pd.read_csv("train.csv",dtype={"Age":np.float64},)
train_df.head()
# find how many ages
train_df['Age'].count()
# how many ages are NaN?
train_df['Age'].isnull().sum()
# plot ages of training data set, with NaN's removed
train_df['Age'].dropna().astype(int).hist(bins=70)
print 'Mean age = ',train_df['Age'].dropna().astype(int).mean()
Explanation: My playing with the Kaggle titanic challenge.
I got lots of the ideas for this first Kaggle adventure from here
End of explanation
train_df['Embarked'].head()
train_df.info()
train_df['Embarked'].isnull().sum()
train_df["Embarked"].count()
sns.countplot(x="Embarked",data=train_df)
sns.countplot(x='Survived',hue='Embarked',data=train_df,order=[0,1])
Explanation: Let's see where they got on
End of explanation
embark_survive_perc = train_df[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=embark_survive_perc,order=['S','C','Q'])
Explanation: OK, so clearly more people got on at S, and it seems their survival is disproportionate. Let's check that.
End of explanation
train_df['Fare'].astype(int).plot(kind='hist',bins=100, xlim=(0,50))
# get fare for survived & didn't survive passengers
fare_not_survived = train_df["Fare"].astype(int)[train_df["Survived"] == 0]
fare_survived = train_df["Fare"].astype(int)[train_df["Survived"] == 1]
# get average and std for fare of survived/not survived passengers
avgerage_fare = DataFrame([fare_not_survived.mean(), fare_survived.mean()])
std_fare = DataFrame([fare_not_survived.std(), fare_survived.std()])
avgerage_fare.index.names = std_fare.index.names = ["Survived"]
avgerage_fare.plot(yerr=std_fare,kind='bar',legend=False)
Explanation: Interesting, actually those from C had a higher rate of survival. So knowing more people from your home town didn't help.
Next, did how much they paid have an effect?
End of explanation
# get average, std, and number of NaN values in titanic_df
average_age_train = train_df["Age"].mean()
std_age_train = train_df["Age"].std()
count_nan_age_train = train_df["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
## ORIGINAL
rand_1 = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = count_nan_age_train)
train_df['Age'][np.isnan(train_df["Age"])] = rand_1 ## Only way that works, but raises warnings
#df_rand = pd.DataFrame(rand_1)
# create random dummy dataframe
#dfrand = pd.DataFrame(data=np.random.randn(train_df.shape[0],train_df.shape[1]))
#dfrand.info()
#train_df[np.isnan(train_df["Age"])] = dfrand[np.isnan(train_df["Age"])] ## DOESN"T WORK!!!
#
#train_df["Age"].fillna(value=rand_1, inplace=True)
#print df_rand
#train_df["Age"][np.isnan(train_df["Age"])] = df_rand[np.isnan(train_df["Age"])]
#train_df["Age"].isnull().sum()
# replace NaN values with randoms
#train_df["Age"][np.isnan(train_df["Age"])] = rand_1
#train_df.loc[:,('Age')][np.isnan(train_df["Age"])] = rand_1
#train_df['Age'] = train_df['Age'].fillna(np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train))
#train_df["Age"]
train_df["Age"] = train_df["Age"].astype(int)
# plot new Age Values
train_df['Age'].hist(bins=70)
# Compare this to that from a few cells up for the raw ages with the NaN's dropped. Not much different actually.
## Let's make a couple nice plots of survival vs age
# peaks for survived/not survived passengers by their age
facet = sns.FacetGrid(train_df, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train_df['Age'].max()))
facet.add_legend()
# average survived passengers by age
fig, axis1 = plt.subplots(1,1,figsize=(18,4))
average_age = train_df[["Age", "Survived"]].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
Explanation: Before digging into how the ages factor in, let's take the advice of others and replace NaN's with random values
End of explanation |
7,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Means
Goal
Unsupervised learning algorithms look for structure in unlabelled data. One of the common objective is to find clusters. Clusters are groups of data that are similar according to a given measurement and different from the datapoints of other clusters. This definition is vague but in the end, the idea is to minimize the distance intra cluster and maximize the distance inter clusters (again, there exists various definition of distance between clusters).
K-mean is an iterative algorithm that can discover such clusters given the number of clusters we are looking for. This is the main drawback as we need to specify this number beforehand. Some more advanced version of K-means are able to discover if the number of clusters is to low or too high but I will not talk much about them here.
To do so, K-means alternates between the two following steps
Step6: Algorithm
Here are the different elements
Step7: Application
Play this part a few times in order to see the algorithm fall into a local minima (it will converge towards a bad solution). | Python Code:
import random
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from fct import normalize_min_max, plot_2d, plot_clusters
Explanation: K-Means
Goal
Unsupervised learning algorithms look for structure in unlabelled data. One of the common objectives is to find clusters. Clusters are groups of data that are similar according to a given measure and different from the datapoints of other clusters. This definition is vague, but in the end the idea is to minimize the intra-cluster distance and maximize the inter-cluster distance (again, there exist various definitions of distance between clusters).
K-means is an iterative algorithm that can discover such clusters given the number of clusters we are looking for. This is its main drawback, as we need to specify this number beforehand. Some more advanced versions of K-means are able to detect whether the number of clusters is too low or too high, but I will not talk much about them here.
To do so, K-means alternates between the two following steps:
- compute the centers of the clusters (for the first step, they are taken at random within the dataset, after that, they are defined as the barycenter/mean of the clusters found by the second step)
- build the clusters from the centers: a datapoint belongs to the cluster defined by the closest center
To explain it more sequentially, it begins by choosing k centers at random. Then it separates the dataset into clusters by computing the distance of every point to every center and assigning each datapoint to the closest center. After that, it computes the mean of each cluster (note that this will change the position of the center and thus the distances to the other points) to find new centers, and so on and so forth.
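Formally, the two alternating steps can be read as block-coordinate descent on the within-cluster sum of squares
$$\min_{C_1,\dots,C_k,\;\mu_1,\dots,\mu_k}\;\sum_{j=1}^{k}\sum_{x \in C_j}\lVert x-\mu_j\rVert^2,$$
fixing the centers to optimize the assignments, then fixing the assignments to optimize the centers.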
If this is unclear (as I am sure it will be if you have never heard of the algorithm), go check the wikipedia page that contains very well made visual examples of this process.
Implementation
I could not plot the successive steps within the notebook, so a new window with the plot will open when executing the last cell. If it does not appear, it might be hidden behind your web browser; reducing the browser's size may allow you to see the plots.
End of explanation
def build_d(datas, centers):
Return a 2D-numpy array of the distances between each
point in the dataset and the centers. The distance used is
the euclidian distance.
# the distance matrix is d
d = []
for center in centers:
# the list of distances from one center to all the points in the
# dataset
dist = []
for i in range(datas.shape[0]):
dist.append(np.linalg.norm(datas[i] - center))
d.append(dist)
return np.array(d)
def build_g(distances):
Return a 2D-numpy array of 0s and 1s that determines
to which center belong each point in the dataset.
# k is the number of clusters we look for
k = distances.shape[0]
# g is the matrix of affiliation
g = []
for i in range(distances.shape[1]):
# gg elements is 1 only if the point belongs to the
# corresponding center, else it is 0
gg = [0] * k
# computes which center is the closest to the point
gg[distances[:,i].argmin()] = 1
g.append(gg)
return np.array(g).T
def build_clusters(datas, G):
Return a list of clusters (lists as well) of points from the dataset.
k = G.shape[0]
clusters = [[] for _ in range(k)]
# i is the index of the centers, j of the datapoints
for i in range(G.shape[0]):
for j in range(G.shape[1]):
if G[i][j] == 1:
clusters[i].append(datas[j])
return clusters
def new_centers(clusters):
Return a list of points defined as the barycenter of each new cluster.
centers = []
for cluster in clusters:
# the center of each cluster is its barycenter
center = np.mean(cluster, axis=0)
centers.append(center)
return centers
def k_means(datas, k):
Return the centers of the clusters found after the iterative process.
# The initial centers are taken at random without replacement within the
# dataset
centers = random.sample(list(datas), k)
D = build_d(datas, centers)
G = build_g(D)
clusters = build_clusters(datas, G)
centers_new = new_centers(clusters)
# while the new centers are not equal to the previous ones (it means the
# situation is not stationary) then we keep iterating
while not np.array_equal(np.array(centers), np.array(centers_new)):
# plot the clusters with different colors. The centers are plotted
# in blue
plt.clf()
plot_clusters(clusters, k)
X = [center[0] for center in centers]
Y = [center[1] for center in centers]
plt.scatter(X,Y)
plt.show(block=False)
plt.pause(0.01)
# Build the new clusters from the past centers
centers = np.copy(centers_new)
D = build_d(datas, centers)
G = build_g(D)
clusters = build_clusters(datas, G)
# Build the new centers
centers_new = new_centers(clusters)
plt.close()
return centers
Explanation: Algorithm
Here are the different elements:
- D is the distance matrix: each row corresponds to one center and each column to one datapoint, $D[i, j] = distance(center_i, datapoint_j)$
- G is the matrix that specifies to which center each datapoint belongs. As for the distance matrix, the rows are for the centers and the columns are for the datapoints: $G[i, j] = 1$ if $center_i$ is the closest center to $datapoint_j$, else $G[i, j] = 0$
The algorithm runs while the new centers are not equal to the last ones.
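As a tiny illustration of these shapes (the distance values below are made up, only the layout matters):
import numpy as np

# 2 centers, 3 datapoints: row i holds the distances from center i to every datapoint.
D = np.array([[0.1, 2.0, 1.5],
              [1.2, 0.3, 0.4]])
G = build_g(D)
# For each column (datapoint), the row (center) with the smallest distance gets a 1:
# array([[1, 0, 0],
#        [0, 1, 1]])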
End of explanation
dimension = 2
datas = pd.read_csv('datasets/data_clustering.csv')
datas = np.array(datas)
normalize_min_max(datas, dimension)
# You can play with the number of clusters K to
# see how it affects the result.
K = 4
centers = k_means(datas, K)
Explanation: Application
Play this part a few times in order to see the algorithm fall into a local minimum (it will converge towards a bad solution).
End of explanation |
7,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control Ops Tutorial
In this tutorial we show how to use control flow operators in Caffe2 and give some details about their underlying implementations.
Conditional Execution Using NetBuilder
Let's start with conditional operator. We will demonstrate how to use it in two Caffe2 APIs used for building nets
Step1: In the first example, we define several blobs and then use the 'If' operator to set the value of one of them conditionally depending on values of other blobs.
The pseudocode for the conditional examples we will implement is as follows
Step2: Note the usage of NetBuilder's ops.IfNet and ops.Else calls
Step3: Before going further, it's important to understand the semantics of execution blocks ('then' and 'else' branches in the example above), i.e. handling of reads and writes into global (defined outside of the block) and local (defined inside the block) blobs.
NetBuilder uses the following set of rules
Step4: When we execute this, we expect that y == 2.0, and that local_blob will not exist in the workspace.
Step5: Conditional Execution Using Brew Module
Brew is another Caffe2 interface used to construct nets. Unlike NetBuilder, brew does not track the hierarchy of blocks and, as a result, we need to specify which blobs are considered local and which blobs are considered global when passing 'then' and 'else' models to an API call.
Let's start by importing the necessary items for the brew API.
Step6: We will use the Caffe2's ModelHelper class to define and represent our models, as well as contain the parameter information about the models. Note that a ModelHelper object has two underlying nets
Step7: Before we run the model, let's use Caffe2's graph visualization tool net_drawer to check if the operator graph makes sense.
Step8: Now let's run the net! When using ModelHelper, we must first run the param_init_net to initialize paramaters, then we execute the main net.
Step9: Loops Using NetBuilder
Another important control flow operator is 'While', which allows repeated execution of a fragment of net. Let's consider NetBuilder's version first.
The pseudocode for this example is
Step10: As with the 'If' operator, standard block semantic rules apply. Note the usage of ops.Condition clause that should immediately follow ops.WhileNet and contains code that is executed before each iteration. The last operator in the condition clause is expected to have a single boolean output that determines whether the other iteration is executed.
In the example above we increment the counter ("i") before each iteration and accumulate its values in "y" blob, the loop's body is executed 7 times, the resulting blob values
Step11: Loops Using Brew Module
Now let's take a look at how to replicate the loop above using the ModelHelper+brew interface.
Step12: Once again, let's visualize the net using the net_drawer.
Step13: Finally, we'll run the param_init_net and net and print our final blob values.
Step14: Backpropagation
Both 'If' and 'While' operators support backpropagation. To illustrate how backpropagation with control ops work, let's consider the following examples in which we construct the operator graph using NetBuilder and obtain calculate gradients using the AddGradientOperators function. The first example shows the following conditional statement
Step15: In this case
$$x = 0.5$$
$$z = y^2 = 4^2 = 16$$
We will fetch the blob y_grad, which was generated by the AddGradientOperators call above. This blob contains the gradient of blob z with respect to y. According to basic calculus
Step16: Now, let's change value of blob "x" to -0.5 and rerun net
Step17: The next and final example illustrates backpropagation on the following loop | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from caffe2.python import workspace
from caffe2.python.core import Plan, to_execution_step, Net
from caffe2.python.net_builder import ops, NetBuilder
Explanation: Control Ops Tutorial
In this tutorial we show how to use control flow operators in Caffe2 and give some details about their underlying implementations.
Conditional Execution Using NetBuilder
Let's start with conditional operator. We will demonstrate how to use it in two Caffe2 APIs used for building nets: NetBuilder and brew.
End of explanation
with NetBuilder() as nb:
# Define our constants
ops.Const(0.0, blob_out="zero")
ops.Const(1.0, blob_out="one")
ops.Const(0.5, blob_out="x")
ops.Const(0.0, blob_out="y")
# Define our conditional sequence
with ops.IfNet(ops.GT(["x", "zero"])):
ops.Copy("one", "y")
with ops.Else():
ops.Copy("zero", "y")
Explanation: In the first example, we define several blobs and then use the 'If' operator to set the value of one of them conditionally depending on values of other blobs.
The pseudocode for the conditional examples we will implement is as follows:
if (x > 0):
y = 1
else:
y = 0
End of explanation
# Initialize a Plan
plan = Plan('if_net_test')
# Add the NetBuilder definition above to the Plan
plan.AddStep(to_execution_step(nb))
# Initialize workspace for blobs
ws = workspace.C.Workspace()
# Run the Plan
ws.run(plan)
# Fetch some blobs and print
print('x = ', ws.blobs["x"].fetch())
print('y = ', ws.blobs["y"].fetch())
Explanation: Note the usage of NetBuilder's ops.IfNet and ops.Else calls: ops.IfNet accepts a blob reference or blob name as an input and expects that blob to hold a scalar value convertible to bool. Note that the optional ops.Else is at the same level as ops.IfNet and immediately follows the corresponding ops.IfNet. Let's execute the resulting net (execution step) and check the values of the blobs.
Since x = 0.5 is indeed greater than 0, we should expect y = 1 after execution.
End of explanation
with NetBuilder() as nb:
# Define our constants
ops.Const(0.0, blob_out="zero")
ops.Const(1.0, blob_out="one")
ops.Const(2.0, blob_out="two")
ops.Const(1.5, blob_out="x")
ops.Const(0.0, blob_out="y")
# Define our conditional sequence
with ops.IfNet(ops.GT(["x", "zero"])):
ops.Copy("x", "local_blob") # create local_blob using Copy -- this is not visible outside of this block
with ops.IfNet(ops.LE(["local_blob", "one"])):
ops.Copy("one", "y")
with ops.Else():
ops.Copy("two", "y")
with ops.Else():
ops.Copy("zero", "y")
# Note that using local_blob would fail here because it is outside of the block in
# which it was created
Explanation: Before going further, it's important to understand the semantics of execution blocks ('then' and 'else' branches in the example above), i.e. handling of reads and writes into global (defined outside of the block) and local (defined inside the block) blobs.
NetBuilder uses the following set of rules:
In NetBuilder's syntax, a blob's declaration and definition occur at the same time - we define an operator which writes its output into a blob with a given name.
NetBuilder keeps track of all operators seen before the current execution point in the same block and up the stack in parent blocks.
If an operator writes into a previously unseen blob, it creates a local blob that is visible only within the current block and the subsequent children blocks. Local blobs created in a given block are effectively deleted when we exit the block. Any write into previously defined (in the same block or in the parent blocks) blob updates an originally created blob and does not result in the redefinition of a blob.
An operator's input blobs have to be defined earlier in the same block or in the stack of parent blocks.
As a result, in order to see the values computed by a block after its execution, the blobs of interest have to be defined outside of the block. This rule effectively forces visible blobs to always be correctly initialized.
To illustrate concepts of block semantics and provide a more sophisticated example, let's consider the following net:
End of explanation
# Initialize a Plan
plan = Plan('if_net_test_2')
# Add the NetBuilder definition above to the Plan
plan.AddStep(to_execution_step(nb))
# Initialize workspace for blobs
ws = workspace.C.Workspace()
# Run the Plan
ws.run(plan)
# Fetch some blobs and print
print('x = ', ws.blobs["x"].fetch())
print('y = ', ws.blobs["y"].fetch())
# Assert that the local_blob does not exist in the workspace
# It should have been destroyed because of its locality
assert "local_blob" not in ws.blobs
Explanation: When we execute this, we expect that y == 2.0, and that local_blob will not exist in the workspace.
End of explanation
from caffe2.python import brew
from caffe2.python.workspace import FeedBlob, RunNetOnce, FetchBlob
from caffe2.python.model_helper import ModelHelper
Explanation: Conditional Execution Using Brew Module
Brew is another Caffe2 interface used to construct nets. Unlike NetBuilder, brew does not track the hierarchy of blocks and, as a result, we need to specify which blobs are considered local and which blobs are considered global when passing 'then' and 'else' models to an API call.
Let's start by importing the necessary items for the brew API.
End of explanation
# Initialize model, which will represent our main conditional model for this test
model = ModelHelper(name="test_if_model")
# Add variables and constants to our conditional model; notice how we add them to the param_init_net
model.param_init_net.ConstantFill([], ["zero"], shape=[1], value=0.0)
model.param_init_net.ConstantFill([], ["one"], shape=[1], value=1.0)
model.param_init_net.ConstantFill([], ["x"], shape=[1], value=0.5)
model.param_init_net.ConstantFill([], ["y"], shape=[1], value=0.0)
# Add Greater Than (GT) conditional operator to our model
# which checks if "x" > "zero", and outputs the result in the "cond" blob
model.param_init_net.GT(["x", "zero"], "cond")
# Initialize a then_model, and add an operator which we will set to be
# executed if the conditional model returns True
then_model = ModelHelper(name="then_test_model")
then_model.net.Copy("one", "y")
# Initialize an else_model, and add an operator which we will set to be
# executed if the conditional model returns False
else_model = ModelHelper(name="else_test_model")
else_model.net.Copy("zero", "y")
# Use the brew module's handy cond operator to facilitate the construction of the operator graph
brew.cond(
model=model, # main conditional model
cond_blob="cond", # blob with condition value
external_blobs=["x", "y", "zero", "one"], # data blobs used in execution of conditional
then_model=then_model, # pass then_model
else_model=else_model) # pass else_model
Explanation: We will use the Caffe2's ModelHelper class to define and represent our models, as well as contain the parameter information about the models. Note that a ModelHelper object has two underlying nets:
(1) param_init_net: Responsible for parameter initialization
(2) net: Contains the main network definition, i.e. the graph of operators that the data flows through
Note that ModelHelper is similar to NetBuilder in that we define the operator graph first, and actually run later. With that said, let's define some models to act as conditional elements, and use the brew module to form the conditional statement that we want to run. We will construct the same statement used in the first example above.
End of explanation
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(model.net, rankdir="LR")
display.Image(graph.create_png(), width=800)
Explanation: Before we run the model, let's use Caffe2's graph visualization tool net_drawer to check if the operator graph makes sense.
End of explanation
# Run param_init_net once
RunNetOnce(model.param_init_net)
# Run main net (once in this case)
RunNetOnce(model.net)
# Fetch and examine some blobs
print("x = ", FetchBlob("x"))
print("y = ", FetchBlob("y"))
Explanation: Now let's run the net! When using ModelHelper, we must first run the param_init_net to initialize parameters, then we execute the main net.
End of explanation
with NetBuilder() as nb:
# Define our variables
ops.Const(0, blob_out="i")
ops.Const(0, blob_out="y")
# Define loop code and conditions
with ops.WhileNet():
with ops.Condition():
ops.Add(["i", ops.Const(1)], ["i"])
ops.LE(["i", ops.Const(7)])
ops.Add(["i", "y"], ["y"])
Explanation: Loops Using NetBuilder
Another important control flow operator is 'While', which allows repeated execution of a fragment of net. Let's consider NetBuilder's version first.
The pseudocode for this example is:
i = 0
y = 0
while (i <= 7):
y = i + y
i += 1
End of explanation
# Initialize a Plan
plan = Plan('while_net_test')
# Add the NetBuilder definition above to the Plan
plan.AddStep(to_execution_step(nb))
# Initialize workspace for blobs
ws = workspace.C.Workspace()
# Run the Plan
ws.run(plan)
# Fetch blobs and print
print("i = ", ws.blobs["i"].fetch())
print("y = ", ws.blobs["y"].fetch())
Explanation: As with the 'If' operator, standard block semantic rules apply. Note the usage of the ops.Condition clause, which should immediately follow ops.WhileNet and contains code that is executed before each iteration. The last operator in the condition clause is expected to have a single boolean output that determines whether another iteration is executed.
In the example above we increment the counter ("i") before each iteration and accumulate its values in the "y" blob. The loop's body is executed 7 times; the resulting blob values are:
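(For reference, a quick plain-Python check of the accumulated value, independent of Caffe2:)
# i is incremented to 1..7 before each executed body, so "y" accumulates 1 + 2 + ... + 7.
assert sum(range(1, 8)) == 28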
End of explanation
# Initialize model, which will represent our main conditional model for this test
model = ModelHelper(name="test_while_model")
# Add variables and constants to our model
model.param_init_net.ConstantFill([], ["i"], shape=[1], value=0)
model.param_init_net.ConstantFill([], ["one"], shape=[1], value=1)
model.param_init_net.ConstantFill([], ["seven"], shape=[1], value=7)
model.param_init_net.ConstantFill([], ["y"], shape=[1], value=0)
# Initialize a loop_model that represents the code to run inside of loop
loop_model = ModelHelper(name="loop_test_model")
loop_model.net.Add(["i", "y"], ["y"])
# Initialize cond_model that represents the conditional test that the loop
# abides by, as well as the incrementation step
cond_model = ModelHelper(name="cond_test_model")
cond_model.net.Add(["i", "one"], "i")
cond_model.net.LE(["i", "seven"], "cond")
# Use brew's loop operator to facilitate the creation of the loop's operator graph
brew.loop(
model=model, # main model that contains data
cond_blob="cond", # explicitly specifying condition blob
external_blobs=["cond", "i", "one", "seven", "y"], # data blobs used in execution of the loop
loop_model=loop_model, # pass loop_model
cond_model=cond_model # pass condition model (optional)
)
Explanation: Loops Using Brew Module
Now let's take a look at how to replicate the loop above using the ModelHelper+brew interface.
End of explanation
graph = net_drawer.GetPydotGraph(model.net, rankdir="LR")
display.Image(graph.create_png(), width=800)
Explanation: Once again, let's visualize the net using the net_drawer.
End of explanation
RunNetOnce(model.param_init_net)
RunNetOnce(model.net)
print("i = ", FetchBlob("i"))
print("y = ", FetchBlob("y"))
Explanation: Finally, we'll run the param_init_net and net and print our final blob values.
End of explanation
import numpy as np
# Feed blob called x, which is simply a 1-D numpy array [0.5]
FeedBlob("x", np.array(0.5, dtype='float32'))
# _use_control_ops=True forces NetBuilder to output single net as a result
# x is external for NetBuilder, so we let nb know about it through initial_scope param
with NetBuilder(_use_control_ops=True, initial_scope=["x"]) as nb:
ops.Const(0.0, blob_out="zero")
ops.Const(1.0, blob_out="one")
ops.Const(4.0, blob_out="y")
ops.Const(0.0, blob_out="z")
with ops.IfNet(ops.GT(["x", "zero"])):
ops.Pow("y", "z", exponent=2.0)
with ops.Else():
ops.Pow("y", "z", exponent=3.0)
# we should get a single net as output
assert len(nb.get()) == 1, "Expected a single net produced"
net = nb.get()[0]
# add gradient operators for 'z' blob
grad_map = net.AddGradientOperators(["z"])
Explanation: Backpropagation
Both 'If' and 'While' operators support backpropagation. To illustrate how backpropagation with control ops works, let's consider the following examples, in which we construct the operator graph using NetBuilder and calculate gradients using the AddGradientOperators function. The first example shows the following conditional statement:
x = 1-D numpy float array
y = 4
z = 0
if (x > 0):
z = y^2
else:
z = y^3
End of explanation
# Run the net
RunNetOnce(net)
# Fetch blobs and print
print("x = ", FetchBlob("x"))
print("y = ", FetchBlob("y"))
print("z = ", FetchBlob("z"))
print("y_grad = ", FetchBlob("y_grad"))
Explanation: In this case
$$x = 0.5$$
$$z = y^2 = 4^2 = 16$$
We will fetch the blob y_grad, which was generated by the AddGradientOperators call above. This blob contains the gradient of blob z with respect to y. According to basic calculus:
$$y\_grad = \frac{\partial{z}}{\partial{y}} = \frac{\partial{(y^2)}}{\partial{y}} = 2y = 2(4) = 8$$
End of explanation
# To re-run net with different input, simply feed new blob
FeedBlob("x", np.array(-0.5, dtype='float32'))
RunNetOnce(net)
print("x = ", FetchBlob("x"))
print("y = ", FetchBlob("y"))
print("z = ", FetchBlob("z"))
print("y_grad = ", FetchBlob("y_grad"))
Explanation: Now, let's change value of blob "x" to -0.5 and rerun net:
End of explanation
with NetBuilder(_use_control_ops=True) as nb:
# Define variables and constants
ops.Copy(ops.Const(0), "i")
ops.Copy(ops.Const(1), "one")
ops.Copy(ops.Const(2), "two")
ops.Copy(ops.Const(2.0), "x")
ops.Copy(ops.Const(3.0), "y")
ops.Copy(ops.Const(2.0), "z")
# Define loop statement
# Computes x^4, y^2, z^3
with ops.WhileNet():
with ops.Condition():
ops.Add(["i", "one"], "i")
ops.LE(["i", "two"])
ops.Pow("x", "x", exponent=2.0)
with ops.IfNet(ops.LT(["i", "two"])):
ops.Pow("y", "y", exponent=2.0)
with ops.Else():
ops.Pow("z", "z", exponent=3.0)
# Sum s = x + y + z
ops.Add(["x", "y"], "x_plus_y")
ops.Add(["x_plus_y", "z"], "s")
assert len(nb.get()) == 1, "Expected a single net produced"
net = nb.get()[0]
# Add gradient operators to output blob 's'
grad_map = net.AddGradientOperators(["s"])
workspace.RunNetOnce(net)
print("x = ", FetchBlob("x"))
print("x_grad = ", FetchBlob("x_grad")) # derivative: 4x^3
print("y = ", FetchBlob("y"))
print("y_grad = ", FetchBlob("y_grad")) # derivative: 2y
print("z = ", FetchBlob("z"))
print("z_grad = ", FetchBlob("z_grad")) # derivative: 3z^2
Explanation: The next and final example illustrates backpropagation on the following loop:
x = 2
y = 3
z = 2
i = 0
while (i <= 2):
x = x^2
if (i < 2):
y = y^2
else:
z = z^3
i += 1
s = x + y + z
Note that this code essentially computes the sum of x^4 (by squaring x twice), y^2, and z^3.
End of explanation |
7,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Signal denoising using RNNs in PyTorch
In this post, I'll use PyTorch to create a simple Recurrent Neural Network (RNN) for denoising a signal. I started learning RNNs using PyTorch. However, I felt that many of the examples were fairly complex. So, here's an attempt to create a simple educational example.
Problem description
Given a noisy sine wave as an input, we want to estimate the denoised signal. This is shown in the figure below.
Customary imports
Step1: Creating noisy and denoised signals
Let's now write functions to cerate a sine wave, add some noise on top of it. This way we're able to create a noisy verison of the sine wave.
Step2: Let's now invoke the functions we defined to generate the figure we saw in the problem description.
Step3: Creating dataset
Now, let's write a simple function to generate a dataset of such noisy and denoised samples.
Step4: Now, creating the dataset, and dividing it into train and test set.
Step5: Creating RNN
We have 1d sine waves, which we want to denoise. Thus, we have input dimension of 1. Let's create a simple 1-layer RNN with 30 hidden units.
Step6: Training
Step7: Great. As expected, the loss reduces over time.
Generating prediction on test set
Step8: Visualising sample denoising
Step9: Bidirectional RNN
Seems reasonably neat to me! If only the first few points were better esimtated. Any idea why they're not? Maybe, we need a bidirectional RNN? Let's try one, and I'll also add dropout to prevent overfitting.
Step10: Hmm. The estimated signal looks better for the initial few points. But, gets worse for the final few points. Oops! Guess, now the reverse RNN causes problems for its first few points!
From RNNs to GRU
Let's now replace our RNN with GRU to see if the model improves.
Step11: The GRU prediction seems to far better! Maybe, the RNNs suffer from the vanishing gradients problem?
Visualising estimations as model improves
Let's now write a simple function to visualise the estimations as a function of iterations. We'd expect the estimations to improve over time.
Step12: This looks great! We can see how our model learns to learn reasonably good denoised signals over time. It doesn't start great though. Would a better initialisation help? I certainly feel that for this particular problem it would, as predicting the output the same as input is a good starting point!
Bonus
Step13: Testing using a network with few parameters. | Python Code:
import numpy as np
import math, random
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(0)
Explanation: Signal denoising using RNNs in PyTorch
In this post, I'll use PyTorch to create a simple Recurrent Neural Network (RNN) for denoising a signal. I started learning RNNs using PyTorch. However, I felt that many of the examples were fairly complex. So, here's an attempt to create a simple educational example.
Problem description
Given a noisy sine wave as an input, we want to estimate the denoised signal. This is shown in the figure below.
Customary imports
End of explanation
# Generating a clean sine wave
def sine(X, signal_freq=60.):
return np.sin(2 * np.pi * (X) / signal_freq)
# Adding uniform noise
def noisy(Y, noise_range=(-0.35, 0.35)):
noise = np.random.uniform(noise_range[0], noise_range[1], size=Y.shape)
return Y + noise
# Create a noisy and clean sine wave
def sample(sample_size):
random_offset = random.randint(0, sample_size)
X = np.arange(sample_size)
out = sine(X + random_offset)
inp = noisy(out)
return inp, out
Explanation: Creating noisy and denoised signals
Let's now write functions to create a sine wave and add some noise on top of it. This way we're able to create a noisy version of the sine wave.
End of explanation
inp, out = sample(100)
plt.plot(inp, label='Noisy')
plt.plot(out, label ='Denoised')
plt.legend()
Explanation: Let's now invoke the functions we defined to generate the figure we saw in the problem description.
End of explanation
def create_dataset(n_samples=10000, sample_size=100):
data_inp = np.zeros((n_samples, sample_size))
data_out = np.zeros((n_samples, sample_size))
for i in range(n_samples):
sample_inp, sample_out = sample(sample_size)
data_inp[i, :] = sample_inp
data_out[i, :] = sample_out
return data_inp, data_out
Explanation: Creating dataset
Now, let's write a simple function to generate a dataset of such noisy and denoised samples.
End of explanation
data_inp, data_out = create_dataset()
train_inp, train_out = data_inp[:8000], data_out[:8000]
test_inp, test_out = data_inp[8000:], data_out[8000:]
import torch
import torch.nn as nn
from torch.autograd import Variable
Explanation: Now, creating the dataset, and dividing it into train and test set.
End of explanation
input_dim = 1
hidden_size = 30
num_layers = 1
class CustomRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(CustomRNN, self).__init__()
self.rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size, batch_first=True)
self.linear = nn.Linear(hidden_size, output_size, )
self.act = nn.Tanh()
def forward(self, x):
pred, hidden = self.rnn(x, None)
pred = self.act(self.linear(pred)).view(pred.data.shape[0], -1, 1)
return pred
r= CustomRNN(input_dim, hidden_size, 1)
r
Explanation: Creating RNN
We have 1d sine waves, which we want to denoise. Thus, we have input dimension of 1. Let's create a simple 1-layer RNN with 30 hidden units.
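As a quick aside on tensor shapes (an illustrative sketch, not part of the original notebook; the small batch size here is arbitrary):
import torch
import torch.nn as nn

# With batch_first=True, nn.RNN expects input of shape (batch, seq_len, input_size).
x = torch.zeros(8, 100, 1)                                    # e.g. 8 noisy sequences of length 100
rnn = nn.RNN(input_size=1, hidden_size=30, batch_first=True)
out, h = rnn(x)
print(out.shape)   # torch.Size([8, 100, 30]) -- one 30-dim hidden vector per time step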
End of explanation
# Storing predictions per iterations to visualise later
predictions = []
optimizer = torch.optim.Adam(r.parameters(), lr=1e-2)
loss_func = nn.L1Loss()
for t in range(301):
hidden = None
inp = Variable(torch.Tensor(train_inp.reshape((train_inp.shape[0], -1, 1))), requires_grad=True)
out = Variable(torch.Tensor(train_out.reshape((train_out.shape[0], -1, 1))) )
pred = r(inp)
optimizer.zero_grad()
predictions.append(pred.data.numpy())
loss = loss_func(pred, out)
if t%20==0:
print(t, loss.data[0])
loss.backward()
optimizer.step()
Explanation: Training
End of explanation
t_inp = Variable(torch.Tensor(test_inp.reshape((test_inp.shape[0], -1, 1))), requires_grad=True)
pred_t = r(t_inp)
# Test loss
print(loss_func(pred_t, Variable(torch.Tensor(test_out.reshape((test_inp.shape[0], -1, 1))))).data[0])
Explanation: Great. As expected, the loss reduces over time.
Generating prediction on test set
End of explanation
sample_num = 23
plt.plot(pred_t[sample_num].data.numpy(), label='Pred')
plt.plot(test_out[sample_num], label='GT')
plt.legend()
plt.title("Sample num: {}".format(sample_num))
Explanation: Visualising sample denoising
End of explanation
bidirectional = True
if bidirectional:
num_directions = 2
else:
num_directions = 1
class CustomRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(CustomRNN, self).__init__()
self.rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size,
batch_first=True, bidirectional=bidirectional, dropout=0.1)
self.linear = nn.Linear(hidden_size*num_directions, output_size, )
self.act = nn.Tanh()
def forward(self, x):
pred, hidden = self.rnn(x, None)
pred = self.act(self.linear(pred)).view(pred.data.shape[0], -1, 1)
return pred
r= CustomRNN(input_dim, hidden_size, 1)
r
# Storing predictions per iterations to visualise later
predictions = []
optimizer = torch.optim.Adam(r.parameters(), lr=1e-2)
loss_func = nn.L1Loss()
for t in range(301):
hidden = None
inp = Variable(torch.Tensor(train_inp.reshape((train_inp.shape[0], -1, 1))), requires_grad=True)
out = Variable(torch.Tensor(train_out.reshape((train_out.shape[0], -1, 1))) )
pred = r(inp)
optimizer.zero_grad()
predictions.append(pred.data.numpy())
loss = loss_func(pred, out)
if t%20==0:
print(t, loss.data[0])
loss.backward()
optimizer.step()
t_inp = Variable(torch.Tensor(test_inp.reshape((test_inp.shape[0], -1, 1))), requires_grad=True)
pred_t = r(t_inp)
# Test loss
print(loss_func(pred_t, Variable(torch.Tensor(test_out.reshape((test_inp.shape[0], -1, 1))))).data[0])
sample_num = 23
plt.plot(pred_t[sample_num].data.numpy(), label='Pred')
plt.plot(test_out[sample_num], label='GT')
plt.legend()
plt.title("Sample num: {}".format(sample_num))
Explanation: Bidirectional RNN
Seems reasonably neat to me! If only the first few points were better estimated. Any idea why they're not? Maybe we need a bidirectional RNN? Let's try one, and I'll also add dropout to prevent overfitting.
End of explanation
bidirectional = True
if bidirectional:
num_directions = 2
else:
num_directions = 1
class CustomRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(CustomRNN, self).__init__()
self.rnn = nn.GRU(input_size=input_size, hidden_size=hidden_size,
batch_first=True, bidirectional=bidirectional, dropout=0.1)
self.linear = nn.Linear(hidden_size*num_directions, output_size, )
self.act = nn.Tanh()
def forward(self, x):
pred, hidden = self.rnn(x, None)
pred = self.act(self.linear(pred)).view(pred.data.shape[0], -1, 1)
return pred
r= CustomRNN(input_dim, hidden_size, 1)
r
# Storing predictions per iterations to visualise later
predictions = []
optimizer = torch.optim.Adam(r.parameters(), lr=1e-2)
loss_func = nn.L1Loss()
for t in range(201):
hidden = None
inp = Variable(torch.Tensor(train_inp.reshape((train_inp.shape[0], -1, 1))), requires_grad=True)
out = Variable(torch.Tensor(train_out.reshape((train_out.shape[0], -1, 1))) )
pred = r(inp)
optimizer.zero_grad()
predictions.append(pred.data.numpy())
loss = loss_func(pred, out)
if t%20==0:
print(t, loss.data[0])
loss.backward()
optimizer.step()
t_inp = Variable(torch.Tensor(test_inp.reshape((test_inp.shape[0], -1, 1))), requires_grad=True)
pred_t = r(t_inp)
# Test loss
print(loss_func(pred_t, Variable(torch.Tensor(test_out.reshape((test_inp.shape[0], -1, 1))))).data[0])
sample_num = 23
plt.plot(pred_t[sample_num].data.numpy(), label='Pred')
plt.plot(test_out[sample_num], label='GT')
plt.legend()
plt.title("Sample num: {}".format(sample_num))
Explanation: Hmm. The estimated signal looks better for the initial few points. But, gets worse for the final few points. Oops! Guess, now the reverse RNN causes problems for its first few points!
From RNNs to GRU
Let's now replace our RNN with GRU to see if the model improves.
End of explanation
plt.rcParams['animation.ffmpeg_path'] = './ffmpeg'
from matplotlib.animation import FuncAnimation
fig, ax = plt.subplots(figsize=(4, 3))
fig.set_tight_layout(True)
# Query the figure's on-screen size and DPI. Note that when saving the figure to
# a file, we need to provide a DPI for that separately.
print('fig size: {0} DPI, size in inches {1}'.format(
fig.get_dpi(), fig.get_size_inches()))
def update(i):
label = 'Iteration {0}'.format(i)
ax.cla()
ax.plot(np.array(predictions)[i, 0, :, 0].T, label='Pred')
ax.plot(train_out[0, :], label='GT')
ax.legend()
ax.set_title(label)
anim = FuncAnimation(fig, update, frames=range(0, 201, 4), interval=20)
anim.save('learning.mp4',fps=20)
plt.close()
from IPython.display import Video
Video("learning.mp4")
Explanation: The GRU prediction seems far better! Maybe the RNNs suffer from the vanishing gradients problem?
Visualising estimations as model improves
Let's now write a simple function to visualise the estimations as a function of iterations. We'd expect the estimations to improve over time.
End of explanation
for num_unknown_values in range(50):
train_out[np.random.choice(list(range(0, 8000))), np.random.choice(list(range(0, 100)))] = np.NAN
np.isnan(train_out).sum()
Explanation: This looks great! We can see how our model learns to produce reasonably good denoised signals over time. It doesn't start great though. Would a better initialisation help? I certainly feel that for this particular problem it would, as predicting the output to be the same as the input is a good starting point!
Bonus: Handling missing values in denoised training data
The trick to handling missing values in the denoised training data (the quantity we wish to estimate) is to compute the loss only over the present values. This requires creating a mask that selects all entries except the missing ones.
One such way to do so would be: mask = out > -1 * 1e8, where out is the tensor containing missing values. This works because any comparison against NaN evaluates to False, so the NaN entries are excluded from the mask.
Let's first add some unknown values (np.NaN) in the training output data.
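As an aside, with a reasonably recent PyTorch (newer than the API this notebook was written against), the same mask can be built directly from the NaN encoding; a minimal sketch that would drop into the training loop below:
# Keep only the entries that are not NaN when computing the loss.
mask = ~torch.isnan(out)
loss = loss_func(pred[mask], out[mask])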
End of explanation
r= CustomRNN(input_dim, 2, 1)
r
# Storing predictions per iterations to visualise later
predictions = []
optimizer = torch.optim.Adam(r.parameters(), lr=1e-2)
loss_func = nn.L1Loss()
for t in range(20):
hidden = None
inp = Variable(torch.Tensor(train_inp.reshape((train_inp.shape[0], -1, 1))), requires_grad=True)
out = Variable(torch.Tensor(train_out.reshape((train_out.shape[0], -1, 1))) )
pred = r(inp)
optimizer.zero_grad()
predictions.append(pred.data.numpy())
# Create a mask to compute loss only on defined quantities
mask = out > -1* 1e8
loss = loss_func(pred[mask], out[mask])
if t%20==0:
print(t, loss.data[0])
loss.backward()
optimizer.step()
Explanation: Testing using a network with few parameters.
End of explanation |
7,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training at scale with the Vertex AI Training Service
Learning Objectives
Step1: Change the following cell as necessary
Step2: Confirm below that the bucket is regional and its region equals to the specified region
Step3: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell
Step4: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
Step5: Make the validation dataset be 1/10 the size of the training dataset.
Step6: Export the tables as CSV files
Step7: Make code compatible with Vertex AI Training Service
In order to make our code compatible with Vertex AI Training Service we need to make the following changes
Step8: Move code into a python package
The first thing to do is to convert your training code snippets into a regular Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create the package directory
Our package directory contains 3 files
Step9: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file.
In the cell below, we write the contents of the cell into model.py packaging the model we
developed in the previous labs so that we can deploy it to Vertex AI Training Service.
Step10: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
Step11: Run trainer module package locally
Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step.
Step12: Run your training package on Vertex AI using a pre-built container
Once the code works in standalone mode locally, you can run it on the Cloud using Vertex AI and use pre-built containers. First, we need to package our code as a source distribution. For this, we can use setuptools.
Step13: We will store our package in the Cloud Storage bucket.
Step14: Submit Custom Job using the gcloud CLI
To submit this source distribution the Cloud we use gcloud ai custom-jobs create and simply specify some additional parameters for Vertex AI Training service
Step15: Submit Custom Job using the Vertex AI Python SDK
The gcloud CLI is just one of multiple ways to interact with Vertex AI, which also include the Console GUI, directly calling the REST APIs (e.g. using curl), and the most flexible interface being the Vertex AI SDK available in multiple languages.
Below, we use the Vertex AI Python SDK to accomplish the same Pre-Built Container training as above -- getting familiar with the SDK will come in handy later when we use advanced features such as hyperparameter tuning.
Run your training package using a custom container
Vertex AI Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on Vertex AI Training.
In this last section, we'll see how to submit a Cloud training job using a customized Docker image.
Containerizing our ./taxifare/trainer package involves 3 steps
Step16: Remark | Python Code:
import os
from google.cloud import bigquery
Explanation: Training at scale with the Vertex AI Training Service
Learning Objectives:
1. Learn how to organize your training code into a Python package
1. Train your model using cloud infrastructure via Google Cloud Vertex AI Training Service
1. (optional) Learn how to run your training package using Docker containers and push training Docker images on a Docker registry
Introduction
In this notebook we'll make the jump from training locally to training in the cloud. We'll take advantage of Google Cloud's Vertex AI Training Service.
Vertex AI Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Specify your project name and bucket name in the cell below.
End of explanation
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
REGION = "<YOUR REGION>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
Explanation: Change the following cell as necessary:
End of explanation
%%bash
gsutil ls -Lb gs://$BUCKET | grep "gs://\|Location"
echo $REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
Explanation: Confirm below that the bucket is regional and that its region equals the specified region:
End of explanation
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
Explanation: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Make the validation dataset be 1/10 the size of the training dataset.
End of explanation
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
Explanation: Export the tables as CSV files
End of explanation
!gsutil ls gs://$BUCKET/taxifare/data
Explanation: Make code compatible with Vertex AI Training Service
In order to make our code compatible with Vertex AI Training Service we need to make the following changes:
Upload data to Google Cloud Storage
Move code into a trainer Python package
Submit training job with gcloud to train on Vertex AI
Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this run the notebook 0_export_data_from_bq_to_gcs.ipynb, which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
End of explanation
ls ./taxifare/trainer/
Explanation: Move code into a python package
The first thing to do is to convert your training code snippets into a regular Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create the package directory
Our package directory contains 3 files:
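For reference, the layout we are aiming for looks like this (a sketch of the structure, not output from the lab):
taxifare/trainer/
    __init__.py
    model.py
    task.py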
End of explanation
%%writefile ./taxifare/trainer/model.py
import datetime
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
#TODO 1a
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
#TODO 1b
batch_size = hparams['batch_size']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
nnsize = [int(s) for s in hparams['nnsize'].split()]
eval_data_path = hparams['eval_data_path']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'export/savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path,
save_weights_only=True,
verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(model, model_export_path)
return history
Explanation: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file.
In the cell below, we write the contents of the cell into model.py packaging the model we
developed in the previous labs so that we can deploy it to Vertex AI Training Service.
End of explanation
%%writefile taxifare/trainer/task.py
import argparse
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
default="32 8"
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True
)
args = parser.parse_args()
hparams = args.__dict__
model.train_and_evaluate(hparams)
Explanation: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
End of explanation
%%bash
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
test ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize "32 8"
Explanation: Run trainer module package locally
Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step.
End of explanation
%%writefile taxifare/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='taxifare_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='Taxifare model training application.'
)
%%bash
cd taxifare
python ./setup.py sdist --formats=gztar
cd ..
Explanation: Run your training package on Vertex AI using a pre-built container
Once the code works in standalone mode locally, you can run it on the Cloud using Vertex AI and use pre-built containers. First, we need to package our code as a source distribution. For this, we can use setuptools.
End of explanation
%%bash
gsutil cp taxifare/dist/taxifare_trainer-0.1.tar.gz gs://${BUCKET}/taxifare/
Explanation: We will store our package in the Cloud Storage bucket.
End of explanation
%%bash
# Output directory and jobID
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$TIMESTAMP
JOB_NAME=taxifare_$TIMESTAMP
echo ${OUTDIR} ${REGION} ${JOB_NAME}
PYTHON_PACKAGE_URIS=gs://${BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz
MACHINE_TYPE=n1-standard-4
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=5000
NUM_EVALS=100
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\
replica-count=$REPLICA_COUNT,\
executor-image-uri=$PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,\
python-module=$PYTHON_MODULE"
ARGS="--eval_data_path=$EVAL_DATA_PATH,\
--output_dir=$OUTDIR,\
--train_data_path=$TRAIN_DATA_PATH,\
--batch_size=$BATCH_SIZE,\
--num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON,\
--num_evals=$NUM_EVALS,\
--nbuckets=$NBUCKETS,\
--lr=$LR,\
--nnsize=$NNSIZE"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--python-package-uris=$PYTHON_PACKAGE_URIS \
--worker-pool-spec=$WORKER_POOL_SPEC \
--args="$ARGS"
Explanation: Submit Custom Job using the gcloud CLI
To submit this source distribution the Cloud we use gcloud ai custom-jobs create and simply specify some additional parameters for Vertex AI Training service:
- job_name: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- region: Cloud region to train in. See here for supported Vertex AI Custom model training regions
The arguments within --args are sent to our task.py.
Because this is on the entire dataset, it will take a while. You can monitor the job from the GCP console in the Vertex AI Training section.
End of explanation
%%writefile ./taxifare/Dockerfile
FROM us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest
# TODO 3
COPY . /code
WORKDIR /code
ENTRYPOINT ["python3", "-m", "trainer.task"]
%%bash
PROJECT_DIR=$(cd ./taxifare && pwd)
IMAGE_NAME=taxifare_training_container
DOCKERFILE=$PROJECT_DIR/Dockerfile
IMAGE_URI=gcr.io/$PROJECT/$IMAGE_NAME
docker build $PROJECT_DIR -f $DOCKERFILE -t $IMAGE_URI
docker push $IMAGE_URI
Explanation: Submit Custom Job using the Vertex AI Python SDK
The gcloud CLI is just one of multiple ways to interact with Vertex AI, which also include the Console GUI, directly calling the REST APIs (e.g. using curl), and the most flexible interface being the Vertex AI SDK available in multiple languages.
Below, we use the Vertex AI Python SDK to accomplish the same Pre-Built Container training as above -- getting familiar with the SDK will come in handy later when we use advanced features such as hyperparameter tuning.
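A minimal sketch of what that looks like with the google-cloud-aiplatform package (the argument values mirror the gcloud call above; the display name and output directory are placeholders, and exact parameter names may vary slightly across SDK versions):
from google.cloud import aiplatform

aiplatform.init(project=PROJECT, location=REGION, staging_bucket=f"gs://{BUCKET}")

# Pre-built container training job driven by the source distribution we uploaded to GCS.
job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="taxifare_sdk",  # placeholder name
    python_package_gcs_uri=f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz",
    python_module_name="trainer.task",
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest",
)

job.run(
    args=[
        f"--train_data_path=gs://{BUCKET}/taxifare/data/taxi-train*",
        f"--eval_data_path=gs://{BUCKET}/taxifare/data/taxi-valid*",
        f"--output_dir=gs://{BUCKET}/taxifare/trained_model_sdk",  # placeholder output dir
        "--num_examples_to_train_on=5000",
        "--num_evals=100",
    ],
    replica_count=1,
    machine_type="n1-standard-4",
)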
Run your training package using a custom container
Vertex AI Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on Vertex AI Training.
In this last section, we'll see how to submit a Cloud training job using a customized Docker image.
Containerizing our ./taxifare/trainer package involves 3 steps:
Writing a Dockerfile in ./taxifare
Building the Docker image
Pushing it to the Google Cloud container registry in our GCP project
The Dockerfile specifies
1. How the container needs to be provisioned so that all the dependencies in our code are satisfied
2. Where to copy our trainer Package in the container
3. What command to run when the container is run (the ENTRYPOINT line)
End of explanation
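The SDK-based submission mentioned above is not shown as a cell in this notebook, so here is a minimal sketch of how the same pre-built-container training could be submitted with the Vertex AI Python SDK. It assumes PROJECT, BUCKET, and REGION are also available as Python variables mirroring the shell variables used earlier, and the display name is illustrative.
# Sketch: submit the packaged trainer with the Vertex AI Python SDK
from google.cloud import aiplatform
aiplatform.init(project=PROJECT, location=REGION, staging_bucket=f"gs://{BUCKET}")
sdk_job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="taxifare_sdk",  # illustrative name
    python_package_gcs_uri=f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz",
    python_module_name="trainer.task",
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest",
)
sdk_job.run(
    args=[
        f"--train_data_path=gs://{BUCKET}/taxifare/data/taxi-train*",
        f"--eval_data_path=gs://{BUCKET}/taxifare/data/taxi-valid*",
        f"--output_dir=gs://{BUCKET}/taxifare/trained_model_sdk",
        "--batch_size=50",
        "--num_examples_to_train_on=5000",
        "--num_evals=100",
        "--nbuckets=10",
        "--lr=0.001",
        "--nnsize=32 8",
    ],
    replica_count=1,
    machine_type="n1-standard-4",
)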
%%bash
# Output directory and jobID
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$TIMESTAMP
JOB_NAME=taxifare_container_$TIMESTAMP
echo ${OUTDIR} ${REGION} ${JOB_NAME}
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=5000
NUM_EVALS=100
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# Vertex AI machines to use for training
MACHINE_TYPE=n1-standard-4
REPLICA_COUNT=1
# GCS paths.
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
IMAGE_NAME=taxifare_training_container
IMAGE_URI=gcr.io/$PROJECT/$IMAGE_NAME
WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\
replica-count=$REPLICA_COUNT,\
container-image-uri=$IMAGE_URI"
ARGS="--eval_data_path=$EVAL_DATA_PATH,\
--output_dir=$OUTDIR,\
--train_data_path=$TRAIN_DATA_PATH,\
--batch_size=$BATCH_SIZE,\
--num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON,\
--num_evals=$NUM_EVALS,\
--nbuckets=$NBUCKETS,\
--lr=$LR,\
--nnsize=$NNSIZE"
gcloud ai custom-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--worker-pool-spec=$WORKER_POOL_SPEC \
--args="$ARGS"
Explanation: Remark: If you prefer to build the container image from the command line, we have written a script for that ./taxifare/scripts/build.sh. This script reads its configuration from the file ./taxifare/scripts/env.sh. You can configure these arguments the way you want in that file. You can also simply type make build from within ./taxifare to build the image (which will invoke the build script). Similarly, we wrote the script ./taxifare/scripts/push.sh to push the Docker image, which you can also trigger by typing make push from within ./taxifare.
Train using a custom container on Vertex AI
TODO: To submit to the Cloud we use gcloud ai custom-jobs create and simply specify some additional parameters for Vertex AI Training Service:
- job_name: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- image-uri: The uri of the Docker image we pushed in the Google Cloud registry
- region: Cloud region to train in. See here for supported Vertex AI Training Service regions
The arguments within --args are sent to our task.py.
You can track your job and view logs using cloud console.
End of explanation |
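Besides the Cloud Console, you can also check on submitted jobs programmatically. The sketch below uses the Vertex AI Python SDK and assumes PROJECT and REGION Python variables are defined in the notebook.
# Sketch: list custom jobs and their states with the SDK
from google.cloud import aiplatform
aiplatform.init(project=PROJECT, location=REGION)
for job in aiplatform.CustomJob.list():
    print(job.display_name, job.state)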
7,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Independent Analysis - Srinivas (handle
Step1: we've now dropped the last of the discrete numerical inexplicable data, and removed children from the mix
Extracting the samples we are interested in
Step2: We see here that there are 1383 people who have ADHD but are not Bipolar and 440 people who are Bipolar but do not have ADHD
Dimensionality reduction
PCA
Step3: We see here that most of the variance is preserved with just 24 features.
Manifold Techniques
ISOMAP
Step4: Multi dimensional scaling
Step5: As is evident above, the 2 manifold techniques don't really offer very different dimensionality reductions. Therefore we are just going to roll with Multi dimensional scaling
Clustering and other grouping experiments
Mean-Shift - mds
Step6: Though I'm not sure how to tweak the hyper-parameters of the bandwidth estimation function, there doesn't seem to be much difference. Minute variations to the bandwidth result in large cluster differences. Perhaps the data isn't very suitable for a contrived clustering technique like Mean-Shift. Therefore let us attempt something more naive and simplistic like K-Means
K-Means clustering - mds
Step7: As is evident from the above 2 experiments, no clear clustering is apparent. But there is some significant overlap and there are 2 clear groups
Classification Experiments
Let's experiment with a bunch of classifiers | Python Code:
# Standard
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Dimensionality reduction and Clustering
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn import manifold, datasets
from itertools import cycle
# Plotting tools and classifiers
from matplotlib.colors import ListedColormap
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn import cross_validation
from sklearn.cross_validation import LeaveOneOut
from sklearn.cross_validation import LeavePOut
# Let's read the data in and clean it
def get_NaNs(df):
columns = list(df.columns.get_values())
row_metrics = df.isnull().sum(axis=1)
rows_with_na = []
for i, x in enumerate(row_metrics):
if x > 0: rows_with_na.append(i)
return rows_with_na
def remove_NaNs(df):
rows_with_na = get_NaNs(df)
cleansed_df = df.drop(df.index[rows_with_na], inplace=False)
return cleansed_df
initial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced.csv')
cleansed_df = remove_NaNs(initial_data)
# Let's also get rid of nominal data
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
X = cleansed_df.select_dtypes(include=numerics)
print X.shape
# Let's now clean columns getting rid of certain columns that might not be important to our analysis
cols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id',
'Baseline_Reading_id', 'Concentration_Reading_id']
X = X.drop(cols2drop, axis=1, inplace=False)
print X.shape
# For our studies children skew the data, it would be cleaner to just analyse adults
X = X.loc[X['Age'] >= 18]
print X.shape
Explanation: Independent Analysis - Srinivas (handle: thewickedaxe)
PLEASE SCROLL TO THE BOTTOM OF THE NOTEBOOK TO FIND THE QUESTIONS AND THEIR ANSWERS
In this notebook we explore dimensionality reduction with ISOMAP and MDS and their effects on classification
Initial Data Cleaning
End of explanation
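Before removing rows, it can help to see where the missing values actually live. The quick check below is a sketch that reuses the initial_data and cleansed_df frames defined above.
# Per-column NaN counts before dropping rows, plus how many rows were removed
na_counts = initial_data.isnull().sum()
print(na_counts[na_counts > 0].sort_values(ascending=False).head(10))
print("rows dropped: %d" % (len(initial_data) - len(cleansed_df)))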
# Let's extract ADHD and Bipolar patients (mutually exclusive)
ADHD = X.loc[X['ADHD'] == 1]
ADHD = ADHD.loc[ADHD['Bipolar'] == 0]
BP = X.loc[X['Bipolar'] == 1]
BP = BP.loc[BP['ADHD'] == 0]
print ADHD.shape
print BP.shape
# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions
ADHD_df = ADHD.copy(deep = True)
BP_df = BP.copy(deep = True)
ADHD = pd.DataFrame(ADHD.drop(['Patient_ID'], axis = 1, inplace = False))
BP = pd.DataFrame(BP.drop(['Patient_ID'], axis = 1, inplace = False))
Explanation: we've now dropped the last of the discrete numerical inexplicable data, and removed children from the mix
Extracting the samples we are interested in
End of explanation
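A quick sanity check, offered as a sketch using the X, ADHD and BP frames above, confirms that the two groups are mutually exclusive on the diagnosis flags.
# Cross-tabulate the diagnosis flags; the off-diagonal cells are the two groups we keep
print(pd.crosstab(X['ADHD'], X['Bipolar']))
print("ADHD-only: %d, Bipolar-only: %d" % (len(ADHD), len(BP)))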
combined = pd.concat([ADHD, BP])
combined_backup = pd.concat([ADHD, BP])
pca = PCA(n_components = 24, whiten = "True").fit(combined)
combined = pca.transform(combined)
print sum(pca.explained_variance_ratio_)
combined = pd.DataFrame(combined)
ADHD_reduced_df = combined[:1383]
BP_reduced_df = combined[1383:]
ADHD_reduced_df_id = ADHD_reduced_df.copy(deep = True)
BP_reduced_df_id = BP_reduced_df.copy(deep = True)
ADHD_reduced_df_id['Patient_ID'] = 123
BP_reduced_df_id['Patient_ID'] = 123
print ADHD_reduced_df.shape
print BP_reduced_df.shape
print ADHD_reduced_df_id.shape
print BP_reduced_df_id.shape
# resorting to some hacky crap, that I am ashamed to write, but pandas is refusing to cooperate
z = []
for x in BP_df['Patient_ID']:
z.append(x)
BP_reduced_df_id['Patient_ID'] = z
z = []
for x in ADHD_df['Patient_ID']:
z.append(x)
ADHD_reduced_df_id['Patient_ID'] = z
ADHD_pca = ADHD_reduced_df.copy(deep = True)
BP_pca = BP_reduced_df.copy(deep = True)
Explanation: We see here that there are 1383 people who have ADHD but are not Bipolar and 440 people who are Bipolar but do not have ADHD
Dimensionality reduction
PCA
End of explanation
combined = manifold.Isomap(20, 20).fit_transform(combined_backup)
ADHD_iso = combined[:1383]
BP_iso = combined[1383:]
print pd.DataFrame(ADHD_iso).head()
Explanation: We see here that most of the variance is preserved with just 24 features.
Manifold Techniques
ISOMAP
End of explanation
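To back up the variance claim above, the short sketch below reuses the pca object fitted earlier and plots the cumulative explained variance over the 24 retained components.
# Cumulative explained variance for the 24 PCA components
cum_var = np.cumsum(pca.explained_variance_ratio_)
plt.figure()
plt.plot(range(1, len(cum_var) + 1), cum_var, marker='o')
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
print("variance captured by 24 components: %.3f" % cum_var[-1])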
mds = manifold.MDS(20).fit_transform(combined_backup)
ADHD_mds = mds[:1383]
BP_mds = mds[1383:]
print pd.DataFrame(ADHD_mds).head()
Explanation: Multi dimensional scaling
End of explanation
ADHD_clust = pd.DataFrame(ADHD_mds)
BP_clust = pd.DataFrame(BP_mds)
# This is a consequence of how we dropped columns, I apologize for the hacky code
data = pd.concat([ADHD_clust, BP_clust])
# Let's see what happens with Mean Shift clustering
bandwidth = estimate_bandwidth(data.get_values(), quantile=0.2, n_samples=1823) * 0.8
ms = MeanShift(bandwidth=bandwidth)
ms.fit(data.get_values())
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print('Estimated number of clusters: %d' % n_clusters_)
for cluster in range(n_clusters_):
ds = data.get_values()[np.where(labels == cluster)]
plt.plot(ds[:,0], ds[:,1], '.')
lines = plt.plot(cluster_centers[cluster, 0], cluster_centers[cluster, 1], 'o')
Explanation: As is evident above, the 2 manifold techniques don't really offer very different dimensionality reductions. Therefore we are just going to roll with Multi dimensional scaling
Clustering and other grouping experiments
Mean-Shift - mds
End of explanation
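To make the observation that the two manifold techniques look alike more concrete, here is a sketch that plots the first two ISOMAP and MDS dimensions side by side for the two groups, reusing ADHD_iso/BP_iso and ADHD_mds/BP_mds from above.
# Side-by-side scatter of the first two ISOMAP and MDS dimensions
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, (adhd, bp, name) in zip(axes, [(ADHD_iso, BP_iso, 'ISOMAP'), (ADHD_mds, BP_mds, 'MDS')]):
    adhd, bp = np.asarray(adhd), np.asarray(bp)
    ax.plot(adhd[:, 0], adhd[:, 1], '.', label='ADHD')
    ax.plot(bp[:, 0], bp[:, 1], '.', label='Bipolar')
    ax.set_title(name)
    ax.legend()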
kmeans = KMeans(n_clusters=2)
kmeans.fit(data.get_values())
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
print('Estimated number of clusters: %d' % len(centroids))
print data.shape
for label in [0, 1]:
ds = data.get_values()[np.where(labels == label)]
plt.plot(ds[:,0], ds[:,1], '.')
lines = plt.plot(centroids[label,0], centroids[label,1], 'o')
Explanation: Though I'm not sure how to tweak the hyper-parameters of the bandwidth estimation function, there doesn't seem to be much difference. Minute variations to the bandwidth result in large cluster differences. Perhaps the data isn't very suitable for a contrived clustering technique like Mean-Shift. Therefore let us attempt something more naive and simplistic like K-Means
K-Means clustering - mds
End of explanation
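One simple way to probe the bandwidth sensitivity mentioned above is to rerun Mean-Shift over a few scale factors around the estimated bandwidth. This is a sketch that reuses the data frame built for clustering.
# Rough sweep over bandwidth scale factors to gauge Mean-Shift's sensitivity
base_bw = estimate_bandwidth(data.get_values(), quantile=0.2, n_samples=1823)
for scale in [0.6, 0.8, 1.0, 1.2]:
    ms_sweep = MeanShift(bandwidth=base_bw * scale)
    ms_sweep.fit(data.get_values())
    print("bandwidth scale %.1f -> %d clusters" % (scale, len(np.unique(ms_sweep.labels_))))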
ADHD_mds = pd.DataFrame(ADHD_mds)
BP_mds = pd.DataFrame(BP_mds)
BP_mds['ADHD-Bipolar'] = 0
ADHD_mds['ADHD-Bipolar'] = 1
data = pd.concat([ADHD_mds, BP_mds])
class_labels = data['ADHD-Bipolar']
data = data.drop(['ADHD-Bipolar'], axis = 1, inplace = False)
print data.shape
data = data.get_values()
# Leave one Out cross validation
def leave_one_out(classifier, values, labels):
leave_one_out_validator = LeaveOneOut(len(values))
classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)
accuracy = classifier_metrics.mean()
deviation = classifier_metrics.std()
return accuracy, deviation
p_val = 100
knn = KNeighborsClassifier(n_neighbors = 5)
svc = SVC(gamma = 2, C = 1)
rf = RandomForestClassifier(n_estimators = 22)
dt = DecisionTreeClassifier(max_depth = 22)
qda = QDA()
gnb = GaussianNB()
classifier_accuracy_list = []
classifiers = [(knn, "KNN"), (svc, "SVM"), (rf, "Random Forest"), (dt, "Decision Tree"),
(qda, "QDA"), (gnb, "Gaussian NB")]
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, data, class_labels)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
Explanation: As is evident from the above 2 experiments, no clear clustering is apparent. But there is some significant overlap and there are 2 clear groups
Classification Experiments
Let's experiment with a bunch of classifiers
End of explanation |
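Two quick follow-up checks on the results above, offered as a sketch: a silhouette score to quantify how weak the k-means separation is, and the majority-class baseline accuracy (the split is roughly 1383 vs 440) to put the leave-one-out classifier accuracies into context.
# Sketch: cluster-quality and class-imbalance baselines for the experiments above
from sklearn.metrics import silhouette_score
print("k-means silhouette: %.3f" % silhouette_score(data, labels))
majority_fraction = max(class_labels.mean(), 1 - class_labels.mean())
print("majority-class baseline accuracy: %.3f" % majority_fraction)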
7,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook is a revised version of notebook from Amy Wu
E2E ML on GCP
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Dataflow API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Get your project number
Now that the project ID is set, you get your corresponding project number.
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a built-in Swivel job using the Cloud SDK, you need a Cloud Storage bucket for storing the input dataset and pipeline artifacts (the trained model).
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step11: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step12: Import libraries and define constants
Define constants used in this tutorial.
Step13: Import packages used in this tutorial.
Step14: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step15: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard
Step16: Copy the Swivel template
First, download the provided Swivel template and configuration script.
Step17: Set your pipeline configurations
Change your pipeline configurations
Step18: Both swivel_pipeline_basic.json and swivel_pipeline.json are generated.
Create the Swivel job for MovieLens items embeddings
You will submit the pipeline job by passing the compiled specification to a PipelineJob and calling its submit() method. Note that you are passing a parameter_values dictionary that specifies the pipeline input parameters to use.
The following table shows the runtime parameters required by the Swivel job
Step19: Copy the dataset to Cloud Storage
Next, copy the MovieLens dataset to your Cloud Storage bucket.
Step20: Submit the pipeline job
Next, submit the pipeline job to Vertex AI Pipelines.
Step21: After the job is submitted successfully, you can view its details (including run name that you'll need below) and logs.
Use TensorBoard to check the model
You may use the TensorBoard to check the model training process. In order to do that, you need to find the path to the trained model artifact. After the job finishes successfully (~ a few hours), you can view the trained model output path in the Vertex ML Metadata browser. It is going to have the following format
Step22: When the training starts, you can view the logs in TensorBoard
Step23: For Vertex AI Workbench Notebooks, you can do the following
Step24: Deploy the model to Vertex AI Endpoint
Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions
Step25: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has defined for it the deployment container image. To deploy, you specify the following additional configuration settings
Step26: Load the movie ids and titles for querying embeddings
Next, download the movie IDs and titles sample dataset and read them into a pandas Dataframe.
Step27: Creating embeddings
Now that you have deployed the encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data.
Make an online prediction with SDK
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed model using Vertex AI SDK for Python.
The input data you want predicted embeddings on should be provided as a list of movie IDs. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
Step28: Make an online prediction request for embeddings
Next, make the prediction request using the predict() method.
Step29: Explore the movie embedding
Using the embeddings, explore how similar each movie in the sample movie list is to each of the other movies in the list.
Note
Step30: You can use the TensorBoard Embedding Projector to graphically represent high dimensional embeddings, which can be helpful in examining and understanding your embeddings.
Make an online prediction with gcloud
You can also do online prediction using the gcloud CLI.
Step31: Make a batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
Create the batch input file
Next, you generate the batch input file for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 200000 unique identifiers (1...200000). You will use the trained encoder to generate a predicted embedding for each unique identifier.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
Step32: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters
Step33: Get the predicted embeddings
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format
Step34: Save the embeddings in JSONL format
Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as
Step35: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
Step36: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported
Step37: Setup VPC peering network
To use a Matching Engine Index, you setup a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT
Step38: Create the VPC connection
Next, create the connection for VPC peering.
Note
Step39: Check the status of your peering connections.
Step40: Construct the full network name
You need to have the full network resource name when you subsequently create a Matching Engine Index Endpoint resource for VPC peering.
Step41: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters
Step42: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters
Step43: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct a vector query using synthetic data, to use as the example to return matches for.
Next, you make the matching request using the method match(), with the following parameters
Step44: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade pip -q
! pip3 install {USER_FLAG} --upgrade scikit-learn -q
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile -q
! pip3 install {USER_FLAG} --upgrade tensorflow -q
Explanation: Notebook is a revised version of notebook from Amy Wu
E2E ML on GCP: MLOps stage 6 : serving: get started with Vertex AI Matching Engine and Swivel builtin algorithm
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_swivel.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/ocommunity/ml_ops/stage6/get_started_with_matching_engine_swivel.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_swivel.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Run in Vertex Workbench
</a>
</td>
</table>
Overview
This notebook demonstrate how to train an embedding with Submatrix-wise Vector Embedding Learner (Swivel) using Vertex AI Pipelines. The purpose of the embedding learner is to compute cooccurrences between tokens in a given dataset and to use the cooccurrences to generate embeddings.
Vertex AI provides a pipeline template for training with Swivel, so you don't need to design your own pipeline or write
your own training code.
Dataset
This tutorial uses the movielens sample dataset in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/swivel, which was generated from the MovieLens movie rating dataset. This dataset is processed so that each line contains the movies that have same rating by the same user. The directory also includes movies.csv, which maps the movie ids to their names.
Objective
In this notebook, you will learn how to train custom embeddings using Vertex AI Pipelines and subsequently train and deploy a matching engine index using the embeddings.
The steps performed include:
Train the Swivel algorithm to generate embeddings (encoder) for the dataset.
Make example predictions (embeddings) from the trained encoder.
Generate embeddings using the trained Swivel builtin algorithm.
Store embeddings in a format supported by Matching Engine.
Create a Matching Engine Index for the embeddings.
Deploy the Matching Engine Index to an Index Endpoint.
Make a matching engine prediction request.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Dataflow
Cloud Storage
Learn about Vertex AI
pricing, Cloud Storage
pricing, and Dataflow pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install packages required for executing this notebook.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Dataflow API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
shell_output = ! gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)
Explanation: Get your project number
Now that the project ID is set, you get your corresponding project number.
End of explanation
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a built-in Swivel job using the Cloud SDK, you need a Cloud Storage bucket for storing the input dataset and pipeline artifacts (the trained model).
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}[email protected]"
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
!gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
!gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
SOURCE_DATA_PATH = "{}/swivel".format(BUCKET_URI)
PIPELINE_ROOT = "{}/pipeline_root".format(BUCKET_URI)
Explanation: Import libraries and define constants
Define constants used in this tutorial.
End of explanation
import json
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from sklearn.metrics.pairwise import cosine_similarity
Explanation: Import packages used in this tutorial.
End of explanation
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
! gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/pipeline/* .
Explanation: Copy the Swivel template
First, download the provided Swivel template and configuration script.
End of explanation
YOUR_PIPELINE_SUFFIX = "swivel-pipeline-movie" # @param {type:"string"}
MACHINE_TYPE = "n1-standard-16" # @param {type:"string"}
ACCELERATOR_COUNT = 2 # @param {type:"integer"}
ACCELERATOR_TYPE = "NVIDIA_TESLA_V100" # @param {type:"string"}
! chmod +x swivel_template_configuration*
! ./swivel_template_configuration_basic.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_number {PROJECT_NUMBER} -project_id {PROJECT_ID} -machine_type {MACHINE_TYPE} -accelerator_count {ACCELERATOR_COUNT} -accelerator_type {ACCELERATOR_TYPE} -pipeline_root {BUCKET_NAME}
! ./swivel_template_configuration.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_number {PROJECT_NUMBER} -project_id {PROJECT_ID} -machine_type {MACHINE_TYPE} -accelerator_count {ACCELERATOR_COUNT} -accelerator_type {ACCELERATOR_TYPE} -pipeline_root {BUCKET_NAME}
! sed "s/\\t/ /g" swivel_pipeline_basic.json > tmp.json
! mv tmp.json swivel_pipeline_basic.json
Explanation: Set your pipeline configurations
Change your pipeline configurations:
pipeline_suffix: Suffix of your pipeline name (lowercase and hyphen are allowed).
machine_type: e.g. n1-standard-16.
accelerator_count: Number of GPUs in each machine.
accelerator_type: e.g. NVIDIA_TESLA_P100, NVIDIA_TESLA_V100.
region: e.g. us-east1 (optional, default is us-central1)
network_name: e.g., my_network_name (optional, otherwise it uses "default" network).
VPC Network peering, subnetwork and private IP address configuration
Executing the following cell will generate two files:
1. swivel_pipeline_basic.json: The basic template allows public IPs and default network for the Dataflow job, and doesn't require setting up VPC Network peering for Vertex AI and you will use it in this notebook sample.
1. swivel_pipeline.json: This template enables private IPs and subnet configuration for the Dataflow job, also requires setting up VPC Network peering for the Vertex custom training. This template includes the following args:
* "--subnetwork=regions/%REGION%/subnetworks/%NETWORK_NAME%",
* "--no_use_public_ips",
* \"network\": \"projects/%PROJECT_NUMBER%/global/networks/%NETWORK_NAME%\"
WARNING In order to specify private IPs and configure VPC network, you need to set up VPC Network peering for Vertex AI for your subnetwork (e.g. "default" network on "us-central1") before submitting the following job. This is required for using private IP addresses for DataFlow and Vertex AI.
End of explanation
# MovieLens items embedding sample
PARAMETER_VALUES = {
"embedding_dim": 100, # <---CHANGE THIS (OPTIONAL)
"input_base": "{}/movielens_25m/train".format(SOURCE_DATA_PATH),
"input_type": "items", # For movielens sample
"max_vocab_size": 409600, # <---CHANGE THIS (OPTIONAL)
"num_epochs": 5, # <---CHANGE THIS (OPTIONAL)
}
Explanation: Both swivel_pipeline_basic.json and swivel_pipeline.json are generated.
Create the Swivel job for MovieLens items embeddings
You will submit the pipeline job by passing the compiled specification to a PipelineJob and calling its submit() method. Note that you are passing a parameter_values dictionary that specifies the pipeline input parameters to use.
The following table shows the runtime parameters required by the Swivel job:
| Parameter |Data type | Description | Required |
|----------------------------|----------|--------------------------------------------------------------------|------------------------|
| embedding_dim | int | Dimensions of the embeddings to train. | No - Default is 100 |
| input_base | string | Cloud Storage path where the input data is stored. | Yes |
| input_type | string | Type of the input data. Can be either 'text' (for wikipedia sample) or 'items'(for movielens sample). | Yes |
| max_vocab_size | int | Maximum vocabulary size to generate embeddings for. | No - Default is 409600 |
|num_epochs | int | Number of epochs for training. | No - Default is 20 |
In short, the items input type means that each line of your input data should be space-separated item ids. Each line is tokenized by splitting on whitespace. The text input type means that each line of your input data should be equivalent to a sentence. Each line is tokenized by lowercasing, and splitting on whitespace.
End of explanation
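To make the "items" input format described above concrete, here is a small illustrative sketch of what a shard of such input looks like; the file name and ids are made up for illustration.
# Each line lists whitespace-separated item ids (movie ids) that co-occur
sample_lines = [
    "1 32 260 1196",
    "296 318 356",
    "260 1196 1210",
]
with open("items_format_example.txt", "w") as f:
    f.write("\n".join(sample_lines) + "\n")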
# Copy the MovieLens sample dataset
! gsutil cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/train/* {SOURCE_DATA_PATH}/movielens_25m/train
Explanation: Copy the dataset to Cloud Storage
Next, copy the MovieLens dataset to your Cloud Storage bucket.
End of explanation
# Instantiate PipelineJob object
pl = aiplatform.PipelineJob(
display_name=YOUR_PIPELINE_SUFFIX,
# Whether or not to enable caching
# True = always cache pipeline step result
# False = never cache pipeline step result
# None = defer to cache option for each pipeline component in the pipeline definition
enable_caching=False,
# Local or GCS path to a compiled pipeline definition
template_path="swivel_pipeline_basic.json",
# Dictionary containing input parameters for your pipeline
parameter_values=PARAMETER_VALUES,
# GCS path to act as the pipeline root
pipeline_root=PIPELINE_ROOT,
)
# Submit the Pipeline to Vertex AI
# Optionally you may specify the service account below: submit(service_account=SERVICE_ACCOUNT)
# You must have iam.serviceAccounts.actAs permission on the service account to use it
pl.submit()
Explanation: Submit the pipeline job
Next, submit the pipeline job to Vertex AI Pipelines.
End of explanation
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/models/movielens/model {SOURCE_DATA_PATH}/movielens_model
SAVEDMODEL_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/model")
LOGS_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/tensorboard")
Explanation: After the job is submitted successfully, you can view its details (including run name that you'll need below) and logs.
Use TensorBoard to check the model
You may use the TensorBoard to check the model training process. In order to do that, you need to find the path to the trained model artifact. After the job finishes successfully (~ a few hours), you can view the trained model output path in the Vertex ML Metadata browser. It is going to have the following format:
{BUCKET_URI}/pipeline_root/{PROJECT_NUMBER}/swivel-{TIMESTAMP}/EmbTrainerComponent_-{SOME_NUMBER}/model/
You may copy this path for the MODELOUTPUT_DIR below.
Alternatively, you can download a pretrained model to {SOURCE_DATA_PATH}/movielens_model and proceed. This pretrained model is for demo purpose and not optimized for production usage.
End of explanation
# If on Vertex AI Workbench Notebooks, then don't execute this code.
if not os.getenv("DL_ANACONDA_HOME"):
if "google.colab" in sys.modules:
# Load the TensorBoard notebook extension.
%load_ext tensorboard
# If on Vertex AI Workbench Notebooks, then don't execute this code.
if not os.getenv("DL_ANACONDA_HOME"):
if "google.colab" in sys.modules:
%tensorboard --logdir $LOGS_DIR
Explanation: When the training starts, you can view the logs in TensorBoard:
End of explanation
DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-4:latest"
# Upload the trained model to Model resource
model = aiplatform.Model.upload(
display_name="movies_" + TIMESTAMP,
artifact_uri=SAVEDMODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
)
Explanation: For Vertex AI Workbench Notebooks, you can do the following:
Open Cloud Shell from the Google Cloud Console.
Install dependencies: pip3 install tensorflow tensorboard-plugin-profile
Run the following command: tensorboard --logdir {LOGS_DIR}. You will see a message "TensorBoard 2.x.0 at http://localhost:<PORT>/ (Press CTRL+C to quit)" as the output. Take note of the port number.
You can click on the Web Preview button and view the TensorBoard dashboard and profiling results. You need to configure Web Preview's port to be the same port as you receive from step 3.
Upload the model to Vertex AI Model resource
First, import the model using the upload() method, with the following parameters:
display_name: A human readable name for the model resource.
artifact_uri: The Cloud Storage location of the model artifacts.
serving_container_image_uri: The deployment container. In this tutorial, you use a prebuilt TensorFlow prediction container.
End of explanation
endpoint = aiplatform.Endpoint.create(display_name="swivel_embedding_" + TIMESTAMP)
Explanation: Deploy the model to Vertex AI Endpoint
Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions:
Create an Endpoint resource exposing an external interface to users consuming the model.
After the Endpoint is ready, deploy one or more instances of a model to the Endpoint. The deployed model runs the Swivel encoder to serve embeddings.
Refer to Vertex AI Predictions guide to Deploy a model using the Vertex AI API for more information about the APIs used in the following cells.
Create a Vertex AI Endpoint
Next, you create the Vertex AI Endpoint, from which you subsequently deploy your Vertex AI Model resource to.
End of explanation
response = endpoint.deploy(
model=model,
deployed_model_display_name="movies_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
traffic_split={"0": 100},
)
print(endpoint)
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has defined for it the deployment container image. To deploy, you specify the following additional configuration settings:
The machine type.
The (if any) type and number of GPUs.
Static, manual or auto-scaling of VM instances.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource.
deployed_model_display_name: The human readable name for the deployed model instance.
machine_type: The machine type for each VM instance.
Due to the requirements to provision the resource, this may take up to a few minutes.
End of explanation
! gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/movies.csv ./movies.csv
movies = pd.read_csv("movies.csv")
print(f"Movie count: {len(movies.index)}")
movies.head()
Explanation: Load the movie ids and titles for querying embeddings
Next, download the movie IDs and titles sample dataset and read them into a pandas Dataframe.
End of explanation
# Change to your favourite movies.
query_movies = [
"Lion King, The (1994)",
"Aladdin (1992)",
"Star Wars: Episode IV - A New Hope (1977)",
"Star Wars: Episode VI - Return of the Jedi (1983)",
"Terminator 2: Judgment Day (1991)",
"Aliens (1986)",
"Godfather, The (1972)",
"Goodfellas (1990)",
]
def get_movie_id(title):
return list(movies[movies.title == title].movieId)[0]
instances = [str(get_movie_id(title)) for title in query_movies]
print(instances)
Explanation: Creating embeddings
Now that you have deployed the encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data.
Make an online prediction with SDK
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed model using Vertex AI SDK for Python.
The input data you want predicted embeddings on should be provided as a list of movie IDs. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
End of explanation
predictions = endpoint.predict(instances=instances)
embeddings = predictions.predictions
print("Number of embeddings:", len(embeddings))
print(embeddings[0])
Explanation: Make an online prediction request for embeddings
Next, make the prediction request using the predict() method.
End of explanation
for idx1 in range(0, len(instances) - 1, 2):
item1 = instances[idx1]
title1 = query_movies[idx1]
print(title1)
print("==================")
embedding1 = embeddings[idx1]
for idx2 in range(0, len(instances)):
item2 = instances[idx2]
embedding2 = embeddings[idx2]
similarity = round(cosine_similarity([embedding1], [embedding2])[0][0], 5)
title1 = query_movies[idx1]
title2 = query_movies[idx2]
print(f" - Similarity to '{title2}' = {similarity}")
print()
Explanation: Explore the movie embedding
Using the embeddings, explore how similar each movie in the sample movie list is to each of the other movies in the list.
Note: A value of 1.0 means they are the same embedding.
End of explanation
import json
request = json.dumps({"instances": instances})
with open("request.json", "w") as writer:
writer.write(f"{request}\n")
ENDPOINT_ID = endpoint.resource_name
! gcloud ai endpoints predict {ENDPOINT_ID} \
--region={REGION} \
--json-request=request.json
Explanation: You can use the TensorBoard Embedding Projector to graphically represent high dimensional embeddings, which can be helpful in examining and understanding your embeddings.
Make an online prediction with gcloud
You can also do online prediction using the gcloud CLI.
End of explanation
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"
import tensorflow as tf
with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
for i in range(1, 200001):
query = str(i)
f.write(json.dumps(query) + "\n")
print("\nNumber of embeddings: ")
! gsutil cat {QUERY_EMBEDDING_PATH} | wc -l
! gsutil cat {QUERY_EMBEDDING_PATH} | head
Explanation: Make a batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
Create the batch input file
Next, you generate the batch input file for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 200000 unique identifiers (1...200000). You will use the trained encoder to generate a predicted embedding for each unique identifier.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
End of explanation
MIN_NODES = 1
MAX_NODES = 4
batch_predict_job = model.batch_predict(
job_display_name=f"batch_predict_swivel",
gcs_source=[QUERY_EMBEDDING_PATH],
gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for batch prediction.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, up to four instances can be provisioned.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request. In this tutorial, MIN_NODES is set to 1 and MAX_NODES to 4, so the service can scale out to additional nodes if the workload requires it.
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
End of explanation
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
result_files = []
for prediction_result in prediction_results:
result_file = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
result_files.append(result_file)
print(result_files)
Explanation: Get the predicted embeddings
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
End of explanation
embeddings = []
for result_file in result_files:
with tf.io.gfile.GFile(result_file, "r") as f:
instances = list(f)
for instance in instances:
instance = instance.replace('\\"', "'")
result = json.loads(instance)
prediction = result["prediction"]
key = result["instance"]
embedding = {"id": key, "embedding": prediction}
embeddings.append(embedding)
print("Number of embeddings", len(embeddings))
print("Encoding Dimensions", len(embeddings[0]["embedding"]))
print("Example embedding", embeddings[0])
with open("embeddings.json", "w") as f:
for i in range(len(embeddings)):
f.write(json.dumps(embeddings[i]).replace('"', "'"))
f.write("\n")
! head -n 2 embeddings.json
Explanation: Save the embeddings in JSONL format
Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as:
{ 'id': .., 'embedding': [ ... ] }
The format of the embeddings for the index can be in either CSV, JSON, or Avro format.
Learn more about Embedding Formats for Indexing
End of explanation
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/swivel/"
! gsutil cp embeddings.json {EMBEDDINGS_URI}
Explanation: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
End of explanation
DIMENSIONS = len(embeddings[0]["embedding"])
DISPLAY_NAME = "movies"
tree_ah_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_URI,
dimensions=DIMENSIONS,
approximate_neighbors_count=50,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Swivel generated embeddings",
labels={"label_name": "label_value"},
# TreeAH specific parameters
leaf_node_embedding_count=100,
leaf_nodes_to_search_percent=7,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
Explanation: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported:
create_tree_ah_index(): Shallow tree + Asymmetric hashing.
create_brute_force_index(): Linear search.
In this tutorial, you use the create_tree_ah_index() for production scale. The method is called with the following parameters:
display_name: A human readable name for the index.
contents_delta_uri: A Cloud Storage location for the embeddings, which are either to be inserted, updated or deleted.
dimensions: The number of dimensions of the input vector
approximate_neighbors_count: (for Tree AH) The default number of neighbors to find via approximate search before exact reordering is performed. Exact reordering is a procedure where results returned by an approximate search algorithm are reordered via a more expensive distance computation.
distance_measure_type: The distance measure used in nearest neighbor search.
SQUARED_L2_DISTANCE: Euclidean (L2) Distance
L1_DISTANCE: Manhattan (L1) Distance
COSINE_DISTANCE: Cosine Distance. Defined as 1 - cosine similarity.
DOT_PRODUCT_DISTANCE: Default value. Defined as a negative of the dot product.
description: A human readble description of the index.
labels: User metadata in the form of a dictionary.
leaf_node_embedding_count: Number of embeddings on each leaf node. The default value is 1000 if not set.
leaf_nodes_to_search_percent: The default percentage of leaf nodes that any query may be searched. Must be in range 1-100, inclusive. The default value is 10 (means 10%) if not set.
This may take up to 30 minutes.
Learn more about Configuring Matching Engine Indexes.
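For comparison, a brute-force index (often used as a ground-truth baseline when measuring the recall of a tree-AH index) could be created with a sketch like the following; the parameters mirror the tree-AH call above and the exact values are illustrative only:
brute_force_index = aiplatform.MatchingEngineIndex.create_brute_force_index(
    display_name=DISPLAY_NAME + "_brute_force",
    contents_delta_uri=EMBEDDINGS_URI,
    dimensions=DIMENSIONS,
    distance_measure_type="DOT_PRODUCT_DISTANCE",
    description="Brute force index used as a recall baseline",
)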
End of explanation
# This is for display only; you can name the range anything.
PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range"
NETWORK = "default"
# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
! gcloud compute addresses create $PEERING_RANGE_NAME \
--global \
--prefix-length=16 \
--description="peering range for Google service" \
--network=$NETWORK \
--purpose=VPC_PEERING
Explanation: Setup VPC peering network
To use a Matching Engine Index, you setup a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT: you can only set up one VPC peering to servicenetworking.googleapis.com per project.
Create VPC peering for default network
For simplicity, we set up VPC peering to the default network. You can create a different network for your project.
If you set up VPC peering with any other network, make sure that the network already exists and that your VM is running on that network.
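To double-check that the network exists before creating the peering, you could list it first (a small sketch):
! gcloud compute networks list --filter="name=$NETWORK"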
End of explanation
! gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=$NETWORK \
--ranges=$PEERING_RANGE_NAME \
--project=$PROJECT_ID
Explanation: Create the VPC connection
Next, create the connection for VPC peering.
Note: If you get a PERMISSION DENIED error, you may not have the necessary role 'Compute Network Admin' set for your default service account. In the Cloud Console, do the following steps (alternatively, grant the role from the command line as sketched after this list).
Goto IAM & Admin
Find your service account.
Click edit icon.
Select Add Another Role.
Enter 'Compute Network Admin'.
Select Save
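The same role could also be granted from the command line with a sketch like this (the service account email is a placeholder you need to replace):
! gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" \
    --role="roles/compute.networkAdmin"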
End of explanation
! gcloud compute networks peerings list --network $NETWORK
Explanation: Check the status of your peering connections.
End of explanation
full_network_name = f"projects/{PROJECT_NUMBER}/global/networks/{NETWORK}"
Explanation: Construct the full network name
You need to have the full network resource name when you subsequently create a Matching Engine Index Endpoint resource for VPC peering.
End of explanation
index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name="index_endpoint_for_demo",
description="index endpoint description",
network=full_network_name,
)
INDEX_ENDPOINT_NAME = index_endpoint.resource_name
print(INDEX_ENDPOINT_NAME)
Explanation: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters:
display_name: A human readable name for the Index Endpoint.
description: A description for the Index Endpoint.
network: The VPC network resource name.
End of explanation
DEPLOYED_INDEX_ID = "tree_ah_twotower_deployed_" + TIMESTAMP
MIN_NODES = 1
MAX_NODES = 2
DEPLOY_COMPUTE = "n1-standard-16"
index_endpoint.deploy_index(
display_name="deployed_index_for_demo",
index=tree_ah_index,
deployed_index_id=DEPLOYED_INDEX_ID,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
print(index_endpoint.deployed_indexes)
Explanation: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters:
display_name: A human readable name for the deployed index.
index: Your index.
deployed_index_id: A user assigned identifier for the deployed index.
machine_type: (optional) The VM instance type.
min_replica_count: (optional) Minimum number of VM instances for auto-scaling.
max_replica_count: (optional) Maximum number of VM instances for auto-scaling.
Learn more about Machine resources for Index Endpoint
End of explanation
# The number of nearest neighbors to be retrieved from database for each query.
NUM_NEIGHBOURS = 10
# Test query
queries = [embeddings[0]["embedding"], embeddings[1]["embedding"]]
matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID, queries=queries, num_neighbors=NUM_NEIGHBOURS
)
for instance in matches:
print("INSTANCE")
for match in instance:
print(match)
Explanation: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct example queries from two of the embeddings you generated above, to use as the examples to return matches for.
Next, you make the matching request using the method match(), with the following parameters:
deployed_index_id: The identifier of the deployed index.
queries: A list of queries (instances).
num_neighbors: The number of closest matches to return.
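Each element of the returned list corresponds to one query. A rough sketch for inspecting the matched IDs and distances of the first query (assuming the returned match objects expose id and distance attributes) could look like this:
for neighbor in matches[0]:
    print(neighbor.id, neighbor.distance)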
End of explanation
# Delete endpoint resource
endpoint.delete(force=True)
# Delete model resource
model.delete()
# Force undeployment of indexes and delete endpoint
try:
index_endpoint.delete(force=True)
except Exception as e:
print(e)
# Delete indexes
try:
tree_ah_index.delete()
except Exception as e:
print(e)
# Delete Cloud Storage objects that were created
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $OUTPUT_DIR
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
7,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
Return phi(x, t) for a soliton wave with constants c and a.
sol = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2
return sol
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t is a NumPy array, in which case it should return a NumPy array itself.
End of explanation
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
phi = np.zeros((xpoints, tpoints), dtype=float)
for i in range(xpoints):
for j in range(tpoints):
phi[i, j] = soliton(x[i], t[j], c, a)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
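Because soliton already works element-wise on NumPy arrays, the same array can also be filled without explicit Python loops by broadcasting over a grid; this is only an equivalent sketch of the loop above:
X, T = np.meshgrid(x, t, indexing='ij')  # X and T have shape (xpoints, tpoints)
phi_vectorized = soliton(X, T, c, a)
assert phi_vectorized.shape == (xpoints, tpoints)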
End of explanation
def plot_soliton_data(i=0):
Plot the soliton data at t[i] versus x.
fig = plt.figure(figsize=(10,7))
plt.plot(x, soliton(x, t[i], c, a))
plt.title('Soliton wave at t = {:.2f} vs. x'.format(t[i]))
plt.tick_params(right=False, top=False, axis='both', direction='out', length=7)
plt.ylabel('Soliton Wave')
plt.xlabel('x')
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_soliton_data, i=(0, tpoints - 1))
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
7,225 | Given the following text description, write Python code to implement the functionality described.
Description:
Remove all nodes which don't lie in any path with sum >= k
A utility function to create a new Binary Tree node with given data; print the tree in LVR (inorder traversal) order; main function which truncates the binary tree; base case; initialize left and right sums as the sum from the root to this node (including this node); recursively prune the left and right subtrees; get the maximum of the left and right sums; if the maximum is smaller than k, then this node must be deleted; a wrapper over pruneUtil(); driver code; k is 45
| Python Code:
class newNode:
def __init__(self, data):
self.data = data
self.left = self.right = None
def Print(root):
if (root != None):
Print(root.left)
print(root.data, end=" ")
Print(root.right)
def pruneUtil(root, k, Sum):
if (root == None):
return None
lSum = [Sum[0] + root.data]
rSum = [lSum[0]]
root.left = pruneUtil(root.left, k, lSum)
root.right = pruneUtil(root.right, k, rSum)
Sum[0] = max(lSum[0], rSum[0])
if (Sum[0] < k[0]):
root = None
return root
def prune(root, k):
Sum = [0]
return pruneUtil(root, k, Sum)
if __name__ == '__main__':
k = [45]
root = newNode(1)
root.left = newNode(2)
root.right = newNode(3)
root.left.left = newNode(4)
root.left.right = newNode(5)
root.right.left = newNode(6)
root.right.right = newNode(7)
root.left.left.left = newNode(8)
root.left.left.right = newNode(9)
root.left.right.left = newNode(12)
root.right.right.left = newNode(10)
root.right.right.left.right = newNode(11)
root.left.left.right.left = newNode(13)
root.left.left.right.right = newNode(14)
root.left.left.right.right.left = newNode(15)
print("Tree before truncation")
Print(root)
print()
root = prune(root, k)
print("Tree after truncation")
Print(root)
|
7,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A deep neural network for aspect-based sentiment analysis
Tomek Korbak
24 May 2016
Problem
Train a classifier that, given a Polish sentence as input, returns its sentiment target, i.e. the word that expresses an opinion.
Step1: The data come from a manually annotated treebank (a syntactically annotated corpus) developed by the Linguistic Engineering Group at IPI PAN on the basis of the National Corpus of Polish (Wawer, 2015).
The treebank contains about 1431 sentences. (That is rather little.)
Step2: Network architecture
The key role is played by two layers, each of which is a fully fledged neural network in its own right
Step3: Word embeddings
A word embedding is a kind of statistical language model that represents words (less often complex phrases or whole documents) as points in an n-dimensional linear space.
A highly desirable property of this mapping is that the geometric relations between points of this space mirror the semantic relations between the encoded words.
Such a model is trained on very large corpora by making it recognize word co-occurrence patterns. The most commonly used algorithm is Word2Vec, developed by Google (Mikolov et al., 2013a)
We used a ready-made embedding prepared by IPI PAN, trained (with Word2Vec) on the whole of the Polish Wikipedia and a balanced 300-million-word subcorpus of the National Corpus of Polish.
Step4: Does the difference between the vectors for "Paryż" (Paris) and "Francja" (France) represent the concept CAPITAL?
<img src="capitals.png">
<center>A projection (by principal component analysis) of a 1000-dimensional word embedding for English. Reprinted from (Mikolov et al., 2013).</center>
The LSTM layer
Long Short-Term Memory is a very popular recurrent neural network architecture, often used for labelling or predicting time series (Hochreiter and Schmidhuber, 1997).
Thanks to its recurrent connections, an LSTM maintains a kind of working memory that it can update at every iteration.
Its ability to remember long-term dependencies, such as grammatical agreement, makes it the most popular architecture for natural language processing.
Processing sequential data with a recurrent neural network
At the n-th iteration the network receives as input the n-th element of the training sequence and a vector remembered from the (n-1)-th iteration.
<img src="RNN-unrolled.png"> <br />
<center><small>Reprinted from http
Step5: Training
<img src="plot.png">
Step6: Evaluating accuracy
Accuracy was evaluated on a 143-sentence subset of the treebank that was not used during training.
Step7: This value is heavily overestimated because of the imbalanced class frequencies (1
Step8: It does not look impressive, but...
* Random guessing would give an accuracy of 0.0625% (= $1/40^2$),
* The maximum achievable score is around 80%, because that is the average agreement between human annotators (Ogneva, 2012).
Plans for the future
Improving prediction accuracy
Adding a second (and further) LSTM layer should let the network build more complex, hierarchical representations of dependencies within sentences
Extending the training of the network by running the computations on a GPU or a faster machine
Extending the problem to training other classifiers | Python Code:
import json
from itertools import chain
from pprint import pprint
from time import time
import os
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from gensim.models import Word2Vec
from gensim.corpora.dictionary import Dictionary
os.environ['THEANO_FLAGS'] = "device=gpu1"
import theano
# theano.config.device = 'gpu' # Compute using GPU
# theano.config.floatX = 'float32'
from keras.preprocessing import sequence
from keras.models import Sequential, Model
from keras.layers import Input
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Dropout
from keras.layers.wrappers import TimeDistributed
from keras.utils.visualize_util import plot
np.random.seed(1337)
print theano.config.device
def indices_to_one_hot_encodings(index, vector_length):
return [[1, 0] if i == index else [0, 1] for i in xrange(vector_length)]
# Load and process treebank data
treebank_file1 = open('json/OPTA-treebank-0.1.json')
treebank_file2 = open('skladnica_output.json')
treebank = chain(list(json.load(treebank_file1)), list(json.load(treebank_file2)))
X = []
y = []
labels = []
for entry in treebank:
tree = entry['parsedSent']
words = []
sentiment = None
for index, node in enumerate(tree):
word = node.split('\t')[1].lower()
words.append(word)
if node.split('\t')[10] == 'S':
sentiment = index
if sentiment:
labels.append(words[sentiment])
X.append(words)
y.append(indices_to_one_hot_encodings(sentiment, len(words)))
dataset_length = len(X)
slicing_point = int(dataset_length*0.9)
X_train_raw = X[:slicing_point]
y_train_raw = y[:slicing_point]
X_test_raw = X[slicing_point+1:]
y_test_raw = y[slicing_point+1:]
treebank_vocabulary = set(chain(*X))
print len(treebank_vocabulary)
X_train = X_train_raw
y_train = labels
len(X_train) + len(X_test_raw)
# Examples from the training data:
for index in [2, 44, 111, 384, 69]:
print ' '.join(X_train[index]), '\n', y_train[index], '\n'
Explanation: A deep neural network for aspect-based sentiment analysis
Tomek Korbak
24 May 2016
Problem
Train a classifier that, given a Polish sentence as input, returns its sentiment target, i.e. the word that expresses an opinion.
End of explanation
w2v_model = Word2Vec.load('w2v_allwiki_nkjp300_200.model')
# Import w2v's dictionary to a bag-of-words model
w2v_vocabulary = Dictionary()
w2v_vocabulary.doc2bow(w2v_model.vocab.keys(), allow_update=True)
print w2v_vocabulary.items()[:10]
# Initialize dicts for representing w2v's dictionary as indices and 200-dim vectors
w2indx = {v: k+1 for k, v in w2v_vocabulary.items()}
w2vec = {word: w2v_model[word] for word in w2indx.keys()}
w2v_vocabulary_size = len(w2indx) + 1
w2v_vocabulary_dimension = len(w2vec.values()[0])
def map_treebank_words_to_w2v_indices(treebank_data, w2indx):
treebank_data_vec = []
for sentence in treebank_data:
vectorized_sentence = []
for word in sentence:
try:
vectorized_sentence.append(w2indx[word])
except KeyError: # words absent in w2v model will be indexed as 0s
vectorized_sentence.append(0)
treebank_data_vec.append(vectorized_sentence)
return treebank_data_vec
X_train = map_treebank_words_to_w2v_indices(X_train_raw, w2indx)
X_test = map_treebank_words_to_w2v_indices(X_test_raw, w2indx)
print X_test[4]
# Define numpy weights matrix for embedding layer
embedding_weights = np.zeros((w2v_vocabulary_size , w2v_vocabulary_dimension))
for word, index in w2indx.items():
embedding_weights[index, :] = w2vec[word]
# max sentence length
max(
len(max(X_train, key=lambda sentence: len(sentence))),
len(max(X_test, key=lambda sentence: len(sentence)))
)
# Normalize sequences length to 40 (will be extended with 0s)
sentence_length = 40
X_train = sequence.pad_sequences(X_train, maxlen=sentence_length)
X_test = sequence.pad_sequences(X_test, maxlen=sentence_length)
y_train = sequence.pad_sequences(y_train_raw, maxlen=sentence_length, value=[0, 1])
y_test = sequence.pad_sequences(y_test_raw, maxlen=sentence_length, value=[0, 1])
# print X_train[2]
# print y_train[2]
inputs = Input(shape=(sentence_length,), dtype='int32')
x = Embedding(
input_dim=w2v_vocabulary_size,
output_dim=w2v_vocabulary_dimension,
input_length=sentence_length,
mask_zero=True,
weights=[embedding_weights]
)(inputs)
lstm_out = LSTM(200, return_sequences=True)(x)
regularized_data = Dropout(0.3)(lstm_out)
predictions = TimeDistributed(Dense(2, activation='sigmoid'))(regularized_data)
model = Model(input=inputs, output=predictions)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
Explanation: The data come from a manually annotated treebank (a syntactically annotated corpus) developed by the Linguistic Engineering Group at IPI PAN on the basis of the National Corpus of Polish (Wawer, 2015).
The treebank contains about 1431 sentences. (That is rather little.)
End of explanation
model.summary()
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Network architecture
The key role is played by two layers, each of which is a fully fledged neural network in its own right:
* An embedding layer, which maps words to vectors of floating-point numbers in a way that satisfies certain criteria
* An LSTM layer, a recurrent network that is particularly good at processing time series
End of explanation
# in the model we used, words are represented as
# 200-element vectors of 32-bit floating-point numbers
w2v_model['filozofia']
w2v_model['filozofia'].shape
w2v_model.similarity(u'filozofia', u'inżynieria')
w2v_model.similarity(u'filozofia', u'nauka')
w2v_model.similarity(u'filozofia', u'literatura')
# pick the word that does not match the others
w2v_model.doesnt_match(['Kant', 'Leibniz', 'Derrida', 'Wittgenstein'])
# kobieta + król - mężczyzna = królowa (woman + king - man = queen)
# The well-publicized example from (Mikolov et al., 2013b)
w2v_model.most_similar(positive=[u'kobieta', u'król'], negative=[u'mężczyzna'])
# Paryż - Francja + Polska = Warszawa (Paris - France + Poland = Warsaw)
w2v_model.most_similar(positive=[u'Paryż', u'Polska'], negative=[u'Francja'])
# filozofia - logika = literatura (philosophy - logic = literature)
w2v_model.most_similar(positive=[u'filozofia',], negative=[u'logika'])
# filozofia - postmodernizm = wiedza (philosophy - postmodernism = knowledge)
w2v_model.most_similar(positive=[u'filozofia',], negative=[u'postmodernizm'])
Explanation: Word embeddings
A word embedding is a kind of statistical language model that represents words (less often complex phrases or whole documents) as points in an n-dimensional linear space.
A highly desirable property of this mapping is that the geometric relations between points of this space mirror the semantic relations between the encoded words.
Such a model is trained on very large corpora by making it recognize word co-occurrence patterns. The most commonly used algorithm is Word2Vec, developed by Google (Mikolov et al., 2013a)
We used a ready-made embedding prepared by IPI PAN, trained (with Word2Vec) on the whole of the Polish Wikipedia and a balanced 300-million-word subcorpus of the National Corpus of Polish.
End of explanation
batch_size = 5
n_epoch = 5
hist = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=n_epoch,
validation_data=(X_test, y_test), verbose=2)
# epochs = 10
# for i in range(epochs):
# print('Epoch', i, '/', epochs)
# model.fit
Explanation: Does the difference between the vectors for "Paryż" (Paris) and "Francja" (France) represent the concept CAPITAL?
<img src="capitals.png">
<center>A projection (by principal component analysis) of a 1000-dimensional word embedding for English. Reprinted from (Mikolov et al., 2013).</center>
The LSTM layer
Long Short-Term Memory is a very popular recurrent neural network architecture, often used for labelling or predicting time series (Hochreiter and Schmidhuber, 1997).
Thanks to its recurrent connections, an LSTM maintains a kind of working memory that it can update at every iteration.
Its ability to remember long-term dependencies, such as grammatical agreement, makes it the most popular architecture for natural language processing.
Processing sequential data with a recurrent neural network
At the n-th iteration the network receives as input the n-th element of the training sequence and a vector remembered from the (n-1)-th iteration.
<img src="RNN-unrolled.png"> <br />
<center><small>Reprinted from http://colah.github.io/posts/2015-08-Understanding-LSTMs/</small></center>
At every iteration the network decides which information to remove from its working memory and which to add to it. The rules for updating the working memory (as a matrix of connection weights) are also learned.
<img src="LSTM3-chain.png"> <br />
<center><small>Reprinted from http://colah.github.io/posts/2015-08-Understanding-LSTMs/</small></center>
Data flow through the network
<img align="right" src="model.png">
Example | Description
--- | ---
'Kotek' | token
89762 | index of the token in the w2v model
array([ 0.21601944, ..., dtype=float32) | 200-element vector
...further vectors... | subsequent processing stages
[0.9111, 0.0999] | a binary probability distribution over the classes sentiment-word vs. non-sentiment-word
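The shapes at the input and output of the network can be checked directly; a small sketch using the model and the padded test data defined above:
sample = X_test[:1]            # shape (1, 40): token indices for one sentence
probs = model.predict(sample)  # shape (1, 40, 2): per-token class probabilities
print(sample.shape)
print(probs.shape)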
End of explanation
# plt.rcParams['figure.figsize'] = (10,10)
# axes = plt.gca()
# x_min = hist.epoch[0]
# x_max = hist.epoch[-1]+1
# axes.set_xlim([x_min,x_max])
# plt.scatter(hist.epoch, hist.history['acc'], color='r')
# plt.plot(hist.history['acc'], color='r', label=u'Trafność mierzona na zbiorze treningowym')
# plt.scatter(hist.epoch, hist.history['val_acc'], color='c')
# plt.plot(hist.history['val_acc'], color='c', label=u'Trafność mierzona na zbiorze walidacyjnym')
# plt.xlabel('epoki')
# plt.ylabel(u'Trafność')
# plt.title(u'Trafność w kolejnych epokach')
# plt.legend()
# plt.show()
Explanation: Training
<img src="plot.png">
End of explanation
# Fraction of correctly classified tokens
score, acc = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
print 'Test accuracy:', acc
predictions = model.predict(X_test, verbose=1)
def change_encoding_word(word):
return 1 if list(np.rint(word)) == [1, 0] else 0
def change_encoding(one_hot_encoded_sentence):
# Switch from ndarray([[0.88, 0.11], [0.34, 0.98]]) encoding to [1, 0] encoding
# and finally index number
normalized_sentence = []
for word in one_hot_encoded_sentence:
normalized_sentence.append(change_encoding_word(word))
return normalized_sentence
def accurately_evaluated_samples():
total_accuracy = 0
for n, sentence in enumerate(predictions):
index_of_sentiment = np.argmax(change_encoding(sentence))
# print change_encoding_word(y_test[n][index_of_sentiment])
total_accuracy += change_encoding_word(y_test[n][index_of_sentiment])
return total_accuracy
Explanation: Evaluating accuracy
Accuracy was evaluated on a 143-sentence subset of the treebank that was not used during training.
End of explanation
# Fraction of sentiment tokens that were correctly recognized as sentiment tokens
float(accurately_evaluated_samples())/y_test.shape[0]
Explanation: This value is heavily overestimated because of the imbalanced class frequencies (1:39 for sentiment-word vs. non-sentiment-word)
A more adequate metric
End of explanation
hist.history
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
Explanation: It does not look impressive, but...
* Random guessing would give an accuracy of 0.0625% (= $1/40^2$),
* The maximum achievable score is around 80%, because that is the average agreement between human annotators (Ogneva, 2012).
Plans for the future
Improving prediction accuracy
Adding a second (and further) LSTM layer should let the network build more complex, hierarchical representations of dependencies within sentences
Extending the training of the network by running the computations on a GPU or a faster machine
Extending the problem to training other classifiers:
Extracting the objects whose aspects the sentiment is assigned to
Classifying the sentiment (positive or negative) for each <object, sentiment> pair
Source code
The repository is available at: https://github.com/tomekkorbak/lstm-for-aspect-based-sentiment-analysis
Software used
Keras (a high-level wrapper around Theano)
Theano (implementation and training of the network)
Scikit-learn (validation)
Gensim (the Word2Vec language model)
Numpy (auxiliary numerical computations)
Bibliography
Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory, "Neural Computation", 9 (8), pp. 1735-1780.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013a). Distributed Representations of Words and Phrases and their Compositionality. "Advances in Neural Information Processing Systems 26", pp. 3111-3119.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013b). Efficient estimation of word representations in vector space.
Ogneva, M. (2012). How Companies Can Use Sentiment Analysis to Improve Their Business
Wawer, A. (2015). Towards Domain-Independent Opinion Target Extraction. "IEEE 15th International Conference on Data Mining Workshops", pp. 1326-1331.
<center>Thank you for your attention</center>
End of explanation |
7,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recursive least squares
Recursive least squares is an expanding window version of ordinary least squares. In addition to the availability of regression coefficients computed recursively, the recursively computed residuals allow the construction of statistics to investigate parameter instability.
The RecursiveLS class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
Finally, the RecursiveLS model allows imposing linear restrictions on the parameter vectors, and can be constructed using the formula interface.
Step1: Example 1
Step2: First, construct and fit the model, and print a summary. Although the RLS model computes the regression parameters recursively, so there are as many estimates as there are datapoints, the summary table only presents the regression parameters estimated on the entire sample; except for small effects from initialization of the recursions, these estimates are equivalent to OLS estimates.
Step3: The recursive coefficients are available in the recursive_coefficients attribute. Alternatively, plots can be generated using the plot_recursive_coefficient method.
Step4: The CUSUM statistic is available in the cusum attribute, but usually it is more convenient to visually check for parameter stability using the plot_cusum method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
Step5: Another related statistic is the CUSUM of squares. It is available in the cusum_squares attribute, but it is similarly more convenient to check it visually, using the plot_cusum_squares method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
Step6: Example 2
Step7: After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for the first part of the sample, after 1990 they appear to diverge.
Step8: The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
Step9: Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
Step10: Example 3
Step11: Formula
One could fit the same model using the class method from_formula. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
np.set_printoptions(suppress=True)
Explanation: Recursive least squares
Recursive least squares is an expanding window version of ordinary least squares. In addition to the availability of regression coefficients computed recursively, the recursively computed residuals allow the construction of statistics to investigate parameter instability.
The RecursiveLS class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
Finally, the RecursiveLS model allows imposing linear restrictions on the parameter vectors, and can be constructed using the formula interface.
End of explanation
print(sm.datasets.copper.DESCRLONG)
dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range('1951-01-01', '1975-01-01', freq='AS')
endog = dta['WORLDCONSUMPTION']
# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])
Explanation: Example 1: Copper
We first consider parameter stability in the copper dataset (description below).
End of explanation
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
Explanation: First, construct and fit the model, and print a summary. Although the RLS model computes the regression parameters recursively, so there are as many estimates as there are datapoints, the summary table only presents the regression parameters estimated on the entire sample; except for small effects from initialization of the recursions, these estimates are equivalent to OLS estimates.
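As a quick sanity check, the full-sample recursive estimates can be compared against plain OLS on the same data (a sketch using the endog and exog defined above):
ols_res = sm.OLS(endog, exog).fit()
print(ols_res.params)
print(res.params)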
End of explanation
print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10,6));
Explanation: The recursive coefficients are available in the recursive_coefficients attribute. Alternatively, plots can be generated using the plot_recursive_coefficient method.
End of explanation
print(res.cusum)
fig = res.plot_cusum();
Explanation: The CUSUM statistic is available in the cusum attribute, but usually it is more convenient to visually check for parameter stability using the plot_cusum method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
res.plot_cusum_squares();
Explanation: Another related statistic is the CUSUM of squares. It is available in the cusum_squares attribute, but it is similarly more convenient to check it visually, using the plot_cusum_squares method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
start = '1959-12-01'
end = '2015-01-01'
m2 = DataReader('M2SL', 'fred', start=start, end=end)
cpi = DataReader('CPIAUCSL', 'fred', start=start, end=end)
def ewma(series, beta, n_window):
nobs = len(series)
scalar = (1 - beta) / (1 + beta)
ma = []
k = np.arange(n_window, 0, -1)
weights = np.r_[beta**k, 1, beta**k[::-1]]
for t in range(n_window, nobs - n_window):
window = series.iloc[t - n_window:t + n_window+1].values
ma.append(scalar * np.sum(weights * window))
return pd.Series(ma, name=series.name, index=series.iloc[n_window:-n_window].index)
m2_ewma = ewma(np.log(m2['M2SL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)
cpi_ewma = ewma(np.log(cpi['CPIAUCSL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)
Explanation: Example 2: Quantity theory of money
The quantity theory of money suggests that "a given change in the rate of change in the quantity of money induces ... an equal change in the rate of price inflation" (Lucas, 1980). Following Lucas, we examine the relationship between double-sided exponentially weighted moving averages of money growth and CPI inflation. Although Lucas found the relationship between these variables to be stable, more recently it appears that the relationship is unstable; see e.g. Sargent and Surico (2010).
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
ax.plot(m2_ewma, label='M2 Growth (EWMA)')
ax.plot(cpi_ewma, label='CPI Inflation (EWMA)')
ax.legend();
endog = cpi_ewma
exog = sm.add_constant(m2_ewma)
exog.columns = ['const', 'M2']
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
res.plot_recursive_coefficient(1, alpha=None);
Explanation: After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for the first part of the sample, after 1990 they appear to diverge.
End of explanation
res.plot_cusum();
Explanation: The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
res.plot_cusum_squares();
Explanation: Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
endog = dta['WORLDCONSUMPTION']
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])
mod = sm.RecursiveLS(endog, exog, constraints='COPPERPRICE = ALUMPRICE')
res = mod.fit()
print(res.summary())
Explanation: Example 3: Linear restrictions and formulas
Linear restrictions
It is not hard to implement linear restrictions, using the constraints parameter in constructing the model.
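If more than one restriction is needed, the constraints argument can also be given several constraint strings. The sketch below assumes that a list of strings is accepted by your statsmodels version; the second restriction is arbitrary and purely illustrative:
mod_multi = sm.RecursiveLS(endog, exog,
                           constraints=['COPPERPRICE = ALUMPRICE', 'INVENTORYINDEX = 0'])
res_multi = mod_multi.fit()
print(res_multi.summary())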
End of explanation
mod = sm.RecursiveLS.from_formula(
'WORLDCONSUMPTION ~ COPPERPRICE + INCOMEINDEX + ALUMPRICE + INVENTORYINDEX', dta,
constraints='COPPERPRICE = ALUMPRICE')
res = mod.fit()
print(res.summary())
Explanation: Formula
One could fit the same model using the class method from_formula.
End of explanation |
7,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Introduction
There are multiple reasons for analyzing a version control system like your Git repository. See for example Adam Tornhill's book "Your Code as a Crime Scene" or his upcoming book "Software Design X-Rays" for plenty of inspiration
Step1: With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
We use a little trick to make sure, that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions, deletions and the affected file name in one line, separated by the tabulator character <tt>\t</tt>
Step2: We now read in the complete files' history in the <tt>git_log</tt> variable. Don't let all the <tt>\t</tt> characters confuse you.
Let's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. Because we can't provide a file path to CSV data, we have to use StringIO to read in our in-memory buffered content.
Pandas will read the first line of the tabular-separated "file", see the many tabular-separated columns, and parse all other lines in the same format / column layout. Additionally, we set the <tt>header</tt> to <tt>None</tt> because we don't have one and provide nice names for all the columns that we read in.
Step3: We got two different kinds of content for the rows
Step4: And drop all the commit metadata rows that don't contain file statistics.
Step5: We are finished! This is it.
In summary, you'll need a "one-liner" for converting a Git log file output that was exported with
git log --numstat --pretty=format
Step6: Bonus section
We can now convert some columns to their correct data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code respectively. But there are also a few exceptions for binary files like images. We skip these lines with the <tt>errors='coerce'</tt> option. This will lead to <tt>NaN</tt> in those rows, which will be dropped after the conversion.
The <tt>timestamp</tt> column is a UNIX timestamp, i.e. the seconds elapsed since January 1st 1970, which we can easily convert with Pandas' <tt>to_datetime</tt> method. | Python Code:
import git
GIT_REPO_PATH = r'../../spring-petclinic/'
repo = git.Repo(GIT_REPO_PATH)
git_bin = repo.git
git_bin
Explanation: Introduction
Introduction
There are multiple reasons for analyzing a version control system like your Git repository. See for example Adam Tornhill's book "Your Code as a Crime Scene" or his upcoming book "Software Design X-Rays" for plenty of inspiration:
You can
- analyze knowledge islands
- distinguish often changing code from stable code parts
- identify code that is temporal coupled
Having the necessary data for those analyses in a Pandas <tt>DataFrame</tt> gives you many many possibilities to quickly gain insights about the evolution of your software system.
The idea
In another blog post I showed you a way to read in Git log data with Pandas's DataFrame and GitPython. Looking back, this was really complicated and tedious. So with a few tricks we can do it much better this time:
We use GitPython's feature to directly access an underlying Git installation. This is much faster than using GitPython's object representation of the repository and makes it possible to have everything we need in one notebook.
We use in-memory reading by using StringIO to avoid unnecessary file access. This avoids storing the Git output on disk and reading it back from disk again, which makes this method much faster.
We also exploit Pandas's <tt>read_csv</tt> method even more. This makes the transformation of the Git log into a <tt>DataFrame</tt> as easy as pie.
Reading the Git log
The first step is to connect GitPython with the Git repo. If we have an instance of the repo, we can gain access to the underlying Git installation of the operating system via <tt>repo.git</tt>.
In this case, again, we tap the Spring Pet Clinic project, a small sample application for the Spring framework.
End of explanation
git_log = git_bin.execute('git log --numstat --pretty=format:"\t\t\t%h\t%at\t%aN"')
git_log[:80]
Explanation: With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
We use a little trick to make sure, that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions, deletions and the affected file name in one line, separated by the tabulator character <tt>\t</tt>:
<p>
<tt>1<b>\t</b>1<b>\t</b>some/file/name.ext</tt>
</p>
We use the same tabular separator <tt>\t</tt> for the format string:
<p>
<tt>%h<b>\t</b>%at<b>\t</b>%aN</tt>
</p>
And here is the trick: Additionally, we add the amount of tabulators of the file's statistics plus an additional tabulator in front of the format string to pretend that there are empty file statistics' information in front of the format string.
The result looks like this:
<p>
<tt>\t\t\t%h\t%at\t%aN</tt>
</p>
Note: If you want to export the Git log on the command line into a file to read that file later, you need to use the tabulator character <tt>%x09</tt> as separator instead of <tt>\t</tt> in the format string. Otherwise, the trick doesn't work.
OK, let's first execute the Git log export:
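Such an export could look like this on the command line (the same format string is used in the bonus section at the end of this notebook):
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git.log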
End of explanation
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author']
)
commits_raw.head()
Explanation: We now read in the complete files' history in the <tt>git_log</tt> variable. Don't let all the <tt>\t</tt> characters confuse you.
Let's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. Because we can't provide a file path to CSV data, we have to use StringIO to read in our in-memory buffered content.
Pandas will read the first line of the tabular-separated "file", sees the many tabular-separated columns, and parses all other lines in the same format / column layout. Additionally, we set the <tt>header</tt> to <tt>None</tt> because we don't have one and provide nice names for all the columns that we read in.
End of explanation
commits = commits_raw.fillna(method='ffill')
commits.head()
Explanation: We got two different kinds of content for the rows:
For every other row, we got some statistics about the modified files:
<pre>
2 0 src/main/asciidoc/appendices/bibliography.adoc
</pre>
It contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.
The last steps are easy. We fill all the empty file statistics rows with the commit's metadata.
End of explanation
commits = commits.dropna()
commits.head()
Explanation: And drop all the commit metadata rows that don't contain file statistics.
End of explanation
pd.read_csv("../../spring-petclinic/git.log",
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author']).fillna(method='ffill').dropna().head()
Explanation: We are finished! This is it.
In summary, you'll need a "one-liner" for converting a Git log file output that was exported with
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git.log
into a <tt>DataFrame</tt>:
End of explanation
commits['additions'] = pd.to_numeric(commits['additions'], errors='coerce')
commits['deletions'] = pd.to_numeric(commits['deletions'], errors='coerce')
commits = commits.dropna()
commits['timestamp'] = pd.to_datetime(commits['timestamp'], unit="s")
commits.head()
%matplotlib inline
commits[commits['filename'].str.endswith(".java")]\
.groupby('filename')\
.count()['additions']\
.hist()
Explanation: Bonus section
We can now convert some columns to their correct data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code respectively. But there are also a few exceptions for binary files like images. We skip these lines with the <tt>errors='coerce'</tt> option. This will lead to <tt>NaN</tt> in those rows, which will be dropped after the conversion.
The <tt>timestamp</tt> column is a UNIX timestamp, i.e. the seconds elapsed since January 1st 1970, which we can easily convert with Pandas' <tt>to_datetime</tt> method.
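With the cleaned-up DataFrame, further questions, such as which files change most often, become one-liners as well; a small sketch:
commits.groupby('filename')['sha'].count().nlargest(5)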
End of explanation |
7,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 0
Step1: Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down.
<a name="kernel"></a>
Kernel
Note the numbers on each of the cells inside the brackets, after "running" the cell. These denote the current execution count of your python "kernel". Think of the kernel as another machine within your computer that understands Python and interprets what you write as code into executions that the processor can understand.
<a name="importing-libraries"></a>
Importing Libraries
When you launch a new notebook, your kernel is a blank state. It only knows standard python syntax. Everything else is contained in additional python libraries that you have to explicitly "import" like so
Step2: After executing this cell, your kernel will have access to everything inside the os library which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include.
<a name="loading-data"></a>
Loading Data
Let's now move onto something more practical. We'll learn how to see what files are in a directory, and load any images inside that directory into a variable.
<a name="structuring-data-as-folders"></a>
Structuring data as folders
With Deep Learning, we'll always need a dataset, or a collection of data. A lot of it. We're going to create our dataset by putting a bunch of images inside a directory. Then, whenever we want to load the dataset, we will tell python to find all the images inside the directory and load them. Python lets us very easily crawl through a directory and grab each file. Let's have a look at how to do this.
<a name="using-the-os-library-to-get-data"></a>
Using the os library to get data
We'll practice with a very large dataset called Celeb Net. This dataset has about 200,000 images of celebrities. The researchers also provide a version of the dataset which has every single face cropped and aligned so that each face is in the middle! We'll be using this aligned dataset. To read more about the dataset or to download it, follow the link here
Step3: Using the os package, we can list an entire directory. The documentation or docstring, says that listdir takes one parameter, path
Step4: This is the location of the directory we need to list. Let's save it to a variable so that we can easier inspect the directory of images we just downloaded
Step5: We can also specify to include only certain files like so
Step6: or even
Step7: We could also combine file types if we happened to have multiple types
Step8: Let's set this list to a variable, so we can perform further actions on it
Step9: And now we can index that list using the square brackets
Step10: We can even go in the reverse direction, which wraps around to the end of the list
Step11: <a name="loading-an-image"></a>
Loading an image
matplotlib is an incredibly powerful python library which will let us play with visualization and loading of image data. We can import it like so
Step12: Now we can refer to the entire module by just using plt instead of matplotlib.pyplot every time. This is pretty common practice.
We'll now tell matplotlib to "inline" plots using an ipython magic function
Step13: This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document.
Have a look at the library by checking the documentation with help(plt). Another option to get information is to write plt. and then press <tab>. This shows a dropdown of all available functions in this library
Step14: Selecting a function from the dropdown and adding a ? at the end will bring up the function's documentation.
plt contains a very useful function for loading images
Step15: Here we see that it actually returns a variable which requires us to use another library, NumPy. NumPy makes working with numerical data a lot easier. Let's import it as well
Step16: Let's try loading the first image in our dataset
Step17: plt.imread will not know where that file is. We can tell it where to find the file by using os.path.join
Step18: Now we get a bunch of numbers! I'd rather not have to keep prepending the path to my files, so I can create the list of files like so
Step19: Let's set this to a variable, img, and inspect a bit further what's going on
Step20: <a name="rgb-image-representation"></a>
RGB Image Representation
It turns out that all of these numbers are capable of describing an image. We can use the function imshow to see this
Step21: Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor
Step22: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list
Step23: We use the special colon operator to 'say take every value in this dimension'. This is saying, 'give me every row, every column, and the 0th dimension of the color channels'.
What we see now is a heatmap of our image corresponding to each color channel.
<a name="understanding-data-types-and-ranges-uint8-float32"></a>
Understanding data types and ranges (uint8, float32)
Let's take a look at the range of values of our image
Step24: The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off.
256 values is how much information we can stick into a byte. We measure a byte using bits, and each byte takes up 8 bits. Each bit can be either 0 or 1. When we stack up 8 bits, or 10000000 in binary, equivalent to 2 to the 8th power, we can express up to 256 possible values, giving us our range, 0 to 255. You can compute any number of bits using powers of two. 2 to the power of 8 is 256. How many values can you stick in 16 bits (2 bytes)? Or 32 bits (4 bytes) of information? Let's ask python
Step25: numpy arrays have a field which will tell us how many bits they are using
Step26: uint8
Step27: This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data types 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values!
<a name="visualizing-your-data-as-images"></a>
Visualizing your data as images
We've seen how to look at a single image. But what if we have hundreds, thousands, or millions of images? Is there a good way of knowing what our dataset looks like without looking at their file names, or opening up each image one at a time?
One way we can do that is to randomly pick an image.
We've already seen how to read the image located at one of our file locations
Step28: to pick a random image from our list of files, we can use the numpy random module
Step29: This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files.
We can now use the code we've written before to show a random image from our list of files
Step30: This might be something useful that we'd like to do often. So we can use a function to help us in the future
Step31: This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition
Step32: or simply
Step34: We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on.
<a name="image-manipulation"></a>
Image Manipulation
<a name="cropping-images"></a>
Cropping images
We're going to create another function which will help us crop the image to a standard size and help us draw every image in our list of files as a grid.
In many applications of deep learning, we will need all of our data to be the same size. For images this means we'll need to crop the images while trying not to remove any of the important information in it. Most image datasets that you'll find online will already have a standard size for every image. But if you're creating your own dataset, you'll need to know how to make all the images the same size. One way to do this is to find the longest edge of the image, and crop this edge to be as long as the shortest edge of the image. This will convert the image to a square one, meaning its sides will be the same lengths. The reason for doing this is that we can then resize this square image to any size we'd like, without distorting the image. Let's see how we can do that
Step35: There are a few things going on here. First, we are defining a function which takes as input a single variable. This variable gets named img inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of img are greater than the columns, then set the variable extra to their difference and divide by 2. The // notation means to perform an integer division, instead of a floating point division. So 3 // 2 = 1, not 1.5. We need integers for the next line of code which says to set the variable crop to img starting from extra rows, and ending at negative extra rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have extra = (128 - 96) // 2, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have.
Let's try another crop function which can crop by an arbitrary amount. It will take an image and a single factor from 0-1, saying how much of the original image to crop
Step36: <a name="resizing-images"></a>
Resizing images
For resizing the image, we'll make use of a python library, scipy. Let's import the function which we need like so
Step37: Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are.
Step38: The imresize function takes a input image as its first parameter, and a tuple defining the new image shape as rows and then columns.
Let's see how our cropped image can be imresized now
Step39: Great! To really see what's going on, let's turn off the interpolation like so
Step40: Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here.
We can combine the Red Green and Blue channels by taking the mean, or averaging them. This is equivalent to adding each channel, R + G + B, then dividing by the number of color channels, (R + G + B) / 3. We can use the numpy.mean function to help us do this
Step41: This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset.
<a name="croppingresizing-images"></a>
Cropping/Resizing Images
We now have functions for cropping an image to a square image, and a function for resizing an image to any desired size. With these tools, we can begin to create a dataset. We're going to loop over our 10 files, crop the image to a square to remove the longer edge, and then crop again to remove some of the background, and then finally resize the image to a standard size of 64 x 64 pixels.
Step42: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets
Step43: Since all of the images are the same size, we can make use of numpy's array instead of a list.
Remember that an image has a shape describing the height, width, channels
Step44: <a name="the-batch-dimension"></a>
The Batch Dimension
There is a convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape should be
Step45: We could also use the numpy.concatenate function, but we have to create a new dimension for each image. Numpy let's us do this by using a special variable np.newaxis | Python Code:
4*2
Explanation: Session 0: Preliminaries with Python/Notebook
<p class="lead">
Parag K. Mital<br />
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br />
<a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br />
<a href="https://twitter.com/hashtag/CADL">#CADL</a>
</p>
This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
<a name="learning-goals"></a>
Learning Goals
Install and run Jupyter Notebook with the Tensorflow library
Learn to create a dataset of images using os.listdir and plt.imread
Understand how images are represented when using float or uint8
Learn how to crop and resize images to a standard size.
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Using Notebook
Cells
Kernel
Importing Libraries
Loading Data
Structuring data as folders
Using the os library to get data
Loading an image
RGB Image Representation
Understanding data types and ranges (uint8, float32)
Visualizing your data as images
Image Manipulation
Cropping images
Resizing images
Cropping/Resizing Images
The Batch Dimension
Conclusion
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
This preliminary session will cover the basics of working with image data in Python, and creating an image dataset. Please make sure you are running at least Python 3.4 and have Tensorflow 0.9.0 or higher installed. If you are unsure of how to do this, please make sure you have followed the installation instructions. We'll also cover loading images from a directory, resizing and cropping images, and changing an image datatype from unsigned int to float32. If you feel comfortable with all of this, please feel free to skip straight to Session 1. Otherwise, launch jupyter notebook and make sure you are reading the session-0.ipynb file.
<a name="using-notebook"></a>
Using Notebook
Make sure you have launched jupyter notebook and are reading the session-0.ipynb file. If you are unsure of how to do this, please make sure you follow the installation instructions. This will allow you to interact with the contents and run the code using an interactive python kernel!
<a name="cells"></a>
Cells
After launching this notebook, try running/executing the next cell by pressing shift-enter on it.
End of explanation
import os
Explanation: Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down.
<a name="kernel"></a>
Kernel
Note the numbers on each of the cells inside the brackets, after "running" the cell. These denote the current execution count of your python "kernel". Think of the kernel as another machine within your computer that understands Python and interprets what you write as code into executions that the processor can understand.
<a name="importing-libraries"></a>
Importing Libraries
When you launch a new notebook, your kernel is a blank state. It only knows standard python syntax. Everything else is contained in additional python libraries that you have to explicitly "import" like so:
End of explanation
# Load the os library
import os
# Load the request module
import urllib.request
# Import SSL which we need to setup for talking to the HTTPS server
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Create a directory
os.mkdir('img_align_celeba')
# Now perform the following 10 times:
for img_i in range(1, 11):
# create a string using the current loop counter
f = '000%03d.jpg' % img_i
# and get the url with that string appended the end
url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f
# We'll print this out to the console so we can see how far we've gone
print(url, end='\r')
# And now download the url to a location inside our new directory
urllib.request.urlretrieve(url, os.path.join('img_align_celeba', f))
Explanation: After executing this cell, your kernel will have access to everything inside the os library, which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include.
<a name="loading-data"></a>
Loading Data
Let's now move onto something more practical. We'll learn how to see what files are in a directory, and load any images inside that directory into a variable.
<a name="structuring-data-as-folders"></a>
Structuring data as folders
With Deep Learning, we'll always need a dataset, or a collection of data. A lot of it. We're going to create our dataset by putting a bunch of images inside a directory. Then, whenever we want to load the dataset, we will tell python to find all the images inside the directory and load them. Python lets us very easily crawl through a directory and grab each file. Let's have a look at how to do this.
<a name="using-the-os-library-to-get-data"></a>
Using the os library to get data
We'll practice with a very large dataset called Celeb Net. This dataset has about 200,000 images of celebrities. The researchers also provide a version of the dataset which has every single face cropped and aligned so that each face is in the middle! We'll be using this aligned dataset. To read more about the dataset or to download it, follow the link here:
http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
For now, we're not going to be using the entire dataset but just a subset of it. Run the following cell which will download the first 10 images for you:
End of explanation
help(os.listdir)
Explanation: Using the os package, we can list an entire directory. The documentation, or docstring, says that listdir takes one parameter, path:
End of explanation
files = os.listdir('img_align_celeba')
Explanation: This is the location of the directory we need to list. Let's save it to a variable so that we can more easily inspect the directory of images we just downloaded:
End of explanation
[file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i]
Explanation: We can also specify to include only certain files like so:
End of explanation
[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i and '00000' in file_i]
Explanation: or even:
End of explanation
[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i or '.png' in file_i or '.jpeg' in file_i]
Explanation: We could also combine file types if we happened to have multiple types:
End of explanation
files = [file_i
for file_i in os.listdir('img_align_celeba')
if file_i.endswith('.jpg')]
Explanation: Let's set this list to a variable, so we can perform further actions on it:
End of explanation
print(files[0])
print(files[1])
Explanation: And now we can index that list using the square brackets:
End of explanation
print(files[-1])
print(files[-2])
Explanation: We can even go in the reverse direction, which wraps around to the end of the list:
End of explanation
import matplotlib.pyplot as plt
Explanation: <a name="loading-an-image"></a>
Loading an image
matplotlib is an incredibly powerful python library which will let us play with visualization and loading of image data. We can import it like so:
End of explanation
%matplotlib inline
Explanation: Now we can refer to the entire module by just using plt instead of matplotlib.pyplot every time. This is pretty common practice.
We'll now tell matplotlib to "inline" plots using an ipython magic function:
End of explanation
# uncomment the lines to try them
# help(plt)
# plt.<tab>
Explanation: This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document.
Have a look at the library by checking the documentation with help(plt). Another option to get information is to write plt. and then press <tab>. This shows a dropdown of all available functions in this library:
End of explanation
plt.imread?
Explanation: Selecting a function from the dropdown and adding a ? at the end will bring up the function's documentation.
plt contains a very useful function for loading images:
End of explanation
import numpy as np
# help(np)
# np.<tab>
Explanation: Here we see that it actually returns a variable which requires us to use another library, NumPy. NumPy makes working with numerical data a lot easier. Let's import it as well:
End of explanation
# img = plt.imread(files[0])
# outputs: FileNotFoundError
Explanation: Let's try loading the first image in our dataset:
We have a list of filenames, and we know where they are. But we need to combine the path to the file and the filename itself. If we try and do this:
End of explanation
print(os.path.join('img_align_celeba', files[0]))
plt.imread(os.path.join('img_align_celeba', files[0]))
Explanation: plt.imread will not know where that file is. We can tell it where to find the file by using os.path.join:
End of explanation
files = [os.path.join('img_align_celeba', file_i)
for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i]
Explanation: Now we get a bunch of numbers! I'd rather not have to keep prepending the path to my files, so I can create the list of files like so:
End of explanation
img = plt.imread(files[0])
# img.<tab>
Explanation: Let's set this to a variable, img, and inspect a bit further what's going on:
End of explanation
img = plt.imread(files[0])
plt.imshow(img)
Explanation: <a name="rgb-image-representation"></a>
RGB Image Representation
It turns out that all of these numbers are capable of describing an image. We can use the function imshow to see this:
End of explanation
img.shape
# outputs: (218, 178, 3)
Explanation: Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor:
End of explanation
plt.figure()
plt.imshow(img[:, :, 0])
plt.figure()
plt.imshow(img[:, :, 1])
plt.figure()
plt.imshow(img[:, :, 2])
Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list:
End of explanation
np.min(img), np.max(img)
Explanation: We use the special colon operator to say 'take every value in this dimension'. This is saying, 'give me every row, every column, and the 0th dimension of the color channels'.
What we see now is a heatmap of our image corresponding to each color channel.
<a name="understanding-data-types-and-ranges-uint8-float32"></a>
Understanding data types and ranges (uint8, float32)
Let's take a look at the range of values of our image:
End of explanation
2**32
Explanation: The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off.
256 values is how much information we can stick into a byte. We measure a byte using bits, and each byte takes up 8 bits. Each bit can be either 0 or 1. When we stack up 8 bits, we can express 2 to the 8th power, or 256, possible values, giving us our range, 0 to 255. You can compute the number of values for any number of bits using powers of two: 2 to the power of 8 is 256. How many values can you stick in 16 bits (2 bytes)? Or 32 bits (4 bytes) of information? Let's ask Python:
End of explanation
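As a quick check of the arithmetic above, the short aside below (not part of the original session) prints how many distinct values fit in 8, 16, and 32 bits, and confirms the uint8 range with numpy's iinfo helper, using the numpy module we imported earlier:
```python
# 2**n distinct values fit in n bits
for n_bits in (8, 16, 32):
    print(n_bits, 'bits ->', 2**n_bits, 'possible values')

# numpy can report the valid range of an integer dtype directly
info = np.iinfo(np.uint8)
print('uint8 range:', info.min, 'to', info.max)
```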
img.dtype
Explanation: numpy arrays have a field which will tell us how many bits they are using: dtype:
End of explanation
img.astype(np.float32)
Explanation: uint8: Let's decompose that: unsigned, int, 8. That means the values do not have a sign, meaning they are all positive. They are only integers, meaning no decimal places. And that they are all 8 bits.
Something which is 32-bits of information can express a single value with a range of nearly 4.3 billion different possibilities (2**32). We'll generally need to work with 32-bit data when working with neural networks. In order to do that, we can simply ask numpy for the correct data type:
End of explanation
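A common follow-up step (my own aside, assuming the img variable loaded above) is to both cast to float32 and rescale the 0-255 values into the 0-1 range, which is how image data is usually fed to a network:
```python
# Cast to 32-bit floats and rescale from [0, 255] to [0.0, 1.0]
img_float = img.astype(np.float32) / 255.0
print(img_float.dtype, img_float.min(), img_float.max())
```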
plt.imread(files[0])
Explanation: This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data type's 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values!
<a name="visualizing-your-data-as-images"></a>
Visualizing your data as images
We've seen how to look at a single image. But what if we have hundreds, thousands, or millions of images? Is there a good way of knowing what our dataset looks like without looking at their file names, or opening up each image one at a time?
One way we can do that is to randomly pick an image.
We've already seen how to read the image located at one of our file locations:
End of explanation
print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))
Explanation: To pick a random image from our list of files, we can use the numpy random module:
End of explanation
filename = files[np.random.randint(0, len(files))]
img = plt.imread(filename)
plt.imshow(img)
Explanation: This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files.
We can now use the code we've written before to show a random image from our list of files:
End of explanation
def plot_image(filename):
img = plt.imread(filename)
plt.imshow(img)
Explanation: This might be something useful that we'd like to do often. So we can use a function to help us in the future:
End of explanation
f = files[np.random.randint(0, len(files))]
plot_image(f)
Explanation: This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition:
End of explanation
plot_image(files[np.random.randint(0, len(files))])
Explanation: or simply:
End of explanation
def imcrop_tosquare(img):
Make any image a square image.
Parameters
----------
img : np.ndarray
Input image to crop, assumed at least 2d.
Returns
-------
crop : np.ndarray
Cropped image.
if img.shape[0] > img.shape[1]:
extra = (img.shape[0] - img.shape[1])
if extra % 2 == 0:
crop = img[extra // 2:-extra // 2, :]
else:
crop = img[max(0, extra // 2 + 1):min(-1, -(extra // 2)), :]
elif img.shape[1] > img.shape[0]:
extra = (img.shape[1] - img.shape[0])
if extra % 2 == 0:
crop = img[:, extra // 2:-extra // 2]
else:
crop = img[:, max(0, extra // 2 + 1):min(-1, -(extra // 2))]
else:
crop = img
return crop
Explanation: We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on.
<a name="image-manipulation"></a>
Image Manipulation
<a name="cropping-images"></a>
Cropping images
We're going to create another function which will help us crop the image to a standard size and help us draw every image in our list of files as a grid.
In many applications of deep learning, we will need all of our data to be the same size. For images this means we'll need to crop the images while trying not to remove any of the important information in it. Most image datasets that you'll find online will already have a standard size for every image. But if you're creating your own dataset, you'll need to know how to make all the images the same size. One way to do this is to find the longest edge of the image, and crop this edge to be as long as the shortest edge of the image. This will convert the image to a square one, meaning its sides will be the same lengths. The reason for doing this is that we can then resize this square image to any size we'd like, without distorting the image. Let's see how we can do that:
End of explanation
def imcrop(img, amt):
if amt <= 0 or amt >= 1:
return img
row_i = int(img.shape[0] * amt) // 2
col_i = int(img.shape[1] * amt) // 2
return img[row_i:-row_i, col_i:-col_i]
Explanation: There are a few things going on here. First, we are defining a function which takes as input a single variable. This variable gets named img inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of img are greater than the columns, then set the variable extra to their difference and divide by 2. The // notation means to perform an integer division, instead of a floating point division. So 3 // 2 = 1, not 1.5. We need integers for the next line of code which says to set the variable crop to img starting from extra rows, and ending at negative extra rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have extra = (128 - 96) // 2, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have.
Let's try another crop function which can crop by an arbitrary amount. It will take an image and a single factor from 0-1, saying how much of the original image to crop:
End of explanation
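As a small sanity check (an illustrative aside, not part of the original notebook), we can run imcrop_tosquare on a fake non-square array and confirm that the result is square:
```python
fake = np.zeros((128, 96, 3), dtype=np.uint8)   # taller than it is wide
squared = imcrop_tosquare(fake)
print(fake.shape, '->', squared.shape)          # expect (96, 96, 3)
```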
#from scipy.<tab>misc import <tab>imresize
Explanation: <a name="resizing-images"></a>
Resizing images
For resizing the image, we'll make use of a python library, scipy. Let's import the function which we need like so:
End of explanation
from scipy.misc import imresize
imresize?
Explanation: Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are.
End of explanation
square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
plt.imshow(rsz)
Explanation: The imresize function takes an input image as its first parameter, and a tuple defining the new image shape as rows and then columns.
Let's see how our cropped image can be imresized now:
End of explanation
plt.imshow(rsz, interpolation='nearest')
Explanation: Great! To really see what's going on, let's turn off the interpolation like so:
End of explanation
mean_img = np.mean(rsz, axis=2)
print(mean_img.shape)
plt.imshow(mean_img, cmap='gray')
Explanation: Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here.
We can combine the Red Green and Blue channels by taking the mean, or averaging them. This is equivalent to adding each channel, R + G + B, then dividing by the number of color channels, (R + G + B) / 3. We can use the numpy.mean function to help us do this:
End of explanation
imgs = []
for file_i in files:
img = plt.imread(file_i)
square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
imgs.append(rsz)
print(len(imgs))
Explanation: This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset.
<a name="croppingresizing-images"></a>
Cropping/Resizing Images
We now have functions for cropping an image to a square image, and a function for resizing an image to any desired size. With these tools, we can begin to create a dataset. We're going to loop over our 10 files, crop the image to a square to remove the longer edge, and then crop again to remove some of the background, and then finally resize the image to a standard size of 64 x 64 pixels.
End of explanation
plt.imshow(imgs[0])
Explanation: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets:
End of explanation
imgs[0].shape
Explanation: Since all of the images are the same size, we can make use of numpy's array instead of a list.
Remember that an image has a shape describing the height, width, channels:
End of explanation
data = np.array(imgs)
data.shape
Explanation: <a name="the-batch-dimension"></a>
The Batch Dimension
There is a convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape should be:
N x H x W x C
The Number of images, or the batch size, is first; then the Height or number of rows in the image; then the Width or number of cols in the image; then finally the number of channels the image has. A Color image should have 3 color channels, RGB. A Grayscale image should just have 1 channel.
We can combine all of our images to look like this in a few ways. The easiest way is to tell numpy to give us an array of all the images:
End of explanation
data = np.concatenate([img_i[np.newaxis] for img_i in imgs], axis=0)
data.shape
Explanation: We could also use the numpy.concatenate function, but we have to create a new dimension for each image. NumPy lets us do this by using a special variable, np.newaxis
End of explanation |
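To make the np.newaxis trick concrete, the short aside below (not part of the original session) shows how it adds the batch dimension to a single image, and how the batch axis of the data array we just built can then be averaged to get the mean image of the dataset:
```python
single = imgs[0]
print(single.shape, '->', single[np.newaxis].shape)   # e.g. (64, 64, 3) -> (1, 64, 64, 3)

# Averaging over axis 0 (the batch dimension) gives the mean image of the dataset
mean_of_dataset = np.mean(data, axis=0)
plt.imshow(mean_of_dataset.astype(np.uint8))
```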
7,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: Say $f(t) = t * exp(- t)$, and $F(s)$ is the Laplace transform of $f(t)$. Let us first evaluate this transform using sympy.
Step2: Suppose we are confronted with a dataset $F(s)$, but we need to know $f(t)$. This means an inverse Laplace transform has to be performed. However, numerically this operation is ill-defined. In order to solve this, Tikhonov regularization can be performed.
To demonstrate this, we first generate mock data corresponding to $F(s)$ and will then try to find (our secretly known) $f(t)$.
Step3: We will now invert this data, using the procedure outlined in \cite{}.
Step4: A CallableModel is needed because derivatives of matrix expressions sometimes cause problems.
Build required matrices, ignore s=0 because it causes a singularity.
Step5: Perform the fit
Step6: Check the quality of the reconstruction
Step7: Reconstruct $f(t)$ and compare with the known original. | Python Code:
from symfit import (
variables, parameters, Model, Fit, exp, laplace_transform, symbols,
MatrixSymbol, sqrt, Inverse, CallableModel
)
import numpy as np
import matplotlib.pyplot as plt
Explanation: Example: Matrix Equations using Tikhonov Regularization
This is an example of the use of matrix expressions in symfit models. This is illustrated by performing an inverse Laplace transform using Tikhonov regularization, but this could be adapted to other problems involving matrix quantities.
End of explanation
t, f, s, F = variables('t, f, s, F')
model = Model({f: t * exp(- t)})
laplace_model = Model(
{F: laplace_transform(model[f], t, s, noconds=True)}
)
print(laplace_model)
Explanation: Say $f(t) = t * exp(- t)$, and $F(s)$ is the Laplace transform of $f(t)$. Let us first evaluate this transform using sympy.
End of explanation
epsilon = 0.01 # 1 percent noise
s_data = np.linspace(0, 10, 101)
F_data = laplace_model(s=s_data).F
F_sigma = epsilon * F_data
np.random.seed(2)
F_data = np.random.normal(F_data, F_sigma)
plt.errorbar(s_data, F_data, yerr=F_sigma, fmt='none', label=r'$\mathcal{L}[f] = F(s)$')
plt.xlabel(r'$s_i$')
plt.ylabel(r'$F(s_i)$')
plt.xlim(0, None)
plt.legend()
Explanation: Suppose we are confronted with a dataset $F(s)$, but we need to know $f(t)$. This means an inverse Laplace transform has to be performed. However, numerically this operation is ill-defined. In order to solve this, Tikhonov regularization can be performed.
To demonstrate this, we first generate mock data corresponding to $F(s)$ and will then try to find (our secretly known) $f(t)$.
End of explanation
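For orientation (an aside; the empty citation above presumably points at the original reference), a generic Tikhonov-regularized least-squares problem has the form
$$ \min_{c} \; \lVert M c - F \rVert^2 + a^2 \lVert c \rVert^2 , $$
where $a$ is the regularization parameter. In the model built below, $a$ is left as the free fit parameter, and the noise level $\delta$ supplies the target value for the constraint variable $d$.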
N_s = symbols('N_s', integer=True) # Number of s_i points
M = MatrixSymbol('M', N_s, N_s)
W = MatrixSymbol('W', N_s, N_s)
Fs = MatrixSymbol('Fs', N_s, 1)
c = MatrixSymbol('c', N_s, 1)
d = MatrixSymbol('d', 1, 1)
I = MatrixSymbol('I', N_s, N_s)
a, = parameters('a')
model_dict = {
W: Inverse(I + M / a**2),
c: - W * Fs,
d: sqrt(c.T * c),
}
tikhonov_model = CallableModel(model_dict)
print(tikhonov_model)
Explanation: We will now invert this data, using the procedure outlined in \cite{}.
End of explanation
I_mat = np.eye(len(s_data[1:]))
s_i, s_j = np.meshgrid(s_data[1:], s_data[1:])
M_mat = 1 / (s_i + s_j)
delta = np.atleast_2d(np.linalg.norm(F_sigma))
print('d', delta)
Explanation: A CallableModel is needed because derivatives of matrix expressions sometimes cause problems.
Build required matrices, ignore s=0 because it causes a singularity.
End of explanation
model_data = {
I.name: I_mat,
M.name: M_mat,
Fs.name: F_data[1:],
}
all_data = dict(**model_data, **{d.name: delta})
fit = Fit(tikhonov_model, **all_data)
fit_result = fit.execute()
print(fit_result)
Explanation: Perform the fit
End of explanation
ans = tikhonov_model(**model_data, **fit_result.params)
F_re = - M_mat.dot(ans.c) / fit_result.value(a)**2
print(ans.c.shape, F_re.shape)
plt.errorbar(s_data, F_data, yerr=F_sigma, label=r'$F(s)$', fmt='none')
plt.plot(s_data[1:], F_re, label=r'$F_{re}(s)$')
plt.xlabel(r'$x$')
plt.xlabel(r'$F(s)$')
plt.xlim(0, None)
plt.legend()
Explanation: Check the quality of the reconstruction
End of explanation
t_data = np.linspace(0, 10, 101)
f_data = model(t=t_data).f
f_re_func = lambda x: - np.exp(- x * s_data[1:]).dot(ans.c) / fit_result.value(a)**2
f_re = [f_re_func(t_i) for t_i in t_data]
plt.axhline(0, color='black')
plt.axvline(0, color='black')
plt.plot(t_data, f_data, label=r'$f(t)$')
plt.plot(t_data, f_re, label=r'$f_{re}(t)$')
plt.xlabel(r'$t$')
plt.xlabel(r'$f(t)$')
plt.legend()
Explanation: Reconstruct $f(t)$ and compare with the known original.
End of explanation |
7,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Character level language model - Dinosaurus land
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width
Step1: 1 - Problem Statement
1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
Step2: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
Step3: 1.2 - Overview of the model
Your model will have the following structure
Step5: Expected output
Step7: Expected output
Step12: Expected output
Step13: Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
Step14: Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implemetation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name for quite some time, and so far our favoriate name is the great, undefeatable, and fierce
Step15: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt asking you for an input (<40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well. | Python Code:
import numpy as np
from utils import *
import random
from random import shuffle
Explanation: Character level language model - Dinosaurus land
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
How to store text data for processing using an RNN
How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
How to build a character-level text generation recurrent neural network
Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.
End of explanation
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
Explanation: 1 - Problem Statement
1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
End of explanation
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
Explanation: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
End of explanation
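Later parts of the assignment represent each character as a one-hot vector of length vocab_size. As an illustrative aside (not part of the graded code), this is what that encoding looks like with the dictionaries we just built:
```python
x = np.zeros((vocab_size, 1))
x[char_to_ix['a']] = 1      # one-hot column vector for the character 'a'
print(x.ravel())
```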
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
None
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
Explanation: 1.2 - Overview of the model
Your model will have the following structure:
Initialize parameters
Run the optimization loop
Forward propagation to compute the loss function
Backward propagation to compute the gradients with respect to the loss function
Clip the gradients to avoid exploding gradients
Using the gradients, update your parameter with the gradient descent update rule.
Return the learned parameters
<img src="images/rnn.png" style="width:450;height:300px;">
<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". </center></caption>
At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$.
2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
2.1 - Clipping the gradients in the optimization loop
In this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values.
In the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone.
<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. </center></caption>
Exercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. You will need to use the argument out = ....
End of explanation
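One possible way to fill in the clipping loop (a sketch of the idea, not necessarily the official grader solution) is to apply np.clip in place, exactly as the hint about out = ... suggests:
```python
# Inside clip(): clip each gradient array in place between -maxValue and maxValue
for gradient in [dWax, dWaa, dWya, db, dby]:
    np.clip(gradient, -maxValue, maxValue, out=gradient)
```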
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = None
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = None
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = None
z = None
y = None
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = None
# Append the index to "indices"
None
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = None
x[None] = None
# Update "a_prev" to be "a"
a_prev = None
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
Explanation: Expected output:
<table>
<tr>
<td>
**gradients["dWaa"][1][2] **
</td>
<td>
10.0
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]**
</td>
<td>
-10.0
</td>
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td>
0.29713815361
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td>
[ 10.]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>
[ 8.45833407]
</td>
</tr>
</table>
2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500;height:300px;">
<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>
Exercise: Implement the sample function below to sample characters. You need to carry out 4 steps:
Step 1: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$
Step 2: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a softmax() function that you can use.
Step 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use np.random.choice.
Here is an example of how to use np.random.choice():
python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
This means that you will pick the index according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
Step 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
End of explanation
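Putting equations (1)-(3) together, the body of the sampling loop could look roughly like the sketch below (a hedged illustration of the four steps using the variables defined inside sample(), not the graded solution verbatim):
```python
# Step 1: start from a zero input and a zero hidden state
x = np.zeros((vocab_size, 1))
a_prev = np.zeros((n_a, 1))

# Steps 2-4, repeated until a newline character is sampled:
a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)        # equation (1)
z = np.dot(Wya, a) + by                                      # equation (2)
y = softmax(z)                                               # equation (3)
idx = np.random.choice(np.arange(vocab_size), p=y.ravel())   # sample the next character's index
x = np.zeros((vocab_size, 1))                                # overwrite x with a one-hot of idx
x[idx] = 1
a_prev = a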
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = None
# Backpropagate through time (≈1 line)
gradients, a = None
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = None
# Update parameters (≈1 line)
parameters = None
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
Explanation: Expected output:
<table>
<tr>
<td>
**list of sampled indices:**
</td>
<td>
[18, 2, 26, 0]
</td>
</tr><tr>
<td>
**list of sampled characters:**
</td>
<td>
['r', 'b', 'z', '\n']
</td>
</tr>
</table>
3 - Building the language model
It is time to build the character-level language model for text generation.
3.1 - Gradient descent
In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:
Forward propagate through the RNN to compute the loss
Backward propagate through time to compute the gradients of the loss with respect to the parameters
Clip the gradients if necessary
Update your parameters using gradient descent
Exercise: Implement this optimization process (one step of stochastic gradient descent).
We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in the backpropagation.
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states.
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
Updates parameters using the Gradient Descent Update Rule.
...
return parameters
```
End of explanation
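With the helper functions above, the body of optimize reduces to four calls, roughly as sketched here (a plausible completion; the graded stub is intentionally left blank):
```python
# Forward pass, backward pass, clip, then update -- one step of stochastic gradient descent
loss, cache = rnn_forward(X, Y, a_prev, parameters)
gradients, a = rnn_backward(X, Y, parameters, cache)
gradients = clip(gradients, 5)                       # clip between -5 and 5 as instructed
parameters = update_parameters(parameters, gradients, learning_rate)
```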
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = None
X = None
Y = None
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = None
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
Explanation: Expected output:
<table>
<tr>
<td>
**Loss **
</td>
<td>
126.503975722
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]**
</td>
<td>
0.194709315347
</td>
<tr>
<td>
**np.argmax(gradients["dWax"])**
</td>
<td> 93
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td> -0.007773876032
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td> [-0.06809825]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>[ 0.01538192]
</td>
</tr>
<tr>
<td>
**a_last[4]**
</td>
<td> [-1.]
</td>
</tr>
</table>
3.2 - Training the model
Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
Exercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:
python
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
Note that we use: index= j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid statement (index is smaller than len(examples)).
The first entry of X being None will be interpreted by rnn_forward() as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
End of explanation
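To see what the hint above produces, here is a small illustrative trace (the name 'abc' is made up for the example; the index values in the comments assume the char_to_ix mapping built earlier, where '\n' maps to 0 and 'a' to 1):
```python
# Illustrative only: suppose examples[index] == 'abc'
name = 'abc'
X = [None] + [char_to_ix[ch] for ch in name]      # e.g. [None, 1, 2, 3]
Y = X[1:] + [char_to_ix['\n']]                    # e.g. [1, 2, 3, 0]
print(X, Y)
```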
parameters = model(data, ix_to_char, char_to_ix)
Explanation: Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
End of explanation
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
Explanation: Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">
4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
End of explanation
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
Explanation: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt asking you for an input (<40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
End of explanation |
7,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tensorflow Image Recognition Tutorial
This tutorial shows how we can use MLDB's TensorFlow integration to do image recognition. TensorFlow is Google's open source deep learning library.
We will load the Inception-v3 model to generate descriptive labels for an image. The Inception model is a deep convolutional neural network and was trained on the ImageNet Large Scale Visual Recognition Challenge dataset, where the task was to classify images into 1000 classes.
To offer context and a basis for comparison, this notebook is inspired by TensorFlow's Image Recognition tutorial.
Initializing pymldb and other imports
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
Step1: Loading a TensorFlow graph
To load a pre-trained TensorFlow graph in MLDB, we use the tensorflow.graph function type.
Below, we start by creating two functions. First, the fetcher function allows us to fetch a binary blob from a remote URL. Second, the inception function that will be used to execute the trained network and that we parameterize in the following way
Step2: Scoring an image
To demonstrate how to run the network on an image, we re-use the same image as in the Tensorflow tutorial, the picture of Admiral Grace Hopper
Step3: This is great! With only 3 REST calls we were able to run a deep neural network on an arbitrary image off the internet.
Inception as a real-time endpoint
Not only is this function available in SQL queries within MLDB, but, like all MLDB functions, it is also available as a REST endpoint. This means that when we created the inception function above, we essentially created a real-time API running the Inception model that any external service or device can call to get predictions back.
The following REST call demonstrates how this looks
Step4: Interpreting the prediction
Running the network gives us a 1008-dimensional vector. This is because the network was originally trained on the ImageNet categories and we created the inception function to return the softmax layer, which is the output of the model.
To allow us to interpret the predictions the network makes, we can import the ImageNet labels in an MLDB dataset like this
Step5: The contents of the dataset look like this
Step7: The labels line up with the softmax layer that we extract from the network. By joining the output of the network with the imagenet_labels dataset, we can essentially label the output of the network.
The following query scores the image just as before, but then transposes the output and then joins the result to the labels dataset. We then sort on the score to keep only the 10 highest values | Python Code:
from pymldb import Connection
mldb = Connection()
Explanation: Tensorflow Image Recognition Tutorial
This tutorial shows how we can use MLDB's TensorFlow integration to do image recognition. TensorFlow is Google's open source deep learning library.
We will load the Inception-v3 model to generate descriptive labels for an image. The Inception model is a deep convolutional neural network and was trained on the ImageNet Large Visual Recognition Challenge dataset, where the task was to classify images into 1000 classes.
To offer context and a basis for comparison, this notebook is inspired by TensorFlow's Image Recognition tutorial.
Initializing pymldb and other imports
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
End of explanation
inceptionUrl = 'file://mldb/mldb_test_data/models/inception_dec_2015.zip'
print mldb.put('/v1/functions/fetch', {
"type": 'fetcher',
"params": {}
})
print mldb.put('/v1/functions/inception', {
"type": 'tensorflow.graph',
"params": {
"modelFileUrl": 'archive+' + inceptionUrl + '#tensorflow_inception_graph.pb',
"inputs": 'fetch({url})[content] AS "DecodeJpeg/contents"',
"outputs": "softmax"
}
})
Explanation: Loading a TensorFlow graph
To load a pre-trained TensorFlow graph in MLDB, we use the tensorflow.graph function type.
Below, we start by creating two functions. First, the fetcher function allows us to fetch a binary blob from a remote URL. Second, the inception function that will be used to execute the trained network and that we parameterize in the following way:
modelFileUrl: Path to the Inception-v3 model file. The archive prefix and # separator allow us to load a file inside a zip archive. (more details)
input: As input to the graph, we provide the output of the fetch function called with the url parameter. When we call it later on, the image located at the specified URL will be downloaded and passed to the graph.
output: This specifies the layer from which to return the values. The softmax layer is the last layer in the network so we specify that one.
End of explanation
amazingGrace = "https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg"
mldb.query("SELECT inception({url: '%s'}) as *" % amazingGrace)
Explanation: Scoring an image
To demonstrate how to run the network on an image, we re-use the same image as in the Tensorflow tutorial, the picture of Admiral Grace Hopper:
<img src="https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg" width=350>
The following query applies the inception function on the URL of her picture:
End of explanation
result = mldb.get('/v1/functions/inception/application', input={"url": amazingGrace})
print result.url + '\n\n' + repr(result) + '\n'
import numpy as np
print "Shape:"
print np.array(result.json()["output"]["softmax"]["val"]).shape
Explanation: This is great! With only 3 REST calls we were able to run a deep neural network on an arbitrary image off the internet.
Inception as a real-time endpoint
Not only is this function available in SQL queries within MLDB, but, like all MLDB functions, it is also available as a REST endpoint. This means that when we created the inception function above, we essentially created a real-time API running the Inception model that any external service or device can call to get predictions back.
The following REST call demonstrates how this looks:
End of explanation
print mldb.put("/v1/procedures/imagenet_labels_importer", {
"type": "import.text",
"params": {
"dataFileUrl": 'archive+' + inceptionUrl + '#imagenet_comp_graph_label_strings.txt',
"outputDataset": {"id": "imagenet_labels", "type": "sparse.mutable"},
"headers": ["label"],
"named": "lineNumber() -1",
"offset": 1,
"runOnCreation": True
}
})
Explanation: Interpreting the prediction
Running the network gives us a 1008-dimensional vector. This is because the network was originally trained on the ImageNet categories and we created the inception function to return the softmax layer, which is the output of the model.
To allow us to interpret the predictions the network makes, we can import the ImageNet labels in an MLDB dataset like this:
End of explanation
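As an aside, outside of MLDB the same interpretation step is just a top-k lookup over the softmax vector. A small sketch, assuming probs is the 1008-dimensional vector returned above and labels is a parallel Python list of label strings (both assumptions, since here they live inside MLDB datasets):
import numpy as np
def top_k(probs, labels, k=5):
    # probs and labels are assumed to be available locally as a NumPy vector and a list
    idx = np.argsort(probs)[::-1][:k]          # indices of the k largest scores
    return [(labels[i], float(probs[i])) for i in idx]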
mldb.query("SELECT * FROM imagenet_labels LIMIT 5")
Explanation: The contents of the dataset look like this:
End of explanation
mldb.query(
SELECT scores.pred as score
NAMED imagenet_labels.label
FROM transpose(
(
SELECT flatten(inception({url: '%s'})[softmax]) as *
NAMED 'pred'
)
) AS scores
LEFT JOIN imagenet_labels ON
imagenet_labels.rowName() = scores.rowName()
ORDER BY score DESC
LIMIT 10
% amazingGrace)
Explanation: The labels line up with the softmax layer that we extract from the network. By joining the output of the network with the imagenet_labels dataset, we can essentially label the output of the network.
The following query scores the image just as before, but then transposes the output and then joins the result to the labels dataset. We then sort on the score to keep only the 10 highest values:
End of explanation |
7,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Object detection
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step4: Example use
Helper functions for downloading images and for visualization
Visualization code adapted from the TF object detection API, providing just the minimal functionality needed.
Step5: Apply the module
Load a public image from Open Images v4, save it locally, and display it.
Step6: オブジェクト検出モジュールを選択し、ダウンロードされた画像に適用します。モジュールのリストを示します。
FasterRCNN+InceptionResNet V2
Step7: More images
Run inference on additional images, with time tracking.
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Imports and function definitions
# For running inference on the TF-Hub module.
import tensorflow as tf
import tensorflow_hub as hub
# For downloading the image.
import matplotlib.pyplot as plt
import tempfile
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto the image.
import numpy as np
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time.
import time
# Print Tensorflow version
print(tf.__version__)
# Check available GPU devices.
print("The following GPU devices are available: %s" % tf.test.gpu_device_name())
Explanation: Object detection
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
<td> <a href="https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See the TF Hub model</a>
</td>
</table>
This Colab demonstrates the use of a TF-Hub module trained to perform object detection.
Setup
End of explanation
def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
def download_and_resize_image(url, new_width=256, new_height=256,
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename
def draw_bounding_box_on_image(image,
ymin,
xmin,
ymax,
xmax,
color,
font,
thickness=4,
display_str_list=()):
Adds a bounding box to an image.
draw = ImageDraw.Draw(image)
im_width, im_height = image.size
(left, right, top, bottom) = (xmin * im_width, xmax * im_width,
ymin * im_height, ymax * im_height)
draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
(left, top)],
width=thickness,
fill=color)
# If the total height of the display strings added to the top of the bounding
# box exceeds the top of the image, stack the strings below the bounding box
# instead of above.
display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
# Each display_str has a top and bottom margin of 0.05x.
total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)
if top > total_display_str_height:
text_bottom = top
else:
text_bottom = top + total_display_str_height
# Reverse list and print from bottom to top.
for display_str in display_str_list[::-1]:
text_width, text_height = font.getsize(display_str)
margin = np.ceil(0.05 * text_height)
draw.rectangle([(left, text_bottom - text_height - 2 * margin),
(left + text_width, text_bottom)],
fill=color)
draw.text((left + margin, text_bottom - text_height - margin),
display_str,
fill="black",
font=font)
text_bottom -= text_height - 2 * margin
def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):
Overlay labeled boxes on an image with formatted scores and label names.
colors = list(ImageColor.colormap.values())
try:
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf",
25)
except IOError:
print("Font not found, using default font.")
font = ImageFont.load_default()
for i in range(min(boxes.shape[0], max_boxes)):
if scores[i] >= min_score:
ymin, xmin, ymax, xmax = tuple(boxes[i])
display_str = "{}: {}%".format(class_names[i].decode("ascii"),
int(100 * scores[i]))
color = colors[hash(class_names[i]) % len(colors)]
image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
draw_bounding_box_on_image(
image_pil,
ymin,
xmin,
ymax,
xmax,
color,
font,
display_str_list=[display_str])
np.copyto(image, np.array(image_pil))
return image
Explanation: Example use
Helper functions for downloading images and for visualization
Visualization code adapted from the TF object detection API, providing just the minimal functionality needed.
End of explanation
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
image_url = "https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg" #@param
downloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)
Explanation: Apply the module
Load a public image from Open Images v4, save it locally, and display it.
End of explanation
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" #@param ["https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1", "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"]
detector = hub.load(module_handle).signatures['default']
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
def run_detector(detector, path):
img = load_img(path)
converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
start_time = time.time()
result = detector(converted_img)
end_time = time.time()
result = {key:value.numpy() for key,value in result.items()}
print("Found %d objects." % len(result["detection_scores"]))
print("Inference time: ", end_time-start_time)
image_with_boxes = draw_boxes(
img.numpy(), result["detection_boxes"],
result["detection_class_entities"], result["detection_scores"])
display_image(image_with_boxes)
run_detector(detector, downloaded_image_path)
Explanation: Select an object detection module and apply it to the downloaded image. The available modules are:
FasterRCNN+InceptionResNet V2: high accuracy
ssd+mobilenet V2: small and fast
End of explanation
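As a small follow-on, the result dictionary built inside run_detector can also be summarized instead of drawn. A hedged sketch (it assumes you keep or return that result dict, which the function above does not currently do) that counts detections per class above a score threshold:
import collections
def count_detections(result, min_score=0.5):
    # result is assumed to be the dict of NumPy arrays built in run_detector above
    keep = result["detection_scores"] >= min_score
    entities = result["detection_class_entities"][keep]
    return collections.Counter(e.decode("ascii") for e in entities)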
image_urls = [
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg",
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg",
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg",
]
def detect_img(image_url):
start_time = time.time()
image_path = download_and_resize_image(image_url, 640, 480)
run_detector(detector, image_path)
end_time = time.time()
print("Inference time:",end_time-start_time)
detect_img(image_urls[0])
detect_img(image_urls[1])
detect_img(image_urls[2])
Explanation: More images
Run inference on additional images, with time tracking.
End of explanation |
7,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploying a Model and Predicting with Cloud Machine Learning Engine
This notebook is the final step in a series of notebooks for doing machine learning on cloud. The previous notebook demonstrated evaluating a model. In a real-world scenario, it is likely that there are multiple evaluation datasets, as well as multiple models that need to be evaluated, before there is a model suitable for deployment.
Workspace Setup
The first step is to set up the workspace that we will use within this notebook - the Python libraries, and the Google Cloud Storage bucket that will be used to contain the inputs and outputs produced over the course of the steps.
Step1: The storage bucket was created earlier. We'll re-declare it here, so we can use it.
Step2: Model
Let's take a quick look at the model that was previously produced as a result of the training job. This is the model that was evaluated, and is going to be deployed.
Step3: Deployment
Cloud Machine Learning Engine provides APIs to deploy and manage models. The first step is to create a named model resource, which can be referred to by name. The second step is to deploy the trained model binaries as a version within the model resource.
NOTE
Step4: At this point the model is ready for batch prediction jobs. It is also automatically exposed as an HTTP endpoint for performing online prediction.
Online Prediction
Online prediction is accomplished by issuing HTTP requests to the specific model version endpoint. Instances to be predicted are formatted as JSON in the request body. The structure of instances depends on the model. The census model in this sample was trained using data formatted as CSV, and so the model expects inputs as CSV formatted strings.
Prediction results are returned as JSON in the response.
HTTP requests must contain an OAuth token auth header to succeed. In the Datalab notebook, the OAuth token corresponding to the environment is accessible without requiring an OAuth flow. Actual applications will need to determine the best strategy for acquiring OAuth tokens, generally using Application Default Credentials.
Step5: It is quite simple to issue these requests using your HTTP library of choice. Actual applications should include the logic to handle errors, including retries.
Batch Prediction
While online prediction is optimized for low-latency requests over small lists of instances, batch prediction is designed for high-throughput prediction for large datasets. The same model can be used for both.
Batch prediction jobs can also be submitted via the API. They are easily submitted via the gcloud tool as well.
Step6: Each batch prediction job must have a unique name within the scope of a project. The specified name below may need to be changed if you are re-running this notebook.
Step7: NOTE
Step8: The status of the job can be inspected in the Cloud Console. Once it is completed, the outputs should be visible in the specified output path. | Python Code:
import google.datalab as datalab
import google.datalab.ml as ml
import mltoolbox.regression.dnn as regression
import os
import requests
import time
Explanation: Deploying a Model and Predicting with Cloud Machine Learning Engine
This notebook is the final step in a series of notebooks for doing machine learning on cloud. The previous notebook demonstrated evaluating a model. In a real-world scenario, it is likely that there are multiple evaluation datasets, as well as multiple models that need to be evaluated, before there is a model suitable for deployment.
Workspace Setup
The first step is to set up the workspace that we will use within this notebook - the Python libraries, and the Google Cloud Storage bucket that will be used to contain the inputs and outputs produced over the course of the steps.
End of explanation
storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'
storage_region = 'us-central1'
workspace_path = os.path.join(storage_bucket, 'census')
training_path = os.path.join(workspace_path, 'training')
model_name = 'census'
model_version = 'v1'
Explanation: The storage bucket was created earlier. We'll re-declare it here, so we can use it.
End of explanation
!gsutil ls -r {training_path}/model
Explanation: Model
Let's take a quick look at the model that was previously produced as a result of the training job. This is the model that was evaluated, and is going to be deployed.
End of explanation
!gcloud ml-engine models create {model_name} --regions {storage_region}
!gcloud ml-engine versions create {model_version} --model {model_name} --origin {training_path}/model
Explanation: Deployment
Cloud Machine Learning Engine provides APIs to deploy and manage models. The first step is to create a named model resource, which can be referred to by name. The second step is to deploy the trained model binaries as a version within the model resource.
NOTE: These steps can take a few minutes.
End of explanation
api = 'https://ml.googleapis.com/v1/projects/{project}/models/{model}/versions/{version}:predict'
url = api.format(project=datalab.Context.default().project_id,
model=model_name,
version=model_version)
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer ' + datalab.Context.default().credentials.get_access_token().access_token
}
body = {
'instances': [
'490,64,2,0,1,0,2,8090,015,01,1,00590,00500,1,18,0,2,1',
'1225,32,5,0,4,5301,2,9680,015,01,1,00100,00100,1,21,2,1,1',
'1226,30,1,0,1,0,2,8680,020,01,1,00100,00100,1,16,0,2,1'
]
}
response = requests.post(url, json=body, headers=headers)
predictions = response.json()['predictions']
predictions
Explanation: At this point the model is ready for batch prediction jobs. It is also automatically exposed as an HTTP endpoint for performing online prediction.
Online Prediction
Online prediction is accomplished by issuing HTTP requests to the specific model version endpoint. Instances to be predicted are formatted as JSON in the request body. The structure of instances depends on the model. The census model in this sample was trained using data formatted as CSV, and so the model expects inputs as CSV formatted strings.
Prediction results are returned as JSON in the response.
HTTP requests must contain an OAuth token auth header to succeed. In the Datalab notebook, the OAuth token corresponding to the environment is accessible without requiring an OAuth flow. Actual applications will need to determine the best strategy for acquiring OAuth tokens, generally using Application Default Credentials.
End of explanation
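Outside of Datalab, one common way to obtain that OAuth token is Application Default Credentials via the google-auth library. A minimal sketch, assuming google-auth is installed and ADC is configured in the environment:
import google.auth
import google.auth.transport.requests
# obtain credentials from the environment (service account, gcloud login, GCE metadata, ...)
credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())
token = credentials.token  # use as the "Authorization: Bearer <token>" header value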
%file /tmp/instances.csv
490,64,2,0,1,0,2,8090,015,01,1,00590,00500,1,18,0,2,1
1225,32,5,0,4,5301,2,9680,015,01,1,00100,00100,1,21,2,1,1
1226,30,1,0,1,0,2,8680,020,01,1,00100,00100,1,16,0,2,1
prediction_data_path = os.path.join(workspace_path, 'data/prediction.csv')
!gsutil -q cp /tmp/instances.csv {prediction_data_path}
Explanation: It is quite simple to issue these requests using your HTTP library of choice. Actual applications should include the logic to handle errors, including retries.
Batch Prediction
While online prediction is optimized for low-latency requests over small lists of instances, batch prediction is designed for high-throughput prediction for large datasets. The same model can be used for both.
Batch prediction jobs can also be submitted via the API. They are easily submitted via the gcloud tool as well.
End of explanation
job_name = 'census_prediction_' + str(int(time.time()))
prediction_path = os.path.join(workspace_path, 'predictions')
Explanation: Each batch prediction job must have a unique name within the scope of a project. The specified name below may need to be changed if you are re-running this notebook.
End of explanation
!gcloud ml-engine jobs submit prediction {job_name} --model {model_name} --version {model_version} --data-format TEXT --input-paths {prediction_data_path} --output-path {prediction_path} --region {storage_region}
Explanation: NOTE: A batch prediction job can take a few minutes, due to overhead of provisioning resources, which is reasonable for large jobs, but can far exceed the time to complete a tiny dataset such as the one used in this sample.
End of explanation
!gsutil ls {prediction_path}
!gsutil cat {prediction_path}/prediction*
Explanation: The status of the job can be inspected in the Cloud Console. Once it is completed, the outputs should be visible in the specified output path.
End of explanation |
7,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot bike-share data with Matplotlib
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step4: Question 4 | Python Code:
from pandas import DataFrame, Series
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
weather = pd.read_table('daily_weather.tsv')
usage = pd.read_table('usage_2012.tsv')
stations = pd.read_table('stations.tsv')
newseasons = {'Summer': 'Spring', 'Spring': 'Winter', 'Fall': 'Summer', 'Winter': 'Fall'}
weather['season_desc'] = weather['season_desc'].map(newseasons)
weather['Day'] = pd.DatetimeIndex(weather.date).date
weather['Month'] = pd.DatetimeIndex(weather.date).month
Explanation: Plot bike-share data with Matplotlib
End of explanation
weather['temp'].plot()
# weather.plot(kind='line', y='temp', x='Day')
plt.show()
weather[['Month', 'humidity', 'temp']].groupby('Month').aggregate(np.mean).plot(kind='bar')
plt.show()
Explanation: Question 1: Plot Daily Temp of 2012
Plot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month.
End of explanation
w = weather[['season_desc', 'temp', 'total_riders']]
w_fal = w.loc[w['season_desc'] == 'Fall']
w_win = w.loc[w['season_desc'] == 'Winter']
w_spr = w.loc[w['season_desc'] == 'Spring']
w_sum = w.loc[w['season_desc'] == 'Summer']
plt.scatter(w_fal['temp'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)
plt.scatter(w_win['temp'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)
plt.scatter(w_spr['temp'], w_spr['total_riders'], c='b', label='Spring', s=100, alpha=.5)
plt.scatter(w_sum['temp'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)
plt.legend(loc='lower right')
plt.xlabel('Temperature')
plt.ylabel('Total Riders')
plt.show()
Explanation: Question 2: Rental Volumes compared to Temp
Use a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season.
End of explanation
w = weather[['season_desc', 'windspeed', 'total_riders']]
w_fal = w.loc[w['season_desc'] == 'Fall']
w_win = w.loc[w['season_desc'] == 'Winter']
w_spr = w.loc[w['season_desc'] == 'Spring']
w_sum = w.loc[w['season_desc'] == 'Summer']
plt.scatter(w_fal['windspeed'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)
plt.scatter(w_win['windspeed'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)
plt.scatter(w_spr['windspeed'], w_spr['total_riders'], c='b', label='Spring', s=100, alpha=.5)
plt.scatter(w_sum['windspeed'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)
plt.legend(loc='lower right')
plt.xlabel('Wind Speed')
plt.ylabel('Total Riders')
plt.show()
Explanation: Question 3: Daily Rentals compared to Windspeed
Create another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season.
End of explanation
s = stations[['station','lat','long']]
u = pd.concat([usage['station_start']], axis=1, keys=['station'])
counts = u['station'].value_counts()
c = DataFrame(counts.index, columns=['station'])
c['counts'] = counts.values
m = pd.merge(s, c, on='station')
plt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.1)
plt.legend(loc='lower right')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()
Explanation: Question 4: Rental Volumes by Geographical Location
How do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude.
End of explanation |
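Two hedged follow-ups on the cells above, rather than required changes: Questions 2 and 3 repeat the same per-season scatter with a different x column, and Question 4 asks for average daily rentals while the cell above sizes points by raw 2012 totals (dividing by the 366 days of 2012 is one simple way to get a daily average; the scale factor is arbitrary).
def season_scatter(df, xcol):
    # one loop instead of four copy-pasted per-season slices
    colors = {'Fall': 'y', 'Winter': 'r', 'Spring': 'b', 'Summer': 'g'}
    for season, group in df.groupby('season_desc'):
        plt.scatter(group[xcol], group['total_riders'],
                    c=colors[season], label=season, s=100, alpha=.5)
    plt.legend(loc='lower right')
    plt.xlabel(xcol)
    plt.ylabel('Total Riders')
    plt.show()
# season_scatter(weather, 'temp'); season_scatter(weather, 'windspeed')
# Question 4 with average daily rentals (2012 had 366 days); the *20 point scale is arbitrary
m['avg_daily'] = m['counts'] / 366.0
plt.scatter(m['long'], m['lat'], s=m['avg_daily'] * 20, alpha=.3, label='Station')
plt.legend(loc='lower right')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()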
7,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MP2 theory for a closed-shell reference
In this notebook we will use wicked to generate equations for the MP2 method using an orbital-invariant formalism
Step1: Here we define the operator and derive the residual equations
Step3: Compute the Hartree–Fock and MP2 energy
Step4: Prepare integrals for Forte
Step5: Define orbital spaces and dimensions
Step6: Build the Fock matrix and the zeroth-order Fock matrix
Step7: Build the MP denominators | Python Code:
import wicked as w
w.reset_space()
w.add_space("o", "fermion", "occupied", ["i", "j", "k", "l", "m", "n"])
w.add_space("v", "fermion", "unoccupied", ["a", "b", "c", "d", "e", "f"])
Explanation: MP2 theory for a closed-shell reference
In this notebook we will use wicked to generate equations for the MP2 method using an orbital-invariant formalism
End of explanation
T2 = w.op("T2", ["v+ v+ o o"])
F0 = w.op("F0", ["o+ o", "v+ v"])
F1 = w.op("F1", ["o+ v", "v+ o"])
V1 = w.utils.gen_op("V",2,"ov","ov")
wt = w.WickTheorem()
expr = wt.contract(w.commutator(F0, T2), 0, 4)
expr += wt.contract(V1, 0, 4)
mbeq = expr.to_manybody_equation("R")
# code generation
code = ["def evaluate_residual(F0,T2,V):",
" # contributions to the residual",
" Roovv = np.zeros((nocc,nocc,nvir,nvir))"]
for eq in mbeq["oo|vv"]:
contraction = eq.compile("einsum")
code.append(f' {contraction}')
code.append(f' return Roovv')
funct = '\n'.join(code)
print(f'Defined the function:\n\n{funct}\n')
exec(funct)
import psi4
import forte
import forte.utils
from forte import forte_options
import numpy as np
from math import isclose
Explanation: Here we define the operator and derive the residual equations
End of explanation
# setup xyz geometry for linear H4
geometry =
H 0.0 0.0 0.0
H 0.0 0.0 1.0
H 0.0 0.0 2.0
H 0.0 0.0 3.0
symmetry c1
(Escf, psi4_wfn) = forte.utils.psi4_scf(geometry,
basis='sto-3g',
reference='rhf',
options={'E_CONVERGENCE' : 1.e-12})
psi4.set_options({'mp2_type': 'conv','freeze_core': True})
Emp2_psi4 = psi4.energy('mp2')
print(f"RHF energy: {Escf:.12f} Eh")
print(f"MP2 energy: {Emp2_psi4:.12f} Eh")
print(f"MP2 corr. energy: {Emp2_psi4 - Escf:.12f} Eh")
Explanation: Compute the Hartree–Fock and MP2 energy
End of explanation
# Define the orbital spaces
mo_spaces = {'RESTRICTED_DOCC': [2],'RESTRICTED_UOCC': [2]}
# pass Psi4 options to Forte
options = psi4.core.get_options()
options.set_current_module('FORTE')
forte_options.get_options_from_psi4(options)
# Grab the number of MOs per irrep
nmopi = psi4_wfn.nmopi()
# Grab the point group symbol (e.g. "C2V")
point_group = psi4_wfn.molecule().point_group().symbol()
# create a MOSpaceInfo object
mo_space_info = forte.make_mo_space_info_from_map(nmopi, point_group,mo_spaces, [])
# make a ForteIntegral object
ints = forte.make_ints_from_psi4(psi4_wfn, forte_options, mo_space_info)
Explanation: Prepare integrals for Forte
End of explanation
occmos = mo_space_info.corr_absolute_mo('RESTRICTED_DOCC')
virmos = mo_space_info.corr_absolute_mo('RESTRICTED_UOCC')
allmos = mo_space_info.corr_absolute_mo('CORRELATED')
nocc = 2 * len(occmos)
nvir = 2 * len(virmos)
Explanation: Define orbital spaces and dimensions
End of explanation
# Build the Fock matrix blocks
F = {'oo': forte.spinorbital_oei(ints,occmos, occmos),
'vv': forte.spinorbital_oei(ints,virmos, virmos),
'ov': forte.spinorbital_oei(ints,occmos, virmos)}
# OO block
v = forte.spinorbital_tei(ints,occmos,occmos,occmos, occmos)
F['oo'] += np.einsum('piqi->pq', v)
# VV block
v = forte.spinorbital_tei(ints,virmos, occmos, virmos, occmos)
F['vv'] += np.einsum('piqi->pq', v)
# OV block
v = forte.spinorbital_tei(ints, occmos, occmos, virmos, occmos)
F['ov'] += np.einsum('piqi->pq', v)
# Build the diagonal orbital energies
Fdiag = {'oo': np.diag(F['oo']), 'vv': np.diag(F['vv'])}
F0 = {'oo' : np.diag(Fdiag['oo']),'vv' : np.diag(Fdiag['vv']) }
# Build the two-electron integrals
V = {}
V["oovv"] = forte.spinorbital_tei(ints,occmos,occmos,virmos,virmos)
Explanation: Build the Fock matrix and the zeroth-order Fock matrix
End of explanation
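For reference, the denominators and energy assembled below implement the standard spin-orbital MP2 working equations; stated here as a sketch of the textbook expressions the loops encode:
$$D_{ij}^{ab} = \epsilon_i + \epsilon_j - \epsilon_a - \epsilon_b, \qquad t_{ij}^{ab} = \frac{\langle ij \| ab \rangle}{D_{ij}^{ab}}, \qquad E_{\text{MP2}} = \frac{1}{4}\sum_{ijab} \langle ij \| ab \rangle\, t_{ij}^{ab} = \frac{1}{4}\sum_{ijab} \frac{|\langle ij \| ab \rangle|^2}{\epsilon_i + \epsilon_j - \epsilon_a - \epsilon_b}$$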
d2 = np.zeros((nocc,nocc,nvir,nvir))
fo = Fdiag['oo']
fv = Fdiag['vv']
for i in range(nocc):
for j in range(nocc):
for a in range(nvir):
for b in range(nvir):
si = i % 2
sj = j % 2
sa = a % 2
sb = b % 2
if si == sj == sa == sb:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
if si == sa and sj == sb and si != sj:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
if si == sb and sj == sa and si != sj:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
# Compute the MP2 correlation energy
Emp2 = 0.0
for i in range(nocc):
for j in range(nocc):
for a in range(nvir):
for b in range(nvir):
Emp2 += 0.25 * V["oovv"][i][j][a][b] ** 2 / (fo[i] + fo[j] - fv[a] - fv[b])
print(f"MP2 corr. energy: {Emp2:.12f} Eh")
def antisymmetrize_residual(Roovv):
# antisymmetrize the residual
Roovv_anti = np.zeros((nocc,nocc,nvir,nvir))
Roovv_anti += np.einsum("ijab->ijab",Roovv)
Roovv_anti -= np.einsum("ijab->jiab",Roovv)
Roovv_anti -= np.einsum("ijab->ijba",Roovv)
Roovv_anti += np.einsum("ijab->jiba",Roovv)
return Roovv_anti
def update_amplitudes(T2,R,d2):
T2["oovv"] += np.einsum("ijab,ijab->ijab",R,d2)
def compute_energy(T2,V):
energy = 0.25 * np.einsum('ijab,ijab->', V["oovv"], T2["oovv"])
return energy
T2 = {}
T2["oovv"] = np.zeros((nocc,nocc,nvir,nvir))
maxiter = 10
for i in range(maxiter):
Roovv = evaluate_residual(F0,T2,V)
Roovv = antisymmetrize_residual(Roovv)
update_amplitudes(T2,Roovv,d2)
Emp2_wicked = compute_energy(T2,V)
# check for convergence
norm_R = np.linalg.norm(Roovv)
print(f"{i} {Emp2_wicked:+.12f} {norm_R}")
if norm_R < 1.0e-9:
break
print(f"MP2 corr. energy: {Emp2_wicked:+.12f} Eh")
print(f"Err corr. energy: {Emp2_wicked - Emp2:+.12f} Eh")
assert isclose(Emp2_wicked,Emp2)
Explanation: Build the MP denominators
End of explanation |
7,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: The path to the saved GoogleNews Word2Vec model.
Step4: Read data
Original question datasets.
Step5: Build features
Raw implementations from Abhishek below (excluding the features we already have in other notebooks)
Step6: Build final features
Step7: Save features | Python Code:
from pygoose import *
import os
import warnings
import gensim
from fuzzywuzzy import fuzz
from nltk import word_tokenize
from nltk.corpus import stopwords
from scipy.stats import skew, kurtosis
from scipy.spatial.distance import cosine, cityblock, jaccard, canberra, euclidean, minkowski, braycurtis
Explanation: Feature: "Abhishek's Features"
Based on Abhishek Thakur's features published on GitHub and Kaggle forum.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = '3rdparty_abhishek'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
google_news_model_path = os.path.join(project.aux_dir, 'word2vec', 'GoogleNews-vectors-negative300.bin.gz')
Explanation: The path to the saved GoogleNews Word2Vec model.
End of explanation
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('').drop(['id', 'qid1', 'qid2'], axis=1)
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('').drop(['test_id'], axis=1)
stop_words = stopwords.words('english')
Explanation: Read data
Original question datasets.
End of explanation
def wmd(model, s1, s2):
s1 = str(s1).lower().split()
s2 = str(s2).lower().split()
stop_words = stopwords.words('english')
s1 = [w for w in s1 if w not in stop_words]
s2 = [w for w in s2 if w not in stop_words]
return model.wmdistance(s1, s2)
def norm_wmd(model, s1, s2):
s1 = str(s1).lower().split()
s2 = str(s2).lower().split()
stop_words = stopwords.words('english')
s1 = [w for w in s1 if w not in stop_words]
s2 = [w for w in s2 if w not in stop_words]
return model.wmdistance(s1, s2)
def sent2vec(model, s):
words = s.lower()
words = word_tokenize(words)
words = [w for w in words if not w in stop_words]
words = [w for w in words if w.isalpha()]
M = []
for w in words:
try:
M.append(model[w])
except:
continue
M = np.array(M)
v = M.sum(axis=0)
return v / np.sqrt((v ** 2).sum())
def extend_with_features(data):
data['common_words'] = data.apply(lambda x: len(set(str(x['question1']).lower().split()).intersection(set(str(x['question2']).lower().split()))), axis=1)
data['fuzz_qratio'] = data.apply(lambda x: fuzz.QRatio(str(x['question1']), str(x['question2'])), axis=1)
data['fuzz_WRatio'] = data.apply(lambda x: fuzz.WRatio(str(x['question1']), str(x['question2'])), axis=1)
model = gensim.models.KeyedVectors.load_word2vec_format(google_news_model_path, binary=True)
data['wmd'] = data.apply(lambda x: wmd(model, x['question1'], x['question2']), axis=1)
norm_model = gensim.models.KeyedVectors.load_word2vec_format(google_news_model_path, binary=True)
norm_model.init_sims(replace=True)
data['norm_wmd'] = data.apply(lambda x: norm_wmd(norm_model, x['question1'], x['question2']), axis=1)
question1_vectors = np.zeros((data.shape[0], 300))
for i, q in progressbar(enumerate(data.question1.values), total=len(data)):
question1_vectors[i, :] = sent2vec(model, q)
question2_vectors = np.zeros((data.shape[0], 300))
for i, q in progressbar(enumerate(data.question2.values), total=len(data)):
question2_vectors[i, :] = sent2vec(model, q)
question1_vectors = np.nan_to_num(question1_vectors)
question2_vectors = np.nan_to_num(question2_vectors)
data['cosine_distance'] = [cosine(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['cityblock_distance'] = [cityblock(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['jaccard_distance'] = [jaccard(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['canberra_distance'] = [canberra(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['euclidean_distance'] = [euclidean(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['minkowski_distance'] = [minkowski(x, y, 3) for (x, y) in zip(question1_vectors, question2_vectors)]
data['braycurtis_distance'] = [braycurtis(x, y) for (x, y) in zip(question1_vectors, question2_vectors)]
data['skew_q1vec'] = [skew(x) for x in question1_vectors]
data['skew_q2vec'] = [skew(x) for x in question2_vectors]
data['kur_q1vec'] = [kurtosis(x) for x in question1_vectors]
data['kur_q2vec'] = [kurtosis(x) for x in question2_vectors]
warnings.filterwarnings('ignore')
extend_with_features(df_train)
extend_with_features(df_test)
df_train.drop(['is_duplicate', 'question1', 'question2'], axis=1, inplace=True)
df_test.drop(['question1', 'question2'], axis=1, inplace=True)
Explanation: Build features
Raw implementations from Abhishek below (excluding the features we already have in other notebooks):
End of explanation
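For intuition, the token-level features above are easy to probe on a toy pair of questions; the pair below is hypothetical, and the WMD features additionally require the GoogleNews model to be loaded as in extend_with_features:
q1 = "How do I learn Python quickly?"   # hypothetical example pair
q2 = "What is the best way to learn Python fast?"
print('QRatio:', fuzz.QRatio(q1, q2))
print('WRatio:', fuzz.WRatio(q1, q2))
print('common words:', len(set(q1.lower().split()) & set(q2.lower().split())))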
X_train = np.array(df_train.values, dtype='float64')
X_test = np.array(df_test.values, dtype='float64')
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
df_train.describe().T
Explanation: Build final features
End of explanation
feature_names = [
'abh_common_words',
'abh_fuzz_qratio',
'abh_fuzz_WRatio',
'abh_wmd',
'abh_norm_wmd',
'abh_cosine_distance',
'abh_cityblock_distance',
'abh_jaccard_distance',
'abh_canberra_distance',
'abh_euclidean_distance',
'abh_minkowski_distance',
'abh_braycurtis_distance',
'abh_skew_q1vec',
'abh_skew_q2vec',
'abh_kur_q1vec',
'abh_kur_q2vec',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
7,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a histogram with percentages correct for each category
Step1: Stats of text length for correct and incorrect | Python Code:
df_test = df[(df["is_test"] == True)]
df_test["prediction"] = predictions
#print df_test.head()
# Compare the percent correct to the results from earlier to make sure things are lined up right
print "Calculated accuracy:", sum(df_test["label"] == df_test["prediction"]) / float(len(df_test))
print "Model accuracy:", best_score
df_correct = df_test[(df_test["label"] == df_test["prediction"])]
df_incorrect = df_test[(df_test["label"] != df_test["prediction"])]
#df_correct.describe()
#df_test.describe()
#plt.hist(correct_labels)
#print df.describe()
print "Correct predictions:", df_correct.groupby(["label"])["prediction"].count()
print "Incorrect predictions:", df_incorrect.groupby(["label"])["prediction"].count()
Explanation: Build a histogram with percentages correct for each category
End of explanation
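The cell above prints raw counts; a hedged sketch of the per-category fraction correct (using the df_test frame built above, and assuming the plotting imports from earlier cells), which is the quantity a bar chart of "percent correct per category" would plot:
correct = (df_test["label"] == df_test["prediction"])
pct_correct = correct.groupby(df_test["label"]).mean() * 100  # percent correct per category
pct_correct.plot(kind="bar")
plt.ylabel("% correct")
plt.show()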
print df_correct.describe()
print df_incorrect.describe()
#print model_results
d3_data = {}
for m in model_results:
d3_data[m["feat_name"]] = {}
d3_data[m["feat_name"]]["C"] = []
d3_data[m["feat_name"]]["G"] = []
d3_data[m["feat_name"]]["S"] = []
#print m["feat_name"], m["model_params"], m["model_score"]
for s in m["grid_scores"]:
d3_data[m["feat_name"]]["C"].append(s[0]["C"])
d3_data[m["feat_name"]]["G"].append(s[0]["gamma"])
d3_data[m["feat_name"]]["S"].append(s[1])
#print d3_data
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
def d3_plot(X, Y, Z):
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_xlabel("C", weight="bold", size="xx-large")
ax.set_xticks([0, 5000, 10000, 15000])
ax.set_xlim(0, max(X))
ax.set_ylabel("gamma", weight="bold", size="xx-large")
ax.set_yticks([0, 1.5, 3, 4.5])
ax.set_ylim(0, max(Y))
ax.set_zlabel("Accuracy", weight="bold", size="xx-large")
#ax.set_zticks([0.5, 0.6, 0.70])
ax.set_zlim(0.5, 0.75)
ax.scatter(X, Y, Z, c='b', marker='o')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
plt.show()
d3_plot(np.array(d3_data["area"]["C"]), np.array(d3_data["area"]["G"]), np.array(d3_data["area"]["S"]))
d3_plot(np.array(d3_data["line"]["C"]), np.array(d3_data["line"]["G"]), np.array(d3_data["line"]["S"]))
d3_plot(np.array(d3_data["word"]["C"]), np.array(d3_data["word"]["G"]), np.array(d3_data["word"]["S"]))
Explanation: Stats of text length for correct and incorrect
End of explanation |
7,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
All About
Step1: Graphic Interpretation
The graphic above illustrates the pattern of follow-ups in the CMMI data set for each of the 1,640 unique patients. Using your cursor, you can hover over a particular color to find out the specific care setting. Each concentric circle going out from the middle represent a new follow-up visit for a person. For example, in the figure above, starting in the center, there is a red layer in the first concentric circle. If you hover over the first red circle, this says 41.8%. This means that 41.8% of the 1,640 patients reported 'Long Term Care' at their first visit. Hovering over the next layer that is black, gives a value of 7.26%. This means that 7.26% of the population had a first visit labeled as 'Long Term Care' and then had no additional visits.
Statistical Inference
I'm not sure if there is a hypothesis we want to test in relation to these two variables (i.e. Care Setting and number of follow-up visits).
<a id='ua'></a>
Unadjusted Associations with Care Setting (First Visit Only)
In this AIM, we will look at testing the null hypothesis of no association between each row variable and the column variable (ConsultLoc). There is obviously a time aspect to this data, but for this aim, we will stick to the first encounter only.
Here is how the data is munged for this aim | Python Code:
from IPython.core.display import display, HTML;from string import Template;
HTML('<script src="//d3js.org/d3.v3.min.js" charset="utf-8"></script>')
css_text2 = '''
#main { float: left; width: 750px;}#sidebar { float: right; width: 100px;}#sequence { width: 600px; height: 70px;}#legend { padding: 10px 0 0 3px;}#sequence text, #legend text { font-weight: 400; fill: #000000; font-size: 0.75em;}#graph-div2 { position: relative;}#graph-div2 { stroke: #fff;}#explanation { position: absolute; top: 330px; left: 405px; width: 140px; text-align: center; color: #666; z-index: -1;}#percentage { font-size: 2.3em;}
'''
with open('interactive_circle_cl.js', 'r') as myfile:
data=myfile.read()
js_text_template2 = Template(data)
html_template = Template('''
<style> $css_text </style>
<div id="sequence"></div>
<div id="graph-div2"></div>
<div id="explanation" style="visibility: hidden;">
<span id="percentage"></span><br/>
of patients meet this criteria
</div>
</div>
<script> $js_text </script>
''');
js_text2 = js_text_template2.substitute({'graphdiv': 'graph-div2'});
HTML(html_template.substitute({'css_text': css_text2, 'js_text': js_text2}))
Explanation: All About: Care Setting
NOTE:
- There is only 1 encounter that contains the value consultloc=7 (Palliative Care Unit) - it is on the third visit for this particular individual. Because there is only 1 individual with this trait, this will cause problems, so I need to collapse this category to something. After discussing with Don (7/26/16), we decided to delete this encounter.
There are only 7 encounters that contain the values consultLoc=6 (Emergency Department) - this will cause problems with modeling, from viewing individuals with this value, it seems the most appropriate level to collapse this too is consultloc=1 (Hospital - ICU ( includes MICU, SICU, TICU)). After discussing with Don (7/26/16), we decided to delete these encounters.
It is important to note that we are dealing with a dataset of 5,066 encounters. As such, it is possible that a particular patient's care setting field (on QDACT) will change (or be different) over time. Therefore, for the remainder of this notebook, we will only explore the first care setting assigned to a patient and how it correlates with their number of follow-up visits. Also, it is important to note that due to the nebulous design of this exploration, we are not adjusting for the multiple tests that follow. This could be a critique that many reviewers would have if this work is ever submitted.
Because this is only exploratory (not confirmatory or a clinical trial), I would recommend not adjusting (and have not done so below).
Table of Contents
Follow-up Visit Distribution by Care Setting Graphic
Unadjusted Associations with Care Setting (First Visit Only)
<a id='fuvdig'></a>
Follow-up Visit Distribution by Care Setting (Interactive Graphic)
To explore the entire follow-up distribution of the CMMI population stratified by care setting, we will use an interactive graphic. Because it is interactive, it requires you to place your cursor in the first cell below (starting with 'from IPython.core.display...') and then press the play button in the toolbar above. You will need to press play 5 times. After pressing play 5 times, the interactive graphic will appear. Instructions for interpreting the graphic are given below the figure.
End of explanation
import pandas as pd
table = pd.read_csv(open('./python_scripts/11_primarydiagnosis_tables_catv2_consultLoc.csv','r'))
#Anxiety
table[0:5]
#Appetite
table[5:10]
#Constipation
table[10:15]
#Depression
table[15:20]
#Drowsiness
table[20:25]
#Nausea
table[25:30]
#Pain
table[30:35]
#Shortness
table[35:40]
#Tiredness
table[40:45]
#Well Being
table[45:50]
# PPSScore
table[50:51]
Explanation: Graphic Interpretation
The graphic above illustrates the pattern of follow-ups in the CMMI data set for each of the 1,640 unique patients. Using your cursor, you can hover over a particular color to find out the specific care setting. Each concentric circle going out from the middle represent a new follow-up visit for a person. For example, in the figure above, starting in the center, there is a red layer in the first concentric circle. If you hover over the first red circle, this says 41.8%. This means that 41.8% of the 1,640 patients reported 'Long Term Care' at their first visit. Hovering over the next layer that is black, gives a value of 7.26%. This means that 7.26% of the population had a first visit labeled as 'Long Term Care' and then had no additional visits.
Statistical Inference
I'm not sure if there is a hypothesis we want to test in relation to these two variables (i.e. Care Setting and number of follow-up visits).
<a id='ua'></a>
Unadjusted Associations with Care Setting (First Visit Only)
In this AIM, we will look at testing the null hypothesis of no association between each row variable and the column variable (ConsultLoc). There is obviously a time aspect to this data, but for this aim, we will stick to the first encounter only.
Here is how the data is munged for this aim:
- Get the first encounter for each internalid, we start with a dataset that is 5,066 records in length and end up with a data set that is 1,640 records in length. Note: we sort data by internalid and AssessmentDate, if there is another alternative date variable we should use, please let us know. This will determine the "first visit" by sort order.
- We apply our "set-to-missing" algorithm to every variable to be analyzed. This will limit the number of individuals to at least 1,618, as there are 22 who are initially missing their consultloc. We should also rerun the missing data analysis by incorporating this algorithm, as it is a truer state of the data.
- This number will further be limited by running each row variable through the set-to-missing algorithm. Each row will try to be as transparent as possible about this process.
- Because this is a public server, actual data can't be posted, but the source code used to get to these results can. Here is the location to that file.
End of explanation |
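A hedged sketch of the munging and testing described above. The column names internalid, AssessmentDate, and consultloc are taken from the text; the raw 5,066-encounter frame is assumed to be loaded as encounters, and 'Anxiety' stands in for any one of the row variables:
import pandas as pd
from scipy.stats import chi2_contingency
# first encounter per patient: sort, then take the first row of each internalid (~1,640 rows expected)
first_visits = (encounters.sort_values(['internalid', 'AssessmentDate'])
                          .groupby('internalid', as_index=False)
                          .first())
# one unadjusted test of "no association" between a row variable and care setting
ct = pd.crosstab(first_visits['Anxiety'], first_visits['consultloc'])
chi2, p_value, dof, expected = chi2_contingency(ct)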
7,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
```
Copyright 2020 The IREE Authors
Licensed under the Apache License v2.0 with LLVM Exceptions.
See https
Step1: 2. Import TensorFlow and Other Dependencies
Step2: 3. Load the MNIST Dataset
Step3: 4. Create a Simple DNN
MLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the tf.TensorSpecs).
Step4: 5. Compile the Model with IREE
tf.keras adds a large number of methods to TrainableDNN, and most of them
cannot be compiled with IREE. To get around this we tell IREE exactly which
methods we would like it to compile.
Step5: Choose one of IREE's three backends to compile to. (Note
Step6: 6. Train the Compiled Model on MNIST
This compiled model is portable, demonstrating that IREE can be used for training on a mobile device. On mobile, IREE has a ~1000 fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).
Step7: 7. Evaluate on Heldout Test Examples | Python Code:
%%capture
!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases
# Import IREE's TensorFlow Compiler and Runtime.
import iree.compiler.tf
import iree.runtime
Explanation: ```
Copyright 2020 The IREE Authors
Licensed under the Apache License v2.0 with LLVM Exceptions.
See https://llvm.org/LICENSE.txt for license information.
SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
```
Training and Executing an MNIST Model with IREE
Overview
This notebook covers installing IREE and using it to train a simple neural network on the MNIST dataset.
1. Install and Import IREE
End of explanation
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tf.random.set_seed(91)
np.random.seed(91)
plt.style.use("seaborn-whitegrid")
plt.rcParams["font.family"] = "monospace"
plt.rcParams["figure.figsize"] = [8, 4.5]
plt.rcParams["figure.dpi"] = 150
# Print version information for future notebook users to reference.
print("TensorFlow version: ", tf.__version__)
print("Numpy version: ", np.__version__)
Explanation: 2. Import TensorFlow and Other Dependencies
End of explanation
# Keras datasets don't provide metadata.
NUM_CLASSES = 10
NUM_ROWS, NUM_COLS = 28, 28
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape into grayscale images:
x_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))
x_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))
# Rescale uint8 pixel values into float32 values between 0 and 1:
x_train = x_train.astype(np.float32) / 255
x_test = x_test.astype(np.float32) / 255
# IREE doesn't currently support int8 tensors, so we cast them to int32:
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
print("Sample image from the dataset:")
sample_index = np.random.randint(x_train.shape[0])
plt.figure(figsize=(5, 5))
plt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap="gray")
plt.title(f"Sample #{sample_index}, label: {y_train[sample_index]}")
plt.axis("off")
plt.tight_layout()
Explanation: 3. Load the MNIST Dataset
End of explanation
BATCH_SIZE = 32
class TrainableDNN(tf.Module):
def __init__(self):
super().__init__()
# Create a Keras model to train.
inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128)(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Dense(10)(x)
outputs = tf.keras.layers.Softmax()(x)
self.model = tf.keras.Model(inputs, outputs)
# Create a loss function and optimizer to use during training.
self.loss = tf.keras.losses.SparseCategoricalCrossentropy()
self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs
])
def predict(self, inputs):
return self.model(inputs, training=False)
# We compile the entire training step by making it a method on the model.
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs
tf.TensorSpec([BATCH_SIZE], tf.int32) # labels
])
def learn(self, inputs, labels):
# Capture the gradients from forward prop...
with tf.GradientTape() as tape:
probs = self.model(inputs, training=True)
loss = self.loss(labels, probs)
# ...and use them to update the model's weights.
variables = self.model.trainable_variables
gradients = tape.gradient(loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
return loss
Explanation: 4. Create a Simple DNN
MLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the tf.TensorSpecs).
End of explanation
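One practical consequence of the fixed batch size: a leftover partial batch has to be padded before calling the compiled predict method. A small sketch, usable once compiled_model is created below; the padded rows are simply discarded from the output:
def predict_any(images):
    # pad up to a multiple of BATCH_SIZE with zero images, then slice the padding back off
    n = images.shape[0]
    pad = (-n) % BATCH_SIZE
    padded = np.concatenate(
        [images, np.zeros((pad, NUM_ROWS, NUM_COLS, 1), np.float32)])
    outputs = [compiled_model.predict(padded[i:i + BATCH_SIZE]).to_host()
               for i in range(0, padded.shape[0], BATCH_SIZE)]
    return np.concatenate(outputs)[:n]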
exported_names = ["predict", "learn"]
Explanation: 5. Compile the Model with IREE
tf.keras adds a large number of methods to TrainableDNN, and most of them
cannot be compiled with IREE. To get around this we tell IREE exactly which
methods we would like it to compile.
End of explanation
backend_choice = "dylib-llvm-aot (CPU)" #@param [ "vmvx (CPU)", "dylib-llvm-aot (CPU)", "vulkan-spirv (GPU/SwiftShader – requires additional drivers) " ]
backend_choice = backend_choice.split(' ')[0]
# Compile the TrainableDNN module
# Note: extra flags are needed to i64 demotion, see https://github.com/google/iree/issues/8644
vm_flatbuffer = iree.compiler.tf.compile_module(
TrainableDNN(),
target_backends=[backend_choice],
exported_names=exported_names,
extra_args=["--iree-mhlo-demote-i64-to-i32=false",
"--iree-flow-demote-i64-to-i32"])
compiled_model = iree.runtime.load_vm_flatbuffer(
vm_flatbuffer,
backend=backend_choice)
Explanation: Choose one of IREE's three backends to compile to. (Note: Using Vulkan requires installing additional drivers.)
End of explanation
#@title Benchmark inference and training
print("Inference latency:\n ", end="")
%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])
print("Training latancy:\n ", end="")
%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])
# Run the core training loop.
losses = []
step = 0
max_steps = x_train.shape[0] // BATCH_SIZE
for batch_start in range(0, x_train.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_train.shape[0]:
continue
inputs = x_train[batch_start:batch_start + BATCH_SIZE]
labels = y_train[batch_start:batch_start + BATCH_SIZE]
loss = compiled_model.learn(inputs, labels).to_host()
losses.append(loss)
step += 1
print(f"\rStep {step:4d}/{max_steps}: loss = {loss:.4f}", end="")
#@title Plot the training results
import bottleneck as bn
smoothed_losses = bn.move_mean(losses, 32)
x = np.arange(len(losses))
plt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')
plt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')
plt.ylim(0)
plt.legend(frameon=True)
plt.xlabel("training step")
plt.ylabel("cross-entropy")
plt.title("training loss");
Explanation: 6. Train the Compiled Model on MNIST
This compiled model is portable, demonstrating that IREE can be used for training on a mobile device. On mobile, IREE has a ~1000 fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).
End of explanation
#@title Evaluate the network on the test data.
accuracies = []
step = 0
max_steps = x_test.shape[0] // BATCH_SIZE
for batch_start in range(0, x_test.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_test.shape[0]:
continue
inputs = x_test[batch_start:batch_start + BATCH_SIZE]
labels = y_test[batch_start:batch_start + BATCH_SIZE]
prediction = compiled_model.predict(inputs).to_host()
prediction = np.argmax(prediction, -1)
accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)
step += 1
print(f"\rStep {step:4d}/{max_steps}", end="")
print()
accuracy = np.mean(accuracies)
print(f"Test accuracy: {accuracy:.3f}")
#@title Display inference predictions on a random selection of heldout data
rows = 4
columns = 4
images_to_display = rows * columns
assert BATCH_SIZE >= images_to_display
random_index = np.arange(x_test.shape[0])
np.random.shuffle(random_index)
x_test = x_test[random_index]
y_test = y_test[random_index]
predictions = compiled_model.predict(x_test[:BATCH_SIZE]).to_host()
predictions = np.argmax(predictions, -1)
fig, axs = plt.subplots(rows, columns)
for i, ax in enumerate(np.ndarray.flatten(axs)):
ax.imshow(x_test[i, :, :, 0])
color = "#000000" if predictions[i] == y_test[i] else "#ff7f0e"
ax.set_xlabel(f"prediction={predictions[i]}", color=color)
ax.grid(False)
ax.set_yticks([])
ax.set_xticks([])
fig.tight_layout()
Explanation: 7. Evaluate on Heldout Test Examples
End of explanation |
7,241 | Given the following text description, write Python code to implement the functionality described.
Description:
Find two non-intersecting subarrays with equal sum
Function to check if two non-intersecting subarrays with equal sum exist or not ; Sort the given array ; Traverse the array ; Check for duplicate elements ; If no duplicate element is present in the array ; Driver code ; Given array ; Size of array
| Python Code:
def findSubarrays(arr, N):
    # Sort the array so that equal elements end up next to each other
    arr.sort()
    # Any duplicate value gives two non-intersecting single-element
    # subarrays with equal sum
    for i in range(N - 1):
        if arr[i] == arr[i + 1]:
            print("YES")
            return
    print("NO")

if __name__ == '__main__':
    # Given array
    arr = [4, 3, 0, 1, 2, 0]
    # Size of array
    N = len(arr)
    findSubarrays(arr, N)
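# If only the yes/no answer is needed, an equivalent shortcut (my own sketch,
# not part of the original solution) is to compare the list length with the
# number of distinct values: duplicates exist exactly when the set is smaller.
def hasEqualSumSubarrays(arr):
    return len(set(arr)) < len(arr)

print("YES" if hasEqualSumSubarrays([4, 3, 0, 1, 2, 0]) else "NO")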
|
7,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some remarks on sorting
You can either use the sorted() Python built-in function, which will produce a sorted version of anything you can iterate over, or the .sort() method of lists, which sorts them in-place.
Step1: Sort in reverse order
Step3: Define the sort order
You can define a function which extracts the key by which to sort from the elements of the array you are sorting. For example, if you want to sort strings by their length, you would write a function which gives the length of the string and tell sort() to use it. Like so
Step5: This is very useful for example when sorting dictionaries by their value | Python Code:
l1=[3,1,4,6,7]
l2=sorted(l1)
print(l1,l2) #l2 is a different list from l1
l1.sort() #now l1 is sorted in-place
print(l1)
Explanation: Some remarks on sorting
You can either use the sorted() Python built-in function, which will produce a sorted version of anything you can iterate over, or the .sort() method of lists, which sorts them in-place.
End of explanation
print(sorted(l1,reverse=True))
Explanation: Sort in reverse order:
reverse=True
End of explanation
def lenk(s):
    """This function simply returns the length of s"""
    return len(s)
l=["sjsjsj","zzz","aaaaaaaaaaaaaaaaaa"]
print("Default sort order:", sorted(l))
print("New sort order by length:", sorted(l,key=lenk)) #Note: not lenk(), just lenk
print("New sort order by length (lambda):", sorted(l,key=lambda s: len(s))) #Same but with lambda functions
Explanation: Define the sort order
You can define a function which extracts the key by which to sort from the elements of the array you are sorting. For example, if you want to sort strings by their length, you would write a function which gives the length of the string and tell sort() to use it. Like so:
End of explanation
s="ACGTTGGCCAGATCGACTATGGCAGATTGACTAGCATACGATCGCATCAGAT"
counter={}
for char in s:
counter[char]=counter.get(char,0)+1 #Easy way to count items into a dictionary
#Now we want to print the dictionary sorted by count from most to least common and
#in case two letters have the same count, we want them sorted alphabetically
def get_val(key_value):
    """Gets one key-value pair from the dictionary and returns the negated count, and then the letter"""
    return -key_value[1], key_value[0] #the negated count is the first thing we sort on, the letter the second, so ties stay alphabetical
items=list(counter.items()) #all items from the dictionary as a list of (key,value) pairs
print("Items:", items)
items.sort(key=get_val)
for letter,count in items:
print(letter, count)
Explanation: This is very useful for example when sorting dictionaries by their value:
End of explanation |
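# As a design-choice aside (my own sketch, not part of the original lesson):
# collections.Counter from the standard library can replace the counting loop.
# The key function below prints from most to least common, breaking ties alphabetically.
from collections import Counter

counts = Counter("ACGTTGGCCAGATCGACTATGGCAGATTGACTAGCATACGATCGCATCAGAT")
for letter, count in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])):
    print(letter, count)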
7,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallelize image filters with dask
This notebook will show how to parallelize a CPU-intensive workload using dask array. A simple uniform filter (equivalent to a mean filter) from scipy.ndimage is used for illustration purposes.
Step1: Get the image
Step2: Initial speed
Let's try the filter directly on the image.
Step3: With dask
First, we'll create the dask array with one chunk only (chunks=img.shape).
Step4: depth defines the overlap. We have one chunk only, so overlap is not necessary.
compute must be called to start the computation.
Step5: As we can see, the performance is the same as applying the filter directly.
Now, let's chop up the image into chunks so that we can leverage all the cores in our computer.
Step6: We have four cores, so let's split the array into four chunks.
Step7: Pixels in both axes are even, so we can split the array in equally sized chunks. If we had odd shapes, chunks would not be the same size (given four cpu cores). E.g. 101x101 image => 50x50 and 51x51 chunks.
Step8: Now, let's see if the filtering is faster.
Step9: It is
Step10: To overcome this edge effect in the seams, we need to define a higher depth so that dask does the computation with an overlap. We need an overlap of 25 pixels (half the size of the neighborhood in mean).
Step11: Edge effect is gone, nice! The dots in the difference are due to uniform_filter's limited precision. From the manual
Step12: As you see, adjusting the chunk size did not affect the performance significantly, though it's a good idea to identify your bottleneck and adjust the chunk size accordingly.
That's all! By chopping up the computation we utilized all cpu cores and got a speedup at best | Python Code:
%pylab inline
from scipy.ndimage import uniform_filter
import dask.array as da
def mean(img):
"ndimage.uniform_filter with `size=51`"
return uniform_filter(img, size=51)
Explanation: Parallelize image filters with dask
This notebook will show how to parallelize a CPU-intensive workload using dask array. A simple uniform filter (equivalent to a mean filter) from scipy.ndimage is used for illustration purposes.
End of explanation
!if [ ! -e stitched--U00--V00--C00--Z00.png ]; then wget -q https://github.com/arve0/master/raw/master/stitched--U00--V00--C00--Z00.png; fi
img = imread('stitched--U00--V00--C00--Z00.png')
img = (img*255).astype(np.uint8) # image read as float32, image is 8 bit grayscale
imshow(img[::16, ::16])
mp = str(img.shape[0] * img.shape[1] * 1e-6 // 1)
'%s Mega pixels, shape %s, dtype %s' % (mp, img.shape, img.dtype)
Explanation: Get the image
End of explanation
# filter directly
%time mean_nd = mean(img)
imshow(mean_nd[::16, ::16]);
Explanation: Initial speed
Let's try the filter directly on the image.
End of explanation
img_da = da.from_array(img, chunks=img.shape)
Explanation: With dask
First, we'll create the dask array with one chunk only (chunks=img.shape).
End of explanation
%time mean_da = img_da.map_overlap(mean, depth=0).compute()
imshow(mean_da[::16, ::16]);
Explanation: depth defines the overlap. We have one chunk only, so overlap is not necessary.
compute must be called to start the computation.
End of explanation
from multiprocessing import cpu_count
cpu_count()
Explanation: As we can see, the performance is the same as applying the filter directly.
Now, let's chop up the image into chunks so that we can leverage all the cores in our computer.
End of explanation
img.shape, mean_da.shape, mean_nd.shape
Explanation: We have four cores, so let's split the array into four chunks.
End of explanation
chunk_size = [x//2 for x in img.shape]
img_da = da.rechunk(img_da, chunks=chunk_size)
Explanation: Pixels in both axes are even, so we can split the array in equally sized chunks. If we had odd shapes, chunks would not be the same size (given four cpu cores). E.g. 101x101 image => 50x50 and 51x51 chunks.
End of explanation
%time mean_da = img_da.map_overlap(mean, depth=0).compute()
imshow(mean_da[::16, ::16]);
Explanation: Now, let's see if the filtering is faster.
End of explanation
size = 50
mask = np.index_exp[chunk_size[0]-size:chunk_size[0]+size, chunk_size[1]-size:chunk_size[1]+size]
figure(figsize=(12,4))
subplot(131)
imshow(mean_nd[mask]) # filtered directly
subplot(132)
imshow(mean_da[mask]) # filtered in chunks with dask
subplot(133)
imshow(mean_nd[mask] - mean_da[mask]); # difference
Explanation: It is :-)
If one opens the process manager, one will see that the Python process is eating more than 100% CPU.
As we are looking at neighboring pixels to compute the mean intensity for the center pixel, you might wonder what happens at the seams between chunks. Let's examine that.
End of explanation
%time mean_da = img_da.map_overlap(mean, depth=25).compute()
figure(figsize=(12,4))
subplot(131)
imshow(mean_nd[mask]) # filtered directly
subplot(132)
imshow(mean_da[mask]) # filtered in chunks with dask
subplot(133)
imshow(mean_nd[mask] - mean_da[mask]); # difference
Explanation: To overcome this edge effect in the seams, we need to define a higher depth so that dask does the computation with an overlap. We need an overlap of 25 pixels (half the size of the neighborhood in mean).
End of explanation
img_da = da.rechunk(img_da, 1000)
%time mean_da = img_da.map_overlap(mean, depth=25).compute()
imshow(mean_da[::16, ::16]);
Explanation: Edge effect is gone, nice! The dots in the difference are due to uniform_filter's limited precision. From the manual:
The multi-dimensional filter is implemented as a sequence of one-dimensional uniform filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.
Let's see if we can improve the performance. Since we do not get a 4x speedup, the computation might not be purely CPU-bound. A chunk size of 1000 is a good place to start.
End of explanation
'%0.1fx' % (2.7/1.24)
Explanation: As you see, adjusting the chunk size did not affect the performance significantly, though it's a good idea to identify your bottleneck and adjust the chunk size accordingly.
That's all! By chopping up the computation we utilized all CPU cores and got, at best, a speedup of:
End of explanation |
7,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Interpolation
Step1: <div id='intro' />
Introduction
Previously in our jupyter notebooks, we learned about interpolation methods such as Newton's Divided Differences and Lagrange, among others. Another alternative for interpolating a set of data points is using Cubic Splines.
This technique avoids Runge's phenomenon and builds piecewise degree-3 polynomials easily.
<div id='sp' />
Splines
The most common spline is the linear spline. Given a set of points $(x_{1},y_{1}), (x_{2},y_{2}),...,(x_{n},y_{n})$, this spline connects consecutive points, creating a continuous but non-smooth curve. However, this curve has a problem: it is not smooth! To avoid this problem, the cubic spline creates a set of degree-3 polynomials (specifically n-1 polynomials)... a much better curve.
Step2: The orange curve is generated with cubic splines (using the scipy implementation). The other colors are the derivatives of the Cubic Spline as indicated in the legend.
However, if we think about this curve, we can see that there are infinitely many polynomials that meet all the points. Our goal is to create a unique polynomial. Given this condition, there are 4 properties that define the cubic spline we are looking for.
<div id='pr'/>
Properties of Splines
When we create a spline from n data points, we obtain a set of n-1 degree-3 polynomials. For example
Step3: Example 1 - Hand made interpolation
Step4: Example 2 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy as sp
from scipy import interpolate
import ipywidgets as widgets
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
from scipy.interpolate import CubicSpline
M=8
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Interpolation: Splines </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.22</h2>
</center>
Table of Contents
Introduction
Splines
Properties
Solving
The additional Property
Exercises
Acknowledgements
End of explanation
# Code based on Example from: https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.CubicSpline.html#scipy.interpolate.CubicSpline
# The data
x = np.linspace(0,2*np.pi,12)
y = np.sin(x)*x
# Building interpolation object
cs = CubicSpline(x, y)
# Defining a finer mesh to plot the function
xx = np.linspace(0,2*np.pi,1000)
yyo = np.sin(xx)*xx
yyo1 = np.cos(xx)*xx+np.sin(xx)
yyo2 = -np.sin(xx)*xx+2*np.cos(xx)
yyo3 = -np.cos(xx)*xx-3*np.sin(xx)
yyo4 = np.sin(xx)*xx-4*np.cos(xx)
#Interpolating the data with the spline
yy = cs(xx)
yy1 = cs(xx, 1)
yy2 = cs(xx, 2)
yy3 = cs(xx, 3)
yy4 = cs(xx, 4)
# Plotting the splines and its derivatives
plt.figure(figsize=(M,M))
plt.plot(x,y,'k.',markersize=20,label=r'Data Points')
plt.plot(xx,yy, linewidth=4, label=r'S$(x)$')
plt.plot(xx,yy1, linewidth=4, label=r'$\frac{d}{dx}$S$(x)$')
plt.plot(xx,yy2, linewidth=4, label=r'$\frac{d^2}{dx^2}$S$(x)$')
plt.plot(xx,yy3, linewidth=4, label=r'$\frac{d^3}{dx^3}$S$(x)$')
plt.plot(xx,yy4, linewidth=4, label=r'$\frac{d^4}{dx^4}$S$(x)$')
plt.plot(xx,yyo4,'k--',linewidth=4, label='test', alpha=0.4)
plt.plot(x,y,'k.',markersize=20)
plt.title(r'Cubic Spline is defined as S$(x)$')
plt.axis('tight')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.grid(True)
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
plt.figure()
plt.semilogy(np.abs(yy-yyo))
plt.grid(True)
Explanation: <div id='intro' />
Introduction
Previously in our jupyter notebooks, we learned about interpolation methods such as Newton's Divided Differences and Lagrange, among others. Another alternative for interpolating a set of data points is using Cubic Splines.
This technique avoids Runge's phenomenon and builds piecewise degree-3 polynomials easily.
<div id='sp' />
Splines
The most common spline is the linear spline. Given a set of points $(x_{1},y_{1}), (x_{2},y_{2}),...,(x_{n},y_{n})$, this spline connects consecutive points, creating a continuous but non-smooth curve. However, this curve has a problem: it is not smooth! To avoid this problem, the cubic spline creates a set of degree-3 polynomials (specifically n-1 polynomials)... a much better curve.
End of explanation
def cubic_spline(x, y, end=None, k1=0, k2=0, p1=0, p2=0):
#x: x-coordinates of points
#y: y-coordinates of points
#end: Natural, Adjusted, Clamped, Parabolically, NaK
n = len(x)
A = np.zeros((3*n-3, 3*n-3))
b = np.zeros(3*n-3)
delta_x=np.diff(x)
#Building the linear system of equations
#1st property
for i in np.arange(n-1):
b[i]= y[i+1]-y[i]
A[i,3*i:3*(i+1)] = [delta_x[i],delta_x[i]**2,delta_x[i]**3]
#2nd property
for i in np.arange(n-2):
A[(n-1)+i,3*i:3*(i+1)+1]=[1, 2*delta_x[i], 3*delta_x[i]**2, -1]
#3rd property
for i in np.arange(n-2):
A[(n-1)+(n-2)+i,3*i:3*(i+1)+2] = [0, 2, 6*delta_x[i], 0, -2]
#Ending conditions (4th property)
if end =='Natural':
A[-2,1]= 2
A[-1,-2] = 2
A[-1,-1] = 6*delta_x[-1]
elif end == 'Adjusted':
A[-2,1]= 2
A[-1,-2] = 2
A[-1,-1] = 6*delta_x[-1]
b[-2:] = [k1,k2]
print('Adjusted',b[-2:])
elif end == 'Clamped':
A[-2,0]=1
A[-1,-3:] = [1,2*delta_x[-1],3*delta_x[-1]**2]
b[-2:] = [p1,p2]
elif end == 'Parabolically':
A[-2,2]=1
A[-1,-1]=1
elif end == 'NaK':
A[-2,2:6]=[6,0,0,-6]
A[-1,-4:]=[6,0,0,-6]
#Solving the system
sol = np.linalg.solve(A,b)
S = {'b':sol[::3],
'c':sol[1::3],
'd':sol[2::3],
'x':x,
'y':y
}
return S
# 'der' computes the 'der'-derivative of the Spline,
# but it has not been implemented. Can you do it? Please do it!
def cubic_spline_eval(xx,S,der=0):
x=S['x']
y=S['y']
b=S['b']
c=S['c']
d=S['d']
n=len(x)
yy=np.zeros_like(xx)
for i in np.arange(n-1):
jj = np.where(np.logical_and(x[i]<=xx,xx<=x[i+1]))
yy[jj]=y[i]+b[i]*(xx[jj]-x[i])+c[i]*(xx[jj]-x[i])**2+d[i]*(xx[jj]-x[i])**3
return yy
Explanation: The orange curve is generated with cubic splines (using the scipy implementation). The other colors are the derivatives of the Cubic Spline as indicated in the legend.
However, if we think about this curve, we can see that there are infinitely many polynomials that meet all the points. Our goal is to create a unique polynomial. Given this condition, there are 4 properties that define the cubic spline we are looking for.
<div id='pr'/>
Properties of Splines
When we create a spline from n data points, we obtain a set of n-1 degree-3 polynomials. For example:
Given a set of points $(x_{1},y_{1}), (x_{2},y_{2}),...,(x_{n},y_{n})$, the splines are:
\begin{equation} S_{1}(x) = y_{1} + b_{1}(x-x_{1}) + c_{1}(x-x_{1})^{2} + d_{1}(x-x_{1})^{3} \
S_{2}(x) = y_{2} + b_{2}(x-x_{2}) + c_{2}(x-x_{2})^{2} + d_{2}(x-x_{2})^{3} \
... \
... \
... \
S_{n-1}(x) = y_{n-1} + b_{n-1}(x-x_{n-1}) + c_{n-1}(x-x_{n-1})^{2} + d_{n-1}(x-x_{n-1})^{3}
\end{equation}
Thus, our goal is to obtain the $b, c$ and $d$ coefficients (the $y_{i}$ values are given by the data). With these values, we create the spline $S(x)$ that meets all the data points. This spline has the following properties:
Property 1 (Are the points connected?)
The first property checks that at each x-coordinate the spline $S(x)$ reaches the corresponding y-coordinate. That is, it checks that the spline meets the data points.
$$S_{i}(x_{i}) = y_{i}$$ $$ S_{i}(x_{i+1}) = y_{i+1}$$
$$i \in [1,n-1]$$
Property 2 (Slope Continuity)
The second property ensures that the slopes of adjacent splines are equal at the interior points. This guarantees the smoothness of $S(x)$.
$$S'_{i-1}(x_{i}) = S'_{i}(x_{i})$$
$$i \in [2,n-1]$$
Property 3 (Curvature Continuity)
This property ensures that the curvatures of adjacent polynomials are equal at the interior points, avoiding abrupt changes of the curve at the data points.
$$S''_{i-1}(x_{i}) = S''_{i}(x_{i})$$
$$i \in [2,n-1]$$
<div id='so' />
Solving the system
If we have n points, we know that our spline will be composed of n-1 curves $S_{i}(x)$. We also have (3n-3) unknown variables ($b_{i}, c_{i}, d_{i}$ for each piece). However, we can build a system of equations to find these variables. How can we do this? Easily: using the previous properties!
Using the previously defined splines for n points:
\begin{equation} S_{1}(x) = y_{1} + b_{1}(x-x_{1}) + c_{1}(x-x_{1})^{2} + d_{1}(x-x_{1})^{3} \
S_{2}(x) = y_{2} + b_{2}(x-x_{2}) + c_{2}(x-x_{2})^{2} + d_{2}(x-x_{2})^{3} \
\vdots \
S_{n-1}(x) = y_{n-1} + b_{n-1}(x-x_{n-1}) + c_{n-1}(x-x_{n-1})^{2} + d_{n-1}(x-x_{n-1})^{3}
\end{equation}
We also need the first derivatives of these curves:
\begin{equation} S'_{1}(x) = b_{1} + 2c_{1}(x-x_{1}) + 3d_{1}(x-x_{1})^{2} \
S'_{2}(x) = b_{2} + 2c_{2}(x-x_{2}) + 3d_{2}(x-x_{2})^{2} \
\vdots \
S'_{n-1}(x) = b_{n-1} + 2c_{n-1}(x-x_{n-1}) + 3d_{n-1}(x-x_{n-1})^{2} \
\end{equation}
And their second derivatives:
\begin{equation} S''_{1}(x) = 2c_{1} + 6d_{1}(x-x_{1}) \
S''_{2}(x) = 2c_{2} + 6d_{2}(x-x_{2}) \
\vdots \
S''_{n-1}(x) = 2c_{n-1} + 6d_{n-1}(x-x_{n-1}) \
\end{equation}
Using the first property, we get (n-1) equations:
\begin{equation} b_{1}(x_{2}-x_{1}) + c_{1}(x_{2}-x_{1})^2 + d_{1}(x_{2}-x_{1})^3 = y_{2} - y_{1} \hspace{1cm}(1)\
b_{2}(x_{3}-x_{2}) + c_{2}(x_{3}-x_{2})^2 + d_{2}(x_{3}-x_{2})^3 = y_{3} - y_{2} \hspace{1cm}(2)\
\vdots\
b_{n-1}(x_{n}-x_{n-1}) + c_{n-1}(x_{n}-x_{n-1})^2 + d_{n-1}(x_{n}-x_{n-1})^3 = y_{n} - y_{n-1} \hspace{1cm}(n-1)
\end{equation}
Using the second property, we get (n-2) equations:
\begin{equation} b_{1}+2c_{1}(x_{2}-x_{1}) + 3d_{1}(x_{2}-x_{1})^2 - b_{2}= 0 \hspace{1cm}(1)\
b_{2}+2c_{2}(x_{3}-x_{2}) + 3d_{2}(x_{3}-x_{2})^2 - b_{3}= 0 \hspace{1cm}(2)\
\vdots\
b_{n-2}+2c_{n-2}(x_{n-1}-x_{n-2}) + 3d_{n-2}(x_{n-1}-x_{n-2})^2 -b_{n-1}=0 \hspace{1cm}(n-2)\
\end{equation}
Using the third property, we get (n-2) equations:
\begin{equation} 2c_{1}+6d_{1}(x_{2}-x_{1}) - 2c_{2} = 0 \hspace{1cm}(1)\
2c_{2}+6d_{2}(x_{3}-x_{2}) - 2c_{3}=0 \hspace{1cm}(2)\
\vdots\
2c_{n-2}+6d_{n-2}(x_{n-1}-x_{n-2}) - 2c_{n-1} = 0 \hspace{1cm}(n-2)\
\end{equation}
If we add up all our equations, we obtain (3n-5) equations. Clearly, the matrix of that system is not square (we need 2 more equations). For this, we have another property that defines the end conditions of the splines.
<div id='ad'/>
Splines Ending options
For this last condition, we have the following 5 options:
Natural Spline:
This option creates a spline with zero curvature at the endpoints, thus:
\begin{align}
S''_{1}(x_{1}) &= 2c_{1} = 0\
S''_{n-1}(x_{n}) &= 2c_{n-1}+6d_{n-1}(x_{n}-x_{n-1}) = 0
\end{align}
Adjusted curvature:
This option creates a spline whose curvature at the endpoints is equal to a previously defined parameter, not necessarily zero.
\begin{align}
S''_{1}(x_{1}) &= 2c_{1} = \kappa_{1}\
S''_{n-1}(x_{n}) &= 2c_{n-1}+6d_{n-1}(x_{n}-x_{n-1}) = \kappa_{2}
\end{align}
Clamped cubic spline:
This option sets the slopes at the ends of the spline to previously defined values.
\begin{align}
S'_{1}(x_{1}) & = b_{1} = p_{1} \
S'_{n-1}(x_{n}) & = b_{n-1}+2c_{n-1}(x_{n}-x_{n-1}) + 3d_{n-1}(x_{n}-x_{n-1})^2 = p_{2}
\end{align}
Ended Parabolically
With this property, the edges of the splines are 2-degree polynomials. Hence the coefficients:
\begin{align}
d_{1} &= 0 \
d_{n-1} &= 0
\end{align}
Why does this option not work for $n\le 3$?
Not-a-Knot cubic spline
This condition checks the continuity of the third derivative at the edges:
\begin{align}
S'''_{1}(x_{2}) &= S'''_{2}(x_{2})\
6d_{1}&=6d_{2}\
S'''_{n-2}(x_{n-1}) &= S'''_{n-1}(x_{n-1})\
6d_{n-2}&=6d_{n-1}
\end{align}
Why does this option not work for $n\le 4$?
Each option gives us the 2 extra equations needed. Thanks to this, we have (3n-3) unknowns and (3n-3) equations.
Finally, to find the coefficients of the spline, we'll build the system of equations.
Example:
If we have the following 3 points: $(x_1,y_1),(x_2,y_2),(x_3,y_3)$
We will get 6 unknowns
So, to build a Natural spline, we need to solve the following linear system of equation:
$$ \begin{bmatrix} (x_2-x_1) & (x_2-x_1)^2 & (x_2-x_1)^3 & 0 & 0 & 0 \
0 & 0 & 0 &(x_3-x_2) & (x_3-x_2)^2 & (x_3-x_2)^3 \
1 & 2(x_2-x_1) & 3(x_2-x_1)^2 & -1 & 0 & 0 \
0 & 2 & 6(x_2-x_1) & 0 & -2 & 0 \
0 & 2 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 2 & 6(x_3-x_2) \
\end{bmatrix}
\left[ \begin{array}{c} b_1 \ c_1 \ d_1 \ b_2 \ c_2 \ d_2 \end{array} \right] =
\left[ \begin{array}{c} y_2-y_1 \ y_3-y_2 \ 0 \ 0 \ 0 \ 0 \end{array} \right]
$$
Finally, finding the coefficients of the splines is reduced to solving a system of equations, and we already know how to do this from previous notebooks!
Now the code:
End of explanation
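# Quick numerical sanity check of the 3-point natural-spline system above
# (a sketch; the points (0,0), (1,1), (2,0) are my own illustrative choice).
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
h1, h2 = x[1] - x[0], x[2] - x[1]

# Rows: property 1 (twice), property 2, property 3, and the two natural end conditions
A = np.array([[h1, h1**2,  h1**3,   0,  0,     0],
              [0,  0,      0,       h2, h2**2, h2**3],
              [1,  2*h1,   3*h1**2, -1, 0,     0],
              [0,  2,      6*h1,    0,  -2,    0],
              [0,  2,      0,       0,  0,     0],
              [0,  0,      0,       0,  2,     6*h2]])
rhs = np.array([y[1] - y[0], y[2] - y[1], 0, 0, 0, 0])

b1, c1, d1, b2, c2, d2 = np.linalg.solve(A, rhs)
print(b1, c1, d1)  # expected 1.5, 0.0, -0.5
print(b2, c2, d2)  # expected 0.0, -1.5, 0.5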
x = np.array([1,2,4,5])
y = np.array([2,1,4,3])
S = cubic_spline(x,y,end='Natural')
x1 = np.linspace(1,2,200)
x2 = np.linspace(2,4,200)
x3 = np.linspace(4,5,200)
S1 = y[0]+S['b'][0]*(x1-x[0])+S['c'][0]*(x1-x[0])**2+S['d'][0]*(x1-x[0])**3
S2 = y[1]+S['b'][1]*(x2-x[1])+S['c'][1]*(x2-x[1])**2+S['d'][1]*(x2-x[1])**3
S3 = y[2]+S['b'][2]*(x3-x[2])+S['c'][2]*(x3-x[2])**2+S['d'][2]*(x3-x[2])**3
plt.figure(figsize=(M,M))
plt.plot(x,y,'k.',markersize=20,label='Data Points')
plt.plot(x1,S1,'b',linewidth=5,label=r'S$1(x)$')
plt.plot(x2,S2,'g',linewidth=5,label=r'S$2(x)$')
plt.plot(x3,S3,'r',linewidth=5,label=r'S$3(x)$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid(True)
plt.show()
Explanation: Example 1 - Hand made interpolation
End of explanation
def show_spline(type_ending='Natural',k1=0, k2=0, p1=0, p2=0):
x = np.array([1,2,4,5,7,9])
y = np.array([2,1,4,3,3,4])
xx=np.linspace(np.min(x),np.max(x),1000)
S = cubic_spline(x,y,end=type_ending, k1=k1, k2=k2, p1=p1, p2=p2)
plt.figure(figsize=(M,M))
plt.plot(xx,cubic_spline_eval(xx,S),'-',linewidth=5,label=r'S$(x)$')
plt.plot(x,y,'k.',markersize=20,label='Data Points')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid(True)
plt.show()
widgets.interact(show_spline, type_ending=['Natural','Adjusted','Clamped','Parabolically','NaK'],
k1=(-20,20,1),k2=(-20,20,1),p1=(-2,2,0.2),p2=(-2,2,0.2))
Explanation: Example 2
End of explanation |
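# Cross-check of the hand-made natural spline against SciPy (a sketch; it assumes
# a SciPy version whose CubicSpline accepts bc_type='natural').
x = np.array([1, 2, 4, 5, 7, 9])
y = np.array([2, 1, 4, 3, 3, 4])
xx = np.linspace(np.min(x), np.max(x), 1000)

S = cubic_spline(x, y, end='Natural')
cs_natural = CubicSpline(x, y, bc_type='natural')

max_diff = np.max(np.abs(cubic_spline_eval(xx, S) - cs_natural(xx)))
print('Maximum difference between the two natural splines:', max_diff)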
7,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook uses the March Madness dataset provided by Kaggle.com. Please use kaggle.com to access that data.
I put the flat data into a SQLite database on my local machine; for the notebook explaining that process, please go to https://github.com/mtchem/ETL-MarchMadness-data/blob/master/data_to_SQLite_DB.ipynb
Step1: Create a connection to the database, pull data from the RegularSeasonDetailedResults table, separate each season into a pandas dataframe and put the dataframes into a dictionary with the year of each season as the key.
Step2: Functions to modify each dataframe in the dictionary. Additions include low-level statistics like % free throws, and splitting each game into two rows so that winner and loser team stats each have their own row.
Step3: Function to create a dictionary for every season where the team is the key and the team's game data is the value
Step4: Using the previous functions to add features, separate each game into two rows in a pandas dataframe, and separate team data.
Step5: Now you are ready to start exploring the team_data! | Python Code:
# imports
import sqlite3 as sql
from sklearn import datasets
from sklearn import metrics
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: This notebook uses the March Madness dataset provided by Kaggle.com. Please use kaggle.com to access that data.
I put the flat data into a SQLite database on my local machine; for the notebook explaining that process, please go to https://github.com/mtchem/ETL-MarchMadness-data/blob/master/data_to_SQLite_DB.ipynb
End of explanation
# creates a connection to my local SQLite NCAA database
## * I pull from the database each time to guarantee that no raw/original data is modified
db = r'<path to database>'
conn = sql.connect(db)
# Creates a dataframe for each year and puts it in a dictionary with the year as the key
d={}
for year in range(2009,2017,1):
year = str(year)
SQL_str = 'SELECT * FROM RegularSeasonDetailedResults WHERE Season =' + year
d["df{0}".format(year)]= pd.read_sql_query(SQL_str, conn)
# creates a copy of the dataframe dictionary
data_d = d.copy()
Explanation: Create a connection to the database, pull data from the RegularSeasonDetailedResults table, separate each season into a pandas dataframe and put the dataframes into a dictionary with the year of each season as the key.
End of explanation
# takes a dictionary of dataframes and modifies each df. Returns dictionary with modified dfs.
def modify_dictdf(d):
# new dictionary
new_d = {}
# adds some low level agg. stats like %freethrow
for year, df in d.items():
df['Wspread'] = df['Wscore'] - df['Lscore']
df['Lspread'] = df['Lscore'] - df['Wscore']
df['W%_freethrow'] = df['Wftm']/(df['Wftm']+df['Wfta'])
df['W%_field_goals'] = df['Wfgm']/(df['Wfgm']+df['Wfga'])
df['W%_3pt'] = df['Wfgm3']/(df['Wfgm3']+df['Wfga3'])
df['L%_freethrow'] = df['Lftm']/(df['Lftm']+df['Lfta'])
df['L%_field_goals'] = df['Lfgm']/(df['Lfgm']+df['Lfga'])
df['L%_3pt'] = df['Lfgm3']/(df['Lfgm3']+df['Lfga3'])
# difference in offensive rebounds
df['W_dor'] = df['Wor']-df['Lor']
df['L_dor'] = df['Lor']-df['Wor']
# difference in defensive rebounds
df['W_ddR'] = df['Wdr']-df['Ldr']
df['L_ddR'] = df['Ldr']-df['Wdr']
# creates two rows for every game so that each game's stats are represented for both teams
#splits game data to team data
df_a = df[['Wteam', 'Wscore', 'Wfgm','Wfga', 'Wfgm3', 'Wfga3', 'Wftm', 'Wfta', 'Wor',
'Wdr', 'Wast', 'Wto', 'Wstl', 'Wblk', 'Wpf', 'W%_freethrow', 'W%_field_goals',
'W%_3pt', 'Wspread', 'W_dor', 'W_ddR']]
df_b = df[['Lteam', 'Lscore','Lfgm', 'Lfga', 'Lfgm3', 'Lfga3','Lftm', 'Lfta', 'Lor', 'Ldr',
'Last', 'Lto', 'Lstl', 'Lblk', 'Lpf','L%_freethrow', 'L%_field_goals', 'L%_3pt',
'Lspread', 'L_dor', 'L_ddR']]
# renames columns of winner and loser dataframes
df_a = df_a.rename(columns = {'Wteam':'team', 'Wscore': 'score', 'Wfgm': 'fgm','Wfga': 'fga',
'Wfgm3': 'fgm3', 'Wfga3':'fga3', 'Wftm':'ftm', 'Wfta':'fta',
'Wor':'or', 'Wdr':'dr', 'Wast':'ast','Wto':'to', 'Wstl':'stl',
'Wblk':'blk', 'Wpf':'pf', 'W%_freethrow':'%_freethrow',
'W%_field_goals':'%field_goal', 'W%_3pt':'%_3pt',
'Wspread':'spread', 'W_dor':'dor', 'W_ddR':'ddR' })
df_b = df_b.rename(columns = {'Lteam':'team', 'Lscore': 'score','Lfgm': 'fgm', 'Lfga': 'fga',
'Lfgm3': 'fgm3', 'Lfga3':'fga3','Lftm':'ftm', 'Lfta':'fta',
'Lor':'or', 'Ldr':'dr', 'Last':'ast','Lto':'to','Lstl':'stl',
'Lblk':'blk','Lpf':'pf','L%_freethrow':'%_freethrow',
'L%_field_goals':'%field_goal', 'L%_3pt':'%_3pt',
'Lspread':'spread', 'L_dor':'dor', 'L_ddR':'ddR'})
# combines winner and loser dataframes so that each game has two data rows and some extra info
df_comp = df_a.append(df_b)
# applies outcome function
add_outcome = outcome(df_comp)
# makes a new dictionary of dataframes
new_d[year] = add_outcome
return new_d
# takes a dataframe, adds an 'outcome' column where win = 1 and loss = 0, returns dataframe
def outcome(df):
outcome = []
for num in df.spread:
if num > 0:
outcome.append(1)
if num < 0:
outcome.append(0)
df['outcome'] = outcome
return df
Explanation: Functions to modify each dataframe in the dictionary. Additions include low-level statistics like % free throws, and splitting each game into two rows so that winner and loser team stats each have their own row.
End of explanation
def team_data(df):
    clean_data = modify_dictdf(df)  # use the dictionary passed in, rather than the global data_d
# creates dictionary for every season where team is key, and game df for team is value
team_data = {}
for key in clean_data.keys():
df = clean_data[key]
name = 'team_dict' + key[2:]
name = {}
# list of team unique team ID numbers
teams = list(set(list(df.team)))
for team in teams:
name[team] = df[df.team == team]
team_data[key] = name
return team_data
Explanation: Function to create a dictionary for every season where the team is the key and the team's game data is the value
End of explanation
# addition of low level statistics and separating winner and loser fields so that each game has two rows
modified_data = modify_dictdf(data_d)
# adds low level statistics, splits each game into two rows, and separates
# **depending on your computer this could take a few minutes
team_data = team_data(data_d)
Explanation: Using the previous functions to add features, separate each game into two rows in a pandas dataframe, and separate team data.
End of explanation
# closes the connection to the database
conn.close()
Explanation: Now you are ready to start exploring the team_data!
End of explanation |
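# A quick peek at one team's season (a sketch: the 'df2016' key and the column
# names come from the code above, while the team itself is picked arbitrarily).
season_2016 = team_data['df2016']
some_team = next(iter(season_2016))  # any team id present in that season
season_2016[some_team][['score', 'spread', 'outcome']].describe()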
7,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 11
Step1: 1. New parameters in additional_net_params
There are a few unique additions for the grid envs to additional_net_params to be aware of.
grid_array
grid_array passes information on the road network to the scenario, specifying the parameters you see above
Step2: 2. Defining Traffic Light Phases
To start off, we define how Sumo represents traffic light phases. A phase is defined as the states that the traffic lights around an intersection can take. The typical four-way, traffic-light-controlled intersection is modeled by a string of length 12. Consider the phase "GrGr". Every letter in this phase string ("G", "r", "G", "r") corresponds to an edge in the intersection, in clockwise order. Explicitly, the northern and southern edges of the intersection both have a state of "G", where the eastern and western edges of the intersection both have a state of "r".
Sumo traffic lights are initiated to a default set of phases, and will not deviate from the phases provided in their configuration files. We will describe in this section how to define custom phases for traffic lights.
NOTE
Step3: Following this step, the TrafficLightParams class should be passed into your scenario as element traffic_lights.
Step4: That's it! The traffic light logic will be passed into Flow's internals, which will generate an additional-file containing all of the information needed to generate the traffic lights you specified in the simulation.
3. Using the Default Traffic Light Baseline
We have developed a traffic light baseline that can be used for any experiments on a grid. This baseline uses actuated traffic lights (section 5), and has been fine-tuned on many iterations of experiments with varying parameters. The actual parameters are located in the TrafficLightParams class under the getter function actuated_default(). For reference, these values are
Step5: To view the baseline in action, simply initialize the TrafficLightParams class with the baseline argument set to True, and pass it into your additional_net_params. Nothing else needs to be done; no traffic lights need to be added.
Step6: 4. Static Traffic Lights
Static traffic lights are traffic lights with pre-defined phases. They cannot dynamically adjust according to traffic needs; they simply follow the same pattern repeatedly. To see static traffic lights in action, the TrafficLightParams object should be instantiated with baseline=False.
When adding individual traffic lights, the following parameters in addition to node_id are involved
Step7: 5. Actuated Traffic Lights
For more flexibility than the static traffic lights defined above, and more control than RL-controlled traffic lights, actuated traffic lights are a good option to consider.
As an excerpt from Sumo's documentation
Step8: 6. Controlling Your Traffic Lights via RL
This is where we switch from the grid.py experiment script to green_wave.py.
To control traffic lights via RL, no tl_logic element is necessary. This is because rllab is controlling all the parameters you were able to customize in the prior sections. Your additional_net_params should look something like this
Step9: This will enable the program to recognize all nodes as traffic lights. The experiment then gives control to the environment; we are using TrafficLightGridEnv, which is an environment created for RL that applies RL-specified traffic light actions (e.g. change the state) via TraCI at each timestep.
This is all you need to run an RL experiment! It is worth taking a look at the TrafficLightGridEnv class to further understanding of the experiment internals. The rest of this tutorial is an optional walkthrough through the various components of TrafficLightGridEnv
Step10: The variable self.last_change indicates the last time the lights were allowed to change from a red-green state to a red-yellow state.
The variable self.direction indicates the direction of the intersection, i.e. the direction that is currently being allowed to flow. 0 indicates flow from top to bottom, and 1 indicates flow from left to right.
The variable self.currently_yellow with a value of 0 indicates that the traffic light is in a red-yellow state. 1 indicates that the traffic light is in a red-green state.
self.last_change is contingent on an instance variable self.min_switch_time. This is a variable that can be set in additional_env_params with the key name "switch_time". Setting switch_time enables more control over the RL experiment by preventing traffic lights from switching until switch_time timesteps have occurred. In practice, this can be used to prevent flickering.
Step11: Action Space
The action space for RL-controlled traffic lights directly matches the number of traffic lights in the system. Each traffic light node corresponds to one action. The action space is thus defined as
Step12: Observation Space
The existing observation space for our existing traffic lights experiments is designed to be a fully observable state space with these metrics in mind. For each vehicle, we want to know its velocity, its distance (in [unit]) from the next intersection, and the unique edge it is traveling on. For each traffic light, we want to know its current state (i.e. what direction it is flowing), when it last changed, and whether it was yellow. Recall that the traffic light states are encoded in self.min_switch_time.
Step13: State Space
The state space collects the information that the observation_space specifies. There are helper functions that exist in the TrafficLightGridEnv to construct the state space. | Python Code:
from flow.core.params import NetParams
from flow.scenarios.grid import SimpleGridScenario
from flow.core.params import TrafficLightParams
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams, \
InFlows, SumoCarFollowingParams
from flow.core.params import VehicleParams
import numpy as np
Explanation: Tutorial 11: Traffic Lights
This tutorial walks through how to add traffic lights to experiments. This tutorial will use the following files:
Experiment script for RL: examples/rllab/green_wave.py
Experiment script for non-RL: examples/sumo/grid.py
Scenario: grid.py (class SimpleGridScenario)
Environment for RL: green_wave_env.py (class TrafficLightGridEnv)
Environment for non-RL: loop_accel.py (class AccelEnv)
There are two main classes of traffic lights that Sumo supports: actuated and static traffic lights. This tutorial will cover both types. Over the course of this tutorial, we'll discuss 4 different types of traffic lights to introduce into your road network:
Static Traffic Lights
Actuated (baseline) Traffic Lights
Actuated Traffic Lights
RL Traffic Lights
Let's begin!
First, import all necessary classes.
End of explanation
inner_length = 300
long_length = 500
short_length = 300
n = 2 # rows
m = 3 # columns
num_cars_left = 20
num_cars_right = 20
num_cars_top = 20
num_cars_bot = 20
tot_cars = (num_cars_left + num_cars_right) * m \
+ (num_cars_top + num_cars_bot) * n
grid_array = {"short_length": short_length, "inner_length": inner_length,
"long_length": long_length, "row_num": n, "col_num": m,
"cars_left": num_cars_left, "cars_right": num_cars_right,
"cars_top": num_cars_top, "cars_bot": num_cars_bot}
Explanation: 1. New parameters in additional_net_params
There are a few unique additions for the grid envs to additional_net_params to be aware of.
grid_array
grid_array passes information on the road network to the scenario, specifying the parameters you see above: row_num, col_num, inner_length, short_length, long_length, cars_top, cars_bot, cars_left, cars_right. This is required for any grid experiment.
tl_logic
tl_logic should be used for users who want to exert more control over individual traffic lights. tl_logic simply tells the env whether the traffic lights are controlled by RL or whether a default pattern or sumo actuation is to be used. Use "actuated" if you want SUMO to control the traffic lights.
For this tutorial, we will assume the following parameters for the grid_array, which specifies a grid network with 2 rows and 3 columns. traffic_lights should be set to True for every experiment in this tutorial.
End of explanation
tl_logic = TrafficLightParams()
nodes = ["center0", "center1", "center2", "center3", "center4", "center5"]
phases = [{"duration": "31", "state": "GrGr"},
{"duration": "6", "state": "yryr"},
{"duration": "31", "state": "rGrG"},
{"duration": "6", "state": "ryry"}]
for node_id in nodes:
tl_logic.add(node_id, tls_type="static", programID="1", offset=None, phases=phases)
Explanation: 2. Defining Traffic Light Phases
To start off, we define how Sumo represents traffic light phases. A phase is defined as the states that the traffic lights around an intersection can take. The typical four-way, traffic-light-controlled intersection is modeled by a string of length 12. Consider the phase "GrGr". Every letter in this phase string ("G", "r", "G", "r") corresponds to an edge in the intersection, in clockwise order. Explicitly, the northern and southern edges of the intersection both have a state of "G", where the eastern and western edges of the intersection both have a state of "r".
Sumo traffic lights are initiated to a default set of phases, and will not deviate from the phases provided in their configuration files. We will describe in this section how to define custom phases for traffic lights.
NOTE: If the API is used at any point to modify the traffic light state, i.e. functions such as setRedYellowGreenState, this will override the traffic light's default phase.
To do anything with traffic lights, you should interface with Flow's TrafficLightParams class
Once the TrafficLightParams class is instantiated, traffic lights can be added via the classes' add function. One prerequisite of using this function is knowing the node id of any node you intend to manipulate. This information is baked into the experiment's scenario class, as well as the experiment's nod.xml file. For the experiment we are using with 2 rows and 3 columns, there are 6 nodes: "center0" to "center5".
In this particular example, each of the 6 traffic light nodes corresponds to the same set of possible phases; in other words, at any time, each node will be at the same phase. You can, however, customize each traffic light node to have different phases.
End of explanation
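# Illustration only (my own sketch, not part of Flow): decode a phase state string
# such as "GrGr" into the edge each character controls, in clockwise order as
# described above.
edges = ["north", "east", "south", "west"]  # clockwise order
meanings = {"G": "green", "y": "yellow", "r": "red"}
for edge, signal in zip(edges, "GrGr"):
    print(f"{edge:5s} -> {meanings[signal]}")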
additional_net_params = {"grid_array": grid_array, "speed_limit": 35,
"horizontal_lanes": 1, "vertical_lanes": 1,
"traffic_lights": True}
net_params = NetParams(additional_params=additional_net_params)
scenario = SimpleGridScenario(name="grid",
vehicles=VehicleParams(),
net_params=net_params,
initial_config=InitialConfig(),
traffic_lights=tl_logic)
Explanation: Following this step, the TrafficLightParams class should be passed into your scenario as element traffic_lights.
End of explanation
tl_type = "actuated"
program_id = 1
max_gap = 3.0
detector_gap = 0.8
show_detectors = True
phases = [{"duration": "31", "minDur": "8", "maxDur": "45", "state": "GrGr"},
{"duration": "6", "minDur": "3", "maxDur": "6", "state": "yryr"},
{"duration": "31", "minDur": "8", "maxDur": "45", "state": "rGrG"},
{"duration": "6", "minDur": "3", "maxDur": "6", "state": "ryry"}]
Explanation: That's it! The traffic light logic will be passed into Flow's internals, which will generate an additional-file containing all of the information needed to generate the traffic lights you specified in the simulation.
3. Using the Default Traffic Light Baseline
We have developed a traffic light baseline that can be used for any experiments on a grid. This baseline uses actuated traffic lights (section 5), and has been fine-tuned on many iterations of experiments with varying parameters. The actual parameters are located in the TrafficLightParams class under the getter function actuated_default(). For reference, these values are:
End of explanation
tl_logic = TrafficLightParams(baseline=True)
additional_net_params = {"grid_array": grid_array,
"speed_limit": 35,
"horizontal_lanes": 1,
"vertical_lanes": 1,
"traffic_lights": True,
"tl_logic": tl_logic}
Explanation: To view the baseline in action, simply initialize the TrafficLightParams class with the baseline argument set to True, and pass it into your additional_net_params. Nothing else needs to be done; no traffic lights need to be added.
End of explanation
tl_logic = TrafficLightParams(baseline=False)
phases = [{"duration": "31", "state": "GrGr"},
{"duration": "6", "state": "yryr"},
{"duration": "31", "state": "rGrG"},
{"duration": "6", "state": "ryry"}]
tl_logic.add("center0", phases=phases, programID=1)
Explanation: 4. Static Traffic Lights
Static traffic lights are traffic lights with pre-defined phases. They cannot dynamically adjust according to traffic needs; they simply follow the same pattern repeatedly. To see static traffic lights in action, the TrafficLightParams object should be instantiated with baseline=False.
When adding individual traffic lights, the following parameters in addition to node_id are involved:
tls_type: [optional] Specifies actuated or static traffic lights, defaults to static
programID: [optional] The name for this traffic light program. It cannot be the same ID as your base program, which is 0, defaults to 10
offset: [optional] The initial time offset of the program
An example of adding one static traffic light to our system is as follows:
End of explanation
tl_logic = TrafficLightParams(baseline=False)
phases = [{"duration": "31", "minDur": "8", "maxDur": "45", "state": "GrGr"},
{"duration": "6", "minDur": "3", "maxDur": "6", "state": "yryr"},
{"duration": "31", "minDur": "8", "maxDur": "45", "state": "rGrG"},
{"duration": "6", "minDur": "3", "maxDur": "6", "state": "ryry"}]
tl_logic.add("center1",
tls_type="actuated",
programID="1",
phases=phases,
maxGap=5.0,
detectorGap=0.9,
showDetectors=False)
tl_logic.add("center2",
tls_type="actuated")
Explanation: 5. Actuated Traffic Lights
For more flexibility than the static traffic lights defined above, and more control than RL-controlled traffic lights, actuated traffic lights are a good option to consider.
As an excerpt from Sumo's documentation: "SUMO supports gap-based actuated traffic control. This control scheme is common in Germany and works by prolonging traffic phases whenever a continuous stream of traffic is detected. It switches to the next phase after detecting a sufficient time gap between successive vehicles. This allows for better distribution of green-time among phases and also affects cycle duration in response to dynamic traffic conditions."
The difference between phases for static and actuated traffic lights is that actuated traffic light phases take in two additional parameters, minDur and maxDur, which describe the allowed range of time durations for each phase.
In addition to these sub-parameters of phases and all the required parameters of static of traffic lights, the following optional parameters are involved, and default to values set by Sumo:
maxGap: [optional] int, describes the maximum time gap between successive vehicle that will cause the current phase to be prolonged
detectorGap: [optional] int, determines the time distance between the (automatically generated) detector and the stop line in seconds (at each lanes maximum speed)
showDetectors: [optional] bool, toggles whether or not detectors are shown in sumo-gui
file: [optional] str, which file the detector shall write results into
freq: [optional] int, the period over which collected values shall be aggregated
An example of adding two actuated traffic lights to our system is as follows. The first specifies more custom control, while the second specifies minimal control.
End of explanation
additional_net_params = {"speed_limit": 35, "grid_array": grid_array,
"horizontal_lanes": 1, "vertical_lanes": 1,
"traffic_lights": True}
Explanation: 6. Controlling Your Traffic Lights via RL
This is where we switch from the grid.py experiment script to green_wave.py.
To control traffic lights via RL, no tl_logic element is necessary. This is because rllab is controlling all the parameters you were able to customize in the prior sections. Your additional_net_params should look something like this:
End of explanation
# keeps track of the last time the traffic lights in an intersection were allowed to change (the last time the lights were allowed to change from a red-green state to a red-yellow state.).
self.last_change = np.zeros((self.rows * self.cols, 1))
# keeps track of the direction of the intersection (the direction that is currently being allowed to flow. 0 indicates flow from top to bottom, and 1 indicates flow from left to right.)
self.direction = np.zeros((self.rows * self.cols, 1))
# value of 0 indicates that the intersection is in a red-yellow state. 1 indicates that the intersection is in a red-green state.
self.currently_yellow = np.zeros((self.rows * self.cols, 1))
Explanation: This will enable the program to recognize all nodes as traffic lights. The experiment then gives control to the environment; we are using TrafficLightGridEnv, which is an environment created for RL that applies RL-specified traffic light actions (e.g. change the state) via TraCI at each timestep.
This is all you need to run an RL experiment! It is worth taking a look at the TrafficLightGridEnv class to further understanding of the experiment internals. The rest of this tutorial is an optional walkthrough through the various components of TrafficLightGridEnv:
Keeping Track of Traffic Light State
Flow keeps track of the traffic light states (i.e. for each intersection, time elapsed since the last change, which direction traffic is flowing, and whether or not the traffic light is currently displaying yellow) in the following variables:
End of explanation
additional_env_params = {"target_velocity": 50, "switch_time": 3.0}
Explanation: The variable self.last_change indicates the last time the lights were allowed to change from a red-green state to a red-yellow state.
The variable self.direction indicates the direction of the intersection, i.e. the direction that is currently being allowed to flow. 0 indicates flow from top to bottom, and 1 indicates flow from left to right.
The variable self.currently_yellow with a value of 0 indicates that the traffic light is in a red-yellow state. 1 indicates that the traffic light is in a red-green state.
self.last_change is contingent on an instance variable self.min_switch_time. This is a variable that can be set in additional_env_params with the key name "switch_time". Setting switch_time enables more control over the RL experiment by preventing traffic lights from switching until switch_time timesteps have occurred. In practice, this can be used to prevent flickering.
End of explanation
@property
def action_space(self):
return Box(low=0, high=1, shape=(self.k.traffic_light.num_traffic_lights,),
dtype=np.float32)
Explanation: Action Space
The action space for RL-controlled traffic lights directly matches the number of traffic lights in the system. Each traffic light node corresponds to one action. The action space is thus defined as:
End of explanation
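# One plausible way to turn the continuous Box actions into per-intersection
# switch decisions is to threshold each entry. This is a sketch of the idea only;
# the exact mapping used by TrafficLightGridEnv may differ.
import numpy as np

rl_actions = np.array([0.1, 0.7, 0.4, 0.9, 0.2, 0.6])  # one entry per traffic light
switch_requests = rl_actions > 0.5
print(switch_requests)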
@property
def observation_space(self):
speed = Box(low=0, high=1, shape=(self.initial_vehicles.num_vehicles,),
dtype=np.float32)
dist_to_intersec = Box(low=0., high=np.inf,
shape=(self.initial_vehicles.num_vehicles,),
dtype=np.float32)
edge_num = Box(low=0., high=1, shape=(self.initial_vehicles.num_vehicles,),
dtype=np.float32)
traffic_lights = Box(low=0., high=np.inf,
shape=(3 * self.rows * self.cols,),
dtype=np.float32)
return Tuple((speed, dist_to_intersec, edge_num, traffic_lights))
Explanation: Observation Space
The existing observation space for our existing traffic lights experiments is designed to be a fully observable state space with these metrics in mind. For each vehicle, we want to know its velocity, its distance (in [unit]) from the next intersection, and the unique edge it is traveling on. For each traffic light, we want to know its current state (i.e. what direction it is flowing), when it last changed, and whether it was yellow. Recall that the traffic light states are encoded in self.min_switch_time.
End of explanation
def get_state(self):
# compute the normalizers
max_speed = self.k.scenario.max_speed()
grid_array = self.net_params.additional_params["grid_array"]
max_dist = max(grid_array["short_length"],
grid_array["long_length"],
grid_array["inner_length"])
# get the state arrays
speeds = [self.k.vehicle.get_speed(veh_id) / max_speed for veh_id in
self.k.vehicle.get_ids()]
dist_to_intersec = [self.get_distance_to_intersection(veh_id)/max_dist
for veh_id in self.k.vehicle.get_ids()]
edges = [self._convert_edge(self.k.vehicle.get_edge(veh_id)) / (
self.k.scenario.num_edges - 1) for veh_id in self.k.vehicle.get_ids()]
state = [speeds, dist_to_intersec, edges,
self.last_change.flatten().tolist()]
return np.array(state)
Explanation: State Space
The state space collects the information that the observation_space specifies. There are helper functions that exist in the TrafficLightGridEnv to construct the state space.
End of explanation |
7,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting the full vector-valued MNE solution
The source space that is used for the inverse computation defines a set of
dipoles, distributed across the cortex. When visualizing a source estimate, it
is sometimes useful to show the dipole directions in addition to their
estimated magnitude. This can be accomplished by computing a
Step1: Plot the source estimate
Step2: Plot the activation in the direction of maximal power for this data
Step3: The normal is very similar
Step4: You can also do this with a fixed-orientation inverse. It looks a lot like
the result above because the loose=0.2 orientation constraint keeps
sources close to fixed orientation | Python Code:
# Author: Marijn van Vliet <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
smoothing_steps = 7
# Read evoked data
meg_path = data_path / 'MEG' / 'sample'
fname_evoked = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
# Read inverse solution
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
# Apply inverse solution, set pick_ori='vector' to obtain a
# :class:`mne.VectorSourceEstimate` object
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
# Use peak getter to move visualization to the time point of the peak magnitude
_, peak_time = stc.magnitude().get_peak(hemi='lh')
Explanation: Plotting the full vector-valued MNE solution
The source space that is used for the inverse computation defines a set of
dipoles, distributed across the cortex. When visualizing a source estimate, it
is sometimes useful to show the dipole directions in addition to their
estimated magnitude. This can be accomplished by computing a
:class:mne.VectorSourceEstimate and plotting it with
:meth:stc.plot <mne.VectorSourceEstimate.plot>, which uses
:func:~mne.viz.plot_vector_source_estimates under the hood rather than
:func:~mne.viz.plot_source_estimates.
It can also be instructive to visualize the actual dipole/activation locations
in 3D space in a glass brain, as opposed to activations imposed on an inflated
surface (as typically done in :meth:mne.SourceEstimate.plot), as it allows
you to get a better sense of the underlying source geometry.
End of explanation
brain = stc.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
# You can save a brain movie with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10,
# interpolation='linear', time_viewer=True)
Explanation: Plot the source estimate:
End of explanation
stc_max, directions = stc.project('pca', src=inv['src'])
# These directions must by design be close to the normals because this
# inverse was computed with loose=0.2
print('Absolute cosine similarity between source normals and directions: '
f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}')
brain_max = stc_max.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Max power', smoothing_steps=smoothing_steps)
Explanation: Plot the activation in the direction of maximal power for this data:
End of explanation
brain_normal = stc.project('normal', inv['src'])[0].plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Normal', smoothing_steps=smoothing_steps)
Explanation: The normal is very similar:
End of explanation
fname_inv_fixed = (
meg_path / 'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
inv_fixed = read_inverse_operator(fname_inv_fixed)
stc_fixed = apply_inverse(
evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector')
brain_fixed = stc_fixed.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
Explanation: You can also do this with a fixed-orientation inverse. It looks a lot like
the result above because the loose=0.2 orientation constraint keeps
sources close to fixed orientation:
End of explanation |
7,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple CNN
We are going to define a simple Convolutional Network and train it from scratch on the dataset. The results of this model are going to be our benchmark.
We are going to use the Keras library with TensorFlow as a backend.
Common configuration
Step1: Check that we are using the GPU
Step2: Model
Model definition
Step3: Model architecture
We have the following model architecture
Step4: Keras callbacks
We are going to define two callbacks that are going to be called during training: EarlyStopping, to stop the training if it is not getting better, and a TensorBoard callback to log information to be used by TensorBoard.
Step5: Model Optimizer
Step6: Compile the model
Step7: Training
Train data generator
Step8: Validation data generator
Step9: Model fitting
Step10: Training plots
Step11: Evaluate the model
Step13: Test the model | Python Code:
%autosave 0
IMAGE_SIZE = (360,404) # The dimensions to which all images found will be resized.
BATCH_SIZE = 32
NUMBER_EPOCHS = 8
TENSORBOARD_DIRECTORY = "../logs/simple_model/tensorboard"
TRAIN_DIRECTORY = "../data/train/"
VALID_DIRECTORY = "../data/valid/"
TEST_DIRECTORY = "../data/test/"
NUMBER_TRAIN_SAMPLES = 17500
NUMBER_VALIDATION_SAMPLES = 5000
NUMBER_TEST_SAMPLES = 2500
WEIGHTS_DIRECTORY = "../weights/"
Explanation: Simple CNN
We are going to define a simple Convolutional Network and train it from scratch on the dataset. The results of this model are going to be our benchmark.
We are going to use the Keras library with TensorFlow as a backend.
Common configuration
End of explanation
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
get_available_gpus()
import tensorflow as tf
# Creates a graph.
with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
Explanation: Check that we are using the GPU:
End of explanation
from keras.models import Model
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers import Input, Dense, Flatten
inputs = Input(shape = (IMAGE_SIZE[0], IMAGE_SIZE[1], 3))
# First CNN Layer
x = Convolution2D(16, (3, 3),
activation='relu',
data_format="channels_last",
kernel_initializer="he_uniform")(inputs)
x = MaxPooling2D(pool_size=(3, 3),
strides=(2, 2),
data_format="channels_last")(x)
# Second CNN Layer
x = Convolution2D(32, (3, 3),
activation='relu',
data_format="channels_last",
kernel_initializer="he_uniform")(x)
x = MaxPooling2D(pool_size=(2, 2),
strides=(2, 2),
data_format="channels_last")(x)
# Third CNN Layer
x = Convolution2D(64, (3, 3),
activation='relu',
data_format="channels_last",
kernel_initializer="he_uniform")(x)
x = MaxPooling2D(pool_size=(2, 2),
strides=(2, 2),
data_format="channels_last")(x)
# Third CNN Layer
x = Convolution2D(128, (3, 3),
activation='relu',
data_format="channels_last",
kernel_initializer="he_uniform")(x)
x = MaxPooling2D(pool_size=(2, 2),
strides=(2, 2),
data_format="channels_last")(x)
x = Flatten()(x)
x = Dense(96, activation='relu',kernel_initializer="he_uniform")(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
Explanation: Model
Model definition
End of explanation
model.summary()
Explanation: Model architecture
We have the following model architecture:
End of explanation
from keras.callbacks import EarlyStopping
from keras.callbacks import TensorBoard
# Early stop in case of getting worse
early_stop = EarlyStopping(monitor = 'val_loss', patience = 3, verbose = 0)
#TensorBoard
# run tensorboard with tensorboard --logdir=/full_path_to_your_logs
#tensorboard_path = TENSORBOARD_DIRECTORY
#tensorboard_logger = TensorBoard(log_dir=tensorboard_path, histogram_freq=0, write_graph=False, write_images=False)
#print('Logging basic info to be used by TensorBoard to {}. To see this log run:'.format(tensorboard_path))
#print('tensorboard --logdir={}'.format(tensorboard_path))
callbacks = [early_stop]#, tensorboard_logger]
Explanation: Keras callbacks
We are going to define two callbacks that are going to be called during training: EarlyStopping, to stop the training if it is not getting better, and a TensorBoard callback to log information to be used by TensorBoard.
End of explanation
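# --- Added sketch (not part of the original notebook): enabling the TensorBoard
# logger that is commented out above would look roughly like this.  It is kept
# under different variable names so the training below is unaffected; view the
# logs with `tensorboard --logdir=<TENSORBOARD_DIRECTORY>`.
tensorboard_logger_example = TensorBoard(log_dir=TENSORBOARD_DIRECTORY, histogram_freq=0,
                                         write_graph=False, write_images=False)
callbacks_with_tensorboard = [early_stop, tensorboard_logger_example]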
OPTIMIZER_LEARNING_RATE = 1e-2
OPTIMIZER_DECAY = 1e-4 # LearningRate = LearningRate * 1/(1 + decay * epoch)
OPTIMIZER_MOMENTUM = 0.89
OPTIMIZER_NESTEROV_ENABLED = False
from keras.optimizers import SGD
optimizer = SGD(lr=OPTIMIZER_LEARNING_RATE,
decay=OPTIMIZER_DECAY,
momentum=OPTIMIZER_MOMENTUM,
nesterov=OPTIMIZER_NESTEROV_ENABLED)
Explanation: Model Optimizer
End of explanation
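# --- Added sketch (not part of the original notebook): a quick look at how the
# decay setting shrinks the learning rate.  Keras' (legacy) SGD applies
# lr / (1 + decay * iterations) per *update* (mini-batch), so the "per epoch"
# comment next to OPTIMIZER_DECAY above is only an approximation.
for iteration in (0, 100, 1000, 10000):
    print(iteration, OPTIMIZER_LEARNING_RATE / (1 + OPTIMIZER_DECAY * iteration))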
model.compile(loss='categorical_crossentropy',
optimizer=optimizer, \
metrics=["accuracy"])
Explanation: Compile the model
End of explanation
from keras.preprocessing.image import ImageDataGenerator
## train generator with shuffle but no data augmentation
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
train_batch_generator = train_datagen.flow_from_directory(TRAIN_DIRECTORY,
target_size = IMAGE_SIZE,
class_mode = 'categorical',
batch_size = BATCH_SIZE)
Explanation: Training
Train data generator
End of explanation
from keras.preprocessing.image import ImageDataGenerator
## train generator with shuffle but no data augmentation
validation_datagen = ImageDataGenerator(rescale = 1./255)
valid_batch_generator = validation_datagen.flow_from_directory(VALID_DIRECTORY,
target_size = IMAGE_SIZE,
class_mode = 'categorical',
batch_size = BATCH_SIZE)
Explanation: Validation data generator
End of explanation
# fine-tune the model
hist = model.fit_generator(
train_batch_generator,
steps_per_epoch=NUMBER_TRAIN_SAMPLES/BATCH_SIZE,
epochs=NUMBER_EPOCHS, # epochs: Integer, total number of iterations on the data.
validation_data=valid_batch_generator,
validation_steps=NUMBER_VALIDATION_SAMPLES/BATCH_SIZE,
callbacks=callbacks,
verbose=2)
model_save_path = WEIGHTS_DIRECTORY + 'simple_cnn_weights.h5'
print('Saving TOP (FCN) weigths to ', model_save_path)
model.save_weights(model_save_path, overwrite=True)
Explanation: Model fitting
End of explanation
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(hist.history['acc']); plt.plot(hist.history['val_acc']);
plt.title('model accuracy'); plt.ylabel('accuracy');
plt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');
# summarize history for loss
plt.subplot(1, 2, 2)
plt.plot(hist.history['loss']); plt.plot(hist.history['val_loss']);
plt.title('model loss'); plt.ylabel('loss');
plt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');
plt.show()
Explanation: Training plots
End of explanation
############
# load weights
############
model_save_path = WEIGHTS_DIRECTORY + 'simple_cnn_weights.h5'
print("Loading weights from: {}".format(model_save_path))
model.load_weights(model_save_path)
from keras.preprocessing.image import ImageDataGenerator
## train generator with shuffle but no data augmentation
validation_datagen = ImageDataGenerator(rescale = 1./255)
test_batch_generator = validation_datagen.flow_from_directory(TEST_DIRECTORY,
target_size = IMAGE_SIZE,
class_mode = 'categorical',
batch_size = BATCH_SIZE)
model.evaluate_generator(test_batch_generator,
steps = NUMBER_TEST_SAMPLES/BATCH_SIZE)
Explanation: Evaluate the model
End of explanation
from keras.preprocessing.image import ImageDataGenerator
## train generator with shuffle but no data augmentation
test_datagen = ImageDataGenerator(rescale = 1./255)
test_batch_generator = test_datagen.flow_from_directory(
TEST_DIRECTORY,
target_size = IMAGE_SIZE,
batch_size=1,
shuffle = False, # Important !!!
classes = None,
class_mode = None)
test_batch_generator.classes.shape
import pickle
test_classes_file = open("../results/simple_cnn_true.pickle", "wb" )
pickle.dump( test_batch_generator.classes, test_classes_file )
true_values = test_batch_generator.classes
len(test_batch_generator.filenames)
test_filenames = open("../results/simple_cnn_filenames.pickle", "wb" )
pickle.dump( test_batch_generator.filenames, test_filenames )
import numpy as np
pred = []
for i in range(int(NUMBER_TEST_SAMPLES)):
X = next(test_batch_generator) # get the next batch
#print(X.shape)
pred1 = model.predict(X, batch_size = 1, verbose = 0) #predict on a batch
pred = pred + pred1.tolist()
probabilities = np.array(pred)
print(probabilities.shape)
assert probabilities.shape == (NUMBER_TEST_SAMPLES, 2)
test_filenames = open("../results/simple_cnn_probabilities.pickle", "wb" )
pickle.dump( probabilities, test_filenames )
probabilities[0]
predictions=np.argmax(probabilities,1)
test_filenames = open("../results/simple_cnn_predictions.pickle", "wb" )
pickle.dump( predictions, test_filenames )
predictions[0]
import matplotlib.pyplot as plt
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
import itertools
from sklearn.metrics import confusion_matrix
class_names = ['cat', 'dog']
cnf_matrix = confusion_matrix(true_values, predictions)
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix')
plt.show()
from numpy.random import random, permutation
#1. A few correct labels at random
correct = np.where(predictions==true_values)[0]
idx = permutation(correct)[:4]
#plots_idx(idx, probs[idx])
len(correct)
type(int(idx[0]))
from scipy import ndimage
from PIL import Image
import matplotlib.pyplot as plt
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[0]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[0]])
plt.show()
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[1]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[1]])
plt.show()
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[2]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[2]])
plt.show()
from numpy.random import random, permutation
#1. A few correct labels at random
correct = np.where(predictions != true_values)[0]
idx = permutation(correct)[:4]
#plots_idx(idx, probs[idx])
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[0]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[0]])
plt.show()
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[1]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[1]])
plt.show()
im = ndimage.imread("../data/test/" + test_batch_generator.filenames[idx[2]])
image = Image.fromarray(im)
plt.imshow(image)
plt.title(probabilities[idx[2]])
plt.show()
Explanation: Test the model
End of explanation |
7,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 5
The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
Step2: Download the data from the source website if necessary.
Step4: Read the data into a string.
Step5: Build the dictionary and replace rare words with UNK token.
Step6: Let's display the internal variables to better understand their structure
Step7: Function to generate a training batch for the skip-gram model.
Step8: Note
Step9: Train a skip-gram model.
Step10: This is what an embedding looks like
Step11: All the values are abstract; there is no practical meaning to them. Moreover, the final embeddings are normalized, as you can see here
Step12: Problem
An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.
For the continuous bag of words, the train inputs are slightly different from the skip-gram
Step13: Note the instruction change on the loss function, with reduce_sum to sum the word vectors in the context | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
Explanation: Deep Learning
Assignment 5
The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
End of explanation
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
Explanation: Download the data from the source website if necessary.
End of explanation
def read_data(filename):
Extract the first file enclosed in a zip file as a list of words
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
Explanation: Read the data into a string.
End of explanation
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
Explanation: Build the dictionary and replace rare words with UNK token.
End of explanation
print(data[:10])
print(count[:10])
print(dictionary.items()[:10])
print(reverse_dictionary.items()[:10])
Explanation: Let's display the internal variables to better understand their structure:
End of explanation
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:32]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(16)])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 1
batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(16)])
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
print(batch)
print(labels)
Explanation: Note: the labels are a sliding, randomly chosen selection of the words surrounding each word of the batch.
It is not obvious from the output above, but all the data are based on indices, not on the words directly.
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
Explanation: Train a skip-gram model.
End of explanation
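# --- Added sketch (not part of the original assignment): querying the trained,
# normalized embeddings for the nearest neighbours of a given word.  It reuses
# `final_embeddings`, `dictionary` and `reverse_dictionary` built above.
def nearest_words(word, k=8):
    idx = dictionary[word]
    sims = np.dot(final_embeddings, final_embeddings[idx])  # cosine similarity (rows are unit norm)
    order = (-sims).argsort()[1:k + 1]                       # skip the word itself
    return [reverse_dictionary[i] for i in order]
print(nearest_words('three'))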
print(final_embeddings[0])
Explanation: This is what an embedding looks like:
End of explanation
print(np.sum(np.square(final_embeddings[0])))
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
Explanation: All the values are abstract; there is no practical meaning to them. Moreover, the final embeddings are normalized, as you can see here:
End of explanation
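# --- Added sketch (not part of the original assignment): the normalization claim
# above can be checked for every embedding row at once.
print(np.allclose(np.linalg.norm(final_embeddings, axis=1), 1.0))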
data_index = 0
def generate_batch(batch_size, bag_window):
global data_index
span = 2 * bag_window + 1 # [ bag_window target bag_window ]
batch = np.ndarray(shape=(batch_size, span - 1), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size):
# just for testing
buffer_list = list(buffer)
labels[i, 0] = buffer_list.pop(bag_window)
batch[i] = buffer_list
# iterate to the next buffer
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:16]])
for bag_window in [1, 2]:
data_index = 0
batch, labels = generate_batch(batch_size=4, bag_window=bag_window)
print('\nwith bag_window = %d:' % (bag_window))
print(' batch:', [[reverse_dictionary[w] for w in bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(4)])
Explanation: Problem
An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.
For the continuous bag of words, the train inputs are slightly different from the skip-gram:
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
###skip_window = 1 # How many words to consider left and right.
###num_skips = 2 # How many times to reuse an input to generate a label.
bag_window = 2 # How many words to consider left and right.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size, bag_window * 2])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embeds = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, tf.reduce_sum(embeds, 1),
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, bag_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
Explanation: Note the instruction change on the loss function, with reduce_sum to sum the word vectors in the context:
End of explanation |
7,250 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals: | Problem:
import pandas as pd
import numpy as np
np.random.seed(10)
df = pd.DataFrame(np.random.randint(0, 20, (10, 10)).astype(float), columns=["c%d"%d for d in range(10)])
df.where(np.random.randint(0,2, df.shape).astype(bool), np.nan, inplace=True)
def g(df):
cols = (df.columns[df.iloc[0,:].fillna('Nan') != df.iloc[8,:].fillna('Nan')]).values
result = []
for col in cols:
result.append((df.loc[0, col], df.loc[8, col]))
return result
result = g(df.copy()) |
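# --- Added sketch (not part of the original entry): a quick reminder of the NaN
# behaviour mentioned in the problem statement, using the df built above.
print((df == df).values.all())   # False: element-wise comparison never matches NaNs
print(df.equals(df.copy()))      # True: DataFrame.equals treats aligned NaNs as equal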
7,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
Find the indices of the local maxima in a sequence.
peaks = []
for i in range(len(a)):
if i==0:
if a[i]>a[i+1]:
peaks.append(i)
elif i!=0 and i!=len(a)-1:
if a[i]>a[i-1] and a[i]>a[i+1]:
peaks.append(i)
elif i==len(a)-1:
if a[i]>a[i-1]:
peaks.append(i)
return peaks
a = [2,0,1,0,2,0,1]
p1 = find_peaks(a)
print(p1)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
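# --- Added sketch (not part of the original exercise): an equivalent vectorized
# find_peaks, padding both ends with -inf so endpoint maxima are still detected.
def find_peaks_vectorized(a):
    a = np.asarray(a, dtype=float)
    padded = np.concatenate(([-np.inf], a, [-np.inf]))
    mask = (padded[1:-1] > padded[:-2]) & (padded[1:-1] > padded[2:])
    return np.where(mask)[0]
assert np.allclose(find_peaks_vectorized([2, 0, 1, 0, 2, 0, 1]), [0, 2, 4, 6])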
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
pi_list = []
for i in range(len(pi_digits_str)):
pi_list.append(int(pi_digits_str[i]))
pi_array = np.array(pi_list)
pi_peaks = find_peaks(pi_array)
pi_diff = np.diff(pi_peaks)
max(pi_diff)
list(np.arange(2,11))
g = plt.figure(figsize=(6,6))
plt.hist(pi_diff, bins=max(pi_diff)+1, range=(.5,max(pi_diff)+1.5))
plt.xlim(1.5,12.5)
plt.xticks(np.arange(2,13))
plt.xlabel('Distance Between Peaks')
plt.ylabel('Count')
plt.title('Distance Between Maxima for the First 10,000 Digits of Pi');
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation |
7,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> ILI286 - Computación Científica II </h1>
<h2> Eigenvalues and Eigenvectors </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version
Step1: Test matrix and vector
Step3: <div id='pi' />
Power Iteration
Below is the code for the classic Power Iteration algorithm. Try changing the matrices and the parameters of the algorithm.
Step5: <div id='invpi' />
Inverse Power Iteration
One complication of the previous algorithm is that it only finds the dominant eigenvalue and eigenvector. So how do we find the rest? To answer this question we need to examine two important properties
Step7: <div id='rq' />
Rayleigh Quotient Iteration
As analyzed above, PI and Inverse PI have linear convergence with convergence rate $S \approx \frac{\lambda_2}{\lambda_1}$. We also know that Inverse PI converges to the eigenvalue closest to the shift, and that the closer the shift is to that eigenvalue, the faster the convergence.
The idea behind RQI is the following
Step8: Questions
Step9: <div id='sp' />
$\texttt{SciPy}$ Eigenvalue
The scipy library implements algorithms to compute eigenvalues and eigenvectors. The available options are | Python Code:
import numpy as np
from scipy import linalg
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: <center>
<h1> ILI286 - Computación Científica II </h1>
<h2> Eigenvalues and Eigenvectors </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.14</h2>
</center>
Table of Contents
Introduction
Theoretical Background
Algorithms and Implementations
Power Iteration
Inverse Power Iteration
Rayleigh Quotient Iteration
SciPy Eigenvalue
Application Problem
Acknowledgements
<div id='intro' />
Introduction
Determining the eigenvalues and eigenvectors of a matrix provides a great deal of information about its characteristics and properties, and it also has many practical applications, such as: convergence analysis of dynamical systems, PCA (Principal Component Analysis), spectral analysis, Eigenfaces, etc.
However, computing eigenvalues and eigenvectors is not a simple problem. As you should have studied in previous courses, there is a direct method based on computing the roots of the characteristic polynomial $p(x)$. But this problem turns out to be ill-conditioned, that is, small variations in the original matrix $A$ produce large variations in the computed eigenvalues and eigenvectors (see Wilkinson's polynomial in the textbook).
In this notebook we study an iterative method known as Power Iteration (and its extensions), which, much like a fixed-point iteration, allows us to obtain the eigenvalues/eigenvectors numerically.
<div id='teo' />
Theoretical Background
The motivation behind PI (Power Iteration) is that repeated matrix multiplication tends to "steer" vectors towards the dominant eigenvector (the one whose eigenvalue has the largest magnitude).
The algorithm is as follows:
python
x = 'Initial guess'
for i in range n_iter:
u = x / ||x|| #normalization step
x = dot(A,u) #power iteration step
lamb = dot(u, dot(A, u)) #Rayleigh quotient
return x / ||x||
where a normalization step is added to keep the magnitude of the vector from growing without bound, and the associated eigenvalue is obtained through the Rayleigh quotient:
$$ \lambda = \frac{\mathbf{x}^*\,A\,\mathbf{x}}{\mathbf{x}^*\,\mathbf{x}} $$
To understand why this convergence happens, consider a matrix $A \in \mathbb{R}^{m \times m}$ with real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ such that $|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_m|$, and such that the eigenvectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m\}$ form a basis of $\mathbb{R}^m$. Let $x_0$ be the initial guess; it can be expressed as a linear combination of the eigenvectors $\mathbf{v}_k$:
\begin{align}
A x_0 &= c_1 \,A\,\mathbf{v}_1 + \cdots + c_m \,A\,\mathbf{v}_m = c_1 \lambda_1 \mathbf{v}_1 + \cdots + c_m \lambda_m \mathbf{v}_m \\
A^2 x_0 &= c_1 \lambda_1 \,A\,\mathbf{v}_1 + \cdots + c_m \lambda_m \,A\,\mathbf{v}_m = c_1 \lambda_1^2 \mathbf{v}_1 + \cdots + c_m \lambda_m^2 \mathbf{v}_m \\
\vdots &= \vdots \\
A^k x_0 &= c_1 \lambda_1^k \mathbf{v}_1 + \cdots + c_m \lambda_m^k \mathbf{v}_m
\end{align}
Factoring out $\lambda_1^k$ from the last expression gives:
$$ \frac{A^k x_0}{\lambda_1^k} = c_1 \mathbf{v}_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k \mathbf{v}_2 + \cdots + c_m \left(\frac{\lambda_m}{\lambda_1}\right)^k \mathbf{v}_m$$
Since $|\lambda_1|>|\lambda_i| \ \ \forall i \neq 1$, as $k \rightarrow \infty$ every term except the first tends to zero, with convergence rate $S \leq |\lambda_2/\lambda_1|$. The result is a vector that is a multiple of the dominant eigenvector.
Note: For further details see: Numerical Analysis, Timothy Sauer, Chapter 12: Eigenvalues and Singular Values
<div id='alg' />
Algorithms and Implementations
Libraries used during the class
End of explanation
A = np.array([[1, 0.5],[0.5, 1]])
x = np.array([1.,0.])
A = np.array([[1., 0.5,-0.1],[0.5, 1.,10.0],[2.,3.,5.]])
x = np.array([1.,0.,0.])
print("A =\n",A)
print("x =",x)
Explanation: Test matrix and vector
End of explanation
def power_iteration(A, x, k, verbose=False):
Program 12.1 Power iteration
Computes dominant eigenvector of square matrix
Input: matrix A, initial (nonzero) vector x, number of steps k
Output: dominant eigenvalue lam, eigenvector u
if verbose: print("Power Iteration Method\n%s"%('='*80))
for j in range(k):
u = x/np.linalg.norm(x)
x = np.dot(A, u)
lam = np.dot(u, x) #not really necessary to compute it at each iteration
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T)))
u = x/np.linalg.norm(x)
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T)))
return (lam, u)
# Testing algorithm
lam, u = power_iteration(A, x, 20, verbose=True)
print("lambda = {0}".format(lam))
print("u (dominant eigenvector) = {0}".format(u))
Explanation: <div id='pi' />
Power Iteration
Below is the code for the classic Power Iteration algorithm. Try changing the matrices and the parameters of the algorithm.
End of explanation
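# --- Added sketch (not part of the original notebook): cross-check the dominant
# eigenvalue returned by power_iteration against numpy's dense solver.
lam_pi, _ = power_iteration(A, x, 200)
lam_all = np.linalg.eigvals(A)
print(lam_pi, lam_all[np.argmax(np.abs(lam_all))])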
def inverse_power_iteration(A, x, s, k, verbose=False):
Program 12.2 Inverse Power iteration
Computes eigenvector of square matrix nearest to input s
Input: matrix A, initial (nonzero) vector x, shift s, number of steps k
Output: dominant eigenvalue lam, eigenvector of inv(A-sI)
if verbose: print("Inverse Power Iteration Method\n%s"%('='*80))
As = A - s*np.eye(*A.shape)
for j in range(k):
u = x/np.linalg.norm(x)
x = np.linalg.solve(As, u) # Critical line!
lam = np.dot(u.T, x)
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,1./lam+s,str(u.T)))
u = x/np.linalg.norm(x)
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,1./lam+s,str(u.T)))
return (1./lam+s, u)
# Testing algorithm
lam, u = inverse_power_iteration(A, x, s=1./4, k=10, verbose=True)
print("lambda = {0}".format(lam))
print("v = {0}".format(u))
Explanation: <div id='invpi' />
Inverse Power Iteration
One complication of the previous algorithm is that it only finds the dominant eigenvalue and eigenvector. So how do we find the rest? To answer this question we need to examine two important properties:
The eigenvalues of the inverse matrix $A^{-1}$ are the reciprocals of the eigenvalues of $A$, that is: $\lambda_1^{-1}, \lambda_2^{-1}, \ldots , \lambda_m^{-1}$. The eigenvectors remain unchanged.
The eigenvalues of the shifted matrix $A - sI$ are: $\lambda_1-s, \lambda_2-s, \ldots, \lambda_m-s$. Likewise, the eigenvectors remain unchanged.
Exercise: Prove these properties!
The idea is then to apply a shift $\widetilde{s}$ close to some eigenvalue $\lambda_k$, and run PI on $(A - \widetilde{s}I)^{-1}$. Then:
$$ |\lambda_k - \widetilde{s}| < |\lambda_i - \widetilde{s}| \leftrightarrow \bigg| \frac{1}{\lambda_k - \widetilde{s}} \bigg| > \bigg| \frac{1}{\lambda_i - \widetilde{s}} \bigg| \ \ \forall i \neq k \ $$
so $\frac{1}{\lambda_k - \widetilde{s}}$ is the dominant eigenvalue of $(A - \widetilde{s}\,I)^{-1}$, whose eigenvector is $\mathbf{v}_k$. Note that, by the properties stated above, the eigenvectors remain unchanged.
This idea is reflected in the algorithm implemented below:
End of explanation
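# --- Added sketch (not part of the original notebook): numerical check of the two
# properties stated above (reciprocal eigenvalues of inv(A), shifted eigenvalues
# of A - sI), using numpy on the test matrix A.
lam_A = np.linalg.eigvals(A)
print(np.sort(1.0 / lam_A), np.sort(np.linalg.eigvals(np.linalg.inv(A))))
shift_s = 0.7
print(np.sort(lam_A - shift_s), np.sort(np.linalg.eigvals(A - shift_s * np.eye(A.shape[0]))))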
def rqi(A, x, k, verbose=False):
Program 12.3 Rayleigh Quotient Iteration
Input: matrix A, initial (nonzero) vector x, number of steps k
Output: eigenvalue lam, eigenvector of inv(A-sI)
if verbose: print("Rayleigh Quotient Iteration\n%s"%('='*80))
for j in range(k):
u = x/np.linalg.norm(x)
lam = np.dot(u.T, np.dot(A, u))
try:
x = np.linalg.solve(A -lam*np.eye(*A.shape), u)
        except np.linalg.LinAlgError:
break
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T)))
u = x/np.linalg.norm(x)
lam = float(np.dot(u.T, np.dot(A, u)))
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T)))
return (lam, u)
Explanation: <div id='rq' />
Rayleigh Quotient Iteration
As analyzed above, PI and Inverse PI have linear convergence with convergence rate $S \approx \frac{\lambda_2}{\lambda_1}$. We also know that Inverse PI converges to the eigenvalue closest to the shift, and that the closer the shift is to that eigenvalue, the faster the convergence.
The idea behind RQI is the following: if at every iteration we have an approximation of the eigenvalue we are looking for, we can use that approximation as the shift $s$; since the shift will be closer to the eigenvalue, convergence is accelerated.
That approximate value is obtained with the Rayleigh quotient, so the shift is updated with this quotient at every iteration. As a result we get the following trade-off:
Convergence becomes quadratic (in general) and cubic for symmetric matrices.
However, we pay the cost of having to solve a different linear system at every iteration.
An implementation of RQI is presented below:
End of explanation
# Testing algorithm
lam, v = rqi(A, x, k=2)
print("lambda = {0}".format(lam))
print("v = {0}".format(v))
Explanation: Questions:
1. Why are the try and except on lines 11 and 13 necessary? What does it mean when the system cannot be solved?
2. As you can observe, RQI does not take a shift as a parameter. Which eigenvalue/eigenvector will it converge to? How can we force/guide it to converge to a different eigenvalue/eigenvector?
End of explanation
# Full matrices
from scipy import linalg as LA
N = 3
Aux = np.random.rand(N,N)
A = Aux + Aux.T # symmetric, so we'll deal with real eigs.
print(LA.eigvals(A)) # Only the eigenvalues, A not necessarily symmetric
print("*"*80)
print(LA.eigvalsh(A)) # Only the eigenvalues, A symmetric
print("*"*80)
print(LA.eig(A)) # All the eigenvalues and eigenvectors, A not necessarily symmetric
print("*"*80)
print(LA.eigh(A)) # All the eigenvalues and eigenvectors, A symmetric (faster)
print("*"*80)
lambdas, V = LA.eigh(A) # All the eigenvalues and eigenvectors, A symmetric (faster)
l1 = lambdas[0]
v1 = V[:,0]
print(l1)
print(v1)
print(np.dot(A, v1))
print(l1*v1)
Explanation: <div id='sp' />
$\texttt{SciPy}$ Eigenvalue
The scipy library implements algorithms to compute eigenvalues and eigenvectors. The available options are:
In the scipy.linalg library: eigvals/eigvalsh/eigvals_banded, eig/eigh/eig_banded,
In the scipy.sparse.linalg library: eigen, eigs, eigsh.
In general it is always better to use the scipy functions rather than the numpy ones. The numpy library does an excellent job of providing numerical array types, but it contains only some numerical algorithms, and not necessarily the fastest ones.
Below we show how to use some of these functions.
End of explanation |
7,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3>Current School Panda</h3>
Working with directory school data
Creative Commons in all schools
This script uses a csv file from Creative Commons New Zealand and a csv file from the Ministry of Education.
The CCNZ csv file contains the names of the schools that have a CC licence and the type of licence.
The Ministry of Education csv file contains every public school in New Zealand and information about them.
Standards for website addresses - if the school name ends with 'school' then cut it from the name before adding the .school.nz suffix.
eg Horowhenua College
horowhenua.college.nz
not
horowhenuacollege.school.nz
Auckland Girls Grammar School
aucklandgirlsgrammar.school.nz
not
aucklandgirlsgrammarschool.school.nz
Every school has its own domain name and a Linux server hosting the site. Private/public keys. Static site, git repo. Nikola blog.
What made you choose that particular Creative Commons licence?
I like the CC
Step1: Compare the schools on List of CC schools with list of all public/private schools.
Why shouldn't the default licence for all public schools be Creative Commons BY?
Step2: Cycle through only the first 89 values - stop when reaching
Step3: Create a RESTful API of schools that have CC and those that don't
Merge two dicts together.
Both are
{name of school | Python Code:
crcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=5, index_col=0, usecols=[0,1,2])
Explanation: <h3>Current School Panda</h3>
Working with directory school data
Creative Commons in all schools
This script uses a csv file from Creative Commons New Zealand and a csv file from the Ministry of Education.
The CCNZ csv file contains the names of the schools that have a CC licence and the type of licence.
The Ministry of Education csv file contains every public school in New Zealand and information about them.
Standards for website addresses - if the school name ends with 'school' then cut it from the name before adding the .school.nz suffix.
eg Horowhenua College
horowhenua.college.nz
not
horowhenuacollege.school.nz
Auckland Girls Grammar School
aucklandgirlsgrammar.school.nz
not
aucklandgirlsgrammarschool.school.nz
Every school has its own domain name and a Linux server hosting the site. Private/public keys. Static site, git repo. Nikola blog.
What made you choose that particular Creative Commons licence?
I like the CC:BY licence because it offers the most freedom to people.
I am not a fan of licenses that restrict commercial use. I believe everyone should be able to do what they like with my work, with minimal interference.
If I could I would remove non-commercial licenses.
In the early days of my art blogging I would license under cc nc. This was wrong and I later changed this to a cc by licence.
With my photography I once had a photo I had taken published in the newspaper. It made the front page. I was offered money and asked for permission. I was fine with it of course - the license allows this. At the bottom of the photo it read: PHOTO: William Mckee.
Perfect.
The only thing I ask is they attribute.
I like the idea of ShareAlike, but at the end of the day I really don't care and would hate to chase down people who license it wrongly. Sure, I don't like it that people could take my stuff and make it not open. I think everything should be open and free.
My art site - artcontrol.me is currently down but when it was up I licenced the site under a cc:by. Elements of the site are still up - such as my YouTube channel.
I attended art school in Wellington - The Learning Connexion. My focus was on drawing and painting. I taught myself programming on the bus to art school. Even when I was drawing on the easel I would be 'drawing' python code. During breaks I would often get my laptop out.
I volunteered at Whaihanga Early Learning Centre. I spent the majority of my time there in the art area doing collaborative works with others. Oil pastel, coloured pencil and pencil were my mediums of choice. Sometimes I would use paint, but it's quite messy.
Copyright shouldn't be default. Apply and pay if you want copyright. CC license by default. That will sort the world.
End of explanation
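# --- Added sketch (not part of the original notebook): the website-naming rule
# described above, written out as a small, purely illustrative helper (the
# notebook itself never builds domain names).
def school_domain(name):
    slug = name.lower().replace(' ', '')
    if slug.endswith('school'):
        slug = slug[:-len('school')]
    return slug + '.school.nz'
print(school_domain('Auckland Girls Grammar School'))  # -> aucklandgirlsgrammar.school.nz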
#crcom
aqcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=6, usecols=[0])
aqjsz = aqcom.to_json()
dicthol = json.loads(aqjsz)
dschoz = dicthol['School']
#dicthol
dscv = dschoz.values()
ccschool = list()
for ds in range(87):
#print(dschoz[str(ds)])
ccschool.append((dschoz[str(ds)]))
schccd = dict()
scda = dict({'cc' : True})
sanoc = dict({'cc' : False})
#schccd.update({ccs : scda})
for ccs in ccschool:
#These schools have a cc license. Update the list of all schools with cc and value = true.
#Focus on schools that don't have cc license.
#Filter schools in area that don't have cc license.
#print (ccs)
schccd.update({ccs : scda})
ccschz = list()
for dsc in range(87):
#print (dschoz[str(dsc)])
ccschz.append((dschoz[str(dsc)]))
#Append in names of schools that are missing from this dict.
#Something like
#schccd.update{school that doesnt have cc : {'cc' : False}}
#schccd
Explanation: Compare the schools on List of CC schools with list of all public/private schools.
Why shouldn't the default licence for all public schools be Creative Commons BY?
End of explanation
noclist = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[1])
webskol = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[6])
websjs = webskol.to_json()
dictscha = json.loads(websjs)
numsweb = dictscha['School website']
lenmuns = len(numsweb)
#for nuran in range(lenmuns):
# print (numsweb[str(nuran)])
#noclist.values[0:10]
aqjaq = noclist.to_json()
jsaqq = json.loads(aqjaq)
najsa = jsaqq['Name']
alsl = len(najsa)
allschlis = list()
for alr in range(alsl):
allschlis.append(najsa[str(alr)])
#allschlis
newlis = list(set(allschlis) - set(ccschool))
empd = dict()
Explanation: Cycle through only the first 89 values - stop when reaching the row that reads: "These are schools that have expressed an interest in CC, and may have a policy in progress."
New spreadsheet for schools in progress of adopting a CC license. Where are they up to? What are the next steps?
Why are schools using a license that isn't CC BY? They really should be using the same license. CC NC is unacceptable. SA would be OK, but the majority of schools already have CC BY, so it is best to go with what is common so you don't have conflicts of licenses.
End of explanation
sstru = json.dumps(schccd)
for newl in newlis:
#print (newl)
empd.update({newl : sanoc})
empdum = json.dumps(empd)
trufal = empd.copy()
trufal.update(schccd)
trfaj = json.dumps(trufal)
savjfin = open('/home/wcmckee/ccschool/index.json', 'w')
savjfin.write(trfaj)
savjfin.close()
#savtru = open('/home/wcmckee/ccschool/cctru.json', 'w')
#savtru.write(sstru)
#savtru.close()
#for naj in najsa.values():
#print (naj)
# for schk in schccd.keys():
#print(schk)
# allschlis.append(schk)
#for i in ccschz[:]:
# if i in allschlis:
# ccschz.remove(i)
# allschlis.remove(i)
#Cycle though some schools rather than everything.
#Cycle though all schools and find schools that have cc
#for naj in range(2543):
#print(najsa[str(naj)])
# for schk in schccd.keys():
# if schk in (najsa[str(naj)]):
#Remove these schools from the list
# print (schk)
Explanation: Create a RESTful API of schools that have CC and those that don't
Merge two dicts together.
Both are of the form
{name of school : {'cc' : True/False}}
End of explanation |
7,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
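A purely illustrative sketch of a completed call is shown below; the name and e-mail address are made-up placeholders, not actual document authors.
# Hypothetical values for illustration only -- replace with the real author details.
DOC.set_author("Jane Doe", "jane.doe@example.org")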
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
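For illustration only, the line below shows how one of the valid choices listed in the cell above would be recorded; the specific choice is hypothetical and not a statement about this model.
# Illustrative only -- pick the coupling framework that actually applies to the model.
DOC.set_value("OASIS3-MCT")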
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
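Because this property is a BOOLEAN, the value is passed unquoted, as the comment DOC.set_value(value) in the cell above suggests. A hypothetical example, not a statement about this model:
# Illustrative only -- use True or False according to the model's actual flux coupling.
DOC.set_value(False)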
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
7,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility space
* Previous
Step1: Import section specific modules
Step2: 4.5.2 $uv$ coverage
Step3: From the list above, you can select different configurations corresponding to real instrumental layouts.
However, if you want to specify the relative positions of the antennas in an $ENU$ reference frame yourself, you can edit the following block and set the variable "custom" to "1".
Step4: Let's plot the distribution of the antennas from the selected (or customized) interferometer
Step5: <a id="fig
Step6: 4.5.2.1.2 The snapshot $\boldsymbol{uv}$ coverage
Step7: <a id="vis
Step8: <a id="fig | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: <a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility space
* Previous: 4.5.1 UV Coverage: UV tracks
* Next: 4.6 The Fourier Approximation & the van Cittert-Zernike theorem
Import standard modules:
End of explanation
from IPython.display import display
from ipywidgets import *
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
config = widgets.Dropdown(
options={'VLAa':'configs/vlaa.enu.txt',
'VLAb':'configs/vlab.enu.txt',
'VLAc':'configs/vlac.enu.txt',
'VLAd':'configs/vlad.enu.txt',
'WSRT':'configs/wsrt.enu.txt',
'kat7':'configs/kat-7.enu.txt',
'meerkat':'configs/meerkat.enu.txt'},
value="configs/vlaa.enu.txt",
Description="Antennas:")
display(config)
Explanation: 4.5.2 $uv$ coverage : Improving your $uv$ coverage
In $\S$ 4.5.1 ➞, we have explored the ways in which the visibility function is sampled. Depending on the interferometer's location (in latitude), the projected baseline w.r.t. the observed source, and the time and frequency of our observation, the $uv$ plane is sampled along tracks which can be derived from the projected baseline in the ($u$,$v$,$w$) reference frame. These $uv$ tracks are portions of ellipses. Over the course of an observation, samples are accumulated along these tracks and form the $uv$ coverage.
A precise knowledge of the sky requires good knowledge of the visibility function, and therefore as complete a sampling of the $uv$ plane as possible.
<span style="background-color:red">TLG:RC: Please rewrite last sentence.</span>
<span style="background-color:cyan">TLG:GM: Check if the italic words are in the glossary. </span>
We will see, in this section, how the $uv$ coverage can be improved using multiple element interferometers ($\S$ 4.5.2.1 ⤵) and using time ($\S$ 4.5.2.2 ⤵) and frequency ($\S$ 4.5.2.3 ⤵) integration to our advantage.
<a id="vis:sec:4521"></a> <!---\label{vis:sec:4521}--->
4.5.2.1 Configuration of an $N$-element interferometer
Until now, we only considered 2-element interferometers. In practice, interferometers are built from $N>2$ elements. We then compute the corresponding $\frac{N(N-1)}{2}$ independent cross-correlations (doing this for each frequency channel, each time step and each polarization). For an $N$-element interferometer, the sampling in the $uv$ plane is thus proportional to $\sim N^2$.
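A quick back-of-the-envelope check of this scaling, using the nominal antenna counts of a few of the arrays listed in this notebook (for illustration only):
# Number of independent baselines N*(N-1)/2 grows roughly as N^2.
for N in [7, 27, 64]:                          # e.g. KAT-7, VLA, MeerKAT
    print(str(N) + " antennas -> " + str(N*(N-1)//2) + " baselines")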
In the following, we will show the characteristic $uv$ coverage of existing, well-known radio interferometers, observing at a single frequency and for a single time step (i.e. a snapshot observation). You may select the array of your choice from the list below:
End of explanation
# you need to re-evaluate this box if you modify the array.
antennaPosition=np.genfromtxt(config.value)
# custom antenna distribution
custom=0
if (custom):
antennaPosition = np.zeros((10, 2), dtype=float)
antennaPosition[0,:] = [0,0]
antennaPosition[1,:] = [-4, 5]
antennaPosition[2,:] = [4, 5]
antennaPosition[3,:] = [-10,0]
antennaPosition[4,:] = [-8,-3]
antennaPosition[5,:] = [-4,-5]
antennaPosition[6,:] = [0,-6]
antennaPosition[7,:] = [4,-5]
antennaPosition[8,:] = [8,-3]
antennaPosition[9,:] = [10,0]
Explanation: From the list above, you can select different configurations corresponding to real instrumental layouts.
However, if you want to specify the relative positions of the antennas in an $ENU$ reference frame yourself, you can edit the following block and set the variable "custom" to "1".
End of explanation
%matplotlib inline
mxabs = np.max(abs(antennaPosition[:]))*1.1;
# make use of pylab librery to plot
fig=plt.figure(figsize=(6,6))
plt.plot((antennaPosition[:,0]-np.mean(antennaPosition[:,0]))/1e3, \
(antennaPosition[:,1]-np.mean(antennaPosition[:,1]))/1e3, 'o')
plt.axes().set_aspect('equal')
plt.xlim(-mxabs/1e3, mxabs/1e3)
plt.ylim(-mxabs/1e3, (mxabs+5)/1e3)
plt.xlabel("E (km)")
plt.ylabel("N (km)")
plt.title("Antenna positions")
Explanation: Let's plot the distribution of the antennas from the selected (or customized) interferometer:
End of explanation
# Observation parameters
c=3e8 # Speed of light
f=1420e6 # Frequency
lam = c/f # Wavelength
time_steps = 1200 # time steps
h = np.linspace(-6,6,num=time_steps)*np.pi/12 # Hour angle window
L = np.radians(34.0790)   # latitude of the VLA, converted to radians
dec = np.radians(34.)     # declination of the observed field, converted to radians
Explanation: <a id="fig:4414"></a>
Figure 4.5.14: Distribution of the antennas in an $ENU$ reference frame. Each dot represents an antenna which will be part of $N-1$ baselines.
As for the previous simulations, we will simulate an observation using the VLA as the default case.
4.5.2.1.1 Observation configuration
End of explanation
%matplotlib inline
Ntimes=3
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
Explanation: 4.5.2.1.2 The snapshot $\boldsymbol{uv}$ coverage
End of explanation
from ipywidgets import *
from IPython.display import display
def Interactplot(key,Ntimes):
print "Ntimes="+str(Ntimes)
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
slider=IntSlider(description="Ntimes",min=2,max=1200,step=100,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",2)
Explanation: <a id="vis:fig:4515"></a> <!---\label{vis:fig:4515}--->
Figure 4.5.15: Snapshot $uv$ coverage of the interferometer. Red and blue points correspond to symmetric $uv$ points: indeed, each baseline gives us the measurement of $V_\nu$ at ($u$,$v$) and its complex conjugate $V^*_\nu$ at ($-u$,$-v$).
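As a brief aside (a standard result that is only stated, not derived, here): because the sky brightness $I_\nu(l,m)$ is real-valued, its Fourier transform obeys the Hermitian symmetry
$$V_\nu(-u,-v) = V_\nu^{*}(u,v),$$
which is why each measured baseline contributes two symmetric points to the $uv$ coverage.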
Depending on the number of elements and their relative distribution on the ground, the shape of the snapshot $uv$ coverage can vary dramatically from one interferometer to another. One may prefer an antenna distribution which maximizes an instantaneous coverage which spans the entire $uv$ plane (e.g. VLA in Fig. 4.5.15 ⤵) rather than a compact array which samples a smaller portion of the $uv$ plane. With the raw increase of total sensitivity, improving the instantaneous $uv$ coverage is the main motivation for increasing the number of antennas and optimising their relative positions. A good $uv$ coverage is obtained if the $uv$ plane is sufficiently - and smoothly! - sampled. In the next section, we will see how the Earth contributes to improve the $uv$ coverage.
<a id="vis:sec:4522"></a> <!---\label{vis:sec:4522}--->
4.5.2.2 Time integration: Earth Rotation Synthesis
Observing a source for several hours requires tracking this source with fringe (or delay) tracking. As the source moves with the sky, the projected baselines - as seen from the source - will continuously vary along elliptical $uv$ tracks.
With good enough time sampling, it is possible to sample each $uv$ track along the observation and accumulate a number of $\frac{N(N-1)}{2}\times N_\text{times}$ independent measurements.
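To make the elliptical shape of these tracks concrete, the sketch below evaluates the standard textbook relations for a single baseline, reusing the h, dec and lam values defined earlier. The baseline components (X, Y, Z) are assumed to already be expressed in the equatorial frame (not the ENU frame used for the antenna positions above), and their numerical values are arbitrary illustrations.
# Minimal sketch: uv track of one hypothetical baseline over the 12 h hour-angle window.
X, Y, Z = 300.0, 150.0, 80.0                   # assumed equatorial components, in metres
u_track = ( X*np.sin(h) + Y*np.cos(h)) / lam
v_track = (-X*np.cos(h)*np.sin(dec) + Y*np.sin(h)*np.sin(dec) + Z*np.cos(dec)) / lam
# (u_track, v_track) traces part of an ellipse centred on (0, Z*cos(dec)/lam).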
The following experiment will allow you to change the number of time steps collected during an observation, from a snapshot observation up to a 12 hour observation.
The next block will let you plot a snapshot observation. You can increase the duration of the observation with the slider which defines the total number of time steps for which the $uv$ coverage is computed. When using the slider, the new resulting plot will be generated next to the previous one to allow for visual comparisons.
(Test values of 2, $\sim$300, $\sim$1200 to see the effects on the $uv$ coverage. Give the notebook some time to regenerate the plot, especially for large values of time samples!).
End of explanation
df=10e6 # frequency step
f0=c/lam # starting frequency
lamb0=lam # starting wavelength
def Interactplot(key,Nfreqs):
print "Nfreqs="+str(Nfreqs)
plotBL.plotuv_freq(antennaPosition,L,dec,h,Nfreqs,lamb0,df)
slider=IntSlider(description="Nfreqs",min=1,max=200,step=1,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",1)
Explanation: <a id="fig:4416"></a>
Figure 4.5.16: Sampled $uv$ coverage of the interferometer for various numbers of time samples. Red and blue points correspond to symmetric $uv$ points, since each baseline gives us the measurement of $V_\nu$ at ($u$,$v$) and its complex conjugate $V^*_\nu$ at ($-u$,$-v$).
With an increasing number of time steps, you can start to see each individual baselines' elliptical $uv$ track in the $uv$ plane. It is easy to see that the longer the observation, the better our sampling of the visibility function.
<a id="vis:sec:4523"></a> <!---\label{vis:sec:4523}--->
4.5.2.3 Integration in frequency: Frequency Synthesis
We have seen in $\S$ 4.3 ➞ that the $u$, $v$, $w$ coordinates are usually normalized by $\lambda$. As a consequence, observing with the same interferometer at a different frequency gives a different set of spatial frequencies. Observing at a second frequency therefore rescales the $uv$ coverage obtained at the original frequency. Indeed, for the same array and the same snapshot, the $uv$ coverage at wavelength $\lambda_2 < \lambda_1$ is a scaled-up version of the coverage at $\lambda_1$ (equivalently, the coverage at $\lambda_1$ is a shrunk version of that at $\lambda_2$).
If the distribution of our sampling frequencies is (quasi) continuous, e.g. if the observing system operates in a bandwidth [$f_\text{min}$,$f_\text{max}$], with $N_\text{freqs}$ channels, we can contiguously sample portions of the $uv$ plane with $\frac{N(N-1)}{2}\times N_\text{freqs}$ independent measurements. For each baseline, a radial track will be generated due to the frequency scaling effect of $u$ and $v$.
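A minimal illustration of this radial scaling for a single, hypothetical baseline (the numbers are arbitrary; the only point is that u and v scale linearly with frequency, i.e. as 1/lambda):
# Sketch: the same physical baseline sampled at two different frequencies.
b_east = 1000.0                                # assumed baseline length, in metres
for f_demo in [1.4e9, 2.8e9]:                  # doubling the frequency...
    u_demo = b_east * f_demo / c               # ...doubles |u| (in wavelengths)
    print("f = " + str(f_demo) + " Hz -> u = " + str(u_demo) + " wavelengths")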
<div class=warn>
<b>Warning:</b> It is only possible to do this up to a point: see *bandwidth smearing* in [$\S$ 9.3 ➞](../9_Practical/9_3_observing_smearing.ipynb)) for more details on the limits of this technique.
</div>
<span style="background-color:cyan">TLG:GM: Check if the italic words are in the glossary. </span>
In the next block, we will plot a snapshot observation. You can increase the number of continuous frequency channels (at 10 MHz steps) with the slider. Upon a change on the slider, a new plot will be generated next to the previous one, to allow visual comparisons (Note: give the notebook some time to generate the plot).
(Test values of 1, 20, 200 to see the effects on the $uv$ coverage).
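As a rough, hedged sketch of this radial scaling (independent of the plotBL helper used in this chapter, assuming numpy is available as np and that c, f0 and df are defined as in the code above; the baseline vector itself is made up):
import numpy as np
nu = f0 + df * np.arange(20)            # 20 contiguous channels starting at f0, spaced by df
B_enu = np.array([120.0, 85.0])         # hypothetical projected baseline components (east, north) in metres
uv_track = np.outer(nu / c, B_enu)      # one (u, v) pair per channel, in wavelengths: a radial track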
End of explanation |
7,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluating an Exam Using Ply
This notebook shows how we can use the package ply
to implement a scanner. Our goal is to implement a program that can be used to evaluate the results of an exam. Assume the result of an exam is stored in the string data that is defined below
Step1: These data show that there has been an exam with the subject <em style="color
Step2: Auxiliary Functions
The function mark(max_points, points) takes two arguments
Step3: Let's test this function.
Step4: Token Declarations
We begin by <em style="color
Step5: Token Definitions
Next, we need to provide the definition of the tokens. One way to define tokens is via python functions.
In this notebook we are only going to use these <em style="color
Step6: The Token MAXDEF
The token MAXDEF matches a substring of the form MaxPoints = 60. Note that the regular expression defining the semantics of this token uses the expression \s* to match the white space before and after the character =. This is necessary because ply.lex uses <em style="color
Step7: The Token Name
The token NAME matches the name of a student followed by a colon. In general, a student name can be any sequence of letters that contain optional hyphens and blanks. Note that it is not necessary to use \s inside of a character range, as we can use a blank character instead.
Furthermore, note that the hyphen - is the last character in the square brackets so it cannot be mistaken for the hyphen of a range.
The action code has to reset the variable sum_points that is stored in lexer.sum_points to 0.
Step8: The Token NUMBER
The token NUMBER matches a natural number. We have to convert the value, which is initially a string of digits, into an integer. Furthermore, this value is then added to the number of points the current student has achieved in previous exercises.
Step9: The Token IGNORE
The token IGNORE matches a line that contains only whitespace. In order to keep track of line numbers we have to increment lexer.lineno. However, we do not return a token at the end of the function. Hence, if the input contains an empty line, this line is silently discarded.
Step10: The Token LINEBREAK
The token LINEBREAK matches a single newline character \n. If a student name is
currently defined, then we output the result for this student. Note that we set lexer.name back to the empty string once we have processed the student.
This allows for empty lines between different students.
Step11: <hr style="height
Step12: Error Handling
The function t_error is called when the beginning of the remaining, not yet processed input cannot be matched by any of the regular expressions defined for the tokens above. In our implementation we print the first character that could not be matched, discard this character and continue.
Step13: Tricking Ply
The line below is necessary to trick ply.lex into assuming this program is part of an ordinary python file instead of being a Jupyter notebook.
Step14: Generating the Scanner and Running It
The next line generates the scanner.
Step15: Next, we feed an input string into the generated scanner.
Step16: In order to scan the data that we provided in the last line, the function scan iterates
over all tokens generated by our scanner.
Step17: Finally, we can run the scanner. | Python Code:
data = '''Class: Algorithms and Complexity
Group: TIT09AID
MaxPoints = 60

Exercise: 1. 2. 3. 4. 5. 6.
Jim Smith: 9 12 10 6 6 0
John Slow: 4 4 2 0 - -
Susi Sorglos: 9 12 12 9 9 6
'''
Explanation: Evaluating an Exam Using Ply
This notebook shows how we can use the package ply
to implement a scanner. Our goal is to implement a program that can be used to evaluate the results of an exam. Assume the result of an exam is stored in the string data that is defined below:
End of explanation
import ply.lex as lex
import re
Explanation: These data show that there has been an exam with the subject <em style="color:blue">Algorithms and Complexity</em>
in the group <em style="color:blue">TIT09AID</em>. Furthermore, the equation
MaxPoints = 60
shows that in order to achieve the best mark, <em style="color:blue">60</em> points would have been necessary.
There have been 6 different exercises in this exam and, in this small example, only three students took part, namely Jim Smith, John Slow, and Susi Sorglos. Each of the rows describing the results of the students begins with the name of the student followed by the number of points that he or she has achieved in the different exercises. Our goal is to write a program that is able to compute the marks for all students.
Imports
We will use the package ply.
In this example, we will only use the scanner generator that is provided by the module ply.lex. Later on, we will also use Python regular expressions. Therefore, we also have to import the module re.
End of explanation
def mark(max_points, points):
return round(min(5.0, 7 - 6 * points / max_points), 2)
Explanation: Auxiliary Functions
The function mark(max_points, points) takes two arguments:
- points is the number of points achieved by the student whose mark is to be computed.
- max_points is the number of points that need to be achieved in order to get the best mark of $1.0$.
It is assumed that the relation between the mark of an exam and the number of points achieved in this exam is mostly linear and that a student who has achieved $50\%$ of max_points will get the mark $4.0$, while a student who has achieved $100\%$ of max_points will be marked as $1.0$.
However, the worst mark is $5.0$. Therefore, if the mark would fall below that line, the min function below assures that it is lifted up to $5.0$.
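As a quick check of the two anchor points: mark(60, 30) = 7 - 6*30/60 = 4.0 and mark(60, 60) = 7 - 6 = 1.0, while mark(60, 0) = 7 is capped to 5.0 by the min.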
End of explanation
for points in range(0, 60+1, 5):
print(f'mark(60, {points}) = {mark(60, points)}')
Explanation: Let's test this function.
End of explanation
tokens = [ 'HEADER',
'MAXDEF',
'NAME',
'NUMBER',
'IGNORE',
'LINEBREAK'
]
print(data)
Explanation: Token Declarations
We begin by <em style="color:blue">declaring</em> the list of tokens. Note that the variable tokens is a keyword of ply to define the names of the token classes. In this case, we have declared six different tokens.
The <em style="color:blue">definitions</em> of these tokens are given later.
- HEADER will match the first two lines of the string data as well as the fifth line that begins with
the string Exercise:.
- MAXDEF is a token that will match the line MaxPoints = 60.
- NAME is a token that will match the name of a student.
- NUMBER is a token that will match a natural number.
- IGNORE is a token that will match an empty line. For example, the fourth line in data is empty.
- LINEBREAK is a token that will match the newline character \n at the end of a line.
End of explanation
def t_HEADER(t):
r'[A-Za-z]+:.*\n'
t.lexer.lineno += 1
Explanation: Token Definitions
Next, we need to provide the definition of the tokens. One way to define tokens is via python functions.
In this notebook we are only going to use these <em style="color:blue">functional token definitions</em>.
The <em style="color:blue">document string</em> of these functions is a <em style="color:blue">raw string</em> that contains the regular expression defining the semantics of the token. The regular expression can be followed by code that is needed to further process the token. The name of the function defining a token has to have the form t_name, where name is the name of the token as declared in the list tokens.
The HEADER Token
The token HEADER matches any string that is made up of upper and lower case characters followed by a colon. This colon may be followed by arbitrary characters.
The token extends to the end of the line and includes the terminating newline.
When the function t_HEADER is called it is provided with a token t. This is an object that has four
attributes:
- t.lexer is an object of class Lexer that contains the scanner that was used to extract the token t.
We are free to attach additional attributes to this Lexer object.
- t.type is a string containing the type of the token. For tokens processed in the function
t_HEADER this type is always the string HEADER.
- t.value is the actual string matched by the token.
- t.lexpos is the position of the token in the input string that is scanned.
Furthermore, the lexer object has one important attribute:
- t.lexer.lineno is the line number. However, it is our responsibility to update this variable
by incrementing t.lexer.lineno every time we read a newline.
In the case of the token HEADER we need to increment the attribute t.lexer.lineno, as the regular expression contains a newline.
End of explanation
def t_MAXDEF(t):
r'MaxPoints\s*=\s*[1-9][0-9]*'
t.lexer.max_points = int(re.findall(r'[0-9]+', t.value)[0])
t.lexer.name = ''
Explanation: The Token MAXDEF
The token MAXDEF matches a substring of the form MaxPoints = 60. Note that the regular expression defining the semantics of this token uses the expression \s* to match the white space before and after the character =. This is necessary because ply.lex uses <em style="color:blue">verbose regular expressions</em> that can contain whitespace for formatting. Hence a blank character "" inside a regular expression is silently discarded.
After defining the regular expression, the function t_MAXDEF has some <em style="color:blue">action code</em> that is used to extract the maximal number of points from the token value and store this number in the variable t.lexer.max_points.
t.value is the string that is matched by the regular expression.
We extract the maximum number of points using conventional Python regular expressions. Furthermore, we initialize the student name,
which is stored in t.lexer.name, to the empty string.
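As a quick illustration of the extraction step: re.findall(r'[0-9]+', 'MaxPoints = 60') evaluates to ['60'], so taking element [0] and converting it with int yields 60.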
End of explanation
def t_NAME(t):
r'[a-zA-Z -]+:'
t.lexer.name = t.value[:-1] # cut of colon
t.lexer.sum_points = 0 # start counting
Explanation: The Token Name
The token NAME matches the name of a student followed by a colon. In general, a student name can be any sequence of letters that contain optional hyphens and blanks. Note that it is not necessary to use \s inside of a character range, as we can use a blank character instead.
Furthermore, note that the hyphen - is the last character in the square brackets so it cannot be mistaken for the hyphen of a range.
The action code has to reset the variable sum_points that is stored in lexer.sum_points to 0.
End of explanation
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.lexer.sum_points += int(t.value)
Explanation: The Token NUMBER
The token NUMBER matches a natural number. We have to convert the value, which is initially a string of digits, into an integer. Furthermore, this value is then added to the number of points the current student has achieved in previous exercises.
End of explanation
def t_IGNORE(t):
r'^[ \t]*\n'
t.lexer.lineno += 1
Explanation: The Token IGNORE
The token IGNORE matches a line that contains only whitespace. In order to keep track of line numbers we have to increment lexer.lineno. However, we do not return a token at the end of the function. Hence, if the input contains an empty line, this line is silently discarded.
End of explanation
def t_LINEBREAK(t):
r'\n'
t.lexer.lineno += 1
if t.lexer.name != '':
n = t.lexer.name
m = t.lexer.max_points
p = t.lexer.sum_points
print(f'{n} has {p} points and achieved the mark {mark(m, p)}.')
t.lexer.name = ''
Explanation: The Token LINEBREAK
The token LINEBREAK matches a single newline character \n. If a student name is
currently defined, then we output the result for this student. Note that we set lexer.name back to the empty string once we have processed the student.
This allows for empty lines between different students.
End of explanation
t_ignore = '- \t'
Explanation: <hr style="height:3px;background-color:blue">
We have now defined all of the tokens.
Note that the scanner tries the regular expressions in the same order that we
have used to define these tokens.
<hr style="height:3px;background-color:blue">
Ignoring Characters
The string t_ignore specifies those characters that should be ignored. Note that this string is not interpreted as a regular expression. It is just a string of single characters. These characters are allowed to occur as part of other tokens, but when they occur on their own and would otherwise generate a scanning error, they are silently discarded instead of triggering an error.
In this example we ignore hyphens, blanks, and tabs.
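This is also why the dashes that mark John Slow's missing exercises in the sample data are skipped silently instead of triggering the error handler.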
End of explanation
def t_error(t):
print(f"Illegal character '{t.value[0]} at line {t.lexer.lineno}.'")
t.lexer.skip(1)
Explanation: Error Handling
The function t_error is called when the beginning of the remaining, not yet processed input cannot be matched by any of the regular expressions defined for the tokens above. In our implementation we print the first character that could not be matched, discard this character and continue.
End of explanation
__file__ = 'main'
Explanation: Tricking Ply
The line below is necessary to trick ply.lex into assuming this program is part of an ordinary python file instead of being a Jupyter notebook.
End of explanation
lexer = lex.lex()
Explanation: Generating the Scanner and Running It
The next line generates the scanner.
End of explanation
lexer.input(data)
Explanation: Next, we feed an input string into the generated scanner.
End of explanation
def scan(lexer):
for t in lexer:
pass
Explanation: In order to scan the data that we provided in the last line, the function scan iterates
over all tokens generated by our scanner.
End of explanation
scan(lexer)
Explanation: Finally, we can run the scanner.
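For the sample data defined at the top of this notebook, the students score 43, 10 and 57 points, so we would expect output along the following lines:
Jim Smith has 43 points and achieved the mark 2.7.
John Slow has 10 points and achieved the mark 5.0.
Susi Sorglos has 57 points and achieved the mark 1.3.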
End of explanation |
7,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature preprocessing
Step1: Model training
Step2: XGBoost
Step3: LightGBM
Step4: Vowpal Wabbit
Step5: Lasso
Step6: Submission
Step7: XGBoost
Step8: LightGBM
Step9: Lasso
Step10: Ensemble
Step11: Lasso
Step12: LightGBM
Step13: Experiment results
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude, xgb fillna, predict price meter
Step14: Producing the transformed train/test sets
Step15: Fix from Sberbank
Step16: Auto ML
Step17: Смотрим на данные | Python Code:
def align_to_lb_score(df):
# https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/32717
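# Down-sample the suspiciously cheap 'Investment' rows (price <= 1M, == 2M, == 3M), keeping only
# every 10th/3rd/2nd of them, and log-transform the target, following the linked discussion.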
df = df.copy()
trainsub = df[df.timestamp < '2015-01-01']
trainsub = trainsub[trainsub.product_type=="Investment"]
ind_1m = trainsub[trainsub.price_doc <= 1000000].index
ind_2m = trainsub[trainsub.price_doc == 2000000].index
ind_3m = trainsub[trainsub.price_doc == 3000000].index
train_index = set(df.index.copy())
for ind, gap in zip([ind_1m, ind_2m, ind_3m], [10, 3, 2]):
ind_set = set(ind)
ind_set_cut = ind.difference(set(ind[::gap]))
train_index = train_index.difference(ind_set_cut)
df = df.loc[train_index]
df["price_doc"] = np.log1p(df["price_doc"].values)
return df
def preprocess_anomaly(df):
# drop all rows with missing values in these fields from the training set; in the test set they are always filled
df = df.dropna(subset=["preschool_education_centers_raion", "num_room",
"max_floor", "material", "kitch_sq", "floor"])
df["product_type"].fillna("Investment", inplace=True)
df["full_sq"] = map(lambda x: x if x > 10 else float("NaN"), df["full_sq"])
df["life_sq"] = map(lambda x: x if x > 5 else float("NaN"), df["life_sq"])
df["kitch_sq"] = map(lambda x: x if x > 2 else float("NaN"), df["kitch_sq"])
# superclean
# https://www.kaggle.com/keremt/very-extensive-cleaning-by-sberbank-discussions
df.ix[df[df.life_sq > df.full_sq].index, "life_sq"] = np.NaN
df.ix[df[df.kitch_sq >= df.life_sq].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 0].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 1].index, "kitch_sq"] = np.NaN
df.ix[df[df.num_room == 0].index, "num_room"] = np.NaN
df.ix[df[df.floor == 0].index, "floor"] = np.NaN
df.ix[df[df.max_floor == 0].index, "max_floor"] = np.NaN
df.ix[df[df.floor > df.max_floor].index, "max_floor"] = np.NaN
df.ix[df[df.state == 33].index, "state"] = np.NaN
df.ix[df[df.build_year == 20052009].index, "build_year"] = 2005
df.ix[df[df.build_year == 20].index, "build_year"] = 2000
df.ix[df[df.build_year == 215].index, "build_year"] = 2015
df.ix[df[df.build_year < 1500].index, "build_year"] = np.NaN
df.ix[df[df.build_year > 2022].index, "build_year"] = np.NaN
return df
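# Smoothed target (likelihood) encoding: blend the per-category mean of the target with the
# global mean, weighting the category mean by its row count and the global mean by alpha.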
def smoothed_likelihood(targ_mean, nrows, globalmean, alpha=10):
try:
return (targ_mean * nrows + globalmean * alpha) / (nrows + alpha)
except Exception:
return float("NaN")
def mess_y_categorial(df, nfolds=3, alpha=10):
from copy import copy
folds = np.array_split(df, nfolds)
newfolds = []
for i in range(nfolds):
fold = folds[i]
other_folds = copy(folds)
other_folds.pop(i)
other_fold = pd.concat(other_folds)
newfolds.append(mess_y_categorial_fold(fold, other_fold, alpha=10))
return pd.concat(newfolds)
def mess_y_categorial_fold(fold_raw, other_fold, cols=None, y_col="price_doc", alpha=10):
fold = fold_raw.copy()
if not cols:
cols = list(fold.select_dtypes(include=["object"]).columns)
globalmean = other_fold[y_col].mean()
for c in cols:
target_mean = other_fold[[c, y_col]].fillna("").groupby(c).mean().to_dict()[y_col]
nrows = other_fold[c].fillna("").value_counts().to_dict()
fold[c + "_sll"] = fold[c].fillna("").apply(
lambda x: smoothed_likelihood(target_mean.get(x), nrows.get(x), globalmean, alpha)
)
return fold
def feature_exclude(df):
# Drop build_year; age_of_building remains in its place
# build_year probably causes overfitting
feats = ["build_year", "build_year_cat_le"]
with open("greedy_search.tsv") as gs:
for line in gs:
row = line.strip().split("\t")
if len(row) < 6:
continue
if row[5] == "remove":
feats.append(row[0])
df = df.drop(feats, axis=1, errors="ignore")
return df
ALPHA = 50
lbl = sk.preprocessing.LabelEncoder()
def preprocess_categorial(df):
for c in list(df.columns):
if df[c].dtype == 'object':
try:
try:
lbl.fit(list(train_raw[c].values) + list(test[c].values) + list(df[c].values))
except KeyError as e:
lbl.fit(df[c].values)
df[c + "_le"] = lbl.transform(list(df[c].values))
except ValueError as e:
print c, e
raise
df = mess_y_categorial(df, 5, alpha=ALPHA)
df = df.select_dtypes(exclude=['object'])
return df
def apply_categorial(test, train):
for c in list(test.columns):
if test[c].dtype == 'object':
try:
lbl.fit(list(train_raw[c].values) + list(test[c].values) + list(train[c].values))
except KeyError:
lbl.fit(test[c].values)
test[c + "_le"] = lbl.transform(list(test[c].values))
test = mess_y_categorial_fold(test, train, alpha=ALPHA)
test = test.select_dtypes(exclude=['object'])
return test
def apply_macro(df):
macro_cols = [
'timestamp', "balance_trade", "balance_trade_growth", "eurrub", "average_provision_of_build_contract",
"micex_rgbi_tr", "micex_cbi_tr", "deposits_rate", "mortgage_value", "mortgage_rate",
"income_per_cap", "rent_price_4+room_bus", "museum_visitis_per_100_cap", "apartment_build"
]
return df.merge(macro[macro_cols], on='timestamp', how='left')
def preprocess(df):
df = df.copy()
ecology = ["no data", "poor", "satisfactory", "good", "excellent"]
df["ecology_index"] = map(ecology.index, df["ecology"].values)
df["age_of_building"] = df["timestamp"].apply(lambda x: x.split("-")[0]).astype(int) - df["build_year"]
df["is_build_in_progress"] = df["age_of_building"].apply(lambda x: "yes" if x < 0 else "no")
bool_feats = [
"thermal_power_plant_raion",
"incineration_raion",
"oil_chemistry_raion",
"radiation_raion",
"railroad_terminal_raion",
"big_market_raion",
"nuclear_reactor_raion",
"detention_facility_raion",
"water_1line",
"big_road1_1line",
"railroad_1line",
"culture_objects_top_25"
]
for bf in bool_feats:
try:
df[bf + "_bool"] = map(lambda x: x == "yes", df[bf].values)
except:
pass
df = preprocess_anomaly(df)
df['rel_floor'] = df['floor'] / df['max_floor'].astype(float)
df['rel_kitch_sq'] = df['kitch_sq'] / df['full_sq'].astype(float)
df['rel_life_sq'] = df['life_sq'] / df['full_sq'].astype(float)
df["material_cat"] = df.material.fillna(0).astype(int).astype(str).replace("0", "")
df["state_cat"] = df.state.fillna(0).astype(int).astype(str).replace("0", "")
# df["num_room_cat"] = df.num_room.fillna(0).astype(int).astype(str).replace("0", "")
# df["build_year_cat"] = df.build_year.fillna(0).astype(int).astype(str).replace("0", "")
df["build_year_ten"] = (df.build_year / 10).round()
df["ID_metro"] = df.ID_metro.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_railroad_station_walk"] = df.ID_railroad_station_walk.replace("", "-10").fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_railroad_station_avto"] = df.ID_railroad_station_avto.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_big_road1"] = df.ID_big_road1.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_big_road2"] = df.ID_big_road2.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_bus_terminal"] = df.ID_bus_terminal.fillna(-10).astype(int).astype(str).replace("-10", "")
# # ratio of living area to full area #
df["ratio_life_sq_full_sq"] = df["life_sq"] / np.maximum(df["full_sq"].astype("float"),1)
df["ratio_life_sq_full_sq"].ix[df["ratio_life_sq_full_sq"]<0] = 0
df["ratio_life_sq_full_sq"].ix[df["ratio_life_sq_full_sq"]>1] = 1
# # ratio of kitchen area to living area #
df["ratio_kitch_sq_life_sq"] = df["kitch_sq"] / np.maximum(df["life_sq"].astype("float"),1)
df["ratio_kitch_sq_life_sq"].ix[df["ratio_kitch_sq_life_sq"]<0] = 0
df["ratio_kitch_sq_life_sq"].ix[df["ratio_kitch_sq_life_sq"]>1] = 1
# # ratio of kitchen area to full area #
df["ratio_kitch_sq_full_sq"] = df["kitch_sq"] / np.maximum(df["full_sq"].astype("float"),1)
df["ratio_kitch_sq_full_sq"].ix[df["ratio_kitch_sq_full_sq"]<0] = 0
df["ratio_kitch_sq_full_sq"].ix[df["ratio_kitch_sq_full_sq"]>1] = 1
df = df.drop(["timestamp"], axis=1, errors="ignore")
return df
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv", index_col="id")
test = pd.read_csv("data/test.csv", index_col="id")
macro = pd.read_csv("data/macro.csv")
train_pr = align_to_lb_score(train_raw)
train_pr = preprocess(train_pr)
train_pr = preprocess_categorial(train_pr)
train = feature_exclude(train_pr)
train.head()
important_feats = ["full_sq", "life_sq", "kitch_sq", "max_floor"]
# important_feats = ["full_sq", "life_sq"]
# Train models to impute NAs in the important fields; the order of the fields matters
feats_to_remove = ["price_doc", "rel_kitch_sq", "rel_life_sq", "id", "build_year_cat_le",
"age_of_building", "rel_floor", "num_room_cat_le", "build_year_ten",
"ratio_life_sq_full_sq", "ratio_kitch_sq_full_sq", "ratio_kitch_sq_life_sq"]
%%cache na_models.pkl na_models
na_models = {}
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.05,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
for f in important_feats:
t = train[train[f].notnull()]
fX = t.drop([f] + feats_to_remove, axis=1, errors="ignore")
fy = t[f].values
dtrain_all = xgb.DMatrix(fX.values, fy, feature_names=fX.columns)
model = xgb.train(xgb_params, dtrain_all, num_boost_round=400, verbose_eval=40)
na_models[f] = model
print f
print feat_imp(model).head(10)
def fill_na_xgb(df_orig):
df = df_orig.copy()
for f in important_feats:
X_pr = df[df[f].isnull()].drop([f] + feats_to_remove, axis=1, errors="ignore")
if not len(X_pr):
continue
X_pr = xgb.DMatrix(X_pr.values, feature_names=X_pr.columns)
df.loc[df[f].isnull(), f] = na_models[f].predict(X_pr).round()
df[f] = df[f].astype(int)
return df
train = fill_na_xgb(train)
Explanation: Feature preprocessing
End of explanation
from sklearn.model_selection import train_test_split
X = train.drop(["price_doc"], axis=1)
y = train["price_doc"].values
bound = int(len(X) * 0.7)
X_train, X_val, y_train, y_val = X[:bound].copy(), X[bound+1:].copy(), y[:bound].copy(), y[bound+1:].copy()
Explanation: Model training
End of explanation
dtrain_all = xgb.DMatrix(X.values, y, feature_names=X.columns)
dtrain = xgb.DMatrix(X_train.values, y_train, feature_names=X.columns)
dval = xgb.DMatrix(X_val.values, y_val, feature_names=X.columns)
xgb_params = {
'eta': 0.01,
'max_depth': 5,
'subsample': 0.7,
'colsample_bytree': 0.7,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
# Uncomment to tune XGB `num_boost_rounds`
model = xgb.train(xgb_params, dtrain, num_boost_round=4000, evals=[(dval, 'val')],
early_stopping_rounds=20, verbose_eval=40)
num_boost_round = model.best_iteration
cv_output = xgb.cv(xgb_params, dtrain_all, num_boost_round=4000,
verbose_eval=100, early_stopping_rounds=100, nfold=5)
xgbmodel = xgb.train(xgb_params, dtrain, num_boost_round=num_boost_round, verbose_eval=40)
y_pred = xgbmodel.predict(dtrain)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_train_preds.csv", index=False)
y_pred = xgbmodel.predict(dval)
print "predict-val:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_val_preds.csv", index=False)
feat_imp(model).head(10)
Explanation: XGBoost
End of explanation
RS = 20170501
np.random.seed(RS)
FACT_ROUNDS=0
ROUNDS = 2000
lgb_params = {
'objective': 'regression',
'metric': 'rmse',
'boosting': 'gbdt',
'learning_rate': 0.01,
# 'verbose': 1,
# 'num_leaves': 2 ** 5,
'bagging_fraction': 0.95,
'bagging_freq': 1,
'bagging_seed': RS,
# 'feature_fraction': 0.7,
# 'feature_fraction_seed': RS,
'subsample': 0.7,
'colsample_bytree': 0.7,
# 'max_bin': 100,
'max_depth': 10,
'num_rounds': ROUNDS
}
lgb_train_all = lgb.Dataset(X, y)
lgb_train = lgb.Dataset(X_train, y_train)
cvres = pd.DataFrame(lgb.cv(params=lgb_params, train_set=lgb_train, nfold=5, shuffle=False,
early_stopping_rounds=100, verbose_eval=100, num_boost_round=ROUNDS))
FACT_ROUNDS = len(cvres)
lgbmodel = lgb.train(lgb_params, lgb_train, num_boost_round=FACT_ROUNDS or ROUNDS)
pd.DataFrame({
"name": lgbmodel.feature_name(),
"imp": lgbmodel.feature_importance()}
).sort_values("imp", ascending=False).head(20)
y_pred = lgbmodel.predict(X_train)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_train_preds.csv", index=False)
y_pred = lgbmodel.predict(X_val)
print "predict-val:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_val_preds.csv", index=False)
Explanation: LightGBM
End of explanation
from vowpalwabbit.sklearn_vw import VWRegressor
Explanation: Vowpal Wabbit
End of explanation
from sklearn.base import TransformerMixin
from scipy.stats import skew
class SkewLogAlign(TransformerMixin):
skewed_feats = None
skew_treshold = 0.75
def __init__(self, skew_treshold=0.75):
self.skew_treshold = skew_treshold
def fit(self, X, y=None):
#log transform skewed numeric features:
df = pd.DataFrame(X, dtype=np.float64)
skewed_feats = df.apply(lambda x: skew(x.dropna())) #compute skewness
skewed_feats = skewed_feats[skewed_feats > 0.75]
self.skewed_feats = skewed_feats.index
return self
def transform(self, X):
df = pd.DataFrame(X, dtype=np.float64)
df[self.skewed_feats] = np.log1p(df[self.skewed_feats].values)
return df.values
import sys
class FillNaWithConstant(TransformerMixin):
nan_value = 0
inf_value = None
minf_value = None
def __init__(self, nan_value=0, inf_value=sys.maxint - 1, minf_value=-sys.maxint - 1):
self.nan_value = nan_value
self.inf_value = inf_value
self.minf_value = minf_value
def fit(self, X, y=None):
return self
def transform(self, X):
df = pd.DataFrame(X).fillna(self.nan_value)
df = df.replace(np.inf, self.inf_value)
df = df.replace(-np.inf, self.minf_value)
return df.values
from sklearn.pipeline import Pipeline
lasso_feat_pipeline = Pipeline([
("skew", SkewLogAlign()),
("fillna", FillNaWithConstant()),
])
from sklearn.linear_model import LassoCV
LASSO_alphas = [1, 0.1, 0.001, 0.0005]
lasso_cv_model = LassoCV(alphas = [1, 0.1, 0.001, 0.0005], cv=5, max_iter=50000, verbose=True, n_jobs=-1)
lasso_cv_model.fit(lasso_feat_pipeline.transform(X.values), y)
print "alpha:", lasso_cv_model.alpha_
print "MSE:"
print zip(LASSO_alphas, np.sqrt(lasso_cv_model.mse_path_))
print pd.Series(lasso_cv_model.coef_, index=X.columns).sort_values(ascending=False)[:20]
from sklearn.linear_model import Lasso
best_alpha = 0.001
lasso_model = Pipeline([
("feat", lasso_feat_pipeline),
("clf", Lasso(alpha=best_alpha, max_iter=50000))
])
lasso_model.fit(X_train.values, y_train)
y_pred = lasso_model.predict(X_train.values)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_train_preds.csv", index=False)
y_pred = lasso_model.predict(X_val.values)
print "predict-validation:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_val_preds.csv", index=False)
Explanation: Lasso
End of explanation
test_pr = preprocess(test)
train_pr = preprocess(train_raw)
test_pr = apply_categorial(test_pr, train_pr)
test_pr = feature_exclude(test_pr)
test_pr = fill_na_xgb(test_pr)
Explanation: Submission
End of explanation
# XGB
dtest = xgb.DMatrix(test_pr.values, feature_names=test_pr.columns)
y_pred = xgbmodel.predict(dtest)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_sub.csv", index=False)
!head xgb_sub.csv
Explanation: XGBoost
End of explanation
y_pred = lgbmodel.predict(test_pr)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_sub.csv", index=False)
!head lgb_sub.csv
Explanation: LightGBM
End of explanation
y_pred = lasso_model.predict(test_pr.values)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_sub.csv", index=False)
!head lasso_sub.csv
Explanation: Lasso
End of explanation
models = ["lgb", "xgb"]
etrain = pd.DataFrame(index=X_val.index)
etrain = etrain.join(train[["price_doc"]])
for i, p in enumerate(models):
pred = pd.read_csv("%s_val_preds.csv" % p, index_col="id", names=["id", "p_%s" % i], header=0)
etrain = etrain.join(pred)
eX = etrain.drop("price_doc", axis=1)
ey = etrain["price_doc"].values
etrain.head()
Explanation: Ensemble
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LassoCV
emodel = Pipeline([
("skew", SkewLogAlign()),
("fillna", FillNaWithConstant()),
("clf", LassoCV(alphas=None, cv=5, max_iter=50000, verbose=True, n_jobs=-1))
])
emodel.fit(eX.values, ey)
lmodel = emodel.named_steps["clf"]
print "alpha:", lmodel.alpha_
print "MSE:"
print np.sqrt(lmodel.mse_path_)
print pd.Series(lmodel.coef_, index=eX.columns).sort_values(ascending=False)[:20]
Explanation: Lasso
End of explanation
eFACT_ROUNDS = 0
elgb_train = lgb.Dataset(eX, ey)
cvres = pd.DataFrame(lgb.cv(params=lgb_params, train_set=elgb_train, nfold=7, shuffle=False,
early_stopping_rounds=100, verbose_eval=100, num_boost_round=ROUNDS))
eFACT_ROUNDS = len(cvres)
emodel = lgb.train(lgb_params, elgb_train, num_boost_round=eFACT_ROUNDS or ROUNDS)
etest = test_pr[[]].copy()
for i, p in enumerate(models):
pred = pd.read_csv("%s_sub.csv" % p, index_col="id", names=["id", "p_%s" % i], header=0)
etest = etest.join(pred)
y_pred = emodel.predict(etest.values)
df = pd.DataFrame({"id": etest.index, "price_doc": unlog(y_pred)})
df.to_csv("ensemble_sub.csv", index=False)
!head ensemble_sub.csv
Explanation: LightGBM
End of explanation
from tqdm import tqdm
def get_best_score(train):
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.01,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
cvres = xgb.cv(xgb_params, train, num_boost_round=4000, early_stopping_rounds=40)
return cvres["test-rmse-mean"].min(), cvres["test-rmse-mean"].argmin()
def df2DMatrix(df):
return xgb.DMatrix(data=df.drop("price_doc", axis=1).values, label=df["price_doc"].values)
def greedy_remove_features(df, feature_importances):
train = df
with open("greedy_search.tsv", "a") as f:
best_score, iterno = get_best_score(df2DMatrix(df))
f.write("\t".join(["INITIAL", str(best_score), str(iterno)]) + "\n")
to_analyze = sorted(feature_importances.items(), key=lambda x: x[1])
for feat, feat_importance in tqdm(to_analyze):
f.flush()
candidate_train = train.drop(feat, axis=1)
cand_best_score, iterno = get_best_score(df2DMatrix(candidate_train))
if cand_best_score > best_score:
# the score got worse, so keep the feature
f.write("\t".join([feat, str(cand_best_score), str(best_score), str(feat_importance), str(iterno), "skip"]) + "\n")
f.flush()
continue
f.write("\t".join([feat, str(cand_best_score), str(best_score), str(feat_importance), str(iterno), "remove"]) + "\n")
best_score = cand_best_score
train = candidate_train
feature_importances = imp_features.set_index("feature").to_dict()["importance"]
train_gs = train
with open("greedy_search.tsv") as gs:
for line in gs:
row = line.strip().split("\t")
if len(row) < 6:
continue
if row[5] == "remove":
try:
train_gs = train_gs.drop(row[0], axis=1)
except ValueError:
pass
print "drop", row[0]
feature_importances.pop(row[0], None)
greedy_remove_features(train_gs, feature_importances)
Explanation: Experiment results
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude, xgb fillna, predict price meter:
val-rmse:42206.6
predict-train: 36746.0165399
kaggle: 0.31331
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude, xgb fillna, predict price doc:
val-rmse:2.57852e+06
train-rmse:1.90168e+06+26844.3 test-rmse:2.66642e+06+56338.9
predict-train: 2021259.19865
kaggle: 0.31386
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude, xgb fillna, predict price meter:
val-rmse:42206.6
predict-train: 36746.0165399
kaggle: 0.31331
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude:
val-rmse:2.55793e+06
train-rmse:1.74066e+06+28727.3 test-rmse:2.65025e+06+64969.5
predict-train: 1881896.66663
kaggle: 0.31344
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro + other, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude 143:
val-rmse:2.54654e+06
train-rmse:1.74594e+06+24020 test-rmse:2.66053e+06+67300.3
predict-train: 1883352.60935
kaggle: 0.31364
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude 143:
val-rmse:2.55613e+06
train-rmse:1.74466e+06+27385.6 test-rmse:2.66422e+06+69734.1
predict-train: 1888051.35357
kaggle: 0.31366
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro with other ID, ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle, feature_exclude 143:
val-rmse:2.58557e+06
train-rmse:1.98509e+06+26803.7 test-rmse:2.68755e+06+59691.1
predict-train: 2092731.29028
kaggle: 0.31731
5*200, no macro, add rel features, no log price, train_without_noise:
val-rmse:2.63772e+06
train-rmse:1.9989e+06+10986.4 test-rmse:2.69158e+06+53020
predict-train: 2076010.27131
kaggle: 0.31720
5*200, no macro, add rel features, no log price, train_with_noise:
val-rmse:2.53378e+06
train-rmse:1.95069e+06+16166.4 test-rmse:2.69703e+06+61455.1
predict-train: 2054421.59869
kaggle: 0.32056
5*200, macro, add rel features, no log price, train_without_noise:
val-rmse:2.79632e+06
train-rmse:1.81015e+06+19781.2 test-rmse:2.6641e+06+123875
predict-train: 1904063.27368
kaggle: 0.32976
5*200, no macro, add rel features, no log price, train_without_noise:
val-rmse:2.61682e+06
train-rmse:1.81123e+06+27681.2 test-rmse:2.66923e+06+53925.7
predict-train: 1899129.43771
kaggle: 0.31592
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter:
val-rmse:2.61055e+06
train-rmse:1.71826e+06+30076.1 test-rmse:2.66515e+06+54583.5
predict-train: 1814572.97424
kaggle: 0.31602
7*300, no macro, add rel features, no log price, train_without_noise, 4000 iter:
val-rmse:2.59955e+06
train-rmse:1.41393e+06+21208.1 test-rmse:2.6763e+06+35553.3
predict-train: 1548257.49121
kaggle: 0.31768
4*300, no macro, add rel features, no log price, train_without_noise, 4000 iter:
val-rmse:2.63407e+06
train-rmse:1.96513e+06+21470.8 test-rmse:2.69417e+06+74288.3
predict-train: 2062299.41091
kaggle: 0.31952
7*200, no macro, add rel features, no log price, train_without_noise, 4000 iter:
val-rmse:2.59955e+06
train-rmse:1.41393e+06+21208.1 test-rmse:2.6763e+06+35553.3
predict-train: 1548257.49121
5*300, no macro, add rel features, no log price, train_without_noise, 4000 iter:
val-rmse:2.61055e+06
train-rmse:1.71826e+06+30076.1 test-rmse:2.66515e+06+54583.5
predict-train: 1814572.97424
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna:
val-rmse:2.61664e+06
train-rmse:1.77892e+06+23111 test-rmse:2.65829e+06+56398.6
predict-train: 1875799.54634
kaggle: 0.31521
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna, superclean:
val-rmse:2.6265e+06
train-rmse:1.78478e+06+22545.4 test-rmse:2.66179e+06+60626.3
predict-train: 1881672.27588
kaggle: 0.31476
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna, superclean, no super features + Label Encoding:
val-rmse:2.56494e+06
train-rmse:1.78862e+06+18589.1 test-rmse:2.69283e+06+79861.4
predict-train: 1923466.41923
kaggle: 0.31434
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna, superclean, remove material state num_room:
val-rmse:2.56932e+06
train-rmse:1.88495e+06+20133.7 test-rmse:2.69624e+06+70491.2
predict-train: 1979198.19201
kaggle: 0.31513
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna, superclean, ID metro/bus...:
val-rmse:2.60017e+06
train-rmse:1.80654e+06+19453.5 test-rmse:2.68203e+06+68169.5
predict-train: 1906439.98603
kaggle: 0.31927
5*200, no macro, add rel features, no log price, train_without_noise, 4000 iter, not fillna, superclean, ID metro, remove 50 features:
val-rmse:2.93665e+06
train-rmse:1.73425e+06+19462.4 test-rmse:2.68682e+06+140661
predict-train: 1861268.6455
kaggle: 0.31555
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro, remove 50 features, add ratio feats:
val-rmse:2.59747e+06
train-rmse:1.75828e+06+26639.4 test-rmse:2.68491e+06+67201.8
predict-train: 1875707.6581
kaggle: 0.31760
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro, no ratio feats, superfeatures + Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle:
val-rmse:2.5419e+06
train-rmse:1.74381e+06+22710.7 test-rmse:2.65787e+06+66889.9
predict-train: 1862467.67153
kaggle: 0.31716
5*200, no macro, add rel features, no log price, train_without_noise,
4000 iter, not fillna, superclean, ID metro, no ratio feats, no superfeatures, Label Encoding,
is_build_in_progress + age_of_building, kfold wo shuffle:
val-rmse:2.5676e+06
train-rmse:1.81485e+06+24274 test-rmse:2.67324e+06+60153.1
predict-train: 1947645.83102
kaggle: 0.31376
Feature Greedy selection
End of explanation
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_new_pr = feature_exclude(preprocess_categorial(preprocess(train_raw, dropid=False)))
test_new_pr = feature_exclude(preprocess_categorial(preprocess(test, dropid=False)))
# нужно сделать fillna, чтобы получить филлеры для NA из моделей
filled_train = fill_na_xgb(train_new_pr)
filled_test = fill_na_xgb(test_new_pr)
filled_train = filled_train.set_index("id")
filled_test = filled_test.set_index("id")
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_new = preprocess_anomaly(train_raw)
test_new = preprocess_anomaly(test)
train_new = train_new.set_index("id")
test_new = test_new.set_index("id")
train_new = train_new.join(filled_train[important_feats], rsuffix="_filled")
test_new = test_new.join(filled_test[important_feats], rsuffix="_filled")
for impf in important_feats:
train_new[impf] = train_new[impf].fillna(train_new["%s_filled" % impf])
train_new = train_new.drop(["%s_filled" % impf], axis=1)
test_new[impf] = test_new[impf].fillna(test_new["%s_filled" % impf])
test_new = test_new.drop(["%s_filled" % impf], axis=1)
# train_new = feature_exclude(train_new)
# test_new = feature_exclude(test_new)
train_new.to_csv("data/train_cleaned.csv", encoding="utf_8")
test_new.to_csv("data/test_cleaned.csv", encoding="utf_8")
Explanation: Producing the transformed train/test sets
End of explanation
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
def update(source, patch):
dtypes = source.dtypes
source.update(patch, overwrite=True)
for c, t in dtypes.iteritems():
source[c] = source[c].astype(t)
return source
train_raw.set_index("id")
test.set_index("id")
fx = pd.read_excel('data/BAD_ADDRESS_FIX.xlsx').drop_duplicates('id').set_index('id')
train_raw = update(train_raw, fx)
test = update(test, fx)
train_raw = train_raw.reset_index()
test = test.reset_index()
print('Fix in train: ', train_raw.index.intersection(fx.index).shape[0])
print('Fix in test : ', test.index.intersection(fx.index).shape[0])
train_raw.to_csv("data/train_fix.csv", index=False, encoding="utf-8")
test.to_csv("data/test_fix.csv", index=False, encoding="utf-8")
Explanation: Fix from Sberbank
End of explanation
from auto_ml import Predictor
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_pr = preprocess(train_raw)
train_pr = preprocess_categorial(train_pr)
train = feature_exclude(train_pr)
# Tell auto_ml which column is 'output'
# Also note columns that aren't purely numerical
# Examples include ['nlp', 'date', 'categorical', 'ignore']
column_descriptions = {
'price_doc': 'output'
}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
ml_predictor.train(train)
file_name = ml_predictor.save()
print file_name
# Score the model on test data
test_score = ml_predictor.score(df_test, df_test.MEDV)
Explanation: Auto ML
End of explanation
#Checking for missing data
NAs = pd.concat([
train.isnull().sum(),
test_pr.isnull().sum()
], axis=1, keys=['Train', 'Test'])
NAs[NAs.sum(axis=1) > 0]
Explanation: Examining the data
End of explanation |
7,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.
Step1: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.
Step2: It's possible to remove all drawings from the map
Step3: Let's draw a second map and try to import this GeoJSON data into it.
Step4: We can use link to synchronize traitlets of the two maps
Step5: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.
Now let's add a DrawControl to this second map. For fun we will disable lines and enable circles as well and change the style a bit. | Python Code:
dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},
rectangle={'shapeOptions': {'color': '#0000FF'}},
circle={'shapeOptions': {'color': '#0000FF'}},
circlemarker={},
)
def handle_draw(target, action, geo_json):
print(action)
print(geo_json)
dc.on_draw(handle_draw)
m.add_control(dc)
Explanation: Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.
End of explanation
dc.last_action
dc.last_draw
Explanation: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.
End of explanation
dc.clear_circles()
dc.clear_polylines()
dc.clear_rectangles()
dc.clear_markers()
dc.clear_polygons()
dc.clear()
Explanation: It's possible to remove all drawings from the map
End of explanation
m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px'))
m2
Explanation: Let's draw a second map and try to import this GeoJSON data into it.
End of explanation
map_center_link = link((m, 'center'), (m2, 'center'))
map_zoom_link = link((m, 'zoom'), (m2, 'zoom'))
new_poly = GeoJSON(data=dc.last_draw)
m2.add_layer(new_poly)
Explanation: We can use link to synchronize traitlets of the two maps:
End of explanation
dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={},
circle={'shapeOptions': {'color': '#0000FF'}})
m2.add_control(dc2)
Explanation: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.
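As a hedged sketch (assuming the drawn GeoJSON carries the properties.style dictionary described here), recolouring the imported polygon could look like this:
import copy
restyled = copy.deepcopy(dc.last_draw)        # copy so the original drawing is left untouched
restyled['properties']['style']['color'] = '#FF0000'
m2.add_layer(GeoJSON(data=restyled))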
Now let's add a DrawControl to this second map. For fun we will disable lines and enable circles as well and change the style a bit.
End of explanation |
7,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ACM Digital Library bibliometric analysis of legacy software
An incomplete bibliographical inquiry into what the ACM Digital Library has to say about legacy.
Basically, two research questions come to mind immediately. Firstly, have legacy-related publications been on the rise? Secondly, what subtopics can be analyzed?
The first question could afford analysing knowledge being captured in new concepts and practices, e.g. refactoring or SOA. The second question could afford validation against qualitative methods.
Step1: Data loading, sanitization and massage
Search "legacy" at ACM Digital Library. Just a simple search, to which the web interface gives 1541 results in mid November 2016. The number of items in the library is ~460000.
A CSV is downloaded from ACM DL, in default sorting order of the library's own idea of relevance... whatever that means for them. BibTeX is also available.
Step2: The available data columns are
Step3: A peek at the topmost data items.
Step4: Does the id field uniquely identify items on the search list? If so, using it as index could be a good idea.
Step5: Ok apparently it is used for deduplication. Who knows why are the items twice in the downloaded list.
What datatypes did Pandas infer from the CSV
Step6: Massage the keywords to be lists. Note that str.split(',') returns a [''], therefore the little if filter in there.
Step7: Are any items missing the year?
Step8: Complementary data
To contextualize the legacy search results, get the number of total publications in ACM per year.
These were semimanually extracted from the ACM DL search results listing DOM, with the following Javascript
javascript
acmYearly = {};
theChartData.labels.forEach(function(y) {acmYearly[y] = theChartData.datasets[0].data[theChartData.labels.indexOf(y)]});
console.log(acmYearly);
Step9: Data overview
Let's check how many percent all the 1541 search results given by the website search list. Would be great if this was 100%.
Step10: With the above peek at the ID field, how many unique items did we receive in the download?
Step11: Ok capped at 1000 I guess, which brings the percentage of the website search results available to us down to
Step12: Data exploration
Fraction of published items per year which ACM identifies as relevant for legacy search.
Histogram of publication years
Step13: What about the ACM Digital Library total, what does it's profile look like over time?
Step14: Similar overall shape, which isn't a surprise. Overlay, with arbitrary scaling of 300.
Step15: Right, so they seem to have somewhat similar shape. Legacy as a concept was lagging overall ACM DL, until it caught by increasing growth during 1990s.
What about the ratio of this subset of the whole ACM DL? Has it increased or decreased over time? Ie. has the proportion of publications about legacy changed?
Step16: All the pre-1990 publications are
Step17: And over 1000 publications after 1990, until 2016. First 10 of which are
Step18: Did something happen around 1990s, as the fraction of publications related to legacy started increasing? Let's plot a global linear regression model, as well as separate linear regression models before and after 1990.
Step19: Statistical validation of the above would be good, of course, to check against randomness.
A histogram of keywords
The keywords are interesting. All keywords in this dataset are already related to legacy one way or the other, since the data under inspection here is a subset of the total ACM Digital Library.
Keywords of course live a life of their own, and I guess there increase in number forever.
Which keywords are popular?
Step20: How many keywords do each item have?
Step21: Ok almost 400 items are without any keywords. There are some outliers, let's inspect the ones with more than 15 keywords. Sounds excessive...
Step22: And the keyword lists for the above
Step23: That is excessive, but seems legit to me.
Total number of unique keywords
Step24: Of which occur in 10 or more items in the subset
Step25: and further those that occur in 3-10 items
Step26: Of the remainder, number of keywords which appear on only two items
Step27: and only on one item
Step28: Keywords with 'legacy' in them
Step29: Network analysis of keywords
Keywords are a comma separated list in keywords, let's pull all of them out to a graph.
An analysis of which keywords are actually plentiful, their temporal distibution etc. centrality metrics, subgraph overlap etc. would be great.
Step30: So there is one dominant component, and 150 small ones. It's best to explore them interactively with Gephi.
Step31: Degree distribution of the keyword graph, ie. are there a few nodes which have huge degree and then a large number of nodes with smaller number of connections, like a power network. Additionally, let's see where the keywords with the work legacy in them are placed, by indicating them with green vertical lines. In the left diagram below, hubs are towards the right.
Step32: Eyeballing the above, most of the legacy keywords are where the mass of the distribution is, ie. at low degrees. One of the legacy nodes is a top hub, and there are some in the mid-ranges.
The top 3 keywords with the highest degree, ie. towards the right of the above graph are
Step34: Let's plot the top hub out.
Step36: Communities
Community detection with the Louvain algorithm, explained in Blondel, Guillaume, Lambiotte, Lefebvre | Python Code:
import pandas as pd
import networkx as nx
import community
import itertools
import matplotlib.pyplot as plt
import numpy as np
import re
%matplotlib inline
Explanation: ACM Digital Library bibliometric analysis of legacy software
An incomplete bibliographical inquiry into what the ACM Digital Library has to say about legacy.
Basically, two research questions come to mind immediately. Firstly, have legacy-related publications been on the rise? Secondly, what subtopics can be analyzed?
The first question could afford analysing knowledge being captured in new concepts and practices, e.g. refactoring or SOA. The second question could afford validation against qualitative methods.
End of explanation
legacybib = pd.read_csv("ACMDL201612108240806.csv")
Explanation: Data loading, sanitization and massage
Search "legacy" at ACM Digital Library. Just a simple search, to which the web interface gives 1541 results in mid November 2016. The number of items in the library is ~460000.
A CSV is downloaded from ACM DL, in default sorting order of the library's own idea of relevance... whatever that means for them. BibTeX is also available.
End of explanation
legacybib.columns
Explanation: The available data columns are
End of explanation
legacybib.head(3)
Explanation: A peek at the topmost data items.
End of explanation
assert legacybib.id.duplicated().sum() == 0, legacybib.id.duplicated().sum()  # fails if duplicate ids exist
legacybib[legacybib.id.duplicated(keep=False)].head(2 * 2)
Explanation: Does the id field uniquely identify items on the search list? If so, using it as index could be a good idea.
End of explanation
legacybib.dtypes
Explanation: OK, apparently it is used for deduplication. Who knows why the items appear twice in the downloaded list.
What datatypes did Pandas infer from the CSV
End of explanation
legacybib.keywords.fillna('', inplace=True)
legacybib.keywords = legacybib.keywords.map(lambda l: [k.lower().strip() for k in l.split(',') if k])
Explanation: Massage the keywords to be lists. Note that str.split(',') returns a [''], therefore the little if filter in there.
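For example, ''.split(',') evaluates to [''], so an empty keyword field would otherwise become a one-element list containing the empty string; the if k filter drops it and leaves [].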
End of explanation
legacybib[legacybib.year.isnull()].year
Explanation: Are any items missing the year?
End of explanation
acmPerYearData = { 1951: 43, 1952: 77, 1953: 34, 1954: 71, 1955: 72, 1956: 162, 1957: 144, 1958: 234, 1959: 335,
1960: 302, 1961: 521, 1962: 519, 1963: 451, 1964: 537, 1965: 561, 1966: 633, 1967: 754, 1968: 669, 1969: 907,
1970: 800, 1971: 1103, 1972: 1304, 1973: 1704, 1974: 1698, 1975: 1707, 1976: 2086, 1977: 1943, 1978: 2235, 1979: 1687,
1980: 2152, 1981: 2241, 1982: 2578, 1983: 2485, 1984: 2531, 1985: 2608, 1986: 3143, 1987: 3059, 1988: 3827, 1989: 4155,
1990: 4313, 1991: 4551, 1992: 5019, 1993: 5107, 1994: 5939, 1995: 6179, 1996: 6858, 1997: 7181, 1998: 8003, 1999: 7628,
2000: 9348, 2001: 8691, 2002: 10965, 2003: 11624, 2004: 14493, 2005: 16715, 2006: 19222, 2007: 19865, 2008: 21631, 2009: 23827,
2010: 27039, 2011: 25985, 2012: 27737, 2013: 25832, 2014: 26928, 2015: 27131, 2016: 25557, 2017: 39}
acmPerYear = pd.Series(acmPerYearData)
Explanation: Complementary data
To contextualize the legacy search results, get the number of total publications in ACM per year.
These were semimanually extracted from the ACM DL search results listing DOM, with the following Javascript
javascript
acmYearly = {};
theChartData.labels.forEach(function(y) {acmYearly[y] = theChartData.datasets[0].data[theChartData.labels.indexOf(y)]});
console.log(acmYearly);
End of explanation
round(len(legacybib) / 1541 * 100, 2)
Explanation: Data overview
Let's check what percentage of the 1541 search results shown by the website made it into the downloaded list. It would be great if this was 100%.
End of explanation
len(legacybib.id.unique())
Explanation: With the above peek at the ID field, how many unique items did we receive in the download?
End of explanation
round(len(legacybib.id.unique()) / 1541 * 100, 2)
Explanation: Ok capped at 1000 I guess, which brings the percentage of the website search results available to us down to
End of explanation
legacybib.year.hist(bins=legacybib.year.max() - legacybib.year.min(), figsize=(10,2))
Explanation: Data exploration
Fraction of published items per year which ACM identifies as relevant for legacy search.
Histogram of publication years
End of explanation
acmPerYear.plot(figsize=(10, 2))
Explanation: What about the ACM Digital Library total, what does its profile look like over time?
End of explanation
#plt.hist(legacybib.year.dropna(), label="Year histogram")
plt.plot(legacybib.year.groupby(legacybib.year).count(), label='legacy publication')
plt.plot(acmPerYear * 0.003, label="total publications * 0.003")
plt.legend()
plt.legend(loc='best')
Explanation: Similar overall shape, which isn't a surprise. Overlay, with the ACM total arbitrarily scaled (multiplied by 0.003, i.e. roughly divided by 300) so the two curves are comparable.
End of explanation
plt.plot(pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear), 'o')
Explanation: Right, so they seem to have somewhat similar shape. Legacy as a concept lagged behind the overall ACM DL, until it caught up through increasing growth during the 1990s.
What about the ratio of this subset of the whole ACM DL? Has it increased or decreased over time? Ie. has the proportion of publications about legacy changed?
End of explanation
legacybib[legacybib.year <= 1990][["year", "title"]].sort_values("year")
Explanation: All the pre-1990 publications are:
End of explanation
legacybib[legacybib.year > 1990][["year", "title"]].sort_values("year").head(10)
Explanation: And over 1000 publications after 1990, until 2016. First 10 of which are
End of explanation
pre1990range = np.arange(legacybib.year.min(), 1991)
post1990range = np.arange(1990, legacybib.year.max())
# Linear regression models
# note the use of np.polyfit
propLm = np.polyfit(pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear).dropna().index, pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear).dropna(), 1)
pre1990 = np.polyfit(pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear)[pre1990range].dropna().index, pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear)[pre1990range].dropna(), 1)
post1990 = np.polyfit(pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear)[post1990range].dropna().index, pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear)[post1990range].dropna(), 1)
# Plot the fractions of legacy vs. all publications, the models, and a legend
plt.plot(pd.Series(legacybib.groupby(legacybib.year).year.count() / acmPerYear), 'o')
plt.plot(np.arange(legacybib.year.min(), legacybib.year.max()), np.poly1d(propLm)(np.arange(legacybib.year.min(), legacybib.year.max())), label="global lm")
plt.plot(pre1990range, np.poly1d(pre1990)(pre1990range), linestyle="dashed", label="pre 1990 lm")
plt.plot(post1990range, np.poly1d(post1990)(post1990range), linestyle="dashed", label="post 1990 lm")
plt.title("Fraction of legacy related publications against ACM")
plt.legend(loc="best")
Explanation: Did something happen around 1990s, as the fraction of publications related to legacy started increasing? Let's plot a global linear regression model, as well as separate linear regression models before and after 1990.
End of explanation
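To put a number on the trend rather than just eyeballing the fitted lines, scipy.stats.linregress reports the slope together with a p-value (a sketch; the scipy import is not part of the original notebook):
from scipy import stats

frac = (legacybib.groupby(legacybib.year).year.count() / acmPerYear).dropna()
slope, intercept, r, p, stderr = stats.linregress(frac.index, frac.values)
print("slope=%.3g, r^2=%.3f, p=%.3g" % (slope, r**2, p))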
# this could be a pandas.Series instead of dict
keywordhist = {}
for kws in legacybib.keywords:
for k in kws:
if k in keywordhist:
keywordhist[k] = keywordhist[k] + 1
else:
keywordhist[k] = 1
Explanation: Statistical validation of the above would be good, of course, to check against randomness.
A histogram of keywords
The keywords are interesting. All keywords in this dataset are already related to legacy one way or the other, since the data under inspection here is a subset of the total ACM Digital Library.
Keywords of course live a life of their own, and I guess they increase in number forever.
Which keywords are popular?
End of explanation
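As the comment in the cell above hints, the tally does not have to be a hand-rolled dict; a minimal alternative with collections.Counter:
from collections import Counter

# Count every keyword occurrence across all items
keywordhist_alt = Counter(k for kws in legacybib.keywords for k in kws)
keywordhist_alt.most_common(10)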
legacybib.keywords.map(lambda kws: len(kws)).describe()
plt.title("Histogram of numbers of keywords per item")
plt.hist(legacybib.keywords.map(lambda kws: len(kws)), bins=max(legacybib.keywords.map(lambda kws: len(kws))) - 1)
Explanation: How many keywords does each item have?
End of explanation
legacybib[legacybib.keywords.map(lambda kws: len(kws)) > 15][["id", "title", "author", "keywords"]]
Explanation: Ok almost 400 items are without any keywords. There are some outliers, let's inspect the ones with more than 15 keywords. Sounds excessive...
End of explanation
[keywordlist for keywordlist in legacybib[legacybib.keywords.map(lambda kws: len(kws)) > 15].keywords]
Explanation: And the keyword lists for the above
End of explanation
len(keywordhist)
Explanation: That is excessive, but seems legit to me.
Total number of unique keywords:
End of explanation
[(k, keywordhist[k]) for k in sorted(keywordhist, key=keywordhist.get, reverse=True) if keywordhist[k] >= 10]
Explanation: Of which occur in 10 or more items in the subset
End of explanation
[(k, keywordhist[k]) for k in sorted(keywordhist, key=keywordhist.get, reverse=True) if keywordhist[k] < 10 and keywordhist[k] >= 3]
Explanation: and further those that occur in 3-10 items
End of explanation
len([k for k in keywordhist if keywordhist[k] == 2])
Explanation: Of the remainder, number of keywords which appear on only two items
End of explanation
len([k for k in keywordhist if keywordhist[k] == 1])
Explanation: and only on one item
End of explanation
sorted([(k, keywordhist[k]) for k in keywordhist if re.match("legacy", k)], key=lambda k: k[1], reverse=True)
Explanation: Keywords with 'legacy' in them
End of explanation
keywordg = nx.Graph()
legacybib.keywords.map(lambda item: keywordg.add_edges_from([p for p in itertools.permutations(item, 2)]), na_action='ignore')
print("Number of components", len([comp for comp in nx.connected_components(keywordg)]))
print("Largest ten components sizes", sorted([len(comp) for comp in nx.connected_components(keywordg)], reverse=True)[:10])
Explanation: Network analysis of keywords
Keywords are a comma separated list in keywords, let's pull all of them out to a graph.
An analysis of which keywords are actually plentiful, their temporal distribution, centrality metrics, subgraph overlap etc. would be great.
End of explanation
nx.write_gexf(keywordg, "keywordg.gexf")
Explanation: So there is one dominant component, and 150 small ones. It's best to explore them interactively with Gephi.
End of explanation
fig, (ax1, ax2) = plt.subplots(1,2)
fig.set_size_inches(10, 2)
ax1.set_title("Keyword degree histogram")
ax1.plot(nx.degree_histogram(keywordg))
ax1.vlines([keywordg.degree(l) for l in keywordg if re.match('legacy', l)], ax1.get_ylim()[0], ax1.get_ylim()[1], colors='green')
ax2.set_title("Keyword degree diagram, log/log")
ax2.loglog(nx.degree_histogram(keywordg))
Explanation: Degree distribution of the keyword graph, i.e. are there a few hub nodes with huge degree and then a large number of nodes with far fewer connections, as in a power-law (scale-free) network. Additionally, let's see where the keywords with the word legacy in them are placed, by indicating them with green vertical lines. In the left diagram below, hubs are towards the right.
End of explanation
keywordgDegrees = pd.Series(keywordg.degree()).sort_values(ascending=False)
keywordgDegrees.head(3)
Explanation: Eyeballing the above, most of the legacy keywords are where the mass of the distribution is, ie. at low degrees. One of the legacy nodes is a top hub, and there are some in the mid-ranges.
The top 3 keywords with the highest degree, ie. towards the right of the above graph are:
End of explanation
def plotNeighborhood(graph, ego, color = "green", includeEgo = False):
from math import sqrt
    """Plot neighbourhood of keyword in graph, after possibly removing the ego.
graph : networkx.Graph-like graph
The graph to get the neighbourhood from
ego : node in graph
The node whose neighbourhood to plot
color : string
Name of the color to use for plotting
includeEgo : bool
Include the ego node
The function defaults to removing the ego node, because by definition
it is connected to each of the nodes in the subgraph. With the ego
removed, the result basically tells how the neighbours are connected
    with one another.
    """
plt.rcParams["figure.figsize"] = (10, 10)
subgraph = nx.Graph()
if includeEgo:
subgraph = graph.subgraph(graph.neighbors(ego) + [ego])
else:
subgraph = graph.subgraph(graph.neighbors(ego))
plt.title("Neighbourhood of " + ego + " (" + str(len(subgraph)) + ")")
plt.axis('off')
pos = nx.spring_layout(subgraph, k = 1/sqrt(len(subgraph) * 2))
nx.draw_networkx(subgraph,
pos = pos,
font_size = 9,
node_color = color,
alpha = 0.8,
edge_color = "light" + color)
plt.show()
plotNeighborhood(keywordg, "legacy systems")
plotNeighborhood(keywordg, "legacy software")
Explanation: Let's plot the top hub out.
End of explanation
def plotCommunities(graph):
    """Plot community information from a graph.
Basically just copied from http://perso.crans.org/aynaud/communities/index.html
    at this point, while in development.
    """
# zoom in on something, for dev. purposes
graph = graph.subgraph(graph.neighbors('legacy software'))
# graph = [c for c in nx.connected_component_subgraphs(graph)][0]
graph = max(nx.connected_component_subgraphs(graph), key=len) # I love you Python
partition = community.best_partition(graph)
size = float(len(set(partition.values())))
pos = nx.spring_layout(graph)
count = 0
for com in set(partition.values()):
count = count + 1
list_nodes = [nodes for nodes in partition.keys() if partition[nodes] == com]
plt.axis('off')
nx.draw_networkx_nodes(graph, pos, list_nodes, node_size = 40, node_color = str(count/size), alpha=0.4)
nx.draw_networkx_labels(graph, pos, font_size = 9)
nx.draw_networkx_edges(graph, pos, alpha=0.1)
plt.show()
plotCommunities(keywordg)
Explanation: Communities
Community detection with the Louvain algorithm, explained in Blondel, Guillaume, Lambiotte, Lefebvre: Fast unfolding of communities in large networks (2008). For weighted networks, the modularity of a partition is $Q = \frac{1}{2m}\sum_{i, j} \Big[A_{ij} - \frac{k_i k_j}{2m}\Big] \delta(c_i, c_j)$, where $A_{ij}$ is the weight matrix, $k_i = \sum_j A_{ij}$ is the weighted degree of node $i$, $c_i$ is the community of $i$, the $\delta$-function is 1 if $c_i = c_j$ and 0 otherwise, and $m = \frac{1}{2}\sum_{i,j}A_{ij}$.
End of explanation |
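For reference, the quality of the partition found above can be quantified with the same python-louvain module (a sketch; it recomputes a partition on the largest component rather than reusing plotCommunities' internal state):
subg = max(nx.connected_component_subgraphs(keywordg), key=len)
part = community.best_partition(subg)
print("Modularity Q =", community.modularity(part, subg))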
7,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 2
Imports
Step1: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
\begin{equation}
\int_{0}^{a}{\sqrt{a^2 - x^2}} dx=\frac{\pi a^2}{4}
\end{equation}
Step3: Integral 2
\begin{equation}
\int_{0}^{\infty} e^{-ax^2} dx =\frac{1}{2}\sqrt{\frac{\pi}{a}}
\end{equation}
Step4: Integral 3
\begin{equation}
\int_{0}^{\infty} \frac{x}{e^x-1} dx =\frac{\pi^2}{6}
\end{equation}
Step5: Integral 4
\begin{equation}
\int_{0}^{\infty} \frac{x}{e^x+1} dx =\frac{\pi^2}{12}
\end{equation}
Step6: Integral 5
\begin{equation}
\int_{0}^{1} \frac{\ln x}{1-x} dx =-\frac{\pi^2}{6}
\end{equation} | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return (np.sqrt(a**2 - x**2))
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, a, args=(a,))
return I
def integral_exact(a):
return (0.25*np.pi*a**2)
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
\begin{equation}
\int_{0}^{a}{\sqrt{a^2 - x^2}} dx=\frac{\pi a^2}{4}
\end{equation}
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return np.exp(-a*x**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.sqrt(np.pi/a)
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 2
\begin{equation}
\int_{0}^{\infty} e^{-ax^2} dx =\frac{1}{2}\sqrt{\frac{\pi}{a}}
\end{equation}
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return x/(np.exp(x)-1)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return (1/6.0)*np.pi**2
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 3
\begin{equation}
\int_{0}^{\infty} \frac{x}{e^x-1} dx =\frac{\pi^2}{6}
\end{equation}
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return x/(np.exp(x)+1)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return (1/12.0)*np.pi**2
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 4
\begin{equation}
\int_{0}^{\infty} \frac{x}{e^x+1} dx =\frac{\pi^2}{12}
\end{equation}
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return np.log(x)/(1-x)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, 1, args=(a,))
return I
def integral_exact(a):
return (-1.0/6.0)*np.pi**2
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 5
\begin{equation}
\int_{0}^{1} \frac{\ln x}{1-x} dx =-\frac{\pi^2}{6}
\end{equation}
End of explanation |
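A quick programmatic sanity check of one of the results above (a sketch; this helper is not part of the exercise):
def check_integral(integrand, exact, a=1.0, lower=0, upper=np.inf):
    # Compare quad's numerical estimate against the closed-form value
    numerical, _ = integrate.quad(integrand, lower, upper, args=(a,))
    assert np.isclose(numerical, exact(a)), (numerical, exact(a))

check_integral(lambda x, a: x/(np.exp(x) + 1), lambda a: np.pi**2/12)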
7,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explicit 1D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from
Step1: Generate the data with noise
Step2: Plot inline and save image
Step3: Output json and csv data
Step4: Output clean json and csv data | Python Code:
from pypge.benchmarks import explicit
import numpy as np
# visualization libraries
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# plot the visuals in ipython
%matplotlib inline
Explanation: Explicit 1D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from:
Koza_01
Koza_02
Koza_03
Lipson_01
Lipson_02
Lipson_03
Nguyen_01
Nguyen_02
Nguyen_03
Nguyen_04
Nguyen_05
Nguyen_06
Nguyen_07
Nguyen_08
Imports
End of explanation
# Set your output directories
img_dir = "../img/benchmarks/explicit/"
data_dir = "../data/benchmarks/explicit/"
# choose your problem here
prob = explicit.Nguyen_04(noise=0.01,npts=1000)
# you can also specify the following params as keyword arguments
#
# params = {
# 'name': "Koza_01",
# 'xs_str': ["x"],
# 'eqn_str': "x**4 + x**3 + x**2 + x",
# 'xs_params': [ (-4.0,4.0) ],
# 'npts': 200,
# 'noise': 1.0
# }
# or make your own with the following
#
# explicit.Explicit_1D(params):
Explanation: Generate the data with noise
End of explanation
print prob['name'], prob['eqn']
print prob['xpts'].shape
fig = plt.figure()
fig.set_size_inches(16, 12)
plt.plot(prob['xpts'][0], prob['ypure'], 'r.')
plt.legend(loc='center left', bbox_to_anchor=(0.67, 0.12))
plt.title(prob['name'] + " Clean", fontsize=36)
plt.savefig(img_dir + prob['name'].lower() + "_clean.png", dpi=200)
# plt.show()
### You can only do one of 'savefig()' or 'show()'
fig = plt.figure()
fig.set_size_inches(16, 12)
plt.plot(prob['xpts'][0], prob['ypts'], 'b.')
plt.legend(loc='center left', bbox_to_anchor=(0.67, 0.12))
plt.title(prob['name'] + " Noisy", fontsize=36)
plt.savefig(img_dir + prob['name'].lower() + "_noisy.png", dpi=200)
# plt.show()
Explanation: Plot inline and save image
End of explanation
data = np.array([prob['xpts'][0], prob['ypts']]).T
print data.shape
cols = [['x', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + ".json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + ".csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
Explanation: Output json and csv data
End of explanation
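The same table could also be written via pandas (a sketch; the "_pandas" file names are made up for illustration):
import pandas as pd

df = pd.DataFrame(data, columns=cols[0])
df.to_csv(data_dir + prob['name'].lower() + "_pandas.csv", index=False)
df.to_json(data_dir + prob['name'].lower() + "_pandas.json", orient="values")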
data = np.array([prob['xpts'][0], prob['ypure']]).T
print data.shape
cols = [['x', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + "_clean.json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + "_clean.csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
Explanation: Output clean json and csv data
End of explanation |
7,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find the best $\alpha$ for $p = 7, N_y = 256, N_z = 2048$
Step1: Find the best $\alpha$ for $p = 15, N_y = 128, N_z = 1024$
Step2: Find the best $\alpha$ for $p = 31, N_y = 64, N_z = 512$ | Python Code:
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 7
Nl = 257
Nz = 2049
yms = []; y1s = []; y10s = []; zms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
alpha = 0.1
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(.1, 257)
plt.figure(figsize=(256,1))
plot_mesh(yl)
plt.savefig('mesh_8.png')
print(yl)
Explanation: Find the best $\alpha$ for $p = 7, N_y = 256, N_z = 2048$
End of explanation
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 16
Nl = 129
Nz = 1025
yms = []; y1s = []; y10s = []; zms = []; cms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
cms.append(cm)
alpha = 0.22
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(0.22, 129)
plt.figure(figsize=(128,1))
plot_mesh(yl)
plt.savefig('mesh_16.png')
print(yl)
Explanation: Find the best $\alpha$ for $p = 15, N_y = 128, N_z = 1024$
End of explanation
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 31
Nl = 65
Nz = 513
yms = []; y1s = []; y10s = []; zms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
alpha = 0.36
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(.36, 65)
plt.figure(figsize=(64,1))
plot_mesh(yl)
plt.savefig('mesh_32.png')
print(yl)
Explanation: Find the best $\alpha$ for $p = 31, N_y = 64, N_z = 512$
End of explanation |
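The three sweeps above differ only in (N, Nl, Nz); a small helper could factor out the repetition (a sketch reusing the mesh / wall_units helpers defined earlier in the notebook):
def sweep_alpha(N, Nl, Nz, Re=2000, n_alpha=16):
    # evaluate wall units over a range of stretching parameters alpha
    alphs = np.linspace(0, np.pi/2, n_alpha, endpoint=False)
    results = [wall_units(mesh(alph, Nl), Nz, N, Re) for alph in alphs]
    return alphs, results

# e.g. the p = 31 case
alphs, results = sweep_alpha(N=31, Nl=65, Nz=513)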
7,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code block cell to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """ Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """ Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were told to make a prediction about any passenger aboard the RMS Titanic who we did not know anything about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers as a whole did not survive the ship sinking.
The function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. Consider, for example, all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])
Explanation: Answer: Predictions have an accuracy of 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
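For example, two additional views one might inspect before settling on a model (same survival_stats helper; these particular filters are only suggestions):
# Survival by passenger class for males older than 10
survival_stats(data, outcomes, 'Pclass', ["Sex == 'male'", "Age > 10"])
# Survival by age for third-class female passengers
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])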
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
if passenger['Pclass'] == 3 and passenger['Age'] >= 30:
predictions.append(0)
else:
predictions.append(1)
else:
if passenger['Age'] <= 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
7,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>lcacoffee</h1>
script that displays coffees sold by hour at lca2015.
Currently it opens a .json file and converts it into a python dict.
It's missing monday data.
sale by hour is
Step1: Need to open all the sale[day].json files and append data.
Step2: Read rows in pandas only to 12.
Step3: What the hell. This was working the other day.
the filza = (lcacoffee folder + each saleday file.json)
Step4: The daysales appended into the allsales file are in fact lists.
Options include removing the lists and turning it all into one big json object
Appending all the data into one day.
eg
Step5: ok if i cycle through jdunp between 0 and 23 i get the results.
cycle through ints but as a string. must add ' '
Break down coffee sales by hour.
Ignore/delete hours with zero sales.
Need to create new dict with this data.
How would it look?
Step6: I need to filter the - - from the results. I really only need the values that have numbers.
Take number in brackets away from number not in brackets.
The number in brackets is the total amount of coffees sold. The number not in brackets is the amount of vouchers used.
The number that I get when I take one away from the other is the coffee sold without vouchers.
New dict that shows only the times that coffee was sold and the amount of coffees that were sold. Maybe that would work.
Step7: How come it is only adding the wednesday data in the results. It needs to have all the datas.
Needs to take the number in brackets away from the number not in brackets.
Step8: json total and grandt total are useless. remove this so it doesn't muck up when I add everything together.
Don't want to manualy remove the files. Maybe edit the files with a script to remove the total and grand total.
Checked the json for grand total and total and removes them if they are there.
Search through the json | Python Code:
import json
import os
import pandas
import getpass
theuser = getpass.getuser()
Explanation: <h1>lcacoffee</h1>
script that displays coffees sold by hour at lca2015.
Currently it opens a .json file and converts it into a python dict.
It's missing monday data.
sale by hour is: key - the hour (24hr). Value is total paid sales (people who ran out of vouchers). Number in brackets is total number of coffees sold.
Number in brackets - Number not in brackets = amount of coffees sold with vouchers.
Each sale[day].json file contains a json objects. Important keys/values are:
Product : cappuccino/flat white etc...
Tags : These are taged with coffee followed by ; and if its free ;.
count : amount of these coffees sold.
SKU : ? guess its a ID?
Sales ($) : Amount of dollars from selling coffee
Tax ($) : Amount of tax paid from selling the coffee
Revenue : This is Sales + Tax
Costs ($) : Cost to make the coffees.
Revenue ($) : Amount of revenue made from selling the coffee
Margin ($) : Percent of money made from selling coffee.
There are other keys but the value of them is empty. No point dealing with them.
Open up all the sale[day].json files and create a all salesfile. Append all together.
Amount of revenue made for each product.
Create list of all coffees
Reverse Engineer this - Sell coffee rather than looking at sales.
count +1 on product count. cost/tax etc goes up.
Add date + time that coffee was sold.
For each count how much cost/tax/revenue.
End of explanation
jsonfold = ('/home/' + theuser + '/github/lcacoffee/')
alldata = ('salebyhour.json')
tueda = ('saletues.json')
weda = ('saleweds.json')
thura = ('salethurs.json')
fria = ('salefri.json')
salhr = (jsonfold + alldata)
salajs = ('/home/wcmckee/cofres/salesall.json')
Explanation: Need to open all the sale[day].json files and append data.
End of explanation
fripan = pandas.read_json('/home/wcmckee/github/lcacoffee/salefri.json')
fricvs = fripan.to_csv()
savcsv = open('/home/wcmckee/cofres/saltest.csv', 'w')
savcsv.write(fricvs)
savcsv.close()
pnrdz = pandas.read_csv('/home/wcmckee/cofres/saltest.csv', index_col=7)
pnrdz[0:13]
Explanation: Read rows in pandas only to 12.
End of explanation
lisz = [tueda, weda, thura, fria]
opaz = open('/home/wcmckee/cofres/salesall.json', 'w')
for lis in lisz:
    opaz = open(salajs, 'a')
    # build this day's file path inside the loop so it changes on every iteration
    filza = ('/home/' + theuser + '/github/lcacoffee/' + lis)
    print (filza)
    opdayz = open(filza, 'r')
opaz.write(str(opdayz.read()))
opaz.close()
opdayz.close()
opaz.close()
opzall = open(salajs, 'r')
Explanation: What the hell. This was working the other day.
the filza = (lcacoffee folder + each saleday file.json)
End of explanation
opzall.read()
opcvs = open(salhr, 'r')
opzrd = opcvs.read()
jdunp = json.loads(opzrd)
valia = []
#pandas.read_json(jdunp)
jdunp.count(int)
len(jdunp)
Explanation: The daysales appended into the allsales file are in fact lists.
Options include removing the lists and turning it all into one big json object
Appending all the data into one day.
eg: Product: "cappuccino" Count: 450 (total of week).
File needs sterilizing, removing the \n characters.
End of explanation
for numtwn in range(0,24):
print "'" + str(numtwn) + "'"
for jdr in jdunp:
print jdr['0']
for numtwn in range(0,24):
print "'" + str(numtwn) + "'"
for dej in jdunp:
print dej.values()
valia.append(dej.values())
dezrand = len(valia)
azlis = []
for vals in valia:
print vals
azlis.append(vals)
Explanation: ok if i cycle through jdunp between 0 and 23 i get the results.
cycle through ints but as a string. must add ' '
Break down coffee sales by hour.
Ignore/delete hours with zero sales.
Need to create new dict with this data.
How would it look?
End of explanation
betra = []
for azl in azlis:
betra.append(azl)
anoe = []
anez = []
for betr in betra:
betr.append(anoe)
for deta in betr:
#print deta
if '- -' in deta:
print deta
else:
anez.append(deta)
fdic = []
for resut in anez:
print resut
fdic.append(resut)
Explanation: I need to filter the - - from the results. I really only need the values that have numbers.
Take number in brackets away from number not in brackets.
The number in brackets is the total amount of coffees sold. The number not in brackets is the amount of vouchers used.
The number that I get when I take one away from the other is the coffee sold without vouchers.
New dict that shows only the times that coffee was sold and the amount of coffees that were sold. Maybe that would work.
End of explanation
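A sketch of the bracket arithmetic described above (it assumes each value is a string like '3 (5)', i.e. paid sales followed by the bracketed total):
import re

def voucher_coffees(value):
    # '3 (5)' -> 5 coffees in total, 3 paid, so 2 went on vouchers
    paid, total = re.match(r'\s*(\d+)\s*\((\d+)\)', value).groups()
    return int(total) - int(paid)

print(voucher_coffees('3 (5)'))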
fdic
optue = open('/home/wcmckee/Downloads/saletues.json', 'r')
rdtue = optue.read()
tuejs = json.loads(rdtue)
tuejs
Explanation: How come it is only adding the Wednesday data in the results? It needs to have all the data.
Needs to take the number in brackets away from the number not in brackets.
End of explanation
saltax = []
salcoun = []
for bran in tuejs:
#print bran['Revenue ($)']
print bran['Sales incl. tax ($)']
saltax.append(bran['Sales incl. tax ($)'])
salcoun.append(bran['Count'])
satxtot = sum(saltax)
satxtot
for bran in tuejs:
#print bran['Revenue ($)']
print bran['Product']
salcoun
Explanation: json total and grand total are useless. remove this so it doesn't muck up when I add everything together.
Don't want to manualy remove the files. Maybe edit the files with a script to remove the total and grand total.
Checked the json for grand total and total and removes them if they are there.
Search through the json
End of explanation |
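A minimal sketch of that clean-up (it assumes the total rows can be recognised by their Product field; the output path is made up for illustration):
cleaned = [bran for bran in tuejs
           if 'total' not in str(bran.get('Product', '')).lower()]
with open('/home/wcmckee/cofres/saletues-clean.json', 'w') as outf:
    json.dump(cleaned, outf, indent=4)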
7,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prerequisites
This notebook contains examples which are expected to be run with exactly 4 MPI processes; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to
Step1: In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.
Step2: Overview of MPI in Devito
Distributed-memory parallelism via MPI is designed so that users can "think sequentially" for as much as possible. The few things requested to the user are
Step3: An Operator will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed in a later section.
Let's start by creating a TimeFunction.
Step4: Domain decomposition is performed when creating a Grid. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the Grid over the available MPI processes. Since u is defined over a decomposed Grid, its data get distributed too.
Step5: Globally, u consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is
Step6: The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment u.data[0, 0] = u.data[3, 3] will raise an exception unless both entries belong to the same MPI rank).
We can finally write out a trivial Operator to try running something.
Step7: And we can now check again the (distributed) content of our u.data
Step8: Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
Step9: Hang on. There's nothing MPI-specific here! At least apart from the header file #include "mpi.h". What's going on? Well, it's simple. Devito was smart enough to realize that this trivial Operator doesn't even need any sort of halo exchange -- the Eq implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want try again with a proper stencil Eq.
Step10: Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines
Step11: This is again a global data view. The shown with_halo is the "true" halo surrounding the physical domain, not the halo used for the MPI halo exchanges (often referred to as "ghost region"). So it gets trivial for a user to initialize the "true" halo region (which is typically read by a stencil Eq when an Operator iterates in proximity of the domain bounday).
Step12: MPI and SparseFunction
A SparseFunction represents a sparse set of points which are generically unaligned with the Grid. A sparse point could be anywhere within a grid, and is therefore attached some coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, logically assigns it to a given MPI process; this is purely logical ownership, as in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it. Within op.apply, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.
In the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.
Step13: Let
Step14: Performance optimizations
The Devito compiler applies several optimizations before generating code.
Redundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same Function update and the data is not “dirty” yet.
Computation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in background during the compute part.
Halo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.
To run with all these optimizations enabled, instead of DEVITO_MPI=1, users should set DEVITO_MPI=full, or, equivalently
Step15: We could now peek at the generated code to see that things now look differently. | Python Code:
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
Explanation: Prerequisites
This notebook contains examples which are expected to be run with exactly 4 MPI processes; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to:
Install an MPI distribution on your system, such as OpenMPI, MPICH, or Intel MPI (if not already available).
Install some optional dependencies, including mpi4py and ipyparallel; from the root Devito directory, run
pip install -r requirements-optional.txt
Create an ipyparallel MPI profile, by running our simple setup script. From the root directory, run
./scripts/create_ipyparallel_mpi_profile.sh
Launch and connect to an ipyparallel cluster
We're finally ready to launch an ipyparallel cluster. Open a new terminal and run the following command
ipcluster start --profile=mpi -n 4
Once the engines have started successfully, we can connect to the cluster
End of explanation
%%px --group-outputs=engine
from mpi4py import MPI
print(f"Hi, I'm rank %d." % MPI.COMM_WORLD.rank)
Explanation: In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.
End of explanation
%%px
from devito import configuration
configuration['mpi'] = True
%%px
# Keep generated code as simple as possible
configuration['language'] = 'C'
# Fix platform so that this notebook can be tested by py.test --nbval
configuration['platform'] = 'knl7210'
Explanation: Overview of MPI in Devito
Distributed-memory parallelism via MPI is designed so that users can "think sequentially" for as much as possible. The few things requested to the user are:
Like any other MPI program, run with mpirun -np X python ...
Some pre- and/or post-processing may be rank-specific (e.g., we may want to plot on a given MPI rank only), even though this might be hidden away in the next Devito releases, when newer support APIs will be provided.
Parallel I/O (if and when necessary) to populate the MPI-distributed datasets in input to a Devito Operator. If a shared file system is available, there are a few simple alternatives to pick from, such as NumPy’s memory-mapped arrays.
To enable MPI, users have two options. Either export the environment variable DEVITO_MPI=1 or, programmatically:
End of explanation
%%px
from devito import Grid, TimeFunction, Eq, Operator
grid = Grid(shape=(4, 4))
u = TimeFunction(name="u", grid=grid, space_order=2, time_order=0)
Explanation: An Operator will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed in a later section.
Let's start by creating a TimeFunction.
End of explanation
%%px --group-outputs=engine
u.data
Explanation: Domain decomposition is performed when creating a Grid. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the Grid over the available MPI processes. Since u is defined over a decomposed Grid, its data get distributed too.
End of explanation
%%px
u.data[0, 1:-1, 1:-1] = 1.
%%px --group-outputs=engine
u.data
Explanation: Globally, u consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is: for the user, the fact that u.data is distributed is completely abstracted away -- the perception is that of indexing into a classic NumPy array, regardless of whether MPI is enabled or not. All sort of NumPy indexing schemes (basic, slicing, etc.) are supported. For example, we can write into a slice-generated view of our data.
End of explanation
%%px
op = Operator(Eq(u.forward, u + 1))
summary = op.apply(time_M=0)
Explanation: The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment u.data[0, 0] = u.data[3, 3] will raise an exception unless both entries belong to the same MPI rank).
We can finally write out a trivial Operator to try running something.
End of explanation
%%px --group-outputs=engine
u.data
Explanation: And we can now check again the (distributed) content of our u.data
End of explanation
%%px --targets 0
print(op)
Explanation: Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
End of explanation
%%px --targets 0
op = Operator(Eq(u.forward, u.dx + 1))
print(op)
Explanation: Hang on. There's nothing MPI-specific here! At least apart from the header file #include "mpi.h". What's going on? Well, it's simple. Devito was smart enough to realize that this trivial Operator doesn't even need any sort of halo exchange -- the Eq implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want to try again with a proper stencil Eq.
End of explanation
%%px --group-outputs=engine
u.data_with_halo
Explanation: Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines:
haloupdate0 performs a blocking halo exchange, relying on three additional functions, gather0, sendrecv0, and scatter0;
gather0 copies the (generally non-contiguous) boundary data into a contiguous buffer;
sendrecv0 takes the buffered data and sends it to one or more neighboring processes; then it waits until all data from the neighboring processes is received;
scatter0 copies the received data into the proper array locations.
This is the simplest halo exchange scheme available in Devito. There are a few, and some of them apply aggressive optimizations, as shown later on.
Before looking at other scenarios and performance optimizations, there is one last thing it is worth discussing -- the data_with_halo view.
End of explanation
%%px
u.data_with_halo[:] = 1.
%%px --group-outputs=engine
u.data_with_halo
Explanation: This is again a global data view. The shown with_halo is the "true" halo surrounding the physical domain, not the halo used for the MPI halo exchanges (often referred to as "ghost region"). So it gets trivial for a user to initialize the "true" halo region (which is typically read by a stencil Eq when an Operator iterates in proximity of the domain boundary).
End of explanation
%%px
from devito import Function, SparseFunction
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
Explanation: MPI and SparseFunction
A SparseFunction represents a sparse set of points which are generically unaligned with the Grid. A sparse point could be anywhere within a grid, and is therefore attached some coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, logically assigns it to a given MPI process; this is purely logical ownership, as in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it. Within op.apply, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.
In the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.
End of explanation
%%px
sf.data[:] = 5.
op = Operator(sf.inject(field=f, expr=sf))
summary = op.apply()
%%px --group-outputs=engine
f.data
Explanation: Let:
* O be a grid point
* x be a halo point
* A, B, C, D be the sparse points
We show the global view, that is what the user "sees".
O --- O --- O --- O
| A | | |
O --- O --- O --- O
| | C | B |
O --- O --- O --- O
| | D | |
O --- O --- O --- O
And now the local view, that is what the MPI ranks own when jumping to C-land.
```
Rank 0 Rank 1
O --- O --- x x --- O --- O
| A | | | | |
O --- O --- x x --- O --- O
| | C | | C | B |
x --- x --- x x --- x --- x
Rank 2 Rank 3
x --- x --- x x --- x --- x
| | C | | C | B |
O --- O --- x x --- O --- O
| | D | | D | |
O --- O --- x x --- O --- O
```
We observe that the sparse points along the boundary of two or more MPI ranks are duplicated and thus redundantly computed over multiple processes. However, the contributions from these points to the neighboring halo points are naturally ditched, so the final result of the interpolation is as expected. Let's convince ourselves that this is the case. We assign a value of $5$ to each sparse point. Since we are using linear interpolation and all points are placed at the exact center of a grid quadrant, we expect that the contribution of each sparse point to a neighboring grid point will be $5 * 0.25 = 1.25$. Based on the global view above, we eventually expect f to look like as follows:
1.25 --- 1.25 --- 0.00 --- 0.00
| | | |
1.25 --- 2.50 --- 2.50 --- 1.25
| | | |
0.00 --- 2.50 --- 3.75 --- 1.25
| | | |
0.00 --- 1.25 --- 1.25 --- 0.00
Let's check this out.
End of explanation
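For reference, the hand-computed expectation from the text spelled out as a NumPy array (only a convenience for comparing against the gathered output; it is not built by the notebook itself):
%%px --targets 0
import numpy as np
# Each sparse point spreads 5 * 0.25 = 1.25 to its four neighbouring grid points
expected_f = np.array([[1.25, 1.25, 0.00, 0.00],
                       [1.25, 2.50, 2.50, 1.25],
                       [0.00, 2.50, 3.75, 1.25],
                       [0.00, 1.25, 1.25, 0.00]])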
%%px
configuration['mpi'] = 'full'
Explanation: Performance optimizations
The Devito compiler applies several optimizations before generating code.
Redundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same Function update and the data is not “dirty” yet.
Computation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in background during the compute part.
Halo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.
To run with all these optimizations enabled, instead of DEVITO_MPI=1, users should set DEVITO_MPI=full, or, equivalently
End of explanation
%%px
op = Operator(Eq(u.forward, u.dx + 1))
# Uncomment below to show code (it's quite verbose)
# print(op)
Explanation: We can now peek at the generated code to see that things look different.
End of explanation |
7,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
Cloud Statistics
<hr>
Notebook Summary
This notebook explores Landsat 7 and Landsat 8 Data Cubes and reports cloud statistics
for selected regions within a cube. This is valuable information for performing analyses.
For example, if there are extensive clouds for a season it may significantly impact the
mosaic product or index values. Another example is that a user may want to find a single
date when there are few clouds to assess land features.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platforms and Products
Get the Extents of the Cube
Define the Extents of the Analysis
Calculate the Cloud Coverage Percentage for Each Pixel
Create a Table of Cloud Coverage Percentage for Each Date
Create a Plot of Cloud Coverage Percentage for Each Date
Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period
Review an RGB Scene for a Selected Time Slice
<span id="import">Import Dependencies and Connect to the Data Cube ▴</span>
Step1: <span id="plat_prod">Choose Platforms and Products ▴</span>
List available products for each platform
Step2: Choose products
<p style="color:red";><b>CHANGE INPUTS BELOW
Step3: <span id="extents">Get the Extents of the Cube ▴</span>
Step4: Visualize the available area
Step5: <span id="define_extents">Define the Extents of the Analysis ▴</span>
<p style="color:red";><b>CHANGE INPUTS BELOW
Step6: Visualize the selected area
Step7: <span id="calc_cloud_coverage">Calculate the Cloud Coverage Percentage for Each Pixel▴</span>
Step8: <span id="create_cloud_cov_table">Create a Table of Cloud Coverage Percentage for Each Date▴</span>
Step9: <span id="plot_cloud_cov">Create a Plot of Cloud Coverage Percentage for Each Date▴</span>
Step10: <span id="pct_clear_img">Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period▴</span>
Step11: <span id="rgb_time_slice">Review an RGB Scene for a Selected Time Slice▴</span>
Step12: <p style="color:red";><b>CHANGE INPUTS BELOW | Python Code:
# Enable importing of utilities.
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import numpy as np
import xarray as xr
import pandas as pd
import matplotlib.pyplot as plt
# Load Data Cube Configuration
import datacube
import utils.data_cube_utilities.data_access_api as dc_api
api = dc_api.DataAccessApi()
dc = api.dc
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
Explanation: <a id="top"></a>
Cloud Statistics
<hr>
Notebook Summary
This notebook explores Landsat 7 and Landsat 8 Data Cubes and reports cloud statistics
for selected regions within a cube. This is valuable information for performing analyses.
For example, if there are extensive clouds for a season it may significantly impact the
mosaic product or index values. Another example is that a user may want to find a single
date when there are few clouds to assess land features.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platforms and Products
Get the Extents of the Cube
Define the Extents of the Analysis
Calculate the Cloud Coverage Percentage for Each Pixel
Create a Table of Cloud Coverage Percentage for Each Date
Create a Plot of Cloud Coverage Percentage for Each Date
Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period
Review an RGB Scene for a Selected Time Slice
<span id="import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
# Get available products
products_info = dc.list_products()
# List LANDSAT 7 products
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
# List LANDSAT 8 products
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
Explanation: <span id="plat_prod">Choose Platforms and Products ▴</span>
List available products for each platform
End of explanation
# These are the platforms (satellites) and products (datacube sets)
# used for this demonstration. Uncomment only 1 set.
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
# platform = 'LANDSAT_8'
# product = 'ls8_l2_c2'
# collection = 'c2'
# level = 'l2'
band_no_data_values = dc.list_measurements().loc[product, 'nodata']
Explanation: Choose products
<p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_time import dt_to_str
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
# Print the extents of the data.
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", list(map(dt_to_str, (min_max_dates[0], min_max_dates[1]))))
Explanation: <span id="extents">Get the Extents of the Cube ▴</span>
End of explanation
from utils.data_cube_utilities.dc_display_map import display_map
display_map(full_lat, full_lon)
Explanation: Visualize the available area
End of explanation
# Select an analysis region (Lat-Lon) within the extents listed above.
# Select a time period (Min-Max) within the extents listed above (Year-Month-Day)
# This region and time period will be used for the cloud assessment
# Nairobi, Kenya
latitude = (-1.3407, -1.2809)
longitude = (36.7640, 36.9206)
# Mombasa, Kenya
# latitude = (-4.12, -3.975)
# longitude = (39.55, 39.7)
# Mau Forest - Western Kenya
# latitude = (-0.13406, 0.21307)
# longitude = (35.28322, 35.56681)
# Dar es Salaam, Tanzania
# latitude = (-7.0, -6.7)
# longitude = (39.1, 39.4)
# Lake Sulunga, Tanzania
# latitude = (-6.2622, -5.8822)
# longitude = (34.9802, 35.3602)
# Freetown, Sierra Leone
# latitude = (8.3267, 8.5123)
# longitude = (-13.3109, -13.1197 )
# Vietnam
# latitude = (10.9358, 11.0358)
# longitude = (107.1899, 107.2899)
# Ghanas
# latitude = (5.5, 5.7) # Accra
# longitude = (-0.4, 0.0) # Accra
# Time Period
time_extents = ('2016-01-01', '2016-01-31')
Explanation: <span id="define_extents">Define the Extents of the Analysis ▴</span>
<p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation
display_map(latitude,longitude)
Explanation: Visualize the selected area
End of explanation
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_invalid, landsat_qa_clean_mask
def build_cloud_coverage_table_landsat(product,
platform,
collection,
level,
latitude,
longitude,
time = None,
dc = None,
extra_band = 'green',
band_no_data_values = None):
dc = dc if dc is not None else datacube.Datacube(app = "")
load_params = dict(platform=platform,
product=product,
latitude = latitude,
longitude = longitude,
measurements = [extra_band, 'pixel_qa'],
group_by='solar_day')
if time is not None:
load_params["time"] = time
landsat_dataset = dc.load(**load_params)
clean_mask = landsat_qa_clean_mask(landsat_dataset, platform=platform,
collection=collection, level=level) & \
landsat_clean_mask_invalid(landsat_dataset, platform, collection, level)
data_mask = xr.full_like(clean_mask, True)
if band_no_data_values is not None:
for data_var_name in landsat_dataset.data_vars:
band_data_mask = landsat_dataset[data_var_name] != band_no_data_values[data_var_name]
data_mask = data_mask & band_data_mask
clean_data_mask = clean_mask & data_mask
landsat_dataset = landsat_dataset.where(clean_data_mask)
times = list(landsat_dataset.time.values)
scene_slice_list = list(map(lambda t: landsat_dataset.sel(time = str(t)), times))
clean_data_mask_list = [clean_data_mask.sel(time=str(time)).values for time in clean_data_mask.time.values]
# Calculate the percentage of all pixels which are not cloud.
percentage_list = [clean_data_mask.mean()*100 for clean_data_mask in clean_data_mask_list]
clean_pixel_count_list = list(map(np.sum, clean_data_mask_list))
data = {"times": times,
"clean_percentage": percentage_list,
"clean_count": clean_pixel_count_list }
return landsat_dataset, pd.DataFrame(data=data, columns=["times", "clean_percentage", "clean_count"]), \
clean_mask, data_mask, clean_data_mask
extra_band = 'green'
landsat_dataset, coverage_table, clean_mask, data_mask, clean_data_mask = \
build_cloud_coverage_table_landsat(product = product,
platform = platform,
collection = collection,
level = level,
latitude = latitude,
longitude = longitude,
time = time_extents,
extra_band=extra_band,
band_no_data_values=band_no_data_values)
Explanation: <span id="calc_cloud_coverage">Calculate the Cloud Coverage Percentage for Each Pixel▴</span>
End of explanation
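For reference, the clean_percentage reported by the function above is just the mean of the combined clear-and-has-data boolean mask for each time slice, expressed as a percentage:
$$ \text{clean percentage}(t) = 100 \times \frac{\#\{\text{pixels at time } t \text{ that are clear and have data}\}}{\#\{\text{all pixels in the scene}\}} $$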
pd.set_option('display.max_rows', len(coverage_table))
coverage_table
Explanation: <span id="create_cloud_cov_table">Create a Table of Cloud Coverage Percentage for Each Date▴</span>
End of explanation
plt.figure(figsize = (15,5))
plt.plot(coverage_table["times"].values, coverage_table["clean_percentage"].values, 'bo', markersize=8)
plt.title("Percentage of Clean (not cloud) Pixels for Each Time Slice")
plt.show()
Explanation: <span id="plot_cloud_cov">Create a Plot of Cloud Coverage Percentage for Each Date▴</span>
End of explanation
# We are really plotting the fraction of times that are not no_data which are clear.
# This is done to account for regions filled with no_data - such as when querying across path/rows.
clear_and_data_per_px = clean_data_mask.sum(dim='time')
data_per_px = data_mask.sum(dim='time')
frac_clear_per_data_per_px = clear_and_data_per_px / data_per_px
num_cbar_ticks = 8 # The number of ticks to use for the colorbar.
quad_mesh = (frac_clear_per_data_per_px).plot(figsize=(12,10),cmap = "RdYlGn", vmin=0, vmax=1)
plt.show()
print("Percent of pixels with data: {:.2%}".format(data_mask.mean().values))
print("Percent of pixels that are clear: {:.2%}".format(clean_mask.mean().values))
print("Percent of pixels that are clear and have data: {:.2%}".format(clean_data_mask.mean().values))
(frac_clear_per_data_per_px == 0).sum() / frac_clear_per_data_per_px.count()
print("Number of pixels which have no non-cloud data:", (frac_clear_per_data_per_px == 0).sum().values)
print("Total number of pixels:", frac_clear_per_data_per_px.count().values)
Explanation: <span id="pct_clear_img">Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period▴</span>
End of explanation
# Load the data to create an RGB image
landsat_dataset = dc.load(latitude = latitude,
longitude = longitude,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue'],
group_by='solar_day')
Explanation: <span id="rgb_time_slice">Review an RGB Scene for a Selected Time Slice▴</span>
End of explanation
from utils.data_cube_utilities.dc_rgb import rgb
# CHANGE HERE >>>>>>>>>>>>>>
time_ind = 0 # The acquisition to select. The first acquisition has index 0.
# Select one of the time slices and create an RGB image.
# Time slices are numbered from 0 to x and shown in the table above
# Review the clean_percentage values above to select scenes with few clouds
# Clouds will be visible in WHITE and cloud-shadows will be visible in BLACK
rgb(landsat_dataset.isel(time=time_ind), width=12)
plt.show()
Explanation: <p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation |
7,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1st Order ODE
Let's solve
Step1: Try smaller timestep
Step2: Our numeric result is more accurate when we use a smaller timestep, but it's still not perfect
2nd Order ODE
Let's solve
Step3: Try again, with a smaller timestep | Python Code:
y_0 = 1
t_0 = 0
t_f = 10
def dy_dt(y):
return .5*y
def analytic_solution_1st_order(t):
return np.exp(.5*t)
dt = .5
t_array = np.arange(t_0, t_f, dt)
y_array = np.empty_like(t_array)
y_array[0] = y_0
for i in range(len(y_array)-1):
y_array[i+1] = y_array[i] + (dt * dy_dt(y_array[i]))
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_1st_order(t_array), label="analytic")
plt.legend(loc="best")
plt.xlabel("t")
plt.ylabel("y")
Explanation: 1st Order ODE
Let's solve:
$$ \dot{y}(t) = .5 \cdot y(t)$$
with the initial condition:
$$ y(t=0)=1 $$
End of explanation
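The numerical scheme used in the code above is the explicit (forward) Euler step,
$$ y_{n+1} = y_n + \Delta t \, \dot{y}(y_n) = y_n + \Delta t \cdot 0.5\, y_n, $$
which is compared against the exact solution $ y(t) = e^{0.5 t} $ implemented in analytic_solution_1st_order.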
dt = .1
t_array = np.arange(t_0, t_f, dt)
y_array = np.empty_like(t_array)
y_array[0] = y_0
for i in range(len(y_array)-1):
y_array[i+1] = y_array[i] + (dt * dy_dt(y_array[i]))
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_1st_order(t_array), label="analytic")
plt.legend(loc="best")
plt.xlabel("t")
plt.ylabel("y")
Explanation: Try smaller timestep
End of explanation
y_0 = 1
dy_dt_0 = 0
#which gives us:
z_0 = (y_0, dy_dt_0)
t_0 = 0
t_f = 10
def dz_dt(z):
y, dy_dt = z #unpack the z vector
return np.array([dy_dt, -5*y])
def analytic_solution_2nd_order(t):
return np.cos(np.sqrt(5)*t)
dt = .1
t_array = np.arange(t_0, t_f, dt)
z_list = [z_0]
for i in range(len(t_array)-1):
z_list.append(z_list[i] + dt*dz_dt(z_list[i]))
z_array = np.array(z_list)
y_array = z_array[:,0]
dy_dt_array = z_array[:,1]
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_2nd_order(t_array), label="analytic")
plt.legend(loc="best")
Explanation: Our numeric result is more accurate when we use a smaller timestep, but it's still not perfect
2nd Order ODE
Let's solve:
$$ \ddot{y}(t) = - 5 y(t)$$
with the initial conditions:
$$ y(t=0)=1 $$
$$ \dot{y}(t=0) = 0 $$
To do this we need to convert our 2nd order ODE into two 1st order ODEs. Let's define a new vector:
$$ z \equiv (y, \dot{y}) $$
For this vector, we have its derivative:
$$ \dot{z} = (\dot{y}, - 5 y) $$
End of explanation
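With this notation the same forward Euler update applies componentwise,
$$ z_{n+1} = z_n + \Delta t \, \dot{z}(z_n), $$
and the exact solution used for comparison is $ y(t) = \cos(\sqrt{5}\, t) $, which indeed satisfies $ \ddot{y}(t) = -5\cos(\sqrt{5}\,t) = -5\,y(t) $ with $ y(0)=1 $ and $ \dot{y}(0)=0 $.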
dt = .01
t_array = np.arange(t_0, t_f, dt)
z_list = [z_0]
for i in range(len(t_array)-1):
z_list.append(z_list[i] + dt*dz_dt(z_list[i]))
z_array = np.array(z_list)
y_array = z_array[:,0]
dy_dt_array = z_array[:,1]
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_2nd_order(t_array), label="analytic")
plt.legend(loc="best")
Explanation: Try again, with a smaller timestep
End of explanation |
7,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pattern Mining - Association Rule Mining
A frequent pattern is a substructure that appears frequently in a dataset. Finding the frequent patterns of a dataset is an essential step in data mining tasks such as feature extraction and a necessary ingredient of association rule learning. These kinds of algorithms are extremely useful in the field of Market Basket Analysis, which in turn provides retailers with invaluable information about their customers' shopping habits and needs.
Here, I will briefly describe the GraphLab Create frequent pattern mining toolkit, the tools it provides and its functionality. A major advantage of this high-level ML toolkit is the ease it provides to train an association rule mining algorithm, as well as the high interpretability of the returned results. Under the hood the GLC frequent pattern mining toolkit runs a TFP-Growth algorithm, introduced by Wang, Jianyong, et al. in 2005. For a recent review of the various directions in the field consult Han, Jiawei, et al. "Frequent pattern mining: current status and future directions.", Data Mining and Knowledge Discovery 15.1 (2007): 55-86.
Step1: A simple retailer example
Step2: As we can see below, all the coffee products have similar sale frequencies and there is no some particular subset of products that is more preferred than the remaining ones.
Step3: Next, we split the bakery_sf data set in a training and a test part.
Step4: In order to run a frequent pattern mining algorithm, we require an item column, (the column 'Item' in this example), and a set of feature columns that uniquely identify a transaction (the columns ['Receipt', 'StoreNum'] in this example, since we need to take into account the geographical location of each store and the accompanying socio-economic criteria that may exist).
In addition we need to specify the 3 basic parameters of the FP-Growth algorithm which is called by the high-level GraphLab Create (GLC) function. These are
Step5: Here, we obtain the most frequent feature patterns.
Step6: Note that the 'pattern' column contains the patterns that occur frequently together, whereas the 'support' column contains the number of times these patterns occur together in the entire dataset.
In this example, the pattern
Step7: Alternatively, by decreasing the min_support one can obtain more patterns of sold coffee products which are again assumed frequent but with respect to this new threshold.
Step8: To see some details of the trained model
Step9: Top-k frequent patterns
In practice, we rarely know the appropriate min_support threshold to use. As an alternative to specifying a minimum support, we can specify a maximum number of patterns to mine using the max_patterns parameter. Instead of mining all patterns above a minimum support threshold, we mine the most frequent patterns until the maximum number of closed patterns is found. For large data sets, this mining process can be time-consuming. We recommend specifying a somewhat large initial minimum support bound to speed up the mining.
Step10: The top-5 most frequent patterns are
Step11: We can always save the trained model by calling
Step12: Business Use Case
Step13: Feature Engineering to help other ML Tasks in Pipeline
The .extract_features() method
Using the set of closed patterns, we can convert pattern data to binary feature vectors. These feature vectors can be used for other machine learning tasks, such as clustering or classification. For each input pattern x, the j-th extracted feature f_{x}[j] is a binary indicator of whether the j-th closed pattern is contained in x.
First, we train the top100_freq_patterns model as shown below
Step14: Here are the 100 unique closed patterns which are found frequent
Step15: Next, we apply the extract_features() method of this newly trained top100_freq_patterns model on the test data set.
Step16: Once the features are extracted, we can use them downstream in other applications such as clustering, classification, churn prediction, recommender systems etc.
Step17: Example
Step18: Next, we count the instances that each of the top100_freq_patterns occurs per EmpId.
Step19: Finally, we train a kmeans algorithm to produce a 3 centered cluster.
Step20: And we can provide a countplot of the Number of Stores per Cluster Id as below. | Python Code:
import graphlab as gl
from graphlab import aggregate as agg
from visualization_helper_functions import *
Explanation: Pattern Mining - Association Rule Mining
A frequent pattern is a substructure that appears frequently in a dataset. Finding the frequent patterns of a dataset is an essential step in data mining tasks such as feature extraction and a necessary ingredient of association rule learning. These kinds of algorithms are extremely useful in the field of Market Basket Analysis, which in turn provides retailers with invaluable information about their customers' shopping habits and needs.
Here, I will briefly describe the GraphLab Create frequent pattern mining toolkit, the tools it provides and its functionality. A major advantage of this high-level ML toolkit is the ease it provides to train an association rule mining algorithm, as well as the high interpretability of the returned results. Under the hood the GLC frequent pattern mining toolkit runs a TFP-Growth algorithm, introduced by Wang, Jianyong, et al. in 2005. For a recent review of the various directions in the field consult Han, Jiawei, et al. "Frequent pattern mining: current status and future directions.", Data Mining and Knowledge Discovery 15.1 (2007): 55-86.
Load GraphLab Create and Necessary Helper Functions
End of explanation
bakery_sf = gl.SFrame('./bakery_sf')
bakery_sf
Explanation: A simple retailer example: Loading Data, Exploratory Data Analysis
Here we discuss a simple example of receipt data from a bakery. The dataset consists of items like 'ApplePie' and 'GanacheCookie'. The task is to identify sets of items that are frequently bought together. The dataset consists of 266209 rows and 6 columns which look like the following. The dataset was constructed by modifying the Extended BAKERY dataset.
End of explanation
%matplotlib inline
item_freq_plot(bakery_sf, 'Item', ndigits=3, topk=30,
seaborn_style='whitegrid', seaborn_palette='deep', color='b')
Explanation: As we can see below, all the coffee products have similar sale frequencies and there is no some particular subset of products that is more preferred than the remaining ones.
End of explanation
(train, test) = bakery_sf.random_split(0.8, seed=1)
print 'Number of Rows in training set [80pct of Known Examples]: %d' % train.num_rows()
print 'Number of Rows in test set [20pct of Known Examples]: %d' % test.num_rows()
Explanation: Next, we split the bakery_sf data set in a training and a test part.
End of explanation
min_support = int(train.num_rows()*0.001)
model = gl.frequent_pattern_mining.create(train, 'Item',
features=['Receipt', 'StoreNum'],
min_support=min_support,
max_patterns=500,
min_length=4)
Explanation: In order to run a frequent pattern mining algorithm, we require an item column, (the column 'Item' in this example), and a set of feature columns that uniquely identify a transaction (the columns ['Receipt', 'StoreNum'] in this example, since we need to take into account the geographical location of each store and the accompanying socio-economic criteria that may exist).
In addition we need to specify the 3 basic parameters of the FP-Growth algorithm which is called by the high-level GraphLab Create (GLC) function. These are:
min_support: The minimum number of times that a pattern must occur in order to be considered a frequent one. Here, we choose a threshold of 1‰ of total transactions in record to be the min_support.
max_patterns: The maximum number of frequent patterns to be mined.
min_length: The minimum size (number of elements in the set) of each pattern being mined.
End of explanation
print 'The most frequent feature patters are:'
print '-----------------------------------------'
model.frequent_patterns.print_rows(max_column_width=80, max_row_width=90)
Explanation: Here, we obtain the most frequent feature patterns.
End of explanation
min_support = int(train.num_rows()*0.001)
model = gl.frequent_pattern_mining.create(train, 'Item',
features=['Receipt', 'StoreNum'],
min_support=min_support,
max_patterns=500,
min_length=3)
print 'The most frequent feature patters are:'
print '-----------------------------------------'
model.frequent_patterns.print_rows(num_rows=35, max_column_width=80, max_row_width=90)
Explanation: Note that the 'pattern' column contains the patterns that occur frequently together, whereas the 'support' column contains the number of times these patterns occur together in the entire dataset.
In this example, the pattern:
[CoffeeEclair, HotCoffee, ApplePie, AlmondTwist]
occurred 877 times in the training data.
Definition
A frequent pattern is a set of items with a support greater than user-specified minimum support threshold.
However, there is significant redundancy in mining frequent patterns; every subset of a frequent pattern is also frequent (e.g. 'CoffeeEclair' must be frequent if ['CoffeeEclair', 'HotCoffee'] is frequent). The frequent pattern mining toolkit avoids this redundancy by mining the closed frequent patterns, i.e. frequent patterns with no superset of the same support. This is achieved by the very design of the TFP-Growth Algorithm.
Note that by relaxing the min_length requirement, one can obtain more frequent patterns of sold coffee products.
End of explanation
min_support = int(train.num_rows()*(1e-04))
model = gl.frequent_pattern_mining.create(train, 'Item',
features=['Receipt', 'StoreNum'],
min_support=min_support,
max_patterns=500,
min_length=4)
print 'The most frequent feature patters are:'
print '-----------------------------------------'
model.frequent_patterns.print_rows(num_rows=60, max_row_width=90, max_column_width=80)
Explanation: Alternatively, by decreasing the min_support one can obtain more patterns of sold coffee products which are again assumed frequent but with respect to this new threshold.
End of explanation
print model
Explanation: To see some details of the trained model:
End of explanation
min_support = int(train.num_rows()*1e-03)
top5_freq_patterns = gl.frequent_pattern_mining.create(train, 'Item',
features=['Receipt', 'StoreNum'],
min_support=min_support,
max_patterns=5,
min_length=4)
Explanation: Top-k frequent patterns
In practice, we rarely know the appropriate min_support threshold to use. As an alternative to specifying a minimum support, we can specify a maximum number of patterns to mine using the max_patterns parameter. Instead of mining all patterns above a minimum support threshold, we mine the most frequent patterns until the maximum number of closed patterns is found. For large data sets, this mining process can be time-consuming. We recommend specifying a somewhat large initial minimum support bound to speed up the mining.
End of explanation
print top5_freq_patterns
Explanation: The top-5 most frequent patterns are:
End of explanation
top5_freq_patterns.save('./top5_freq_patterns_model')
Explanation: We can always save the trained model by calling:
End of explanation
predictions = top5_freq_patterns.predict(test)
predictions.print_rows(max_row_width=100)
Explanation: Business Use Case: Compute Association Rules and Make Predictions
An association rule is an ordered pair of item sets (prefix \( A \), prediction \( B \)) denoted \( A\Rightarrow B \) such that \( A \) and \( B \) are disjoint whereas \( A\cup B \) is frequent. The most popular criterion for scoring association rules is the confidence of the rule: the ratio of the support of \( A\cup B \) to the support of \( A \).
\[ \textrm{Confidence}(A\Rightarrow B) = \frac{\textrm{Supp}(A\cup B)}{\textrm{Supp}(A)}. \]
The confidence of the rule \( A\Rightarrow B \) is our empirical estimate of the conditional probability for \( B \) given \( A \).
One can make predictions using the predict() or predict_topk() method for single and multiple predictions respectively. The output of both the methods is an SFrame with the following columns:
prefix: The antecedent or left-hand side of an association rule. It must be a frequent pattern and a subset of the associated pattern.
prediction: The consequent or right-hand side of the association rule. It must be disjoint of the prefix.
confidence: The confidence of the association rule as defined above.
prefix support: The frequency of the prefix pattern in the training data.
joint support: The frequency of the co-occurrence ( prefix + prediction) in the training data
End of explanation
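To make the confidence definition concrete, it can be recomputed by hand from the support columns of the returned SFrame. This is only a small sketch: it assumes that indexing a row of the predictions SFrame returns a dict and that the column names are exactly those listed above.
# hedged check on one returned rule
rule = predictions[0]
confidence_by_hand = rule['joint support'] / float(rule['prefix support'])
print confidence_by_hand, rule['confidence']  # the two values should agree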
top100_freq_patterns = gl.frequent_pattern_mining.\
create(train, 'Item',
features=['Receipt', 'StoreNum'],
# occurs at least once in our data record
min_support=1,
# do not search for more than 100 patterns
max_patterns = 100,
# test data have only one coffee product sold per tid .
# We search for patterns of at least 2 coffee products
min_length=2)
Explanation: Feature Engineering to help other ML Tasks in Pipeline
The .extract_features() method
Using the set of closed patterns, we can convert pattern data to binary feature vectors. These feature vectors can be used for other machine learning tasks, such as clustering or classification. For each input pattern x, the j-th extracted feature f_{x}[j] is a binary indicator of whether the j-th closed pattern is contained in x.
First, we train the top100_freq_patterns model as shown below:
End of explanation
top100_freq_patterns.frequent_patterns.\
print_rows(num_rows=100, max_row_width=90, max_column_width=80)
Explanation: Here are the 100 unique closed patterns which are found frequent:
End of explanation
features = top100_freq_patterns.extract_features(train)
Explanation: Next, we apply the extract_features() method of this newly trained top100_freq_patterns model on the test data set.
End of explanation
features.print_rows(num_rows=10, max_row_width=90, max_column_width=100)
Explanation: Once the features are extracted, we can use them downstream in other applications such as clustering, classification, churn prediction, recommender systems etc.
End of explanation
emps = train.groupby(['Receipt', 'StoreNum'],
{'EmpId': agg.SELECT_ONE('EmpId')})
emps
Explanation: Example: Employee Space Clustering by using occurrences of frequency patterns
First, we provide an aggregated form of our data by selecting one Selling Employee (EmpId) at random.
End of explanation
emp_space = emps.join(features).\
groupby('EmpId', {'all_features': agg.SUM('extracted_features')})
emp_space
Explanation: Next, we count the instances that each of the top100_freq_patterns occurs per EmpId.
End of explanation
cl_model = gl.kmeans.create(emp_space,
features = ['all_features'],
num_clusters=3)
emp_space['cluster_id'] = cl_model['cluster_id']['cluster_id']
emp_space
Explanation: Finally, we train a kmeans algorithm to produce a 3 centered cluster.
End of explanation
%matplotlib inline
segments_countplot(emp_space, x='cluster_id',
figsize_tuple=(12,7), title='Number of Stores per Cluster ID')
Explanation: And we can provide a countplot of the Number of Stores per Cluster Id as below.
End of explanation |
7,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks with 17flowers
A simple deep learning example on how to start classifying images with your own data.
This notebook is expected to be executed after 17flowers_data.ipynb (for data creation).
Setup
After preparing the necessary environment, please make sure to have these files where you execute this notebook.
https://github.com/roebius/deeplearning1_keras2/tree/master/nbs
- utils.py
- vgg16.py
Step1: Use a pretrained VGG model with our Vgg16 class
Step2: The original pre-trained Vgg16 class classifies images into one of the 1000 categories. This number of categories depends on the dataset which Vgg16 was trained with. (http
Step3: Use Vgg16 for basic image recognition
Let's grab batches of data from our test folder
Step4: With finetuning, the model vgg outputs the probability of 17 classes. As you can see below, they are generally relevant to the input images.
Note that these images are not used during model training.
Step5: Without finetune?
Step6: Again get some images for prediction
Step7: Without finetuning, the model vgg2 outputs the probability of 1000 classes. As you can see below, they are sometimes totally irrelevant. | Python Code:
from __future__ import division, print_function
%matplotlib inline
path = "data/17flowers/"
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
# check that ~/.keras/keras.json is set for Theano and includes "image_data_format": "channels_first"
from importlib import reload # Python 3
import utils; reload(utils)
from utils import plots
Explanation: Convolutional Neural Networks with 17flowers
A simple deep learning example on how to start classifying images with your own data.
This notebook is expected to be executed after 17flowers_data.ipynb (for data creation).
Setup
After preparing the necessary environment, please make sure to have these files where you execute this notebook.
https://github.com/roebius/deeplearning1_keras2/tree/master/nbs
- utils.py
- vgg16.py
End of explanation
# As large as you can, but no larger than 64 is recommended.
batch_size = 8
#batch_size = 64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
Explanation: Use a pretrained VGG model with our Vgg16 class
End of explanation
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
# please note that the vgg model is compiled inside the finetune method.
vgg.finetune(batches)
vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
Explanation: The original pre-trained Vgg16 class classifies images into one of the 1000 categories. This number of categories depends on the dataset which Vgg16 was trained with. (http://image-net.org/challenges/LSVRC/2014/browse-synsets)
In order to classify images into the categories which we prepare (17 categories of flowers, in this notebook), fine-tuning technology is useful. It:
- keeps most of the weights from the pre-trained Vgg16 model, modifying only a small part of them
- changes the dimension of the output layer (from 1000 to 17, in this notebook)
End of explanation
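Conceptually, the finetune step boils down to something like the following Keras sketch. This is an illustration only — the real implementation lives in vgg16.py and may differ in its details; model here stands for the underlying Keras Sequential model (e.g. vgg.model), which is an assumption on my part.
# sketch: swap the 1000-way softmax for a fresh 17-way softmax and freeze the rest
from keras.layers import Dense
model.pop()                                   # drop the original 1000-class output layer
for layer in model.layers:
    layer.trainable = False                   # keep the pre-trained weights fixed
model.add(Dense(17, activation='softmax'))    # new output layer sized for the 17 flower classes
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])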
batches = vgg.get_batches(path+'test', batch_size=5)
imgs,labels = next(batches)
print(labels)
#plots(imgs, titles=labels)
plots(imgs)
Explanation: Use Vgg16 for basic image recognition
Let's grab batches of data from our test folder:
End of explanation
vgg.predict(imgs, True)
vgg.classes[:20]
Explanation: With finetuning, the model vgg outputs the probability of 17 classes. As you can see below, they are generally relevant to the input images.
Note that these images are not used during model training.
End of explanation
# make new Vgg16 instance
vgg2 = Vgg16()
# need to compile the model before using
vgg2.compile()
Explanation: Without finetune?
End of explanation
batches = vgg2.get_batches(path+'test', batch_size=5)
imgs,labels = next(batches)
print(labels)
plots(imgs)
Explanation: Again get some images for prediction
End of explanation
vgg2.predict(imgs, True)
vgg2.classes[:20]
Explanation: Without finetuning, the model vgg2 outputs the probability of 1000 classes. As you can see below, they are sometimes totally irrelevant.
End of explanation |
7,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2
Step1: Perform and verify the SVD decomposition
Step2: For an image of your choice, pick different values for the approximation to the original image
Step3: Exercise 2
Step4: Write a function that, given any matrix, returns its pseudoinverse using the SVD decomposition. Write another function that solves any system of equations of the form Ax=b using this pseudoinverse.
Step5: 2. Play with the system Ax=b where A=[[1,1],[0,0]] and b can take different values.
(a) Observe what happens if b is in the image of A (state what that image is) and if it is not (e.g. b = [1,1]).
Step6: (b) Answer: is the resulting solution unique? If there is more than one solution, investigate what characterizes the returned solution.
Step7: (c) Repeat with A=[[1,1],[0,1e-32]]. Is the solution unique in this case? Does the returned value of x change for each possible value of b from the previous part?
Step8: Exercise 3
Read the file study_vs_sat.csv and store it as a pandas data frame.
Step9: Pose this as an optimization problem that tries to fit an approximation of the form sat_score ~ alpha + beta*study_hours by minimizing the sum of squared prediction errors. What is the gradient of the function to be optimized (hint: the variables we want to optimize are alpha and beta)?
Step10: Write a function that receives values of alpha, beta and the study_hours vector and returns a numpy array of predictions alpha + beta*study_hours_i, one value per individual
Step11: Define a numpy array X with two columns, the first with ones in every entry and the second with the study_hours variable. Note that X[alpha,beta] returns alpha + beta study_hours_i in each entry, so the problem becomes sat_score ~ X*[alpha,beta]
Step12: Compute the pseudoinverse X^+ of X and compute (X^+)*sat_score to obtain the solution alpha and beta.
Step13: Compare the previous solution with the direct exact-solution formula (alpha,beta)=(X^tX)^(-1)X^t*sat_score
Step14: Use the matplotlib library to visualize the predictions with the solution alpha and beta against the actual sat_score values. | Python Code:
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
#url = sys.argv[1]
url = 'pikachu.png'
img = Image.open(url)
imggray = img.convert('LA')
Explanation: Homework 2: Linear Algebra and SVD Decomposition
Linear Algebra and Optimization Theory
1. Why is a matrix equivalent to a linear transformation between vector spaces?
Multiplying by a matrix implies scalings, dimension reductions and rotations that can be observed in geometric space. All of these transformations are linear in the sense that they can be represented as combinations of the form $\alpha*x + \beta$, where $\alpha$ and $\beta$ are parameters determined by the matrix.
2. What is the effect, as a linear transformation, of a diagonal matrix and of an orthogonal matrix?
A diagonal matrix rescales each of the columns of the vector or matrix to which the transformation is applied, while an orthogonal matrix produces an isometric transformation that can be of any of three types: a rotation, a translation, or a reflection.
3. What is the Singular Value Decomposition (SVD) of a matrix?
It is the factorization of a matrix into the product of three matrices: the matrix of eigenvectors, the singular values, and the transpose of the matrix of right eigenvectors. The decomposition is carried out as follows: $ A = U \Sigma V^{T} $, where U = matrix of eigenvectors, $\Sigma$ = matrix of singular values, i.e. the square roots of the eigenvalues of $A^{T}A$, and $V^{T}$ = transposed matrix of right eigenvectors. The SVD has many applications in the areas of principal component analysis, dimensionality reduction, image compression, and so on.
4. What does it mean to diagonalize a matrix, and what do the eigenvectors represent?
It is the factorization of the matrix into the three basic matrices mentioned above (eigenvectors, eigenvalues and the inverse of the eigenvector matrix). This can be written as $A = PDP^{-1}$, where P = matrix of eigenvectors, D = diagonal matrix of eigenvalues and $P^{-1}$ = inverse of the eigenvector matrix. The eigenvectors of a linear transformation are vectors for which multiplying them by the transformation matrix is equivalent to multiplying them by a scalar, called the 'eigenvalue' ($ Ax = \lambda x $). This means that when the linear transformation is applied to these vectors (i.e. they are multiplied by the transformation matrix) they do not change direction; only their length changes, or their sense (when the eigenvalue is negative).
5. Intuitively, what are the eigenvectors?
They can be interpreted as Cartesian axes of the linear transformation.
6. How do you interpret the singular value decomposition as a composition of three simple types of linear transformations?
The three matrices that make up the SVD can be seen as simple transformations: an initial rotation, a scaling along the principal axes (the singular values), and a final rotation.
7. What is the relationship between the singular value decomposition and diagonalization?
The SVD is a generalization of the diagonalization of a matrix. When a matrix is not square it is not diagonalizable, but it can still be decomposed via the SVD. Likewise, using the SVD of a matrix it is possible to solve linear systems that do not have a unique solution: the returned solution is the least-squares fit (and, of course, if the system does have a solution, that solution is returned).
8. How is the singular value decomposition used to give a lower-rank approximation of a matrix?
In the SVD of an mxn matrix we obtain a decomposition into three matrices whose product gives back the complete original matrix. However, thanks to the properties of the decomposition, we can use only the most relevant principal components, discarding columns of the matrix $U_{m \times r}$, rows or columns of the singular value matrix $\Sigma_{r \times r}$ and rows of the matrix $V_{r \times n}^{T}$. Their product still has the mxn size of the original matrix, but with a certain error; the more principal components, columns and vectors of U, S, VT we keep, the better the reconstruction of the original matrix. This decomposition is very useful for compressing information and for principal component analysis (PCA).
9. Describe the gradient descent minimization method
It is an iterative, first-order minimization method for a function given its gradient. Knowing the function to optimize, we compute its gradient (the vector of partial derivatives); we then pick a random starting point, substitute its coordinates into the gradient vector, determine in which component of the gradient (x, y or z) the value is most negative, and take a small step in that direction, thereby approaching the local minimum, and so on until reaching a point of convergence (a local minimum). Its applications include: finding local minima of functions, solving systems of linear equations, and solving systems of non-linear equations.
10. Give 4 examples of optimization problems (two with constraints and two without constraints) that you find interesting as a data scientist
With constraints: economic growth models, optimal taxation models. Without constraints: intertemporal maximization of resource extraction from a mine, variance minimization.
Python Applications
Exercise 1
Receive the path of an image file and convert it into a numeric matrix representing the black-and-white version of the image
End of explanation
imggrayArray = np.array(list(imggray.getdata(band=0)), float)
imggrayArray.shape = (imggray.size[1], imggray.size[0])
imggrayArray = np.matrix(imggrayArray)
plt.imshow(imggray)
plt.show()
u, s, v = np.linalg.svd(imggrayArray)
print("U: ")
print(u)
print("S: ")
print(s)
print("V: ")
print(v)
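# Added check (not in the original notebook): the exercise also asks to *verify* the
# decomposition, so reconstruct the matrix from U, S, V and compare with the original.
reconstructed = np.dot(u[:, :len(s)], np.dot(np.diag(s), v[:len(s), :]))
print("Reconstruction matches original:", np.allclose(reconstructed, imggrayArray))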
Explanation: Perform and verify the SVD decomposition
End of explanation
for i in range(1,60,15):
reconstimg = np.matrix(u[:, :i]) * np.diag(s[:i]) * np.matrix(v[:i,:])
plt.imshow(reconstimg, cmap='gray')
plt.show()
Explanation: For an image of your choice, pick different values for the approximation to the original image
End of explanation
def lin_solve_pseudo(A,b):
pseudoinv = pseudoinverse(A)
return np.matmul(pseudoinv,b)
def pseudoinverse(A):
u,s,v = np.linalg.svd(A)
diagonal = np.diag(s)
if v.shape[0] > diagonal.shape[1]:
print("Agregando columnas a la sigma")
vector = np.array([[0 for x in range(v.shape[0] - diagonal.shape[1])] for y in range(diagonal.shape[0])])
diagonal = np.concatenate((diagonal, vector), axis=1)
elif u.shape[1] > diagonal.shape[0]:
print("Agregando renglones a la sigma")
vector = np.array([[0 for x in range(diagonal.shape[0])] for y in range(u.shape[1]-diagonal.shape[0])])
diagonal = np.concatenate((diagonal, vector), axis=0)
for a in range(diagonal.shape[0]):
for b in range(diagonal.shape[1]):
if diagonal[a][b] != 0:
diagonal[a][b] = 1/diagonal[a][b]
resultante = np.dot(np.transpose(v),np.transpose(diagonal))
resultante = np.dot(resultante,np.transpose(u))
return resultante
Explanation: Exercise 2
End of explanation
A = np.array([[1,1,1],[1,1,3],[2,4,4]])
b = np.array([[18,30,68]])
solve = lin_solve_pseudo(A,np.transpose(b))
print(solve)
Explanation: Write a function that, given any matrix, returns its pseudoinverse using the SVD decomposition. Write another function that solves any system of equations of the form Ax=b using this pseudoinverse.
End of explanation
print("La imagen de A es cualquier vector de dos coordenadas en donde la segunda componente siempre sea cero")
print("Vector b en imagen de A")
A = np.array([[1,1],[0,0]])
b = np.array([[12,0]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
print("Cuando b esta en la imagen, la funcion lin_solve_pseudo devuelve la solucion unica a su sistema")
print("Vector b no en imagen de A")
b = np.array([[12,8]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
print("Cuando b no esta en la imagen, devuelve la solucion mas aproximada a su sistema")
Explanation: 2. Play with the system Ax=b where A=[[1,1],[0,0]] and b can take different values.
(a) Observe what happens if b is in the image of A (state what that image is) and if it is not (e.g. b = [1,1]).
End of explanation
print("Vector b no en imagen de A")
b = np.array([[12,8]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
print("Cuando b no esta en la imagen, devuelve la solucion mas aproximada a su sistema")
Explanation: (b) Answer: is the resulting solution unique? If there is more than one solution, investigate what characterizes the returned solution.
End of explanation
A = np.array([[1,1],[0,1e-32]])
b = np.array([[12,9]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
cadena = En este caso, la solucion devuelta siempre es el valor de la segunda coordenada del vector b por e+32\
y es el valor de ambas incognitas, solo que con signos contrarios ej(x1=-9.0e+32, x2=9.0e+32) \
esto debido a que cualquier numero entre un numero muy pequenio tiende a infinito, de manera que la \
coordenada dos del vector tiene mucho peso con referencia a la coordenada uno del vector
print(cadena)
Explanation: (c) Repeat with A=[[1,1],[0,1e-32]]. Is the solution unique in this case? Does the returned value of x change for each possible value of b from the previous part?
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv("./study_vs_sat.csv", sep=',')
print(data)
Explanation: Exercise 3
Read the file study_vs_sat.csv and store it as a pandas data frame.
End of explanation
hrs_studio = np.array(data["study_hours"])
sat_score = np.array(data["sat_score"])
A = np.vstack([hrs_studio, np.ones(len(hrs_studio))]).T
m,c = np.linalg.lstsq(A,sat_score)[0]
print("Beta y alfa: ")
print(m,c)
Explanation: Pose this as an optimization problem that tries to fit an approximation of the form sat_score ~ alpha + beta*study_hours by minimizing the sum of squared prediction errors. What is the gradient of the function to be optimized (hint: the variables we want to optimize are alpha and beta)?
End of explanation
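For the record, writing the objective as $ SSE(\alpha, \beta) = \sum_i (sat\_score_i - \alpha - \beta \cdot study\_hours_i)^2 $, its gradient is $ \partial SSE / \partial \alpha = -2 \sum_i (sat\_score_i - \alpha - \beta \cdot study\_hours_i) $ and $ \partial SSE / \partial \beta = -2 \sum_i study\_hours_i (sat\_score_i - \alpha - \beta \cdot study\_hours_i) $; setting both to zero yields the least-squares solution that np.linalg.lstsq computes in this notebook.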
def predict(alfa, beta, study_hours):
study_hours_i=[]
for a in range(len(study_hours)):
study_hours_i.append(alfa + beta*np.array(study_hours[a]))
return study_hours_i
print("prediccion")
print(predict(353.165, 25.326, hrs_studio))
Explanation: Write a function that receives values of alpha, beta and the study_hours vector and returns a numpy array of predictions alpha + beta*study_hours_i, one value per individual
End of explanation
unos = np.ones((len(hrs_studio),1))
hrs_studio = [hrs_studio]
hrs_studio = np.transpose(hrs_studio)
x = np.hstack((unos, hrs_studio))
print("La prediccion es: ")
print(np.matmul(x,np.array([[353.165],[25.326]])))
Explanation: Define a numpy array X with two columns, the first with ones in every entry and the second with the study_hours variable. Note that X[alpha,beta] returns alpha + beta study_hours_i in each entry, so the problem becomes sat_score ~ X*[alpha,beta]
End of explanation
X_pseudo = pseudoinverse(x)
print("Las alfas y betas son: ")
print(np.matmul(X_pseudo,sat_score))
Explanation: Compute the pseudoinverse X^+ of X and compute (X^+)*sat_score to obtain the solution alpha and beta.
End of explanation
def comparacion(X, sat_score):
x_transpose = np.transpose(X)
return np.matmul(np.linalg.inv(np.matmul(x_transpose,X)), np.matmul(x_transpose,sat_score))
Explanation: Compare the previous solution with the direct exact-solution formula (alpha,beta)=(X^tX)^(-1)X^t*sat_score
End of explanation
plt.plot(hrs_studio, sat_score, 'x', label='Datos', markersize=20)
plt.plot(hrs_studio, m*hrs_studio + c, 'r', label='Línea de regresión')
plt.legend()
plt.show()
Explanation: Use the matplotlib library to visualize the predictions with the solution alpha and beta against the actual sat_score values.
End of explanation |
7,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the text we want to process
Step1: Let's take a smaller chunk from the text
Step2: Tokenizing text
Tokens are meaningful chunks of text
Step3: Stop words
Stop words are words that you want to filter out from your text for downstream analysis. They are typically very common words which don't contain much useful information for the task at hand. There is no universal set of stop words and some domain knowledge is helpful for deciding what you want to include when processing your text.
Step4: How would you create a list of tokens that doesn't include the stopwords?
Step5: Make a function that can combine some of our pre-processing tasks to clean up the raw text
Step6: If some of those words don't seem important, we can add them to 'stops' and clean text again
Step7: Stemming
Step8: Let's make another function that stems the word tokens during the processing stage
Step9: Lemmatizing
In English, for example, run, runs, ran and running are forms of the same lexeme, with run as the lemma. Lexeme, in this context, refers to the set of all the forms that have the same meaning, and lemma refers to the particular form that is chosen by convention to represent the lexeme.
Step10: Part-of-speech (POS) tagging
The process of marking up a word in a text as corresponding to a particular part of speech, based on both its definition and its context.
POS tagging is tricky because some words can have more than one POS depending on the context.
"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."
Step11: ngrams
Step12: Stopwords and punctuation will have an effect on ngrams!
Step13: Bag of words (BOW) text representation for machine learning | Python Code:
with open('book.txt', 'r') as file:
text = file.readlines()
Explanation: Get the text we want to process
End of explanation
# using a list comprehension to simplify iterating over the the text structure
snippet = " ".join(block.strip() for block in text[175:200])
snippet
# alternative with for-loop
other_snippet = []
for block in text[175:200]:
other_snippet.append(block.strip())
other_snippet = " ".join(other_snippet)
other_snippet
whole_text = " ".join(block.strip() for block in text)
whole_text[5000:7500]
Explanation: Let's take a smaller chunk from the text:
End of explanation
from nltk.tokenize import sent_tokenize, word_tokenize
str.split?
# you can try to separate sentences by splitting on punctuation
snippet.split('.')
# The sentence tokenizer has some clever tricks to do a better job
sent_tokenize(snippet)
# splitting a text into tokens based on white space
snippet.split()
words = word_tokenize(snippet)
# word tokenize treats punctuation as a token
words
# let's plot the frequency of occurrence of different words
nltk.FreqDist?
fdist = nltk.FreqDist(words)
fdist.plot(30)
Explanation: Tokenizing text
Tokens are meaningful chunks of text:
Sentences
words
punctuation
numbers
?
We'll look at some tools in nltk to help break the raw text into sentences and word tokens.
End of explanation
from nltk.corpus import stopwords
stops = stopwords.words('english')
stops
Explanation: Stop words
Stop words are words that you want to filter out from your text for downstream analysis. They are typically very common words which don't contain much useful information for the task at hand. There is no universal set of stop words and some domain knowledge is helpful for deciding what you want to include when processing your text.
End of explanation
filtered_words = [word.lower() for word in words if word.lower() not in stops]
filtered_words
filtered_fdist = nltk.FreqDist(filtered_words)
filtered_fdist.plot(30)
import string
string.punctuation
stops = stopwords.words('english') + list(string.punctuation)
stops
filtered_words = [word.lower() for word in words if word.lower() not in stops]
filtered_fdist2 = nltk.FreqDist(filtered_words)
filtered_fdist2.plot(30)
Explanation: How would you create a list of tokens that doesn't include the stopwords?
End of explanation
def process_text(text):
# break text into word tokens
tokens = word_tokenize(text)
# remove stopwords
filtered_words = [token.lower() for token in tokens if not token.lower() in stops]
# filter for short punctuation
filtered_words = [w for w in filtered_words if (len(w) > 2)]
return filtered_words
whole_text[:110]
len(whole_text)
%%time
clean_text = process_text(whole_text)
fdist_whole_text = nltk.FreqDist(clean_text)
fdist_whole_text.plot(25)
Explanation: Make a function that can combine some of our pre-processing tasks to clean up the raw text:
End of explanation
boring_words = ['sir', 'upon', 'said', 'one']
stops += boring_words
%%time
cleaned_text = process_text(whole_text)
fdist_whole_text['holmes']
fdist_whole_text['watson']
Explanation: If some of those words don't seem important, we can add them to 'stops' and clean text again
End of explanation
from nltk.stem import PorterStemmer
help(nltk.stem)
ps = PorterStemmer()
print(ps.stem('Happy'))
print(ps.stem('Happiness'))
print(ps.stem('Had'))
print(ps.stem('Fishing'))
print(ps.stem('Fish'))
print(ps.stem('Fisher'))
print(ps.stem('Fishes'))
print(ps.stem('Fished'))
words = process_text(snippet)
stemmed = [ps.stem(word) for word in words]
for w, stem in zip(words, stemmed):
print('{} ---> {}'.format(w, stem))
Explanation: Stemming
End of explanation
def stem_process(text):
# tokenize
tokens = word_tokenize(text)
# remove stops
filtered_words = [token.lower() for token in tokens if not token.lower() in stops]
filtered_words = [w for w in filtered_words if (len(w) > 2)]
# stem
stemmed_words = [ps.stem(w) for w in filtered_words]
return stemmed_words
%%time
stemmed = stem_process(whole_text)
stemmed
fdist_stems = nltk.FreqDist(stemmed)
fdist_stems.plot(30)
Explanation: Let's make another function that stems the word tokens during the processing stage
End of explanation
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmatizer.lemmatize?
print(lemmatizer.lemmatize('having'))
print(lemmatizer.lemmatize('have'))
print(lemmatizer.lemmatize('had'))
print()
print(lemmatizer.lemmatize('fishing'))
print(lemmatizer.lemmatize('fish'))
print(lemmatizer.lemmatize('fisher'))
print(lemmatizer.lemmatize('fishes'))
print(lemmatizer.lemmatize('fished'))
print()
print(lemmatizer.lemmatize('am'))
print(lemmatizer.lemmatize('is'))
print(lemmatizer.lemmatize('was'))
# including POS for the lemmatizer can improve its output
print(lemmatizer.lemmatize('having', pos='v'))
print(lemmatizer.lemmatize('have', pos='v'))
print(lemmatizer.lemmatize('had', pos='v'))
print()
print(lemmatizer.lemmatize('fishing', pos='v'))
print(lemmatizer.lemmatize('fish', pos='v'))
print(lemmatizer.lemmatize('fisher', pos='n'))
print(lemmatizer.lemmatize('fishes', pos='v'))
print(lemmatizer.lemmatize('fished', pos='v'))
print()
print(lemmatizer.lemmatize('am', pos='v'))
print(lemmatizer.lemmatize('is', pos='v'))
print(lemmatizer.lemmatize('was', pos='v'))
lemmatized = [lemmatizer.lemmatize(word) for word in words]
for w, lemma in zip(words, lemmatized):
print('{} ---> {}'.format(w, lemma))
lemmatizer.lemmatize('running', pos='v')
def lemma_process(text):
# tokenize
tokens = word_tokenize(text)
# remove stops
filtered_words = [token.lower() for token in tokens if not token.lower() in stops]
filtered_words = [w for w in filtered_words if (len(w) > 2)]
# lemmatize
lemmatized_words = [lemmatizer.lemmatize(w) for w in filtered_words]
return lemmatized_words
%%time
lemma_text = lemma_process(whole_text)
lemma_fdist = nltk.FreqDist(lemma_text)
lemma_fdist.plot(30)
Explanation: Lemmatizing
In English, for example, run, runs, ran and running are forms of the same lexeme, with run as the lemma. Lexeme, in this context, refers to the set of all the forms that have the same meaning, and lemma refers to the particular form that is chosen by convention to represent the lexeme.
End of explanation
nltk.help.upenn_tagset()
snippet
nltk.pos_tag(word_tokenize(sent_tokenize(snippet)[1]))
def process_POS(text):
sentences = sent_tokenize(text)
tagged_words = []
for sentence in sentences:
words = word_tokenize(sentence)
tagged = nltk.pos_tag(words)
tagged_words.append(tagged)
return tagged_words
tagged_sentences = process_POS(snippet)
tagged_sentences
sentences =[]
for sentence in tagged_sentences[:5]:
print(sentence)
lemmas = []
for word, pos in sentence:
if pos == 'VBP':
lemmas.append(lemmatizer.lemmatize(word, 'v'))
elif pos in ['NN', 'NNS']:
lemmas.append(lemmatizer.lemmatize(word, 'n'))
else:
lemmas.append(lemmatizer.lemmatize(word))
sentences.append(lemmas)
Explanation: Part-of-speech (POS) tagging
The process of marking up a word in a text as corresponding to a particular part of speech, based on both its definition and its context.
POS tagging is tricky because some words can have more than one POS depending on the context.
"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."
End of explanation
from nltk import ngrams
from collections import Counter
bigrams = Counter(ngrams(word_tokenize(whole_text), 2))
for phrase, freq in bigrams.most_common(30):
print("{}\t{}".format(phrase, freq))
trigrams = Counter(ngrams(word_tokenize(whole_text), 3))
for phrase, freq in trigrams.most_common(30):
print("{}\t{}".format(phrase, freq))
Explanation: ngrams
End of explanation
stemmed = stem_process(whole_text)
stemmed_bigrams = Counter(ngrams(stemmed, 2))
stemmed_bigrams.most_common(20)
stemmed_trigrams = Counter(ngrams(stemmed, 3))
stemmed_trigrams.most_common(20)
Explanation: Stopwords and punctuation will have an effect on ngrams!
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes to watch football games."]
X = vectorizer.fit_transform(data_corpus)
print(X.toarray())
print(vectorizer.get_feature_names())
from nltk.sentiment.vader import SentimentIntensityAnalyzer
vader = SentimentIntensityAnalyzer()
text = "I dont hate movies!"
vader.polarity_scores(text)
Explanation: Bag of words (BOW) text representation for machine learning
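A short follow-up sketch (my own addition, reusing the vectorizer fitted above): once the vocabulary is fixed, new text is mapped onto the same columns with transform, and words outside the fitted vocabulary are simply ignored.
# Map a new sentence onto the fitted vocabulary (illustrative sentence, not part of the corpus above)
new_doc = ["Mary also likes football"]
print(vectorizer.transform(new_doc).toarray())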
End of explanation |
7,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
Step3: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
Step4: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
Step5: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
Step6: Influence on Light Curves
Step7: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
print(b['exptime'])
Explanation: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
End of explanation
b['exptime'] = 1, 'hr'
Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
End of explanation
print(b['fti_method'])
b['fti_method'] = 'oversample'
Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
End of explanation
print(b['fti_oversample'])
Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
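As an optional check of that roughly linear scaling, one could time run_compute for a few oversample values. This is my own sketch, not part of the original tutorial; it only reuses the parameter-assignment and run_compute calls already shown in this notebook, and assumes the 'fti_oversample' twig resolves uniquely the same way 'fti_method' did above.
import time
for n in (2, 5, 10):                                   # illustrative oversampling values
    b['fti_oversample'] = n
    t0 = time.time()
    b.run_compute(fti_method='oversample', irrad_method='none', model='fti_n{}'.format(n))
    print('fti_oversample = {}: {:.1f} s'.format(n, time.time() - t0))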
End of explanation
b.run_compute(fti_method='none', irrad_method='none', model='fti_off')
b.run_compute(fti_method='oversample', irrad_method='none', model='fti_on')
Explanation: Influence on Light Curves
End of explanation
axes, artists = b.plot(show=True)
Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
End of explanation |
7,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electron Plasma Waves
Created by Rui Calado and Jorge Vieira, 2018
In this notebook, we are going to study the dispersion relation for electron plasma waves.
Theory
Electron plasma waves are longitudinal waves that may propagate in unmagnetized plasmas. To derive the dispersion relation for such waves let us start by considering the following setup
Step1: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM field $E_z$ at every timestep so we can analyze them later
Step2: Electrostatic / Electromagnetic Waves
As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below
Step3: Electrostatic Plasma Waves
To analyze the dispersion relation of the electrostatic plasma waves we use a 2D (Fast) Fourier transform of $E_x(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
Step4: Electromagnetic Plasma Waves
To analyze the dispersion relation of the electrostatic plasma waves we use a 2D (Fast) Fourier transform of $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible. | Python Code:
import em1ds as zpic
#v_the = 0.001
v_the = 0.02
#v_the = 0.20
electrons = zpic.Species( "electrons", -1.0, ppc = 64, uth=[v_the,v_the,v_the])
sim = zpic.Simulation( nx = 500, box = 50.0, dt = 0.0999/2, species = electrons )
sim.filter_set("sharp", ck = 0.99)
#sim.filter_set("gaussian", ck = 50.0)
Explanation: Electron Plasma Waves
Created by Rui Calado and Jorge Vieira, 2018
In this notebook, we are going to study the dispersion relation for electron plasma waves.
Theory
Electron plasma waves are longitudinal waves that may propagate in unmagnetized plasmas. To derive the dispersion relation for such waves let us start by considering the following setup:
* $\nabla\times\mathbf{E}=0$ (Longitudinal waves)
* $T_i=T_e=0$ (Cold plasma)
* $\mathbf{B}=0$ (Unmagnetized)
We start by writing the continuity and momentum equations for the electron and ion species:
$$\large \left\{\begin{array}{lcr}
\frac{\partial n_{e,i}}{\partial t}+\nabla\cdot(n_{e,i}\mathbf{v}_{e,i})=0 \\
\frac{\partial \mathbf{v}_{e,i}}{\partial t}=\mp \frac{e}{m_{e,i}}\mathbf{E}
\end{array}\right.$$
Then we consider Poisson's equation:
$$\epsilon_0\nabla\cdot\mathbf{E}=e(n_i-n_e).$$
Applying a time derivative twice,
$$\epsilon_0\nabla\cdot\left(\frac{\partial^2 \mathbf{E}}{\partial t^2}\right)=e\left(\frac{\partial^2n_i}{\partial t^2}-\frac{\partial^2n_e}{\partial t^2}\right).$$
Using the continuity and momentum equations we get:
$$\frac{\partial^2 \mathbf{E}}{\partial t^2}+\frac{e^2n_0}{\epsilon_0}\left(\frac{1}{m_i}+\frac{1}{m_e}\right)\mathbf{E}=0.$$
This is the equation for a harmonic oscillator. Neglecting the $1/m_i$ term since $m_e\ll m_i$, our oscillation frequency is the electron plasma frequency:
$$\omega^2=\frac{e^2n_0}{\epsilon_0 m_e}\equiv \omega_{p}^2.$$
Warm plasma
Now, what happens if we consider a warm plasma instead? Neglecting ion motion, the first step is to add a pressure term to the electron momentum equation:
$$\frac{\partial \mathbf{v}_{e}}{\partial t}=- \frac{e}{m_{e}}\mathbf{E}-\frac{\gamma k_BT_e}{m_en_0}\nabla n_1.$$
We also note that Poisson's equation now takes the form:
$$\nabla\cdot\mathbf{E}=-\frac{e}{\epsilon_0}n_1.$$
Taking the divergence of the momentum equation, we get:
$$\nabla\cdot\left( \frac{\partial \mathbf{v}}{\partial t} \right)=-\frac{e}{m_e}\nabla\cdot\mathbf{E}-\frac{\gamma k_BT_e}{m_en_0}\nabla\cdot (\nabla n_1).$$
Using the unchanged continuity equation and Poisson's equation:
$$\frac{\partial^2n_1}{\partial t^2}+\omega_{p}^2n_1-\frac{\gamma k_BT_e}{m_e}\nabla\cdot(\nabla n_1)=0.$$
Considering the high frequency regime, there will be no heat losses in our time scale, and so we will take the adiabatic coefficient $\gamma=3$ for 1D longitudinal oscillations. Additionally, we use the definition $v_{th}^2=k_BT_e/m_e$ to write:
$$\frac{\partial^2n_1}{\partial t^2}+\omega_{p}^2n_1-3v_{th}^2\nabla\cdot(\nabla n_1)=0.$$
The final step consists of considering sinusoidal waves such that $n_1=\text{n}_1\exp^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}$ and then Fourier analyzing the equation $\left(\nabla=i\mathbf{k},\ \frac{\partial}{\partial t}=-i\omega \right)$, which results in the dispersion relation:
$$\omega^2=\omega_{p}^2+3v_{th}^2k^2.$$
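One quick consequence worth noting (my own addition, derived directly from the relation above): differentiating gives the group velocity
$$v_g=\frac{d\omega}{dk}=\frac{3v_{th}^2k}{\omega},$$
so the thermal correction only becomes appreciable once $3v_{th}^2k^2$ is comparable to $\omega_{p}^2$, i.e. for $k\sim 1/(\sqrt{3}\,v_{th})$, which for the $v_{th}=0.02\,c$ run below means wavenumbers of order $30\,\omega_p/c$.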
Simulations with ZPIC
End of explanation
import numpy as np
niter = 4000
Ex_t = np.zeros((niter,sim.nx))
Ez_t = np.zeros((niter,sim.nx))
tmax = niter * sim.dt
print("\nRunning simulation up to t = {:g} ...".format(tmax))
while sim.t <= tmax:
print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r')
Ex_t[sim.n,:] = sim.emf.Ex
Ez_t[sim.n,:] = sim.emf.Ez
sim.iter()
print("\nDone.")
Explanation: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM field $E_z$ at every timestep so we can analyze them later:
End of explanation
import matplotlib.pyplot as plt
iter = sim.n//2
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ex_t[iter,:], label = "$E_x$")
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ez_t[iter,:], label = "$E_z$")
plt.grid(True)
plt.xlabel("$x_1$ [$c/\omega_n$]")
plt.ylabel("$E$ field []")
plt.title("$E_x$, $E_z$, t = {:g}".format( iter * sim.dt))
plt.legend()
plt.show()
Explanation: Electrostatic / Electromagnetic Waves
As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below:
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ex_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ex_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1.0),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
k = np.linspace(-k_max, k_max, num = 512)
w=np.sqrt(1 + 3 * v_the**2 * k**2)
plt.plot( k, w, label = "Electron Plasma Wave", color = 'r',ls = '-.' )
plt.ylim(0,2)
plt.xlim(0,k_max)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("Wave dispersion relation")
plt.legend()
plt.show()
Explanation: Electrostatic Plasma Waves
To analyze the dispersion relation of the electrostatic plasma waves we use a 2D (Fast) Fourier transform of $E_x(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ez_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ez_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-5, vmax = 0.01),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
k = np.linspace(-k_max, k_max, num = 512)
w=np.sqrt(1 + k**2)
plt.plot( k, w, label = "$\omega^2 = \omega_p^2 + k^2 c^2$", color = 'r', ls = '-.' )
plt.ylim(0,k_max)
plt.xlim(0,k_max)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("EM-wave dispersion relation")
plt.legend()
plt.show()
Explanation: Electromagnetic Plasma Waves
To analyze the dispersion relation of the electrostatic plasma waves we use a 2D (Fast) Fourier transform of $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
End of explanation |
7,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 2. Organizing Data
Variables and Data
Frequency Distribution Tables
1. Variables ( variable, feature )
Quantitative variables ( quantitative variable, real value )
Variables that can be expressed numerically
- Discrete variables ( discrete )
- Variables that take integer values
- e.g. number of children, number of cars sold, etc.
- Continuous variables ( continuous )
- Variables that can take any real value
- e.g. length, weight, etc.
Qualitative variables ( qualitative variable, categorical value )
Variables that cannot be expressed numerically
- Nominal variables ( nominal )
- e.g. gender, religion, birthplace, athletes' jersey numbers, etc.
- Ordinal variables ( ordinal )
- Variables used to rank the measured subjects
- e.g. grades A, B, C
2. Frequency distribution table
A table that groups the collected data into appropriate classes (or categories) and summarizes the frequency of each class
<img src="http
Step1: Setting the class intervals
- Number of intervals
Step2: Counting frequencies per interval
- below 50 | Python Code:
import pandas as pd
import numpy as np
np.random.seed(0)
data = np.random.randint(50, 100, size=(8, 5))
data[0][0] = 12
data
np.sort(data.flatten())
Explanation: Ch 2. Organizing Data
Variables and Data
Frequency Distribution Tables
1. Variables ( variable, feature )
Quantitative variables ( quantitative variable, real value )
Variables that can be expressed numerically
- Discrete variables ( discrete )
- Variables that take integer values
- e.g. number of children, number of cars sold, etc.
- Continuous variables ( continuous )
- Variables that can take any real value
- e.g. length, weight, etc.
Qualitative variables ( qualitative variable, categorical value )
Variables that cannot be expressed numerically
- Nominal variables ( nominal )
- e.g. gender, religion, birthplace, athletes' jersey numbers, etc.
- Ordinal variables ( ordinal )
- Variables used to rank the measured subjects
- e.g. grades A, B, C
2. Frequency distribution table
A table that groups the collected data into appropriate classes (or categories) and summarizes the frequency of each class
<img src="http://trsketch.dothome.co.kr/_contents/2009curi/images/img900027.png", width=600>
How to construct a frequency distribution table
Every observation must be included in the table. Even extreme values (outliers) must not be excluded; include them by labeling the interval as 'at least X' or 'at most X'.
Except for the outlier class, all class intervals must have the same width.
Classes must not overlap.
Classes must be listed consecutively; an interval must not be dropped just because it contains no cases.
It is convenient to make the class width an odd number, so that the midpoint of each interval is easy to determine.
It is better to start each class at a round, easily recognizable number; for example, '90 to under 95' is preferable to '93 to under 98'.
Class width
- class width = ( maximum value of the data - minimum value of the data ) / number of classes
Q. Practice problem
End of explanation
interval = 5
interval_len = ( data.max() - 50 ) / interval
interval_len
Explanation: Setting the class intervals
- Number of intervals : 5
- Interval width : 10
End of explanation
data1 = [[45, 1, 1],
[55, 10, 11],
[65, 9, 20],
[75, 10, 30],
[85, 6, 36],
[95, 4, 40]]
df = pd.DataFrame(data1,
index=[u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'],
columns=[u"중간값" ,u"빈도수", u"누적빈도"])
df
import matplotlib.pyplot as plt
X = df.중간값
y = df.빈도수
plt.bar(X, y, width=5, align='center')
plt.title("histogram")
plt.xlabel("score")
plt.ylabel("frequency")
plt.xticks(X, [u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'])
plt.show()
X = df.중간값
y = df.누적빈도
plt.plot(X, y, "--o")
plt.xlim(45, 95)
plt.title("cumulative line plot")
plt.xlabel("score")
plt.ylabel("cumulative frequency")
plt.xticks(X, [u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'])
plt.show()
Explanation: Counting frequencies per interval
- below 50 : 1
- 50 to under 60 : 10
- 60 to under 70 : 9
- 70 to under 80 : 10
- 80 to under 90 : 6
- 90 to under 100 : 4
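As a cross-check (my own addition, reusing the data array generated above), the same tallies can be obtained programmatically with numpy:
# Bins follow the 'x to under y' convention used in the table above
bins = [0, 50, 60, 70, 80, 90, 100]
counts, _ = np.histogram(data, bins=bins)
print(counts)           # expected from the tallies above: 1, 10, 9, 10, 6, 4
print(counts.cumsum())  # cumulative frequencies: 1, 11, 20, 30, 36, 40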
End of explanation |
7,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pre-processing and training LDA
The purpose of this tutorial is to show you how to pre-process text data, and how to train the LDA model on that data. This tutorial will not explain you the LDA model, how inference is made in the LDA model, and it will not necessarily teach you how to use Gensim's implementation. There are plenty of resources for all of those things, but what is somewhat lacking is a hands-on tutorial that helps you train an LDA model with good results... so here is my contribution towards that.
I have used a corpus of NIPS papers in this tutorial, but if you're following this tutorial just to learn about LDA I encourage you to consider picking a corpus on a subject that you are familiar with. Qualitatively evaluating the output of an LDA model is challenging and can require you to understand the subject matter of your corpus (depending on your goal with the model).
I would also encourage you to consider each step when applying the model to your data, instead of just blindly applying my solution. The different steps will depend on your data and possibly your goal with the model.
In the following sections, we will go through pre-processing the data and training the model.
Note
Step1: Pre-process and vectorize the documents
Among other things, we will
Step2: We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a stemmer in this case because it produces more readable words. Output that is easy to read is very desirable in topic modelling.
Step3: We find bigrams in the documents. Bigrams are sets of two adjacent words. Using bigrams we can get phrases like "machine_learning" in our output (spaces are replaced with underscores); without bigrams we would only get "machine" and "learning".
Note that in the code below, we find bigrams and then add them to the original data, because we would like to keep the words "machine" and "learning" as well as the bigram "machine_learning".
Note that computing n-grams of a large dataset can be very computationally and memory intensive.
Step4: We remove rare words and common words based on their document frequency. Below we remove words that appear in less than 20 documents or in more than 50% of the documents. Consider trying to remove words only based on their frequency, or maybe combining that with this approach.
Step5: Finally, we transform the documents to a vectorized form. We simply compute the frequency of each word, including the bigrams.
Step6: Let's see how many tokens and documents we have to train on.
Step7: Training
We are ready to train the LDA model. We will first discuss how to set some of the training parameters.
First of all, the elephant in the room
Step8: We can compute the topic coherence of each topic. Below we display the average topic coherence and print the topics in order of topic coherence.
Note that we use the "Umass" topic coherence measure here (see docs, https | Python Code:
# Read data.
import os
from smart_open import smart_open
# Folder containing all NIPS papers.
data_dir = 'nipstxt/'
# Folders containing individual NIPS papers.
yrs = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
dirs = ['nips' + yr for yr in yrs]
# Read all texts into a list.
docs = []
for yr_dir in dirs:
files = os.listdir(data_dir + yr_dir)
for filen in files:
# Note: ignoring characters that cause encoding errors.
with smart_open(data_dir + yr_dir + '/' + filen, 'rb') as fid:
txt = fid.read()
docs.append(txt)
Explanation: Pre-processing and training LDA
The purpose of this tutorial is to show you how to pre-process text data, and how to train the LDA model on that data. This tutorial will not explain you the LDA model, how inference is made in the LDA model, and it will not necessarily teach you how to use Gensim's implementation. There are plenty of resources for all of those things, but what is somewhat lacking is a hands-on tutorial that helps you train an LDA model with good results... so here is my contribution towards that.
I have used a corpus of NIPS papers in this tutorial, but if you're following this tutorial just to learn about LDA I encourage you to consider picking a corpus on a subject that you are familiar with. Qualitatively evaluating the output of an LDA model is challenging and can require you to understand the subject matter of your corpus (depending on your goal with the model).
I would also encourage you to consider each step when applying the model to your data, instead of just blindly applying my solution. The different steps will depend on your data and possibly your goal with the model.
In the following sections, we will go through pre-processing the data and training the model.
Note:
This tutorial uses the nltk library, although you can replace it with something else if you want. Python 3 is used, although Python 2.7 can be used as well.
In this tutorial we will:
Load data.
Pre-process data.
Transform documents to a vectorized form.
Train an LDA model.
If you are not familiar with the LDA model or how to use it in Gensim, I suggest you read up on that before continuing with this tutorial. Basic understanding of the LDA model should suffice. Examples:
Gentle introduction to the LDA model: http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/
Gensim's LDA API documentation: https://radimrehurek.com/gensim/models/ldamodel.html
Topic modelling in Gensim: http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html
Data
We will be using some papers from the NIPS (Neural Information Processing Systems) conference. NIPS is a machine learning conference so the subject matter should be well suited for most of the target audience of this tutorial.
You can download the data from Sam Roweis' website (http://www.cs.nyu.edu/~roweis/data.html).
Note that the corpus contains 1740 documents, and not particularly long ones. So keep in mind that this tutorial is not geared towards efficiency, and be careful before applying the code to a large dataset.
Below we are simply reading the data.
End of explanation
# Tokenize the documents.
from nltk.tokenize import RegexpTokenizer
# Split the documents into tokens.
tokenizer = RegexpTokenizer(r'\w+')
for idx in range(len(docs)):
docs[idx] = docs[idx].lower() # Convert to lowercase.
docs[idx] = tokenizer.tokenize(docs[idx]) # Split into words.
# Remove numbers, but not words that contain numbers.
docs = [[token for token in doc if not token.isnumeric()] for doc in docs]
# Remove words that are only one character.
docs = [[token for token in doc if len(token) > 1] for doc in docs]
Explanation: Pre-process and vectorize the documents
Among other things, we will:
Split the documents into tokens.
Lemmatize the tokens.
Compute bigrams.
Compute a bag-of-words representation of the data.
First we tokenize the text using a regular expression tokenizer from NLTK. We remove numeric tokens and tokens that are only a single character, as they don't tend to be useful, and the dataset contains a lot of them.
End of explanation
# Lemmatize the documents.
from nltk.stem.wordnet import WordNetLemmatizer
# Lemmatize all words in documents.
lemmatizer = WordNetLemmatizer()
docs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]
Explanation: We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a stemmer in this case because it produces more readable words. Output that is easy to read is very desirable in topic modelling.
End of explanation
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
Explanation: We find bigrams in the documents. Bigrams are sets of two adjacent words. Using bigrams we can get phrases like "machine_learning" in our output (spaces are replaced with underscores); without bigrams we would only get "machine" and "learning".
Note that in the code below, we find bigrams and then add them to the original data, because we would like to keep the words "machine" and "learning" as well as the bigram "machine_learning".
Note that computing n-grams of a large dataset can be very computationally and memory intensive.
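One mitigation worth knowing about (my addition; it assumes a gensim version that ships gensim.models.phrases.Phraser) is to freeze the trained Phrases model into a lighter-weight object before applying it, which reduces memory use without changing which bigrams are detected:
# Hypothetical lighter-weight variant of the loop above
from gensim.models.phrases import Phraser
bigram_phraser = Phraser(bigram)              # frozen copy: keeps only what is needed to apply the learned bigrams
for idx in range(len(docs)):
    for token in bigram_phraser[docs[idx]]:
        if '_' in token:
            docs[idx].append(token)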
End of explanation
# Remove rare and common tokens.
from gensim.corpora import Dictionary
# Create a dictionary representation of the documents.
dictionary = Dictionary(docs)
# Filter out words that occur less than 20 documents, or more than 50% of the documents.
dictionary.filter_extremes(no_below=20, no_above=0.5)
Explanation: We remove rare words and common words based on their document frequency. Below we remove words that appear in less than 20 documents or in more than 50% of the documents. Consider trying to remove words only based on their frequency, or maybe combining that with this approach.
End of explanation
# Vectorize data.
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
Explanation: Finally, we transform the documents to a vectorized form. We simply compute the frequency of each word, including the bigrams.
End of explanation
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
Explanation: Let's see how many tokens and documents we have to train on.
End of explanation
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 20
iterations = 400
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make an index-to-word dictionary.
temp = dictionary[0] # This is only to "load" the dictionary.
id2word = dictionary.id2token
%time model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
alpha='auto', eta='auto', \
iterations=iterations, num_topics=num_topics, \
passes=passes, eval_every=eval_every)
Explanation: Training
We are ready to train the LDA model. We will first discuss how to set some of the training parameters.
First of all, the elephant in the room: how many topics do I need? There is really no easy answer for this, it will depend on both your data and your application. I have used 10 topics here because I wanted to have a few topics that I could interpret and "label", and because that turned out to give me reasonably good results. You might not need to interpret all your topics, so you could use a large number of topics, for example 100.
The chunksize controls how many documents are processed at a time in the training algorithm. Increasing chunksize will speed up training, at least as long as the chunk of documents easily fit into memory. I've set chunksize = 2000, which is more than the amount of documents, so I process all the data in one go. Chunksize can however influence the quality of the model, as discussed in Hoffman and co-authors [2], but the difference was not substantial in this case.
passes controls how often we train the model on the entire corpus. Another word for passes might be "epochs". iterations is somewhat technical, but essentially it controls how often we repeat a particular loop over each document. It is important to set the number of "passes" and "iterations" high enough.
I suggest the following way to choose iterations and passes. First, enable logging (as described in many Gensim tutorials), and set eval_every = 1 in LdaModel. When training the model look for a line in the log that looks something like this:
2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations
If you set passes = 20 you will see this line 20 times. Make sure that by the final passes, most of the documents have converged. So you want to choose both passes and iterations to be high enough for this to happen.
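If you want to actually see those log lines, one way (a minimal sketch using only the standard Python logging module, nothing Gensim-specific) is to enable DEBUG-level logging before training:
import logging
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.DEBUG)   # DEBUG level is what shows the "documents converged" lines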
We set alpha = 'auto' and eta = 'auto'. Again this is somewhat technical, but essentially we are automatically learning two parameters in the model that we usually would have to specify explicitly.
End of explanation
top_topics = model.top_topics(corpus, num_words=20)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
from pprint import pprint
pprint(top_topics)
Explanation: We can compute the topic coherence of each topic. Below we display the average topic coherence and print the topics in order of topic coherence.
Note that we use the "Umass" topic coherence measure here (see docs, https://radimrehurek.com/gensim/models/ldamodel.html#gensim.models.ldamodel.LdaModel.top_topics), Gensim has recently obtained an implementation of the "AKSW" topic coherence measure (see accompanying blog post, http://rare-technologies.com/what-is-topic-coherence/).
If you are familiar with the subject of the articles in this dataset, you can see that the topics below make a lot of sense. However, they are not without flaws. We can see that there is substantial overlap between some topics, others are hard to interpret, and most of them have at least some terms that seem out of place. If you were able to do better, feel free to share your methods on the blog at http://rare-technologies.com/lda-training-tips/ !
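As a possible follow-up (my own sketch, assuming gensim exposes CoherenceModel in your installed version), the alternative 'c_v' coherence measure can be computed from the trained model, the tokenized docs and the dictionary defined above:
from gensim.models import CoherenceModel
cm = CoherenceModel(model=model, texts=docs, dictionary=dictionary, coherence='c_v')
print('c_v coherence: %.4f' % cm.get_coherence())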
End of explanation |
7,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CIFAR Code Along with CNN
The <a href='https
Step1: Load the CIFAR-10 dataset
PyTorch makes the CIFAR-10 train and test datasets available through <a href='https
Step2: Create loaders
Step3: Define strings for labels
We can call the labels whatever we want, so long as they appear in the order of 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. Here we're using 5-character labels padded with spaces so that our reports line up later.
Step4: We don't want to use the variable name "class" here, as it would overwrite Python's built-in keyword.
View a batch of images
Step5: Define the model
In the previous section we used two convolutional layers and two pooling layers before feeding data through a fully connected hidden layer to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. We'll use the same format here.
The only changes are
Step6: <div class="alert alert-info"><strong>Why <tt>(6x6x16)</tt> instead of <tt>(5x5x16)</tt>?</strong><br>
With MNIST the kernels and pooling layers resulted in $\;(((28−2)/2)−2)/2=5.5 \;$ which rounds down to 5 pixels per side.<br>
With CIFAR the result is $\;(((32-2)/2)-2)/2 = 6.5\;$ which rounds down to 6 pixels per side.</div>
Step7: Including the bias terms for each layer, the total number of parameters being trained is
Step8: Define loss function & optimizer
Step9: Train the model
This time we'll feed the data directly into the model without flattening it first.<br>
<div class="alert alert-info"><font color=blue><strong>OPTIONAL
Step10: Optional
Step11: Plot the loss and accuracy comparisons
Step12: Evaluate Test Data
Step13: This is not as impressive as with MNIST, which makes sense. We would have to adjust our parameters to obtain better results.<br>
Still, it's much better than the 10% we'd get with random chance!
Display the confusion matrix
In order to map predictions against ground truth, we need to run the entire test set through the model.<br>
Also, since our model was not as accurate as with MNIST, we'll use a <a href='https
Step14: For more info on the above chart, visit the docs on <a href='https
Step15: Now that everything is set up, run and re-run the cell below to view all of the missed predictions.<br>
Use <kbd>Ctrl+Enter</kbd> to remain on the cell between runs. You'll see a <tt>StopIteration</tt> once all the misses have been seen.
Step16: <div class="alert alert-info"><font color=blue><h2>Optional | Python Code:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import make_grid
import numpy as np
import pandas as pd
import seaborn as sn # for heatmaps
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CIFAR Code Along with CNN
The <a href='https://en.wikipedia.org/wiki/CIFAR-10'>CIFAR-10</a> dataset is similar to MNIST, except that instead of one color channel (grayscale) there are three channels (RGB).<br>
Where an MNIST image has a size of (1,28,28), CIFAR images are (3,32,32). There are 10 categories an image may fall under:
0. airplane
1. automobile
2. bird
3. cat
4. deer
5. dog
6. frog
7. horse
8. ship
9. truck
As with the previous code along, make sure to watch the theory lectures! You'll want to be comfortable with:
* convolutional layers
* filters/kernels
* pooling
* depth, stride and zero-padding
Perform standard imports
End of explanation
transform = transforms.ToTensor()
train_data = datasets.CIFAR10(root='../Data', train=True, download=True, transform=transform)
test_data = datasets.CIFAR10(root='../Data', train=False, download=True, transform=transform)
train_data
test_data
Explanation: Load the CIFAR-10 dataset
PyTorch makes the CIFAR-10 train and test datasets available through <a href='https://pytorch.org/docs/stable/torchvision/index.html'><tt><strong>torchvision</strong></tt></a>. The first time they're called, the datasets will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download.<br>The set contains 50,000 train and 10,000 test images.
Refer to the previous section for explanations of transformations, batch sizes and <a href='https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader'><tt><strong>DataLoader</strong></tt></a>.
End of explanation
torch.manual_seed(101) # for reproducible results
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=False)
Explanation: Create loaders
End of explanation
class_names = ['plane', ' car', ' bird', ' cat', ' deer', ' dog', ' frog', 'horse', ' ship', 'truck']
Explanation: Define strings for labels
We can call the labels whatever we want, so long as they appear in the order of 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. Here we're using 5-character labels padded with spaces so that our reports line up later.
End of explanation
np.set_printoptions(formatter=dict(int=lambda x: f'{x:5}')) # to widen the printed array
# Grab the first batch of 10 images
for images,labels in train_loader:
break
# Print the labels
print('Label:', labels.numpy())
print('Class: ', *np.array([class_names[i] for i in labels]))
# Print the images
im = make_grid(images, nrow=5) # the default nrow is 8
plt.figure(figsize=(10,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
Explanation: We don't want to use the variable name "class" here, as it would overwrite Python's built-in keyword.
View a batch of images
End of explanation
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1) # changed from (1, 6, 5, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
self.fc1 = nn.Linear(6*6*16, 120) # changed from (4*4*16) to fit 32x32 images with 3x3 filters
self.fc2 = nn.Linear(120,84)
self.fc3 = nn.Linear(84, 10)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 6*6*16)
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
Explanation: Define the model
In the previous section we used two convolutional layers and two pooling layers before feeding data through a fully connected hidden layer to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. We'll use the same format here.
The only changes are:
* take in 3-channel images instead of 1-channel
* adjust the size of the fully connected input<br>
Our first convolutional layer will have 3 input channels, 6 output channels, a kernel size of 3 (resulting in a 3x3 filter), and a stride length of 1 pixel.<br>These are passed in as <tt>nn.Conv2d(3,6,3,1)</tt>
End of explanation
torch.manual_seed(101)
model = ConvolutionalNetwork()
model
Explanation: <div class="alert alert-info"><strong>Why <tt>(6x6x16)</tt> instead of <tt>(5x5x16)</tt>?</strong><br>
With MNIST the kernels and pooling layers resulted in $\;(((28−2)/2)−2)/2=5.5 \;$ which rounds down to 5 pixels per side.<br>
With CIFAR the result is $\;(((32-2)/2)-2)/2 = 6.5\;$ which rounds down to 6 pixels per side.</div>
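A quick empirical way to confirm this (my addition, using only layers already imported in this notebook) is to push a dummy CIFAR-sized batch through the same conv/pool stack and inspect the resulting shape:
x = torch.randn(1, 3, 32, 32)
x = F.max_pool2d(F.relu(nn.Conv2d(3, 6, 3, 1)(x)), 2, 2)
x = F.max_pool2d(F.relu(nn.Conv2d(6, 16, 3, 1)(x)), 2, 2)
print(x.shape)   # expect torch.Size([1, 16, 6, 6])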
End of explanation
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>6}')
print(f'______\n{sum(params):>6}')
count_parameters(model)
Explanation: Including the bias terms for each layer, the total number of parameters being trained is:<br>
$\quad\begin{split}&(3\times6\times3\times3)+6+(6\times16\times3\times3)+16+(576\times120)+120+(120\times84)+84+(84\times10)+10 \\
&= 162+6+864+16+69120+120+10080+84+840+10 = 81{,}302\end{split}$<br>
End of explanation
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
Explanation: Define loss function & optimizer
End of explanation
import time
start_time = time.time()
epochs = 10
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
b+=1
# Apply the model
y_pred = model(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%1000 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/50000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
# Apply the model
y_val = model(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
Explanation: Train the model
This time we'll feed the data directly into the model without flattening it first.<br>
<div class="alert alert-info"><font color=blue><strong>OPTIONAL: </strong>In the event that training takes too long, you can interrupt the kernel, skip ahead to the bottom of the notebook, and load a trained version of the model that's been saved in this folder.</font></div>
End of explanation
torch.save(model.state_dict(), 'CIFAR10-CNN-Model.pt')
Explanation: Optional: Save the model
This will save your trained model, without overwriting the saved model we have provided called <strong>CIFAR10-CNN-Model-master.pt</strong>
End of explanation
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='validation loss')
plt.title('Loss at the end of each epoch')
plt.legend();
plt.plot([t/500 for t in train_correct], label='training accuracy')
plt.plot([t/100 for t in test_correct], label='validation accuracy')
plt.title('Accuracy at the end of each epoch')
plt.legend();
Explanation: Plot the loss and accuracy comparisons
End of explanation
print(test_correct) # contains the results of all 10 epochs
print()
print(f'Test accuracy: {test_correct[-1].item()*100/10000:.3f}%') # print the most recent result as a percent
Explanation: Evaluate Test Data
End of explanation
# Create a loader for the entire the test set
test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False)
with torch.no_grad():
correct = 0
for X_test, y_test in test_load_all:
y_val = model(X_test)
predicted = torch.max(y_val,1)[1]
correct += (predicted == y_test).sum()
arr = confusion_matrix(y_test.view(-1), predicted.view(-1))
df_cm = pd.DataFrame(arr, class_names, class_names)
plt.figure(figsize = (9,6))
sn.heatmap(df_cm, annot=True, fmt="d", cmap='BuGn')
plt.xlabel("prediction")
plt.ylabel("label (ground truth)")
plt.show();
Explanation: This is not as impressive as with MNIST, which makes sense. We would have to adjust our parameters to obtain better results.<br>
Still, it's much better than the 10% we'd get with random chance!
Display the confusion matrix
In order to map predictions against ground truth, we need to run the entire test set through the model.<br>
Also, since our model was not as accurate as with MNIST, we'll use a <a href='https://seaborn.pydata.org/generated/seaborn.heatmap.html'>heatmap</a> to better display the results.
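A small add-on (my own, reusing the arr confusion matrix and class_names computed above): per-class accuracy is just the diagonal of the matrix divided by each row sum, which makes it easy to see which classes drag the overall score down.
per_class = arr.diagonal() / arr.sum(axis=1)
for name, acc in zip(class_names, per_class):
    print(f'{name}: {acc:.3f}')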
End of explanation
misses = np.array([])
for i in range(len(predicted.view(-1))):
if predicted[i] != y_test[i]:
misses = np.append(misses,i).astype('int64')
# Display the number of misses
len(misses)
# Display the first 8 index positions
misses[:8]
# Set up an iterator to feed batched rows
r = 8 # row size
row = iter(np.array_split(misses,len(misses)//r+1))
Explanation: For more info on the above chart, visit the docs on <a href='https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html'>scikit-learn's confusion_matrix</a>, <a href='https://seaborn.pydata.org/generated/seaborn.heatmap.html'>seaborn heatmaps</a>, and <a href='https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html'>matplotlib colormaps</a>.
Examine the misses
We can track the index positions of "missed" predictions, and extract the corresponding image and label. We'll do this in batches to save screen space.
End of explanation
np.set_printoptions(formatter=dict(int=lambda x: f'{x:5}')) # to widen the printed array
nextrow = next(row)
lbls = y_test.index_select(0,torch.tensor(nextrow)).numpy()
gues = predicted.index_select(0,torch.tensor(nextrow)).numpy()
print("Index:", nextrow)
print("Label:", lbls)
print("Class: ", *np.array([class_names[i] for i in lbls]))
print()
print("Guess:", gues)
print("Class: ", *np.array([class_names[i] for i in gues]))
images = X_test.index_select(0,torch.tensor(nextrow))
im = make_grid(images, nrow=r)
plt.figure(figsize=(8,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
Explanation: Now that everything is set up, run and re-run the cell below to view all of the missed predictions.<br>
Use <kbd>Ctrl+Enter</kbd> to remain on the cell between runs. You'll see a <tt>StopIteration</tt> once all the misses have been seen.
End of explanation
# Instantiate the model and load saved parameters
model2 = ConvolutionalNetwork()
model2.load_state_dict(torch.load('CIFAR10-CNN-Model-master.pt'))
model2.eval()
# Evaluate the saved model against the test set
test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False)
with torch.no_grad():
correct = 0
for X_test, y_test in test_load_all:
y_val = model2(X_test)
predicted = torch.max(y_val,1)[1]
correct += (predicted == y_test).sum()
print(f'Test accuracy: {correct.item()}/{len(test_data)} = {correct.item()*100/(len(test_data)):7.3f}%')
# Display the confusion matrix as a heatmap
arr = confusion_matrix(y_test.view(-1), predicted.view(-1))
df_cm = pd.DataFrame(arr, class_names, class_names)
plt.figure(figsize = (9,6))
sn.heatmap(df_cm, annot=True, fmt="d", cmap='BuGn')
plt.xlabel("prediction")
plt.ylabel("label (ground truth)")
plt.show();
Explanation: <div class="alert alert-info"><font color=blue><h2>Optional: Load a Saved Model</h2>
In the event that training the ConvolutionalNetwork takes too long, you can load a trained version by running the following code:</font>
<pre style='background-color:rgb(217,237,247)'>
model2 = ConvolutionalNetwork()
model2.load_state_dict(torch.load('CIFAR10-CNN-Model-master.pt'))
model2.eval()</pre>
</div>
End of explanation |
7,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><h1>CNN 多通道情感分析</h1></center>
一个有三个通道,分别是word embedding,POS 标签 embedding, 词的情感极性强度embedding
Step1: POS当作一个通道。
Tag word 的方法: http
Step2: 情感极性当作一个通道。
读取情感强度文件,构建字典
Step3: 构建情感极性强度通道
Step4: 否定词。
1.资料:http
Step5: Glove训练好的词向量
利用glove基于twitter训练公开的数据
Step6: 获取训练好的word embedding 数组,用来初始化 Embedding
Step7: 将一个batch大小的index数据,利用index_wordembedding进行embedding
Step8: 构建模型
模型参数
Step9: 错误记录
1.输入的变量和后面同名
CNN -Rand 模型
Step10: CNN-static 模型
Step11: CNN-non-static 模型
Step12: CNN-multichannel 模型
Step13: 模型图
Step14: 模型输入
Step15: 测试数据
Step16: 训练模型
cnn random 模型
Step17: cnn random 结果
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 各100 | 3,4,5 | 300 | 0.4169|
cnn static 模型
Step18: cnn static 结果
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 各100 | 3,4,5 | 300 | 0.4253|
cnn non-static 模型
Step19: cnn non-static 结果
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 各100 | 3,4,5 | 300 | 0.4204|
| 2016-11-26 9:52 |36 |50 | 14526 | 100 | 各100 | 3,4,5 | 300 | 0.4471| | Python Code:
import keras
from os.path import join
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout,Activation, Lambda,Input
from keras.layers import Embedding
from keras.layers import Convolution1D
from keras.datasets import imdb
from keras import backend as K
from keras.layers import Convolution1D, GlobalMaxPooling1D,Convolution2D,Merge,merge
from keras.utils import np_utils
from keras.models import Model
import nltk
from nltk.tag import pos_tag
import numpy as np
from keras.regularizers import l2
import theano
Explanation: <center><h1>Multichannel CNN Sentiment Analysis</h1></center>
The model has three channels: a word embedding, a POS-tag embedding, and a word sentiment-polarity-strength embedding.
End of explanation
file_names = ['stsa.fine.test','stsa.fine.train','stsa.fine.dev']
file_path = '/home/bruce/data/sentiment/citai_process'
def read_file(fname=''):
with open(join(file_path,fname)) as fr:
lines = fr.readlines()
lines = [line.strip().lower() for line in lines]
lables = [int(line[0:1]) for line in lines]
words = [line[2:].split() for line in lines]
return words,lables
train_X,train_y = read_file(fname='stsa.fine.train')
test_X,test_y = read_file(fname='stsa.fine.test')
dev_X,dev_y = read_file(fname='stsa.fine.dev')
print(len(train_X))
print(len(test_X))
print(len(dev_X))
print(train_X[0:2])
print(train_y[0:2])
def tag_sentence(X=[]):
tag_X=[]
for line in X:
word_tag = pos_tag(line,tagset='universal')
tag = [i[1] for i in word_tag]
tag_X.append(tag)
return tag_X
train_tag_X = tag_sentence(X=train_X)
dev_tag_X = tag_sentence(X=dev_X)
test_tag_X = tag_sentence(X=test_X)
print(train_X[0])
print(train_tag_X[0])
Explanation: Use POS tags as one channel.
How to tag words: http://www.nltk.org/book/ch05.html
End of explanation
senti_file = '/home/bruce/data/sentiment/sentiment_diction/wordwithStrength.txt'
def construct_senti_dict(senti_file=''):
with open(senti_file) as fr:
lines = fr.readlines()
lines = [line.strip().split() for line in lines]
lines = [(i[0],float(i[1])) for i in lines]
return dict(lines)
sentiment_dict=construct_senti_dict(senti_file)
print('sentiment number =',len(sentiment_dict))
Explanation: Use sentiment polarity as one channel.
Read the sentiment strength file and build a dictionary.
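A quick sanity check of the lexicon might look like the following sketch (the example words are arbitrary and may or may not be present in the strength file; unknown words fall back to 0):
for w in ['good', 'bad', 'movie']:
    print(w, sentiment_dict.get(w, 0))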
End of explanation
def sentiment_strength(X=[],sentiment_dict=sentiment_dict):
sentiment_X = [[sentiment_dict[w] if w in sentiment_dict else 0 for w in line ]for line in X]
sentiment_X = [[ str(int(val*10)) if val <=0 else '+'+str(int(val*10)) for val in line] for line in sentiment_X]
return sentiment_X
train_sentiment_X = sentiment_strength(X=train_X,sentiment_dict=sentiment_dict)
dev_sentiment_X = sentiment_strength(X=dev_X,sentiment_dict=sentiment_dict)
test_sentiment_X = sentiment_strength(X=test_X,sentiment_dict=sentiment_dict)
assert len(train_sentiment_X) == len(train_X)
print(train_sentiment_X[0:5])
print(train_X[0:5])
print(train_y[0:5])
Explanation: Build the sentiment-polarity-strength channel
End of explanation
def token_to_index(datas=[]):
word_index={}
count=1
for data in datas:
for list_ in data:
for w in list_:
if w not in word_index:
word_index[w] = count
count = count + 1
print('leng of word_index =',len(word_index))
for i in range(len(datas)):
datas[i] = [[ word_index[w] for w in line ] for line in datas[i]]
return datas,word_index
X,word_index = token_to_index(datas=[train_X,dev_X,train_sentiment_X,train_tag_X,dev_sentiment_X,dev_tag_X])
train_X,dev_X,train_sentiment_X,train_tag_X,dev_sentiment_X,dev_tag_X = X
print('length of dict_index = ',len(word_index))
print(train_sentiment_X[0:2])
print(train_X[0:2])
print(train_y[0:2])
Explanation: Negation words.
1. Reference: http://web.stanford.edu/~cgpotts/papers/potts-salt20-negation.pdf
2. Negation handling in NLP: http://stackoverflow.com/questions/28720174/negation-handling-in-nlp
Data preprocessing
End of explanation
embedding_dim = 100
we_file = '/home/bruce/data/glove/twitter/glove.twitter.27B.{0}d.txt'.format(embedding_dim)
def get_index_wordembedding(we_file='',word_index={}):
index_wordembedding ={}
zeros = np.zeros(embedding_dim)
for line in open(we_file):
elements = line.strip().split()
if elements[0] in word_index:
index = word_index[elements[0]]
wordembedding = [float(i) for i in elements[1:]]
index_wordembedding[index] = wordembedding
print('总word的数目= ',len(word_index))
print('总word embedding 的数目 = ',len(index_wordembedding))
for word,index in word_index.items():
if index not in index_wordembedding:
index_wordembedding[index] = zeros
assert len(index_wordembedding) == len(word_index)
return index_wordembedding
index_wordembedding = get_index_wordembedding(we_file=we_file,word_index=word_index)
Explanation: Pre-trained GloVe word vectors
We use the publicly released GloVe vectors trained on Twitter data.
End of explanation
def get_trained_embedding(index_wordembedding=None):
index_we = sorted(index_wordembedding.items())
print('index_we[0] =',index_we[0])
trained_embedding = [t[1] for t in index_we]
zeros = np.zeros(embedding_dim)
trained_embedding = np.vstack((zeros,trained_embedding))
return np.array(trained_embedding)
Explanation: Get the pre-trained word embedding array, used to initialize the Embedding layer
End of explanation
def batch_indexData_embedding(X=None,index_wordembedding={}):
zeros = np.zeros(embedding_dim)
return [ [ index_wordembedding[w] if w in index_wordembedding else zeros for w in line ] for line in X ]
Explanation: Embed one batch of index data using index_wordembedding
End of explanation
max_len = 36
batch_size=50
max_features= 14526
#embedding_dims=50
nb_filter = 100
filter_length1 = 3
filter_length2 = 4
filter_length3 = 5
dense1_hindden = 150*2
nb_classes = 5
Explanation: Build the model
Model parameters
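Before training, the index sequences still need to be padded to max_len and the labels one-hot encoded. A minimal sketch using the keras utilities imported above (the variable names X_train_pad, Y_train and the dev counterparts are my own, not from the original notebook):
X_train_pad = sequence.pad_sequences(train_X, maxlen=max_len)
X_dev_pad = sequence.pad_sequences(dev_X, maxlen=max_len)
Y_train = np_utils.to_categorical(train_y, nb_classes)
Y_dev = np_utils.to_categorical(dev_y, nb_classes)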
End of explanation
print('Build model...')
input_random = Input(shape=(max_len,), dtype='int32', name='main_input1')
embedding = Embedding(output_dim=embedding_dim, input_dim=max_features)(input_random)
# convolution layers
conv1 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(embedding)
conv2 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(embedding)
conv3 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(embedding)
conv1 = GlobalMaxPooling1D()(conv1)  # bug fix: instantiate the layer, then apply it to the tensor
conv2 =GlobalMaxPooling1D()(conv2)
conv3 =GlobalMaxPooling1D()(conv3)
merged_vector = merge([conv1,conv2,conv3], mode='concat')
# fully connected layer
dense_layer = Dense(dense1_hindden)
dens1 = dense_layer(merged_vector)
print('dense_layer input_shape should == (300,)')
print(dense_layer.input_shape)
dens1 = Activation('relu')(dens1)
# softmax layer
dens2 = Dense(nb_classes)(dens1)
output_random = Activation('softmax')(dens2)
model = Model(input=input_random,output=output_random)
print('finish build model')
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: Error log
1. An input variable shared the same name as a variable defined later
CNN-Rand model
End of explanation
input_static = Input(shape=(max_len,embedding_dim), name='main_input2')
# 卷积层
conv1 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(input_static)
conv2 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(input_static)
conv3 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(input_static)
conv1 =GlobalMaxPooling1D()(conv1)
conv2 =GlobalMaxPooling1D()(conv2)
conv3 =GlobalMaxPooling1D()(conv3)
merged_vector = merge([conv1,conv2,conv3], mode='concat')
# 全连接层
dens1 = Dense(dense1_hindden)(merged_vector)
dens1 = Activation('relu')(dens1)
# softmax层
dens2 = Dense(nb_classes)(dens1)
output_static = Activation('softmax')(dens2)
model = Model(input=input_static,output=output_static)
print('finish build model')
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: CNN-static model
End of explanation
print('Build model...')
input_non_static = Input(shape=(max_len,), dtype='int32', name='main_input1')
#初始化Embedding层
trained_embedding = get_trained_embedding(index_wordembedding=index_wordembedding)
embedding_layer = Embedding(max_features,
embedding_dim,
weights=[trained_embedding]
)
embedding = embedding_layer(input_non_static)
conv1 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu'
)(embedding)
conv2 = Convolution1D(nb_filter = nb_filter,
                      filter_length = filter_length2,
                      border_mode = 'valid',
                      activation='relu'
                      )(embedding)
conv3 = Convolution1D(nb_filter = nb_filter,
                      filter_length = filter_length3,
                      border_mode = 'valid',
                      activation='relu'
                      )(embedding)
dropout = Dropout(0.5)
conv1 =GlobalMaxPooling1D()(conv1)
conv2 =GlobalMaxPooling1D()(conv2)
conv3 =GlobalMaxPooling1D()(conv3)
#conv1 = dropout(conv1)
#conv2 = dropout(conv2)
#conv3 = dropout(conv3)
merged_vector = merge([conv1,conv2,conv3], mode='concat')
# Fully connected layer
dense_layer = Dense(dense1_hindden)
dens1 = dense_layer(merged_vector)
print('dense_layer input shape = ',dense_layer.input_shape)
dens1 = Activation('relu')(dens1)
dens1 = dropout(dens1)
# Softmax layer
dens2 = Dense(nb_classes)(dens1)
output_non_static = Activation('softmax')(dens2)
model = Model(input=input_non_static,output=output_non_static)
print('finish build model')
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: CNN-non-static model
End of explanation
print('Build model...')
input1 = Input(shape=(max_len,), dtype='int32', name='main_input1')
input2 = Input(shape=(max_len,embedding_dim), name='main_input2')
#input3 = Input(shape=(max_len,), dtype='int32', name='main_input3')
embedding = Embedding(output_dim=embedding_dim, input_dim=max_features)
embedding1 = embedding(input1)
#embedding2 = embedding(input2)
#embedding3 = embedding(input3)
#---------------------------------------------------------------------------
# Convolution approach 1: use a different set of convolution kernels for each channel
'''
cov1_out1 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length,
border_mode = 'valid',
activation='relu'
)(embedding1)
cov1_out2 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length,
border_mode = 'valid',
activation='relu'
)(embedding2)
cov1_out3 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length,
border_mode = 'valid',
activation='relu'
)(embedding3)
'''
# Convolution approach 2: share the same convolution kernels across all channels
conv11 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length1,
border_mode = 'valid',
activation='relu',
W_regularizer=l2(3)
)
conv12 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length2,
border_mode = 'valid',
activation='relu',
W_regularizer=l2(3)
)
conv13 = Convolution1D(nb_filter = nb_filter,
filter_length = filter_length3,
border_mode = 'valid',
activation='relu',
W_regularizer=l2(3)
)
dropout = Dropout(0.5)
# First channel: index input passed through the trainable Embedding layer
cov1_out11 = conv11(embedding1)
cov1_out12 = conv12(embedding1)
cov1_out13 = conv13(embedding1)
cov1_out11 = dropout(cov1_out11)
cov1_out12 = dropout(cov1_out12)
cov1_out13 = dropout(cov1_out13)
# Second channel: pre-trained embeddings fed in directly as input2
cov1_out14 = conv11(input2)
cov1_out15 = conv12(input2)
cov1_out16 = conv13(input2)
cov1_out14 = dropout(cov1_out14)
cov1_out15 = dropout(cov1_out15)
cov1_out16 = dropout(cov1_out16)
#cov1_out2 = conv(embedding2)
#cov1_out3 = conv(embedding3)
#------------------------------------------------------------------------------
maxpooling = GlobalMaxPooling1D()
conv11 = maxpooling(cov1_out11)
conv12 = maxpooling(cov1_out12)
conv13 = maxpooling(cov1_out13)
conv14 = maxpooling(cov1_out14)
conv15 = maxpooling(cov1_out15)
conv16 = maxpooling(cov1_out16)
merged_vector = merge([conv11,conv12,conv13,conv14,conv15,conv16], mode='concat')
#dropout = Dropout(0.5)
#merged_vector = dropout(merged_vector)
dens1 = Dense(dense1_hindden)(merged_vector)
dens1 = Activation('relu')(dens1)
dens2 = Dense(nb_classes)(dens1)
output = Activation('softmax')(dens2)
model = Model(input=[input1,input2],output=output)
print('finish build model')
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: CNN-multichannel model
End of explanation
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Model diagram
End of explanation
print(type(train_y[0]))
train_y_model = np_utils.to_categorical(train_y, nb_classes)
dev_y_model = np_utils.to_categorical(dev_y, nb_classes)
train_X_model = sequence.pad_sequences(train_X, maxlen=max_len)
dev_X_model = sequence.pad_sequences(dev_X, maxlen=max_len)
train_sentiment_X_model = sequence.pad_sequences(train_sentiment_X,maxlen=max_len)
train_tag_X_model= sequence.pad_sequences(train_tag_X,maxlen=max_len)
dev_sentiment_X_model = sequence.pad_sequences(dev_sentiment_X,maxlen=max_len)
dev_tag_X_model = sequence.pad_sequences(dev_tag_X,maxlen=max_len)
#train_embedding_X_model = batch_indexData_embedding(X=train_X_model,index_wordembedding=index_wordembedding)
dev_embedding_X_model = batch_indexData_embedding(X=dev_X_model,index_wordembedding=index_wordembedding)
dev_embedding_X_model = np.array(dev_embedding_X_model)
Explanation: Model inputs
End of explanation
# Convert words to vocabulary indices
def to_index(word_index={},data=[]):
return [[word_index[w] if w in word_index else 0 for w in sentence] for sentence in data]
test_index_X = to_index(word_index,test_X)
# Pad or truncate each sequence to max_len
test_index_X = sequence.pad_sequences(test_index_X, maxlen=max_len)
#embedding
test_embedding_X = batch_indexData_embedding(X=test_index_X,index_wordembedding=index_wordembedding)
test_y = np_utils.to_categorical(test_y, nb_classes)
def my_generator4(X1=None,X2=None,X3=None,y=None):
i = 0
max_i = int(len(X1)/batch_size)
while True:
i = i % max_i
x1_batch = X1[i*batch_size:(i+1)*batch_size]
x2_batch = X2[i*batch_size:(i+1)*batch_size]
x3_batch = X3[i*batch_size:(i+1)*batch_size]
y_batch = y[i*batch_size:(i+1)*batch_size]
yield ([x1_batch,x2_batch,x3_batch],y_batch)
i = i + 1
def my_generator3(X1=None,y=None):
i = 0
max_i = int(len(X1)/batch_size)
while True:
i = i % max_i
x1_batch = X1[i*batch_size:(i+1)*batch_size]
x2_batch = batch_indexData_embedding(X=x1_batch,index_wordembedding=index_wordembedding)
x2_batch = np.array(x2_batch)
y_batch = y[i*batch_size:(i+1)*batch_size]
yield ([x1_batch,x2_batch],y_batch)
i = i + 1
def my_generator1(X1=None,y=None):
i = 0
max_i = int(len(X1)/batch_size)
while True:
i = i % max_i
x1_batch = X1[i*batch_size:(i+1)*batch_size]
y_batch = y[i*batch_size:(i+1)*batch_size]
yield (x1_batch,y_batch)
i = i + 1
def my_generator2(X1=None,y=None):
i = 0
max_i = int(len(X1)/batch_size)
while True:
i = i % max_i
x1_batch = X1[i*batch_size:(i+1)*batch_size]
x1_batch = batch_indexData_embedding(X=x1_batch,index_wordembedding=index_wordembedding)
x1_batch = np.array(x1_batch)
y_batch = y[i*batch_size:(i+1)*batch_size]
yield (x1_batch,y_batch)
i = i + 1
Explanation: Test data
End of explanation
model.fit_generator(my_generator1(train_X_model,train_y_model),samples_per_epoch = 32*100,nb_epoch=100,verbose=1,validation_data=(dev_X_model,dev_y_model))
Explanation: Train the model
CNN-rand model
End of explanation
model.fit_generator(my_generator2(train_X_model,train_y_model),samples_per_epoch = 32*100,nb_epoch=100,verbose=1,validation_data=(test_embedding_X,test_y))
Explanation: CNN-rand results
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 100 each | 3,4,5 | 300 | 0.4169|
CNN-static model
End of explanation
model.fit_generator(my_generator1(train_X_model,train_y_model),samples_per_epoch = 50*40,nb_epoch=100,verbose=1,validation_data=(test_index_X,test_y))
Explanation: CNN-static results
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 100 each | 3,4,5 | 300 | 0.4253|
CNN-non-static model
End of explanation
model.fit_generator(my_generator4(train_X_model,train_sentiment_X_model,train_tag_X_model,train_y_model),samples_per_epoch = 32*100,nb_epoch=100,verbose=1,validation_data=([dev_X_model,dev_sentiment_X_model,dev_tag_X_model],dev_y_model))
Explanation: CNN-non-static results
|time |max_len | batch_size | max_features | embedding_dims | nb_filter | filter_length | dense1_hindden |val_acc |
| - |- |- |- |- |- |- | - |- |
| 2016-11-25 9:52 |36 |50 | 14526 | 100 | 100 each | 3,4,5 | 300 | 0.4204|
| 2016-11-26 9:52 |36 |50 | 14526 | 100 | 100 each | 3,4,5 | 300 | 0.4471|
End of explanation |
7,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPS tracks
http
Step1: TODO
Step2: Convert the data to Web Mercator
Step3: Contextily helper function
Step4: Add background tiles to plot
Step5: Save selected departments into a GeoJSON file | Python Code:
import pandas as pd
import geopandas as gpd
Explanation: GPS tracks
http://geopandas.org/gallery/plotting_basemap_background.html#adding-a-background-map-to-plots
https://ocefpaf.github.io/python4oceanographers/blog/2015/08/03/fiona_gpx/
End of explanation
df = gpd.read_file("communes-20181110.shp")
!head test.gpx
!head test.csv
# https://gis.stackexchange.com/questions/114066/handling-kml-csv-with-geopandas-drivererror-unsupported-driver-ucsv
df_tracks = pd.read_csv("test.csv", skiprows=3)
df_tracks.head()
df_tracks.columns
from shapely.geometry import LineString
# https://shapely.readthedocs.io/en/stable/manual.html
positions = df_tracks.loc[:, ["Longitude (deg)", "Latitude (deg)"]]
positions
LineString(positions.values)
# https://stackoverflow.com/questions/38961816/geopandas-set-crs-on-points
df_tracks = gpd.GeoDataFrame(geometry=[LineString(positions.values)], crs = {'init' :'epsg:4326'})
df_tracks.head()
df_tracks.plot()
communes_list = [
"78160", # Chevreuse
"78575", # Saint-Rémy-lès-Chevreuse
]
df = df.loc[df.insee.isin(communes_list)]
df
ax = df_tracks.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
ax = df.plot(ax=ax, alpha=0.5, edgecolor='k')
#df.plot(ax=ax)
Explanation: TODO: put the following lists in a JSON dict and make it available in a public Git repository (it can be useful for other uses)
TODO: put the generated GeoJSON files in a public Git repository
End of explanation
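# A minimal sketch of the first TODO above; the file name and dictionary layout are
# assumptions for illustration, not part of the original notebook.
import json
communes_of_interest = {"chevreuse_area": ["78160", "78575"]}  # the INSEE codes used later in this notebook
with open("communes_of_interest.json", "w") as f:
    json.dump(communes_of_interest, f, indent=2)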
df_tracks_wm = df_tracks.to_crs(epsg=3857)
df_wm = df.to_crs(epsg=3857)
df_tracks_wm
ax = df_tracks_wm.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
Explanation: Convert the data to Web Mercator
End of explanation
import contextily as ctx
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
xmin, xmax, ymin, ymax = ax.axis()
basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)
ax.imshow(basemap, extent=extent, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
Explanation: Contextily helper function
End of explanation
ax = df_tracks_wm.plot(figsize=(16, 16), alpha=0.5, edgecolor='k')
ax = df_wm.plot(ax=ax, alpha=0.5, edgecolor='k')
#add_basemap(ax, zoom=13, url=ctx.sources.ST_TONER_LITE)
add_basemap(ax, zoom=14)
ax.set_axis_off()
Explanation: Add background tiles to plot
End of explanation
import fiona
fiona.supported_drivers
!rm tracks.geojson
df_tracks.to_file("tracks.geojson", driver="GeoJSON")
!ls -lh tracks.geojson
df = gpd.read_file("tracks.geojson")
df
ax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
Explanation: Save selected departments into a GeoJSON file
End of explanation |
7,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
?plt.scatter()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
x = np.random.rand(100)
y = np.random.rand(100)
plt.scatter(x, y, label = 'The Dots', c = u'r', marker = u'o')
plt.grid(True)
plt.box(False)
plt.xlabel('The X-Axis')
plt.ylabel('The Y-Axis')
plt.legend(loc=0) ##I have no idea if you wanted a legend... but I tried to find the best place for it
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
data = np.random.rand(100)
data
?plt.hist()
plt.hist(data, bins = 30, histtype = u'step', color = 'g')
plt.box(True)
plt.xlabel('The X-Axis for Histograms')
plt.ylabel('The Y-Axis for Histograms')
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
7,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Generating the training data
The first step is to generate some training data. For this, we will use NumPy's random
number generator. As discussed in the previous section, we will fix the seed of the random
number generator, so that re-running the script will always generate the same values
Step2: We can pick a single data point with 0 <= x <= 100 and 0 <= y <= 100
Step3: As shown in the preceding output, this will pick two random integers between 0 and 100.
We will interpret the first integer as the data point's $x$ coordinate on the map, and the
second integer as the point's $y$ coordinate. Similarly, let's pick a label for the data point
Step5: Turns out that this data point would have class 0.
Let's wrap this process in a function that takes as input the number of data points to
generate (that is, num_samples) and the number of features every data point has (that is,
num_features)
Step6: Let's put the function to test and generate an arbitrary number of data points, let's say
eleven, whose coordinates are chosen randomly
Step7: As we can see from the preceding output, the train_data variable is an 11 x 2 array, where
each row corresponds to a single data point. We can also inspect the first data point with its
corresponding label by indexing into the array
Step8: This tells us that the first data point is a blue square (because it has class 0) and lives at
location $(x, y) = (71, 60)$ on the town map. If we want, we can plot this data point on
the town map using Matplotlib
Step9: But what if we want to visualize the whole training set at once? Let's write a function for
that. The function should take as input a list of all the data points that are blue squares
(all_blue) and a list of the data points that are red triangles (all_red)
Step10: Let's try it on our dataset! First we have to split all the data points into red and blue sets. We
can quickly select all the elements of the labels array created earlier that are equal to 0,
using the following command (where ravel flattens the array)
Step11: All the blue data points are then all the rows of the train_data array created earlier,
whose corresponding label is 0
Step12: The same can be done for all the red data points
Step13: Finally, let's plot all the data points
Step14: Training the classifier
Now it's time to train the classifier.
As all other machine learning functions, the $k$-NN classifier is part of OpenCV 3.1's ml module. You can create a new classifier using the following command
Step15: We then pass our training data to the train method
Step16: Here, we have to tell knn that our data is an $N \times 2$ array (that is, every row is a data point).
Upon success, the function returns True.
Predicting the label of a new data point
The other really helpful method that knn provides is called findNearest. It can be used to
predict the label of a new data point based on its nearest neighbors.
Thanks to our generate_data function, it is actually really easy to generate a new data
point! We can think of a new data point as a dataset of size 1
Step17: Our function also returns a random label, but we are not interested in that. Instead, we
want to predict it using our trained classifier! We can tell Python to ignore an output value
with an underscore (_).
Let's have a look at our town map again. We will plot the training set as we did earlier, but
also add the new data point as a green circle (since we don't know yet whether it is
supposed to be a blue square or a red triangle)
Step18: If you had to guess based on its neighbors, what label would you assign the new data pointblue
or red?
Well, it depends, doesn't it? If we look at the house closest to it (the one living roughly at $(x,
y) = (85, 75)$), we would probably assign
the new data point to be a red triangle as well. This is exactly what our classifier would
predict for $k=1$
Step19: Here, knn reports that the nearest neighbor is 250 arbitrary units away, that the neighbor
has label 1 (which we said corresponds to red triangles), and that therefore the new data
point should also have label 1. The same would be true if we looked at the $k=2$ nearest
neighbors, and the $k=3$ nearest neighbors.
But we want to be careful not to pick arbitrary
even numbers for $k$. Why is that? Refer to page 64 for the answer.
Finally, what would happen if we dramatically widened our search window and classified
the new data point based on its $k=7$ nearest neighbors (circled with a solid line in the figure
mentioned earlier)?
Let's find out by calling the findNearest method with $k=7$ neighbors
Step20: Suddenly, the predicted label is 0 (blue square). The reason is that we now have four
neighbors within the solid circle that are blue squares (label 0), and only three that are red
triangles (label 1). So the majority vote would suggest making the newcomer a blue square
as well.
For $k=6$, there is a tie
Step21: Alternatively, predictions can be made with the predict method. But first, need to set k | Python Code:
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Measuring-Model-Performance-with-Scoring-Functions | Contents | Using Regression Models to Predict Continuous Outcomes >
Understanding the k-NN Classifier
The $k$-NN algorithm is arguably one of the simplest machine learning algorithms. The
reason for this is that we basically only need to store the training dataset. Then, in order to
make a prediction for a new data point, we only need to find the closest data point in the
training dataset-its nearest neighbor.
In a nutshell, the $k$-NN algorithm argues that a data point probably belongs to the same
class as its neighbors.
Of course, some neighborhoods might be a little more complicated. In this case, we would
not just consider our closest neighbor (where $k=1$), but instead our $k$ nearest neighbors.
That's all there is to it.
Implementing k-NN in OpenCV
Using OpenCV, we can easily create a $k$-NN model via the function cv2.ml.KNearest_create. Building the model then involves the following steps:
- Generate some training data.
- Create a k-NN object for a given number k.
- Find the k nearest neighbors of a new data point that we want to classify.
- Assign the class label of the new data point by majority vote.
- Plot the result.
We first import all the necessary modules: OpenCV for the $k$-NN algorithm, NumPy for
data munging, and Matplotlib for plotting. If you are working in a Jupyter Notebook, don't
forget to call the %matplotlib inline magic:
End of explanation
np.random.seed(42)
Explanation: Generating the training data
The first step is to generate some training data. For this, we will use NumPy's random
number generator. As discussed in the previous section, we will fix the seed of the random
number generator, so that re-running the script will always generate the same values:
End of explanation
single_data_point = np.random.randint(0, 100, 2)
single_data_point
Explanation: We can pick a single data point with 0 <= x <= 100 and 0 <= y <= 100:
End of explanation
single_label = np.random.randint(0, 2)
single_label
Explanation: As shown in the preceding output, this will pick two random integers between 0 and 100.
We will interpret the first integer as the data point's $x$ coordinate on the map, and the
second integer as the point's $y$ coordinate. Similarly, let's pick a label for the data point:
End of explanation
def generate_data(num_samples, num_features=2):
    """Randomly generates a number of data points"""
data_size = (num_samples, num_features)
train_data = np.random.randint(0, 100, size=data_size)
labels_size = (num_samples, 1)
labels = np.random.randint(0, 2, size=labels_size)
return train_data.astype(np.float32), labels
Explanation: Turns out that this data point would have class 0.
Let's wrap this process in a function that takes as input the number of data points to
generate (that is, num_samples) and the number of features every data point has (that is,
num_features):
End of explanation
train_data, labels = generate_data(11)
train_data
Explanation: Let's put the function to test and generate an arbitrary number of data points, let's say
eleven, whose coordinates are chosen randomly:
End of explanation
train_data[0], labels[0]
Explanation: As we can see from the preceding output, the train_data variable is an 11 x 2 array, where
each row corresponds to a single data point. We can also inspect the first data point with its
corresponding label by indexing into the array:
End of explanation
plt.plot(train_data[0, 0], train_data[0, 1], 'sb')
plt.xlabel('x coordinate')
plt.ylabel('y coordinate')
Explanation: This tells us that the first data point is a blue square (because it has class 0) and lives at
location $(x, y) = (71, 60)$ on the town map. If we want, we can plot this data point on
the town map using Matplotlib:
End of explanation
def plot_data(all_blue, all_red):
plt.figure(figsize=(10, 6))
plt.scatter(all_blue[:, 0], all_blue[:, 1], c='b', marker='s', s=180)
plt.scatter(all_red[:, 0], all_red[:, 1], c='r', marker='^', s=180)
plt.xlabel('x coordinate (feature 1)')
plt.ylabel('y coordinate (feature 2)')
Explanation: But what if we want to visualize the whole training set at once? Let's write a function for
that. The function should take as input a list of all the data points that are blue squares
(all_blue) and a list of the data points that are red triangles (all_red):
End of explanation
labels.ravel() == 0
Explanation: Let's try it on our dataset! First we have to split all the data points into red and blue sets. We
can quickly select all the elements of the labels array created earlier that are equal to 0,
using the following command (where ravel flattens the array):
End of explanation
blue = train_data[labels.ravel() == 0]
Explanation: All the blue data points are then all the rows of the train_data array created earlier,
whose corresponding label is 0:
End of explanation
red = train_data[labels.ravel() == 1]
Explanation: The same can be done for all the red data points:
End of explanation
plot_data(blue, red)
Explanation: Finally, let's plot all the data points:
End of explanation
knn = cv2.ml.KNearest_create()
Explanation: Training the classifier
Now it's time to train the classifier.
As all other machine learning functions, the $k$-NN classifier is part of OpenCV 3.1's ml module. You can create a new classifier using the following command:
End of explanation
knn.train(train_data, cv2.ml.ROW_SAMPLE, labels)
Explanation: We then pass our training data to the train method:
End of explanation
newcomer, _ = generate_data(1)
newcomer
Explanation: Here, we have to tell knn that our data is an $N \times 2$ array (that is, every row is a data point).
Upon success, the function returns True.
Predicting the label of a new data point
The other really helpful method that knn provides is called findNearest. It can be used to
predict the label of a new data point based on its nearest neighbors.
Thanks to our generate_data function, it is actually really easy to generate a new data
point! We can think of a new data point as a dataset of size 1:
End of explanation
plot_data(blue, red)
plt.plot(newcomer[0, 0], newcomer[0, 1], 'go', markersize=14);
Explanation: Our function also returns a random label, but we are not interested in that. Instead, we
want to predict it using our trained classifier! We can tell Python to ignore an output value
with an underscore (_).
Let's have a look at our town map again. We will plot the training set as we did earlier, but
also add the new data point as a green circle (since we don't know yet whether it is
supposed to be a blue square or a red triangle):
End of explanation
ret, results, neighbor, dist = knn.findNearest(newcomer, 1)
print("Predicted label:\t", results)
print("Neighbor's label:\t", neighbor)
print("Distance to neighbor:\t", dist)
Explanation: If you had to guess based on its neighbors, what label would you assign the new data point: blue
or red?
Well, it depends, doesn't it? If we look at the house closest to it (the one living roughly at $(x,
y) = (85, 75)$), we would probably assign
the new data point to be a red triangle as well. This is exactly what our classifier would
predict for $k=1$:
End of explanation
ret, results, neighbor, dist = knn.findNearest(newcomer, 7)
print("Predicted label:\t", results)
print("Neighbor's label:\t", neighbor)
print("Distance to neighbor:\t", dist)
Explanation: Here, knn reports that the nearest neighbor is 250 arbitrary units away, that the neighbor
has label 1 (which we said corresponds to red triangles), and that therefore the new data
point should also have label 1. The same would be true if we looked at the $k=2$ nearest
neighbors, and the $k=3$ nearest neighbors.
But we want to be careful not to pick arbitrary
even numbers for $k$. Why is that? Refer to page 64 for the answer.
Finally, what would happen if we dramatically widened our search window and classified
the new data point based on its $k=7$ nearest neighbors (circled with a solid line in the figure
mentioned earlier)?
Let's find out by calling the findNearest method with $k=7$ neighbors:
End of explanation
ret, results, neighbors, dist = knn.findNearest(newcomer, 6)
print("Predicted label:\t", results)
print("Neighbors' labels:\t", neighbors)
print("Distance to neighbors:\t", dist)
Explanation: Suddenly, the predicted label is 0 (blue square). The reason is that we now have four
neighbors within the solid circle that are blue squares (label 0), and only three that are red
triangles (label 1). So the majority vote would suggest making the newcomer a blue square
as well.
For $k=6$, there is a tie:
End of explanation
knn.setDefaultK(7)
knn.predict(newcomer)
knn.setDefaultK(6)
knn.predict(newcomer)
Explanation: Alternatively, predictions can be made with the predict method. But first, need to set k:
End of explanation |
7,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gravitational Redshift (rv_grav)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
Gravitational redshifts are only accounted for flux-weighted RVs (dynamical RVs literally only return the z-component of the velocity of the center-of-mass of each star).
First let's run a model with the default radii for our stars.
Step3: Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.
Step4: Now let's run another model but with much smaller stars (but with the same masses).
Step5: Now let's run another model, but with gravitational redshift effects disabled
Step6: Influence on Radial Velocities
Step7: Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars). | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Gravitational Redshift (rv_grav)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('atm', 'blackbody')
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
print b['value@requiv@primary@component'], b['value@requiv@secondary@component']
Explanation: Relevant Parameters
Gravitational redshifts are only accounted for flux-weighted RVs (dynamical RVs literally only return the z-component of the velocity of the center-of-mass of each star).
First let's run a model with the default radii for our stars.
End of explanation
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='defaultradii_true')
Explanation: Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.
End of explanation
b['requiv@primary'] = 0.4
b['requiv@secondary'] = 0.4
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='smallradii_true')
Explanation: Now let's run another model but with much smaller stars (but with the same masses).
End of explanation
b.run_compute(rv_method='flux-weighted', rv_grav=False, irrad_method='none', model='smallradii_false')
Explanation: Now let's run another model, but with gravitational redshift effects disabled
End of explanation
afig, mplfig = b.filter(model=['defaultradii_true', 'smallradii_true']).plot(legend=True, show=True)
afig, mplfig = b.filter(model=['smallradii_true', 'smallradii_false']).plot(legend=True, show=True)
Explanation: Influence on Radial Velocities
End of explanation
print b['rvs@rv01@primary@defaultradii_true'].get_value().min()
print b['rvs@rv01@primary@smallradii_true'].get_value().min()
print b['rvs@rv01@primary@smallradii_false'].get_value().min()
print b['rvs@rv01@primary@defaultradii_true'].get_value().max()
print b['rvs@rv01@primary@smallradii_true'].get_value().max()
print b['rvs@rv01@primary@smallradii_false'].get_value().max()
Explanation: Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars).
End of explanation |
7,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
matrix = np.ones((n), dtype=np.int64)
D = (n-1)*(np.diag(matrix))
return D
print np.ones((1))
print complete_deg(5)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
    matrix = np.ones((n), dtype=np.int64)  # 1-D vector of n ones
    D = np.diag(matrix)                    # n x n matrix with ones on the diagonal
    A = matrix - D                         # broadcasting: each row of ones minus the diagonal -> zeros on the diagonal, ones elsewhere
return A
print complete_adj(5)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
print np.linalg.eigvals(complete_deg(2)-complete_adj(2))
print np.linalg.eigvals(complete_deg(5)-complete_adj(5))
print np.linalg.eigvals(complete_deg(10)-complete_adj(10))
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation |
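# A quick numerical check (not part of the original exercise) of the pattern suggested by the
# output above: the Laplacian of K_n appears to have eigenvalue 0 once and eigenvalue n with
# multiplicity n-1.
n = 8
L = complete_deg(n) - complete_adj(n)
evals = np.sort(np.linalg.eigvals(L).real)
print evals
print np.allclose(evals, [0] + [n] * (n - 1))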
7,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 5.4
Step1: The Data
We'll start off by exploring our dataset to see what attributes we have and how the class of the tumor is represented
Before we proceed, ensure to include headers to the dataset provided by the University of Wisconsin. We will use the following headers
Step2: To understand the meaning of the abbreviations we can consult the dataset's website to find a description of each attribute in order. We are going to train on all features unlike the logistic regression example (where we just trained on three). This does mean that we will be unable to visualize the results, but will get a feel for how to work with high-dimensional data.
If you noticed the Class attribute at the end (which gives the class of the tumor), you'll find that it takes either 2 or 4, where 2 represents a benign tumor while 4 represents a malignant tumor. We'll change that to more expressive values and make a benign tumor represented by 0 (false) and mlignants by 1s (true).
You'll notice that the ID attribute of data that is useless to our modelling, since it provides no information about the tumor itself, and is instead a way of identifying a specific tumor. We will hence strip this from our dataset before training.
Step3: We will now need to split up the dataset into two separate tensors
Step4: Training the Neural Network Model
For the training, we are going to split the dataset into a training set and a test set. The training set will be 70% of the original data set, and will be what the neural network will learn from. We will test the accuracy of the neural network's learned weights by using the test set, which is composed of 30% of the original data.
It's very important to shuffle the dataset before partitioning into training/test sets. Why? Because the data given to us by University of Wisconsin may be in some sorted (apparent or unapparent) order. It may be that most y=0 examples are in the latter half of the data. It may be that most of the nearby recorded patients are similar, or were recorded during similar dates/times. We want none of that - we want to remove any information and ensure that our partitioning is random, so that our test results represent true probabilites of picking a random training case from an entire population of permutations of the feature vector. For example, I once got a 5% difference in test results since the data I had was sorted beforehand. Essentially, we want to make sure that we're being absolutely fair about the partition, and not accidentally making our test results too good/too bad.
We can't use numpy's traditional shuffle function because this shuffles one array only. If we independently shuffled x and y, the order between them would be lost (ie. if training case x had output of 1 beforehand, this may be accidentally changed to an output of 0, since we use corresponding indicdes to match up the inputs and outputs).
Step5: scikit-neuralnetwork offers a really neat, easy to use API (from sknn.mlp import Classifier, Layer) for training neural networks. This API has support for many different paradigms like dropout, momentum, weight decay, mini-batch gradient descent etc. and even different neural network types like Convolutional Neural Networks. Today, however, our goal is to get a simple Artificial Neural Network setup!
Our first job is to configure the architecture for the neural network. We will need to decide
Step6: Our neural network has been configured and built! Now, we just need to train it using our training set, using the intuitively named function "fit"
Step7: The output in my console due to the DEBUG logging looked like this (it's fun to see training in action!)
Step8: Unlike logistic regression, the weights of neural networks, unfortunately, are not very interpretable. There are also a high number of these weights. For example, the weights my system produced were outputted as so | Python Code:
import pandas as pd # we use this library to import a CSV of cancer tumor data
import numpy as np # we use this library to help us represent traditional Python arrays/lists as matrices/tensors with linear algebra operations
from sknn.mlp import Classifier, Layer # we use this library for the actual neural network code
from sklearn.utils import shuffle # we use this library for randomly shuffling arrays/tensors
import sys # we use for accessing output window
import logging # we use this library for outputting real time statistics/updates on training progress
logging.basicConfig(format="%(message)s", level=logging.DEBUG, stream=sys.stdout) # set the logging mode to DEBUG to output training information, use "INFO" for less volume of output
Explanation: Example 5.4: Classifying Malignant/Benign Breast Tumors with Artificial Neural Networks
Welcome to the practical section of module 5.4. Here we'll be using Artificial Neural Networks with the Wisconsin Breast Cancer Database just like in the practical example for module 5.2 to predict whether a patient's tumor is benign or malignant based on tumor cell characteristics. This is just one example from many to which machine learning and classification could offer great insights and aid. Make sure to delete any rows with missing data (which will contain a "?" character in a feature cell).
By the end of the module, we'll have trained an artificial neural network model on the features presented in the dataset that is very accurate at diagnosing the condition of the tumor based on these features. We'll also see how we can make interesting inferences from the model that could be helpful for the physicians in diagnosing cancer.
Since scikit-learn's newest stable version does not support neural networks / multi-layer perceptrons, we will be using the scikit-neuralnetwork third-party implementation. To install scikit-neuralnetwork, please consult the Installation section of the documentation.
First, we will import all our dependencies. Make sure to install all of these separately:
End of explanation
dataset = pd.read_csv('./datasets/breast-cancer-wisconson.csv') # import the CSV data into an array using the panda dependency
print dataset[:10]
Explanation: The Data
We'll start off by exploring our dataset to see what attributes we have and how the class of the tumor is represented
Before we proceed, ensure to include headers to the dataset provided by the University of Wisconsin. We will use the following headers:
ID,CT,UCS,UCSh,MA,SECS,BN,BC,NN,M,Class
Add this to the beginning line of your .csv file.
End of explanation
dataset = dataset[["CT", "UCS", "UCSh", "MA", "SECS", "BN", "BC", "NN", "M", "Class"]] # remove the ID attribute from the dataset
dataset.is_copy = False # this is just to hide a nasty warning!
dataset["Class"] = [0 if tclass == 2 else 1 for tclass in dataset["Class"]] # convert the Class attribute to 0/1: benign (2) -> 0, malignant (4) -> 1
Explanation: To understand the meaning of the abbreviations we can consult the dataset's website to find a description of each attribute in order. We are going to train on all features unlike the logistic regression example (where we just trained on three). This does mean that we will be unable to visualize the results, but will get a feel for how to work with high-dimensional data.
If you noticed the Class attribute at the end (which gives the class of the tumor), you'll find that it takes either 2 or 4, where 2 represents a benign tumor while 4 represents a malignant tumor. We'll change that to more expressive values and make benign tumors represented by 0 (false) and malignant tumors by 1s (true).
You'll notice the ID attribute, which is useless to our modelling, since it provides no information about the tumor itself and is instead just a way of identifying a specific tumor. We will hence strip this from our dataset before training.
End of explanation
X = np.array(dataset[["CT", "UCS", "UCSh", "MA", "SECS", "BN", "BC", "NN", "M"]]) # X is composed of all the n feature columns
y = np.array(dataset["Class"]) # y is composed of just the output class column
Explanation: We will now need to split up the dataset into two separate tensors: X and y. X will contain the features and their values for each training example, and y will contain all the outputs. In this training set, let's say m refers to the number of training examples (of which there are just over 600) and n refers to the number of features (of which there are 9). Thus, X will be a matrix where $X\in \mathbb{R}^{m\:\cdot \:n}$ and y is a vector where $y\in \mathbb{R}^m$ (because we only have one output - a probability of the tumor being malignant).
We simply separate by the "Class" attribute and the other features.
End of explanation
X, y = shuffle(X, y, random_state=0) # we use scikit-learn's synchronized shuffle feature to shuffle two arrays in unison
dataset_size = len(dataset) # get the size of overall dataset
training_size = np.floor(dataset_size * 0.7).astype(int) # get the training size as 70% of the dataset size (or roughly 0.7 * dataset_size) and as an integer
X_train = X[:training_size] # extract the first 70% of inputs for training
y_train = y[:training_size] # extract the first 70% of inputs for training
X_test = X[training_size:] # extract rest 30% of inputs for testing
y_test = y[training_size:] # extract rest 30% of outputs for testing
Explanation: Training the Neural Network Model
For the training, we are going to split the dataset into a training set and a test set. The training set will be 70% of the original data set, and will be what the neural network will learn from. We will test the accuracy of the neural network's learned weights by using the test set, which is composed of 30% of the original data.
It's very important to shuffle the dataset before partitioning into training/test sets. Why? Because the data given to us by University of Wisconsin may be in some sorted (apparent or unapparent) order. It may be that most y=0 examples are in the latter half of the data. It may be that most of the nearby recorded patients are similar, or were recorded during similar dates/times. We want none of that - we want to remove any information and ensure that our partitioning is random, so that our test results represent true probabilites of picking a random training case from an entire population of permutations of the feature vector. For example, I once got a 5% difference in test results since the data I had was sorted beforehand. Essentially, we want to make sure that we're being absolutely fair about the partition, and not accidentally making our test results too good/too bad.
We can't use numpy's traditional shuffle function because this shuffles one array only. If we independently shuffled x and y, the order between them would be lost (ie. if training case x had output of 1 beforehand, this may be accidentally changed to an output of 0, since we use corresponding indicdes to match up the inputs and outputs).
End of explanation
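# As an aside (not part of the original notebook), the same synchronized shuffle can be done by
# hand with a single NumPy permutation, which keeps rows of X and entries of y aligned:
permutation = np.random.permutation(len(X))    # one shared random ordering
X_alt, y_alt = X[permutation], y[permutation]  # illustration only; the code above already shuffled X and y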
nn = Classifier(layers=[ # create a new Classifier object (neural network classifier), and pass all the layers as parameters
Layer("Rectifier", units=100), # create the first post-input hidden layer, a Rectifier (ReLU) layer of 100 units
Layer("Rectifier", units=50), # create the second hidden layer, a Rectifier layer of 50 units
Layer("Softmax"), units=2], # create the final output layer, a Softmax layer that will output two probabilities, as mentioned before
learning_rate=0.001, n_iter=100) # pass in hyperparameters as a separate parameter to the layers
Explanation: scikit-neuralnetwork offers a really neat, easy to use API (from sknn.mlp import Classifier, Layer) for training neural networks. This API has support for many different paradigms like dropout, momentum, weight decay, mini-batch gradient descent etc. and even different neural network types like Convolutional Neural Networks. Today, however, our goal is to get a simple Artificial Neural Network setup!
Our first job is to configure the architecture for the neural network. We will need to decide:
* The number of hidden layers
* The size of each hidden layer
* The activation function used at each hidden layer
* The learning rate, number of iterations, and other hyperparameters
Some types of activation functions offered by scikit-neuralnetwork include:
* Linear
* Rectifier
* Sigmoid
* Softmax
Where Softmax computes a sigmoid probability distribution over multiple outputs. Generally, it is conventional to use Softmax as the activation function for the output layer (when we have 1 output it really is just the same as a Sigmoid layer, but scikit-neuralnetwork will still output a warning). Recall the formula for the sigmoid activation function:
$Sigmoid\left(z\right)=\frac{1}{1+e^{-z}}$
This function "squeezes" any real value into a probability of range (0, 1).
The Linear and Rectifier (ReLU) may be used, but we obviously need to use Sigmoid/Softmax in our neural network because we are performing a classification task. Generally, I found that using Rectifier units throughout and then having a Softmax (Sigmoid) layer at the end produced the best results.
Now we need to decide on our structure (as in, the hidden layers and their sizes). Our neural network will end up looking like the following:
<img src="./../images/neuralnet.png">
That is, for this example, we will use two hidden layers (both of which are Rectifiers). Generally, the greater number of hidden layers you have, the greater complexity. Two should be fine for our task though. In our neural network, we will have 9 input nodes (for our 9 features), and I have chosen 100 neurons for the first hidden layer as well as 50 for the second. The Softmax layer will just have one output because we only want one output (probability of malignancy). You can play around with these numbers, bar the Softmax output layer :) We really need not that many hidden units in each hidden layer, but I want to demonstrate the scale we can create with this API.
NOTE: In actuality, our Softmax layer for this coding example will have two outputs. For classifiers, whether it's binary classification or multi-class, scikit-neuralnetwork uses a one-hot-encoded representation of the labels with cross-entropy loss. This requires the output layer (the softmax layer) to be a probability distribution over all labels, hence the number of units in the softmax layer needs to be the number of labels. In binary classification, we have 2 labels: 0 and 1, so the expected behavior is for the softmax layer to have 2 units.
Finally, we need to choose our hyperparameters. A learning rate of 0.001 and number of iterations/epochs of 100 should suffice. Our code will look like the following:
End of explanation
nn.fit(X_train, y_train) # begin training using backpropagation!
Explanation: Our neural network has been configured and built! Now, we just need to train it using our training set, using the intuitively named function "fit":
End of explanation
print nn.get_parameters() # output the weights of the neural network
Explanation: The output in my console due to the DEBUG logging looked like this (it's fun to see training in action!):
<img src="./../images/train_img.png">
NOTE: We do not need to prefix 1s to the dataset beforehand to achieve bias terms that vertically shift the decision boundary regions. The API provides this by default.
Results
We can see the weights that were produced (including the bias terms) using the following line of code:
End of explanation
error_count = 0.0 # initialize the error counter
prob_predictions = nn.predict_proba(X_test) # predict the outputs of the X_test instances, in the form of probabilities (hence predict_proba)
for i in xrange(len(prob_predictions)): # iterate through all the predictions (equal in size to test set)
# create a discrete decision for the tumor being malignant or benign, using 0.5 as the lowest probability needed for a predicted malignancy (general rounding)
# as discussed before, our network actually outputs [probability_of_benign, probability_of_malignant], so we will want to
# fetch the probability_of_malignant value and round this one (that's how it would be for a single output network if it worked!)
discrete_prediction = 0 if prob_predictions[i][1] < 0.5 else 1
if not y_test[i] == discrete_prediction: # if the actual, correct value for this test tumor does not equal the discrete prediction
error_count += 1.0 # increment the number of errors
error_rate = error_count / len(prob_predictions) * 100 # get the percentage of errors by dividing total errors by number of instances, multiplying by 100
print error_count # print number of raw errors
print str(error_rate) + "%" # output this error percentage
Explanation: Unlike logistic regression, the weights of neural networks, unfortunately, are not very interpretable. There are also a high number of these weights. For example, the weights my system produced were outputted as so:
<img src="./../images/weights.png">
However, it is more important to output the accuracy of these results, and how much error is made. Unlike logistic regression where we used a cost function that outputted a real number, we are going to find the percent error of our system like so:
Create predictions in the form of probabilities for each test tumor being malignant
Iterate through the test set with index i
Fetch the predicted output probability of this test tumor
Round this probability to either a 1 (malignant) or a 0 (benign)
Compare this to the correct test output at the ith index
If an error occurs, increment some pre-initialized error counter
Get the percentage error by dividing the error counter by the total number of test examples, multiplying by 100
End of explanation |
7,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing target_vocab_to_int['<EOS>'].
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
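A minimal sketch of such a check is shown below; the exact minimum version (1.1) is an assumption, chosen because the contrib.seq2seq API used later was introduced around then.
from distutils.version import LooseVersion
import tensorflow as tf
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
gpu_name = tf.test.gpu_device_name()
print('Default GPU Device: {}'.format(gpu_name) if gpu_name else 'No GPU found. Training will be slow without one.')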
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
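A minimal sketch of this step, assuming TensorFlow 1.x and a '<GO>' token in target_vocab_to_int (the signature is the conventional one and may differ from the project's template):
import tensorflow as tf
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    # Drop the last word id of every row, then prepend the <GO> id to each row
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    go_column = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])
    return tf.concat([go_column, ending], 1)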
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer
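One possible sketch of such a layer, assuming the TensorFlow 1.x contrib APIs; the argument list here is an assumption and may not match the project's template exactly:
import tensorflow as tf
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_vocab_size, encoding_embedding_size):
    # Embed the source word ids, then run a stacked LSTM over the embedded sequence
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    def make_cell():
        cell = tf.contrib.rnn.LSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
    stacked_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
    enc_output, enc_state = tf.nn.dynamic_rnn(stacked_cell, embed, dtype=tf.float32)
    return enc_output, enc_state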
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
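A possible sketch for both decoders, using the argument lists quoted in Step30 below and the tf.contrib.seq2seq helpers (TensorFlow 1.x contrib is assumed):
import tensorflow as tf
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
                         max_target_sequence_length, output_layer, keep_prob):
    # Feed the ground-truth target embeddings at every step during training
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    return tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
                                             maximum_iterations=max_target_sequence_length)[0]
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length, vocab_size,
                         output_layer, batch_size, keep_prob):
    # At inference time, feed back the embedding of the previously predicted word (greedy decoding)
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    return tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
                                             maximum_iterations=max_target_sequence_length)[0]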
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
unique_eng_words = set([word for word in source_text.split()])
print(unique_eng_words)
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]
for sentence in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()]
+ [target_vocab_to_int['<EOS>']]
for sentence in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
    input_ = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    target_sequence_length = tf.placeholder(tf.int32, [None], name="target_sequence_length")
    max_target_len = tf.reduce_max(target_sequence_length, name="max_target_len")
    source_sequence_length = tf.placeholder(tf.int32, [None], name="source_sequence_length")
    return input_, targets, lr, keep_prob, target_sequence_length, max_target_len, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
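A toy run of the same slicing-and-concat idea used above, assuming `<GO>` has id 1 (all other ids here are made up for illustration):

import tensorflow as tf

# Each row loses its last id and gains a leading <GO> (= 1 in this toy example).
toy_targets = tf.constant([[10, 11, 12, 2],
                           [20, 21, 2, 0]])
ending = tf.strided_slice(toy_targets, [0, 0], [2, -1], [1, 1])
toy_dec_input = tf.concat([tf.fill([2, 1], 1), ending], 1)
with tf.Session() as sess:
    print(sess.run(toy_dec_input))  # [[ 1 10 11 12] [ 1 20 21  2]]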
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# Encoder
    # Embed the encoder input, then run it through a stacked LSTM with dropout
    enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
             for _ in range(num_layers)]
    enc_cell = tf.contrib.rnn.MultiRNNCell(cells)
    enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input,
                                              sequence_length=source_sequence_length, dtype=tf.float32)
    return enc_output, enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
    # Training decoder: feed the ground-truth target embeddings at every step
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    train_output = tf.contrib.seq2seq.dynamic_decode(decoder,
                                                     impute_finished=True,
                                                     maximum_iterations=max_summary_length)[0]
    return train_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
    # Inference decoder: feed back the previously predicted word at every step
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    infer_output = tf.contrib.seq2seq.dynamic_decode(decoder,
                                                     impute_finished=True,
                                                     maximum_iterations=max_target_sequence_length)[0]
    return infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    # Embed the target sequences
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    # Decoder cell and an output layer that projects onto the target vocabulary
    dec_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)])
    output_layer = Dense(target_vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    # Share decoder variables between the training and inference decoders
    with tf.variable_scope('decode'):
        train_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                                            target_sequence_length, max_target_sequence_length,
                                            output_layer, keep_prob)
    with tf.variable_scope('decode', reuse=True):
        infer_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
                                            max_target_sequence_length, target_vocab_size,
                                            output_layer, batch_size, keep_prob)
    return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
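For illustration, a tiny standalone example of that sharing pattern (the scope name "decode" and variable name "w" are arbitrary; this is not part of the model):

import tensorflow as tf

# Toy demonstration of variable sharing across two scopes with the same name.
with tf.variable_scope("decode"):
    w1 = tf.get_variable("w", shape=[2, 2])
with tf.variable_scope("decode", reuse=True):
    w2 = tf.get_variable("w", shape=[2, 2])
print(w1 is w2)  # True: the second scope reuses the variable created in the first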
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    # Encode the input sequences (embedding happens inside encoding_layer)
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size, enc_embedding_size)
    # Prepend <GO> and drop the final id of each target sequence
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    # Decode the encoded input
    train_output, infer_output = decoding_layer(dec_input, enc_state,
                                                target_sequence_length, max_target_sentence_length,
                                                rnn_size, num_layers, target_vocab_to_int,
                                                target_vocab_size, batch_size, keep_prob,
                                                dec_embedding_size)
    return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
# Display training stats every `display_step` batches (used by the training loop below)
display_step = 10
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence_lower = sentence.lower()
word_ids = []
for word in sentence_lower.split():
word_id = vocab_to_int.get(word, vocab_to_int['<UNK>'])
word_ids.append(word_id)
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
# translate_sentence = "New Jersey is usually chilly during july , and it is usually freezing in november"
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
7,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Files
2. Create a file in the current working directory called contacts.txt by running the cell below
Step2: 3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end.
Step3: Working with PDF Files
4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2.
Step4: 5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt.
CHALLENGE
Step5: Regular Expressions
6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf. | Python Code:
abbr = 'NLP'
full_text = 'Natural Language Processing'
# Enter your code here:
print(f'{abbr} stands for {full_text}')
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Python Text Basics Assessment - Solutions
Welcome to your assessment! Complete the tasks described in bold below by typing the relevant code in the cells.
f-Strings
1. Print an f-string that displays NLP stands for Natural Language Processing using the variables provided.
End of explanation
%%writefile contacts.txt
First_Name Last_Name, Title, Extension, Email
Explanation: Files
2. Create a file in the current working directory called contacts.txt by running the cell below:
End of explanation
# Write your code here:
with open('contacts.txt') as c:
fields = c.read()
# Run fields to see the contents of contacts.txt:
fields
Explanation: 3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end.
End of explanation
# Perform import
import PyPDF2
# Open the file as a binary object
f = open('Business_Proposal.pdf','rb')
# Use PyPDF2 to read the text of the file
pdf_reader = PyPDF2.PdfFileReader(f)
# Get the text from page 2 (CHALLENGE: Do this in one step!)
page_two_text = pdf_reader.getPage(1).extractText()
# Close the file
f.close()
# Print the contents of page_two_text
print(page_two_text)
Explanation: Working with PDF Files
4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2.
End of explanation
# Simple Solution:
with open('contacts.txt','a+') as c:
c.write(page_two_text)
c.seek(0)
print(c.read())
# CHALLENGE Solution (re-run the %%writefile cell above to obtain an unmodified contacts.txt file):
with open('contacts.txt','a+') as c:
c.write(page_two_text[8:])
c.seek(0)
print(c.read())
Explanation: 5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt.
CHALLENGE: See if you can remove the word "AUTHORS:"
End of explanation
import re
# Enter your regex pattern here. This may take several tries!
pattern = r'\w+@\w+\.\w{3}'
re.findall(pattern, page_two_text)
Explanation: Regular Expressions
6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf.
End of explanation |
7,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Genetic Algorithm Workshop
In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adopted for other data types as well.
You can find an example illustrating GA below
Step11: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions
Step12: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population.
Step13: Crossover
We perform a single point crossover between two points
Step14: Mutation
Randomly change a decision such that
Step16: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows
Step17: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
Let's plot the initial population with respect to the final frontier.
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "<unity-id>"
class O:
Basic Class which
- Helps dynamic updates
- Pretty Prints
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] != "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
Explanation: Genetic Algorithm Workshop
In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adopted for other data types as well.
You can find an example illustrating GA below
End of explanation
# Few Utility functions
def say(*lst):
Print whithout going to new line
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
Generate a random number between low and high.
decimals incidicate number of decimal places
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
Shuffle a list
random.shuffle(lst)
return lst
class Decision(O):
Class indicating Decision of a problem
def __init__(self, name, low, high):
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
Class indicating Objective of a problem
def __init__(self, name, do_minimize=True):
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
Represents a member of the population
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
Class representing the cone problem.
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = [Decision('r', 0, 10), Decision('h', 0, 20)]
self.objectives = [Objective('S'), Objective('T')]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
# TODO 3: Evaluate the objectives S and T for the point.
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
return True
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
return None
cone = Problem()
point = cone.generate_one()
cone.evaluate(point)
print(point)
Explanation: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions: r in [0, 10] cm; h in [0, 20] cm
Objectives: minimize S, T
Constraints: V > 200cm<sup>3</sup>
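For reference, a hedged sketch of the standard cone geometry the TODOs above need; `cone_metrics` is just an illustrative helper, not part of the required API:

from math import pi, sqrt

def cone_metrics(r, h):
    l = sqrt(r ** 2 + h ** 2)      # slant height
    S = pi * r * l                 # curved (lateral) surface area
    T = S + pi * r ** 2            # total surface area = curved + base
    V = pi * r ** 2 * h / 3        # volume, for the V > 200 cm^3 constraint
    return S, T, V

print(cone_metrics(5.0, 10.0))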
End of explanation
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
return population
# or if ur python OBSESSED
# return [problem.generate_one() for _ in xrange(size)]
print(populate(cone, 5))
Explanation: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population.
End of explanation
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
return None
pop = populate(cone,5)
crossover(pop[0], pop[1])
Explanation: Crossover
We perform a single point crossover between two points
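A minimal midpoint version of that idea, following the TODO comment above (sketch only, assuming the `Point` class defined earlier):

# Sketch: first half of mom's decisions, second half of dad's.
def crossover_example(mom, dad):
    n = len(mom.decisions)
    return Point(mom.decisions[:n // 2] + dad.decisions[n // 2:])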
End of explanation
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
return None
Explanation: Mutation
Randomly change each decision such that, with probability equal to the mutation rate, the decision is reset to a random value between its minimum and maximum.
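A sketch of that rule, reusing random_value from the utility functions above (illustrative, not the required solution):

# Sketch: each decision is resampled with probability `mutation_rate`.
def mutate_example(problem, point, mutation_rate=0.01):
    for i, d in enumerate(problem.decisions):
        if random.random() < mutation_rate:
            point.decisions[i] = random_value(d.low, d.high)
    return point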
End of explanation
def bdom(problem, one, two):
Return if one dominates two
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
return dominates
Explanation: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows:
* Consider two points one and two.
* For every objective o and t in one and two, o <= t
At least one objective o and t in one and two, o < t
Note: Binary Domination is not the best method to evaluate fitness but due to its simplicity we choose to use it for this workshop.
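One possible reading of that definition in code, assuming both objectives are minimized as in the cone problem (sketch only):

# Sketch: True when objs_one is no worse everywhere and strictly better somewhere.
def bdom_example(objs_one, objs_two):
    strictly_better = False
    for o, t in zip(objs_one, objs_two):
        if o > t:
            return False
        if o < t:
            strictly_better = True
    return strictly_better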
End of explanation
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
return population[:retain_size]
Explanation: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism: Sort points with respect to the fitness and select the top points.
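A possible pairing of the two ideas, assuming the bdom function defined above (sketch only, not the required solution):

# Sketch: fitness = how many population members a point dominates;
# elitism = keep the `retain_size` points with the highest fitness.
def fitness_example(problem, population, point):
    return sum(1 for other in population if bdom(problem, point, other))

def elitism_example(problem, population, retain_size):
    ranked = sorted(population, key=lambda p: fitness_example(problem, population, p), reverse=True)
    return ranked[:retain_size]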
End of explanation
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
Explanation: Putting it all together and making the GA
End of explanation
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
Explanation: Visualize
Let's plot the initial population with respect to the final frontier.
End of explanation |
7,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
Step1: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{4\times 6} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
Step2: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
Step3: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
Step4: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
Step5: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 열 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{1 \times M} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
Step6: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
Step7: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
Step8: 그럼 이러한 행렬의 곱셈은 데이터 분석에서 어떤 경우에 사용될까. 몇가지 예를 살펴본다.
가중 벡터합
어떤 데이터 레코드 즉, 벡터의 가중합은 $w^Tx$ 또는 $x^Tw$로 표시할 수 있다는 것을 배웠다. 그런데 만약 이렇게 $w$ 가중치를 사용한 가중합을 하나의 벡터 $x$가 아니라 여러개의 벡터 $x_1, \cdots, x_M$개에 대해서 모두 계산해야 한다면 이 계산을 다음과 같이 $Xw$라는 기호로 간단하게 표시할 수 있다.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_N^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
잔차
선형 회귀 분석(linear regression)을 한 결과는 가중치 벡터 $w$라는 형태로 나타나고 예측치는 이 가중치 벡터를 사용한 독립 변수 데이터 레코드 즉, 벡터 $x_i$의 가중합 $w^Tx_i$이 된다. 이 예측치와 실제 값 $y_i$의 차이를 오차(error) 혹은 잔차(residual) $e_i$ 이라고 한다. 이러한 잔차 값을 모든 독립 변수 벡터에 대해 구하면 잔차 벡터 $e$가 된다.
$$ e_i = y_i - w^Tx_i $$
잔차 벡터는 다음과 같이 $y-Xw$로 간단하게 표기할 수 있다.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
Step9: 잔차 제곱합
잔차의 크기는 잔차 벡터의 각 원소를 제곱한 후 더한 잔차 제곱합(RSS
Step10: 이차 형식
벡터의 이차 형식(Quadratic Form) 이란 어떤 벡터의 각 원소에 대해 가능한 모든 쌍의 조합 $(x_i, x_j)$을 구한 다음 그 곱셈$x_ix_j$을 더한 것을 말한다. 이 때 각 쌍에 대해 서로 다른 가중치 $a_{i,j}$를 적용하여 $a_{i,j}x_ix_j$의 합을 구한다면 다음과 같이 $x^TAx$라는 간단한 식으로 쓸 수 있다.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
예를 들어 $ x = [1, 2, 3]^T $ 이고 A가 다음과 같다면
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
NumPy 에서 벡터의 이차 형식은 다음과 같이 계산한다. | Python Code:
x = np.array([1, 2, 3, 4])
x
x = np.array([[1], [2], [3], [4]])
x
Explanation: NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
End of explanation
X = np.array([[11,12,13],[21,22,23]])
X
Explanation: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{4\times 6} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
End of explanation
np.diag([1, 2, 3])
Explanation: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
End of explanation
np.identity(3)
np.eye(4)
Explanation: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
End of explanation
X = np.array([[11,12,13],[21,22,23]])
X
X.T
Explanation: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
End of explanation
x = np.array([10, 11, 12, 13, 14])
x
y = np.array([0, 1, 2, 3, 4])
y
x + y
x - y
Explanation: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 열 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{1 \times M} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
np.dot(x, y)
x = np.array([[1], [2], [3]])
y = np.array([[4], [5], [6]])
np.dot(x.T, y)
Explanation: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
End of explanation
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2], [3, 4], [5, 6]])
C = np.dot(A, B)
A
B
C
Explanation: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
End of explanation
from sklearn.datasets import make_regression
X, y = make_regression(4,3)
X
y
w = np.linalg.lstsq(X, y)[0]
w
e = y - np.dot(X, w)
e
Explanation: So where is this kind of matrix multiplication used in data analysis? Let us look at a few examples.
Weighted vector sum
We saw that the weighted sum of a single data record, i.e. a vector, can be written as $w^Tx$ or $x^Tw$. If the same weighted sum with weights $w$ has to be computed not for one vector $x$ but for several vectors $x_1, \cdots, x_M$, the whole computation can be written compactly as $Xw$, as follows.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_N^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
Residual
The result of a linear regression is a weight vector $w$, and the prediction is the weighted sum $w^Tx_i$ of an independent-variable data record, i.e. the vector $x_i$, using this weight vector. The difference between this prediction and the actual value $y_i$ is called the error or residual $e_i$. Collecting the residuals for all independent-variable vectors gives the residual vector $e$.
$$ e_i = y_i - w^Tx_i $$
The residual vector can be written compactly as $y-Xw$, as follows.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
End of explanation
np.dot(e.T,e)
Explanation: Residual sum of squares
The size of the residuals is measured with the residual sum of squares (RSS), obtained by squaring each element of the residual vector and adding them up. This value can be written simply as $e^Te$ and is computed as follows.
$$
e^Te = \sum_{i=1}^{N} (y_i - w^Tx_i)^2 = (y - Xw)^T (y - Xw)
$$
End of explanation
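As a quick sanity check of the identity above, both ways of writing the residual sum of squares should give the same number; this reuses the X, y, w and e computed in the regression example earlier:

```python
# RSS from the residual vector vs. RSS written directly in terms of y, X and w
rss_from_e = np.dot(e.T, e)
rss_direct = np.dot((y - np.dot(X, w)).T, y - np.dot(X, w))
np.allclose(rss_from_e, rss_direct)   # expected: True (up to floating-point error)
```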
x = np.array([1,2,3])
x
A = np.arange(1, 10).reshape(3,3)
A
np.dot(np.dot(x, A), x)
Explanation: Quadratic form
The quadratic form of a vector is obtained by taking all possible pairs $(x_i, x_j)$ of its elements and summing the products $x_ix_j$. If a different weight $a_{i,j}$ is applied to each pair and the sum of the terms $a_{i,j}x_ix_j$ is taken, the result can be written as the simple expression $x^TAx$, as follows.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
For example, if $ x = [1, 2, 3]^T $ and A is given by
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
In NumPy, the quadratic form of a vector is computed as follows.
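To confirm that the one-line NumPy expression matches the double sum in the definition above, here is a small brute-force check with explicit loops, reusing the x and A from the code cell above:

```python
# explicit double sum over all index pairs (i, j)
total = 0
for i in range(len(x)):
    for j in range(len(x)):
        total += A[i, j] * x[i] * x[j]
total   # equals np.dot(np.dot(x, A), x), i.e. 228 for this x and A
```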
End of explanation |
7,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook only contains executable code cells for the examples mentioned in
https://flax.readthedocs.io/en/latest/howtos/full_eval.html
Step1: The problem
Step2: The solution
Step3: Using pad_shard_unpad()
Step4: Computing metrics in eval_step
Step6: Multi-host complications | Python Code:
!pip install -q chex einops
# tfds.split_for_jax_process() was added in 4.5.1
!pip install -q tensorflow_datasets -U
# flax.jax_utils.pad_shard_unpad() is only available at HEAD
!pip install -q git+https://github.com/google/flax
import collections
import chex
import einops
import jax
import jax.numpy as jnp
import flax
import flax.linen as nn
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
chex.set_n_cpu_devices(8)
per_device_batch_size = 512
dataset_name = 'mnist'
class FakeModel(nn.Module):
num_classes: int
@nn.compact
def __call__(self, x):
return jax.nn.one_hot(jnp.zeros([len(x)], jnp.int32), self.num_classes)
model = FakeModel(num_classes=10)
variables = {}
inputs = jnp.zeros([2, 28, 28, 1])
model.apply(variables, inputs)
Explanation: This notebook only contains executable code cells for the examples mentioned in
https://flax.readthedocs.io/en/latest/howtos/full_eval.html
Please refer to the above link for an explanation of the problem and the proposed solutions.
setup
End of explanation
# last batch has different shape
collections.Counter(
tuple(batch['image'].shape)
for batch in tfds.load('mnist', split='test').batch(per_device_batch_size)
)
# need to drop remainder when using multiple batch levels in a dataparallel
# setup
sum(
np.prod(batch['label'].shape)
for batch in tfds.load('mnist', split='test')
.batch(per_device_batch_size, drop_remainder=True)
.batch(jax.local_device_count())
)
# having different number of examples for different hosts will result in SPMD
# violation when all examples are to be processed
process_count = 6
[
len(tfds.load(dataset_name, split=tfds.split_for_jax_process(
'test', process_index=process_index, process_count=process_count)))
for process_index in range(process_count)
]
# baseline: simple batching, keep remainder
# => leads to recompilation & only works on single device
@jax.jit
def get_preds(variables, inputs):
print('retrigger compilation', inputs.shape)
return model.apply(variables, inputs)
ds = tfds.load(dataset_name, split='test')
ds = ds.batch(per_device_batch_size, drop_remainder=False)
correct = total = 0
for batch in ds.as_numpy_iterator():
preds = get_preds(variables, batch['image'])
total += len(batch['label'])
correct += (batch['label'] == preds.argmax(axis=1)).sum()
correct = correct.item()
correct, total, correct / total
# when the remainder is dropped, we can use multiple devices and avoid
# recompilations
# => but results are incorrect
@jax.pmap
def get_preds(variables, inputs):
print('retrigger compilation', inputs.shape)
return model.apply(variables, inputs)
ds = tfds.load(dataset_name, split=tfds.split_for_jax_process('test'))
# This `drop_remainder=True` is required so we can do a second batch level.
ds = ds.batch(per_device_batch_size, drop_remainder=True)
# This `drop_remainder=True` is required so we can avoid a recompilation.
ds = ds.batch(jax.local_device_count(), drop_remainder=True)
correct = total = 0
for batch in ds.as_numpy_iterator():
preds = get_preds(variables, batch['image'])
total += len(batch['label'].flatten())
correct += (batch['label'] == preds.argmax(axis=-1)).sum()
correct = correct.item()
correct, total, correct / total
Explanation: The problem
End of explanation
# manually padding
# => precise & allows for data parallelism
@jax.pmap
def get_preds(variables, inputs):
print('retrigger compilation', inputs.shape)
return model.apply(variables, inputs)
ds = tfds.load(dataset_name, split=tfds.split_for_jax_process('test'))
per_host_batch_size = per_device_batch_size * jax.local_device_count()
ds = ds.batch(per_host_batch_size, drop_remainder=False)
shard = lambda x: einops.rearrange(
x, '(d b) ... -> d b ...', d=jax.local_device_count())
unshard = lambda x: einops.rearrange(x, 'd b ... -> (d b) ...')
correct = total = 0
for batch in ds.as_numpy_iterator():
images = batch['image']
n = len(images)
padding = np.zeros([per_host_batch_size - n, *images.shape[1:]], images.dtype)
padded_images = np.concatenate([images, padding])
preds = unshard(get_preds(variables, shard(padded_images)))[:n]
total += n
correct += (batch['label'] == preds.argmax(axis=-1)).sum()
correct = correct.item()
correct, total, correct / total
Explanation: The solution: padding
Manual implementation
End of explanation
# same as before, but using the flax.jax_utils.pad_shard_unpad() wrapper
# instead of padding manually
# => precise & allows for data parallelism
@jax.pmap
def get_preds(variables, inputs):
print('retrigger compilation', inputs.shape)
return model.apply(variables, inputs)
ds = tfds.load(dataset_name, split=tfds.split_for_jax_process('test'))
per_host_batch_size = per_device_batch_size * jax.local_device_count()
ds = ds.batch(per_host_batch_size, drop_remainder=False)
correct = total = 0
for batch in ds.as_numpy_iterator():
preds = flax.jax_utils.pad_shard_unpad(get_preds)(
variables, batch['image'], min_device_batch=per_device_batch_size)
total += len(batch['image'])
correct += (batch['label'] == preds.argmax(axis=-1)).sum()
correct = correct.item()
correct, total, correct / total
Explanation: Using pad_shard_unpad()
End of explanation
# moving the metrics computation into `eval_step()` and using `static_return`
# this pattern is often used with more complicated `clu.metrics`
def eval_step(metrics, variables, batch):
print('retrigger compilation', {k: v.shape for k, v in batch.items()})
preds = model.apply(variables, batch['image'])
correct = (batch['mask'] & (batch['label'] == preds.argmax(axis=-1))).sum()
total = batch['mask'].sum()
return dict(
correct=metrics['correct'] + jax.lax.psum(correct, axis_name='batch'),
total=metrics['total'] + jax.lax.psum(total, axis_name='batch'),
)
eval_step = jax.pmap(eval_step, axis_name='batch')
eval_step = flax.jax_utils.pad_shard_unpad(
eval_step, static_argnums=(0, 1), static_return=True)
ds = tfds.load(dataset_name, split=tfds.split_for_jax_process('test'))
per_host_batch_size = per_device_batch_size * jax.local_device_count()
ds = ds.batch(per_host_batch_size, drop_remainder=False)
metrics = flax.jax_utils.replicate(dict(
correct=jnp.array(0, jnp.int32),
total=jnp.array(0, jnp.int32),)
)
for batch in ds.as_numpy_iterator():
batch['mask'] = np.ones_like(batch['label'])
metrics = eval_step(
metrics, variables, batch,
min_device_batch=per_device_batch_size)
correct, total = metrics['correct'][0].item(), metrics['total'][0].item()
correct, total, correct / total
Explanation: Computing metrics in eval_step
End of explanation
# infinite zero padding
def with_infinite_padding(dataset):
Adds "infinite padding" to the dataset.
filler_element = tf.nest.map_structure(
lambda spec: tf.zeros(spec.shape, spec.dtype)[None], dataset.element_spec)
filler_element['mask'] = [False]
filler_dataset = tf.data.Dataset.from_tensor_slices(filler_element)
dataset = dataset.map(
lambda features: dict(mask=True, **features),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
return dataset.concatenate(filler_dataset.repeat(None))
@jax.pmap
def get_preds(variables, inputs):
print('retrigger compilation', inputs.shape)
return model.apply(variables, inputs)
count_p = jax.pmap(
lambda mask: jax.lax.psum(mask.sum(), axis_name='batch'),
axis_name='batch',
)
count_correct_p = jax.pmap(
lambda labels, preds, mask:
jax.lax.psum((mask & (labels == preds)).sum(), axis_name='batch'),
axis_name='batch',
)
ds = tfds.load(dataset_name, split=tfds.split_for_jax_process('test'))
ds = with_infinite_padding(ds).batch(per_device_batch_size).batch(jax.local_device_count())
correct = total = 0
for batch in ds.as_numpy_iterator():
n = count_p(batch['mask'])[0].item() # adds sync barrier
if not n: break
preds = get_preds(variables, batch['image']).argmax(axis=-1)
total += n
correct += count_correct_p(batch['label'], preds, batch['mask'])[0]
correct = correct.item()
correct, total, correct / total
Explanation: Multi-host complications
End of explanation |
7,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of classification results
Objective
Step1: Load original model
Step2: Load sample classification results
The implemented classification method does not return a single best-fit model, but an ensemble of probable model (as it is an MCMC sampling from the posterior). As a first test, we will therefore import single models first and check the misclassification rate defined as
Step3: Results of the classification do not necessarily contain the same ids as the units in the initial model. This seems to be the case here, as well. Re-sort
Step4: Now remap results and compare again
Step6: Combined analysis in a single function
Note
Step7: Determine validity of uncertainty estimate
In addition to single model realisations, an estimate of model uncertainty is calculated (this is, actually, also one of the main "selling points" of the paper). So, we will now check if the correct model is actually in the range of the estimated model uncertainty bounds (i.e. whether all voxel values from the original model have a non-zero probability in the estimated model).
Step8: We now need to perform the remapping similar to before, but now for the probability fields
Step9: Determination of misclassification statistics
Next step
Step10: First, we again need to check the assignment of the units/ class ids
Step11: We can now determine the misclassification for all results
Step12: It seems to be the case that the upper thin layer vanishes after approximately 30-40 iterations. From then on, the misclassification rate is approximately constant at around 9.5 percent (which is still quite acceptable!).
Let's compare this now to classifications with another (lower) beta value (which should put more weight to the data?)
Step13: Determine validity of estimated probability | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import numpy as np
%matplotlib inline
Explanation: Analysis of classification results
Objective: read back in the classification results and compare to original model
End of explanation
import pynoddy.output
reload(pynoddy.output)
output_name = "feature_out"
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x',
colorbar = True, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
Explanation: Load original model:
End of explanation
# f_set1 = open("../../sandbox/jack/features_lowres-5 with class ID.csv").readlines()
f_set1 = open(r"/Users/flow/Documents/01_work/01_own_docs/02_paper_drafts/jack/classification_result_100Iter.csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# iterate through results and append
for f in f_set1:
fl = f.rstrip().split(",")
cf1[int(fl[0]),int(fl[1]),int(fl[2])] = int(fl[-1])
nout.plot_section('x', data = cf1,
colorbar = True, title="", layer_labels = range(5),
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1,ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
Explanation: Load sample classification results
The implemented classification method does not return a single best-fit model, but an ensemble of probable model (as it is an MCMC sampling from the posterior). As a first test, we will therefore import single models first and check the misclassification rate defined as:
$$\mbox{MCR} = \frac{\mbox{Number of misclassified voxels}}{\mbox{Total number of voxels}}$$
End of explanation
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(cf1[15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
print np.unique(nout.block)
print np.unique(cf1)
# define id mapping from cluster results to original:
# id_mapping = {2:1, 3:2, 4:5, 5:3, 1:4}
# remapping for result 4:
# id_mapping = {4:5, 3:4, 1:3, 5:2, 2:1}
# remapping for result 5:
id_mapping = {3:5, 1:4, 2:3, 4:2, 5:1}
Explanation: Results of the classification do not necessarily contain the same ids as the units in the initial model. This seems to be the case here, as well. Re-sort:
End of explanation
def re_map(id_val):
return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
cf1_remap = re_map_vect(cf1)
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1_remap, ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
feature_diff = (nout.block != cf1_remap)
nout.plot_section('x', data = feature_diff,
colorbar = False, title="Difference between real and matched model",
cmap = 'YlOrRd')
# Calculate the misclassification:
np.sum(feature_diff) / float(nout.n_total)
# Export misclassification to VTK:
misclass = feature_diff.astype('int')
nout.export_to_vtk(vtk_filename = "misclass", data=misclass)
Explanation: Now remap results and compare again:
Note: create a vectorised function to enable a direct re-mapping of the entire array while keeping the structure!
End of explanation
def calc_misclassification(nout, filename):
"""Calculate misclassification for classification results data stored in file
**Arguments**:
- *nout* = NoddyOutput: original model (Noddy object)
- *filename* = filename (with path): file with classification results
"""
f_set1 = open(filename).readlines()
# initialise classification results array
cf1 = np.empty_like(nout.block)
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
cf1[int(fl[0]),int(fl[1]),int(fl[2])] = int(fl[6])
# remap ids
cf1_remap = re_map_vect(cf1)
# determine differences in class ids:
feature_diff = (nout.block != cf1_remap)
# Calculate the misclassification:
misclass = np.sum(feature_diff) / float(nout.n_total)
return misclass
# filename = r"../../sandbox/jack/features_lowres-4 with class ID.csv"
# calc_misclassification(nout, filename)
Explanation: Combined analysis in a single function
Note: function assumes correct EOL character in data file (check/ adjust with vi: %s/\r/\r/g)
Problem: remapping is unfortunately not identical!
End of explanation
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-8 with Prob (weak Beta).csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(5):
probs[i2,i,j,k] = float(fl[i2+6])
Explanation: Determine validity of uncertainty estimate
In addition to single model realisations, an estimate of model uncertainty is calculated (this is, actually, also one of the main "selling points" of the paper). So, we will now check if the correct model is actually in the range of the estimated model uncertainty bounds (i.e.: if all voxel values from the original model actually have a non-zero probability in the estimated model)!
First step: load estimated class probabilities:
End of explanation
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im2)
im2 = ax2.imshow(probs[4,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {4:0, 5:1, 3:2, 1:3, 2:4}
# Check membership for each class in original model
for i in range(1,6):
tmp = np.ones_like(nout.block) * (nout.block==i)
# test if voxels have non-zero probability by checking conjunction with zero-prob voxels
prob_zero = probs[prob_mapping[i],:,:,:] == 0
misidentified = np.sum(tmp * prob_zero)
print i, misidentified
prob_zero = probs[prob_mapping[1],:,:,:] == 0
Explanation: We now need to perform the remapping similar to before, but now for the probability fields:
End of explanation
# f_set1 = open("../../sandbox/jack/features_lowres-7 with 151 realizations.csv").readlines()
f_set1 = open(r"/Users/flow/Documents/01_work/01_own_docs/02_paper_drafts/jack/classification_result_100Iter.csv").readlines()
# Initialise results array
all_results = np.empty((96, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(96):
try:
all_results[i2,i,j,k] = float(fl[i2+5])
except IndexError:
print i2, i, j, k
Explanation: Determination of misclassification statistics
Next step: use multiple results from one chain to determine misclassification statistics.
End of explanation
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[5,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# mapping from results to original:
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
def re_map(id_val):
return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[85,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
Explanation: First, we again need to check the assignment of the units/ class ids:
End of explanation
all_misclass = np.empty(90)
for i in range(90):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-7")
plt.xlabel("Model id")
plt.ylabel("MCR")
Explanation: We can now determine the misclassification for all results:
End of explanation
f_set1 = open("../../sandbox/jack/features_lowres-9 with 151 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((151, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(151):
try:
all_results[i2,i,j,k] = float(fl[i2+6])
except IndexError:
print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[115,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(150)
for i in range(150):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-9")
plt.xlabel("Model id")
plt.ylabel("MCR")
f_set1 = open("../../sandbox/jack/features_lowres-10 with 2000 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((2000, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(2000):
try:
all_results[i2,i,j,k] = float(fl[i2+6])
except IndexError:
print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
# id_mapping = {3:5, 4:4, 2:3, 1:2, 5:1, 0:0}
id_mapping = {3:5, 1:4, 2:3, 4:2, 5:1}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[2:,:,:,:])
np.unique(all_results[0,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[11,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(94)
for i in range(94):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of new model suite")
plt.xlabel("Model id")
plt.ylabel("MCR")
plt.hist(all_misclass[100:])
Explanation: It seems to be the case that the upper thin layer vanishes after approximately 30-40 iterations. From then on, the misclassification rate is approximately constant at around 9.5 percent (which is still quite acceptable!).
Let's compare this now to classifications with another (lower) beta value (which should put more weight to the data?):
End of explanation
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-10 with Prob (weak Beta).csv").readlines()
# initialise classification results array
cf1 = np.empty_like(nout.block)
f_set1[0]
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(5):
probs[i2,i,j,k] = float(fl[i2+6])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im2)
im2 = ax2.imshow(probs[0,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {2:0, 3:1, 5:2, 4:3, 1:4}
# Check membership for each class in original model
for i in range(1,6):
tmp = np.ones_like(nout.block) * (nout.block==i)
# test if voxels have non-zero probability by checking conjunction with zero-prob voxels
prob_zero = probs[prob_mapping[i],:,:,:] == 0
misidentified = np.sum(tmp * prob_zero)
print i, misidentified
info_entropy = np.zeros_like(nout.block)
for prob in probs:
info_entropy[prob > 0] -= prob[prob > 0] * np.log2(prob[prob > 0])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im2)
im2 = ax2.imshow(info_entropy[1,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
nout.export_to_vtk(vtk_filename = "../../sandbox/jack/info_entropy", data = info_entropy)
np.max(probs)
np.max(info_entropy)
Explanation: Determine validity of estimated probability
End of explanation |
7,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing the top hashtags (JSON)
So you have tweets in a JSON file, and you'd like to get a list of the hashtags, from the most frequently occurring hashtags on down.
There are many, many different ways to accomplish this. Since we're working with the tweets in JSON format, this solution will use jq, as well as a few bash shell / command line tools: cat, sort, uniq, and wc.
Step1: Let's see how many hashtags we extracted
Step2: What we'd like to do now is to count up how many of each hashtag we have. We'll use a combination of bash's sort and uniq commands for that. We'll also use the -c option for uniq, which prefaces each line with the count of lines it collapsed together in the process of uniqing a group of identical lines. sort's -nr options will allow us to sort by just the count on each line.
Step3: Let's take a look at what we have now.
Step4: Personally, I have no idea what most of these hashtags are about, but this is apparently what people were tweeting about on October 31, 2018.
And as for how many unique hashtags are in this set | Python Code:
!cat 50tweets.json | jq -cr '[.entities.hashtags][0][].text'
!cat tweets4hashtags.json | jq -cr '[.entities.hashtags][0][].text' > allhashtags.txt
Explanation: Computing the top hashtags (JSON)
So you have tweets in a JSON file, and you'd like to get a list of the hashtags, from the most frequently occurring hashtags on down.
There are many, many different ways to accomplish this. Since we're working with the tweets in JSON format, this solution will use jq, as well as a few bash shell / command line tools: cat, sort, uniq, and wc. If you haven't used jq yet, our Working with Twitter Using jq notebook is a good place to start.
Where are the hashtags in tweet JSON?
When we look at a tweet, we see that it has a key called entities, and that the value of entities contains a key called hashtags. The value of hashtags is a list (note the square brackets); each item in the list contains the text of a single hashtag, and the indices of the characters in the tweet text where the hashtag begins and ends.
```
{
created_at: "Tue Oct 30 09:15:45 +0000 2018",
id: 1057199367411679200,
id_str: "1057199367411679234",
text: "Lesson from Indra's elephant https://t.co/h5K3y5g4Ju #India #Hinduism #Buddhism #History #Culture https://t.co/qFyipqzPnE",
...
entities: {
hashtags: [
{
text: "India",
indices: [
54,
60
]
},
{
text: "Hinduism",
indices: [
61,
70
]
},
{
text: "Buddhism",
indices: [
71,
80
]
},
{
text: "History",
indices: [
81,
89
]
},
{
text: "Culture",
indices: [
90,
98
]
}
],
...
```
When we use jq, we'll need to construct a filter that pulls out the hashtag text values.
End of explanation
!wc -l allhashtags.txt
Explanation: Let's see how many hashtags we extracted:
End of explanation
!cat allhashtags.txt | sort | uniq -c | sort -nr > rankedhashtags.txt
Explanation: What we'd like to do now is to count up how many of each hashtag we have. We'll use a combination of bash's sort and uniq commands for that. We'll also use the -c option for uniq, which prefaces each line with the count of lines it collapsed together in the process of uniqing a group of identical lines. sort's -nr options will allow us to sort by just the count on each line.
End of explanation
!head -n 50 rankedhashtags.txt
Explanation: Let's take a look at what we have now.
End of explanation
!wc -l rankedhashtags.txt
Explanation: Personally, I have no idea what most of these hashtags are about, but this is apparently what people were tweeting about on October 31, 2018.
And as for how many unique hashtags are in this set:
End of explanation |
7,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stability map with MEGNO and WHFast
In this tutorial, we'll create a stability map of a two planet system using the chaos indicator MEGNO (Mean Exponential Growth of Nearby Orbits) and the symplectic integrator WHFast (Rein and Tamayo 2015).
We will integrate a two planet system with massive planets. We vary two orbital parameters, the semi-major axis $a$ and the eccentricity $e$. Let us first define a function that runs one simulation for a given set of initial conditions $(a, e)$.
Step1: Let's try this out and run one simulation
Step2: The return value is the MEGNO. It is about 2, thus the system is regular for these initial conditions. Let's run a whole array of simulations.
Step3: On my laptop (dual core CPU), this takes only 3 seconds!
Let's plot it! | Python Code:
def simulation(par):
a, e = par # unpack parameters
rebound.reset()
rebound.integrator = "whfast-nocor"
rebound.dt = 5.
rebound.add(m=1.) # Star
rebound.add(m=0.000954, a=5.204, anom=0.600, omega=0.257, e=0.048)
rebound.add(m=0.000285, a=a, anom=0.871, omega=1.616, e=e)
rebound.move_to_com()
rebound.init_megno(1e-16)
try:
rebound.integrate(5e2*2.*np.pi,maxR=20.) # integrator for 500 years
return rebound.calculate_megno()
except rebound.ParticleEscaping:
return 10. # At least one particle got ejected, returning large MEGNO.
Explanation: Stability map with MEGNO and WHFast
In this tutorial, we'll create a stability map of a two planet system using the chaos indicator MEGNO (Mean Exponential Growth of Nearby Orbits) and the symplectic integrator WHFast (Rein and Tamayo 2015).
We will integrate a two planet system with massive planets. We vary two orbital parameters, the semi-major axis $a$ and the eccentricity $e$. Let us first define a function that runs one simulation for a given set of initial conditions $(a, e)$.
End of explanation
import rebound
import numpy as np
simulation((7,0.1))
Explanation: Let's try this out and run one simulation
End of explanation
Ngrid = 80
par_a = np.linspace(7.,10.,Ngrid)
par_e = np.linspace(0.,0.5,Ngrid)
parameters = []
for e in par_e:
for a in par_a:
parameters.append((a,e))
from rebound.interruptible_pool import InterruptiblePool
pool = InterruptiblePool()
results = pool.map(simulation,parameters)
Explanation: The return value is the MEGNO. It is about 2, thus the system is regular for these initial conditions. Let's run a whole array of simulations.
End of explanation
results2d = np.array(results).reshape(Ngrid,Ngrid)
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(7,5))
ax = plt.subplot(111)
extent = [min(par_a),max(par_a),min(par_e),max(par_e)]
ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("semi-major axis $a$")
ax.set_ylim(extent[2],extent[3])
ax.set_ylabel("eccentricity $e$")
im = ax.imshow(results2d, interpolation="none", vmin=1.9, vmax=4, cmap="RdYlGn_r", origin="lower", aspect='auto', extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.set_label("MEGNO $\\langle Y \\rangle$")
Explanation: On my laptop (dual core CPU), this takes only 3 seconds!
Let's plot it!
End of explanation |
7,292 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
First off, I'm no mathmatician. I admit that. Yet I still need to understand how ScyPy's sparse matrices work arithmetically in order to switch from a dense NumPy matrix to a SciPy sparse matrix in an application I have to work on. The issue is memory usage. A large dense matrix will consume tons of memory. | Problem:
import numpy as np
from scipy import sparse
V = sparse.random(10, 10, density = 0.05, format = 'dok', random_state = 42)
x = 99
V._update(zip(V.keys(), np.array(list(V.values())) + x)) |
7,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I was inspired by @twiecki and his great post about Bayesian neural networks. But I thought that that way of creating BNNs is not obvious and easy for people. That's why I decided to make Gelato that is a bridge for PyMC3 and Lasagne.
I will use his convolution bnn from the post as an example of how to use gelato API.
Step1: Load Data
Step2: Create priors for weights (Spec classes)
Gelato has a flexible way to define a prior on a weight instead of just a shared variable. It supports a lot of features, listed below.
Basically, a Specification (Spec) is a delayed expression that depends on a shape. I made it possible to combine Specs with tensor operations, manipulate their shape and apply custom delayed functions.
Step3: Spec behaves like a tensor and has the same methods
Step4: Methods are used as usually, but instead of a tensor we get another Spec instance.
Step5: These operations are delayed until the user calls expr(shape). When that happens, this shape is passed to all specs defined in the expression tree. The tree is evaluated and pymc3 variables are created, one per spec instance and shape (there exist some corner cases where one spec instance can get 2 different shapes).
Step6: Note that in example above we specified one variable without shape and other .with_shape(()). So when expression is evaluated this custom shape replaces the shape that was provided in __call__.
Step7: More complex cases can require more careful shape handling. This can be achieved with tags.
Step8: Now if we try to call the expression without the tags we will get a KeyError. A shape passed without a tag is used as the default tag.
Step9: Sometimes it is useful to change the shape with a function that maps shape to shape
Step10: If you need some more complex transformation other than builtin tensor operations you can use a simple wrapper over a function
Step11: Determinant will be taken only after __call__
Step12: Disclaimer
I have to note that Gelato has some overhead magic for wrapping Lasagne layers. In particular, there is no need to use a Gelato layer if you pass a Gelato Spec class to a Lasagne layer to define a weight. But using them happens to be very convenient. There is a handy function set_default_spec that declares what prior to use if none is specified. Let's see how it works.
Priors
In this setup I decided to use hyperprior to show it can be done very easily. Moreover it worked better for me, no need to choose regularization constant.
Step13: Network
The following print output should make you think "What's the hell is going on? Is it real?"
Step14: Yes, Gelato layers are Models and Layers at the same time. Moreover context is taken from the previous layers so that you no more need with model
Step15: We now minimize the following objective
$$-\mathrm{ELBO} = KL\left[\,q(w \mid \mu, \rho)\,\|\,p(w)\,\right] - E_q\left[\log p(D \mid w)\right]$$
Getting the ELBO for the model and optimizing it is quite easy and can be done in just a few lines. As we will perform batch training, we have to say what population size we have for the observed variable, or else we will get an invalid approximation.
Let's see what we've created
Step16: Seems something went wrong
Step17: Inference
Step18: Important to note
Calling lasagne.layers.get_output we in fact get the symbolic output of the model. The inferred approximation is not yet applied. The good news is that pymc3.variational is designed to meet the needs of Bayesian deep learning, so it is pretty easy to make it work with minimal effort and few lines of code.
Step19: Making predictions
There are different approaches to making decisions in a Bayesian setup. One can use the MAP estimate for prediction; other options are to use the mean of the posterior or the predictive distribution and integrate out some statistic, usually the mean or the mode. I'll compare MAP and the posterior predictive mode
Step20: Seems like MAP estimation may be not the best way to predict. On the other hand it is much faster
Step21: Uncertainty
At first glance uncertainty may not seem important, but I'll show that it can be used for making decisions.
Simple variance
Let's calculate the average variance of the probability predictions for correct and for wrong predictions. I don't use a Chi-squared statistic to illustrate the difference as @twiecki did, but this picture still gives evidence that mistakes are made with greater uncertainty and can therefore potentially be treated as problematic examples.
Step22: Wow, variance seems to be much higher for pictures where error was done
Expected error rate (Inconsistency in predictions)
We can get the posterior distribution of the predictions, which gives us a lot of information. When we constructed y_pred_posterior we minimized that expected rate of error by taking the mode of the predictive distribution. Note that this is true for categorical predictions; for a regression problem with L2 loss one prefers to integrate out the mean of the posterior.
Disclaimer
Step23: Seems like our model is not always confident in predictions and is aware of it. Let's check how often it happens so
Step24: In 37% cases our model is not confident enough and it leeds to mistake.
High confident errors
Next: how often is our model overconfident? In other words, everything seems to be okay but we still make an error.
Step24: Good news, it happens pretty rarely, exactly 4 times.
Step26: I would rather agree with my network sometimes | Python Code:
%env THEANO_FLAGS=device=cuda0
import matplotlib.pyplot as plt
%matplotlib inline
import gelato
import theano
import theano.tensor as tt
theano.config.warn_float64 = 'warn'
import numpy as np
import lasagne
import pymc3 as pm
Explanation: I was inspired by @twiecki and his great post about Bayesian neural networks. But I thought that that way of creating BNNs is not obvious and easy for people. That's why I decided to make Gelato that is a bridge for PyMC3 and Lasagne.
I will use his convolution bnn from the post as an example of how to use gelato API.
End of explanation
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
def load_dataset():
# We first define a download function, supporting both Python 2 and 3.
mnist = fetch_mldata('MNIST original')
data = mnist['data'].reshape((70000, 1, 28, 28))
target = mnist['target']
# We can now download and read the training and test set images and labels.
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=10000)
# We reserve the last 10000 training examples for validation.
X_train, X_val = X_train[:-10000], X_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]
# We just return all the arrays in order, as expected in main().
# (It doesn't matter how we do this as long as we can read them again.)
return X_train, y_train, X_val, y_val, X_test, y_test
print("Loading data...")
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset()
total_size = X_train.shape[0]
Explanation: Load Data
End of explanation
from gelato.specs import NormalSpec
expr = NormalSpec() * NormalSpec().with_shape(())
expr
Explanation: Create priors for weights (Spec classes)
Gelato has a flexible way to define a prior on a weight instead of just a shared variable. It supports a lot of features, listed below.
Basically, a Specification (Spec) is a delayed expression that depends on a shape. I made it possible to combine Specs with tensor operations, manipulate their shape and apply custom delayed functions.
End of explanation
dir(expr)[70:80]
Explanation: Spec behaves like a tensor and has the same methods
End of explanation
expr.argmin()
Explanation: Methods are used as usually, but instead of a tensor we get another Spec instance.
End of explanation
with pm.Model() as model:
expr((100, 2))
model.vars
Explanation: These operations are delayed until the user calls expr(shape). When that happens, this shape is passed to all specs defined in the expression tree. The tree is evaluated and pymc3 variables are created, one per spec instance and shape (there exist some corner cases where one spec instance can get 2 different shapes).
End of explanation
list(map(lambda v: v.dshape, model.vars))
Explanation: Note that in example above we specified one variable without shape and other .with_shape(()). So when expression is evaluated this custom shape replaces the shape that was provided in __call__.
End of explanation
expr = NormalSpec().with_tag('one') * NormalSpec().with_tag('two') + NormalSpec()
Explanation: More complex cases can require more careful shape handling. This can be achieved with tags.
End of explanation
with pm.Model() as model:
expr(dict(one=(100, 2), two=(), default=(100, 1)))
list(map(lambda v: v.dshape, model.vars))
Explanation: Now if we try to call the expression without the tags we will get a KeyError. A shape passed without a tag is used as the default tag.
End of explanation
expr = NormalSpec().with_shape(lambda s: (1,) * len(s)) * NormalSpec()
with pm.Model() as model:
expr((10, 10))
list(map(lambda v: v.dshape, model.vars))
Explanation: Sometimes it is useful to change the shape with a function that maps shape to shape
End of explanation
from gelato.specs import as_spec_op
spec_det = as_spec_op(theano.tensor.nlinalg.det)
spec_det
Explanation: If you need some more complex transformation other than builtin tensor operations you can use a simple wrapper over a function
End of explanation
spec_det(expr)
Explanation: Determinant will be taken only after __call__
End of explanation
from gelato.specs import NormalSpec, LognormalSpec, set_default_spec
# Now every layer without passed specs will use `NormalSpec(sd=hyper)` as a prior
hyper = LognormalSpec(sd=10).with_shape(())
set_default_spec(NormalSpec(sd=hyper))
Explanation: Disclaimer
I have to note that Gelato has some overhead magic for wrapping Lasagne layers. In particular, there is no need to use a Gelato layer if you pass a Gelato Spec class to a Lasagne layer to define a weight. But using them happens to be very convenient. There is a handy function set_default_spec that declares what prior to use if none is specified. Let's see how it works.
Priors
In this setup I decided to use hyperprior to show it can be done very easily. Moreover it worked better for me, no need to choose regularization constant.
End of explanation
print(issubclass(gelato.layers.InputLayer, pm.Model) and
issubclass(gelato.layers.Conv2DLayer, pm.Model))
print(issubclass(gelato.layers.InputLayer, lasagne.layers.Layer) and
issubclass(gelato.layers.Conv2DLayer, lasagne.layers.Layer))
Explanation: Network
The following print output should make you think "What's the hell is going on? Is it real?"
End of explanation
input_var = pm.Minibatch(X_train, 100, dtype='float32')
target_var = pm.Minibatch(y_train, 100, dtype='int32')
network = gelato.layers.InputLayer(shape=(None, 1, 28, 28),
input_var=input_var)
network = gelato.layers.Conv2DLayer(
network, num_filters=32, filter_size=(5, 5),
nonlinearity=lasagne.nonlinearities.tanh)
# Max-pooling layer of factor 2 in both dimensions:
network = gelato.layers.MaxPool2DLayer(network, pool_size=(2, 2))
# Another convolution with 32 5x5 kernels, and another 2x2 pooling:
network = gelato.layers.Conv2DLayer(
network, num_filters=32, filter_size=(5, 5),
nonlinearity=lasagne.nonlinearities.tanh)
network = gelato.layers.MaxPool2DLayer(network,
pool_size=(2, 2))
n_hid2 = 176
network = gelato.layers.DenseLayer(
network, num_units=n_hid2,
nonlinearity=lasagne.nonlinearities.tanh,
)
n_hid3 = 64
network = gelato.layers.DenseLayer(
network, num_units=n_hid3,
nonlinearity=lasagne.nonlinearities.tanh,
)
# Finally, we'll add the fully-connected output layer, of 10 softmax units:
network = gelato.layers.DenseLayer(
network, num_units=10,
nonlinearity=lasagne.nonlinearities.softmax,
)
prediction = gelato.layers.get_output(network)
# Gelato layers are designed to simplify the whole process
# and are pm.Models as well as lasagne.layer.Layer
# You can work in the context of the last defined layer to define the likelihood
with network:
likelihood = pm.Categorical('out',
prediction,
observed=target_var,
total_size=total_size)
Explanation: Yes, Gelato layers are Models and Layers at the same time. Moreover, the context is taken from the previous layers, so that you no longer need the with model: ... construction for most use cases.
Minibatches
I prefer keeping the whole dataset in GPU memory when possible to avoid unnecessary data transfers. This is possible with the out-of-the-box pymc3 minibatches
End of explanation
network.vars
Explanation: We now minimize the following objective
$$-\mathrm{ELBO} = KL\left[\,q(w \mid \mu, \rho)\,\|\,p(w)\,\right] - E_q\left[\log p(D \mid w)\right]$$
Getting the ELBO for the model and optimizing it is quite easy and can be done in just a few lines. As we will perform batch training, we have to say what population size we have for the observed variable, or else we will get an invalid approximation.
Let's see what we've created
End of explanation
network.root.vars
Explanation: Seems something went wrong:(
That's not the thing you expect, but that's just a nested model.
You can find your root model accessing network.root. It stores all parameters, potentials, likelihood, etc
End of explanation
# I prefer object oriented style for doing inference in PyMC3 it allows more flexibility
# Remember that we need root model here
with network.root:
advi = pm.ADVI(scale_cost_to_minibatch=False)
advi.fit(80000, obj_optimizer=pm.adam(learning_rate=1e-3))
#import pickle
#params = advi.approx.shared_params
#pickle.dump(params, open('params.pymc3','wb'))
#advi.approx.shared_params = pickle.load(open('params.pymc3','rb'))
plt.figure(figsize=(16,9))
plt.plot(advi.hist, alpha=.3);
Explanation: Inference
End of explanation
from theano.configparser import change_flags
# The right way to compile a function without changing important pymc3 flag `compute_test_value='raise'`
with change_flags(compute_test_value='ignore'):
# create symbolic input image
inpimg = tt.tensor4('input')
# number of samples for posterior predictive distribution
it = tt.iscalar('i')
# posterior predictive probability
_prediction = gelato.layers.get_output(network)
# then replacements follow
prediction = advi.approx.apply_replacements(
_prediction, deterministic=True,
# you can replace minibatch tensor with symbolic input
more_replacements={input_var:inpimg})
predictions = advi.approx.sample_node(
_prediction, it,
more_replacements={input_var:inpimg})
# That is it, finally we compile both functions
predictions_f = theano.function([inpimg, theano.In(it, 's', 10)], predictions)
prediction_f = theano.function([inpimg], prediction)
Explanation: Important to note
Calling lasagne.layers.get_output we in fact get the symbolic output of the model. The inferred approximation is not yet applied. The good news is that pymc3.variational is designed to meet the needs of Bayesian deep learning, so it is pretty easy to make it work with minimal effort and few lines of code.
End of explanation
from scipy.stats import mode
y_pred_MAP = np.argmax(prediction_f(X_test), axis=1)
error_under_MAP = y_pred_MAP != y_test
error_rate_under_MAP = error_under_MAP.mean()
# distribution for probabilistic predictions
y_preds_posterior = predictions_f(X_test, 100)
# take distribution for modes
# than integrate out mode for modes
y_pred_posterior = mode(np.argmax(y_preds_posterior, axis=-1), axis=0).mode[0]
error_under_posterior = y_pred_posterior != y_test
error_rate_under_posterior = error_under_posterior.mean()
print('MAP : %f' % error_rate_under_MAP)
print('predictive posterior mode: %f' % error_rate_under_posterior)
Explanation: Making predictions
There are different approaches to making decisions in a Bayesian setup. One can use the MAP estimate for prediction; other options are to use the mean of the posterior or the predictive distribution and integrate out some statistic, usually the mean or the mode. I'll compare MAP and the posterior predictive mode
End of explanation
def check_the_error_at(idx):
print('true:', y_test[error_under_posterior][idx],'prediction:', y_pred_posterior[error_under_posterior][idx])
plt.gray();plt.matshow(X_test[error_under_posterior][idx][0]);plt.show();
check_the_error_at(0)
check_the_error_at(1)
Explanation: Seems like MAP estimation may not be the best way to predict. On the other hand it is much faster
End of explanation
plt.plot(y_preds_posterior.var(0)[error_under_posterior].mean(0), label='errors')
plt.plot(y_preds_posterior.var(0)[~error_under_posterior].mean(0), label='correct')
plt.plot(y_preds_posterior.var(0).mean(0), label='all')
plt.legend()
Explanation: Uncertainty
At first glance uncertainty may not seem important, but I'll show that it can be used for making decisions.
Simple variance
Let's calculate the average variance of the probability predictions for correct and for wrong predictions. I don't use a Chi-squared statistic to illustrate the difference as @twiecki did, but this picture still gives evidence that mistakes are made with greater uncertainty and can therefore potentially be treated as problematic examples.
End of explanation
y_preds_labels = np.argmax(y_preds_posterior, axis=-1)
prediction_expected_error_rate = (y_preds_labels != y_pred_posterior).mean(0)
plt.hist(prediction_expected_error_rate, bins=20)
plt.title('Expected Error Rate');
plt.xlabel('E[error rate]')
plt.ylabel('#observations')
Explanation: Wow, variance seems to be much higher for pictures where error was done
Expected error rate (Inconsistency in predictions)
We can get the posterior distribution of the predictions, which gives us a lot of information. When we constructed y_pred_posterior we minimized that expected rate of error by taking the mode of the predictive distribution. Note that this is true for categorical predictions; for a regression problem with L2 loss one prefers to integrate out the mean of the posterior.
Disclaimer: that's not exactly the expected error for object classification but for estimating the right statistic of the distribution. In our case we estimate the mode of a categorical distribution, so this error is about predicting the wrong mode and thus taking a different decision.
End of explanation
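Side note (an addition, not from the original post): the choice of statistic here matches the standard Bayes-estimator result, namely that the posterior mode minimises expected 0-1 loss while the posterior mean minimises expected squared loss:

$$\hat{y}_{0\text{-}1} = \arg\max_{k}\, p(y = k \mid x, D), \qquad \hat{y}_{L_2} = E\left[\,y \mid x, D\,\right]$$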
((prediction_expected_error_rate != 0) == error_under_posterior).mean()
Explanation: Seems like our model is not always confident in predictions and is aware of it. Let's check how often it happens so
End of explanation
(prediction_expected_error_rate[error_under_posterior] == 0).mean()
Explanation: In 37% of cases our model is not confident enough and it leads to a mistake.
High confident errors
Next: how often is our model overconfident? In other words, everything seems to be okay but we still make an error.
End of explanation
(prediction_expected_error_rate[error_under_posterior] == 0).sum()
houston_we_have_a_problem = prediction_expected_error_rate[error_under_posterior] == 0
def problem_display():
for i in range(houston_we_have_a_problem.sum()):
print('true:', y_test[error_under_posterior][houston_we_have_a_problem][i],
'prediction:', y_pred_posterior[error_under_posterior][houston_we_have_a_problem][i])
plt.gray();plt.matshow(X_test[error_under_posterior][houston_we_have_a_problem][i][0]);plt.show();
problem_display()
Explanation: Good news, it happens pretty rarely, exactly 4 times.
End of explanation
top_three = np.argsort(prediction_expected_error_rate)[-3:][::-1]
top_three
def low_confidence_examples_display():
for i in top_three:
print('true:', y_test[i],
'prediction:', y_pred_posterior[i],
'expected error rate:', prediction_expected_error_rate[i]
)
plt.gray();plt.matshow(X_test[i][0]);plt.show();
low_confidence_examples_display()
Explanation: I would rather agree with my network sometimes :)
Low confidence predictions
Let's see where model estimates the distribution over classes worst
End of explanation |
7,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Select Based On The Result Of A Select | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Nested Select
Slug: nested_select
Summary: Nested Select Based On Conditions in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
Explanation: Create Data
End of explanation
%%sql
-- Select name and age,
SELECT name, age
-- from the table 'criminals',
FROM criminals
-- where age is greater than,
WHERE age >
-- select age,
(SELECT age
-- from criminals
FROM criminals
-- where the name is 'James Smith'
WHERE name == 'James Smith')
Explanation: Select Based On The Result Of A Select
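A closely related pattern, shown here only as an illustrative variation (it is not part of the original tutorial) and reusing the criminals table created above, is to compare a column against an aggregate computed by the subquery:
%%sql
-- Illustrative variation: criminals older than the average age
SELECT name, age
FROM criminals
WHERE age >
    (SELECT AVG(age)
     FROM criminals)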
End of explanation |
7,295 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Was trying to generate a pivot table with multiple "values" columns. I know I can use aggfunc to aggregate values the way I want to, but what if I don't want to max or min both columns but instead I want max of one column while min of the other one. So is it possible to do so using pandas? | Problem:
import pandas as pd
import numpy as np
np.random.seed(1)
df = pd.DataFrame({
'A' : ['one', 'one', 'two', 'three'] * 6,
'B' : ['A', 'B', 'C'] * 8,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 4,
'D' : np.random.randn(24),
'E' : np.random.randn(24)
})
def g(df):
return pd.pivot_table(df, values=['D','E'], index=['B'], aggfunc={'D':np.max, 'E':np.min})
result = g(df.copy()) |
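As a quick, illustrative sanity check (not part of the original answer), D should now hold the per-group maxima and E the per-group minima:
# Equivalent groupby check, for illustration only
assert (result['D'] == df.groupby('B')['D'].max()).all()
assert (result['E'] == df.groupby('B')['E'].min()).all()
print(result)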
7,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.1 Getting started
Prerequisites
Installation
This tutorial requires signac, so make sure to install the package before starting.
The easiest way to do so is using conda
Step1: We start by removing all data which might be left-over from previous executions of this tutorial.
Step2: A minimal example
For this tutorial we want to compute the volume of an ideal gas as a function of its pressure and thermal energy using the ideal gas equation
$p V = N kT$, where
$N$ refers to the system size, $p$ to the pressure, $kT$ to the thermal energy and $V$ is the volume of the system.
Step3: We can execute the complete study in just a few lines of code.
First, we initialize the project directory and get a project handle
Step4: We iterate over the variable of interest p and construct a complete state point sp which contains all the meta data associated with our data.
In this simple example the meta data is very compact, but in principle the state point may be highly complex.
Next, we obtain a job handle and store the result of the calculation within the job document.
The job document is a persistent dictionary for storage of simple key-value pairs.
Here, we exploit that the state point dictionary sp can easily be passed into the V_idg() function using the keyword expansion syntax (**sp).
Step5: We can then examine our results by iterating over the data space
Step6: That's it.
...
Ok, there's more...
Let's have a closer look at the individual components.
The Basics
The signac data management framework assists the user in managing the data space of individual projects.
All data related to one or multiple projects is stored in a workspace, which by default is a directory called workspace within the project's root directory.
Step7: The core idea is to tightly couple state points, unique sets of parameters, with their associated data.
In general, the parameter space needs to contain all parameters that will affect our data.
For the ideal gas that is a 3-dimensional space spanned by the thermal energy kT, the pressure p and the system size N.
These are the input parameters for our calculations, while the calculated volume V is the output data.
In terms of signac this relationship is represented by an instance of Job.
We use the open_job() method to get a job handle for a specific set of input parameters.
Step8: The job handle tightly couples our input parameters (p, kT, N) with the storage location of the output data.
You can inspect both the input parameters and the storage location explicitly
Step9: For convenience, a job's state point may also be accessed via the short-hand sp attribute.
For example, to access the pressure value p we can use either of the two following expressions
Step10: Each job has a unique id representing the state point.
This means opening a job with the exact same input parameters is guaranteed to have the exact same id.
Step11: The job id is used to uniquely identify data associated with a specific state point.
Think of the job as a container that is used to store all data associated with the state point.
For example, it should be safe to assume that all files that are stored within the job's workspace directory are tightly coupled to the job's statepoint.
Step12: Let's store the volume calculated for each state point in a file called V.txt within the job's workspace.
Step13: Because this is such a common pattern, signac allows you to short-cut this with the job.fn() method.
Step14: Sometimes it is easier to temporarily switch the current working directory while storing data for a specific job.
For this purpose, we can use the Job object as context manager.
This means that we switch into the workspace directory associated with the job after entering, and switch back into the original working directory after exiting.
Step15: Another alternative to store light-weight data is the job document as shown in the minimal example.
The job document is a persistent JSON storage file for simple key-value pairs.
Step16: Since we are usually interested in more than one state point, the standard operation is to iterate over all variable(s) of interest, construct the full state point, get the associated job handle, and then either just initialize the job or perform the full operation.
Step17: Let's verify our result by inspecting the data. | Python Code:
import signac
assert signac.__version__ >= '0.8.0'
Explanation: 1.1 Getting started
Prerequisites
Installation
This tutorial requires signac, so make sure to install the package before starting.
The easiest way to do so is using conda:
$ conda config --add channels conda-forge
$ conda install signac
or pip:
pip install signac --user
Please refer to the documentation for detailed instructions on how to install signac.
After successful installation, the following cell should execute without error:
End of explanation
% rm -rf projects/tutorial/workspace
Explanation: We start by removing all data which might be left-over from previous executions of this tutorial.
End of explanation
def V_idg(N, kT, p):
return N * kT / p
Explanation: A minimal example
For this tutorial we want to compute the volume of an ideal gas as a function of its pressure and thermal energy using the ideal gas equation
$p V = N kT$, where
$N$ refers to the system size, $p$ to the pressure, $kT$ to the thermal energy and $V$ is the volume of the system.
End of explanation
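As a quick numeric check of the formula (added for illustration): with N = 1000, kT = 1.0 and p = 10.0 the ideal gas law gives V = 100.
# N * kT / p = 1000 * 1.0 / 10.0 = 100.0
assert V_idg(N=1000, kT=1.0, p=10.0) == 100.0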
import signac
project = signac.init_project('TutorialProject', 'projects/tutorial')
Explanation: We can execute the complete study in just a few lines of code.
First, we initialize the project directory and get a project handle:
End of explanation
for p in 0.1, 1.0, 10.0:
sp = {'p': p, 'kT': 1.0, 'N': 1000}
job = project.open_job(sp)
job.document['V'] = V_idg(**sp)
Explanation: We iterate over the variable of interest p and construct a complete state point sp which contains all the meta data associated with our data.
In this simple example the meta data is very compact, but in principle the state point may be highly complex.
Next, we obtain a job handle and store the result of the calculation within the job document.
The job document is a persistent dictionary for storage of simple key-value pairs.
Here, we exploit that the state point dictionary sp can easily be passed into the V_idg() function using the keyword expansion syntax (**sp).
End of explanation
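The keyword expansion itself is plain Python and independent of signac; as a tiny illustration, the following two calls are equivalent:
# **sp unpacks the dictionary into named arguments
sp = {'p': 10.0, 'kT': 1.0, 'N': 1000}
assert V_idg(**sp) == V_idg(p=10.0, kT=1.0, N=1000)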
for job in project:
print(job.sp.p, job.document['V'])
Explanation: We can then examine our results by iterating over the data space:
End of explanation
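Besides iterating over the whole project, jobs can usually be selected by state point; the snippet below is only an illustration and assumes signac's standard find_jobs() interface:
# Illustration: restrict the iteration to jobs with pressure p == 10.0
for job in project.find_jobs({'p': 10.0}):
    print(job.sp.p, job.document['V'])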
print(project.root_directory())
print(project.workspace())
Explanation: That's it.
...
Ok, there's more...
Let's have a closer look at the individual components.
The Basics
The signac data management framework assists the user in managing the data space of individual projects.
All data related to one or multiple projects is stored in a workspace, which by default is a directory called workspace within the project's root directory.
End of explanation
job = project.open_job({'p': 1.0, 'kT': 1.0, 'N': 1000})
Explanation: The core idea is to tightly couple state points, unique sets of parameters, with their associated data.
In general, the parameter space needs to contain all parameters that will affect our data.
For the ideal gas that is a 3-dimensional space spanned by the thermal energy kT, the pressure p and the system size N.
These are the input parameters for our calculations, while the calculated volume V is the output data.
In terms of signac this relationship is represented by an instance of Job.
We use the open_job() method to get a job handle for a specific set of input parameters.
End of explanation
print(job.statepoint())
print(job.workspace())
Explanation: The job handle tightly couples our input parameters (p, kT, N) with the storage location of the output data.
You can inspect both the input parameters and the storage location explicitly:
End of explanation
print(job.statepoint()['p'])
print(job.sp.p)
Explanation: For convenience, a job's state point may also be accessed via the short-hand sp attribute.
For example, to access the pressure value p we can use either of the two following expressions:
End of explanation
job2 = project.open_job({'kT': 1.0, 'N': 1000, 'p': 1.0})
print(job.get_id(), job2.get_id())
Explanation: Each job has a unique id representing the state point.
This means opening a job with the exact same input parameters is guaranteed to have the exact same id.
End of explanation
print(job.workspace())
Explanation: The job id is used to uniquely identify data associated with a specific state point.
Think of the job as a container that is used to store all data associated with the state point.
For example, it should be safe to assume that all files that are stored within the job's workspace directory are tightly coupled to the job's statepoint.
End of explanation
import os
fn_out = os.path.join(job.workspace(), 'V.txt')
with open(fn_out, 'w') as file:
V = V_idg(** job.statepoint())
file.write(str(V) + '\n')
Explanation: Let's store the volume calculated for each state point in a file called V.txt within the job's workspace.
End of explanation
with open(job.fn('V.txt'), 'w') as file:
V = V_idg(** job.statepoint())
file.write(str(V) + '\n')
Explanation: Because this is such a common pattern, signac allows you to short-cut this with the job.fn() method.
End of explanation
with job:
with open('V.txt', 'w') as file:
file.write(str(V) + '\n')
Explanation: Sometimes it is easier to temporarily switch the current working directory while storing data for a specific job.
For this purpose, we can use the Job object as context manager.
This means that we switch into the workspace directory associated with the job after entering, and switch back into the original working directory after exiting.
End of explanation
job.document['V'] = V_idg(** job.statepoint())
print(job.statepoint(), job.document)
Explanation: Another alternative to store light-weight data is the job document as shown in the minimal example.
The job document is a persistent JSON storage file for simple key-value pairs.
End of explanation
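Since the job document behaves like a dictionary, the stored value can be read back directly; a small illustration:
# Dict-like read access to the job document
print(job.document['V'])
print(job.document.get('V'))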
for pressure in 0.1, 1.0, 10.0:
statepoint = {'p': pressure, 'kT': 1.0, 'N': 1000}
job = project.open_job(statepoint)
job.document['V'] = V_idg(** job.statepoint())
Explanation: Since we are usually interested in more than one state point, the standard operation is to iterate over all variable(s) of interest, construct the full state point, get the associated job handle, and then either just initialize the job or perform the full operation.
End of explanation
for job in project:
print(job.statepoint(), job.document)
Explanation: Let's verify our result by inspecting the data.
End of explanation |
7,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Tuning
Step1: basic Usage
A couple things are needed by the tuner
Step2: prepare the model
Step3: start tuning
Step4: view the best hyper-parameter set
Step5: understanding hyper-space
model.params.hyper_space represents the model's hyper-parameter search space, which is the cross-product of the individual hyper-parameters' hyper-spaces. When a Tuner builds a model, for each hyper-parameter in model.params, if the hyper-parameter has a hyper-space, then a sample will be taken in that space. However, if the hyper-parameter does not have a hyper-space, then the default value of the hyper-parameter will be used.
Step6: In a DenseBaseline model, only mlp_num_units, mlp_num_layers, and mlp_num_fan_out have pre-defined hyper-space. In other words, only these hyper-parameters will change values during a tuning. Other hyper-parameters, like mlp_activation_func, are fixed and will not change.
Step7: This is similar to the process of a tuner sampling model hyper-parameters, but with one key difference
Step8: setting hyper-space
What if I want the tuner to choose optimizer among adam, adagrad, and rmsprop?
Step9: What about setting mlp_num_layers to a fixed value of 2?
Step10: using callbacks
To save the model during the tuning process, use mz.auto.tuner.callbacks.SaveModel.
Step11: This will save all built models to your mz.USER_TUNED_MODELS_DIR, and they can be loaded by
Step12: To load a pre-trained embedding layer into a built model during a tuning process, use mz.auto.tuner.callbacks.LoadEmbeddingMatrix.
Step13: make your own callbacks
To build your own callbacks, inherit mz.auto.tuner.callbacks.Callback and override the corresponding methods.
A run proceeds in the following way | Python Code:
import matchzoo as mz
train_raw = mz.datasets.toy.load_data('train')
dev_raw = mz.datasets.toy.load_data('dev')
test_raw = mz.datasets.toy.load_data('test')
Explanation: Model Tuning
End of explanation
preprocessor = mz.models.DenseBaseline.get_default_preprocessor()
train = preprocessor.fit_transform(train_raw, verbose=0)
dev = preprocessor.transform(dev_raw, verbose=0)
test = preprocessor.transform(test_raw, verbose=0)
Explanation: basic Usage
A couple things are needed by the tuner:
- a model with its parameters filled
- preprocessed training data
- preprocessed testing data
Since MatchZoo models have pre-defined hyper-spaces, the tuner can start tuning right away once you have the data ready.
prepare the data
End of explanation
model = mz.models.DenseBaseline()
model.params['input_shapes'] = preprocessor.context['input_shapes']
model.params['task'] = mz.tasks.Ranking()
Explanation: prepare the model
End of explanation
tuner = mz.auto.Tuner(
params=model.params,
train_data=train,
test_data=dev,
num_runs=5
)
results = tuner.tune()
Explanation: start tuning
End of explanation
results['best']
results['best']['params'].to_frame()
Explanation: view the best hyper-parameter set
End of explanation
model.params.hyper_space
Explanation: understanding hyper-space
model.params.hyper_space represents the model's hyper-parameter search space, which is the cross-product of the individual hyper-parameters' hyper-spaces. When a Tuner builds a model, for each hyper-parameter in model.params, if the hyper-parameter has a hyper-space, then a sample will be taken in that space. However, if the hyper-parameter does not have a hyper-space, then the default value of the hyper-parameter will be used.
End of explanation
def sample_and_build(params):
sample = mz.hyper_spaces.sample(params.hyper_space)
print('if sampled:', sample, '\n')
params.update(sample)
print('the built model will have:\n')
print(params, '\n\n\n')
for _ in range(3):
sample_and_build(model.params)
Explanation: In a DenseBaseline model, only mlp_num_units, mlp_num_layers, and mlp_num_fan_out have pre-defined hyper-space. In other words, only these hyper-parameters will change values during a tuning. Other hyper-parameters, like mlp_activation_func, are fixed and will not change.
End of explanation
print(model.params.get('mlp_num_units').hyper_space)
model.params.to_frame()[['Name', 'Hyper-Space']]
Explanation: This is similar to the process of a tuner sampling model hyper-parameters, but with one key difference: a tuner's hyper-space is suggestive. This means the sampling process in a tuner is not truly random but skewed. Scores of past samples affect future choices: a tuner with more runs knows its hyper-space better, and takes samples in a way that will likely yield better scores.
For more details, consult the tuner's backend, hyperopt, and the search algorithm the tuner uses, Tree of Parzen Estimators (TPE).
Hyper-spaces can also be represented in a human-readable format.
End of explanation
model.params.get('optimizer').hyper_space = mz.hyper_spaces.choice(['adam', 'adagrad', 'rmsprop'])
for _ in range(10):
print(mz.hyper_spaces.sample(model.params.hyper_space))
Explanation: setting hyper-space
What if I want the tuner to choose optimizer among adam, adagrad, and rmsprop?
End of explanation
model.params['mlp_num_layers'] = 2
model.params.get('mlp_num_layers').hyper_space = None
for _ in range(10):
print(mz.hyper_spaces.sample(model.params.hyper_space))
Explanation: What about setting mlp_num_layers to a fixed value of 2?
End of explanation
tuner.num_runs = 2
tuner.callbacks.append(mz.auto.tuner.callbacks.SaveModel())
results = tuner.tune()
Explanation: using callbacks
To save the model during the tuning process, use mz.auto.tuner.callbacks.SaveModel.
End of explanation
best_model_id = results['best']['model_id']
mz.load_model(mz.USER_TUNED_MODELS_DIR.joinpath(best_model_id))
Explanation: This will save all built models to your mz.USER_TUNED_MODELS_DIR, and they can be loaded by:
End of explanation
toy_embedding = mz.datasets.toy.load_embedding()
preprocessor = mz.models.DUET.get_default_preprocessor()
train = preprocessor.fit_transform(train_raw, verbose=0)
dev = preprocessor.transform(dev_raw, verbose=0)
params = mz.models.DUET.get_default_params()
params['task'] = mz.tasks.Ranking()
params.update(preprocessor.context)
params['embedding_output_dim'] = toy_embedding.output_dim
embedding_matrix = toy_embedding.build_matrix(preprocessor.context['vocab_unit'].state['term_index'])
load_embedding_matrix_callback = mz.auto.tuner.callbacks.LoadEmbeddingMatrix(embedding_matrix)
tuner = mz.auto.tuner.Tuner(
params=params,
train_data=train,
test_data=dev,
num_runs=1
)
tuner.callbacks.append(load_embedding_matrix_callback)
results = tuner.tune()
Explanation: To load a pre-trained embedding layer into a built model during a tuning process, use mz.auto.tuner.callbacks.LoadEmbeddingMatrix.
End of explanation
import numpy as np
class ValidateEmbedding(mz.auto.tuner.callbacks.Callback):
def __init__(self, embedding_matrix):
self._matrix = embedding_matrix
def on_build_end(self, tuner, model):
loaded_matrix = model.get_embedding_layer().get_weights()[0]
if np.isclose(self._matrix, loaded_matrix).all():
            print("Yes! My embedding is correctly loaded!")
validate_embedding_matrix_callback = ValidateEmbedding(embedding_matrix)
tuner = mz.auto.tuner.Tuner(
params=params,
train_data=train,
test_data=dev,
num_runs=1,
callbacks=[load_embedding_matrix_callback, validate_embedding_matrix_callback]
)
tuner.callbacks.append(load_embedding_matrix_callback)
results = tuner.tune()
Explanation: make your own callbacks
To build your own callbacks, inherit mz.auto.tuner.callbacks.Callback and override the corresponding methods.
A run proceeds in the following way:
run start (callback)
build model
build end (callback)
fit and evaluate model
collect result
run end (callback)
This process is repeated num_runs times in a tuner.
For example, say I want to verify if my embedding matrix is correctly loaded.
End of explanation |
7,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 2</font>
Download
Step1: Dictionaries
Step2: Creating nested dictionaries | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 2</font>
Download: http://github.com/dsacademybr
End of explanation
# This is a list
estudantes_lst = ["Mateus", 24, "Fernanda", 22, "Tamires", 26, "Cristiano", 25]
estudantes_lst
# This is a dictionary
estudantes_dict = {"Mateus":24, "Fernanda":22, "Tamires":26, "Cristiano":25}
estudantes_dict
estudantes_dict["Mateus"]
estudantes_dict["Pedro"] = 23
estudantes_dict["Pedro"]
estudantes_dict["Tamires"]
estudantes_dict.clear()
estudantes_dict
del estudantes_dict
estudantes_dict
estudantes = {"Mateus":24, "Fernanda":22, "Tamires":26, "Cristiano":25}
estudantes
len(estudantes)
estudantes.keys()
estudantes.values()
estudantes.items()
estudantes2 = {"Maria":27, "Erika":28, "Milton":26}
estudantes2
estudantes.update(estudantes2)
estudantes
dic1 = {}
dic1
dic1["key_one"] = 2
print(dic1)
dic1[10] = 5
dic1
dic1[8.2] = "Python"
dic1
dic1["teste"] = 5
dic1
dict1 = {}
dict1
dict1["teste"] = 10
dict1["key"] = "teste"
# Note that a key and a value can be equal, but they represent different things.
dict1
dict2 = {}
dict2["key1"] = "Big Data"
dict2["key2"] = 10
dict2["key3"] = 5.6
dict2
a = dict2["key1"]
b = dict2["key2"]
c = dict2["key3"]
a, b, c
# Dictionary of lists
dict3 = {'key1':1230,'key2':[22,453,73.4],'key3':['leite','maça','batata']}
dict3
dict3['key2']
# Accessing a list item inside the dictionary
dict3['key3'][0].upper()
# Operations on list items inside the dictionary
var1 = dict3['key2'][0] - 2
var1
# Two operations in the same statement, to update an item inside the list
dict3['key2'][0] -= 2
dict3
Explanation: Dictionaries
End of explanation
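As a small addition to the examples above (not part of the original notebook), dictionaries also support safe lookups with get() and iteration over key-value pairs:
# get() returns a default instead of raising KeyError for missing keys
estudantes.get("Mateus", 0)
estudantes.get("Unknown", 0)
# Iterating over key-value pairs
for name, age in estudantes.items():
    print(name, age)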
# Creating nested dictionaries
dict_aninhado = {'key1':{'key2_aninhada':{'key3_aninhada':'Dict aninhado em Python'}}}
dict_aninhado
dict_aninhado['key1']['key2_aninhada']['key3_aninhada']
Explanation: Creating nested dictionaries
End of explanation |
7,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 3
Step1: Visualize initial state
Step2: Run simulation and visualize new state
Step3: Queries about LAMMPS simulation
Step4: Working with LAMMPS Variables
Step5: Accessing Atom data | Python Code:
from lammps import IPyLammps
L = IPyLammps()
# 2d circle of particles inside a box with LJ walls
import math
b = 0
x = 50
y = 20
d = 20
# careful not to slam into wall too hard
v = 0.3
w = 0.08
L.units("lj")
L.dimension(2)
L.atom_style("bond")
L.boundary("f f p")
L.lattice("hex", 0.85)
L.region("box", "block", 0, x, 0, y, -0.5, 0.5)
L.create_box(1, "box", "bond/types", 1, "extra/bond/per/atom", 6)
L.region("circle", "sphere", d/2.0+1.0, d/2.0/math.sqrt(3.0)+1, 0.0, d/2.0)
L.create_atoms(1, "region", "circle")
L.mass(1, 1.0)
L.velocity("all create 0.5 87287 loop geom")
L.velocity("all set", v, w, 0, "sum yes")
L.pair_style("lj/cut", 2.5)
L.pair_coeff(1, 1, 10.0, 1.0, 2.5)
L.bond_style("harmonic")
L.bond_coeff(1, 10.0, 1.2)
L.create_bonds("many", "all", "all", 1, 1.0, 1.5)
L.neighbor(0.3, "bin")
L.neigh_modify("delay", 0, "every", 1, "check yes")
L.fix(1, "all", "nve")
L.fix(2, "all wall/lj93 xlo 0.0 1 1 2.5 xhi", x, "1 1 2.5")
L.fix(3, "all wall/lj93 ylo 0.0 1 1 2.5 yhi", y, "1 1 2.5")
Explanation: Example 3: 2D circle of particles inside a box with LJ walls
Prerequisites
Before running this example, make sure your Python environment can find the LAMMPS shared library (liblammps.so) and the LAMMPS Python package is installed. If you followed the README in this folder, this should already be the case. You can also find more information about how to compile LAMMPS and install the LAMMPS Python package in the LAMMPS manual. There is also a dedicated PyLammps HowTo.
Setup system
End of explanation
L.image(zoom=1.8)
Explanation: Visualize initial state
End of explanation
L.thermo_style("custom step temp epair press")
L.thermo(100)
output = L.run(40000)
L.image(zoom=1.8)
Explanation: Run simulation and visualize new state
End of explanation
L.system
L.system.natoms
L.system.nbonds
L.system.nbondtypes
L.communication
L.fixes
L.computes
L.dumps
L.groups
Explanation: Queries about LAMMPS simulation
End of explanation
L.variable("a index 2")
L.variables
L.variable("t equal temp")
L.variables
import sys
if sys.version_info < (3, 0):
# In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead.
x = float(L.lmp_print('"${a}"'))
else:
# In Python 3 the print function can be redefined.
# x = float(L.print('"${a}"')")
# To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement
x = float(eval("L.print('\"${a}\"')"))
x
L.variables['t'].value
L.eval("v_t/2.0")
L.variable("b index a b c")
L.variables['b'].value
L.eval("v_b")
L.variables['b'].definition
L.variable("i loop 10")
L.variables['i'].value
L.next("i")
L.variables['i'].value
L.eval("ke")
Explanation: Working with LAMMPS Variables
End of explanation
L.atoms[0]
dir(L.atoms[0])
L.atoms[0].position
L.atoms[0].id
L.atoms[0].velocity
L.atoms[0].force
L.atoms[0].type
Explanation: Accessing Atom data
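Building only on the attributes shown above, here is a small illustrative sketch (not part of the original example) that averages the positions of all atoms:
# Illustration: mean atom position using only L.system.natoms and L.atoms[i].position
avg = [0.0, 0.0, 0.0]
for i in range(L.system.natoms):
    pos = L.atoms[i].position
    avg = [a + p for a, p in zip(avg, pos)]
avg = [a / L.system.natoms for a in avg]
print(avg)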
End of explanation |