Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
4,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification with TensorFlow Hub
Step2: Download the IMDB dataset
The IMDB dataset is available in imdb reviews or on TensorFlow datasets. Use the following code to download the IMDB dataset to your machine (or the Colab runtime).
Step3: Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 indicates a negative review and 1 indicates a positive review.
Let's print the first 10 examples.
Step4: Let's also print the first 10 labels.
Step5: Build the model
A neural network is created by stacking layers. This requires three main architectural decisions:
How to represent the text?
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert the sentences into embedding vectors. Using a pre-trained text embedding as the first layer gives three benefits:
You don't have to worry about text preprocessing.
You can benefit from transfer learning.
The embedding has a fixed size, so it's simpler to process.
For this example we use google/nnlm-en-dim50/2, a pre-trained text embedding model from TensorFlow Hub.
There are many other pre-trained text embeddings from TFHub that can be used in this tutorial:
google/nnlm-en-dim128/2 - trained with the same NNLM architecture on the same data as google/nnlm-en-dim50/2, but with a larger embedding dimension. Larger-dimensional embeddings can improve your task, but the model may take longer to train.
google/nnlm-en-dim128-with-normalization/2 - the same as google/nnlm-en-dim128/2, but with additional text normalization such as punctuation removal. This can help if the text of your task contains additional characters or punctuation.
google/universal-sentence-encoder/4 - a much larger model that produces 512-dimensional embeddings, trained with a deep averaging network (DAN) encoder.
And many more. Search TFHub for additional text embedding models.
To start, let's create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a few input examples. Note that regardless of the length of the input text, the output shape of the embeddings is (num_examples, embedding_dimension).
Step6: Let's now build the full model.
Step7: The layers are stacked sequentially to build the classifier:
The first layer is a TensorFlow Hub layer. This layer uses a pre-trained SavedModel to map a sentence to its embedding vector. The model being used (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token, and then combines the embeddings. The resulting dimensions are (num_examples, embedding_dimension). For this NNLM model, the embedding_dimension is 50.
This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Let's compile the model.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), we use the binary_crossentropy loss function.
This isn't the only choice of loss function; you could, for instance, use mean_squared_error. But, generally speaking, binary_crossentropy is better suited for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when exploring regression problems (say, predicting the price of a house), we will see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function.
Step8: Train the model
Train the model for 10 epochs in mini-batches of 512 samples, i.e. 10 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.
Step9: Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tensorflow-hub
!pip install tensorflow-datasets
import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Explanation: Text classification with TensorFlow Hub: movie reviews
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a>
</td>
<td> <a href="https://tfhub.dev/s?module-type=text-embedding"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">Browse TF Hub models</a> </td>
</table>
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.
This tutorial demonstrates the basic application of transfer learning with TensorFlow Hub and Keras.
It uses a large dataset of 50,000 movie reviews extracted from the Internet Movie Database. The reviews are split into 25,000 for training and 25,000 for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow, and tensorflow_hub, a library for loading trained models from TFHub in a single line of code. For a more advanced text classification tutorial using tf.keras, see the <a>MLCC Text Classification Guide</a>.
End of explanation
# Split the training set into 60% and 40% to end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
Explanation: Download the IMDB dataset
The IMDB dataset is available in imdb reviews or on TensorFlow datasets. Use the following code to download the IMDB dataset to your machine (or the Colab runtime).
End of explanation
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
Explanation: Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 indicates a negative review and 1 indicates a positive review.
Let's print the first 10 examples.
End of explanation
train_labels_batch
Explanation: Let's also print the first 10 labels.
End of explanation
embedding = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
Explanation: Build the model
A neural network is created by stacking layers. This requires three main architectural decisions:
How to represent the text?
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert the sentences into embedding vectors. Using a pre-trained text embedding as the first layer gives three benefits:
You don't have to worry about text preprocessing.
You can benefit from transfer learning.
The embedding has a fixed size, so it's simpler to process.
For this example we use google/nnlm-en-dim50/2, a pre-trained text embedding model from TensorFlow Hub.
There are many other pre-trained text embeddings from TFHub that can be used in this tutorial:
google/nnlm-en-dim128/2 - trained with the same NNLM architecture on the same data as google/nnlm-en-dim50/2, but with a larger embedding dimension. Larger-dimensional embeddings can improve your task, but the model may take longer to train.
google/nnlm-en-dim128-with-normalization/2 - the same as google/nnlm-en-dim128/2, but with additional text normalization such as punctuation removal. This can help if the text of your task contains additional characters or punctuation.
google/universal-sentence-encoder/4 - a much larger model that produces 512-dimensional embeddings, trained with a deep averaging network (DAN) encoder.
And many more. Search TFHub for additional text embedding models.
To start, let's create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a few input examples. Note that regardless of the length of the input text, the output shape of the embeddings is (num_examples, embedding_dimension).
End of explanation
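As an aside (not part of the original tutorial flow), any of the alternative embeddings listed above can be tried by swapping only the Hub handle; a sketch:
# Illustrative variant: a different TF Hub handle, everything downstream unchanged.
alternative_embedding = "https://tfhub.dev/google/nnlm-en-dim128/2"
alt_hub_layer = hub.KerasLayer(alternative_embedding, input_shape=[],
                               dtype=tf.string, trainable=True)
alt_hub_layer(train_examples_batch[:3])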
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
Explanation: Let's now build the full model.
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is a TensorFlow Hub layer. This layer uses a pre-trained SavedModel to map a sentence to its embedding vector. The model being used (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token, and then combines the embeddings. The resulting dimensions are (num_examples, embedding_dimension). For this NNLM model, the embedding_dimension is 50.
This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Let's compile the model.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), we use the binary_crossentropy loss function.
This isn't the only choice of loss function; you could, for instance, use mean_squared_error. But, generally speaking, binary_crossentropy is better suited for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when exploring regression problems (say, predicting the price of a house), we will see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function.
End of explanation
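To make the loss discussion above concrete, here is a small illustrative comparison of the two losses on a few made-up logits (this cell is an addition, not part of the original notebook):
# Toy values only: compare binary_crossentropy and mean_squared_error.
y_true_demo = tf.constant([[0.0], [1.0], [1.0]])
y_logits_demo = tf.constant([[-2.0], [0.5], [3.0]])  # hypothetical raw model outputs
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
mse = tf.keras.losses.MeanSquaredError()
print("binary_crossentropy:", bce(y_true_demo, y_logits_demo).numpy())
print("mean_squared_error:", mse(y_true_demo, tf.sigmoid(y_logits_demo)).numpy())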
history = model.fit(train_data.shuffle(10000).batch(512),
epochs=10,
validation_data=validation_data.batch(512),
verbose=1)
Explanation: Train the model
Train the model for 10 epochs in mini-batches of 512 samples, i.e. 10 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.
End of explanation
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
Explanation: Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy.
End of explanation |
4,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a classifier to predict the wine color from wine quality attributes using this dataset
Step1: Query for the data and create a numpy array
Step2: Split the data into features (x) and target (y, the last column in the table)
Remember you can cast the results into a numpy array and then slice out what you want
Step3: Create a decision tree with the data
Step4: Run 10-fold cross validation on the model
Step5: If you have time, calculate the feature importance and graph based on the code in the slides from last class | Python Code:
import pg8000
conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', database="training", port=5432, user='dot_student', password='qgis')
cursor = conn.cursor()
database=cursor.execute("SELECT * FROM winequality")
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_sql("SELECT * FROM winequality", conn)
df.head()
df=df.rename(columns = lambda x : str(x)[1:])
df.columns = [x.replace('\'', '') for x in df.columns]
df.columns
Explanation: Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality
The data is in the database we've been using
host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com'
database='training'
port=5432
user='dot_student'
password='qgis'
table name = 'winequality'
End of explanation
df.info()
Explanation: Query for the data and create a numpy array
End of explanation
x = df.ix[:, df.columns != 'color'].as_matrix() # the attributes
x
y = df['color'].as_matrix() # the attributes
y
Explanation: Split the data into features (x) and target (y, the last column in the table)
Remember you can cast the results into a numpy array and then slice out what you want
End of explanation
from sklearn import tree
import matplotlib.pyplot as plt
import numpy as np
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
Explanation: Create a decision tree with the data
End of explanation
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(dt,x,y,cv=10)
np.mean(scores)
Explanation: Run 10-fold cross validation on the model
End of explanation
df.columns
# running this on the decision tree
plt.plot(dt.feature_importances_,'o')
plt.ylim(0,1)
plt.xlim(0,10)
# free_sulfur_dioxide is the most important feature.
Explanation: If you have time, calculate the feature importance and graph based on the code in the slides from last class
End of explanation |
4,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Principles of Machine Learning
Here we'll dive into the basic principles of machine learning, and how to
utilize them via the Scikit-Learn API.
After briefly introducing scikit-learn's Estimator object, we'll cover supervised learning, including classification and regression problems, and unsupervised learning, including dimensionality reduction and clustering problems.
Step1: The Scikit-learn Estimator Object
Every algorithm is exposed in scikit-learn via an "Estimator" object. For instance, a linear regression is implemented as follows
Step2: Estimator parameters
Step3: Estimated Model parameters
Step4: The model found a line with a slope 2 and intercept 1, as we'd expect.
Supervised Learning
Step5: You can also do probabilistic predictions
Step6: Exercise
Use a different estimator on the same problem
Step7: As above, we can plot a line of best fit
Step8: Scikit-learn also has some more sophisticated models, which can respond to finer features in the data
Step9: Whether either of these is a "good" fit or not depends on a number of things; we'll discuss details of how to choose a model later in the tutorial.
Exercise
Explore the RandomForestRegressor object using IPython's help features (i.e. put a question mark after the object).
What arguments are available to RandomForestRegressor?
How does the above plot change if you change these arguments?
These class-level arguments are known as hyperparameters, and we will discuss later how to select hyperparameters in the model validation section.
Unsupervised Learning
Step10: Clustering
Step11: Recap
Step12: A more useful way to look at the results is to view the confusion matrix, or the matrix showing the frequency of inputs and outputs
Step13: For each class, all 50 training samples are correctly identified. But this does not mean that our model is perfect! In particular, such a model generalizes extremely poorly to new data. We can simulate this by splitting our data into a training set and a testing set. Scikit-learn contains some convenient routines to do this
Step14: This paints a better picture of the true performance of our classifier
Step15: Original source on the scikit-learn website
Quick Application
Step16: Let's plot a few of these
Step17: Here the data is simply each pixel value within an 8x8 grid
Step18: So our data have 1797 samples in 64 dimensions.
Unsupervised Learning
Step19: We see here that the digits are fairly well-separated in the parameter space; this tells us that a supervised classification algorithm should perform fairly well. Let's give it a try.
Classification on Digits
Let's try a classification task on the digits. The first thing we'll want to do is split the digits into a training and testing sample
Step20: Let's use a simple logistic regression which (despite its confusing name) is a classification algorithm
Step21: We can check our classification accuracy by comparing the true values of the test set to the predictions
Step22: This single number doesn't tell us where we've gone wrong
Step23: We might also take a look at some of the outputs along with their predicted labels. We'll make the bad labels red | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Basic Principles of Machine Learning
Here we'll dive into the basic principles of machine learning, and how to
utilize them via the Scikit-Learn API.
After briefly introducing scikit-learn's Estimator object, we'll cover supervised learning, including classification and regression problems, and unsupervised learning, including dimensionality reduction and clustering problems.
End of explanation
from sklearn.linear_model import LinearRegression
Explanation: The Scikit-learn Estimator Object
Every algorithm is exposed in scikit-learn via an "Estimator" object. For instance, a linear regression is implemented as follows:
End of explanation
model = LinearRegression(normalize=True)
print(model.normalize)
print(model)
model2 = LinearRegression()
model?
Explanation: Estimator parameters: All the parameters of an estimator can be set when it is instantiated, and have suitable default values:
End of explanation
x = np.arange(10)
y = 2 * x + 1
print(x)
print(y)
plt.plot(x, y, 'o');
# The input data for sklearn is 2D: (samples == 10 x features == 1)
X = x[:, np.newaxis]
print(X)
print(y)
# fit the model on our data
model.fit(X, y)
# underscore at the end indicates a fit parameter
print(model.coef_)
print(model.intercept_)
# residual error around fit
model.residues_
model.score(X, y)
Explanation: Estimated Model parameters: When data is fit with an estimator, parameters are estimated from the data at hand. All the estimated parameters are attributes of the estimator object ending by an underscore:
End of explanation
from sklearn import neighbors, datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
# create the model
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
# fit the model
knn.fit(X, y)
# What kind of iris has 3cm x 5cm sepal and 4cm x 2cm petal?
# call the "predict" method:
result = knn.predict([[5, 3, 4, 2],])
print(iris.target_names[result])
Explanation: The model found a line with a slope 2 and intercept 1, as we'd expect.
Supervised Learning: Classification and Regression
In Supervised Learning, we have a dataset consisting of both features and labels.
The task is to construct an estimator which is able to predict the label of an object
given the set of features. A relatively simple example is predicting the species of
iris given a set of measurements of its flower. This is a relatively simple task.
Some more complicated examples are:
given a multicolor image of an object through a telescope, determine
whether that object is a star, a quasar, or a galaxy.
given a photograph of a person, identify the person in the photo.
given a list of movies a person has watched and their personal rating
of the movie, recommend a list of movies they would like
(So-called recommender systems: a famous example is the Netflix Prize).
What these tasks have in common is that there is one or more unknown
quantities associated with the object which needs to be determined from other
observed quantities.
Supervised learning is further broken down into two categories, classification and regression.
In classification, the label is discrete, while in regression, the label is continuous. For example,
in astronomy, the task of determining whether an object is a star, a galaxy, or a quasar is a
classification problem: the label is from three distinct categories. On the other hand, we might
wish to estimate the age of an object based on such observations: this would be a regression problem,
because the label (age) is a continuous quantity.
Classification Example
K nearest neighbors (kNN) is one of the simplest learning strategies: given a new, unknown observation, look up in your reference database which ones have the closest features and assign the predominant class.
Let's try it out on our iris classification problem:
End of explanation
iris.target_names
knn.predict_proba([[5, 3, 4, 2],])
from fig_code import plot_iris_knn
plot_iris_knn()
Explanation: You can also do probabilistic predictions:
End of explanation
# Create some simple data
import numpy as np
np.random.seed(0)
X = np.random.random(size=(20, 1))
y = 3 * X.squeeze() + 2 + np.random.randn(20)
plt.plot(X.squeeze(), y, 'o');
Explanation: Exercise
Use a different estimator on the same problem: sklearn.svm.SVC.
Note that you don't have to know what it is to use it. We're simply trying out the interface here
If you finish early, try to create a similar plot as above with the SVC estimator.
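One possible take on the exercise above (a sketch, not the only solution); SVC exposes the same fit/predict interface, here applied to the already-loaded iris data:
from sklearn.svm import SVC

svc = SVC()
svc.fit(iris.data, iris.target)
print(iris.target_names[svc.predict([[5, 3, 4, 2]])])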
Regression Example
One of the simplest regression problems is fitting a line to data, which we saw above.
Scikit-learn also contains more sophisticated regression algorithms
End of explanation
model = LinearRegression()
model.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y_fit = model.predict(X_fit)
plt.plot(X.squeeze(), y, 'o')
plt.plot(X_fit.squeeze(), y_fit);
Explanation: As above, we can plot a line of best fit:
End of explanation
# Fit a Random Forest
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=10, max_depth=5)
model.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y_fit = model.predict(X_fit)
plt.plot(X.squeeze(), y, 'o')
plt.plot(X_fit.squeeze(), y_fit);
Explanation: Scikit-learn also has some more sophisticated models, which can respond to finer features in the data:
End of explanation
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=0.95)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
import pylab as plt
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], #c=y,
cmap='RdYlBu')
print("Meaning of the 2 components:")
for component in pca.components_:
print(" + ".join("%.3f x %s" % (value, name)
for value, name in zip(component,
iris.feature_names)))
Explanation: Whether either of these is a "good" fit or not depends on a number of things; we'll discuss details of how to choose a model later in the tutorial.
Exercise
Explore the RandomForestRegressor object using IPython's help features (i.e. put a question mark after the object).
What arguments are available to RandomForestRegressor?
How does the above plot change if you change these arguments?
These class-level arguments are known as hyperparameters, and we will discuss later how to select hyperparameters in the model validation section.
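A rough sketch of the exercise above; the hyperparameter values below are arbitrary choices, picked only to make the effect visible:
# Arbitrary example settings: a small shallow forest vs. a larger deeper one.
for n_estimators, max_depth in [(5, 2), (200, 10)]:
    rf = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth)
    rf.fit(X, y)
    plt.plot(X_fit.squeeze(), rf.predict(X_fit),
             label='%d trees, depth %d' % (n_estimators, max_depth))
plt.plot(X.squeeze(), y, 'o')
plt.legend();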
Unsupervised Learning: Dimensionality Reduction and Clustering
Unsupervised Learning addresses a different sort of problem. Here the data has no labels,
and we are interested in finding similarities between the objects in question. In a sense,
you can think of unsupervised learning as a means of discovering labels from the data itself.
Unsupervised learning comprises tasks such as dimensionality reduction, clustering, and
density estimation. For example, in the iris data discussed above, we can use unsupervised
methods to determine combinations of the measurements which best display the structure of the
data. As we'll see below, such a projection of the data can be used to visualize the
four-dimensional dataset in two dimensions. Some more involved unsupervised learning problems are:
given detailed observations of distant galaxies, determine which features or combinations of
features best summarize the information.
given a mixture of two sound sources (for example, a person talking over some music),
separate the two (this is called the blind source separation problem).
given a video, isolate a moving object and categorize in relation to other moving objects which have been seen.
Sometimes the two may even be combined: e.g. Unsupervised learning can be used to find useful
features in heterogeneous data, and then these features can be used within a supervised
framework.
Dimensionality Reduction: PCA
Principle Component Analysis (PCA) is a dimension reduction technique that can find the combinations of variables that explain the most variance.
Consider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:
End of explanation
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred,
cmap='RdYlBu');
Explanation: Clustering: K-means
Clustering groups together observations that are homogeneous with respect to a given criterion, finding "clusters" in the data.
Note that these clusters will uncover relevent hidden structure of the data only if the criterion used highlights it.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
X, y = iris.data, iris.target
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
y_pred = clf.predict(X)
print(np.all(y == y_pred))
Explanation: Recap: Scikit-learn's estimator interface
Scikit-learn strives to have a uniform interface across all methods,
and we'll see examples of these below. Given a scikit-learn estimator
object named model, the following methods are available:
Available in all Estimators
model.fit() : fit training data. For supervised learning applications,
this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).
For unsupervised learning applications, this accepts only a single argument,
the data X (e.g. model.fit(X)).
Available in supervised estimators
model.predict() : given a trained model, predict the label of a new set of data.
This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),
and returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categorical label.
In this case, the label with the highest probability is returned by model.predict().
model.score() : for classification or regression problems, most (all?) estimators implement
a score method. Scores are between 0 and 1, with a larger score indicating a better fit.
Available in unsupervised estimators
model.predict() : predict labels in clustering algorithms.
model.transform() : given an unsupervised model, transform new data into the new basis.
This also accepts one argument X_new, and returns the new representation of the data based
on the unsupervised model.
model.fit_transform() : some estimators implement this method,
which more efficiently performs a fit and a transform on the same input data.
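A compact illustration of the shared interface described above, using the iris data that is already loaded (purely illustrative):
from sklearn.linear_model import LogisticRegression

demo_model = LogisticRegression()
demo_model.fit(iris.data, iris.target)           # supervised: data + labels
print(demo_model.predict(iris.data[:3]))         # predicted labels
print(demo_model.predict_proba(iris.data[:3]))   # per-class probabilities
print(demo_model.score(iris.data, iris.target))  # mean accuracy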
Model Validation
An important piece of machine learning is model validation: that is, determining how well your model will generalize from the training data to future unlabeled data. Let's look at an example using the nearest neighbor classifier. This is a very simple classifier: it simply stores all training data, and for any unknown quantity, simply returns the label of the closest training point.
With the iris data, it very easily returns the correct prediction for each of the input points:
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y, y_pred))
Explanation: A more useful way to look at the results is to view the confusion matrix, or the matrix showing the frequency of inputs and outputs:
End of explanation
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=23)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
print(confusion_matrix(ytest, ypred))
Xtest.shape[0] / len(X)
Explanation: For each class, all 50 training samples are correctly identified. But this does not mean that our model is perfect! In particular, such a model generalizes extremely poorly to new data. We can simulate this by splitting our data into a training set and a testing set. Scikit-learn contains some convenient routines to do this:
End of explanation
from IPython.display import Image
Image("http://scikit-learn.org/dev/_static/ml_map.png")
Explanation: This paints a better picture of the true performance of our classifier: apparently there is some confusion between the second and third species, which we might anticipate given what we've seen of the data above.
This is why it's extremely important to use a train/test split when evaluating your models. We'll go into more depth on model evaluation later in this tutorial.
Flow Chart: How to Choose your Estimator
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
End of explanation
from sklearn import datasets
digits = datasets.load_digits()
digits.images.shape
Explanation: Original source on the scikit-learn website
Quick Application: Optical Character Recognition
To demonstrate the above principles on a more interesting problem, let's consider OCR (Optical Character Recognition) – that is, recognizing hand-written digits.
In the wild, this problem involves both locating and identifying characters in an image. Here we'll take a shortcut and use scikit-learn's set of pre-formatted digits, which is built-in to the library.
Loading and visualizing the digits data
We'll use scikit-learn's data access interface and take a look at this data:
End of explanation
fig, axes = plt.subplots(10, 10, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(digits.target[i]),
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
Explanation: Let's plot a few of these:
End of explanation
# The images themselves
print(digits.images.shape)
print(digits.images[0])
# The data for use in our algorithms
print(digits.data.shape)
print(digits.data[0])
# The target label
print(digits.target)
Explanation: Here the data is simply each pixel value within an 8x8 grid:
End of explanation
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
data_projected = iso.fit_transform(digits.data)
data_projected.shape
plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('nipy_spectral', 10));
plt.colorbar(label='digit label', ticks=range(10))
plt.clim(-0.5, 9.5)
Explanation: So our data have 1797 samples in 64 dimensions.
Unsupervised Learning: Dimensionality Reduction
We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to plot points in 64 dimensions!
Instead we'll reduce the dimensions to 2, using an unsupervised method.
Here, we'll make use of a manifold learning algorithm called Isomap, and transform the data to two dimensions.
End of explanation
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target,
random_state=2)
print(Xtrain.shape, Xtest.shape)
Explanation: We see here that the digits are fairly well-separated in the parameter space; this tells us that a supervised classification algorithm should perform fairly well. Let's give it a try.
Classification on Digits
Let's try a classification task on the digits. The first thing we'll want to do is split the digits into a training and testing sample:
End of explanation
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(penalty='l2')
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
Explanation: Let's use a simple logistic regression which (despite its confusing name) is a classification algorithm:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(ytest, ypred)
Explanation: We can check our classification accuracy by comparing the true values of the test set to the predictions:
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, ypred))
plt.imshow(np.log(confusion_matrix(ytest, ypred)),
cmap='Blues', interpolation='nearest')
plt.grid(False)
plt.ylabel('true')
plt.xlabel('predicted');
Explanation: This single number doesn't tell us where we've gone wrong: one nice way to do this is to use the confusion matrix
End of explanation
fig, axes = plt.subplots(10, 10, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
ax.imshow(Xtest[i].reshape(8, 8), cmap='binary')
ax.text(0.05, 0.05, str(ypred[i]),
transform=ax.transAxes,
color='green' if (ytest[i] == ypred[i]) else 'red')
ax.set_xticks([])
ax.set_yticks([])
Explanation: We might also take a look at some of the outputs along with their predicted labels. We'll make the bad labels red:
End of explanation |
4,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Basics with Sklearn
First some imports for the notebook and visualization.
Step1: Choosing a dataset
First of all you need a dataset to work on. To keep things simple we will use the iris dataset provided with scikit.
Step2: Splitting the dataset
The dataset needs to be split into a training and test dataset.
This is done to first train our model and then test how good it is on data it has never seen before.
Step3: Decision Tree Classifier
We will use a simple decision tree as the first model and train it.
Step4: Visualize the decision tree
Since the decision tree is a simple graph, we can visualize it quite easily.
Step5: Evaluating the model
After the model is trained it has to be evaluated.
Step6: KNN-Classifier
Let's try another classifier. Initialize, train, print error.
Step7: Implementing your own KNN
Let's implement a simple knn classifier with k=1.
We have to implement the fit method and the predict method.
Then initialize, train and print error. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Machine Learning Basics with Sklearn
First some imports for the notebook and visualization.
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
Explanation: Choosing a dataset
First of all you need a dataset to work on. To keep things simple we will use the iris dataset provided with scikit.
End of explanation
test_idx = [0, 50, 100]
train_y = np.delete(iris.target, test_idx)
train_X = np.delete(iris.data, test_idx, axis=0)
test_y = iris.target[test_idx]
test_X = iris.data[test_idx]
Explanation: Splitting the dataset
The dataset needs to be split into a training and test dataset.
This is done to first train our model and then test how good it is on data it has never seen before.
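The cell below holds out three fixed rows; as an aside, a random split with scikit-learn's helper is more common in practice (a sketch, with new variable names so nothing else is affected):
# In newer scikit-learn this helper lives in sklearn.model_selection instead.
from sklearn.cross_validation import train_test_split

train_X2, test_X2, train_y2, test_y2 = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)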
End of explanation
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(train_X,train_y)
Explanation: Decision Tree Classifier
We will use a simple decision tree as the first model and train it.
End of explanation
from sklearn.externals.six import StringIO
import pydot
import matplotlib.image as mpimg
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data, feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True, impurity=False)
pydot_graph = pydot.graph_from_dot_data(dot_data.getvalue())
png_str = pydot_graph.create_png(prog='dot')
# treat the dot output string as an image file
sio = StringIO()
sio.write(png_str)
sio.seek(0)
img = mpimg.imread(sio)
# plot the image
f, axes = plt.subplots(1, 1, figsize=(12,12))
imgplot = axes.imshow(img, aspect='equal')
plt.show()
Explanation: Visualize the decision tree
Since the decision tree is a simple graph, we can visualize it quite easily.
End of explanation
from sklearn.metrics import accuracy_score
print(accuracy_score(test_y, clf.predict(test_X)))
Explanation: Evaluating the model
After the model is trained it has to be evaluated.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier()
clf = clf.fit(train_X,train_y)
print(accuracy_score(test_y, clf.predict(test_X)))
Explanation: KNN-Classifier
Let's try another classifier. Initialize, train, print error.
End of explanation
from scipy.spatial import distance
class ScrappyKNN(object):
def fit(self, X_train, y_train):
self.X_train = X_train
self.y_train = y_train
return self
def predict(self, X_test):
predictions = []
for row in X_test:
label = self.closest(row)
predictions.append(label)
return predictions
def closest(self, row):
best_dist = distance.euclidean(row, self.X_train[0])
best_index = 0
for i in range(1, len(self.X_train)):
dist = distance.euclidean(row, self.X_train[i])
if dist < best_dist:
best_dist = dist
best_index = i
return self.y_train[best_index]
clf = ScrappyKNN()
clf = clf.fit(train_X,train_y)
print(accuracy_score(test_y, clf.predict(test_X)))
Explanation: Implementing your own KNN
Let's implement a simple knn classifier with k=1.
We have to implement the fit method and the predict method.
Then initialize, train and print error.
End of explanation |
4,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Text Data and Naive Bayes in scikit-learn
Agenda
Working with text data
Representing text as data
Reading SMS data
Vectorizing SMS data
Examining the tokens and their counts
Bonus
Step1: From the scikit-learn documentation
Step2: Summary
Step3: Part 3
Step4: Part 4
Step5: Bonus
Step6: Part 5
Step7: Part 6 | Python Code:
from sklearn.feature_extraction.text import CountVectorizer
# start with a simple example
simple_train = ['call you tonight', 'Call me a cab', 'please call me... PLEASE!', 'help']
# learn the 'vocabulary' of the training data
vect = CountVectorizer()
vect.fit(simple_train)
# vect.get_feature_names()
vect.vocabulary_
# transform training data into a 'document-term matrix'
simple_train_dtm = vect.transform(simple_train)
simple_train_dtm
# print the sparse matrix
print(simple_train_dtm)
# convert sparse matrix to a dense matrix
simple_train_dtm.toarray()
# examine the vocabulary and document-term matrix together
import pandas as pd
pd.DataFrame(simple_train_dtm.toarray(), columns=vect.get_feature_names())
# create a document-term matrix on your own
simple_train = ["call call Sorry, Ill later",
"K Did you me call ah just now",
"I call you later, don't have network. If urgnt, sms me"]
#complete your work below
# instantiate vectorizer
# fit
# transform
# convert to dense matrix
vec2 = CountVectorizer(binary=True)
vec2.fit(simple_train)
my_dtm2 = vec2.transform(simple_train)
pd.DataFrame(my_dtm2.toarray(), columns=vec2.get_feature_names())
Explanation: Working with Text Data and Naive Bayes in scikit-learn
Agenda
Working with text data
Representing text as data
Reading SMS data
Vectorizing SMS data
Examining the tokens and their counts
Bonus: Calculating the "spamminess" of each token
Naive Bayes classification
Building a Naive Bayes model
Comparing Naive Bayes with logistic regression
Part 1: Representing text as data
From the scikit-learn documentation:
Text Analysis is a major application field for machine learning algorithms. However the raw data, a sequence of symbols cannot be fed directly to the algorithms themselves as most of them expect numerical feature vectors with a fixed size rather than the raw text documents with variable length.
We will use CountVectorizer to "convert text into a matrix of token counts":
End of explanation
vect.get_feature_names()
# transform testing data into a document-term matrix (using existing vocabulary)
simple_test = ["please don't call me devon"]
simple_test_dtm = vect.transform(simple_test)
simple_test_dtm.toarray()
# examine the vocabulary and document-term matrix together
pd.DataFrame(simple_test_dtm.toarray(), columns=vect.get_feature_names())
Explanation: From the scikit-learn documentation:
In this scheme, features and samples are defined as follows:
Each individual token occurrence frequency (normalized or not) is treated as a feature.
The vector of all the token frequencies for a given document is considered a multivariate sample.
A corpus of documents can thus be represented by a matrix with one row per document and one column per token (e.g. word) occurring in the corpus.
We call vectorization the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the Bag of Words or "Bag of n-grams" representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.
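As a short aside on the "Bag of n-grams" wording above, the same vectorizer can also count multi-word tokens via ngram_range (illustrative, not part of the original notebook):
# Unigrams plus bigrams: the vocabulary now contains entries such as 'call you'.
vect_ngram = CountVectorizer(ngram_range=(1, 2))
vect_ngram.fit(simple_train)
print(vect_ngram.get_feature_names())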
End of explanation
# read tab-separated file
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/sms.tsv'
col_names = ['label', 'message']
sms = pd.read_table(url, sep='\t', header=None, names=col_names)
print(sms.shape)
sms.head(5)
sms.label.value_counts()
# convert label to a numeric variable
sms['label'] = sms.label.map({'ham':0, 'spam':1})
# define X and y
X = sms.message
y = sms.label
# split into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
Explanation: Summary:
vect.fit(train) learns the vocabulary of the training data
vect.transform(train) uses the fitted vocabulary to build a document-term matrix from the training data
vect.transform(test) uses the fitted vocabulary to build a document-term matrix from the testing data (and ignores tokens it hasn't seen before)
Part 2: Reading SMS data
End of explanation
# instantiate the vectorizer
vect = CountVectorizer()
# learn training data vocabulary, then create document-term matrix
vect.fit(X_train)
X_train_dtm = vect.transform(X_train)
X_train_dtm
# alternative: combine fit and transform into a single step
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm
# transform testing data (using fitted vocabulary) into a document-term matrix
X_test_dtm = vect.transform(X_test)
X_test_dtm
Explanation: Part 3: Vectorizing SMS data
End of explanation
# store token names
X_train_tokens = vect.get_feature_names()
# first 50 tokens
print(X_train_tokens[:50])
# last 50 tokens
print(X_train_tokens[-50:])
# view X_train_dtm as a dense matrix
X_train_dtm.toarray()
# count how many times EACH token appears across ALL messages in X_train_dtm
import numpy as np
X_train_counts = np.sum(X_train_dtm.toarray(), axis=0)
X_train_counts
# create a DataFrame of tokens with their counts
pd.DataFrame({'token':X_train_tokens, 'count':X_train_counts}).sort_values(by='count', ascending=True)
Explanation: Part 4: Examining the tokens and their counts
End of explanation
# create separate DataFrames for ham and spam
sms_ham = sms[sms.label==0] # ham
sms_spam = sms[sms.label==1] # spam
# learn the vocabulary of ALL messages and save it
vect.fit(sms.message)
all_tokens = vect.get_feature_names()
# create document-term matrices for ham and spam
ham_dtm = vect.transform(sms_ham.message)
spam_dtm = vect.transform(sms_spam.message)
ham_dtm.shape, spam_dtm.shape
# count how many times EACH token appears across ALL ham messages
ham_counts = np.sum(ham_dtm.toarray(), axis=0)
ham_counts
# count how many times EACH token appears across ALL spam messages
spam_counts = np.sum(spam_dtm.toarray(), axis=0)
spam_counts
all_tokens[0:5]
# create a DataFrame of tokens with their separate ham and spam counts
token_counts = pd.DataFrame({'token':all_tokens, 'ham':ham_counts, 'spam':spam_counts})
token_counts
# add one to ham and spam counts to avoid dividing by zero (in the step that follows)
token_counts['ham'] = token_counts.ham + 1
token_counts['spam'] = token_counts.spam + 1
# calculate ratio of spam-to-ham for each token
token_counts['spam_ratio'] = token_counts.spam / token_counts.ham
token_counts.sort_values(by='spam_ratio', ascending=False)
#observe spam messages that contain the word 'claim'
claim_messages = sms.message[sms.message.str.contains('claim')]
for message in claim_messages[0:5]:
print(message, '\n')
Explanation: Bonus: Calculating the "spamminess" of each token
End of explanation
# train a Naive Bayes model using X_train_dtm
from sklearn.naive_bayes import MultinomialNB, GaussianNB
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
# make class predictions for X_test_dtm
y_pred_class = nb.predict(X_test_dtm)
# calculate accuracy of class predictions
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
print(metrics.classification_report(y_test, y_pred_class))
metrics.confusion_matrix(y_test, y_pred_class)
?metrics.confusion_matrix
# confusion matrix
print(metrics.confusion_matrix(y_test, y_pred_class))
# predict (poorly calibrated) probabilities
y_pred_prob = nb.predict_proba(X_test_dtm)[:, 1]
y_pred_prob
# calculate AUC
print(metrics.roc_auc_score(y_test, y_pred_prob))
# print message text for the false positives
X_test[y_test < y_pred_class]
# print message text for the false negatives
X_test[y_test > y_pred_class]
# what do you notice about the false negatives?
# X_test[3132]
Explanation: Part 5: Building a Naive Bayes model
We will use Multinomial Naive Bayes:
The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
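Since the quoted documentation notes that fractional tf-idf counts may also work, here is a sketch of that variant as an aside (not the notebook's main path):
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer()
X_train_tfidf = tfidf.fit_transform(X_train)
X_test_tfidf = tfidf.transform(X_test)
nb_tfidf = MultinomialNB().fit(X_train_tfidf, y_train)
print(metrics.accuracy_score(y_test, nb_tfidf.predict(X_test_tfidf)))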
End of explanation
# Create a logistic regression
# import/instantiate/fit
# class predictions and predicted probabilities
# calculate accuracy and AUC
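One way the exercise comments above could be filled in (a sketch that reuses the existing document-term matrices; not an official solution):
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
logreg.fit(X_train_dtm, y_train)
y_pred_class_lr = logreg.predict(X_test_dtm)
y_pred_prob_lr = logreg.predict_proba(X_test_dtm)[:, 1]
print(metrics.accuracy_score(y_test, y_pred_class_lr))
print(metrics.roc_auc_score(y_test, y_pred_prob_lr))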
Explanation: Part 6: Comparing Naive Bayes with logistic regression
End of explanation |
4,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fundamentals of audio and music analysis
Open source libraries
Python
librosa (ISC / MIT licensed)
pyaudio (MIT licensed)
portaudio
Prepare sound for analysis
NOTE
Step1: [Optional] Recording sound
Step2: Importing sound file | Python Code:
import pyaudio
import wave
Explanation: Fundamentals of audio and music analysis
Open source libraries
Python
librosa (ISC / MIT licensed)
pyaudio (MIT licensed)
portaudio
Prepare sound for analysis
NOTE: Either record your own voice or import a sample from file
End of explanation
# In this step, find out the device id to be used in recording
# Chose the device with input type and microphone in its name
p = pyaudio.PyAudio()
info = p.get_host_api_info_by_index(0)
nb_devices = info.get('deviceCount')
# List all devices ids and names
for i in range (0, nb_devices):
if p.get_device_info_by_host_api_device_index(0,i).get('maxInputChannels') > 0:
print "Id:[%d]\tType:[Input]\tName:[%s] " % (i, p.get_device_info_by_host_api_device_index(0,i).get('name'))
if p.get_device_info_by_host_api_device_index(0,i).get('maxOutputChannels') > 0:
print "Id:[%d]\tType:[Output]\tName:[%s] " % (i, p.get_device_info_by_host_api_device_index(0,i).get('name'))
# Set the params for recorder
INPUT_DEVICE_ID = 0
CHUNK = 1024 # how many samples in a frame that stream will read
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100 # Sample rate
RECORD_SECONDS = 2
WAVE_OUTPUT_FILENAME = "recorded_audio.wav"
# Check if the params supported by your hardware
p = pyaudio.PyAudio()
devinfo = p.get_device_info_by_index(INPUT_DEVICE_ID)
if not p.is_format_supported(float(RATE),
input_device=INPUT_DEVICE_ID,
input_channels=CHANNELS,
input_format=FORMAT):
print "Parameters not supported, please try different values"
p.terminate()
# Record the audio and save it as a wave file
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
input_device_index=INPUT_DEVICE_ID,
frames_per_buffer=CHUNK)
print("* recording")
frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
frames.append(data)
print("* done recording")
stream.stop_stream()
stream.close()
p.terminate()
wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()
Explanation: [Optional] Recording sound
End of explanation
SOUND_PATH = "recorded_audio.wav"
Explanation: Importing sound file
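As a possible next step (librosa is listed among the libraries at the top of this notebook but is not used in the cells above), the file could be loaded for analysis roughly like this:
import librosa

# sr=None keeps the file's original sample rate instead of resampling to 22050 Hz
audio_samples, sample_rate = librosa.load(SOUND_PATH, sr=None)
print("Loaded %d samples at %d Hz (%.2f s)" % (len(audio_samples), sample_rate,
                                               len(audio_samples) / float(sample_rate)))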
End of explanation |
4,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will get acquainted with the main data preprocessing techniques and apply them to train a logistic regression model. The answers have to be uploaded to the corresponding form as 6 text files.
Completing the assignment requires Python 2.7 as well as up-to-date versions of the libraries
Step1: Description of the dataset
The task
Step2: Separate the target variable Grant.Status from the dataset and denote it by y
Now X denotes the training sample and y the answers for it
Step3: Some theory on logistic regression
Once it is clear exactly which problem has to be solved on this data, the next step in a real analysis would be choosing a suitable method. In this assignment the method has already been chosen for you: logistic regression. Let us briefly recall the model being used.
Logistic regression predicts the probabilities that an object belongs to each class. For a single object, the outputs of logistic regression over all classes sum to one.
$$ \sum_{k=1}^K \pi_{ik} = 1, \quad \pi_k \equiv P\,(y_i = k \mid x_i, \theta), $$
where
Step4: Clearly the dataset contains both numeric and categorical features. Let us get the lists of their names
Step5: It also contains missing values. An obvious solution would be to drop all rows with at least one missing value. Let us do that
Step6: As you can see, that would throw away almost all the data, so this approach will not work here.
Missing values can also be imputed; there are several ways to do this, and they differ for categorical and real-valued features.
For real-valued features
Step7: Transforming categorical features.
In the previous cell we split our dataset into two more parts
Step8: As you can see, the first three columns encode the information about the country and the next two encode the gender. For identical elements of the sample, the rows coincide completely. The example also shows that encoding the features greatly increases their number, but fully preserves the information, including the presence of missing values (a missing value simply becomes one of the binary features in the transformed data).
Now let us apply one-hot encoding to the categorical features of the original dataset. Note the interface that is common to all preprocessing methods. The function
encoder.fit_transform(X)
computes the required parameters of the transformation; afterwards the function
encoder.transform(X)
can be applied to new data.
It is very important to apply exactly the same transformation to both the training and the test data, because otherwise you will get unpredictable and, most likely, poor results. In particular, if you encode the training and test samples separately, you will in general get different codes for the same features, and your solution will not work.
Also, the parameters of many transformations (for example, the scaling considered below) must not be computed on the training and test data together, because otherwise the quality metrics computed on the test set will give biased estimates of the algorithm's performance. Categorical feature encoding does not estimate any parameters on the training sample, so it can be applied to the whole dataset at once.
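To make the fit/transform contract described above concrete, here is a small illustrative sketch with toy arrays (the variable names are made up for the example and are not part of the assignment):
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr_toy = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])  # toy "train"
X_te_toy = np.array([[2.5, 500.0]])                              # toy "test"

scaler_toy = StandardScaler()
X_tr_scaled = scaler_toy.fit_transform(X_tr_toy)  # parameters (mean, std) estimated on train only
X_te_scaled = scaler_toy.transform(X_te_toy)      # the same transformation applied to test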
Step9: To compute a quality metric for the result of training, the original dataset has to be split into training and test samples.
Note the fixed parameter passed to the random number generator
Step10: Description of the classes
So, we have obtained the first data sets that satisfy both requirements of logistic regression on the input data. Let us train a regression on them, using the functionality for model hyperparameter search available in the sklearn library
optimizer = GridSearchCV(estimator, param_grid)
where
Step11: Scaling the real-valued features.
Let us try to improve the quality of the classification. To do so, let us look at the data itself
Step12: As the plots show, different features differ very strongly from each other in the magnitude of their values (note the ranges of the x and y axes). In ordinary regression this does not affect the quality of the trained model at all, because the features with smaller magnitudes simply get larger weights, but with regularization, which penalizes the model for large weights, the regression usually starts to perform worse.
In such cases it is always recommended to standardize (scale) the features so that they differ less from each other in magnitude, while not violating any other properties of the feature space. Even if the final quality of the model on the test set decreases, this improves its interpretability, because the new weights can be read as the "importance" of a given feature for the final classification.
Standardization is carried out by subtracting the mean value from each feature and dividing by the sample standard deviation
Step13: Comparing the feature spaces.
Let us build the same plots for the transformed data
Step14: As the plots show, we have not changed the properties of the feature space
Step24: Class balancing.
Classification algorithms can be very sensitive to imbalanced classes. Consider an example with samples drawn from two Gaussians. Their means and covariance matrices are chosen so that the true separating surface should run parallel to the x axis. Put 20 objects sampled from the first Gaussian and 10 objects from the second into the training sample. Then train a linear regression on them and plot the objects and the classification regions.
Step25: As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. So you should always pay attention to how balanced the classes in the training sample are.
Let us check whether the classes in our training sample are balanced
Step26: Clearly they are not.
The situation can be fixed in different ways; we will consider two of them
Step27: Stratification of the samples.
Consider once more the example with samples from normal distributions. Let us look again at the quality of the classifiers obtained on the test samples
Step30: How well do these numbers really reflect the quality of the algorithm, given that the test sample is just as imbalanced as the training one? We already know that the logistic regression algorithm is sensitive to the class balance in the training sample, i.e. in this case it will give deliberately understated results on the test set. The test metric of the classifier would make much more sense if the objects were split evenly between the samples
Step31: As you can see, after this procedure the classifier's answer changed only slightly, while the quality increased. Depending on how you originally split the data into training and test, after a balanced split of the samples the final test metric may either increase or decrease, but it can be trusted much more, because it is built with the specifics of how the classifier works taken into account. This approach is a special case of the so-called stratification method.
Assignment 4. Stratification of the sample.
By analogy with what was done at the beginning of the assignment, split the samples X_real_zeros and X_cat_oh into training and test, passing to the function
train_test_split(...)
the additional parameter
stratify=y
Also be sure to pass the variable random_state=0 to the function.
Scale the new real-valued samples, then train the classifier and tune its hyperparameters using cross-validation, correcting for the imbalanced classes with class weights. Make sure that you have found the accuracy optimum over the hyperparameters.
Evaluate the quality of the classifier with the AUC ROC metric on the test sample.
Pass the obtained answer to the function write_answer_4
Step35: Now you have worked through the main stages of data preprocessing for linear classifiers.
Let us recall the main stages
Step36: As you can see, this data transformation method already makes it possible to build non-linear separating surfaces, which can adapt to the data more finely and capture more complex dependencies. The number of features in the new model
Step37: At the same time, however, this method makes the model much more prone to overfitting because of the rapid growth of the number of features as the degree $p$ increases. Consider an example with $p=11$
Step38: The number of features in this model
Step39: Assignment 5. Transformation of the real-valued features.
By analogy with the example, implement the transformation of the model's real-valued features using polynomial features of degree 2
Build a logistic regression on the new data, selecting the optimal hyperparameters at the same time. Note that the transformed features already contain a column whose values are all equal to 1, so there is no need to additionally fit the value $b$; its role is played by one of the weights $w$. Because of this, to avoid linear dependence in the dataset, the parameter fit_intercept=False has to be passed when calling the logistic regression class. For training, use the stratified samples with class balancing via weights; the transformed features have to be scaled anew.
Get the AUC ROC on the test set and compare this result with the one obtained with the ordinary features.
Pass the obtained answer to the function write_answer_5.
Lasso regression.
L1-regularization (Lasso), which leads to feature selection, can also be applied to logistic regression instead of L2-regularization. You are asked to apply L1-regularization to the original features and interpret the obtained results (feature selection can also be successfully applied to the polynomial features, but there the interpretation component is missing, because the meaning of the original features is known, while that of the polynomial ones can already be rather non-trivial). To call logistic regression with L1-regularization it is enough to pass the parameter penalty='l1' when initializing the class.
Assignment 6. Feature selection with Lasso regression.
Train a Lasso regression on the stratified, scaled samples, using class balancing via weights.
Get the ROC AUC of the regression and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Pass their list to the function write_answer_6. | Python Code:
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
Explanation: Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will get acquainted with the main data preprocessing techniques and apply them to train a logistic regression model. The answer has to be uploaded to the corresponding form as 6 text files.
The assignment requires Python 2.7 and up-to-date versions of the libraries:
- NumPy: 1.10.4 or higher
- Pandas: 0.17.1 or higher
- Scikit-learn: 0.17 or higher
End of explanation
data = pd.read_csv('data.csv')
print data.shape
Explanation: Dataset description
Task: using 38 features related to a grant application (the researchers' field of study, information about their academic background, the grant size, the area in which it is awarded), predict whether the application will be accepted. The dataset contains information on 6000 grant applications submitted at the University of Melbourne between 2004 and 2008.
The full version of the data with more features can be found at https://www.kaggle.com/c/unimelb.
End of explanation
X = data.drop('Grant.Status', 1)
y = data['Grant.Status']
print y.shape
Explanation: Let's extract the target variable Grant.Status from the dataset and denote it by y
Now X denotes the training sample and y the labels on it
End of explanation
data.head()
Explanation: Logistic regression theory
Once you understand exactly which problem needs to be solved on these data, the next step in a real analysis would be choosing a suitable method. In this assignment the method has been chosen for you: logistic regression. Let us briefly recall the model being used.
Logistic regression predicts the probabilities that an object belongs to each class. The sum of the logistic regression outputs over all classes for a single object equals one.
$$ \sum_{k=1}^K \pi_{ik} = 1, \quad \pi_{ik} \equiv P\,(y_i = k \mid x_i, \theta), $$
where:
- $\pi_{ik}$ is the probability that object $x_i$ from sample $X$ belongs to class $k$
- $\theta$ are the internal parameters of the algorithm tuned during training; for logistic regression these are $w, b$
Because of this property of the model, in the binary case it is enough to compute the probability that an object belongs to one of the classes (the other follows from the normalization of the probabilities). This probability is computed using the logistic function:
$$ P\,(y_i = 1 \mid x_i, \theta) = \frac{1}{1 + \exp(-w^T x_i-b)} $$
The parameters $w$ and $b$ are found as solutions of the following optimization problem (the functionals with L1 and L2 regularization, which you met in the previous assignments, are given):
L2-regularization:
$$ Q(X, y, \theta) = \frac{1}{2} w^T w + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
L1-regularization:
$$ Q(X, y, \theta) = \sum_{d=1}^D |w_d| + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
$C$ is the standard hyperparameter of the model that controls how strongly we allow the model to fit the data.
Data preprocessing
From the properties of this model it follows that:
- all of $X$ must be numeric data (if there are categorical features among them, they must be converted to real numbers in some way)
- $X$ must not contain missing values (i.e. all missing values have to be filled in somehow before applying the model)
Therefore the basic preprocessing steps for any dataset used with logistic regression are encoding the categorical features and removing or interpreting the missing values (if either is present).
End of explanation
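As a quick numeric illustration of the logistic function above, here is a minimal sketch; the weights, bias and object are invented purely for illustration:
import numpy as np
w = np.array([0.4, -1.2, 0.05])   # hypothetical weight vector
b = 0.3                           # hypothetical bias
x_i = np.array([1.0, 0.5, 10.0])  # hypothetical object
p_class_1 = 1.0 / (1.0 + np.exp(-(np.dot(w, x_i) + b)))
p_class_0 = 1.0 - p_class_1       # the two class probabilities sum to one
print("P(y=1) = %.3f, P(y=0) = %.3f" % (p_class_1, p_class_0))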
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
categorical_cols
Explanation: As you can see, the dataset contains both numeric and categorical features. Let's get the lists of their names:
End of explanation
data.dropna().shape
Explanation: It also contains missing values. An obvious solution would be to drop all rows with at least one missing value. Let's do that:
End of explanation
def calculate_means(numeric_data):
means = np.zeros(numeric_data.shape[1])
for j in range(numeric_data.shape[1]):
to_sum = numeric_data.iloc[:,j]
indices = np.nonzero(~numeric_data.iloc[:,j].isnull())[0]
correction = np.amax(to_sum[indices])
to_sum /= correction
for i in indices:
means[j] += to_sum[i]
means[j] /= indices.size
means[j] *= correction
return pd.Series(means, numeric_data.columns)
#numerical X, NA filled with zeros
X_real_zeros = X[numeric_cols].fillna(0)
print X_real_zeros.shape
X_real_zeros.head()
#numerical X, NA filled with mean columns values
X_real_mean = X[numeric_cols].fillna(value=calculate_means(X[numeric_cols]))
print X_real_mean.shape
X_real_mean.sample(5)
#categorical X, NA filled with 'NA'
X_cat = X[categorical_cols].fillna('NA').astype(str)
print X_cat.shape
X_cat.sample(5)
Explanation: Clearly this would throw away almost all the data, so this approach will not work here.
Missing values can also be interpreted; there are several ways to do this, and they differ for categorical and real-valued features.
For real-valued features:
- replace with 0 (the feature will then contribute nothing to the prediction for that object)
- replace with the mean (each missing feature will then contribute the same as the mean value of that feature over the dataset)
For categorical features:
- interpret a missing value as one more category (this is the most natural approach, because with categories we have the unique opportunity not to lose the information that values were missing; note that for real-valued features this information is inevitably lost)
Task 0. Handling missing values.
Fill the missing real values in X with zeros and with column means, and call the resulting dataframes X_real_zeros and X_real_mean respectively. To compute the means, use the calculate_means function described below, which takes the real-valued features of the original dataframe as input.
Convert all categorical features in X to strings; missing values must also be converted to some string that is not a category (for example, 'NA'). Call the resulting dataframe X_cat.
To combine samples here and throughout the assignment, it is recommended to use the functions
np.hstack(...)
np.vstack(...)
End of explanation
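A tiny toy example of the three imputation strategies discussed above (the frame below is made up only to show the calls):
import numpy as np
import pandas as pd
toy = pd.DataFrame({'num': [1.0, np.nan, 3.0], 'cat': ['a', None, 'b']})
print(toy['num'].fillna(0))                  # real-valued feature: fill with zero
print(toy['num'].fillna(toy['num'].mean()))  # real-valued feature: fill with the column mean
print(toy['cat'].fillna('NA'))               # categorical feature: 'NA' becomes one more category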
from sklearn.linear_model import LogisticRegression as LR
from sklearn.feature_extraction import DictVectorizer as DV
categorial_data = pd.DataFrame({'sex': ['male', 'female', 'male', 'female'],
'nationality': ['American', 'European', 'Asian', 'European']})
print('Исходные данные:\n')
print(categorial_data)
encoder = DV(sparse = False)
encoded_data = encoder.fit_transform(categorial_data.T.to_dict().values())
print('\nЗакодированные данные:\n')
print(encoded_data)
Explanation: Transforming the categorical features.
In the previous cell we split our dataset into two more parts: one contains only the real-valued features, the other only the categorical ones. We will need this for separate subsequent processing of these data, as well as for comparing the quality of different methods.
To use the regression model we need to convert the categorical features into real-valued ones. Let's look at the main way of doing this: one-hot encoding. The idea is to encode a categorical feature with a binary code: each category is assigned its own pattern of zeros and ones.
Let's see how this method works on a simple dataset.
End of explanation
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
print X_cat_oh.shape
Explanation: As you can see, the first three columns encode the nationality and the last two encode the sex. For identical elements of the sample the rows coincide completely. The example also shows that encoding the features greatly increases their number, but fully preserves the information, including the presence of missing values (their presence simply becomes one of the binary features in the transformed data).
Now let's apply one-hot encoding to the categorical features of the original dataset. Note the interface common to all preprocessing methods. The function
encoder.fit_transform(X)
computes the parameters needed for the transformation; afterwards, new data can be processed with the function
encoder.transform(X)
It is very important to apply exactly the same transformation to both the training and the test data, otherwise you will get unpredictable and most likely poor results. In particular, if you encode the training and test sets separately, you will generally get different codes for the same features, and your solution will not work.
Also, the parameters of many transformations (for example the scaling considered below) must not be computed on the training and test data together, otherwise the quality metrics computed on the test set will give biased estimates of the algorithm's performance. Encoding the categorical features does not estimate any parameters on the training set, so it can be applied to the whole dataset at once.
End of explanation
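The general pattern described above — estimate the transformation on the training part only and then reuse it on the test part — looks like this (a sketch on a made-up array; the same pattern applies to DictVectorizer, StandardScaler and the other preprocessors used below):
import numpy as np
from sklearn.preprocessing import StandardScaler
train_part = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 200.0]])
test_part = np.array([[1.5, 250.0]])
scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_part)  # parameters are estimated on the training part only
test_scaled = scaler.transform(test_part)        # the same parameters are reused on the test part
print(test_scaled)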
from sklearn.model_selection import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
print y_train.shape
print y_test.shape
print X_train_real_mean.shape
print X_test_real_mean.shape
print X_train_cat_oh.shape
print X_test_cat_oh.shape
Explanation: To build a quality metric from the training result, the original dataset has to be split into training and test sets.
Note the fixed parameter of the random number generator: random_state. Since the training and test results depend on exactly how you split the objects, a predefined value is used so that the results agree with the answers in the grading system.
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores = [[item[0]['C'],
item[1],
(np.sum((item[2]-item[1])**2)/(item[2].size-1))**0.5] for item in optimizer.grid_scores_]
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
auc = (auc_1 + auc_2)/2
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(str(auc))
#stacking numerical and categorical features
X_train_zeros = np.hstack( (X_train_real_zeros, X_train_cat_oh) )
X_train_mean = np.hstack( (X_train_real_mean, X_train_cat_oh) )
X_test_zeros = np.hstack( (X_test_real_zeros, X_test_cat_oh) )
X_test_mean = np.hstack( (X_test_real_mean, X_test_cat_oh) )
#GridSearchCV parameters
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
estimator = LogisticRegression()
%%time
#GridSearchCV with zero fillna
optimizer_zeros = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_zeros.fit(X_train_zeros, y_train)
print optimizer_zeros
%%time
#GridSearchCV with mean fillna
optimizer_mean = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_mean.fit(X_train_mean, y_train)
plot_scores(optimizer_zeros)
plot_scores(optimizer_mean)
#GridSearchCV with zero fillna
print 'Best parameter for GridSearchCV with zero fillna', optimizer_zeros.best_params_
roc_auc_score_zeros = roc_auc_score(y_test, optimizer_zeros.best_estimator_.predict_proba(X_test_zeros)[:, 1])
print 'roc_auc_score_zeros', roc_auc_score_zeros
#GridSearchCV with mean fillna
print 'Best parameter for GridSearchCV with mean fillna', optimizer_mean.best_params_
roc_auc_score_mean = roc_auc_score(y_test, optimizer_mean.best_estimator_.predict_proba(X_test_mean)[:, 1])
print 'roc_auc_score_mean', roc_auc_score_mean
write_answer_1(roc_auc_score_zeros, roc_auc_score_mean)
Explanation: Class descriptions
So we have obtained the first datasets that satisfy both of the logistic regression constraints on the input data. Let's train a regression on them using the model hyperparameter search functionality available in sklearn
optimizer = GridSearchCV(estimator, param_grid)
where:
- estimator is the learning algorithm whose parameters will be tuned
- param_grid is a dictionary of parameters whose keys are the name strings passed to the estimator and whose values are the sets of parameters to try
This class cross-validates the training set for every combination of parameters and finds the one on which the algorithm works best. This method allows hyperparameters to be tuned on the training set while avoiding overfitting. Some optional parameters of this class that we will need:
- scoring is the quality functional maximized by cross-validation; by default the score() function of the estimator class is used
- n_jobs speeds up cross-validation by running it in parallel; the number sets how many jobs run at once
- cv is the number of folds into which the sample is split during cross-validation
After initializing the GridSearchCV class, the parameter search is started with the following method:
optimizer.fit(X, y)
Afterwards, predictions can be obtained with the function
optimizer.predict(X)
for labels, or
optimizer.predict_proba(X)
for probabilities (when using logistic regression).
You can also directly access the optimal estimator and the optimal parameters, since they are attributes of the GridSearchCV class:
- best_estimator_ is the best algorithm
- best_params_ is the best set of parameters
The logistic regression class looks as follows:
estimator = LogisticRegression(penalty)
where penalty takes either the value 'l2' or 'l1'. The default is 'l2', and everywhere in this assignment, unless stated otherwise, logistic regression with L2 regularization is assumed.
Task 1. Comparing the ways of filling missing real values.
Build two training sets from the real-valued and categorical features: in one, the missing real values are filled with zeros, in the other with means. It is recommended to put the real-valued features first and then the categorical ones.
Train a logistic regression on them, selecting the parameters from the given grid param_grid by cross-validation with cv=3 folds. Use the default function as the one being optimized.
Plot two graphs of the accuracy estimates +- their standard deviation as a function of the hyperparameter and make sure you have really found its maximum. Also note the large variance of the estimates (it can be reduced by increasing the number of folds cv).
Compute the two AUC ROC quality metrics on the test set and compare them. Which way of filling missing real values works better? From now on, use the real-valued sample that gives the better test quality for the rest of the assignment.
Pass the two AUC ROC values (first for the sample filled with means, then for the sample filled with zeros) to the write_answer_1 function and run it. The resulting file is the answer to Task 1.
A note for the curious: strictly speaking, it is not entirely logical to optimize the accuracy functional (the default in the logistic regression class) during cross-validation while measuring AUC ROC on the test set, but this, like the restriction on the sample size, is done to speed up the cross-validation process.
End of explanation
from pandas.plotting import scatter_matrix
data_numeric = pd.DataFrame(X_train_real_zeros, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
Explanation: Scaling the real-valued features.
Let's try to improve the classification quality somehow. To do that, let's look at the data themselves:
End of explanation
from sklearn.preprocessing import StandardScaler
encoder = StandardScaler()
X_train_real_scaled = encoder.fit_transform(X_train_real_zeros)
X_test_real_scaled = encoder.transform(X_test_real_zeros)  # reuse the parameters fitted on the training set
Explanation: As the plots show, different features differ greatly from one another in the magnitude of their values (note the ranges of the x and y axes). For plain regression this does not affect the quality of the trained model, since features with smaller magnitudes simply get larger weights, but with regularization, which penalizes the model for large weights, the regression usually starts to perform worse.
In such cases it is always recommended to standardize (scale) the features so that they differ less from each other in magnitude while all other properties of the feature space are preserved. Even if the final test quality of the model decreases, this improves its interpretability, because the new weights can be read as the "importance" of a given feature for the final classification.
Standardization is performed by subtracting the mean from each feature and dividing by the sample standard deviation:
$$ x^{scaled}_{id} = \dfrac{x_{id} - \mu_d}{\sigma_d}, \quad \mu_d = \frac{1}{N} \sum_{i=1}^N x_{id}, \quad \sigma_d = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_{id} - \mu_d)^2} $$
Task 1.5. Scaling the real-valued features.
By analogy with the one-hot encoder call, apply scaling of the real-valued features to the training and test samples X_train_real_zeros and X_test_real_zeros, using the StandardScaler class
and the methods
StandardScaler.fit_transform(...)
StandardScaler.transform(...)
Save the result in the variables X_train_real_scaled and X_test_real_scaled respectively
End of explanation
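The formula above can be checked by hand against StandardScaler on a small array. Note that numpy's std and StandardScaler both divide by N rather than N-1, so the sketch below uses the same convention:
import numpy as np
from sklearn.preprocessing import StandardScaler
col = np.array([[1.0], [4.0], [7.0], [10.0]])
manual = (col - col.mean()) / col.std()  # np.std uses ddof=0, matching StandardScaler
print(np.allclose(manual, StandardScaler().fit_transform(col)))  # True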
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
Explanation: Comparing the feature spaces.
Let's build the same plots for the transformed data:
End of explanation
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
#stacking numerical and categorical features
X_train_scaled = np.hstack( (X_train_real_scaled, X_train_cat_oh) )
X_test_scaled = np.hstack( (X_test_real_scaled, X_test_cat_oh) )
%%time
#GridSearchCV with zero fillna
optimizer_zeros.fit(X_train_scaled, y_train)
print optimizer_zeros
#GridSearchCV with zero fillna
print 'Best parameter for GridSearchCV with scaled num parameters', optimizer_zeros.best_params_
roc_auc_score_scaled = roc_auc_score(y_test, optimizer_zeros.best_estimator_.predict_proba(X_test_scaled)[:, 1])
print 'roc_auc_score', roc_auc_score_scaled
write_answer_2(roc_auc_score_scaled)
Explanation: As the plots show, we have not changed the properties of the feature space: the histograms of the feature value distributions, as well as their scatter plots, look the same as before normalization, but now all values lie in roughly the same range, which improves the interpretability of the results and fits better with the idea of regularization.
Task 2. Comparing classification quality before and after scaling the real-valued features.
Train the regression and its hyperparameters once more on the new features, combining them with the encoded categorical ones.
Check whether the optimum of accuracy over the hyperparameters was found during cross-validation.
Compute the ROC AUC on the test set and compare it with the best result obtained earlier.
Write the resulting answer to a file using the write_answer_2 function.
End of explanation
np.random.seed(0)
# Sample data from the first Gaussian
data_0 = np.random.multivariate_normal([0,0], [[0.5,0],[0,0.5]], size=40)
# And from the second
data_1 = np.random.multivariate_normal([0,1], [[0.5,0],[0,0.5]], size=40)
# For training take 20 objects from the first class and 10 from the second
example_data_train = np.vstack([data_0[:20,:], data_1[:10,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((10))])
# For the test set, 20 from the first and 30 from the second
example_data_test = np.vstack([data_0[20:,:], data_1[10:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((30))])
# Define the coordinate grid on which the classification regions will be computed
xx, yy = np.meshgrid(np.arange(-3, 3, 0.02), np.arange(-3, 3, 0.02))
# Train the regression without class balancing
optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
# Compute the regression predictions on the grid
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
# Compute the AUC
auc_wo_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('Without class weights')
plt.show()
print('AUC: %f'%auc_wo_class_weights)
# For the second regression, pass class_weight='balanced' to LogisticRegression
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_w_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC: %f'%auc_w_class_weights)
Explanation: Class balancing.
Classification algorithms can be very sensitive to imbalanced classes. Consider an example with samples drawn from two Gaussians. Their means and covariance matrices are chosen so that the true separating surface should run parallel to the x axis. Put 20 objects sampled from the 1st Gaussian and 10 objects from the 2nd into the training set. Then train a logistic regression on them and plot the objects together with the classification regions.
End of explanation
print(np.sum(y_train==0))
print(np.sum(y_train==1))
Explanation: As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. That is why you should always pay attention to how balanced the classes in the training set are.
Let's check whether the classes in our training set are balanced:
End of explanation
def write_answer_3(auc_1, auc_2):
auc = (auc_1 + auc_2) / 2
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(str(auc))
#GridSearchCV parameters
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
estimator = LogisticRegression(class_weight='balanced')
%%time
#GridSearchCV with balanced weights
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(X_train_scaled, y_train)
print optimizer
#GridSearchCV with balanced weights
print 'Best parameter for GridSearchCV with balanced weights', optimizer.best_params_
roc_auc_score_bal1 = roc_auc_score(y_test, optimizer.best_estimator_.predict_proba(X_test_scaled)[:, 1])
print 'roc_auc_score', roc_auc_score_bal1
#generating new indices for class 1
np.random.seed(0)
num_of_indices = np.sum(y_train==0) - np.sum(y_train==1)
indices_to_add = np.random.randint(np.sum(y_train==1), size=num_of_indices)  # high bound of randint is exclusive, so no +1
X_train_to_add = X_train_scaled[y_train.as_matrix() == 1,:][indices_to_add,:]
y_train_to_add = y_train[indices_to_add]
print y_train_to_add.shape
print y_train.shape
#new X, y train
X_train_balanced = np.vstack( (X_train_scaled, X_train_to_add) )
y_train_balanced = np.append(y_train, y_train_to_add)
print X_train_balanced.shape, X_train_scaled.shape
print y_train_balanced.shape, y_train.shape
#GridSearchCV on the oversampled (balanced) training set
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(X_train_balanced, y_train_balanced)
print optimizer
#GridSearchCV on the oversampled training set
print 'Best parameter for GridSearchCV with oversampled classes', optimizer.best_params_
roc_auc_score_bal2 = roc_auc_score(y_test, optimizer.best_estimator_.predict_proba(X_test_scaled)[:, 1])
print 'roc_auc_score', roc_auc_score_bal2
write_answer_3(roc_auc_score_bal1, roc_auc_score_bal2)
Explanation: Clearly they are not.
The situation can be fixed in different ways; we will consider two:
- give objects of the minority class a larger weight when training the classifier (shown in the example above)
- oversample objects of the minority class until the number of objects in both classes is equal
Task 3. Class balancing.
Train the logistic regression and its hyperparameters with class balancing via weights (the regression parameter class_weight='balanced') on the scaled samples obtained in the previous task. Make sure you have found the maximum of accuracy over the hyperparameters.
Compute the ROC AUC metric on the test set.
Balance the sample by oversampling objects from the smaller class. To obtain the indices of the objects to add to the training set, use the following combination of function calls:
np.random.seed(0)
indices_to_add = np.random.randint(...)
X_train_to_add = X_train[y_train.as_matrix() == 1,:][indices_to_add,:]
Then add these objects to the beginning or end of the training set and extend the label vector accordingly.
Compute the ROC AUC on the test set and compare it with the previous result.
Write the answers to the output file using the write_answer_3 function, passing it first the ROC AUC for weight balancing and then for manual sample balancing.
End of explanation
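For intuition, class_weight='balanced' reweights objects as n_samples / (n_classes * n_objects_in_class); a small sketch of that computation on toy labels (made up for illustration):
import numpy as np
y_toy = np.array([0] * 20 + [1] * 10)
classes, counts = np.unique(y_toy, return_counts=True)
weights = y_toy.size / (classes.size * counts.astype(float))
print(dict(zip(classes, weights)))  # the minority class receives the larger weight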
print('AUC ROC for classifier without weighted classes', auc_wo_class_weights)
print('AUC ROC for classifier with weighted classes: ', auc_w_class_weights)
Explanation: Stratification of the samples.
Let's revisit the example with samples from normal distributions and look again at the classifier quality obtained on the test sets:
End of explanation
# Split the data evenly by class between the training and test sets
example_data_train = np.vstack([data_0[:20,:], data_1[:20,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((20))])
example_data_test = np.vstack([data_0[20:,:], data_1[20:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((20))])
# Train the classifier
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_stratified = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC ROC for stratified samples: ', auc_stratified)
Explanation: How well do these numbers really reflect the quality of the algorithm, given that the test set is just as imbalanced as the training set? We already know that logistic regression is sensitive to the class balance in the training set, so in this case its test results will be deliberately understated. The test metric of the classifier would make much more sense if the objects were split evenly between the samples: 20 from each class for training and for testing. Let's re-form the samples and compute the new errors:
End of explanation
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0, stratify=y)
print X_train_real_zeros.shape, X_test_real_zeros.shape, y_train.shape
(X_train_real_mean,
X_test_real_mean,
y_train, y_test) = train_test_split(X_real_mean, y,
test_size=0.3,
random_state=0, stratify=y)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0, stratify=y)
encoder = StandardScaler()
X_train_real_scaled = encoder.fit_transform(X_train_real_zeros)
X_test_real_scaled = encoder.transform(X_test_real_zeros)  # reuse the parameters fitted on the training set
#stacking numerical and categorical features
X_train_scaled = np.hstack( (X_train_real_scaled, X_train_cat_oh) )
X_test_scaled = np.hstack( (X_test_real_scaled, X_test_cat_oh) )
#GridSearchCV parameters
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
estimator = LogisticRegression(class_weight='balanced')
%%time
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(X_train_scaled, y_train)
print optimizer
#GridSearchCV
print 'Best parameter for GridSearchCV with balanced weights', optimizer.best_params_
roc_auc_score_strat = roc_auc_score(y_test, optimizer.best_estimator_.predict_proba(X_test_scaled)[:, 1])
print 'roc_auc_score', roc_auc_score_strat
write_answer_4(roc_auc_score_strat)
Explanation: As you can see, after this procedure the classifier's answer changed only slightly, while the quality increased. Depending on how you originally split the data into train and test, the final test metric after a balanced split may either increase or decrease, but it can be trusted much more, because it is built with the specifics of the classifier in mind. This approach is a special case of the so-called stratification method.
Task 4. Sample stratification.
By analogy with how it was done at the beginning of the assignment, split the samples X_real_zeros and X_cat_oh into train and test, passing to the function
train_test_split(...)
the additional parameter
stratify=y
Also be sure to pass random_state=0 to the function.
Scale the new real-valued samples, train the classifier and tune its hyperparameters by cross-validation, correcting for the imbalanced classes with weights. Make sure you have found the optimum of accuracy over the hyperparameters.
Evaluate the quality of the classifier with the AUC ROC metric on the test set.
Pass the resulting answer to the write_answer_4 function
End of explanation
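A quick way to convince yourself that stratify=y preserves the class proportions is to compare the share of class 1 before and after the split (a sketch reusing the variables defined above):
print("full: %.3f, train: %.3f, test: %.3f" % (y.mean(), y_train.mean(), y_test.mean()))  # the three shares should be nearly identical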
from sklearn.preprocessing import PolynomialFeatures
# Initialize the class that performs the transformation
transform = PolynomialFeatures(2)
# Fit the transformation on the training set and apply it to the test set
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
# Note the fit_intercept=False parameter
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('With class weights')
plt.show()
Explanation: You have now worked through the main stages of data preprocessing for linear classifiers.
Let's recall the main stages:
- handling missing values
- handling categorical features
- stratification
- class balancing
- scaling
These data processing steps are recommended whenever you plan to use linear methods. The recommendation to perform many of these steps also holds for other machine learning methods.
Feature transformation.
Now let's look at ways of transforming the features. There are quite a few feature transformations that allow linear methods to produce more complex separating surfaces. The most basic one is the polynomial feature transformation. The idea is that, in addition to the features themselves, you include the set of all monomials of degree up to $p$ that can be built from them. For $p=2$ the transformation looks as follows:
$$ \phi(x_i) = [x_{i,1}^2, ..., x_{i,D}^2, x_{i,1}x_{i,2}, ..., x_{i,D} x_{i,D-1}, x_{i,1}, ..., x_{i,D}, 1] $$
Let's see how these features work on data sampled from the Gaussians:
End of explanation
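A minimal sketch of what PolynomialFeatures of degree 2 produces for two features (toy input, for illustration only):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
toy = np.array([[2.0, 3.0]])
print(PolynomialFeatures(2).fit_transform(toy))  # [1, x1, x2, x1^2, x1*x2, x2^2] -> [1, 2, 3, 4, 6, 9]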
print(example_data_train_poly.shape)
Explanation: As you can see, this data transformation already makes it possible to build nonlinear separating surfaces that adapt to the data more finely and capture more complex dependencies. The number of features in the new model:
End of explanation
transform = PolynomialFeatures(11)
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('Corrected class weights')
plt.show()
Explanation: At the same time, this method also makes the model more prone to overfitting, because the number of features grows quickly with the degree $p$. Consider an example with $p=11$:
End of explanation
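The number of generated features grows combinatorially: for D original features and degree p it equals C(D + p, p). A short sketch of that growth for the two-feature example above, using only the standard library:
from math import factorial
def n_poly_features(D, p):
    # number of monomials of degree <= p in D variables, including the constant term
    return factorial(D + p) // (factorial(D) * factorial(p))
for p in (2, 3, 5, 11):
    print("p=%d -> %d features" % (p, n_poly_features(2, p)))  # 6, 10, 21, 78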
print(example_data_train_poly.shape)
Explanation: The number of features in this model:
End of explanation
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
transform = PolynomialFeatures(2)
data_train_poly = transform.fit_transform(X_train_real_zeros)
data_test_poly = transform.transform(X_test_real_zeros)
encoder = StandardScaler()
data_train_poly_scaled = encoder.fit_transform(data_train_poly)
data_test_poly_scaled = encoder.transform(data_test_poly)  # reuse the parameters fitted on the training set
#stacking numerical and categorical features
data_train_poly_full = np.hstack( (data_train_poly_scaled, X_train_cat_oh) )
data_test_poly_full = np.hstack( (data_test_poly_scaled, X_test_cat_oh) )
#GridSearchCV parameters
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
estimator = LogisticRegression(class_weight='balanced', fit_intercept=False)
%%time
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(data_train_poly_full, y_train)
print optimizer
#GridSearchCV
print 'Best parameter for GridSearchCV with balanced weights', optimizer.best_params_
roc_auc_score_poly = roc_auc_score(y_test, optimizer.best_estimator_.predict_proba(data_test_poly_full)[:, 1])
print 'roc_auc_score', roc_auc_score_poly
write_answer_5(roc_auc_score_poly)
Explanation: Task 5. Transforming the real-valued features.
By analogy with the example, implement the transformation of the model's real-valued features using polynomial features of degree 2
Build a logistic regression on the new data while tuning the optimal hyperparameters. Note that the transformed features already contain a column whose values are all equal to 1, so there is no need to additionally fit the value $b$; its role is played by one of the weights $w$. To avoid linear dependence in the dataset, you therefore need to pass the parameter fit_intercept=False when constructing the logistic regression class. For training, use the stratified samples with class balancing via weights; the transformed features need to be rescaled.
Compute the AUC ROC on the test set and compare this result with the one obtained using the ordinary features.
Pass the resulting answer to the write_answer_5 function.
End of explanation
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
encoder = StandardScaler()
data_train_lasso_scaled = encoder.fit_transform(X_train_real_zeros)
data_test_lasso_scaled = encoder.transform(X_test_real_zeros)  # reuse the parameters fitted on the training set
#stacking numerical and categorical features
data_train_lasso_full = np.hstack( (data_train_lasso_scaled, X_train_cat_oh) )
data_test_lasso_full = np.hstack( (data_test_lasso_scaled, X_test_cat_oh) )
#GridSearchCV parameters
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
estimator = LogisticRegression(class_weight='balanced', fit_intercept=False, penalty='l1')
%%time
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(data_train_lasso_full, y_train)
print optimizer
#GridSearchCV
print 'Best parameter for GridSearchCV with balanced weights', optimizer.best_params_
roc_auc_score_lasso = roc_auc_score(y_test, optimizer.best_estimator_.predict_proba(data_test_lasso_full)[:, 1])
print 'roc_auc_score', roc_auc_score_lasso
print optimizer.best_estimator_.coef_.ravel()
print X_train_real_zeros.shape[1]
zero_coefs = [index for index, value in enumerate(optimizer.best_estimator_.coef_[0][:13]) if value == 0]
print zero_coefs
write_answer_6(zero_coefs)
Explanation: Lasso regression.
L1 regularization (Lasso) can also be applied to logistic regression instead of L2 regularization; it leads to feature selection. You are asked to apply L1 regularization to the original features and interpret the results (feature selection can also be applied to the polynomial features, but the interpretation component is then lost, because the meaning of the original features is known while that of the polynomial ones may be rather nontrivial). To use logistic regression with L1 regularization it is enough to pass the parameter penalty='l1' when initializing the class.
Task 6. Feature selection with Lasso regression.
Train the Lasso regression on the stratified, scaled samples, using class balancing via weights.
Compute the ROC AUC of the regression and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Pass their list to the write_answer_6 function.
End of explanation |
4,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting with Folium
What is Folium?
Folium builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the leaflet.js library. This allows you to manipulate your data in Geopandas and visualize it on a Leaflet map via Folium.
In this example, we will first use Geopandas to load the geometries (volcano point data), and then create the Folium map with markers representing the different types of volcanoes.
Load geometries
This example uses a freely available volcano dataset. We will be reading the csv file using pandas, and then convert the pandas DataFrame to a Geopandas GeoDataFrame.
Step1: Create Folium map
Folium has a number of built-in tilesets from OpenStreetMap, Mapbox, and Stamen. For example
Step2: This example uses the Stamen Terrain map layer to visualize the volcano terrain.
Step3: Add markers
To represent the different types of volcanoes, you can create Folium markers and add them to your map.
Step4: Folium Heatmaps
Folium is well known for its heatmaps, which create a heatmap layer. To plot a heatmap in Folium, you need a list of latitudes and longitudes. | Python Code:
# Import Libraries
import pandas as pd
import geopandas
import folium
import matplotlib.pyplot as plt
df1 = pd.read_csv('volcano_data_2010.csv')
# Keep only relevant columns
df = df1.loc[:, ("Year", "Name", "Country", "Latitude", "Longitude", "Type")]
df.info()
# Create point geometries
geometry = geopandas.points_from_xy(df.Longitude, df.Latitude)
geo_df = geopandas.GeoDataFrame(df[['Year','Name','Country', 'Latitude', 'Longitude', 'Type']], geometry=geometry)
geo_df.head()
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
df.Type.unique()
fig, ax = plt.subplots(figsize=(24,18))
world.plot(ax=ax, alpha=0.4, color='grey')
geo_df.plot(column='Type', ax=ax, legend=True)
plt.title('Volcanoes')
Explanation: Plotting with Folium
What is Folium?
Folium builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the leaflet.js library. This allows you to manipulate your data in Geopandas and visualize it on a Leaflet map via Folium.
In this example, we will first use Geopandas to load the geometries (volcano point data), and then create the Folium map with markers representing the different types of volcanoes.
Load geometries
This example uses a freely available volcano dataset. We will be reading the csv file using pandas, and then convert the pandas DataFrame to a Geopandas GeoDataFrame.
End of explanation
# Stamen Terrain
map = folium.Map(location = [13.406,80.110], tiles = "Stamen Terrain", zoom_start = 9)
map
# OpenStreetMap
map = folium.Map(location = [13.406,80.110], tiles='OpenStreetMap' , zoom_start = 9)
map
# Stamen Toner
map = folium.Map(location = [13.406,80.110], tiles='Stamen Toner', zoom_start = 9)
map
Explanation: Create Folium map
Folium has a number of built-in tilesets from OpenStreetMap, Mapbox, and Stamen. For example:
End of explanation
# Use terrain map layer to see volcano terrain
map = folium.Map(location = [4,10], tiles = "Stamen Terrain", zoom_start = 3)
Explanation: This example uses the Stamen Terrain map layer to visualize the volcano terrain.
End of explanation
# Create a geometry list from the GeoDataFrame
geo_df_list = [[point.xy[1][0], point.xy[0][0]] for point in geo_df.geometry ]
# Iterate through list and add a marker for each volcano, color-coded by its type.
i = 0
for coordinates in geo_df_list:
#assign a color marker for the type of volcano, Strato being the most common
if geo_df.Type[i] == "Stratovolcano":
type_color = "green"
elif geo_df.Type[i] == "Complex volcano":
type_color = "blue"
elif geo_df.Type[i] == "Shield volcano":
type_color = "orange"
elif geo_df.Type[i] == "Lava dome":
type_color = "pink"
else:
type_color = "purple"
# Place the markers with the popup labels and data
map.add_child(folium.Marker(location = coordinates,
popup =
"Year: " + str(geo_df.Year[i]) + '<br>' +
"Name: " + str(geo_df.Name[i]) + '<br>' +
"Country: " + str(geo_df.Country[i]) + '<br>'
"Type: " + str(geo_df.Type[i]) + '<br>'
"Coordinates: " + str(geo_df_list[i]),
icon = folium.Icon(color = "%s" % type_color)))
i = i + 1
map
Explanation: Add markers
To represent the different types of volcanoes, you can create Folium markers and add them to your map.
End of explanation
# This example uses heatmaps to visualize the density of volcanoes
# which is more in some parts of the world compared to others.
from folium import plugins
map = folium.Map(location = [15,30], tiles='Cartodb dark_matter', zoom_start = 2)
heat_data = [[point.xy[1][0], point.xy[0][0]] for point in geo_df.geometry ]
heat_data
plugins.HeatMap(heat_data).add_to(map)
map
Explanation: Folium Heatmaps
Folium is well known for its heatmaps, which create a heatmap layer. To plot a heatmap in Folium, you need a list of latitudes and longitudes.
End of explanation |
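Since the heatmap only needs latitude/longitude pairs, the same list can also be built directly from the original dataframe columns (a sketch equivalent to the geometry-based list above):
heat_data_alt = df[["Latitude", "Longitude"]].values.tolist()
print(heat_data_alt[:3])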
4,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Select clean 83mKr events
KR83m cuts similar to Adam's note
Step1: Get S1s from these events
Step2: Save to disk
Pandas object arrays are very memory-inefficient: it takes about 25 MB/dataset to store the peaks in this format (even compressed). If we wanted to extract more than O(10) datasets we'd run into trouble already at the extraction stage.
The least we can do is convert to a sensible format (waveform matrix, ordinary dataframe) now. Unfortunately the dataframe retains the 'object' dtype even after deleting the sum waveform column; converting to and from a record array removes this.
Step3: Merge with the per-event data (which is useful e.g. for making position-dependent selections)
Step4: Quick look
Step5: S1 is usually at trigger. | Python Code:
# Get SR1 krypton datasets
dsets = hax.runs.datasets
dsets = dsets[dsets['source__type'] == 'Kr83m']
dsets = dsets[dsets['trigger__events_built'] > 10000] # Want a lot of Kr, not diffusion mode
dsets = hax.runs.tags_selection(dsets, include='sciencerun0')
# Sample ten datasets randomly (with fixed seed, so the analysis is reproducible)
dsets = dsets.sample(10, random_state=0)
dsets.number.values
# Suppress rootpy warning about root2rec.. too lazy to fix.
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
data = hax.minitrees.load(dsets.number,
'Basics DoubleScatter Corrections'.split(),
num_workers=5,
preselection=['int_b_x>-60.0',
'600 < s1_b_center_time - s1_a_center_time < 2000',
'-90 < z < -5'])
Explanation: Select clean 83mKr events
KR83m cuts similar to Adam's note:
https://github.com/XENON1T/FirstResults/blob/master/PositionReconstructionSignalCorrections/S2map/s2-correction-xy-kr83m-fit-in-bins.ipynb
Valid second interaction
Time between S1s in [0.6, 2] $\mu s$
z in [-90, -5] cm
End of explanation
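For reference, the same cuts can be written as a plain pandas selection once the minitree dataframe is loaded; the sketch below assumes the column names used in the preselection strings and is only a cross-check here, since hax already applied the cuts at load time:
dt = data['s1_b_center_time'] - data['s1_a_center_time']
clean = data[(data['int_b_x'] > -60.0) & (dt > 600) & (dt < 2000) & (data['z'] > -90) & (data['z'] < -5)]
print(len(clean), len(data))  # should match, since the preselection was already applied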
from hax.treemakers.peak_treemakers import PeakExtractor
dt = 10 * units.ns
wv_length = pax_config['BasicProperties.SumWaveformProperties']['peak_waveform_length']
waveform_ts = np.arange(-wv_length/2, wv_length/2 + 0.1, dt)
class GetS1s(PeakExtractor):
__version__ = '0.0.1'
uses_arrays = True
# (don't actually need all properties, but useful to check if there's some problem)
peak_fields = ['area', 'range_50p_area', 'area_fraction_top',
'n_contributing_channels', 'left', 'hit_time_std', 'n_hits',
'type', 'detector', 'center_time', 'index_of_maximum',
'sum_waveform',
]
peak_cut_list = ['detector == "tpc"', 'type == "s1"']
def get_data(self, dataset, event_list=None):
# Get the event list from the dataframe selected above
event_list = data[data['run_number'] == hax.runs.get_run_number(dataset)]['event_number'].values
return PeakExtractor.get_data(self, dataset, event_list=event_list)
def extract_data(self, event):
peak_data = PeakExtractor.extract_data(self, event)
# Convert sum waveforms from arcane pyroot buffer type to proper numpy arrays
for p in peak_data:
p['sum_waveform'] = np.array(list(p['sum_waveform']))
return peak_data
s1s = hax.minitrees.load(dsets.number, GetS1s, num_workers=5)
Explanation: Get S1s from these events
End of explanation
waveforms = np.vstack(s1s['sum_waveform'].values)
del s1s['sum_waveform']
s1s = pd.DataFrame(s1s.to_records())
Explanation: Save to disk
Pandas object arrays are very memory-inefficient: it takes about 25 MB/dataset to store the peaks in this format (even compressed). If we wanted to extract more than O(10) datasets we'd run into trouble already at the extraction stage.
The least we can do is convert to a sensible format (waveform matrix, ordinary dataframe) now. Unfortunately the dataframe retains the 'object' dtype even after deleting the sum waveform column; converting to and from a record array removes this.
End of explanation
merged_data = hax.minitrees._merge_minitrees(s1s, data)
del merged_data['index']
np.savez_compressed('sr0_kr_s1s.npz', waveforms=waveforms)
merged_data.to_hdf('sr0_kr_s1s.hdf5', 'data')
Explanation: Merge with the per-event data (which is useful e.g. for making position-dependent selections)
End of explanation
len(s1s)
from pax import units
plt.hist(s1s.left * 10 * units.ns / units.ms, bins=np.linspace(0, 2.5, 100));
plt.yscale('log')
Explanation: Quick look
End of explanation
plt.hist(s1s.area, bins=np.logspace(0, 3, 100));
plt.axvline(35, color='r')
plt.yscale('log')
plt.xscale('log')
np.sum(s1s['area'] > 35)/len(s1s)
Explanation: S1 is usually at trigger.
End of explanation |
4,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 2
Step1: Dataset
Step2: Optional
Step3: pandas.read_csv
Step4: Expect pandas to take >3 minutes on EC2, longer locally
This is a good time to chat with your neighbor
Dicussion topics
- Do you work with a large amount of data daily?
- How big is your data?
- What’s the common use case of your data?
- Do you use any big data analytics tools?
- Do you use any interactive analytics tool?
- What are some drawbacks of your current interactive analytics tools today?
modin.pandas.read_csv
Step5: Are they equal?
Step6: Concept for exercise
Step7: Are they equal?
Step8: Concept for exercise
Step9: Are they equal?
Step10: Concept for exercise
Step11: Are they equal?
Step12: Concept for exercise
Step13: Are they equal? | Python Code:
import modin.pandas as pd
import pandas
import time
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 2: Speed improvements
GOAL: Learn about common functionality that Modin speeds up by using all of your machine's cores.
Concept for Exercise: read_csv speedups
The most commonly used data ingestion method used in pandas is CSV files (link to pandas survey). This concept is designed to give an idea of the kinds of speedups possible, even on a non-distributed filesystem. Modin also supports other file formats for parallel and distributed reads, which can be found in the documentation.
We will import both Modin and pandas so that the speedups are evident.
Note: Rerunning the read_csv cells many times may result in degraded performance, depending on the memory of the machine
End of explanation
path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
Explanation: Dataset: 2015 NYC taxi trip data
We will be using a version of this data already in S3, originally posted in this blog post: https://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes
Size: ~1.8GB
End of explanation
# [Optional] Download data locally. This may take a few minutes to download.
# import urllib.request
# url_path = "https://dask-data.s3.amazonaws.com/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
# urllib.request.urlretrieve(url_path, "taxi.csv")
# path = "taxi.csv"
Explanation: Optional: Note that the dataset takes a while to download. To speed things up a bit, if you prefer to download this file once locally, you can run the following code in the notebook:
End of explanation
start = time.time()
pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
pandas_duration = end - start
print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3)))
Explanation: pandas.read_csv
End of explanation
start = time.time()
modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
modin_duration = end - start
print("Time to read with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2)))
Explanation: Expect pandas to take >3 minutes on EC2, longer locally
This is a good time to chat with your neighbor
Dicussion topics
- Do you work with a large amount of data daily?
- How big is your data?
- What’s the common use case of your data?
- Do you use any big data analytics tools?
- Do you use any interactive analytics tool?
- What are some drawbacks of your current interactive analytics tools today?
modin.pandas.read_csv
End of explanation
pandas_df
modin_df
Explanation: Are they equal?
End of explanation
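Beyond eyeballing the two frames, a quick programmatic sanity check is to compare shapes, column names and the total of a numeric column (a sketch; not a full element-wise comparison):
assert modin_df.shape == pandas_df.shape
assert list(modin_df.columns) == list(pandas_df.columns)
print(float(modin_df["trip_distance"].sum()), float(pandas_df["trip_distance"].sum()))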
start = time.time()
pandas_count = pandas_df.count()
end = time.time()
pandas_duration = end - start
print("Time to count with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_count = modin_df.count()
end = time.time()
modin_duration = end - start
print("Time to count with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `count`!".format(round(pandas_duration / modin_duration, 2)))
Explanation: Concept for exercise: Reduces
In pandas, a reduce would be something along the lines of a sum or count. It computes some summary statistics about the rows or columns. We will be using count.
End of explanation
pandas_count
modin_count
Explanation: Are they equal?
End of explanation
start = time.time()
pandas_isnull = pandas_df.isnull()
end = time.time()
pandas_duration = end - start
print("Time to isnull with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_isnull = modin_df.isnull()
end = time.time()
modin_duration = end - start
print("Time to isnull with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `isnull`!".format(round(pandas_duration / modin_duration, 2)))
Explanation: Concept for exercise: Map operations
In pandas, map operations are operations that do a single pass over the data and do not change its shape. Operations like isnull and applymap are included in this. We will be using isnull.
End of explanation
pandas_isnull
modin_isnull
Explanation: Are they equal?
End of explanation
start = time.time()
rounded_trip_distance_pandas = pandas_df["trip_distance"].apply(round)
end = time.time()
pandas_duration = end - start
print("Time to groupby with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
rounded_trip_distance_modin = modin_df["trip_distance"].apply(round)
end = time.time()
modin_duration = end - start
print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2)))
Explanation: Concept for exercise: Apply over a single column
Sometimes we want to compute some summary statistics on a single column from our dataset.
End of explanation
rounded_trip_distance_pandas
rounded_trip_distance_modin
Explanation: Are they equal?
End of explanation
start = time.time()
pandas_df["rounded_trip_distance"] = rounded_trip_distance_pandas
end = time.time()
pandas_duration = end - start
print("Time to add a column with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_df["rounded_trip_distance"] = rounded_trip_distance_modin
end = time.time()
modin_duration = end - start
print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at adding a column!".format(round(pandas_duration / modin_duration, 2)))
Explanation: Concept for exercise: Add a column
It is common to need to add a new column to an existing dataframe; here we show that this is significantly faster in Modin due to metadata management and an efficient zero-copy implementation.
End of explanation
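A quick sanity check (illustrative only) that the new column landed in both frames:
print('rounded_trip_distance' in pandas_df.columns, 'rounded_trip_distance' in modin_df.columns)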
pandas_df
modin_df
Explanation: Are they equal?
End of explanation |
4,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brainstorm Elekta phantom tutorial dataset
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see [1]_ and
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of
Step2: Data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
to save memory
Step4: Let's use Maxwell filtering to clean the data a bit.
Ideally we would have the fine calibration and cross-talk information
for the site of interest, but we don't, so we just do
Step5: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
Step6: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
Step7: Let's do some dipole fits. The phantom is properly modeled by a single-shell
sphere with origin (0., 0., 0.). We compute covariance, then do the fits.
Step8: Now we can compare to the actual locations, taking the difference in mm | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
Explanation: Brainstorm Elekta phantom tutorial dataset
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
data_path = bst_phantom_elekta.data_path()
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname, add_eeg_ref=False)
Explanation: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of :class:mne.io.Raw.
End of explanation
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG2421']
Explanation: Data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event:
End of explanation
raw.plot_psd(tmax=60.)
Explanation: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
to save memory:
End of explanation
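If the line noise itself had to be removed rather than just inspected, a notch filter would be one option. This is a sketch only (it assumes the data have been loaded into memory, which we do not do here):
# raw.load_data()
# raw.notch_filter(np.arange(60, 241, 60))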
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
Explanation: Let's use Maxwell filtering to clean the data a bit.
Ideally we would have the fine calibration and cross-talk information
for the site of interest, but we don't, so we just do:
End of explanation
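For reference, if the fine-calibration and cross-talk files were available they could be passed through the calibration and cross_talk arguments; the file names below are placeholders, not files shipped with this dataset:
# raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.),
#                                        calibration='sss_cal.dat',
#                                        cross_talk='ct_sparse.fif')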
raw.filter(None, 40., h_trans_bandwidth='auto', filter_length='auto',
phase='zero')
raw.plot(events=events)
Explanation: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
End of explanation
tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
decim=5, preload=True, add_eeg_ref=False)
epochs['1'].average().plot()
Explanation: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
End of explanation
t_peak = 60e-3  # ~60 ms at largest peak
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
cov = mne.compute_covariance(epochs, tmax=0)
data = []
for ii in range(1, 33):
evoked = epochs[str(ii)].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs, raw
dip = fit_dipole(evoked, cov, sphere, n_jobs=1)[0]
Explanation: Let's do some dipole fits. The phantom is properly modeled by a single-shell
sphere with origin (0., 0., 0.). We compute covariance, then do the fits.
End of explanation
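Before comparing positions, a quick look at the goodness of fit and amplitudes stored on the Dipole object can flag bad fits (a small sketch):
print('Goodness of fit (%%): %s' % np.round(dip.gof, 1))
print('Amplitudes (nAm): %s' % np.round(dip.amplitude * 1e9, 1))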
actual_pos = mne.dipole.get_phantom_dipoles(kind='122')[0]
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('Differences (mm):\n%s' % diffs[:, np.newaxis])
print('μ = %s' % (np.mean(diffs),))
Explanation: Now we can compare to the actual locations, taking the difference in mm:
End of explanation |
4,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Setting things up
Let's take a look at game.py, which we use to create games. Right now, Signal only does cheap-talk games with a chance player. That is, games in which the state the sender observes is exogenously generated, following a certain probability distribution.
We first generate payoff matrices for sender and receiver. The cell $c_{ij}$ in the sender (receiver) payoff matrix gives the payoff for the sender (receiver) when act $A_j$ is done in state $S_i$.
Step1: So, 3 equiprobable (as per state_chances) states and 3 acts (that's why the payoff matrices are square), and sender and receiver get 1 payoff unit when act $A_i$ is performed in state $S_i$, 0 otherwise. That's why both payoff matrices are the identity matrix.
What we now need is to decide which strategies each type in the sender/receiver populations will play, and calculate the average payoff for each combination of sender and receiver strategies. game.Chance takes care of that
Step2: One common choice is to have one type in each of the populations for each possible pure strategy available to them
Step3: But this is not the only choice. Conceivably, one might want to add into sender_strats a type of senders following a mixed strategy, or ditto for the receiver. One might add such possibilities by hand into the strats arrays, or delete a number of pure strats, or create a strats array from scratch, etc.
Simulating Evolution
Once we have the two strats arrays we want, we can create a population game. From here on game.Evolve takes over
Step4: All right, we can now actually run one of the ODE solvers available in scipy with this object. We create two random population vectors as the initial state, then run, e.g., scipy.integrate.odeint. We also need to give a vector of times for which population snapshots will be calculated (other solvers in scipy.integrate have a slightly different API)
Step5: Right now, Signal calculates the two-population replicator(-mutator) dynamics, in continuous or discrete time. It is able to use both scipy.integrate.odeint and scipy.integrate.ode for this. Let's evolve those two random initial population vectors, following the replicator dynamics in continuous time, using odeint
Step6: results gives the results of the simulations. If you want the additional output that odeint can provide, you can pass the full_output=True flag to replicator_odeintjust as you would to odeint. The same goes for other additional input to odeint.
Step7: This popnulation will no longer evolve. We can now check what it is they are doing in the final state
Step8: Sender and receiver are engaged in a signaling system. We can also check directly the mutual info between states and acts in the final snapshot
Step9: 1.58 (that is, log2(3)) bits is the entropy of states, which is fully recovered in the mutual information between states and act
Step10: We have now the mutual info between states and acts at 1000 end points
Step11: Close enough!
Games without chance player
Let's now work with a game in which the sender has an endogenously generated state. I.e., a sender type will now be individuated by a state together with a probability vector over the set of messages. Receivers are as in the games with chance player discussed above | Python Code:
sender = np.identity(3)
receiver = np.identity(3)
state_chances = np.array([1/3, 1/3, 1/3])
Explanation: 1. Setting things up
Let's take a look at game.py, which we use to create games. Right now, Signal only does cheap-talk games with a chance player. That is, games in which the state the sender observes is exogenously generated, following a certain probability distribution.
We first generate payoff matrices for sender and receiver. The cell $c_{ij}$ in the sender (receiver) payoff matrix gives the payoff for the sender (receiver) when act $A_j$ is done in state $S_i$.
End of explanation
simple_game = game.Chance(state_chances, sender, receiver, 3) # The 3 gives the number of available messages
Explanation: So, 3 equiprobable (as per state_chances) states and 3 acts (that's why the payoff matrices are square), and sender and receiver get 1 payoff unit when act $A_i$ is performed in state $S_i$, 0 otherwise. That's why both payoff matrices are the identity matrix.
What we now need is to decide which strategies each type in the sender/receiver populations will play, and calculate the average payoff for each combination of sender and receiver strategies. game.Chance takes care of that:
End of explanation
sender_strats = simple_game.sender_pure_strats()
receiver_strats = simple_game.receiver_pure_strats()
Explanation: One common choice is to have one type in each of the populations for each possible pure strategy available to them:
End of explanation
simple_evo = game.Evolve(simple_game, sender_strats, receiver_strats)
Explanation: But this is not the only choice. Conceivably, one might want to add into sender_strats a type of senders following a mixed strategy, or ditto for the receiver. One might add such possibilities by hand into the strats arrays, or delete a number of pure strats, or create a strats array from scratch, etc.
Simulating Evolution
Once we have the two strats arrays we want, we can create a population game. From here on game.Evolve takes over:
End of explanation
sender_init = simple_evo.random_sender()
receiver_init = simple_evo.random_receiver()
times = np.arange(1000) # times from 0 to 999, at 1 time-unit increments
Explanation: All right, we can now actually run one of the ODE solvers available in scipy with this object. We create two random population vectors as the initial state, then run, e.g., scipy.integrate.odeint. We also need to give a vector of times for which population snapshots will be calculated (other solvers in scipy.integrate have a slightly different API):
End of explanation
results = simple_evo.replicator_odeint(sender_init, receiver_init, times)
Explanation: Right now, Signal calculates the two-population replicator(-mutator) dynamics, in continuous or discrete time. It is able to use both scipy.integrate.odeint and scipy.integrate.ode for this. Let's evolve those two random initial population vectors, following the replicator dynamics in continuous time, using odeint:
End of explanation
plt.plot(results);
Explanation: results gives the results of the simulations. If you want the additional output that odeint can provide, you can pass the full_output=True flag to replicator_odeint just as you would to odeint. The same goes for other additional input to odeint.
End of explanation
sender_final, receiver_final = simple_evo.vector_to_populations(results[-1])
# This splits the vector that the solver outputs into two population vectors
winning_sender = sender_strats[sender_final.argmax()]
winning_receiver = receiver_strats[receiver_final.argmax()]
# This gives the strategies with the highest frequency (which we know is 1)
# for sender and receiver in the final population state
print("{}\n\n{}".format(winning_sender, winning_receiver))
Explanation: This population will no longer evolve. We can now check what sender and receiver are doing in the final state:
End of explanation
import analyze
info = analyze.Information(simple_game, winning_sender, winning_receiver)
info.mutual_info_states_acts()
Explanation: Sender and receiver are engaged in a signaling system. We can also check directly the mutual info between states and acts in the final snapshot:
End of explanation
final_info = []
for i in range(1000):
sender_init = simple_evo.random_sender()
receiver_init = simple_evo.random_receiver()
data = simple_evo.replicator_odeint(sender_init, receiver_init, times)
sender_final, receiver_final = simple_evo.vector_to_populations(data[-1])
sender_final_info = simple_evo.sender_to_mixed_strat(sender_final)
receiver_final_info = simple_evo.receiver_to_mixed_strat(receiver_final)
info = analyze.Information(simple_game, sender_final_info, receiver_final_info)
final_info.append(info.mutual_info_states_acts())
Explanation: 1.58 (that is, log2(3)) bits is the entropy of states, which is fully recovered in the mutual information between states and act: a signaling system, as expected.
But we know that a small proportion of random starting points in the "simple game" do not evolve to signaling systems. This percentage, according to Huttegger et al. (2010, p. 183) is about 4.7%. Let's replicate their result.
End of explanation
plt.plot(final_info);
sum(np.array(final_info) < 1.58)/1000
Explanation: We have now the mutual info between states and acts at 1000 end points:
As the plot shows, there are no intermediate values between signaling systems (at 1.58 bits) and the partially pooling configuration at 0.9 bits. So, to calculate the proportion of pooling equilibria, we can look at just that.
End of explanation
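Comparing floats against the literal 1.58 is slightly brittle; an equivalent count that tolerates rounding uses np.isclose around log2(3) (a sketch):
signaling = np.isclose(final_info, np.log2(3), atol=0.01)
print(1 - signaling.mean())  # proportion of runs ending in partial pooling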
import imp
imp.reload(game)
imp.reload(analyze)
sender = np.identity(3)
receiver = np.identity(3)
simple_nonchance = game.NonChance(sender, receiver, 3)
sender_strats = simple_nonchance.sender_pure_strats()
receiver_strats = simple_nonchance.receiver_pure_strats()
avgpayoff = simple_nonchance.avg_payoffs(sender_strats, receiver_strats)
nc_evolve = game.Evolve(simple_nonchance, sender_strats, receiver_strats)
sender_init = nc_evolve.random_sender()
receiver_init = nc_evolve.random_receiver()
times = np.arange(1000)
results = nc_evolve.replicator_odeint(sender_init, receiver_init, times)
plt.plot(results);
sender_final, receiver_final = nc_evolve.vector_to_populations(results[-1])
# This splits the vector that the solver outputs into two population vectors
sender_final_strat = nc_evolve.sender_to_mixed_strat(sender_final)
receiver_final_strat = nc_evolve.receiver_to_mixed_strat(receiver_final)
print("{}\n\n{}".format(sender_final_strat, receiver_final_strat))
ci = analyze.CommonInterest(simple_nonchance)
ci.C_chance()
simple_nonchance.sender_payoff_matrix.dot(receiver_for_sender)
payoffs = np.arange(9).reshape(3,3)
payoffs
senderstrat[0][:, None] * payoffs.dot(np.array([1/3, 0, 2/3])[:, None])
simple_evo.sender_to_mixed_strat(receiver_final)
final_info
Explanation: Close enough!
Games without chance player
Let's now work with a game in which the sender has an endogenously generated state. I.e., a sender type will now be individuated by a state together with a probability vector over the set of messages. Receivers are as in the games with chance player discussed above:
End of explanation |
4,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Model Evaluation and Validation
Project 1
Step1: Analyzing the Data
In the first part of the project, you will take an initial look at the Boston real-estate data and present your analysis. Exploring the data to become familiar with it will help you better understand and interpret your results.
Since the final goal of this project is to build a model that predicts house values, we need to split the dataset into features and the target variable. The features 'RM', 'LSTAT', and 'PTRATIO' give us quantitative information about each data point. The target variable, 'MEDV', is the variable we want to predict. They are stored in the variables features and prices, respectively.
Exercise: Basic statistics
Your first coding exercise is to compute descriptive statistics about the Boston housing prices. We have already imported numpy for you; you will need this library to perform the necessary calculations. These statistics are very important for analysing the model's predictions.
In the code below, you need to:
- Compute the minimum, maximum, mean, median, and standard deviation of 'MEDV' in prices;
- Store each result in the corresponding variable.
Step3: Question 1 - Feature Observation
As mentioned above, this project focuses on three of the values
Step4: Question 2 - Goodness of Fit
Assume a dataset has five data points and a model makes the following predictions for the target variable:
| True Value | Predicted Value |
| :---: | :---: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Step5: Answer
Step6: Question 3 - Training and Testing
What are the benefits of splitting the dataset into training and testing subsets at a fixed ratio for a learning algorithm?
Hint: What problems would arise if there were no data to test the model on?
Answer
Step7: Question 4 - Learning the Data
Pick one of the graphs above and state its maximum depth. How does the score of the training curve change as the amount of training data increases? What about the testing curve? Would more training data effectively improve the model's performance?
Hint: Do the learning-curve scores eventually converge to a particular value?
Answer
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does it suffer from high bias or high variance? What about a maximum depth of 10? Which features of the graphs support your conclusions?
Hint: How can you tell whether a model suffers from high bias or high variance?
Answer
Step10: Making Predictions
Once a model has been trained on data, it can be used to make predictions on new data. For the decision tree regressor, the model has learned what questions to ask about the new input data and returns a prediction for the target variable. You can use these predictions to gain information about data whose target value is unknown, provided the data are not contained in the training set.
Question 9 - Optimal Model
What is the maximum depth of the optimal model? Is this the same as your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and obtain the optimal model.
Step11: Answer
Step12: Answer
Step15: Question 11 - Applicability
Briefly discuss whether the model you built could be used in the real world.
Hint: Answer the following questions and give reasons for your conclusions:
- Is data collected in 1978 still relevant today?
- Are the features in the data sufficient to describe a house?
- Is the model robust enough to make consistent predictions?
- Can data collected in a big city like Boston be applied to rural towns?
Answer | Python Code:
# Import libraries necessary for this project
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
# 载入波士顿房屋的数据集
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
# 完成
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: 机器学习工程师纳米学位
模型评价与验证
项目 1: 预测波士顿房价
欢迎来到机器学习工程师纳米学位的第一个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能来让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的内容中有需要你必须实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。你的项目将会根据你对问题的回答和撰写代码所实现的功能来进行评分。
提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将利用马萨诸塞州波士顿郊区的房屋信息数据训练和测试一个模型,并对模型的性能和预测能力进行测试。通过该数据训练后的好的模型可以被用来对房屋做特定预测---尤其是对房屋的价值。对于房地产经纪等人的日常工作来说,这样的预测模型被证明非常有价值。
此项目的数据集来自UCI机器学习知识库。波士顿房屋这些数据于1978年开始统计,共506个数据点,涵盖了麻省波士顿不同郊区房屋14种特征的信息。本项目对原始数据集做了以下处理:
- 有16个'MEDV' 值为50.0的数据点被移除。 这很可能是由于这些数据点包含遗失或看不到的值。
- 有1个数据点的 'RM' 值为8.78. 这是一个异常值,已经被移除。
- 对于本项目,房屋的'RM', 'LSTAT','PTRATIO'以及'MEDV'特征是必要的,其余不相关特征已经被移除。
- 'MEDV'特征的值已经过必要的数学转换,可以反映35年来市场的通货膨胀效应。
运行下面区域的代码以载入波士顿房屋数据集,以及一些此项目所需的Python库。如果成功返回数据集的大小,表示数据集已载入成功。
End of explanation
# TODO: Minimum price of the data
#目标:计算价值的最小值
minimum_price = prices.min()
# TODO: Maximum price of the data
#目标:计算价值的最大值
maximum_price = prices.max()
# TODO: Mean price of the data
#目标:计算价值的平均值
mean_price = prices.mean()
# TODO: Median price of the data
#目标:计算价值的中值
median_price = prices.median()
# TODO: Standard deviation of prices of the data
#目标:计算价值的标准差
std_price = prices.std()
# Show the calculated statistics
#目标:输出计算的结果
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: 分析数据
在项目的第一个部分,你会对波士顿房地产数据进行初步的观察并给出你的分析。通过对数据的探索来熟悉数据可以让你更好地理解和解释你的结果。
由于这个项目的最终目标是建立一个预测房屋价值的模型,我们需要将数据集分为特征(features)和目标变量(target variable)。特征 'RM', 'LSTAT',和 'PTRATIO',给我们提供了每个数据点的数量相关的信息。目标变量:'MEDV',是我们希望预测的变量。他们分别被存在features和prices两个变量名中。
练习:基础统计运算
你的第一个编程练习是计算有关波士顿房价的描述统计数据。我们已为你导入了numpy,你需要使用这个库来执行必要的计算。这些统计数据对于分析模型的预测结果非常重要的。
在下面的代码中,你要做的是:
- 计算prices中的'MEDV'的最小值、最大值、均值、中值和标准差;
- 将运算结果储存在相应的变量中。
End of explanation
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
"""Calculates and returns the performance score between
true and predicted values based on the metric chosen."""
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: 问题1 - 特征观察
如前文所述,本项目中我们关注的是其中三个值:'RM'、'LSTAT' 和'PTRATIO',对每一个数据点:
- 'RM' 是该地区中每个房屋的平均房间数量;
- 'LSTAT' 是指该地区有多少百分比的房东属于是低收入阶层(有工作但收入微薄);
- 'PTRATIO' 是该地区的中学和小学里,学生和老师的数目比(学生/老师)。
凭直觉,上述三个特征中对每一个来说,你认为增大该特征的数值,'MEDV'的值会是增大还是减小呢?每一个答案都需要你给出理由。
提示:你预期一个'RM' 值是6的房屋跟'RM' 值是7的房屋相比,价值更高还是更低呢?
回答:
RM 增大,MEDV 增大,因为房屋面积变大;
LSTAT 增大,MEDV 减小,因为低收入者变多;
PTRATIO 增大,MEDV 增大,因为教育资源变得更加丰富
建模
在项目的第二部分中,你需要了解必要的工具和技巧来让你的模型进行预测。用这些工具和技巧对每一个模型的表现做精确的衡量可以极大地增强你预测的信心。
练习:定义衡量标准
如果不能对模型的训练和测试的表现进行量化地评估,我们就很难衡量模型的好坏。通常我们会定义一些衡量标准,这些标准可以通过对某些误差或者拟合程度的计算来得到。在这个项目中,你将通过运算决定系数 R<sup>2</sup> 来量化模型的表现。模型的决定系数是回归分析中十分常用的统计信息,经常被当作衡量模型预测能力好坏的标准。
R<sup>2</sup>的数值范围从0至1,表示目标变量的预测值和实际值之间的相关程度平方的百分比。一个模型的R<sup>2</sup> 值为0还不如直接用平均值来预测效果好;而一个R<sup>2</sup> 值为1的模型则可以对目标变量进行完美的预测。从0至1之间的数值,则表示该模型中目标变量中有百分之多少能够用特征来解释。模型也可能出现负值的R<sup>2</sup>,这种情况下模型所做预测有时会比直接计算目标变量的平均值差很多。
在下方代码的 performance_metric 函数中,你要实现:
- 使用 sklearn.metrics 中的 r2_score 来计算 y_true 和 y_predict的R<sup>2</sup>值,作为对其表现的评判。
- 将他们的表现评分储存到score变量中。
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: 问题2 - 拟合程度
假设一个数据集有五个数据且一个模型做出下列目标变量的预测:
| 真实数值 | 预测数值 |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
你会觉得这个模型已成功地描述了目标变量的变化吗?如果成功,请解释为什么,如果没有,也请给出原因。
运行下方的代码,使用performance_metric函数来计算模型的决定系数。
End of explanation
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
Explanation: 回答: 我觉得成功描述了。因为决定系数很接近1。说明这个模型可以对目标变量进行接近完美的预测
练习: 数据分割与重排
接下来,你需要把波士顿房屋数据集分成训练和测试两个子集。通常在这个过程中,数据也会被重新排序,以消除数据集中由于排序而产生的偏差。
在下面的代码中,你需要:
- 使用 sklearn.model_selection 中的 train_test_split, 将features和prices的数据都分成用于训练的数据子集和用于测试的数据子集。
- 分割比例为:80%的数据用于训练,20%用于测试;
- 选定一个数值以设定 train_test_split 中的 random_state ,这会确保结果的一致性;
- 最终分离出的子集为X_train,X_test,y_train,和y_test。
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: 问题 3- 训练及测试
将数据集按一定比例分为训练用的数据集和测试用的数据集对学习算法有什么好处?
提示: 如果没有数据来对模型进行测试,会出现什么问题?
答案: 我们无法判断模型的好坏
分析模型的表现
在项目的第三部分,我们来看一下几个模型针对不同的数据集在学习和测试上的表现。另外,你需要专注于一个特定的算法,用全部训练集训练时,提高它的'max_depth' 参数,观察这一参数的变化如何影响模型的表现。把你模型的表现画出来对于分析过程十分有益。可视化可以让我们看到一些单看结果看不到的行为。
学习曲线
下方区域内的代码会输出四幅图像,它们是一个决策树模型在不同最大深度下的表现。每一条曲线都直观的显示了随着训练数据量的增加,模型学习曲线的训练评分和测试评分的变化。注意,曲线的阴影区域代表的是该曲线的不确定性(用标准差衡量)。这个模型的训练和测试部分都使用决定系数R<sup>2</sup>来评分。
运行下方区域中的代码,并利用输出的图形回答下面的问题。
End of explanation
vs.ModelComplexity(X_train, y_train)
Explanation: 问题 4 - 学习数据
选择上述图像中的其中一个,并给出其最大深度。随着训练数据量的增加,训练曲线的评分有怎样的变化?测试曲线呢?如果有更多的训练数据,是否能有效提升模型的表现呢?
提示:学习曲线的评分是否最终会收敛到特定的值?
答案: 第二个,最大深度为3。训练曲线开始逐渐降低,测试曲线开始逐渐升高,但它们最后都趋于平稳,所以并不能有效提升模型的表现。
复杂度曲线
下列代码内的区域会输出一幅图像,它展示了一个已经经过训练和验证的决策树模型在不同最大深度条件下的表现。这个图形将包含两条曲线,一个是训练的变化,一个是测试的变化。跟学习曲线相似,阴影区域代表该曲线的不确定性,模型训练和测试部分的评分都用的 performance_metric 函数。
运行下方区域中的代码,并利用输出的图形并回答下面的两个问题。
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
"""Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 10)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: 问题 5- 偏差与方差之间的权衡取舍
当模型以最大深度 1训练时,模型的预测是出现很大的偏差还是出现了很大的方差?当模型以最大深度10训练时,情形又如何呢?图形中的哪些特征能够支持你的结论?
提示: 你如何得知模型是否出现了偏差很大或者方差很大的问题?
答案: 为1时,出现了很大的偏差,因为此时无论是测试数据还是训练数据b标准系数都很低,测试数据和训练数据的标准系数之间差异很小,说明模型无法对数据进行良好预测。
为 10 时,出现了很大的方差,测试数据和训练数据的标准系数之间差异很大,说明出现了过拟合情况。
问题 6- 最优模型的猜测
你认为最大深度是多少的模型能够最好地对未见过的数据进行预测?你得出这个答案的依据是什么?
答案: 3。因为此时测试数据和训练数据的分数之间差异最小,且测试数据的标准系数达到最高。
评价模型表现
在这个项目的最后,你将自己建立模型,并使用最优化的fit_model函数,基于客户房子的特征来预测该房屋的价值。
问题 7- 网格搜索(Grid Search)
什么是网格搜索法?如何用它来优化学习算法?
回答: 是一种把参数网格化的算法。通过调整学习算法所使用的参数来优化学习算法。
问题 8- 交叉验证
什么是K折交叉验证法(k-fold cross-validation)?优化模型时,使用这种方法对网格搜索有什么好处?网格搜索是如何结合交叉验证来完成对最佳参数组合的选择的?
提示: 跟为何需要一组测试集的原因差不多,网格搜索时如果不使用交叉验证会有什么问题?GridSearchCV中的'cv_results'属性能告诉我们什么?
答案: K折交叉验证法是将训练数据平均分配到K个容器,每次去其中一个做测试数据,其余做训练数据,进行K次后,对训练结果取平均值的一种获得更高精确度的一种算法。可以时网格搜索的训练结果获得更高的精确度。网格搜索可以使拟合函数尝试所有的参数组合,并返回一个合适的分类器,自动调整至最佳参数组合。
练习:训练模型
在最后一个练习中,你将需要将所学到的内容整合,使用决策树演算法训练一个模型。为了保证你得出的是一个最优模型,你需要使用网格搜索法训练模型,以找到最佳的 'max_depth' 参数。你可以把'max_depth' 参数理解为决策树算法在做出预测前,允许其对数据提出问题的数量。决策树是监督学习算法中的一种。
此外,你会发现你的实现使用的是 ShuffleSplit() 。它也是交叉验证的一种方式(见变量 'cv_sets')。虽然这不是问题8中描述的 K-Fold 交叉验证,这个教程验证方法也很有用!这里 ShuffleSplit() 会创造10个('n_splits')混洗过的集合,每个集合中20%('test_size')的数据会被用作验证集。当你在实现的时候,想一想这跟 K-Fold 交叉验证有哪些相同点,哪些不同点?
在下方 fit_model 函数中,你需要做的是:
- 使用 sklearn.tree 中的 DecisionTreeRegressor 创建一个决策树的回归函数;
- 将这个回归函数储存到 'regressor' 变量中;
- 为 'max_depth' 创造一个字典,它的值是从1至10的数组,并储存到 'params' 变量中;
- 使用 sklearn.metrics 中的 make_scorer 创建一个评分函数;
- 将 performance_metric 作为参数传至这个函数中;
- 将评分函数储存到 'scoring_fnc' 变量中;
- 使用 sklearn.model_selection 中的 GridSearchCV 创建一个网格搜索对象;
- 将变量'regressor', 'params', 'scoring_fnc', 和 'cv_sets' 作为参数传至这个对象中;
- 将 GridSearchCV 存到 'grid' 变量中。
如果有同学对python函数如何传递多个参数不熟悉,可以参考这个MIT课程的视频。
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: 做出预测
当我们用数据训练出一个模型,它现在就可用于对新的数据进行预测。在决策树回归函数中,模型已经学会对新输入的数据提问,并返回对目标变量的预测值。你可以用这个预测来获取数据未知目标变量的信息,这些数据必须是不包含在训练数据之内的。
问题 9- 最优模型
最优模型的最大深度(maximum depth)是多少?此答案与你在问题 6所做的猜测是否相同?
运行下方区域内的代码,将决策树回归函数代入训练数据的集合,以得到最优化的模型。
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
Explanation: Answer: Parameter 'max_depth' is 4 for the optimal model.
问题 10 - 预测销售价格
想像你是一个在波士顿地区的房屋经纪人,并期待使用此模型以帮助你的客户评估他们想出售的房屋。你已经从你的三个客户收集到以下的资讯:
| 特征 | 客戶 1 | 客戶 2 | 客戶 3 |
| :---: | :---: | :---: | :---: |
| 房屋内房间总数 | 5 间房间 | 4 间房间 | 8 间房间 |
| 社区贫困指数(%被认为是贫困阶层) | 17% | 32% | 3% |
| 邻近学校的学生-老师比例 | 15:1 | 22:1 | 12:1 |
你会建议每位客户的房屋销售的价格为多少?从房屋特征的数值判断,这样的价格合理吗?
提示:用你在分析数据部分计算出来的统计信息来帮助你证明你的答案。
运行下列的代码区域,使用你优化的模型来为每位客户的房屋价值做出预测。
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: 答案:
Predicted selling price for Client 1's home: $403,025.00.Predicted selling price for Client 2's home: $237,478.72.
Predicted selling price for Client 3's home: $931,636.36.这样的价格是合理的。
敏感度
一个最优的模型不一定是一个健壮模型。有的时候模型会过于复杂或者过于简单,以致于难以泛化新增添的数据;有的时候模型采用的学习算法并不适用于特定的数据结构;有的时候样本本身可能有太多噪点或样本过少,使得模型无法准确地预测目标变量。这些情况下我们会说模型是欠拟合的。执行下方区域中的代码,采用不同的训练和测试集执行 fit_model 函数10次。注意观察对一个特定的客户来说,预测是如何随训练数据的变化而变化的。
End of explanation
### 你的代码
# Import libraries necessary for this project
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
# 载入波士顿房屋的数据集
data = pd.read_csv('bj_housing.csv')
prices = data['Value']
features = data.drop('Value', axis = 1)
print features.head()
print prices.head()
# Success
# 完成
# print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
def performance_metric(y_true, y_predict):
"""Calculates and returns the performance score between
true and predicted values based on the metric chosen."""
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
def fit_model(X, y):
"""Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 10)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
client_data = [[128, 3, 2, 0, 2005, 13], [150, 3, 2, 0, 2005, 13]]
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ¥{:,.2f}".format(i+1, price)
Explanation: 问题 11 - 实用性探讨
简单地讨论一下你建构的模型能否在现实世界中使用?
提示: 回答几个问题,并给出相应结论的理由:
- 1978年所采集的数据,在今天是否仍然适用?
- 数据中呈现的特征是否足够描述一个房屋?
- 模型是否足够健壮来保证预测的一致性?
- 在波士顿这样的大都市采集的数据,能否应用在其它乡镇地区?
答案: 不能,首先这只是波士顿的房价,并不具有代表性,而且时间久远,房屋的价格还和其他特性有关,比如装修的程度。
可选问题 - 预测北京房价
(本题结果不影响项目是否通过)通过上面的实践,相信你对机器学习的一些常用概念有了很好的领悟和掌握。但利用70年代的波士顿房价数据进行建模的确对我们来说意义不是太大。现在你可以把你上面所学应用到北京房价数据集中bj_housing.csv。
免责声明:考虑到北京房价受到宏观经济、政策调整等众多因素的直接影响,预测结果仅供参考。
这个数据集的特征有:
- Area:房屋面积,平方米
- Room:房间数,间
- Living: 厅数,间
- School: 是否为学区房,0或1
- Year: 房屋建造时间,年
- Floor: 房屋所处楼层,层
目标变量:
- Value: 房屋人民币售价,万
你可以参考上面学到的内容,拿这个数据集来练习数据分割与重排、定义衡量标准、训练模型、评价模型表现、使用网格搜索配合交叉验证对参数进行调优并选出最佳参数,比较两者的差别,最终得出最佳模型对验证集的预测分数。
End of explanation |
4,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A well-used functionality in PySAL is conducting exploratory spatial data analysis. This notebook will provide an overview of ways to conduct exploratory spatial analysis in Python.
First, let's read in some data
Step1: In PySAL, commonly-used analysis methods are very easy to access. For example, if we were interested in examining the spatial dependence in HR90 we could quickly compute a Moran's $I$ statistic
Step2: Thus, the $I$ statistic is $.383$ for this data, and has a very small $p$ value.
We can visualize the distribution of simulated $I$ statistics using the stored collection of simulated statistics
Step3: A simple way to visualize this distribution is to make a KDEplot (like we've done before), and add a rug showing all of the simulated points, and a vertical line denoting the observed value of the statistic
Step4: Instead, if our $I$ statistic were close to our expected value, I_HR90.EI, our plot might look like this
Step5: In addition to univariate Moran's $I$, PySAL provides many other types of autocorrelation statistics
Step6: Since the statistic is below one with a significant $p$-value, it indicates the same thing as the Moran's $I$ above, moderate significant global spatial dependence in HR90.
In addition, we can compute a global Bivariate Moran statistic, which relates an observation to the spatial lag of another observation
Step7: Local Autocorrelation Statistics
In addition to the Global autocorrelation statistics, PySAL has many local autocorrelation statistics. Let's compute a local Moran statistic for the same data shown above
Step8: Now, instead of a single $I$ statistic, we have an array of local $I_i$ statistics, stored in the .Is attribute, and p-values from the simulation are in p_sim.
Step9: We can adjust the number of permutations used to derive every pseudo-$p$ value by passing a different permutations argument
Step10: In addition to the typical clustermap, a helpful visualization for LISA statistics is a Moran scatterplot with statistically significant LISA values highlighted.
This is very simple, if we use the same strategy we used before
Step11: Then, we want to plot the statistically-significant LISA values in a different color than the others. To do this, first find all of the statistically significant LISAs. Since the $p$-values are in the same order as the $I_i$ statistics, we can do this in the following way
Step12: Then, since we have a lot of points, we can plot the points with a statistically insignficant LISA value lighter using the alpha keyword. In addition, we would like to plot the statistically significant points in a dark red color.
Step13: Matplotlib has a list of named colors and will interpret colors that are provided in hexadecimal strings
Step14: We can also make a LISA map of the data.
Simple exploratory regression
Sometimes, to check for simple spatial heterogeneity, a fixed effects model can be estimated if the heterogeneity has known bounds. First, though, note that pandas can build dummy variable matrices from categorical data very quickly
Step15: Where this becomes handy is if you have data you know can be turned into a dummy variable, but is not yet correctly encoded.
For example, the same call as above can make a dummy variable matrix to encode state fixed effects using the STATE_NAME variable
Step16: For now, let's estimate a spatial regimes regression on the south/not-south division. To show how a regimes effects plot may look, let's consider one covariate that is likely related and one that is very likely unrelated to $y$. That is our formal specification for the regression will be | Python Code:
data = ps.pdio.read_files(ps.examples.get_path('NAT.shp'))
W = ps.queen_from_shapefile(ps.examples.get_path('NAT.shp'))
W.transform = 'r'
data.head()
Explanation: A well-used functionality in PySAL is conducting exploratory spatial data analysis. This notebook will provide an overview of ways to conduct exploratory spatial analysis in Python.
First, let's read in some data:
End of explanation
I_HR90 = ps.Moran(data.HR90.values, W)
I_HR90.I, I_HR90.p_sim
Explanation: In PySAL, commonly-used analysis methods are very easy to access. For example, if we were interested in examining the spatial dependence in HR90 we could quickly compute a Moran's $I$ statistic:
End of explanation
I_HR90.sim[0:5]
Explanation: Thus, the $I$ statistic is $.383$ for this data, and has a very small $p$ value.
We can visualize the distribution of simulated $I$ statistics using the stored collection of simulated statistics:
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.kdeplot(I_HR90.sim, shade=True)
plt.vlines(I_HR90.sim, 0, 1)
plt.vlines(I_HR90.I, 0, 40, 'r')
Explanation: A simple way to visualize this distribution is to make a KDEplot (like we've done before), and add a rug showing all of the simulated points, and a vertical line denoting the observed value of the statistic:
End of explanation
sns.kdeplot(I_HR90.sim, shade=True)
plt.vlines(I_HR90.sim, 0, 1)
plt.vlines(I_HR90.EI+.01, 0, 40, 'r')
Explanation: Instead, if our $I$ statistic were close to our expected value, I_HR90.EI, our plot might look like this:
End of explanation
c_HR90 = ps.Geary(data.HR90.values, W)
#ps.Gamma
#ps.Join_Counts
c_HR90.C, c_HR90.p_sim
Explanation: In addition to univariate Moran's $I$, PySAL provides many other types of autocorrelation statistics:
End of explanation
bv_HRBLK = ps.Moran_BV(data.HR90.values, data.BLK90.values, W)
bv_HRBLK.I, bv_HRBLK.p_sim
Explanation: Since the statistic is below one with a significant $p$-value, it indicates the same thing as the Moran's $I$ above, moderate significant global spatial dependence in HR90.
In addition, we can compute a global Bivariate Moran statistic, which relates an observation to the spatial lag of another observation:
End of explanation
LMo_HR90 = ps.Moran_Local(data.HR90.values, W)
Explanation: Local Autocorrelation Statistics
In addition to the Global autocorrelation statistics, PySAL has many local autocorrelation statistics. Let's compute a local Moran statistic for the same data shown above:
End of explanation
LMo_HR90.Is, LMo_HR90.p_sim
Explanation: Now, instead of a single $I$ statistic, we have an array of local $I_i$ statistics, stored in the .Is attribute, and p-values from the simulation are in p_sim.
End of explanation
LMo_HR90 = ps.Moran_Local(data.HR90.values, W, permutations=9999)
Explanation: We can adjust the number of permutations used to derive every pseudo-$p$ value by passing a different permutations argument:
End of explanation
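Because the pseudo-$p$ values come from random permutations, they change slightly between runs; fixing the seed first is one way to make the numbers reproducible (a sketch, assuming PySAL draws its permutations from numpy's global random state):
np.random.seed(12345)
LMo_HR90 = ps.Moran_Local(data.HR90.values, W, permutations=9999)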
Lag_HR90 = ps.lag_spatial(W, data.HR90.values)
HR90 = data.HR90.values
Explanation: In addition to the typical clustermap, a helpful visualization for LISA statistics is a Moran scatterplot with statistically significant LISA values highlighted.
This is very simple, if we use the same strategy we used before:
First, construct the spatial lag of the covariate:
End of explanation
sigs = HR90[LMo_HR90.p_sim <= .001]
W_sigs = Lag_HR90[LMo_HR90.p_sim <= .001]
insigs = HR90[LMo_HR90.p_sim > .001]
W_insigs = Lag_HR90[LMo_HR90.p_sim > .001]
Explanation: Then, we want to plot the statistically-significant LISA values in a different color than the others. To do this, first find all of the statistically significant LISAs. Since the $p$-values are in the same order as the $I_i$ statistics, we can do this in the following way
End of explanation
b,a = np.polyfit(HR90, Lag_HR90, 1)
Explanation: Then, since we have a lot of points, we can plot the points with a statistically insignficant LISA value lighter using the alpha keyword. In addition, we would like to plot the statistically significant points in a dark red color.
End of explanation
plt.plot(sigs, W_sigs, '.', color='firebrick')
plt.plot(insigs, W_insigs, '.k', alpha=.2)
# dashed vert at mean of the last year's PCI
plt.vlines(HR90.mean(), Lag_HR90.min(), Lag_HR90.max(), linestyle='--')
# dashed horizontal at mean of lagged PCI
plt.hlines(Lag_HR90.mean(), HR90.min(), HR90.max(), linestyle='--')
# red line of best fit using global I as slope
plt.plot(HR90, a + b*HR90, 'r')
plt.text(s='$I = %.3f$' % I_HR90.I, x=50, y=15, fontsize=18)
plt.title('Moran Scatterplot')
plt.ylabel('Spatial Lag of HR90')
plt.xlabel('HR90')
Explanation: Matplotlib has a list of named colors and will interpret colors that are provided in hexadecimal strings:
End of explanation
pd.get_dummies(data.SOUTH) #dummies for south (already binary)
Explanation: We can also make a LISA map of the data.
Simple exploratory regression
Sometimes, to check for simple spatial heterogeneity, a fixed effects model can be estimated if the heterogeneity has known bounds. First, though, note that pandas can build dummy variable matrices from categorical data very quickly:
End of explanation
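Before the regression detour, here is a minimal sketch of the ingredient behind the LISA map mentioned above: the quadrant label of each significant local statistic (1=HH, 2=LH, 3=LL, 4=HL), which is what would be mapped.
quadrants = LMo_HR90.q * (LMo_HR90.p_sim <= .001)  # 0 marks insignificant counties
print(np.bincount(quadrants, minlength=5))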
pd.get_dummies(data.STATE_NAME) #dummies for state by name
Explanation: Where this becomes handy is if you have data you know can be turned into a dummy variable, but is not yet correctly encoded.
For example, the same call as above can make a dummy variable matrix to encode state fixed effects using the STATE_NAME variable:
End of explanation
y = data[['HR90']].values
x = data[['BLK90']].values
unrelated_effect = np.random.normal(0,100, size=y.shape[0]).reshape(y.shape)
X = np.hstack((x, unrelated_effect))
regimes = data.SOUTH.values.tolist()
regime_reg = ps.spreg.OLS_Regimes(y, X, regimes)
betas = regime_reg.betas
sebetas = np.sqrt(regime_reg.vm.diagonal())
sebetas
plt.plot(betas,'ok')
plt.axis([-1,6,-1,8])
plt.hlines(0,-1,6, color='k', linestyle='--')
plt.errorbar([0,1,2,3,4,5], betas.flatten(), yerr=sebetas*3, fmt='o', ecolor='r')
plt.xticks([-.5,0.5,1.5,2.5,3.5,4.5, 5.5], ['',
'Not South: Constant',
'Not South: BLK90',
'Not South: Not South',
'South: Constant',
'South: South',
'South: Unrelated',''], rotation='325')
plt.title('Regime Fixed Effects')
Explanation: For now, let's estimate a spatial regimes regression on the south/not-south division. To show how a regimes effects plot may look, let's consider one covariate that is likely related and one that is very likely unrelated to $y$. That is our formal specification for the regression will be:
$$ y = \beta_{0} + x_{1a}\beta_{1a} + x_{1b}\beta_{1b} + x_{2}\beta_{2} + \epsilon $$
Where $x_{1a} = 1$ when an observation is not in the south, and zero when it is not. This is a simple spatial fixed effects setup, where each different spatial unit is treated as having a special effect on observations inside of it.
In addition, we'll add an unrelated term to show how the regression effects visualization works:
End of explanation |
4,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Microsoft Emotion API Data
Run 4_check_img_size.py
This script checks the images are not too large for Microsoft's API, which has a limit of 1 KB to 4 MB
When it finds an image that is too large, it resizes it to 3.5MB, and saves as "original_filename_resized.jpg"
Check through the results, and remove original file if a resized one has been created. This step could be done programmatically... but being forced to actually look at your data can be good, too.
Run 5_microsoft_Face_API.py
You will need your own API keys. Free key subscription gets you a generous rate limit.
This script only collects face_id, age, smile, gender, and emotion.
Although this is the Face API, it also returned emotion, which was handy.
One JSON is output per image
Read in JSONs and make a dataframe
Step1: Examination of the images that triggered an error revealed that errors were caused by the API being unable to calculate an emotion due to the face being partially obscured or being in profile.
Step2: Remove all rows with Errors
Step3: Visualising sentiment scores | Python Code:
# imports used below (these may already be loaded in an earlier cell)
import glob
import json

import pandas as pd

def read_jsons(f, candidate):
tmp_dict = {}
with open(f) as json_file:
data = json.load(json_file)
data = json.loads(data)
print(data)
try:
tmp_dict['age'] = data[0]['faceAttributes']['age']
tmp_dict['gender'] = data[0]['faceAttributes']['gender']
tmp_dict['smile'] = data[0]['faceAttributes']['smile']
if data[0]['faceAttributes']['emotion']['anger'] > 0.55: # this value is the confidence threshold
tmp_dict['anger'] = data[0]['faceAttributes']['emotion']['anger']
if data[0]['faceAttributes']['emotion']['contempt'] > 0.55:
tmp_dict['contempt'] = data[0]['faceAttributes']['emotion']['contempt']
if data[0]['faceAttributes']['emotion']['disgust'] > 0.55:
tmp_dict['disgust'] = data[0]['faceAttributes']['emotion']['disgust']
if data[0]['faceAttributes']['emotion']['fear'] > 0.55:
tmp_dict['fear'] = data[0]['faceAttributes']['emotion']['fear']
if data[0]['faceAttributes']['emotion']['happiness'] > 0.55:
tmp_dict['happiness'] = data[0]['faceAttributes']['emotion']['happiness']
if data[0]['faceAttributes']['emotion']['neutral'] > 0.55:
tmp_dict['neutral'] = data[0]['faceAttributes']['emotion']['neutral']
if data[0]['faceAttributes']['emotion']['sadness'] > 0.55:
tmp_dict['sadness'] = data[0]['faceAttributes']['emotion']['sadness']
if data[0]['faceAttributes']['emotion']['surprise'] > 0.55:
tmp_dict['surprise'] = data[0]['faceAttributes']['emotion']['surprise']
tmp_dict['face_id'] = data[0]['faceId']
tmp_dict['image_file'] = f.split('/')[1]
# populating the row with `Error` allows us to check the original image to understand the caused of the error.
# Often the cause of the error was failure of API to extract face data if the face was partial, in profile, or silhouetted
except (IndexError, KeyError):
tmp_dict['age'] = "Error"
tmp_dict['gender'] = "Error"
tmp_dict['smile'] = "Error"
tmp_dict['anger'] = "Error"
tmp_dict['contempt'] = "Error"
tmp_dict['disgust'] = "Error"
tmp_dict['fear'] = "Error"
tmp_dict['happiness'] = "Error"
tmp_dict['neutral'] = "Error"
tmp_dict['sadness'] = "Error"
tmp_dict['surprise'] = "Error"
tmp_dict['face_id'] = "Error"
tmp_dict['image_file'] = f.split('/')[1]
return tmp_dict
basefilepath = 'MicrosoftAPI_jsons/'
def get_json(path, candidate):
for f in glob.glob(path + '*.json'):
if candidate in f:
print(f)
row_list.append(read_jsons(f, candidate))
row_list = []
get_json(basefilepath, 'hillary_clinton')
HRC_universe = pd.DataFrame(row_list)
row_list = []
get_json(basefilepath, 'donald_trum')
DJT_universe = pd.DataFrame(row_list)
HRC_universe.head()
Explanation: Microsoft Emotion API Data
Run 4_check_img_size.py
This script checks the images are not too large for Microsoft's API, which has a limit of 1 KB to 4 MB
When it finds an image that is too large, it resizes it to 3.5MB, and saves as "original_filename_resized.jpg"
Check through the results, and remove original file if a resized one has been created. This step could be done programmatically... but being forced to actually look at your data can be good, too.
Run 5_microsoft_Face_API.py
You will need your own API keys. Free key subscription gets you a generous rate limit.
This script only collects face_id, age, smile, gender, and emotion.
Although this is the Face API, it also returned emotion, which was handy.
One JSON is output per image
Read in JSONs and make a dataframe
End of explanation
HRC_universe[HRC_universe.fear == 'Error']
DJT_universe[DJT_universe.fear == 'Error']
Explanation: Examination of the images that triggered an error revealed that errors were caused by the API being unable to calculate an emotion due to the face being partially obscured or being in profile.
End of explanation
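A quick tally of how common these failures are, before dropping them (a sketch):
for name, universe in [('HRC', HRC_universe), ('DJT', DJT_universe)]:
    n_err = (universe.fear == 'Error').sum()
    print(name, n_err, 'errors out of', len(universe))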
HRC_universe = HRC_universe[HRC_universe.fear != 'Error']
DJT_universe = DJT_universe[DJT_universe.fear != 'Error']
num_cols = ['age', 'anger', 'contempt', 'disgust', 'fear', 'happiness', 'neutral', 'sadness', 'smile', 'surprise']
HRC_universe[num_cols] = HRC_universe[num_cols].apply(pd.to_numeric)
DJT_universe[num_cols] = DJT_universe[num_cols].apply(pd.to_numeric)
sentiment_cols = ['anger', 'contempt', 'disgust', 'fear', 'happiness', 'neutral', 'sadness', 'surprise']
HRC_universe_sentiment = HRC_universe[sentiment_cols]
DJT_universe_sentiment = DJT_universe[sentiment_cols]
HRC_universe_sentiment.head()
Explanation: Remove all rows with Errors
End of explanation
HRC_universe_sentiment.mean().plot(kind='bar', ylim=(0,1), color='blue', alpha=.5)
DJT_universe_sentiment.mean().plot(kind='bar',ylim=(0,1), color='red', alpha=.5)
print("anger: ", len(HRC_universe_sentiment[HRC_universe_sentiment.anger >= 0.55]))
print("contempt: ", len(HRC_universe_sentiment[HRC_universe_sentiment.contempt >= 0.55]))
print("disgust: ", len(HRC_universe_sentiment[HRC_universe_sentiment.disgust >= 0.55]))
print("fear: ", len(HRC_universe_sentiment[HRC_universe_sentiment.fear >= 0.55]))
print("happiness: ", len(HRC_universe_sentiment[HRC_universe_sentiment.happiness >= 0.55]))
print("neutral: ", len(HRC_universe_sentiment[HRC_universe_sentiment.neutral >= 0.55]))
print("sadness: ", len(HRC_universe_sentiment[HRC_universe_sentiment.sadness >= 0.55]))
print("surprise: ", len(HRC_universe_sentiment[HRC_universe_sentiment.surprise >= 0.55]))
print("anger: ", len(DJT_universe_sentiment[DJT_universe_sentiment.anger >= 0.55]))
print("contempt: ", len(DJT_universe_sentiment[DJT_universe_sentiment.contempt >= 0.55]))
print("disgust: ", len(DJT_universe_sentiment[DJT_universe_sentiment.disgust >= 0.55]))
print("fear: ", len(DJT_universe_sentiment[DJT_universe_sentiment.fear >= 0.55]))
print("happiness: ", len(DJT_universe_sentiment[DJT_universe_sentiment.happiness >= 0.55]))
print("neutral: ", len(DJT_universe_sentiment[DJT_universe_sentiment.neutral >= 0.55]))
print("sadness: ", len(DJT_universe_sentiment[DJT_universe_sentiment.sadness >= 0.55]))
print("surprise: ", len(DJT_universe_sentiment[DJT_universe_sentiment.surprise >= 0.55]))
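# The repeated counts above can also be tabulated in one pass per candidate;
# 0.55 is the same confidence threshold used when the JSON files were read.
for name, universe in [('HRC', HRC_universe_sentiment), ('DJT', DJT_universe_sentiment)]:
    print(name)
    print((universe >= 0.55).sum())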
Explanation: Visualising sentiment scores
End of explanation |
4,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo Explorations
We will conduct some basic Monte Carlo explorations with the grmpy package. This allows us to revisit the key message of the course.
Step1: Questions
What are the remaining sources of heterogeneity?
What is the average treatment effect? Do you expect it to differ from the other conventional treatment effect parameters?
What does this imply for the marginal benefit of treatment?
Step2: However, we still have a considerable amount of treatment effect heterogeneity.
Step5: Now let us get ready to explore the effects of essential heterogeneity with some auxiliary functions.
Step6: How do the different treatment effect parameters diverge if we introduce essential heterogeneity?
Step7: Let us investigate the essential heterogeneity with respect to the distribution of the unobservables.
Step8: Let us revisit the shape of the marginal benefit of treatment with and without essential heterogeneity.
Step9: Estimation Strategies
Randomization
Ordinary Least Squares
Instrumental Variables
Conventional
Local
Randomization
We start with the baseline model.
Step10: Now we can directly look at the effect of essential heterogeneity.
Step11: Ordinary Least Squares
We first look at a regression in the baseline sample in more detail.
Step12: Now we again investigate the effect of essential heterogeneity on our estimates.
Step13: Conventional Instrumental Variables Regression
Step14: Now we introduce essential heterogeneity.
Step15: Local Instrumental Variables
We look at our baseline specification first.
Step16: Other Objects of Interest
Let us conclude by revisiting some additional objects of interest such as the joint distribution of potential outcomes and benefits and surplus. All of these can be obtained when, for example, the requirements for a factor structure approach as in Carneiro (2003) are met.
Step17: Now we turn to the joint distribution of benefits and costs. What is the meaning of each quadrant? | Python Code:
import pickle as pkl
import numpy as np
import copy
from statsmodels.sandbox.regression.gmm import IV2SLS
from mc_exploration_functions import *
import statsmodels.api as sm
import seaborn.apionly as sns
import grmpy
model_base = get_model_dict('mc_exploration.grmpy.ini')
model_base['SIMULATION']['source'] = 'mc_data'
df_base = grmpy.simulate('mc_exploration.grmpy.ini')
df_base.head()
Explanation: Monte Carlo Explorations
We will conduct some basic Monte Carlo explorations with the grmpy package. This allows us to revisit the key message of the course.
End of explanation
d_treated = (df_base['D'] == 1)
ate = np.mean(df_base['Y1'] - df_base['Y0'])
tt = np.mean(df_base['Y1'].loc[d_treated] - df_base['Y0'].loc[d_treated])
tut = np.mean(df_base['Y1'].loc[~d_treated] - df_base['Y0'].loc[~d_treated])
true_effect = ate
print('Effect ', ate, tt, tut)
Explanation: Questions
What are the remaining sources of heterogeneity?
What is the average treatment effect? Do you expect it to differ from the other conventional treatment effect parameters?
What does this imply for the marginal benefit of treatment?
End of explanation
plot_distribution_of_benefits(df_base)
Explanation: However, we still have a considerable amount of treatment effect heterogeneity.
End of explanation
def update_correlation_structure(model_dict, rho):
"""This function takes a valid model specification and updates the correlation structure
among the unobservables."""
# We first extract the baseline information from the model dictionary.
sd_v = model_dict['DIST']['all'][-1]
sd_u = model_dict['DIST']['all'][0]
# Now we construct the implied covariance, which is relevant for the initialization file.
cov = rho * sd_v * sd_u
model_dict['DIST']['all'][2] = cov
# We print out the specification to an initialization file with the name mc_init.grmpy.ini.
print_model_dict(model_dict)
def collect_effects(model_base, which, grid_points):
"""This function collects numerous effects for alternative correlation structures."""
model_mc = copy.deepcopy(model_base)
effects = []
for rho in np.linspace(0.00, 0.99, grid_points):
# We create a new initialization file with an updated correlation structure.
update_correlation_structure(model_mc, rho)
# We use this new file to simulate a new sample.
df_mc = grmpy.simulate('mc_init.grmpy.ini')
# We extract auxiliary objects for further processing.
endog, exog, instr = df_mc['Y'], df_mc[['X_0', 'D']], df_mc[['X_0', 'Z_1']]
d_treated = df_mc['D'] == 1
# We calculate our parameter of interest.
label = which.lower()
if label == 'randomization':
stat = np.mean(endog.loc[d_treated]) - np.mean(endog.loc[~d_treated])
elif label == 'ordinary_least_squares':
stat = sm.OLS(endog, exog).fit().params[1]
elif label == 'conventional_instrumental_variables':
stat = IV2SLS(endog, exog, instr).fit().params[1]
elif label == 'local_instrumental_variables':
grmpy.estimate('mc_init.grmpy.ini')
stat = get_effect_grmpy()
elif label == 'conventional_average_effects':
ate = np.mean(df_mc['Y1'] - df_mc['Y0'])
tt = np.mean(df_mc['Y1'].loc[d_treated] - df_mc['Y0'].loc[d_treated])
stat = (ate, tt)
else:
raise NotImplementedError
effects += [stat]
return effects
Explanation: Now let us get ready to explore the effects of essential heterogeneity with some auxiliary functions.
End of explanation
effects = collect_effects(model_base, 'conventional_average_effects', 10)
plot_effects(effects)
Explanation: How do the different treatment effect parameters diverge if we introduce essential heterogeneity?
End of explanation
df_mc = pkl.load(open('mc_data.grmpy.pkl', 'rb'))
for df in [df_base, df_mc]:
plot_joint_distribution_unobservables(df)
Explanation: Let us investigate the essential heterogeneity with respect to the distribution of the unobservables.
End of explanation
for fname in ['data', 'mc_data']:
plot_marginal_effect(get_marginal_effect_grmpy(fname + '.grmpy.info'))
Explanation: Let us revisit the shape of the marginal benefit of treatment with and without essential heterogeneity.
End of explanation
effect = np.mean(df_base['Y'].loc[d_treated]) - np.mean(df_base['Y'].loc[~d_treated])
print('Effect ', effect)
Explanation: Estimation Strategies
Randomization
Ordinary Least Squares
Instrumental Variables
Conventional
Local
Randomization
We start with the baseline model.
End of explanation
effects = collect_effects(model_base, 'randomization', 10)
plot_estimates(true_effect, effects)
Explanation: Now we can directly look at the effect of essential heterogeneity.
End of explanation
results = sm.OLS(df_base['Y'], df_base[['X_0', 'D']]).fit()
results.summary()
Explanation: Ordinary Least Squares
We first look at a regression in the baseline sample in more detail.
End of explanation
effects = collect_effects(model_base, 'ordinary_least_squares', 10)
plot_estimates(true_effect, effects)
Explanation: Now we again investigate the effect of essential heterogeneity on our estimates.
End of explanation
result = IV2SLS(df_base['Y'], df_base[['X_0', 'D']], df_base[['X_0', 'Z_1']]).fit()
result.summary()
Explanation: Conventional Instrumental Variables Regression
End of explanation
effects = collect_effects(model_base, 'conventional_instrumental_variables', 10)
plot_estimates(true_effect, effects)
Explanation: Now we introduce essential heterogeneity.
End of explanation
rslt = grmpy.estimate('mc_exploration.grmpy.ini')
print('Effect ', get_effect_grmpy())
effects = collect_effects(model_base, 'local_instrumental_variables', 10)
plot_estimates(true_effect, effects)
Explanation: Local Instrumental Variables
We look at our baseline specification first.
End of explanation
plot_joint_distribution_potential(df)
Explanation: Other Objects of Interest
Let us conclude by revisiting some additional objects of interest such as the joint distribution of potential outcomes and benefits and surplus. All of these can be obtained when, for example, the requirements for a factor structure approach as in Carneiro (2003) are met.
End of explanation
plot_joint_distribution_benefits_surplus(model_base, df)
Explanation: Now we turn to the joint distribution of benefits and costs. What is the meaning of each quadrant?
End of explanation |
4,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Easy Ab initio calculation with ASE-Siesta-Pyscf
No installation is necessary; just download a ready-to-go container for any system, or run it in the cloud
Are we really on the Amazon cloud??
Step1: I do not have on my laptop an
Step2: We can then run the DFT calculation using Siesta
Step3: The TDDFT calculations with PySCF-NAO | Python Code:
cat /proc/cpuinfo
Explanation: Easy Ab initio calculation with ASE-Siesta-Pyscf
No installation is necessary; just download a ready-to-go container for any system, or run it in the cloud
Are we really on the Amazon cloud??
End of explanation
# import libraries and set up the molecule geometry
from ase.units import Ry, eV, Ha
from ase.calculators.siesta import Siesta
from ase import Atoms
import numpy as np
import matplotlib.pyplot as plt
H2O = Atoms('H2O', positions = [[-0.757, 0.586, 0.000],
[0.757, 0.586, 0.000],
[0.0, 0.0, 0.0]],
cell=[20, 20, 20])
# visualization of the particle
from ase.visualize import view
view(H2O, viewer='x3d')
Explanation: I do not have an Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz on my laptop, so we really are running on the cloud machine.
We first import the necessary libraries and define the system using ASE
End of explanation
# enter siesta input and run siesta
siesta = Siesta(
mesh_cutoff=150 * Ry,
basis_set='DZP',
pseudo_qualifier='lda',
energy_shift=(10 * 10**-3) * eV,
fdf_arguments={
'SCFMustConverge': False,
'COOP.Write': True,
'WriteDenchar': True,
'PAO.BasisType': 'split',
'DM.Tolerance': 1e-4,
'DM.MixingWeight': 0.1,
'MaxSCFIterations': 300,
'DM.NumberPulay': 4,
'XML.Write': True})
H2O.set_calculator(siesta)
e = H2O.get_potential_energy()
Explanation: We can then run the DFT calculation using Siesta
End of explanation
# compute polarizability using pyscf-nao
siesta.pyscf_tddft(label="siesta", jcutoff=7, iter_broadening=0.15/Ha,
xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = np.arange(0.0, 15.0, 0.05))
# plot polarizability with matplotlib
%matplotlib inline
fig = plt.figure(1)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(siesta.results["freq range"], siesta.results["polarizability nonin"][:, 0, 0].imag)
ax2.plot(siesta.results["freq range"], siesta.results["polarizability inter"][:, 0, 0].imag)
ax1.set_xlabel(r"$\omega$ (eV)")
ax2.set_xlabel(r"$\omega$ (eV)")
ax1.set_ylabel(r"Im($P_{xx}$) (au)")
ax2.set_ylabel(r"Im($P_{xx}$) (au)")
ax1.set_title(r"Non interacting")
ax2.set_title(r"Interacting")
fig.tight_layout()
Explanation: The TDDFT calculations with PySCF-NAO
End of explanation |
4,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
See the FieldTrip website_ for a caveat regarding
the possible interpretation of "significant" clusters.
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor connectivity
Step4: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read
Step5: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters | Python Code:
# Authors: Denis Engemann <[email protected]>
# Jona Sassenhagen <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import find_ch_connectivity
from mne.viz import plot_compare_evokeds
print(__doc__)
Explanation: Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
See the FieldTrip website_ for a caveat regarding
the possible interpretation of "significant" clusters.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, fir_design='firwin')
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
X = [epochs[k].get_data() for k in event_id] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
Explanation: Read epochs for the channel of interest
End of explanation
connectivity, ch_names = find_ch_connectivity(epochs.info, ch_type='mag')
print(type(connectivity)) # it's a sparse matrix!
plt.imshow(connectivity.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
Explanation: Find the FieldTrip neighbor definition to setup sensor connectivity
End of explanation
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.01
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1, buffer_size=None,
connectivity=connectivity)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
Explanation: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read:
Maris/Oostenveld (2007), "Nonparametric statistical testing of EEG- and
MEG-data" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.
doi:10.1016/j.jneumeth.2007.03.024
End of explanation
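As a side note (not part of the original MNE example), the permutation logic described above can be sketched in a few lines of plain numpy; the toy data, trial counts, and permutation count below are arbitrary choices for illustration only.
import numpy as np
rng = np.random.RandomState(0)
# toy data: 20 trials per condition, a single feature per trial
cond_a = rng.normal(0.5, 1.0, 20)
cond_b = rng.normal(0.0, 1.0, 20)
observed = np.abs(cond_a.mean() - cond_b.mean())
# build a null distribution by shuffling the condition labels and recomputing the statistic
pooled = np.concatenate([cond_a, cond_b])
null = np.empty(1000)
for k in range(1000):
    perm = rng.permutation(pooled)
    null[k] = np.abs(perm[:20].mean() - perm[20:].mean())
# probability of seeing a statistic at least this large when labels are exchangeable
p_value = (np.sum(null >= observed) + 1.0) / (null.size + 1.0)
print(p_value)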
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters
End of explanation |
4,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
The goal of this notebook is to explore better methods for the final l2 centroid match during registration in the pipeline.
Generate Data
Step1: Benchmark Current Approach
Step2: KD Tree Matching
In order to solve the runtime issue, I decided to perform only 1-nearest-neighbor lookups (since that's what we really care about). The KD tree answers each lookup in O(log n) time, with an O(n log n) cost to build the tree. | Python Code:
def newRandomCentroids(n, l, u):
diff = u-l
return [[random()*diff+l for _ in range(3)] for _ in range(n)]
newRandomCentroids(10, 10, 100)
Explanation: Goal
The goal of this notebook is to explore better methods for the final l2 centroid match during registration in the pipeline.
Generate Data
End of explanation
def l2(a, b):
return np.sqrt(np.sum(np.power(np.subtract(a, b), 2)))
def bruteMatch(A, B, r):
pairs = []
for a in A:
loss = [l2(a, b) for b in B]
if np.min(loss) < r:
pairs.append([a, B[np.argmin(loss)]])
else:
pairs.append([a, [0, 0, 0]])
return pairs
def makePairSet(n, l, u, o):
A = newRandomCentroids(n, l, u)
B = [[(elem[0]+random()*o)-o/2, (elem[1]+random()*o)-o/2, (elem[2]+random()*o)-o/2] for elem in A]
return A, B
A, B = makePairSet(10, 0, 100, 10)
reshapeA = zip(*(A))
reshapeB = zip(*(B))
trace1 = go.Scatter3d(
x = reshapeA[0],
y = reshapeA[1],
z = reshapeA[2],
mode = 'markers',
marker = dict(
size=12,
color=100,
opacity=.7
)
)
trace2 = go.Scatter3d(
x = reshapeB[0],
y = reshapeB[1],
z = reshapeB[2],
mode = 'markers',
marker = dict(
size=12,
color=0,
opacity=.710
)
)
data = [trace1, trace2]
layout = go.Layout(margin=dict(l=0, r=0, t=0, b=0))
fig = go.Figure(data=data, layout=layout)
iplot(fig)
pairs = bruteMatch(A, B, 100)
data = []
for pair in pairs:
i = "rgb(" + str(random()*255) + ',' + str(random()*255) + ',' + str(random()*255)+')'
data.append(go.Scatter3d(
x = zip(*(pair))[0],
y = zip(*(pair))[1],
z = zip(*(pair))[2],
marker = dict(size=12, color=i, opacity=.7),
line = dict(color=i, width=1)
)
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
timeStats = []
for i in range(2000,10000,2000):
A, B = makePairSet(i, 0, 1000, 100)
print i
s = time.time()
pairs = bruteMatch(A, B, 100)
e = time.time()
timeStats.append([i, e-s])
plt.figure()
plt.title('Run time vs Number of points')
x, y = zip(*(timeStats))
plt.scatter(x, y)
plt.show()
Explanation: Benchmark Current Approach
End of explanation
def KDMatch(A, B, r):
tree = KDTree(B)
pairs = []
for a in A:
dist, idx = tree.query(a, k=1, distance_upper_bound = r)
if dist == float('Inf'):
pairs.append([a, [0, 0, 0]])  # no neighbor within r: append a placeholder pair (list.append takes a single argument)
else:
pairs.append([a, B[idx]])
return pairs
A, B = makePairSet(10, 0, 100, 10)
reshapeA = zip(*(A))
reshapeB = zip(*(B))
trace1 = go.Scatter3d(
x = reshapeA[0],
y = reshapeA[1],
z = reshapeA[2],
mode = 'markers',
marker = dict(
size=12,
color=100,
opacity=.7
)
)
trace2 = go.Scatter3d(
x = reshapeB[0],
y = reshapeB[1],
z = reshapeB[2],
mode = 'markers',
marker = dict(
size=12,
color=0,
opacity=.710
)
)
data = [trace1, trace2]
layout = go.Layout(margin=dict(l=0, r=0, t=0, b=0))
fig = go.Figure(data=data, layout=layout)
iplot(fig)
pairs = KDMatch(A, B, 10)
data = []
for pair in pairs:
i = "rgb(" + str(random()*255) + ',' + str(random()*255) + ',' + str(random()*255)+')'
data.append(go.Scatter3d(
x = zip(*(pair))[0],
y = zip(*(pair))[1],
z = zip(*(pair))[2],
marker = dict(size=12, color=i, opacity=.7),
line = dict(color=i, width=1)
)
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
timeStats = []
for i in range(2000,10000,2000):
A, B = makePairSet(i, 0, 1000, 100)
print i
s = time.time()
pairs = KDMatch(A, B, 100)
e = time.time()
timeStats.append([i, e-s])
plt.figure()
plt.title('Run time vs Number of points')
x, y = zip(*(timeStats))
plt.scatter(x, y)
plt.show()
kdTimeStats = []
bruteTimeStats = []
for i in range(2000,10000,2000):
A, B = makePairSet(i, 0, 1000, 100)
print i
s = time.time()
pairs = KDMatch(A, B, 100)
e = time.time()
kdTimeStats.append([i, e-s])
s = time.time()
pairs = bruteMatch(A, B, 100)
e = time.time()
bruteTimeStats.append([i, e-s])
plt.figure()
plt.title('Run time vs Number of points (KD=Red, Brute=Blue)')
x, y = zip(*(kdTimeStats))
plt.scatter(x, y, c='r')
x, y = zip(*(bruteTimeStats))
plt.scatter(x, y, c='b')
plt.show()
Explanation: KD Tree Matching
In order to solve the runtime issue, I decided to perform only 1-nearest-neighbor lookups (since that's what we really care about). The KD tree answers each lookup in O(log n) time, with an O(n log n) cost to build the tree.
End of explanation |
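As an aside (a sketch that is not part of the original notebook), scipy's cKDTree can also answer all queries in one vectorized call, which removes the Python-level loop over A; the helper name and the [0, 0, 0] placeholder convention below simply mirror the functions above.
import numpy as np
from scipy.spatial import cKDTree
def kd_match_vectorized(A, B, r):
    A, B = np.asarray(A), np.asarray(B)
    tree = cKDTree(B)
    # query all points of A at once; unmatched points come back with an infinite distance
    dist, idx = tree.query(A, k=1, distance_upper_bound=r)
    matched = np.isfinite(dist)
    pairs = []
    for a, ok, j in zip(A, matched, idx):
        pairs.append([a.tolist(), B[j].tolist() if ok else [0, 0, 0]])
    return pairs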
4,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Airfoil example
In this example we are building a NACA 2412 airfoil from a list of points.
Lets import everything we need
Step1: Now we build up an array of points from a NACA generator. It consists of 21 x and y coordinates. Since TiGl always works in 3 dimensions, we have to add dummy z values of 0.
Step2: Interpolation of the airfoil with a B-spline
TiGL brings many algorithms that build curves and surfaces. The core algorithms can be found in the tigl3.geometry package. However, these algorithms depend on the OpenCASCADE data structures. To make it more convenient, the tigl3.curve_factories package offers additional Python functions to create curves.
The most basic function is tigl3.curve_factories.points_to_curve. This takes an array of points and builds up the interpolating b-spline.
Step3: The points_to_curve function has some optional parameters as well.
- degree | Python Code:
import tigl3.curve_factories
from OCC.gp import gp_Pnt
from OCC.Display.SimpleGui import init_display
Explanation: Airfoil example
In this example we are building a NACA 2412 airfoil from a list of points.
Lets import everything we need:
End of explanation
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198, 0.072947, 0.054325, 0.028152, 0.0, -0.023408, -0.037507, -0.042346, -0.039941, -0.033493, -0.0245, -0.015499, -0.008033, -0.003035, -0.001257]
points = [pnt for pnt in zip(px, py, [0.]*len(px))]
Explanation: Now we build up an array of points from a NACA generator. It consists of 21 x and y coordinates. Since TiGl always works in 3 dimensions, we have to add dummy z values of 0.
End of explanation
curve = tigl3.curve_factories.interpolate_points(points)
# There are more parameters to control the outcome:
# curve = tigl3.curve_factories.points_to_curve(points, np.linspace(0,1, 21), close_continuous=False)
Explanation: Interpolation of the airfoil with a B-spline
TiGL brings many algorithms that build curves and surfaces. The core algorithms can be found in the tigl3.geometry package. However, these algorithms depend on the OpenCASCADE data structures. To make it more convenient, the tigl3.curve_factories package offers additional Python functions to create curves.
The most basic function is tigl3.curve_factories.points_to_curve. This takes an array of points and builds up the interpolating b-spline.
End of explanation
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.0001)
# draw the points
for point in points:
display.DisplayShape(gp_Pnt(*point), update=False)
# draw the curve
display.DisplayShape(curve)
# match content to screen and start the event loop
display.FitAll()
start_display()
Explanation: The points_to_curve function has some optional parameters as well.
- degree: Controls the polynomial degree. If degree=1, the curve will be piecewise linear.
- params: Controls, at which parameter the points will be interpolated. This array must have the same number of items as the points array!
- close_continuous: If you interpolate e.g. a fuselage section, you probably want a continuous transition of the curve at the start and the end of the section. If close_continuous=True, the transition will be continuous. For wings, where a discontinuous trailing edge is desired, it should be False.
Visualization of the result
Now let's visualize the result. We are using the pythonOCC SimpleGui to draw the curve and the points. The Jupyter renderer does not yet support curves and points (only surfaces).
We first draw all the points without updating the viewer. This would be very slow.
Then, we draw the curve.
Note, a separate window will open!
End of explanation |
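A small, hedged sketch of the optional parameters mentioned above; the call signature is copied from the commented-out points_to_curve example in the interpolation cell, so treat the exact keyword names (params, close_continuous, degree) as assumptions if your TiGL version differs. It reuses the points list defined earlier.
import numpy as np
import tigl3.curve_factories
# one interpolation parameter per input point, evenly spaced between 0 and 1
params = np.linspace(0.0, 1.0, len(points))
# degree=1 should give a piecewise-linear interpolation instead of the default smooth fit
linear_curve = tigl3.curve_factories.points_to_curve(points, params, close_continuous=False, degree=1)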
4,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machine (SVM)
(Maximal margin classifiers)
Support Vector Machines (SVM) separates classes of data by maximizing the "space" (margin) between pairs of these groups. Classification for multiple classes is then supported by a one-vs-all method (just like we previously did for Logistic Regression for Multi-class classification).
Introduction to Support Vector Machines
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.
In which sense is the hyperplane obtained optimal? Let’s consider the following simple problem
Step1: In the above picture you can see that there exists multiple lines that offer a solution to the problem. Is any of them better than the others? We can intuitively define a criterion to estimate the worth of the lines
Step2: In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked for belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
The advantages of support vector machines are
Step3: First we'll start by importing the Data set we are already very familiar with the Iris Data Set from last lecture
Step4: Now we will import the SVC (Support Vector Classification) from the SVM library of Sci Kit Learn, I encourage you to check out the other types of SVM options in the Sci Kit Learn Documentation!
Step5: Now we will split the data into a training set and a testing set and then train our model.
Step6: Now we'll go ahead and see how well our model did!
Step7: Looks like we have achieved a 100% accuracy with Support Vector Classification!
Now that we've gone through a basic implementation of SVM lets go ahead and quickly explore the various kernel types we can use for classification. We can do this by plotting out the boundaries created by each kernel type! We'll start with some imports and by setting up the data.
If we want to do non-linear classification we can employ the kernel trick. Using the kernel trick we can "slice" the feature space with a Hyperplane. For a quick illustraion of what this looks like, check out both the image and the video below!
Step8: The four methods we will explore are two linear models, a Gaussian Radial Basis Function,and a SVC with a polynomial (3rd Degree) kernel.
The linear models LinearSVC() and SVC(kernel='linear') yield slightly different decision boundaries. This can be a consequence of the following differences
Step9: Now that we have fitted the four models, we will go ahead and begin the process of setting up the visual plots. Note
Step10: Now the plot titles
Step11: Finally we will go through each model, set its position as a subplot, then scatter the data points and draw a countour of the decision boundaries. | Python Code:
from IPython.display import Image
Image(url="http://docs.opencv.org/2.4/_images/separating-lines.png")
Explanation: Support Vector Machine (SVM)
(Maximal margin classifiers)
Support Vector Machines (SVM) separates classes of data by maximizing the "space" (margin) between pairs of these groups. Classification for multiple classes is then supported by a one-vs-all method (just like we previously did for Logistic Regression for Multi-class classification).
Introduction to Support Vector Machines
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.
In which sense is the hyperplane obtained optimal? Let’s consider the following simple problem:
We'll start by imagining a situation in which we want to separate a training set with two classes. We have two classes in our set, blue and red. We plot them out in the feature space and we try to place a green line that separates both classes.
End of explanation
Image(url="http://docs.opencv.org/2.4/_images/optimal-hyperplane.png")
Explanation: In the above picture you can see that there exists multiple lines that offer a solution to the problem. Is any of them better than the others? We can intuitively define a criterion to estimate the worth of the lines:
A line is bad if it passes too close to the points because it will be noise sensitive and it will not generalize correctly. Therefore, our goal should be to find the line passing as far as possible from all points.
Then, the operation of the SVM algorithm is based on finding the hyperplane that gives the largest minimum distance to the training examples. Twice, this distance receives the important name of margin within SVM’s theory. Therefore, the optimal separating hyperplane maximizes the margin of the training data.
End of explanation
#Imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked for belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
The advantages of support vector machines are:
Effective in high dimensional spaces.
Still effective in cases where number of dimensions is greater than the number of samples.
Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
The disadvantages of support vector machines include:
If the number of features is much greater than the number of samples, the method is likely to give poor performances.
SVMs do not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).
So how do we actually mathematically compute that optimal hyperplane? A full explanation can be found in Wikipedia
SVM with Sci-Kit Learn
Now we are ready to jump into some Python code and Sci-Kit Learn, we'll start with some basic imports and we will import Sci Kit Learn along the way while we use it.
End of explanation
from sklearn import datasets
# load the iris datasets
iris = datasets.load_iris()
# Grab features (X) and the Target (Y)
X = iris.data
Y = iris.target
# Show the Built-in Data Description
print iris.DESCR
Explanation: First we'll start by importing the Data set we are already very familiar with the Iris Data Set from last lecture:
End of explanation
# Support Vector Machine Imports
from sklearn.svm import SVC
# Fit a SVM model to the data
model = SVC()
Explanation: Now we will import the SVC (Support Vector Classification) from the SVM library of Sci Kit Learn, I encourage you to check out the other types of SVM options in the Sci Kit Learn Documentation!
End of explanation
from sklearn.cross_validation import train_test_split
# Split the data into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
# Fit the model
model.fit(X_train,Y_train)
Explanation: Now we will split the data into a training set and a testing set and then train our model.
End of explanation
from sklearn import metrics
# Get predictions
predicted = model.predict(X_test)
expected = Y_test
# Compare results
print metrics.accuracy_score(expected,predicted)
Explanation: Now we'll go ahead and see how well our model did!
End of explanation
# Kernel Trick for the Feature Space
from IPython.display import Image
url='http://i.imgur.com/WuxyO.png'
Image(url)
# Kernel Trick Visualization
from IPython.display import YouTubeVideo
YouTubeVideo('3liCbRZPrZA')
Explanation: Looks like we have achieved a 100% accuracy with Support Vector Classification!
Now that we've gone through a basic implementation of SVM lets go ahead and quickly explore the various kernel types we can use for classification. We can do this by plotting out the boundaries created by each kernel type! We'll start with some imports and by setting up the data.
If we want to do non-linear classification we can employ the kernel trick. Using the kernel trick we can "slice" the feature space with a hyperplane. For a quick illustration of what this looks like, check out both the image and the video below!
End of explanation
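Before moving on, here is a tiny numeric illustration (not from the original lecture) of what an RBF kernel actually computes: K(x, y) = exp(-gamma * ||x - y||^2), so nearby points score close to 1 and distant points close to 0; the sample coordinates and gamma value below are arbitrary.
import numpy as np
def rbf_kernel_value(x, y, gamma=0.7):
    # gamma scales the squared Euclidean distance between the two feature vectors
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-gamma * np.sum((x - y) ** 2))
print(rbf_kernel_value([5.1, 3.5], [5.0, 3.4]))  # close pair -> value near 1
print(rbf_kernel_value([5.1, 3.5], [7.9, 2.0]))  # distant pair -> value near 0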
# Import all SVM
from sklearn import svm
# We'll use all the data and not bother with a split between training and testing. We'll also only use two features.
X = iris.data[:,:2]
Y = iris.target
# SVM regularization parameter
C = 1.0
# SVC with a Linear Kernel (our original example)
svc = svm.SVC(kernel='linear', C=C).fit(X, Y)
# Gaussian Radial Basis Function
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, Y)
# SVC with 3rd degree polynomial
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, Y)
# SVC Linear
lin_svc = svm.LinearSVC(C=C).fit(X,Y)
Explanation: The four methods we will explore are two linear models, a Gaussian Radial Basis Function, and an SVC with a polynomial (3rd degree) kernel.
The linear models LinearSVC() and SVC(kernel='linear') yield slightly different decision boundaries. This can be a consequence of the following differences:
LinearSVC minimizes the squared hinge loss while SVC minimizes the regular hinge loss.
LinearSVC uses the One-vs-All (also known as One-vs-Rest) multiclass reduction while SVC uses the One-vs-One multiclass reduction.
End of explanation
# Set the step size
h = 0.02
# X axis min and max
x_min=X[:, 0].min() - 1
x_max =X[:, 0].max() + 1
# Y axis min and max
y_min = X[:, 1].min() - 1
y_max = X[:, 1].max() + 1
# Finally, numpy can create a meshgrid
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))
Explanation: Now that we have fitted the four models, we will go ahead and begin the process of setting up the visual plots. Note: This example is taken from the Sci-Kit Learn Documentation.
First we define a mesh to plot in. We define the max and min of the plot for the y and x axis by the smallest and larget features in the data set. We can use numpy's built in mesh grid method to construct our plot.
End of explanation
# title for the plots
titles = ['SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel']
Explanation: Now the plot titles
End of explanation
# Use enumerate for a count
for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
plt.figure(figsize=(15,15))
# Set the subplot position (size = 2 by 2, position defined by the i count)
plt.subplot(2, 2, i + 1)
# Subplot spacing
plt.subplots_adjust(wspace=0.4, hspace=0.4)
# Define Z as the prediction; note the use of ravel to format the arrays
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
# Contour plot (filled with contourf)
plt.contourf(xx, yy, Z, cmap=plt.cm.terrain, alpha=0.5)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Dark2)
# Labels and Titles
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
Explanation: Finally we will go through each model, set its position as a subplot, then scatter the data points and draw a contour of the decision boundaries.
End of explanation |
4,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content Based Filtering by hand
This lab illustrates how to implement a content based filter using low level Tensorflow operations.
The code here follows the technique explained in Module 2 of Recommendation Engines
Step1: Make sure to restart your kernel to ensure this change has taken place.
Step2: To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users).
In this example, we will assume our database consists of four users and six movies, listed below.
Step3: Initialize our users, movie ratings and features
We'll need to enter the user's movie ratings and the k-hot encoded movie features matrix. Each row of the users_movies matrix represents a single user's rating (from 1 to 10) for each movie. A zero indicates that the user has not seen/rated that movie. The movies_feats matrix contains the features for each of the given movies. Each row represents one of the six movies, the columns represent the five categories. A one indicates that a movie fits within a given genre/category.
Step4: Computing the user feature matrix
We will compute the user feature matrix; that is, a matrix containing each user's embedding in the five-dimensional feature space.
Step5: Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes it so that rating magnitudes will be comparable between users.
Step6: Ranking feature relevance for each user
We can use the users_feats computed above to represent the relative importance of each movie category for each user.
Step7: Determining movie recommendations.
We'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user.
To compute the projected ratings for each movie, we compute the similarity measure between the user's feature vector and the corresponding movie feature vector.
We will use the dot product as our similarity measure. In essence, this is a weighted movie average for each user.
Step8: The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the all_users_ratings matrix.
If a user has already rated a movie, we ignore that rating. This way, we only focus on ratings for previously unseen/unrated movies.
Step9: Finally let's grab and print out the top 2 rated movies for each user | Python Code:
!pip install tensorflow==2.5
Explanation: Content Based Filtering by hand
This lab illustrates how to implement a content based filter using low level Tensorflow operations.
The code here follows the technique explained in Module 2 of Recommendation Engines: Content Based Filtering.
End of explanation
import numpy as np
import tensorflow as tf
print(tf.__version__)
Explanation: Make sure to restart your kernel to ensure this change has taken place.
End of explanation
users = ['Ryan', 'Danielle', 'Vijay', 'Chris']
movies = ['Star Wars', 'The Dark Knight', 'Shrek', 'The Incredibles', 'Bleu', 'Memento']
features = ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']
num_users = len(users)
num_movies = len(movies)
num_feats = len(features)
num_recommendations = 2
Explanation: To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users).
In this example, we will assume our database consists of four users and six movies, listed below.
End of explanation
# each row represents a user's rating for the different movies
users_movies = tf.constant([
[4, 6, 8, 0, 0, 0],
[0, 0, 10, 0, 8, 3],
[0, 6, 0, 0, 3, 7],
[10, 9, 0, 5, 0, 2]],dtype=tf.float32)
# features of the movies one-hot encoded
# e.g. columns could represent ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']
movies_feats = tf.constant([
[1, 1, 0, 0, 1],
[1, 1, 0, 0, 0],
[0, 0, 1, 1, 0],
[1, 0, 1, 1, 0],
[0, 0, 0, 0, 1],
[1, 0, 0, 0, 1]],dtype=tf.float32)
Explanation: Initialize our users, movie ratings and features
We'll need to enter the user's movie ratings and the k-hot encoded movie features matrix. Each row of the users_movies matrix represents a single user's rating (from 1 to 10) for each movie. A zero indicates that the user has not seen/rated that movie. The movies_feats matrix contains the features for each of the given movies. Each row represents one of the six movies, the columns represent the five categories. A one indicates that a movie fits within a given genre/category.
End of explanation
users_feats = tf.matmul(users_movies,movies_feats)
users_feats
Explanation: Computing the user feature matrix
We will compute the user feature matrix; that is, a matrix containing each user's embedding in the five-dimensional feature space.
End of explanation
users_feats = users_feats/tf.reduce_sum(users_feats,axis=1,keepdims=True)
users_feats
Explanation: Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes it so that rating magnitudes will be comparable between users.
End of explanation
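As an optional sanity check (not part of the original lab), each row of the normalized users_feats should now sum to 1:
import tensorflow as tf
# expect a vector of ones, one entry per user
print(tf.reduce_sum(users_feats, axis=1))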
top_users_features = tf.nn.top_k(users_feats, num_feats)[1]
top_users_features
for i in range(num_users):
feature_names = [features[int(index)] for index in top_users_features[i]]
print('{}: {}'.format(users[i],feature_names))
Explanation: Ranking feature relevance for each user
We can use the users_feats computed above to represent the relative importance of each movie category for each user.
End of explanation
users_ratings = tf.matmul(users_feats,tf.transpose(movies_feats))
users_ratings
Explanation: Determining movie recommendations.
We'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user.
To compute the projected ratings for each movie, we compute the similarity measure between the user's feature vector and the corresponding movie feature vector.
We will use the dot product as our similarity measure. In essence, this is a weighted movie average for each user.
End of explanation
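To make the dot-product interpretation concrete, here is a small check (not part of the original lab, and assuming eager execution so .numpy() is available): the (user, movie) entry of users_ratings is just the dot product of that user's normalized feature vector with the movie's k-hot genre vector.
import numpy as np
first_user_feats = users_feats.numpy()[0]    # e.g. the first user's normalized feature weights
first_movie_feats = movies_feats.numpy()[0]  # e.g. the first movie's k-hot genre vector
print(np.dot(first_user_feats, first_movie_feats))  # should match users_ratings[0, 0]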
users_ratings_new = tf.where(tf.equal(users_movies, tf.zeros_like(users_movies)),
users_ratings,
tf.zeros_like(tf.cast(users_movies, tf.float32)))
users_ratings_new
Explanation: The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the all_users_ratings matrix.
If a user has already rated a movie, we ignore that rating. This way, we only focus on ratings for previously unseen/unrated movies.
End of explanation
top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]
top_movies
for i in range(num_users):
movie_names = [movies[index] for index in top_movies[i]]
print('{}: {}'.format(users[i],movie_names))
Explanation: Finally let's grab and print out the top 2 rated movies for each user
End of explanation |
4,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Carnegie Python Bootcamp
Welcome to the python bootcamp. This thing you're reading is called an ipython notebook and will be your first introduction to the Python programming language. Notebooks are a combination of text markup and code that you can run in real time.
Importing what you need
It is often said that python comes with the batteries included, which means it comes with almost everything you need, bundled up in separate modules. But not everything is loaded into memory automatically. You need to import the modules you need. Try running the following to import the antigravity module. Just hit CTRL-ENTER in the cell to execute the code
Step1: You will find that python is full of nerdy humour like this. By the way, python is named after Monty Python's Flying Circus, not the reptile. Let's go ahead and import some useful modules you will need. Use this next command cell to import the modules named os, sys, and numpy. You can import them one-by-one or all at once, separated by commas.
Step2: The os module gives you functions relating to the operating system, sys gives you information on the python interpreter and your script, and numpy is the mathematical powerhorse of python, allowing you to manipulate arrays of numbers.
Once you import a module, all the functions and data for that module live in its namespace. So if I wanted to use the getcwd function from the os module, I would have to refer to it as os.getcwd. Try running getcwd all by itself; you'll get an error. Then do it the correct way
Step3: Now you can later use this value. Try printing out x using the print() function. You can also modify the variable. Try adding or subtracting from x and print it out its value again.
Step4: But be careful about the type. Try assigning the value of 1 to y. Now divide it by two and print out the result.
Step5: Here is where we get to the first major difference bewteen python 2 and python 3. In python 2, an integer divided by another integer is kept as an integer (this is simlar behavior to most other programming languages), so 1 divided by 2 is 0. In python 3.x, division always produces a real number (called a float), so 1 divided 2 is 0.5. If you want integer division in python 3, use // instead. This kind of division also works in python 2, so it's worth getting used to it.
Repeat what you did before, but this time start by assigning the value of 1.0 to y first. Also try using integer division on a float variable.
Remeber that variables are simply containers (or labels if you prefer). They don't have a fixed type. Try using the type() function on y.
There are other types of variables. The most commonly used are strings, lists, and arrays. But literally anything can be assigned to a variable. You can even assign the numpy module to the variable np if you don't like typing numpy all the time. Try it out.
Now you can use np.sqrt() rather than numpy.sqrt(). Most python programmers do this with commonly-used modules. In fact, we usually just use the special form
Step6: We can use the len() function to determine the length of the string. Try this out below.
Strings are very similar to lists in python (we'll see that below). Each alpha-numeric character is an element in the string, and you can refer to individual elements (characters) using an index enclosed in square brackets ([]). You can also specify a range of indices by separating them with a colon (
Step7: By itself, the format string didn't do much. In order to inject the number, we use its .format() function.
Step8: You may want to make a format string with more than one number or string. You can do this by specifying multiple format codes and inject an equal number of arguments to the .format() function.
Step9: We decided to show the new style of string formatting, as it is the python 3 way of the future and is more powerful than the old style. Both styles are suppored in both versions of python and the reference above has plenty of examples of both.
Lists and numpy arrays
Lists contain a sequence of objects, typically, a sequence of numbers. The are enclosed by square brackets, (e.g., []), and each item separated by a comma. Here are some examples.
Step10: Lists can also contain a mixture of different types (even lists). Basically, anything can be an element of a list. Try making a list of strings, floats, and the list x1 above. Print it out to see what it looks like. Can you guess how to refer to an element of a list that's in another list?
Numpy arrays allow for more functionality than lists. While they may also contain a mix of object types, you will primarily be working with numpy arrays that are comprised of numbers
Step11: Try adding an integer to x and print it out. Then try other mathematical functions from the numpy module on it.
Here is where the real power of numpy arrays comes into play. We can use numpy to carry out all kinds of mathematical tasks that in other programming languages (like C, FORTRAN, etc) would require some kind of loop. Here are some of the most common tasks we'll use. By using numpy functions on arrays of numbers, we speed up the code a lot. This is commonly referred to as vectorizing your code.
Array creation
There are many functions in numpy that allow you to make arrays from scratch.
We can create an array of zeros
Step12: Take a guess at how to create a 5-element array of ones.
Now suppose you want all the elements to be equal to np.pi. How could you do that as a one-liner. Hint
Step13: Array Math
Just as with spreadsheets, you can do math on entire arrays using operators (+,-,*, /, and **) and numpy functions.
Try doing some math on your arrays x1, x2, and x3.
Step14: What happens if you try to add x1 to the matrix x you created above. Give it a try
Step15: The other common matrix tasks can also be done with either numpy functions, numpy.linalg, or member functions of the object itself
Step16: Array Slicing
Often, you need to access elements or sub-arrays within your array. This is referred to as slicing. We can select individual elements in an array using indices just as we did for strings (note that 0 is the first element and negative indices count backwards from the end). The most general slicing looks like [start
Step17: We can also "slice" N-dimensional arrays. Note that we have used reshape to transform a 1D array into a 2D array with the same total number of elements. This is another handy way to create N-dimensional arrays.
Step18: The reverse of reshape is ravel, which flattens a multi-dimensional array into a 1D array. Try this on x.
Control blocks
So far, we've been running individual sets of commands and looking at their results immediately. Later, we will write a complete program, which is really just bundling up instructions into a recipe for solving a given task. But as the tasks we want to perform become more complicated, we need control blocks. These allow us to
Step19: Notice that after the line containing the for statement, the subsequent lines are indented to indicate that they are all part of the same block. Every line that shares the same indenting will be repeated 5 times.
You can use a for loop to build up a list of elements by appending to an existing list (using its append() member function). For example, to create the first N elements of the Fibonacci sequence
Step20: You may notice (if N is large enough) that you get numbers that have an L at the end. python has a special type called a long integer which allows for abitrarily large numbers.
Here is an example of what not to do with for loops (if you can help it). For loops are more computationally expensive in python than using the numpy functions to do the math. Always try to cast the problem in terms of numpy math and functions. It will make your code faster. Try making N in the following example larger and larger and you'll see the difference.
Step21: Another way to implement the stopwatch is to use the iPython "magic" command %time
Step22: Try making N in the above code block bigger and see how the execution time goes up. Now do the exact same thing in the next code block, but use numpy functions without a loop. See how the execution time improves.
While loops
Similar to a for-loop, a while-loop executes the same code block repeatedly until a condition is no longer true. These are handy if you don't know ahead of time how long a loop will take, but you know you have to stop when a condition is true (or false). As an example, we can estimate the smallest floating point number (called machine-$\epsilon$) by continually dividing by 2 until we get zero.
Step23: Beware, though, that unlike for loops, you can have a never-ending loop if the condition is never false. Your computer is happy to keep grinding away forever. How could you safeguard against this?
If blocks
if statements act as a way to carry out a task only if a certain condition is met. As with the for loops above, everything that is indented after the if statement is part of the block of code that is executed if the condition is met. Likewise for the else block.
Step24: Note the two different uses of the equals sign
Step25: if-else statements execute the code in the if block if the condition is true, otherwise it executes the code in the else block
Step26: One can also have a series of conditions in the form of an elif block
Step27: You can also have multiple conditions that are evaluated
Step28: Exercise
Try this. Use a while loop and an if-else block to generate the Collatz sequence. Start the list with any positive integer. To get the next element of the list, check if the current element is even or odd. If even, divide it by 2. If odd, multiply by 3 and add 1. The sequence ends if you get to 1. Print out the length of the list. The Collatz conjecture states that the sequence will always convert to 1 eventually regardless of the starting integer. The proof of this conjecture is one of the great unsolved problems in mathematics.
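One possible solution sketch is below (try the exercise yourself first; the starting integer 27 is an arbitrary choice):
n = 27
sequence = [n]
while n != 1:
    if n % 2 == 0:     # even: divide by 2
        n = n // 2
    else:              # odd: multiply by 3 and add 1
        n = 3*n + 1
    sequence.append(n)
print(len(sequence))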
Functions
Functions allow you to make a bundle of python statements that are executed whenever the function is called. Each function has arguments you pass in and value(s) that are returned. For example, you've been using the print function. There are also some functions above that you have been using. Now we will make our own.
Step29: Let's put the if, for and def together into one example. Take the code above that generates a Fibonacci sequence and put it into a function called Fibonacci. The function should take one argument (the length of the sequence). It should check if N is less than 2 (which can't be done), or if N is greater than 1000 (which would take a very long time). If these conditions are met, print an error statement and return None (a python special object that generally indicates something went wrong). Otherwise, compute and return the sequence.
Give your function a test run. Make sure it behaves as it should.
Step30: Importing Python packages
Step31: Now you try. Generate a random array of numbers drawn from a Poisson distribution with expectation value 10. Check that the mean of the array is approximately 10 and the standard deviation is approximately sqrt(10).
Here is a practical example of using random numbers. Often in statistics, we have to compute the mean of some population based on a limited sample. For example, a survey may ask car drivers their age and make/model of car. A marketing team may want to know the average age of drivers of Ford Mustangs so they can target their audience. Calculating a mean is easy, but what about the uncertainty in that mean? You could compute the population standard deviation, but that pre-supposes the underlying distribution is Gaussian. Another method, that does not make any assumptions about the distribution is bootstrapping. Randomly remove values from the data and replace them with copies of other values. Compute a new mean. Do that N times and compute the standard deviation of these bootstrapped mean values.
Below is an example of bootstrapping a sample to determine the uncertainty on a measurement. In this case, we will compute the mean of a sample of ages, and the uncertainty on the mean.
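A minimal bootstrap sketch of the idea (the ages below are made up purely for illustration):
import numpy as np
ages = np.array([22, 25, 31, 34, 35, 41, 44, 47, 52, 58])
boot_means = []
for _ in range(1000):
    # resample the data with replacement and record the mean of each resample
    resample = np.random.choice(ages, size=len(ages), replace=True)
    boot_means.append(resample.mean())
print(ages.mean(), np.std(boot_means))  # the mean and its bootstrapped uncertainty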
Step32: Binning Data
Another common statistical task is binning data. This is often done on noisy spectra to make "real" features more visible. In the following example, we are going to bin galaxies based on their redshift and compute the mean stellar mass for each bin.
Step33: Interpolation
In science, we measure discrete values of data. Sometimes you need to interpolate between two (or more) points. A common example is drawing a smooth line through the data when making a graph. You could do this by hand using numpy, but the module scipy has an entire interpolation package that offers an easy solution.
Step34: Astronomy-specific Example
Now we get into a really specific case. Astropy is a collection of several packages that are very useful for astronomers. It is actively developed and has new stuff all the time. Here, we show how you can use the cosmology calculator to compute the age of the Universe given a redshift.
Step35: Running code
There are many environments in which one can run Python code
Step36: os.path
Up until now, we have been printing output to the screen. You can still do that with command-line scripts, but once you close down the terminal, that output is lost. Your code will need to write to output files, but likely will also have to read from files. os.path gives you some functions that are useful when dealing with files.
You may have to check to see if a file exists, if a folder exists, etc.
Step37: File access with open
In order to access the contents of an existing file, or write contents into a new file, you use the open function. This will return a file object that you can use to read or write data. Here are some common tasks. First, let's write data to a file
Step38: Now, depending what your current working directory is (see beginning of tutorial), there will be a file called a_test_file in that folder. If you like, have a look at it using an editor or file viewer. It will have 10 rows and 3 columns. Note that we needed to have a "newline" (\n) at the end of the string we used in the write() function, otherwise the output would have been one long line.
Now, let's read the file back into python. You can either read the whole thing in at once as a single string, read it in line-by-line or read all lines in at once as a list.
Step39: Use print() to have a look at the data we read in. Note that the rows will include the newline character (\n). You can use the string.strip() function to get rid of it.
In the next tutorial (visualization), we'll show you a better way to read in standard format data files like this. But sometimes you'll be faced with data that these more automatic functions can't handle, so it's good to know how to read it in by hand.
Debugging
You almost never write a script without introducing bugs (the term comes from when computers were mechanical machines and insects literallly interfered with the running of the code). Luckily, python gives a very nice "traceback" report when it encounters a problem with what you've written. let's just generate a mistake on purpose and see what happens. | Python Code:
import antigravity
Explanation: Carnegie Python Bootcamp
Welcome to the python bootcamp. This thing you're reading is called an ipython notebook and will be your first introduction to the Python programming language. Notebooks are a combination of text markup and code that you can run in real time.
Importing what you need
It is often said that python comes with the batteries included, which means it comes with almost everything you need, bundled up in separate modules. But not everything is loaded into memory automatically. You need to import the modules you need. Try running the following to import the antigravity module. Just hit CTRL-ENTER in the cell to execute the code
End of explanation
# Use this command box to run your own commands.
# By the way, Python ignores anything after a hash (#) symbol, so good for comments
Explanation: You will find that python is full of nerdy humour like this. By the way, python is named after Monty Python's Flying Circus, not the reptile. Let's go ahead and import some useful modules you will need. Use this next command cell to import the modules named os, sys, and numpy. You can import them one-by-one or all at once, separated by commas.
End of explanation
x=1
Explanation: The os module gives you functions relating to the operating system, sys gives you information on the python interpreter and your script, and numpy is the mathematical powerhouse of python, allowing you to manipulate arrays of numbers.
Once you import a module, all the functions and data for that module live in its namespace. So if I wanted to use the getcwd function from the os module, I would have to refer to it as os.getcwd. Try running getcwd all by itself; you'll get an error. Then do it the correct way:
As you can probably guess, os.getcwd gets the current working directory. Try some commands for yourself. The numpy module has many mathematical functions. Try computing the square root (sqrt) of 2. You can also try computing $\sin(\pi)$ (numpy has a built-in value numpy.pi) and even $e^{i\pi}$ (python uses the engineering convention of 1j as $\sqrt{-1}$).
There can also be namespaces inside namespaces. For example, to test if a file exists, you would want to use the isfile() function within the path module, which is itself in the os module. Give it a try:
Here is a non-comprehensive list of modules you may find useful. Documentation for all of them can be found with a quick google search.
os, sys: As mentioned above, these give you access to the operating system, files, and environment.
numpy: Gives you arrays (vectors, matrices) and the ability to do math on them.
scipy: Think of this as "Numerical Recipes for Python". Root-finding, fitting functions, integration, special mathematical functions, etc.
pandas: Primarily used for reading/writing data tables. Useful for data wrangling.
astropy: Astronomy-related functions, including FITS image reading, astrometry, and cosmological calculations.
Getting Help
Almost everything in python has documentation. Just use the help() function on anything in python to get information. Try running help() on a function you used previously. Try as many others as you like.
Variables and Types
A variable is like a box that you put stuff (values) in for later use. In Python, there are lots of different types of variables corresponding to what you put in. Unlike other languages, you don't have to tell python what type each variable is: python figures it out on its own. To put the value 1 into a box called x, just use the equals sign, like you would when solving a math problem.
End of explanation
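If you get stuck on the exercises above, here is a minimal sketch of what those calls could look like (the file path checked at the end is just an arbitrary example):
# Possible answers to the namespace exercises above
import os
import numpy
print(os.getcwd())                   # current working directory
print(numpy.sqrt(2))                 # 1.4142135623730951
print(numpy.sin(numpy.pi))           # ~1.2e-16, i.e. zero to within floating-point precision
print(numpy.exp(1j*numpy.pi))        # Euler's identity: approximately (-1+1.2e-16j)
print(os.path.isfile('/etc/hosts'))  # True only if that file exists on your system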
print(x)
x=x+1
print(x)
Explanation: Now you can later use this value. Try printing out x using the print() function. You can also modify the variable. Try adding or subtracting from x and print it out its value again.
End of explanation
y = 1
y = y/2
print(y)
Explanation: But be careful about the type. Try assigning the value of 1 to y. Now divide it by two and print out the result.
End of explanation
# Strings are enclosed by single or double quotes
s='this is a string'
print (s)
Explanation: Here is where we get to the first major difference between python 2 and python 3. In python 2, an integer divided by another integer is kept as an integer (this is similar behavior to most other programming languages), so 1 divided by 2 is 0. In python 3.x, division always produces a real number (called a float), so 1 divided by 2 is 0.5. If you want integer division in python 3, use // instead. This kind of division also works in python 2, so it's worth getting used to it.
Repeat what you did before, but this time start by assigning the value of 1.0 to y first. Also try using integer division on a float variable.
Remember that variables are simply containers (or labels if you prefer). They don't have a fixed type. Try using the type() function on y.
There are other types of variables. The most commonly used are strings, lists, and arrays. But literally anything can be assigned to a variable. You can even assign the numpy module to the variable np if you don't like typing numpy all the time. Try it out.
Now you can use np.sqrt() rather than numpy.sqrt(). Most python programmers do this with commonly-used modules. In fact, we usually just use the special form: import numpy as np.
Strings
Strings are collections of alpha-numeric characters and are the primary way data are represented outside your code. So you will find yourself manipulating strings in Python ALL THE TIME. You might need to convert strings from a text file into python data ojects to work with or you might need to do the opposite and generate an output file with final results that can be read by humans. Below are the most common things we need.
Strings are enclosed in matching single (') or double ("), or even triple (''') quotes. Python doesn't distinguish as long as you match them consistently. Triple-quoted strings can span many lines and are useful for literal text or code documentation.
End of explanation
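One possible sketch of the division and aliasing exercises (the printed results in the comments assume Python 3):
y = 1.0
print(y / 2)        # 0.5 -- true division always returns a float in Python 3
print(7 // 2)       # 3  -- integer (floor) division
print(7.0 // 2)     # 3.0 -- floor division also works on floats
print(type(y))      # <class 'float'>
import numpy as np  # assign the numpy module to the short name np
print(np.sqrt(2))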
fmt = "This is a float with two decimals: {:.2f}"
print (fmt)
Explanation: We can use the len() function to determine the length of the string. Try this out below.
Strings are very similar to lists in python (we'll see that below). Each alpha-numeric character is an element in the string, and you can refer to individual elements (characters) using an index enclosed in square brackets ([]). You can also specify a range of indices by separating them with a colon (:). Python indexes from 0 (not 1), so the first element is index [0], the second [1] and so on. Negative indices count from the end of the string. Try printing out the 2nd character of your string s, then the whole string except for the first and last characters.
Specifying a range of indices (as well as more complicated indexing we'll see later) is called slicing. There is also a string module that contains many functions for manipulating strings.
Formatting strings
Sometimes you'll need your integers and floats to be converted into strings and written out into a table with specific formats (e.g., number of significant figures). This involves a syntax that's almost a separate language itself (though if you've used C or C++ it will be very familiar). Here is a good reference: https://pyformat.info/
We'll cover the most important. First, if you just print out a regular floating point number, you get some arbitrary number of significant figures. The same is true if you just try to convert the float to a string using str(), which takes any type of variable and tries to turn it into a string. Try printing the string value of np.pi.
If you only want two significant figures, or you want the whole number to span 15 spaces (to make nicely lined-up columns for a table), you need to use a format string. A format string is just like a regular string, but has special place-holders for your numbers. These are enclosed in curly brackets ({}) and have special codes to specify how to format the variable. Without any other information, a simple {} will be replaced with whatever str() produces for the variable. For more control over numerical values, specify :[width].[prec]f for floats and :[width]d for integers. Replace [width] and [prec] with the total width you want your number to occupy and the number of digits after the decimal, respectively. Here's an example:
End of explanation
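For reference, here is one way the string exercises above could look (we re-define s so the sketch stands on its own):
s = 'this is a string'
print(len(s))       # 16 characters
print(s[1])         # 'h' -- the second character, since indexing starts at 0
print(s[1:-1])      # the whole string except the first and last characters
import numpy as np
print(str(np.pi))   # '3.141592653589793' -- an arbitrary number of digits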
# two decimal places
print(fmt.format(x))
Explanation: By itself, the format string didn't do much. In order to inject the number, we use its .format() function.
End of explanation
fmt = "Here is a float: '{:.2f}', and another '{:8.4f}', an integer {:d}, and a string {}"
print (fmt.format(x, np.pi, 1000, 'look ma, no quotes!'))
Explanation: You may want to make a format string with more than one number or string. You can do this by specifying multiple format codes and inject an equal number of arguments to the .format() function.
End of explanation
# A list of floats
x1=[1.,2.,7.,2500.]
print(x1)
# try making a list of strings. Use indexing to print out single elements and slices.
Explanation: We decided to show the new style of string formatting, as it is the python 3 way of the future and is more powerful than the old style. Both styles are supported in both versions of python and the reference above has plenty of examples of both.
Lists and numpy arrays
Lists contain a sequence of objects, typically a sequence of numbers. They are enclosed in square brackets ([]), with each item separated by a comma. Here are some examples.
End of explanation
x=np.array([1.,2.,3.,4.])
print(x)
Explanation: Lists can also contain a mixture of different types (even lists). Basically, anything can be an element of a list. Try making a list of strings, floats, and the list x1 above. Print it out to see what it looks like. Can you guess how to refer to an element of a list that's in another list?
Numpy arrays allow for more functionality than lists. While they may also contain a mix of object types, you will primarily be working with numpy arrays that are comprised of numbers: either integers or floats. For example, you will at some point read in a table of data into a numpy array and do things to it, like add, multiply, etc.
Above, we imported the numpy module as np. We will use this to create arrays.
End of explanation
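A quick sketch of a mixed list, and how to index a list stored inside another list:
x1 = [1., 2., 7., 2500.]
mixed = ['a string', 3.14, x1]  # a string, a float, and another list
print(mixed)
print(mixed[2])      # the inner list
print(mixed[2][0])   # 1.0 -- first element of the inner list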
x1=np.zeros(5)
print(x1)
Explanation: Try adding an integer to x and print it out. Then try other mathematical functions from the numpy module on it.
Here is where the real power of numpy arrays comes into play. We can use numpy to carry out all kinds of mathematical tasks that in other programming languages (like C, FORTRAN, etc) would require some kind of loop. Here are some of the most common tasks we'll use. By using numpy functions on arrays of numbers, we speed up the code a lot. This is commonly referred to as vectorizing your code.
Array creation
There are many functions in numpy that allow you to make arrays from scratch.
We can create an array of zeros:
End of explanation
x=np.ones((4,2))
print(x)
print(x.shape)
Explanation: Take a guess at how to create a 5-element array of ones.
Now suppose you want all the elements to be equal to np.pi. How could you do that as a one-liner. Hint: vectorize.
We can create a sequence of numbers using np.arange(start,stop,step), where you specify a number to start (inclusive), when to stop (non-inclusive), and what step size to have between each element. Make an array called x1 using arange that goes from 0 to 4 inclusive.
Now make an array called x2 that goes from 0 to 10 in steps of 2.
Another handy function is np.linspace(start,stop,N), which gives you a specified number N of elements equally spaced between start and stop. The stop value in this case is inclusive (will be part of the sequence). Try making an array called x3 that goes from 0 to 8 and has 5 elements.
One can also create N-dimensional numpy arrays. For example, images that you read into Python will typically be stored into 2D numpy arrays.
End of explanation
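Here is one possible set of answers to the array-creation exercises above:
import numpy as np
print(np.ones(5))              # a 5-element array of ones
print(np.ones(5) * np.pi)      # five copies of pi (np.full(5, np.pi) is another one-liner)
x1 = np.arange(0, 5)           # [0 1 2 3 4] -- the stop value is not included
x2 = np.arange(0, 11, 2)       # [0 2 4 6 8 10]
x3 = np.linspace(0, 8, 5)      # [0. 2. 4. 6. 8.] -- here the stop value is included
print(x1, x2, x3)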
x4=0.5+x1+x2*x3/2.
print(x4)
Explanation: Array Math
Just as with spreadsheets, you can do math on entire arrays using operators (+,-,*, /, and **) and numpy functions.
Try doing some math on your arrays x1, x2, and x3.
End of explanation
# CCW rotation by 45 degrees (pi/4 radians)
theta=np.pi/4
# Rotation matrix
x=np.array([[np.cos(theta),-np.sin(theta)],
[np.sin(theta),np.cos(theta)]])
print(x)
# Let's rotate (1,1) about the origin
y=np.array([1.,1.])
z=np.dot(x,y)
print(y)
print(z)
Explanation: What happens if you try to add x1 to the matrix x you created above? Give it a try:
We also have access to most mathematical functions you'll need. Try raising an array to 3rd power using np.power. Take the base-10 log of an array and make sure it gives you what you expect. A shorthand for raising to a power is **, for example, 2**3=8.
Matrix Math
numpy can treat 2D arrays as matrices and leverage special compiled libraries so that you can do linear algebra routines easily (and very fast). Here, we construct a rotation matrix and use dot to do the dot-product. Try changing theta to different angles (in radians!).
End of explanation
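A short sketch of the element-wise math exercises (note the +1 to avoid taking the log of zero):
import numpy as np
x1 = np.arange(5)
print(np.power(x1, 3))    # same as x1**3
print(np.log10(x1 + 1))   # base-10 log of 1,2,3,4,5
print(2**3)               # 8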
# Taking the transpose
x_t = x.T
print(x_t)
# Computing the inverse
x_i = np.linalg.inv(x)
# Matrix Multiplication
I = np.dot(x_i,x)
print (I)
Explanation: The other common matrix tasks can also be done with either numpy functions, numpy.linalg, or member functions of the object itself
End of explanation
x=np.arange(5)
print(x)
Explanation: Array Slicing
Often, you need to access elements or sub-arrays within your array. This is referred to as slicing. We can select individual elements in an array using indices just as we did for strings (note that 0 is the first element and negative indices count backwards from the end). The most general slicing looks like [start:stop:step]. Below, we create an array. Try to print out the following using slices (one possible set of answers is sketched after this list):
* the first element
* the last element (there's two ways to do this)
* a sub-array from 3rd element to the end
* a sub-array with the last element stripped
* a sub-array with a single element (the last)
* a sub-array with every second element
* a sub-array with all elements in reverse order
End of explanation
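One possible set of answers to the slicing exercise (using x = np.arange(5) as above):
import numpy as np
x = np.arange(5)
print(x[0])     # first element
print(x[-1])    # last element (x[4] works too)
print(x[2:])    # from the 3rd element to the end
print(x[:-1])   # everything except the last element
print(x[-1:])   # a one-element sub-array containing only the last element
print(x[::2])   # every second element
print(x[::-1])  # all elements in reverse order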
x=np.arange(8)
print(x)
x=x.reshape((4,2))
print(x)
print(x[0,:])
print(x[:,0])
Explanation: We can also "slice" N-dimensional arrays. Note that we have used reshape to transform a 1D array into a 2D array with the same total number of elements. This is another handy way to create N-dimensional arrays.
End of explanation
# range(n) creates the sequence 0,1,...,n-1 (a list in Python 2, a range object in Python 3)
print(range(5))
# We can use it to iterate over a for loop
for ii in range(5):
print(ii)
Explanation: The reverse of reshape is ravel, which flattens a multi-dimensional array into a 1D array. Try this on x.
Control blocks
So far, we've been running individual sets of commands and looking at their results immediately. Later, we will write a complete program, which is really just bundling up instructions into a recipe for solving a given task. But as the tasks we want to perform become more complicated, we need control blocks. These allow us to:
Repeat tasks again and again (loops)
Perform tasks only if certain conditions are met (if-else blocks)
Group instructions into a single logical task (user-defined functions)
python is rather unique in that it uses indenting to indicate the beginning and end of a logical block. This actually forces you to write readable code, which is a really good thing!
For loops
for loops are useful for repeating a series of operations a given number of times. In python, you loop over elements of a list or array. So if you want to loop over a sequence of integers (say, the indices of an array), then you would use the range() function to generate the list of integers. You might also use the len() function if you need the length of the array.
End of explanation
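For example, flattening the 2D array back into 1D could look like this:
import numpy as np
x = np.arange(8).reshape((4, 2))
print(x.ravel())   # [0 1 2 3 4 5 6 7] -- the flattened version of x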
fib = [1,1]
N = 100
for i in range(N-2):
fib.append(fib[-2]+fib[-1])
print (fib)
Explanation: Notice that after the line containing the for statement, the subsequent lines are indented to indicate that they are all part of the same block. Every line that shares the same indenting will be repeated 5 times.
You can use a for loop to build up a list of elements by appending to an existing list (using its append() member function). For example, to create the first N elements of the Fibonacci sequence:
End of explanation
# A slightly more complex example of a for loop:
import time
N = 100
x=np.arange(N)
y=np.zeros(N)
t1 = time.time() # start time
for ii in range(x.size):
y[ii]=x[ii]**2
t2 = time.time() # end time
print ("for loop took "+str(t2-t1)+" seconds")
Explanation: You may notice (if N is large enough and you are running Python 2) that you get numbers that have an L at the end. Python 2 has a special type called a long integer which allows for arbitrarily large numbers; in Python 3, ordinary integers can already grow arbitrarily large.
Here is an example of what not to do with for loops (if you can help it). For loops are more computationally expensive in python than using the numpy functions to do the math. Always try to cast the problem in terms of numpy math and functions. It will make your code faster. Try making N in the following example larger and larger and you'll see the difference.
End of explanation
%time for ii in range(x.size): y[ii]=x[ii]**2
Explanation: Another way to implement the stopwatch is to use the iPython "magic" command %time:
End of explanation
Ns = [1.]
while Ns[-1] > 0:
Ns.append(Ns[-1]/2)
print(Ns[-10:])
Explanation: Try making N in the above code block bigger and see how the execution time goes up. Now do the exact same thing in the next code block, but use numpy functions without a loop. See how the execution time improves.
While loops
Similar to a for-loop, a while-loop executes the same code block repeatedly until a condition is no longer true. These are handy if you don't know ahead of time how long a loop will take, but you know you have to stop when a condition is true (or false). As an example, we can estimate the smallest positive floating point number the machine can represent (closely related to the machine precision $\epsilon$) by continually dividing by 2 until we get zero.
End of explanation
x=5
if x==5:
print('Yes! x is 5')
# The two equal signs evaluate whether x is equal to 5. One can also use >, >=, <, <=, != (not equal to)
Explanation: Beware, though, that unlike for loops, you can have a never-ending loop if the condition is never false. Your computer is happy to keep grinding away forever. How could you safeguard against this?
If blocks
if statements act as a way to carry out a task only if a certain condition is met. As with the for loops above, everything that is indented after the if statement is part of the block of code that is executed if the condition is met. Likewise for the else block.
End of explanation
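One common safeguard against a runaway while loop (just one possible approach) is to add an iteration counter and stop after some maximum number of passes:
x, count, max_iter = 1.0, 0, 10000
while x > 0 and count < max_iter:   # the counter guarantees the loop eventually ends
    x = x / 2
    count += 1
print(count, x)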
x=5
if x==3:
print('Yes! x is 3')
Explanation: Note the two different uses of the equals sign: assignment to a variable (=) and the logical comparison of two objects (==).
End of explanation
x=5
if x==3:
print('Yes! x is 3')
else:
print('x is not 3')
print('x is '+str(x))
Explanation: if-else statements execute the code in the if block if the condition is true, otherwise it executes the code in the else block:
End of explanation
x=5
if x==2:
print('Yes! x is 2')
elif x==3:
print('Yes! x is 3')
elif x==4:
print('Yes! x is 4')
else:
print('x is '+str(x))
Explanation: One can also have a series of conditions in the form of an elif block:
End of explanation
x=5
if x > 2 and x*2==10:
print('x is 5')
if x > 7 or x*2 == 10:
print('x is 5')
Explanation: You can also have multiple conditions that are evaluated:
End of explanation
# the function 'myfunc' takes two numbers, x and y, adds them together and returns the results
def myfunc(x,y):
z=x+y
return z
# to call the function, we simply invoke the name and feed it the requisite inputs:
g=myfunc(2,3.)
print(g)
# you can set input parameters to have a default values
def myfunc2(x,y=5.):
z=x+y
return z
g=myfunc2(2.)
print(g)
g=myfunc2(2.,4.)
print(g)
Explanation: Exercise
Try this. Use a while loop and an if-else block to generate the Collatz sequence. Start the list with any positive integer. To get the next element of the list, check if the current element is even or odd. If even, divide it by 2. If odd, multiply by 3 and add 1. The sequence ends if you get to 1. Print out the length of the list. The Collatz conjecture states that the sequence will always convert to 1 eventually regardless of the starting integer. The proof of this conjecture is one of the great unsolved problems in mathematics.
Functions
Functions allow you to make a bundle of python statements that are executed whenever the function is called. Each function has arguments you pass in and value(s) that are returned. For example, you've been using the print function. There are also some functions above that you have been using. Now we will make our own.
End of explanation
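Here is one possible sketch of the Collatz exercise (the starting value 27 is an arbitrary choice):
seq = [27]                       # any positive integer will do
while seq[-1] != 1:
    if seq[-1] % 2 == 0:         # even: divide by 2
        seq.append(seq[-1] // 2)
    else:                        # odd: multiply by 3 and add 1
        seq.append(3*seq[-1] + 1)
print(len(seq))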
Fibonacci(1)
Fibonacci(1001)
Fibonacci(10)
Explanation: Let's put the if, for and def together into one example. Take the code above that generates a Fibonacci sequence and put it into a function called Fibonacci. The function should take one argument (the length of the sequence). It should check if N is less than 2 (which can't be done), or if N is greater than 1000 (which would take a very long time). If these conditions are met, print an error statement and return None (a python special object that generally indicates something went wrong). Otherwise, compute and return the sequence.
Give your function a test run. Make sure it behaves as it should.
End of explanation
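A minimal sketch of the Fibonacci function described above (the exact wording of the error messages is up to you):
def Fibonacci(N):
    if N < 2 or N > 1000:    # reject sequences that are too short or too long
        print("N must be between 2 and 1000")
        return None
    fib = [1, 1]
    for i in range(N - 2):
        fib.append(fib[-2] + fib[-1])
    return fib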
# I like to declare all of my imported packages at the top of my script so that I know what is available.
# Also note that there are many ways to import packages.
import numpy.random as npr # Random number generator
from scipy import stats # statistics functions
import scipy.interpolate as si # interpolation functions
from astropy.cosmology import FlatLambdaCDM # Cosmology in flat \Lambda-CDM universe
# Random numbers are useful for many tasks.
# draw 5 random numbers from a uniform distribution between 0 and 1:
x1=npr.uniform(0, 1, size=5)
print(x1)
# draw 5 random numbers from a normal distribution with mean 10 and standard
# deviation 0.5:
x2=npr.normal(10, 0.5, size=5)
print(x2)
# draw 10 random integers between 0 (inclusive) and 5 (exclusive)
x3=npr.randint(0,5,10)
print(x3)
Explanation: Importing Python packages: examples
One of the major advantages of python is a wealth of specialized packages for doing common scientific tasks. Sure, you could write your own least-squares fitter using what we've shown you so far, but before you attempt anything like that, take a little time to "google that" and see if a solution exists already.
You will have to import Python modules/packages to carry out many of the tasks you will need for your research. As already discussed, numpy is probably the most useful. scipy and astropy are other popular packages. Let's play around with a few of these to give you an idea of how useful they can be.
Random Numbers
End of explanation
# x below represents a measurement of the ages of N people, where N=x.size
x=np.array([19.,20.,22.,19.,21.,24.,35.,22.,21.])
# This is the mean age:
print(np.mean(x))
# Now we "bootstrap" to determine the error on this measurement:
ntrials=10000 # number of times we will draw a random sample of N ages
x_arr=np.zeros(ntrials) # store the mean of each random sample in this array
for ii in range(ntrials):
# draw N random integers, where N equals the number of samples in x
ix=npr.randint(0,x.size,x.size)
# subscript the original array with these random indices to get a new sample and compute the mean
x_arr[ii]=np.mean(x[ix])
# Finally, compute the standard deviation of the array of mean values to get the uncertainty on the *mean* age
print(np.std(x_arr))
Explanation: Now you try. Generate a random array of numbers drawn from a Poisson distribution with expectation value 10. Check that the mean of the array is approximately 10 and the standard deviation is approximately sqrt(10).
Here is a practical example of using random numbers. Often in statistics, we have to compute the mean of some population based on a limited sample. For example, a survey may ask car drivers their age and make/model of car. A marketing team may want to know the average age of drivers of Ford Mustangs so they can target their audience. Calculating a mean is easy, but what about the uncertainty in that mean? You could compute the population standard deviation, but that pre-supposes the underlying distribution is Gaussian. Another method, which does not make any assumptions about the distribution, is bootstrapping. Randomly remove values from the data and replace them with copies of other values. Compute a new mean. Do that N times and compute the standard deviation of these bootstrapped mean values.
Below is an example of bootstrapping a sample to determine the uncertainty on a measurement. In this case, we will compute the mean of a sample of ages, and the uncertainty on the mean.
End of explanation
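A sketch of the Poisson exercise might look like this:
import numpy as np
import numpy.random as npr
p = npr.poisson(10, size=100000)
print(np.mean(p))   # should be close to 10
print(np.std(p))    # should be close to sqrt(10), about 3.16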
# This is an example of binning data and computing a particular value for the data in each bin.
# The scipy package is used to carry out this task.
# Let's make some fake data of galaxies spanning random redshifts between 0<z<3:
z=npr.rand(10000)*3.
# And these galaxies have random stellar masses between 9<log(M/Msun)<12:
m=npr.rand(10000)*3.+9.
# Now we want to compute the median stellar mass for galaxies at 0<z<1, 1<z<2, and 2<z<3:
# So let's declare the bin edges
bins=[0.,1.,2.,3.]
m_med,xbins,btemp = stats.binned_statistic(z,m,statistic='median',bins=bins)
print(bins)
print(m_med)
Explanation: Binning Data
Another common statistical task is binning data. This is often done on noisy spectra to make "real" features more visible. In the following example, we are going to bin galaxies based on their redshift and compute the median stellar mass for each bin.
End of explanation
# Interpolating between data points is another common task. We'll again use scipy to do some interpolating:
x=np.arange(5.)
y=x**2
print(x)
print(y)
# Linear interpolation
f=si.interp1d(x,y,kind='linear')
# si.interp1d returns a function, f, which can be used to feed values to.
# For example, lets evaluate f(x)
print(f(x))
# And now a different value
print(f(0.5))
# We can employ a higher order interpolation scheme to get more precise results (assuming a smoothly varying function)
f=si.interp1d(x,y,kind='quadratic')
print(f(x))
print(f(0.5))
Explanation: Interpolation
In science, we measure discrete values of data. Sometimes you need to interpolate between two (or more) points. A common example is drawing a smooth line through the data when making a graph. You could do this by hand using numpy, but the module scipy has an entire interpolation package that offers an easy solution.
End of explanation
# The astropy package has all kinds of astronomy related routines.
# Here, we define a cosmology that allows to compute things like
# the age of the universe or Hubble constant at different redshifts
cosmo = FlatLambdaCDM(H0=70., Om0=0.3)
redshift=0.
print(cosmo.age(redshift))
redshift=[0,1,2,3]
print(cosmo.age(redshift))
Explanation: Astronomy-specific Example
Now we get into a really specific case. Astropy is a collection of several packages that are very useful for astronomers. It is actively developed and has new stuff all the time. Here, we show how you can use the cosmology calculator to compute the age of the Universe given a redshift.
End of explanation
print(sys.argv)
Explanation: Running code
There are many environments in which one can run Python code:
- iPython notebooks like this one are good for running quick snippets of code.
- Spyder (provided with Anaconda) provides a space for writing scripts, executing them, and also for easily looking up definitions of different functions. Very similar to the IDL graphical IDE.
- One can also write code in a plain text editor, like Emacs/Aquamacs. Then execute the code in a terminal running Python or iPython.
Running code from the command-line
This is the most common and agnostic way to run your code. If you send your code to someone else, assume they will run it from the command line. If you are running your code on an HPC cluster, it needs to be run from the command-line. Lastly, writing code that runs with minimal user-interaction makes it more repeatable.
There are two aspects of writing command-line code that you should be familiar with: 1) getting arguments from the command line; and 2) working with files. The first is done through the sys.argv variable, the second is done with os.path package. Here we look at each briefly.
sys.argv
Quite simply, this is a list of the command-line arguments. You've seen several unix commands. Suppose we wanted to write the equivalent of the cp command, but using a python script. Usually, you run the command like this from the command-line:
cp file1 file2
That would copy file1 to file2. So our python script will need to get both the source and destination file name. Here is how I would write a simple script to do the same thing as cp:
import sys
f1 = sys.argv[1]
f2 = sys.argv[2]
print ("copying %s to %s" % (f1,f2))
# Here would be the code to actually copy one file to the other
Note: you can try to print out sys.argv on the next command-block. It will show you how this ipython notebook was actually run. But it is not very useful for doing anything practical.
With more complicated code, your command-line arguments may also get rather complicated (you may have optional arguments, switches, etc). There is a really good module in python for dealing with such complicated arguments so that your script isn't filled with code just to deal with parsing sys.argv. Have a look at the argparse module when you get to the point where you need to deal with complicated command-lines arguments.
End of explanation
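For a taste of argparse, here is a hedged sketch of a copy script with a named optional flag (the argument names and flag are our own invention, not from the original script):
import argparse
parser = argparse.ArgumentParser(description="Copy one file to another")
parser.add_argument("source", help="file to copy from")
parser.add_argument("dest", help="file to copy to")
parser.add_argument("--verbose", action="store_true", help="print extra information")
args = parser.parse_args()   # in a notebook, pass a list instead: parser.parse_args(['a', 'b'])
if args.verbose:
    print("copying {} to {}".format(args.source, args.dest))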
# Check to see if a file exists
if os.path.isfile('/bin/ls'):
print("Oh good, you can list files")
if not os.path.isfile('test_output.dat'):
print ("It is safe to use this")
# Check if a folder exists
if os.path.isdir('/tmp'):
print ("you have a tmp folder")
# construct the path to a file using the correct separator
of = os.path.join('tmp','some','file')
print (of)
Explanation: os.path
Up until now, we have been printing output to the screen. You can still do that with command-line scripts, but once you close down the terminal, that output is lost. Your code will need to write to output files, but likely will also have to read from files. os.path gives you some functions that are useful when dealing with files.
You may have to check to see if a file exists, if a folder exists, etc.
End of explanation
# Open a file for writing (note the 'w')
f = open('a_test_file', 'w')
# write a header, always a good idea!
f.write("# This is a test data file. It contains 3 columns\n")
for i in range(10):
f.write("%d %d %d\n" % (i, 2*i, 3*i))
# We need to "close" the file to make sure it is written to disk
f.close()
Explanation: File access with open
In order to access the contents of an existing file, or write contents into a new file, you use the open function. This will return a file object that you can use to read or write data. Here are some common tasks. First, let's write data to a file:
End of explanation
# This time we use 'r' to indicate we only want to read the file
f = open('a_test_file', 'r')
everything = f.read()
# This brings us back to the beginning of the file
f.seek(0)
one_row = f.readline()
f.seek(0)
list_of_rows = f.readlines()
f.close()
print(list_of_rows)
Explanation: Now, depending what your current working directory is (see beginning of tutorial), there will be a file called a_test_file in that folder. If you like, have a look at it using an editor or file viewer. It will have 10 rows and 3 columns. Note that we needed to have a "newline" (\n) at the end of the string we used in the write() function, otherwise the output would have been one long line.
Now, let's read the file back into python. You can either read the whole thing in at once as a single string, read it in line-by-line or read all lines in at once as a list.
End of explanation
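For example, stripping the trailing newlines from the lines we just read in can be done with a short list comprehension (this assumes list_of_rows from the cell above):
clean_rows = [row.strip() for row in list_of_rows]  # removes the trailing '\n' from each line
print(clean_rows)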
x1=np.arange(5)
x2=np.arange(3)
print(x1)
print(x2)
print(x1+x2)
Explanation: Use print() to have a look at the data we read in. Note that the rows will include the newline character (\n). You can use the string.strip() function to get rid of it.
In the next tutorial (visualization), we'll show you a better way to read in standard format data files like this. But sometimes you'll be faced with data that these more automatic functions can't handle, so it's good to know how to read it in by hand.
Debugging
You almost never write a script without introducing bugs (the term comes from when computers were mechanical machines and insects literally interfered with the running of the code). Luckily, python gives a very nice "traceback" report when it encounters a problem with what you've written. Let's just generate a mistake on purpose and see what happens.
End of explanation |
4,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring MzML files with the MS Ontology
In this example, we will learn how to use pronto to extract a hierarchy from the MS Ontology, a controlled vocabulary developed by the Proteomics Standards Initiative to hold metadata about Mass Spectrometry instrumentation and other Protein Identification and Quantitation software. This example is taken from a real situation that kickstarted the development of pronto to extract metadata from MzML files, a file format for Mass Spectrometry data based on XML.
Loading ms.obo
The MS ontology is available online on the OBO Foundry, so unless we are using a local version we can simply use the version found online to load the OBO file. We may get some encoding warnings since ms.obo imports some legacy ontologies, but we should be OK for the most part since we are only querying terms directly.
Step1: Displaying a class hierarchy with Vega
The MS ontology contains a catalog of several instruments, grouped by instrument manufacturers, but not all instruments are at the same depth level. We can easily use the Term.subclasses method to find all instruments defined in the controlled vocabulary. Let's then build a tree from all the subclasses of MS
Step2: Now that we have our tree structure, we can render it simply with Vega to get a better idea of the classes we are inspecting
Step3: Extracting the instruments from an MzML file
MzML files store the metadata corresponding to one or several MS scans using the MS controlled vocabulary, but the location and type of metadata can vary and needs to be extracted from a term subclassing hierarchy. Let's download an example file from the MetaboLights library and parse it with xml.etree
Step4: Now we want to extract the instruments that were used in the MS scan, that are stored as mzml
Step5: Finally we can extract the manufacturer of the instruments we found by checking which one of its superclasses is a direct child of the MS
Step6: Validating the controlled vocabulary terms in an MzML file
All mzml | Python Code:
import pronto
ms = pronto.Ontology.from_obo_library("ms.obo")
Explanation: Exploring MzML files with the MS Ontology
In this example, we will learn how to use pronto to extract a hierarchy from the MS Ontology, a controlled vocabulary developed by the Proteomics Standards Initiative to hold metadata about Mass Spectrometry instrumentation and other Protein Identification and Quantitation software. This example is taken from a real situation that kickstarted the development of pronto to extract metadata from MzML files, a file format for Mass Spectrometry data based on XML.
Loading ms.obo
The MS ontology is available online on the OBO Foundry, so unless we are using a local version we can simply use the version found online to load the OBO file. We may get some encoding warnings since ms.obo imports some legacy ontologies, but we should be OK for the most part since we are only querying terms directly.
End of explanation
instruments = ms['MS:1000031'].subclasses().to_set()
data = []
for term in instruments:
value = {"id": int(term.id[3:]), "name": term.id, "desc": term.name}
parents = term.superclasses(with_self=False, distance=1).to_set() & instruments
if parents:
value['parent'] = int(parents.pop().id[3:])
data.append(value)
Explanation: Displaying a class hierarchy with Vega
The MS ontology contains a catalog of several instruments, grouped by instrument manufacturers, but not all instruments are at the same depth level. We can easily use the Term.subclasses method to find all instruments defined in the controlled vocabulary. Let's then build a tree from all the subclasses of MS:1000031:
End of explanation
import json
import urllib.request
# Let's use the Vega radial tree example as a basis of the visualization
view = json.load(urllib.request.urlopen("https://vega.github.io/vega/examples/radial-tree-layout.vg.json"))
# First replace the default data with our own
view['data'][0].pop('url')
view['data'][0]['values'] = data
view['marks'][1]['encode']['enter']['tooltip'] = {"signal": "datum.desc"}
view['signals'][4]['value'] = 'cluster'
# Render the clustered tree
display({"application/vnd.vega.v5+json": view}, raw=True)
Explanation: Now that we have our tree structure, we can render it simply with Vega to get a better idea of the classes we are inspecting:
End of explanation
import urllib.request
import xml.etree.ElementTree as etree
URL = "http://ftp.ebi.ac.uk/pub/databases/metabolights/studies/public/MTBLS341/pos_Exp2-K3_2-E,5_01_7458.d.mzML"
mzml = etree.parse(urllib.request.urlopen(URL))
Explanation: Extracting the instruments from an MzML file
MzML files store the metadata corresponding to one or several MS scans using the MS controlled vocabulary, but the location and type of metadata can vary and needs to be extracted from a term subclassing hierarchy. Let's download an example file from the MetaboLights library and parse it with xml.etree:
End of explanation
instruments = ms["MS:1000031"].subclasses().to_set().ids
study_instruments = []
path = "mzml:instrumentConfigurationList/mzml:instrumentConfiguration/mzml:cvParam"
for element in mzml.iterfind(path, {'mzml': 'http://psi.hupo.org/ms/mzml'}):
if element.attrib['accession'] in instruments:
study_instruments.append(ms[element.attrib['accession']])
print(study_instruments)
Explanation: Now we want to extract the instruments that were used in the MS scan, which are stored as mzml:cvParam elements: we build a set of all the instruments in the MS ontology, and we iterate over the mzml:cvParam elements to find the ones that refer to instruments:
End of explanation
manufacturers = ms['MS:1000031'].subclasses(distance=1, with_self=False).to_set()
study_manufacturers = []
for instrument in study_instruments:
study_manufacturers.extend(manufacturers & instrument.superclasses().to_set())
print(study_manufacturers)
Explanation: Finally we can extract the manufacturer of the instruments we found by checking which of its superclasses is a direct child of the MS:1000031 term. We use the distance argument of subclasses to get the direct subclasses of instrument model, which are the manufacturers, and we use set operations to select manufacturers from the superclasses of each instrument we found.
End of explanation
mismatches = [
element
for element in mzml.iter()
if element.tag == "{http://psi.hupo.org/ms/mzml}cvParam"
if element.get('accession') in ms
if ms[element.get('accession')].name != element.get('name')
]
for m in mismatches:
print(f"{m.get('accession')}: {m.get('name')!r} (should be {ms[m.get('accession')].name!r})")
Explanation: Validating the controlled vocabulary terms in an MzML file
All mzml:cvParam XML elements are required to have the 3 following attributes:
accession, which is the identifier of the term in one of the ontologies imported in the file
cvRef, which is the identifier of the ontology imported in the file
name, which is the textual definition of the term
name in particular is redundant with respect to the actual ontology file, but can help rendering the XML elements. However, some MzML files can have a mismatch between the name and accession attributes. In order to check these mismatches we can use pronto to retrieve the name of all of these controlled vocabulary terms.
End of explanation |
4,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
String Matching
The idea of string matching is to find strings that match a given pattern. We have seen that Pandas provides some useful functions to do that job.
Step1: Imagine I want to have a list of my friends with the amount of money I borrowed to each other, toghether with their names and surnames.
Step2: To merge two dataframes we can use merge function
Step3: Approximate String Matching
Fuzzy String Matching, also called Approximate String Matching, is the process of finding strings that approximatively match a given pattern.
For example, we can have two datasets with information about local municipalities
Step4: The closeness of a match is often measured in terms of edit distance, which is the number of primitive operations necessary to convert the string into an exact match.
Primitive operations are usually
Step5: Partial String Similarity
What to do when we want to find if two strings are simmilar, and one contains the other.
In this case the ratio will be low but if we would know how to split the bigger string, the match would be perfect. Let's see an example
Step6: In fact we can have the following situation
Step7: partial_ratio, seeks the more appealing substring and returns its ratio
Step8: Out of Order
What happens if we have just the same strings but in a different order, let's have an example
Step9: FuzzyWuzzy provides two ways to deal with this situation
Step10: Token Set
The token set approach is similar, but a little bit more flexible. Here, we tokenize both strings, but instead of immediately sorting and comparing, we split the tokens into two groups
Step11: Example
We want to merge both mun_codes and lat_lon_mun. So we have to have a good municipality name in both datasets. From that names we can do
Step12: Step 2
Step13: Step 3
Step14: Step 4
Step15: Step 5
Step16: Step 6 | Python Code:
import pandas as pd
names = pd.DataFrame({"name" : ["Alice","Bob","Charlie","Dennis"],
"surname" : ["Doe","Smith","Sheen","Quaid"]})
names
names.name.str.match("A\w+")
debts = pd.DataFrame({"debtor":["D.Quaid","C.Sheen"],
"amount":[100,10000]})
debts
Explanation: String Matching
The idea of string matching is to find strings that match a given pattern. We have seen that Pandas provides some useful functions to do that job.
End of explanation
debts["surname"] = debts.debtor.str.extract("\w+\.(\w+)")
debts
Explanation: Imagine I want to have a list of my friends with the amount of money I lent to each of them, together with their names and surnames.
End of explanation
names.merge(debts, left_on="surname", right_on="surname", how="left")
names.merge(debts, left_on="surname", right_on="surname", how="right")
names.merge(debts, left_on="surname", right_on="surname", how="inner")
names.merge(debts, left_on="surname", right_on="surname", how="outer")
names.merge(debts, left_index=True, right_index=True, how="left")
Explanation: To merge two dataframes we can use the merge function: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html. It merges DataFrame objects by performing a database-style join operation by columns or indexes.
We can use merge in two different ways (very similar):
* using the DataFrame method: left_df.merge(right_df, ...)
* using the Pandas function: pd.merge(left_df, right_df, ...)
Common parameters
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘inner’
left: use only keys from left frame, similar to a SQL left outer join; preserve key order
right: use only keys from right frame, similar to a SQL right outer join; preserve key order
outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically
inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys
on : label or list
Field names to join on. Must be found in both DataFrames. If on is None and not merging on indexes, then it merges on the intersection of the columns by default.
left_on : label or list, or array-like
Field names to join on in left DataFrame. Can be a vector or list of vectors of the length of the DataFrame to use a particular vector as the join key instead of columns
right_on : label or list, or array-like
Field names to join on in right DataFrame or vector/list of vectors per left_on docs
left_index : boolean, default False
Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the number of keys in the other DataFrame (either the index or a number of columns) must match the number of levels
right_index : boolean, default False
Use the index from the right DataFrame as the join key. Same caveats as left_index
End of explanation
lat_lon_mun = pd.read_excel("lat_lon_municipalities.xls", skiprows=2)
lat_lon_mun.head()
mun_codes = pd.read_excel("11codmun.xls", encoding="latin1", skiprows=1)
mun_codes.head()
lat_lon_mun[lat_lon_mun["Población"].str.match(".*anaria.*")]
mun_codes[mun_codes["NOMBRE"].str.match(".*anaria.*")]
"Valsequillo de Gran Canaria" == "Valsequillo de Gran Canaria"
"Palmas de Gran Canaria (Las)" == "Palmas de Gran Canaria, Las"
Explanation: Approximate String Matching
Fuzzy String Matching, also called Approximate String Matching, is the process of finding strings that approximately match a given pattern.
For example, we can have two datasets with information about local municipalities:
End of explanation
from fuzzywuzzy import fuzz
fuzz.ratio("NEW YORK METS","NEW YORK MEATS")
fuzz.ratio("Palmas de Gran Canaria (Las)","Palmas de Gran Canaria, Las")
Explanation: The closeness of a match is often measured in terms of edit distance, which is the number of primitive operations necessary to convert the string into an exact match.
Primitive operations are usually: insertion (to insert a new character at a given position), deletion (to delete a particular character) and substitution (to replace a character with a new one).
Fuzzy String Matching can have different practical applications. Typical examples are spell-checking, text re-use detection (the politically correct way of calling plagiarism detection), spam filtering, as well as several applications in the bioinformatics domain, e.g. matching DNA sequences.
FuzzyWuzzy
The main modules in FuzzyWuzzy are called fuzz, for string-to-string comparisons, and process to compare a string with a list of strings.
Under the hood, FuzzyWuzzy uses difflib, part of the standard library, so there is nothing extra to install.
String Similarity
The simplest way to compare two strings is with a measurement of edit distance. For example, the following two strings are quite similar:
NEW YORK METS
NEW YORK MEATS
Now, according to the ratio:
Return a measure of the sequences' similarity as a float in the range [0, 1]. Where T is the total number of elements in both sequences, and M is the number of matches, this is 2.0*M / T.
End of explanation
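As a rough worked example of that formula for the two strings above: they share M = 13 matching characters out of T = 13 + 14 = 27 characters in total, so the similarity is about 2*13/27 ≈ 0.96. Note that fuzz.ratio reports this scaled to an integer between 0 and 100, so it returns roughly 96 here.
# Quick sanity check of the 2.0*M/T formula for "NEW YORK METS" vs "NEW YORK MEATS"
print(2.0 * 13 / 27)   # ~0.963, which fuzz.ratio reports as an integer score of about 96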
"San Millán de Yécora" == "Millán de Yécora"
fuzz.ratio("San Millán de Yécora", "Millán de Yécora")
Explanation: Partial String Similarity
What do we do when we want to find out whether two strings are similar and one contains the other?
In this case the ratio will be low, but if we knew how to split the bigger string, the match would be perfect. Let's see an example:
End of explanation
fuzz.ratio("YANKEES", "NEW YORK YANKEES")
fuzz.ratio("NEW YORK METS", "NEW YORK YANKEES")
Explanation: In fact we can have the following situation:
End of explanation
fuzz.partial_ratio("San Millán de Yécora", "Millán de Yécora")
fuzz.partial_ratio("YANKEES", "NEW YORK YANKEES")
fuzz.partial_ratio("NEW YORK METS", "NEW YORK YANKEES")
Explanation: partial_ratio seeks the best-matching substring and returns its ratio
End of explanation
s1 = "Las Palmas de Gran Canaria"
s2 = "Gran Canaria, Las Palmas de"
s3 = "Palmas de Gran Canaria, Las"
s4 = "Palmas de Gran Canaria, (Las)"
Explanation: Out of Order
What happens if we have the same strings but in a different order? Let's look at an example:
End of explanation
fuzz.token_sort_ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
fuzz.ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
Explanation: FuzzyWuzzy provides two ways to deal with this situation:
Token Sort
The token sort approach involves tokenizing the string in question, sorting the tokens alphabetically, and then joining them back into a string. Then compare the transformed strings with a simple ratio().
End of explanation
t0 = ["Canaria,","de","Gran", "Palmas"]
t1 = ["Canaria,","de","Gran", "Palmas"] + ["Las"]
t2 = ["Canaria,","de","Gran", "Palmas"] + ["(Las)"]
fuzz.token_sort_ratio("Palmas de Gran Canaria, Las", "Palmas de Gran Canaria, (Las)")
Explanation: Token Set
The token set approach is similar, but a little bit more flexible. Here, we tokenize both strings, but instead of immediately sorting and comparing, we split the tokens into two groups: intersection and remainder. We use those sets to build up a comparison string.
t0 = [SORTED_INTERSECTION]
t1 = [SORTED_INTERSECTION] + [SORTED_REST_OF_STRING1]
t2 = [SORTED_INTERSECTION] + [SORTED_REST_OF_STRING2]
max(ratio(t0,t1),ratio(t0,t2),ratio(t1,t2))
End of explanation
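The token-set comparison itself is exposed as fuzz.token_set_ratio; a quick sketch of calling it on the strings above (using the fuzz module imported earlier):
# token_set_ratio builds the t0/t1/t2 comparison strings internally and returns the best score
print(fuzz.token_set_ratio("Palmas de Gran Canaria, Las", "Palmas de Gran Canaria, (Las)"))
print(fuzz.token_set_ratio("Las Palmas de Gran Canaria", "Gran Canaria, Las Palmas de"))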
mun_codes.shape
mun_codes.head()
lat_lon_mun.shape
lat_lon_mun.head()
Explanation: Example
We want to merge both mun_codes and lat_lon_mun. So we need a good municipality name in both datasets. From those names we can do:
* Exact string matching: match these names common in both datasets
* Approximate string matching: match these names with highest similarity
Step 1: Explore datasets
End of explanation
df1 = mun_codes.merge(lat_lon_mun, left_on="NOMBRE", right_on="Población",how="inner")
df1.head()
Explanation: Step 2: Merge datasets
End of explanation
df1["match_ratio"] = 100
Explanation: Step 3: Create a new variable called match_ratio
End of explanation
df2 = mun_codes.merge(df1, left_on="NOMBRE", right_on="NOMBRE", how="left")
df2.head()
df3 = df2.loc[: ,["CPRO_x","CMUN_x","DC_x","NOMBRE","match_ratio"]]
df3.rename(columns={"CPRO_x": "CPRO", "CMUN_x":"CMUN","DC_x":"DC"},inplace=True)
df3.head()
df3.loc[df3.match_ratio.isnull(),:].head()
Explanation: Step 4: Merge again with original dataset and select those which have no direct match
End of explanation
mun_names = lat_lon_mun["Población"].tolist()
def approx_str_compare(x):
ratio = [fuzz.ratio(x,m) for m in mun_names]
res = pd.DataFrame({"ratio" : ratio,
"name": mun_names})
return res.sort_values(by="ratio",ascending=False).iloc[0,:]
df4 = df3.loc[df3.match_ratio.isnull(),"NOMBRE"].map(approx_str_compare)
Explanation: Step 5: Apply approximate string matching
End of explanation
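As an aside, FuzzyWuzzy's process module can do this best-match lookup for us; here is a hedged sketch of an alternative to approx_str_compare (reusing mun_names from above):
from fuzzywuzzy import process
# extractOne returns the best (match, score) pair from a list of choices
best_match, best_score = process.extractOne("Palmas de Gran Canaria (Las)", mun_names)
print(best_match, best_score)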
df4.map(lambda x: x["name"])
df4.map(lambda x: x["ratio"])
df6 = df3.loc[df3.match_ratio.isnull(),:]
df6["match_ratio"] = df4.map(lambda x: x["ratio"])
df6["NOMBRE"] = df4.map(lambda x: x["name"])
df6.head()
df7 = pd.concat([df3,df6])
Explanation: Step 6: Concatenate results
End of explanation |
4,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create TensorFlow Wide and Deep Model
Learning Objective
- Create a Wide and Deep model using the high-level Estimator API
- Determine which features to use as wide columns and which to use as deep columns
Introduction
In this notebook, we'll explore modeling our data using a Wide & Deep Neural Network. As before, we can do this uisng the high-level Estimator API in Tensorflow. Have a look at the various other models available through the Estimator API in the documentation here. In particular, have a look at the implementation for Wide & Deep models.
Start by setting the environment variables related to your project.
Step1: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval.
Step2: Create TensorFlow model using TensorFlow's Estimator API
First, we'll write an input_fn to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps.
Step3: Create the input function
Now we are ready to create an input function using the Dataset API.
Step4: Create the feature columns
Next, define the feature columns. For a wide and deep model, we need to determine which features we will use as wide features and which to pass as deep features. The function get_wide_deep below will return a tuple containing the wide feature columns and deep feature columns. Have a look at this blog post on wide and deep models to remind yourself how best to describe the features.
Step5: Create the Serving Input function
To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user.
Step6: Create the model and run training and evaluation
Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a Wide & Deep model (i.e. a DNNLinearCombinedRegressor estimator) and the train and evaluation operations.
Step7: Finally, we train the model! | Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: Create TensorFlow Wide and Deep Model
Learning Objective
- Create a Wide and Deep model using the high-level Estimator API
- Determine which features to use as wide columns and which to use as deep columns
Introduction
In this notebook, we'll explore modeling our data using a Wide & Deep Neural Network. As before, we can do this using the high-level Estimator API in Tensorflow. Have a look at the various other models available through the Estimator API in the documentation here. In particular, have a look at the implementation for Wide & Deep models.
Start by setting the environment variables related to your project.
End of explanation
%%bash
ls *.csv
Explanation: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval.
End of explanation
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',')
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
TRAIN_STEPS = 1000
Explanation: Create TensorFlow model using TensorFlow's Estimator API
First, we'll write an input_fn to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps.
End of explanation
def read_dataset(filename_pattern, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = filename_pattern)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(filenames = file_list) # Read text file
.map(map_func = decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
return dataset
return _input_fn
Explanation: Create the input function
Now we are ready to create an input function using the Dataset API.
End of explanation
def get_wide_deep():
# Define column types
fc_is_male,fc_plurality,fc_mother_age,fc_gestation_weeks = [\
tf.feature_column.categorical_column_with_vocabulary_list(key = "is_male",
vocabulary_list = ["True", "False", "Unknown"]),
tf.feature_column.categorical_column_with_vocabulary_list(key = "plurality",
vocabulary_list = ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]),
tf.feature_column.numeric_column(key = "mother_age"),
tf.feature_column.numeric_column(key = "gestation_weeks")
]
# Bucketized columns
fc_age_buckets = tf.feature_column.bucketized_column(source_column = fc_mother_age, boundaries = np.arange(start = 15, stop = 45, step = 1).tolist())
fc_gestation_buckets = tf.feature_column.bucketized_column(source_column = fc_gestation_weeks, boundaries = np.arange(start = 17, stop = 47, step = 1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [fc_is_male,
fc_plurality,
fc_age_buckets,
fc_gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(keys = wide, hash_bucket_size = 20000)
fc_embed = tf.feature_column.embedding_column(categorical_column = crossed, dimension = 3)
# Continuous columns are deep, have a complex relationship with the output
deep = [fc_mother_age,
fc_gestation_weeks,
fc_embed]
return wide, deep
Explanation: Create the feature columns
Next, define the feature columns. For a wide and deep model, we need to determine which features we will use as wide features and which to pass as deep features. The function get_wide_deep below will return a tuple containing the wide feature columns and deep feature columns. Have a look at this blog post on wide and deep models to remind yourself how best to describe the features.
End of explanation
def serving_input_fn():
feature_placeholders = {
"is_male": tf.placeholder(dtype = tf.string, shape = [None]),
"mother_age": tf.placeholder(dtype = tf.float32, shape = [None]),
"plurality": tf.placeholder(dtype = tf.string, shape = [None]),
"gestation_weeks": tf.placeholder(dtype = tf.float32, shape = [None])
}
features = {
key: tf.expand_dims(input = tensor, axis = -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
Explanation: Create the Serving Input function
To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve predictions later using the predetermined inputs. We will want all the inputs from our user.
End of explanation
def train_and_evaluate(output_dir):
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
dnn_hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("train.csv", mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter(name = "exporter", serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("eval.csv", mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: Create the model and run training and evaluation
Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a Wide & Deep model (i.e. a DNNLinearCombinedRegressor estimator) and the train and evaluation operations.
End of explanation
# Run the model
shutil.rmtree(path = "babyweight_trained_wd", ignore_errors = True) # start fresh each time
train_and_evaluate("babyweight_trained_wd")
Explanation: Finally, we train the model!
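As an optional aside (not part of the original notebook), the SavedModel written by the exporter could be sanity-checked locally with the TF 1.x contrib predictor API. The sketch below assumes the default LatestExporter directory layout; the export path is illustrative, so adapt it to your environment.
```python
# Rough sketch, assuming TF 1.x and the default export layout of LatestExporter
from tensorflow.contrib import predictor
import glob

export_dir = sorted(glob.glob("babyweight_trained_wd/export/exporter/*"))[-1]  # illustrative path
predict_fn = predictor.from_saved_model(export_dir)
print(predict_fn({
    "is_male": ["True"],
    "mother_age": [26.0],
    "plurality": ["Single(1)"],
    "gestation_weeks": [39.0]}))
```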
End of explanation |
4,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6 - Data Compression
This short Notebook will introduce you to how to efficiently compress your data within SampleData datasets.
<div class="alert alert-info">
**Note**
Throughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [first notebook of this User Guide](./SampleData_Introduction.ipynb).
</div>
Data compression with HDF5 and Pytables
HDF5 and data compression
HDF5 allows compression filters to be applied to datasets in a file to minimize the amount of space they consume. These compression features can drastically reduce the storage space required for your datasets, as well as improve the speed of I/O access to them, and the settings can differ from one data item to another within the same HDF5 file. A detailed presentation of HDF5 compression possibilities is provided here.
The two main ingredients that control compression performance for HDF5 datasets are the compression filters used (which compression algorithm, with which parameters) and the use of a chunked layout for the data. These two features are briefly developed hereafter.
Pytables and data compression
The application of compression filters to HDF5 files with the SampleData class is handled by the Pytables package, on which the SampleData HDF5 interface is built. Pytables implements a specific container class, the Filters class, to gather the various settings of the compression filters to apply to the datasets in a HDF5 file.
When using the SampleData class, you will have to specify these compression filter settings to the class methods dedicated to data compression. These settings are the parameters of the Pytables Filters class. These settings and their possible values are detailed in the next subsection.
Available filter settings
The description given below of compression options available with SampleData/Pytables is extracted from the Pytables documentation of the Filters class.
complevel (int) – Specifies a compression level for data. The allowed range is 0-9. A value of 0 (the default) disables compression.
complib (str) – Specifies the compression library to be used. Right now, zlib (the default), lzo, bzip2 and blosc are supported. Additional compressors for Blosc like blosc:blosclz (‘blosclz’ is the default in case the additional compressor is not specified), blosc:lz4, blosc:lz4hc, blosc:snappy, blosc:zlib and blosc:zstd are supported too.
Step1: This file is zipped in the package to reduce its size. We will have to unzip it to use it and learn how to reduce its size with the SampleData methods. If you are just reading the documentation and not executing it, you may just skip this cell and the next one.
Step2: Dataset presentation
In this tutorial, we will work on a copy of this dataset, to leave the original data unaltered.
We will start by creating an autodeleting copy of the file, and print its content to discover what it contains.
Step3: As you can see, this dataset already has rich content. It is a digital twin of a real polycrystalline microstructure of a grade 2 Titanium sample, gathering both experimental and numerical data obtained through Diffraction Contrast Tomography imaging and FFT-based mechanical simulation.
This dataset has actually been constructed using the Microstructure class of the pymicro package, which is based on the SampleData class. The link between these classes will be discussed in the next tutorial.
This dataset contains only uncompressed data. We will try to reduce its size by using various compression methods on the large data items that it contains. You can see that most of them are stored in the 3DImage Group CellData.
Apply compression settings for a specific array
We will start by compressing the grain_map Field data array of the CellData image. Let us look more closely at this data item
Step4: We can see above that this data item is not compressed (complevel=0), and has a disk size of almost 2 Mb.
To apply a set of compression settings to this data item, you need to
Step5: use the SampleData set_chunkshape_and_compression method with the dictionary and the name of the data item as arguments
Step6: As you can see, the storage size of the data item has been greatly reduced, by more than 10 times (126 Kb vs 1.945 Mb), with these compression settings. Let us see what will change if we use different settings
Step7: As you may observe, the compression ratio is significantly affected by the choice of the compression level. The higher the compression level, the higher the compression ratio, but also the lower the I/O speed. On the other hand, you can also remark that, in the present case, using the shuffle filter deteriorates the compression ratio.
Let us try with another data item
Step8: In contrast, for this second array, the shuffle filter significantly improves the compression ratio. However, in this case, you can see that the compression ratio achieved is much lower than for the grain_map array.
<div class="alert alert-warning">
**Warning 1**
The efficiency of compression algorithms in terms of compression ratio is strongly affected by the data itself (variety, value and position of the stored values in the array). Compression filters will not have the same behavior with all data arrays, as you have observed just above. Be aware of this fact, and do not hesitate to conduct tests to find the best settings for your datasets!
</div>
<div class="alert alert-warning">
**Warning 2**
Whenever you change the compression or chunkshape settings of your datasets, the data item is re-created into the *SampleData* dataset, which may be costly in computational time. Be careful if you are dealing with very large data arrays and want to try out several settings to find the best I/O speed / compression ratio compromise, with the `set_chunkshape_and_compression` method. You may want to try on a subset of your large array to speed up the process.
</div>
Apply same compression settings for a series of nodes
If you need to apply the same compression settings to a list of data items, you may use the set_nodes_compression_chunkshape method. This method works exactly like set_chunkshape_and_compression, but takes a list of node names as argument instead of just one. The inputted compression settings are then applied to all the nodes in the list
Step9: Lossy compression and data normalization
The compression filters used above preserve exactly the original values of the stored data. However, specific filters also make lossy compression possible, which removes the non-relevant part of the data. As a result, the data compression ratio is usually strongly increased, at the cost that the stored data is no longer exactly equal to the inputted data array.
One of the most important features of a data array that increases its compressibility is the presence of patterns in the data. If a value or a series of values is repeated multiple times throughout the data array, data compression can be very efficient (the pattern can be stored only once).
Numerical simulation and measurement tools usually output data as standard single or double precision floating point numbers, yielding data arrays with values that have a lot of digits. Typically, these values are all different, and hence these arrays cannot be efficiently compressed.
The Amitex_stress_1 and Amitex_strain_1 data arrays are two tensor fields output by a continuum mechanics FFT-based solver, and typically fall into this category
Step10: Lossy compression
Usually, the relevant precision of data is only of a few digits, so that many values of the array should be considered equal. The idea of lossy compression is to truncate values up to a desired precision, which increases the number of equal values in a dataset and hence increases its compressibility.
Lossy compression can be applied to floating point data arrays in SampleData datasets using the least_significant_digit compression setting. If you set the value of this option to $N$, the data will be truncated after the $N^{th}$ significant digit after the decimal point. Let us see an example.
Step11: As you may observe, the compression ratio has been improved, and the retrieved values after lossy compression are effectively equal to the original array up to the third digit after the decimal point.
We will now try to increase the compression ratio by reducing the number of conserved digits to 2
Step12: As you can see, the compression ratio has again been improved, now close to 75%. Now you know how to choose the best compromise between lost precision and compression ratio.
Normalization to improve compression ratio
If you look more closely at the Amitex_stress_1 array values, you can observe that the values of this array have been output within a certain scale, which in particular impacts the number of significant digits that come before the decimal point. Sometimes the precision of the data requires fewer significant digits than its scale of representation.
In that case, storing the complete data array at its original scale is not necessary, and very inefficient in terms of data size. To optimize the storage of such datasets, one can normalize them to a form with very few digits before the decimal point (1 or 2), and store their scale separately to be able to revert the normalization operation when retrieving the data.
This allows reducing the total number of significant digits of the data, and hence further improving the achievable compression ratio with lossy compression.
The SampleData class allows you to apply this operation automatically when applying compression settings to your dataset. All you have to do is add to the compression_options dictionary the key normalization with one of its possible values.
To try it, we will close (and delete) our test dataset and recopy the original file, to apply normalization and lossy compression on the original raw data
Step13: Standard Normalization
The standard normalization setting will center and reduce the data of an array $X$ by storing a new array $Y = \frac{X - \bar{X}}{\sigma(X)}$, where $\bar{X}$ and $\sigma(X)$ are respectively the mean and the standard deviation of the data array $X$.
Step14: As you can see, the compression ratio has been strongly improved by this normalization operation, reaching 90%.
When looking at the retrieved value after compression, you can see that the relative precision loss varies depending on the field component that is observed. The error on the third, large component value is less than 1%, which is consistent with the truncation to 2 significant digits. However, this is not the case for the other components, which have values smaller by two or three orders of magnitude and are retrieved with larger errors.
This is explained by the fact that the standard normalization option scales the array as a whole. As a result, if there are large differences in the scale of different components of a vector or tensor field, the precision of the smaller components will be less preserved.
Standard Normalization per components for vector/tensor fields
Another normalization option is available for SampleData field arrays, which allows applying standard normalization individually to each component of a field, in order to keep a constant relative precision for each component when applying lossy compression to the field data array.
To use this option, you will need to set the normalization value to standard_per_component
Step15: As you can see, the error in the retrieved array is now less than 1% for each component of the field value. However, the cost was a reduced improvement of the compression ratio.
Visualization of normalized data
Step16: As you can see, the Amitex_stress_1 Attribute node data in the dataset XDMF file is now provided by a Function item type, involving three data arrays with the original field shape. This function computes $X' = Y*\sigma(X) + \bar{X}$, which reverts the normalization.
Step17: Changing the chunksize of a node
Compressing all fields when adding Image or Mesh Groups
Changing the chunksize of a data array with SampleData is very simple. You just have to pass as a tuple the news shape of the chunks you want for your data array, and pass it as an argument to the set_chunkshape_and_compression or set_nodes_compression_chunkshape
Step18: As you can see, the chunkshape has been changed, which has also affected the memory size of the compressed data array. We have indeed reduced the number of chunks in the dataset, which reduces the number of data to store. This modification can also improve or deteriorate the I/O speed of access to your data array in the dataset. The reader is once again refered to dedicated documents to know more ion this matter
Step19: The node has been created with the desired chunkshape and compression filters.
Repacking files
We now create a new copy of the original dataset, and try to reduce the size of all heavy data items, to reduce as much as possible the size of our dataset.
Step20: Now that we have compressed a few of the items of our dataset, the disk size of its HDF5 file should have diminished. Let us check again the size of its data items, and of the file
Step21: The file size has not changed, surprisingly, even though the large Amitex_stress_1 array has been shrunk from almost 50 Mb to roughly 5 Mb. This is due to a specific feature of HDF5 files
Step22: You see that repacking the file has freed some memory space and reduced its size.
<div class="alert alert-info">
**Note**
Note that the size of the file is larger than the size of the data items printed by `print_dataset_content`. This extra size is the memory size occupied by the data array storing *Element Tags* for the mesh `grains_mesh`. Element tags are not printed by the printing methods, as they can be very numerous and would clutter the printed information.
</div>
Once again, you should repack your file at carefully chosen times, as it is a very costly operation for large datasets. The SampleData class constructor has an autorepack option. If it is set to True, the file is automatically repacked when closing the dataset.
We can now close our dataset, and remove the original unarchived file | Python Code:
from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path
import os
dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure') # test dataset file path
tar_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure.tar.gz') # dataset archive path
Explanation: 6 - Data Compression
This short Notebook will introduce you to how to efficiently compress your data within SampleData datasets.
<div class="alert alert-info">
**Note**
Throughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [first notebook of this User Guide](./SampleData_Introduction.ipynb).
</div>
Data compression with HDF5 and Pytables
HDF5 and data compression
HDF5 allows compression filters to be applied to datasets in a file to minimize the amount of space they consume. These compression features can drastically reduce the storage space required for your datasets, as well as improve the speed of I/O access to them, and the settings can differ from one data item to another within the same HDF5 file. A detailed presentation of HDF5 compression possibilities is provided here.
The two main ingredients that control compression performance for HDF5 datasets are the compression filters used (which compression algorithm, with which parameters) and the use of a chunked layout for the data. These two features are briefly developed hereafter.
Pytables and data compression
The application of compression filters to HDF5 files with the SampleData class is handled by the Pytables package, on which the SampleData HDF5 interface is built. Pytables implements a specific container class, the Filters class, to gather the various settings of the compression filters to apply to the datasets in a HDF5 file.
When using the SampleData class, you will have to specify these compression filter settings to the class methods dedicated to data compression. These settings are the parameters of the Pytables Filters class. These settings and their possible values are detailed in the next subsection.
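For reference, such a set of settings maps, under the hood, to a Pytables Filters object; a minimal sketch (the chosen values are purely illustrative):
```python
import tables

# Illustrative Filters object: zlib compression at level 5, with the shuffle filter enabled
filters = tables.Filters(complevel=5, complib='zlib', shuffle=True)
```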
Available filter settings
The description given below of compression options available with SampleData/Pytables is exctracted from the Pytables documentation of the Filter class.
complevel (int) – Specifies a compression level for data. The allowed range is 0-9. A value of 0 (the default) disables compression.
complib (str) – Specifies the compression library to be used. Right now, zlib (the default), lzo, bzip2 and blosc are supported. Additional compressors for Blosc like blosc:blosclz (‘blosclz’ is the default in case the additional compressor is not specified), blosc:lz4, blosc:lz4hc, blosc:snappy, blosc:zlib and blosc:zstd are supported too. Specifying a compression library which is not available in the system issues a FiltersWarning and sets the library to the default one.
shuffle (bool) – Whether or not to use the Shuffle filter in the HDF5 library. This is normally used to improve the compression ratio. A false value disables shuffling and a true one enables it. The default value depends on whether compression is enabled or not; if compression is enabled, shuffling defaults to be enabled, else shuffling is disabled. Shuffling can only be used when compression is enabled.
bitshuffle (bool) – Whether or not to use the BitShuffle filter in the Blosc library. This is normally used to improve the compression ratio. A false value disables bitshuffling and a true one enables it. The default value is disabled.
fletcher32 (bool) – Whether or not to use the Fletcher32 filter in the HDF5 library. This is used to add a checksum on each data chunk. A false value (the default) disables the checksum.
least_significant_digit (int) – If specified, data will be truncated (quantized). In conjunction with enabling compression, this produces ‘lossy’, but significantly more efficient compression. For example, if least_significant_digit=1, data will be quantized using around(scale*data)/scale, where scale = 2^bits, and bits is determined so that a precision of 0.1 is retained (in this case bits=4). Default is None, or no quantization.
Chunked storage layout
Compressed data is stored in a data array of an HDF5 dataset using a chunked storage mechanism. When chunked storage is used, the data array is split into equally sized chunks, each of which is stored separately in the file, as illustrated in the diagram below. Compression is applied to each individual chunk. When an I/O operation is performed on a subset of the data array, only chunks that include data from the subset participate in I/O and need to be uncompressed or compressed.
Chunking data allows you to:
Generally improve, sometimes drastically, the I/O performance of datasets. This comes from the fact that the chunked layout removes the reading-speed anisotropy of a data array, which otherwise depends on the dimension along which its elements are read (i.e. the same number of disk accesses is required when reading data by rows or by columns).
Chunked storage also enables adding more data to a dataset without rewriting the whole dataset.
<img src="./Images/Tutorial_5/chuncked_layout.png" width="50%">
By default, data arrays are stored with a chunked layout in SampleData datasets. The size of chunks is the key parameter that controls the impact on I/O performance for chunked datasets. The shape of chunks is computed automatically by the Pytables package, providing a value yielding generally good I/O performance. If you need to go further in the I/O optimization, you may consult the Pytables documentation page dedicated to compression optimization for I/O speed and storage space. In addition, it is highly recommended to read this document in order to be able to efficiently optimize I/O and storage performance for your chunked datasets. These performance issues will not be discussed in this tutorial.
Compressing your datasets with SampleData
Within SampleData, data compression can be applied to:
data arrays
structured data arrays
field data arrays
There are two ways to control the compression settings of your SampleData data arrays:
Providing compression settings to data item creation methods
Using the set_chunkshape_and_compression and set_nodes_compression_chunkshape methods
The compression options dictionary
In both cases, you will have to pass the various settings of the compression filter you want to apply to your data to the appropriate SampleData method. All of these methods accept for that purpose a compression_options argument, which must be a dictionary. Its keys and associated values can be chosen among the ones listed in the Available filter settings subsection above.
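As an illustration, such a dictionary could look like the following (the settings are arbitrary, not a recommendation):
```python
# Illustrative only: a compression_options dictionary combining several of the settings listed above
compression_options = {'complib': 'zlib', 'complevel': 5, 'shuffle': True,
                       'least_significant_digit': 3}
```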
Compress already existing data arrays
We will start by looking at how we can change compression settings of already existing data in SampleData datasets.
For that, we will use a material science dataset that is part of the Pymicro example datasets.
End of explanation
# Save current directory
cwd = os.getcwd()
# move to example data directory
os.chdir(PYMICRO_EXAMPLES_DATA_DIR)
# unarchive the dataset
os.system(f'tar -xvf {tar_file}')
# get back to UserGuide directory
os.chdir(cwd)
Explanation: This file is zipped in the package to reduce its size. We will have to unzip it to use it and learn how to reduce its size with the SampleData methods. If you are just reading the documentation and not executing it, you may just skip this cell and the next one.
End of explanation
# import SampleData class
from pymicro.core.samples import SampleData as SD
# import Numpy
import numpy as np
# Create a copy of the existing dataset
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
print(data)
data.get_file_disk_size()
Explanation: Dataset presentation
In this tutorial, we will work on a copy of this dataset, to leave the original data unaltered.
We will start by creating an autodeleting copy of the file, and print its content to discover what it contains.
End of explanation
data.print_node_info('grain_map')
Explanation: As you can see, this dataset already has rich content. It is a digital twin of a real polycrystalline microstructure of a grade 2 Titanium sample, gathering both experimental and numerical data obtained through Diffraction Contrast Tomography imaging and FFT-based mechanical simulation.
This dataset has actually been constructed using the Microstructure class of the pymicro package, which is based on the SampleData class. The link between these classes will be discussed in the next tutorial.
This dataset contains only uncompressed data. We will try to reduce its size by using various compression methods on the large data items that it contains. You can see that most of them are stored in the 3DImage Group CellData.
Apply compression settings for a specific array
We will start by compressing the grain_map Field data array of the CellData image. Let us look more closely at this data item:
End of explanation
compression_options = {'complib':'zlib', 'complevel':1}
Explanation: We can see above that this data item is not compressed (complevel=0), and has a disk size of almost 2 Mb.
To apply a set of compression settings to this data item, you need to:
create a dictionary specifying the compression settings:
End of explanation
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
data.print_node_compression_info('grain_map')
Explanation: use the SampleData set_chunkshape_and_compression method with the dictionary and the name of the data item as arguments
End of explanation
# With `shuffle` option:
print('\nUsing the shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# No `shuffle` option:
print('\nUsing no shuffle option, with the zlib compressor and a compression level of 9:')
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# With `shuffle` option:
print('\nUsing the shuffle option, with the lzo compressor and a compression level of 1:')
compression_options = {'complib':'lzo', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# No `shuffle` option:
print('\nUsing no shuffle option, with the lzo compressor and a compression level of 1:')
compression_options = {'complib':'lzo', 'complevel':1, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
Explanation: As you can see, the storage size of the data item has been greatly reduced, by more than 10 times (126 Kb vs 1.945 Mb), with these compression settings. Let us see what will change if we use different settings:
End of explanation
data.print_node_info('Amitex_stress_1')
# With `shuffle` option:
print('\nUsing the shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# No `shuffle` option:
print('\nUsing no shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
Explanation: As you may observe, the compression ratio is significantly affected by the choice of the compression level. The higher the compression level, the higher the compression ratio, but also the lower the I/O speed. On the other hand, you can also remark that, in the present case, using the shuffle filter deteriorates the compression ratio.
Let us try with another data item:
End of explanation
# Print current size of disks and their compression settings
data.get_node_disk_size('grain_map_raw')
data.print_node_compression_info('grain_map_raw')
data.get_node_disk_size('uncertainty_map')
data.print_node_compression_info('uncertainty_map')
data.get_node_disk_size('mask')
data.print_node_compression_info('mask')
# Compress datasets
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True}
data.set_nodes_compression_chunkshape(node_list=['grain_map_raw', 'uncertainty_map', 'mask'],
compression_options=compression_options)
# Print new size of disks and their compression settings
data.get_node_disk_size('grain_map_raw')
data.print_node_compression_info('grain_map_raw')
data.get_node_disk_size('uncertainty_map')
data.print_node_compression_info('uncertainty_map')
data.get_node_disk_size('mask')
data.print_node_compression_info('mask')
Explanation: In contrast, for this second array, the shuffle filter significantly improves the compression ratio. However, in this case, you can see that the compression ratio achieved is much lower than for the grain_map array.
<div class="alert alert-warning">
**Warning 1**
The efficiency of compression algorithms in terms of compression ratio is strongly affected by the data itself (variety, value and position of the stored values in the array). Compression filters will not have the same behavior with all data arrays, as you have observed just above. Be aware of this fact, and do not hesitate to conduct tests to find the best settings for your datasets!
</div>
<div class="alert alert-warning">
**Warning 2**
Whenever you change the compression or chunkshape settings of your datasets, the data item is re-created into the *SampleData* dataset, which may be costly in computational time. Be careful if you are dealing with very large data arrays and want to try out several settings to find the best I/O speed / compression ratio compromise, with the `set_chunkshape_and_compression` method. You may want to try on a subset of your large array to speed up the process.
</div>
Apply same compression settings for a series of nodes
If you need to apply the same compression settings to a list of data items, you may use the set_nodes_compression_chunkshape method. This method works exactly like set_chunkshape_and_compression, but takes a list of node names as argument instead of just one. The inputted compression settings are then applied to all the nodes in the list:
End of explanation
import numpy as np
print(f"Data array `grain_map` has {data['grain_map'].size} elements,"
f"and {np.unique(data['grain_map']).size} different values.\n")
print(f"Data array `Amitex_stress_1` has {data['Amitex_stress_1'].size} elements,"
f"and {np.unique(data['Amitex_stress_1']).size} different values.\n")
Explanation: Lossy compression and data normalization
The compression filters used above preserve exactly the original values of the stored data. However, specific filters also make lossy compression possible, which removes the non-relevant part of the data. As a result, the data compression ratio is usually strongly increased, at the cost that the stored data is no longer exactly equal to the inputted data array.
One of the most important features of a data array that increases its compressibility is the presence of patterns in the data. If a value or a series of values is repeated multiple times throughout the data array, data compression can be very efficient (the pattern can be stored only once).
Numerical simulation and measurement tools usually output data as standard single or double precision floating point numbers, yielding data arrays with values that have a lot of digits. Typically, these values are all different, and hence these arrays cannot be efficiently compressed.
The Amitex_stress_1 and Amitex_strain_1 data arrays are two tensor fields output by a continuum mechanics FFT-based solver, and typically fall into this category: they have almost no equal values or clear data patterns.
As you can see above, the best achieved compression ratio is 60%, while for the dataset grain_map the compression is way more efficient, with a best ratio that climbs up to 97% (62 Kb with the zlib compressor and a compression level of 9, versus an initial data array of 1.945 Mb). This is due to the nature of the grain_map data array, which is a tridimensional map of grain identification numbers in the microstructure of the Titanium sample represented by the dataset. It is hence an array containing a few integer values that are repeated many times.
Let us analyze the values of these two data arrays to illustrate this difference:
End of explanation
# We will store a value of an array to verify how it evolves after compression
original_value = data['Amitex_stress_1'][20,20,20]
# Apply lossy compression
data.get_node_disk_size('Amitex_stress_1')
# Set up compression settings with lossy compression: truncate after the third digit after the decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':3}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20]
print(f'Original array value: {original_value} \n'
f'Array value after lossy compression: {new_value}')
Explanation: Lossy compression
Usually, the relevant precision of data is only of a few digits, so that many values of the array should be considered equal. The idea of lossy compression is to truncate values up to a desired precision, which increases the number of equal values in a dataset and hence increases its compressibility.
Lossy compression can be applied to floating point data arrays in SampleData datasets using the least_significant_digit compression setting. If you set the value of this option to $N$, the data will be truncated after the $N^{th}$ significant digit after the decimal point. Let us see an example.
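As a side note, the quantization rule quoted from the Pytables documentation earlier (around(scale*data)/scale with scale = 2**bits) can be reproduced with plain NumPy. This toy sketch only illustrates the underlying mechanism; it is not part of the SampleData API:
```python
import numpy as np

bits = 4                  # for least_significant_digit=1, a precision of 0.1 requires bits=4
scale = 2.0 ** bits
print(np.around(scale * 3.14159) / scale)   # -> 3.125, within 0.1 of the original value
```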
End of explanation
# Set up compression settings with lossy compression: truncate after the second digit after the decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20]
print(f'Original array value: {original_value} \n'
f'Array value after lossy compression 2 digits: {new_value}')
Explanation: As you may observe, the compression ratio has been improved, and the retrieved values after lossy compression are effectively equal to the original array up to the third digit after the decimal point.
We will now try to increase the compression ratio by reducing the number of conserved digits to 2:
End of explanation
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
Explanation: As you can see, the compression ratio has again been improved, now close to 75%. Now you know how to choose the best compromise between lost precision and compression ratio.
Normalization to improve compression ratio
If you look more closely at the Amitex_stress_1 array values, you can observe that the values of this array have been output within a certain scale, which in particular impacts the number of significant digits that come before the decimal point. Sometimes the precision of the data requires fewer significant digits than its scale of representation.
In that case, storing the complete data array at its original scale is not necessary, and very inefficient in terms of data size. To optimize the storage of such datasets, one can normalize them to a form with very few digits before the decimal point (1 or 2), and store their scale separately to be able to revert the normalization operation when retrieving the data.
This allows reducing the total number of significant digits of the data, and hence further improving the achievable compression ratio with lossy compression.
The SampleData class allows you to apply this operation automatically when applying compression settings to your dataset. All you have to do is add to the compression_options dictionary the key normalization with one of its possible values.
To try it, we will close (and delete) our test dataset and recopy the original file, to apply normalization and lossy compression on the original raw data:
End of explanation
# Set up lossy compression with standard normalization: truncate after the second digit after the decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20,:]
# Get in memory value of the node
memory_value = data.get_node('Amitex_stress_1', as_numpy=False)[20,20,20,:]
print(f'Original array value: {original_value} \n'
f'Array value after normalization and lossy compression 2 digits: {new_value}',
f'Value in memory: {memory_value}')
Explanation: Standard Normalization
The standard normalization setting will center and reduce the data of an array $X$ by storing a new array $Y$ that is:
$Y = \frac{X - \bar{X}}{\sigma(X)}$
where $\bar{X}$ and $\sigma(X)$ are respectively the mean and the standard deviation of the data array $X$.
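For intuition, a tiny NumPy sketch with arbitrary large-scale values (illustrative only, not part of the SampleData API):
```python
import numpy as np

X = np.array([1.23e8, -5.67e7, 9.01e7])   # values with many digits before the decimal point
Y = (X - X.mean()) / X.std()              # normalized values are now of order 1
print(Y)
```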
This operation reduces the number of significant digits before the decimal point to 1 or 2 for the large majority of the data array values. After standard normalization, lossy compression will yield much higher compression ratios for data arrays that have a non-normalized scale.
The SampleData class ensures that when data arrays are retrieved, or visualized, the user gets or sees the original data, with the normalization reverted.
Let us try to apply it to our stress field Amitex_stress_1.
End of explanation
del data
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
# Set up lossy compression with per-component normalization: truncate after the second digit after the decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard_per_component'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20,:]
# Get in memory value of the node
memory_value = data.get_node('Amitex_stress_1', as_numpy=False)[20,20,20,:]
print(f'Original array value: {original_value} \n'
f'Array value after normalization per component and lossy compression 2 digits: {new_value}\n',
f'Value in memory: {memory_value}')
Explanation: As you can see, the compression ratio has been strongly improved by this normalization operation, reaching 90%.
When looking at the retrieved value after compression, you can see that the relative precision loss varies depending on the field component that is observed. The error on the third, large component value is less than 1%, which is consistent with the truncation to 2 significant digits. However, this is not the case for the other components, which have values smaller by two or three orders of magnitude and are retrieved with larger errors.
This is explained by the fact that the standard normalization option scales the array as a whole. As a result, if there are large differences in the scale of different components of a vector or tensor field, the precision of the smaller components will be less preserved.
Standard Normalization per components for vector/tensor fields
Another normalization option is available for SampleData field arrays, which allows applying standard normalization individually to each component of a field, in order to keep a constant relative precision for each component when applying lossy compression to the field data array.
To use this option, you will need to set the normalization value to standard_per_component:
End of explanation
data.print_xdmf()
Explanation: As you can see, the error in the retrieved array is now less than 1% for each component of the field value. However, the cost was a reduced improvement of the compression ratio.
Visualization of normalized data
End of explanation
data.get_node_disk_size('Amitex_stress_1')
data.get_node_disk_size('Amitex_stress_1_norm_std')
data.get_node_disk_size('Amitex_stress_1_norm_mean')
Explanation: As you can see, the Amitex_stress_1 Attribute node data in the dataset XDMF file is now provided by a Function item type, involving three data arrays with the original field shape. This function computes:
$X' = Y*\sigma(X) + \bar{X}$
where $*$ and $+$ are element-wise product and addition operators for multidimensional arrays. This operation reverts the component-wise normalization of the data. The Paraview software is able to interpret this syntax of the XDMF format and hence, when visualizing data, you will see the values with the original scaling.
This operation required the creation of two large arrays in the dataset, that store the mean and standard deviation of each component of the field, repeated along each spatial dimension of the field data array. This is mandatory to allow visualization of the data with the right scaling in Paraview. However, as these arrays contain a very low amount of distinct data ($2*N_c$: two times the number of components of the field), they can be very easily compressed and hence do not significantly affect the storage size of the data item, as you may see below:
End of explanation
data.print_node_compression_info('Amitex_stress_1')
data.get_node_disk_size('Amitex_stress_1')
# Change chunkshape of the array
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard_per_component'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', chunkshape=(10,10,10,6),
compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
data.print_node_compression_info('Amitex_stress_1')
Explanation: Changing the chunksize of a node
Compressing all fields when adding Image or Mesh Groups
Changing the chunksize of a data array with SampleData is very simple. You just have to pass as a tuple the news shape of the chunks you want for your data array, and pass it as an argument to the set_chunkshape_and_compression or set_nodes_compression_chunkshape:
End of explanation
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
# getting the `Amitex_stress_1` array
array = data['Amitex_stress_1']
# create a new field for the CellData image group with the `orientation_map` array and add compression settings
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
new_cshape = (10,10,10,3)
# Add data array as field of the CellData Image Group
data.add_field(gridname='CellData', fieldname='test_compression', indexname='testC', array=array,
chunkshape=new_cshape, compression_options=compression_options, replace=True)
# Check size and settings of new field
data.print_node_info('testC')
data.get_node_disk_size('testC')
data.print_node_compression_info('testC')
Explanation: As you can see, the chunkshape has been changed, which has also affected the memory size of the compressed data array. We have indeed reduced the number of chunks in the dataset, which reduces the amount of data to store. This modification can also improve or deteriorate the I/O speed of access to your data array in the dataset. The reader is once again referred to dedicated documents to know more on this matter: here and here.
Compressing data and setting the chunkshape upon creation of data items
Until here we have only modified the compression settings of already existing data in SampleData datasets. In this process, the data items are replaced by the new compressed version of the data, which is a costly operation. For this reason, if they are known in advance, it is best to apply the compression filters and appropriate chunkshape when creating the data item.
If you have read through all the tutorials of this user guide, you should know all the methods that allow creating data items in your datasets, like add_data_array, add_field, add_mesh... All of these methods accept the two arguments chunkshape and compression_options, which work exactly as for the set_chunkshape_and_compression or set_nodes_compression_chunkshape methods. You can hence use them to create your data items directly with the appropriate compression settings.
Let us see an example. We will get an array from our dataset, and try to recreate it with a new name and some data compression:
End of explanation
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
compression_options1 = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
compression_options2 = {'complib':'zlib', 'complevel':9, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options1)
data.set_nodes_compression_chunkshape(node_list=['grain_map', 'grain_map_raw','mask'],
compression_options=compression_options2)
Explanation: The node has been created with the desired chunkshape and compression filters.
Repacking files
We now create a new copy of the original dataset, and try to reduce the size of all heavy data items, to reduce as much as possible the size of our dataset.
End of explanation
data.print_dataset_content(short=True)
data.get_file_disk_size()
Explanation: Now that we have compressed a few of the items of our dataset, the disk size of its HDF5 file should have diminished. Let us check again the size of its data items, and of the file:
End of explanation
data.repack_h5file()
data.get_file_disk_size()
Explanation: The file size has not changed, surprisingly, even though the large Amitex_stress_1 array has been shrunk from almost 50 Mb to roughly 5 Mb. This is due to a specific feature of HDF5 files: they do not free up the memory space that they have used in the past. The memory space remains associated to the file, and is used in priority when new data is written into the dataset.
After changing the compression settings of one or several nodes in your dataset, the actual data memory size may have been reduced, and you may want your file to be smaller on disk. To retrieve the freed up memory space, you may repack your file (overwrite it with a copy of itself that has just the size required to store all actual data).
To do that, you may use the SampleData method repack_h5file:
End of explanation
# remove SampleData instance
del data
os.remove(dataset_file+'.h5')
os.remove(dataset_file+'.xdmf')
Explanation: You see that repacking the file has freed some memory space and reduced its size.
<div class="alert alert-info">
**Note**
Note that the size of the file is larger than the size of the data items printed by `print_dataset_content`. This extra size is the memory size occupied by the data array storing *Element Tags* for the mesh `grains_mesh`. Element tags are not printed by the printing methods, as they can be very numerous and would clutter the printed information.
</div>
Once again, you should repack your file at carefully chosen times, as it is a very costly operation for large datasets. The SampleData class constructor has an autorepack option. If it is set to True, the file is automatically repacked when closing the dataset.
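For instance, a hypothetical call (the filename below is illustrative) would look like:
```python
# Illustrative: open a dataset that will be repacked automatically when it is closed
data = SD(filename='my_dataset', autorepack=True)
```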
We can now close our dataset, and remove the original unarchived file:
End of explanation |
4,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilistic Programming in Python
Author : Ronojoy Adhikari
Step1: Summary
Step2: Summary
Random variables are abstract objects. Methods are available for operating on them algebraically. The probability distributions, methods for drawing random samples, and statistical metrics are transparently propagated.
1.3 More PPL coolness
Step3: Summary
Conditioning, which is the first step towards inference, is done automatically. A wide variety of conditions can be used. P(A | B) translates to a.given(b).
1.4 Reasoning under uncertainty "PPL style"
Replace with medical example | Python Code:
from lea import *
# the canonical random variable : a fair coin
faircoin = Lea.fromVals('Head', 'Tail')
# toss the coin a few times
faircoin.random(10)
# Amitabh Bachan's coin from Sholay
sholaycoin = Lea.fromVals('Head', 'Head')
# Amitabh always wins (and, heroically, sacrifices himself for Dharamendra!)
sholaycoin.random(10)
# more reasonably, a biased coin
biasedcoin = Lea.fromValFreqs(('Head', 1), ('Tail', 2))
# toss it a few times
biasedcoin.random(10)
# random variables with more states : a fair die
die = Lea.fromVals(1, 2, 3, 4, 5, 6)
# throw the die a few times
die.random(20)
# Lea does standard statistics
# die.mean
# die.mode
# die.var
# die.entropy
Explanation: Probabilistic Programming in Python
Author : Ronojoy Adhikari
Email : [email protected] | Web : www.imsc.res.in/~rjoy
Github : www.github.com/ronojoy | Twitter: @phyrjoy
Part 1 : Introduction to Lea
1.1 : Getting started
End of explanation
# Let's create a pair of dice
die1 = die.clone()
die2 = die.clone()
# The throw of dice
dice = die1 + die2
dice
dice.random(10)
dice.mean
dice.mode
print dice.histo()
Explanation: Summary : Random variables are objects. N samples are drawn from the random variable y using
```python
y.random(N)
```
Standard statistical measures of distributions are provided. Nothing extraordinary (yet!).
Exercise : Write a Python code that produces the same output as the following Lea code
```python
Lea.fromVals('rain', 'sun').random(20)
```
How many lines do you need in Python ?
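One possible plain-Python answer, shown only for comparison (try your own version first):
```python
# A plain Python equivalent of the Lea one-liner above
import random
print([random.choice(['rain', 'sun']) for _ in range(20)])
```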
1.2 : Now some PPL coolness
End of explanation
## We can create a new distribution, conditioned on our state of knowledge : P(sum | sum <= 6)
conditionalDice = dice.given(dice<=6)
## What is our best guess for the result of the throw ?
conditionalDice.mode
## Conditioning can be done in many ways : suppose we know that the first die came up 3.
dice.given(die1 == 3)
## Conditioning can be done in still more ways : suppose we know that **either** of the two dies came up 3
dice.given((die1 == 3) | (die2 == 3))
Explanation: Summary
Random variables are abstract objects. Methods are available for operating on them algebraically. The probability
distributions, methods for drawing random samples, statistical metrics, are transparently propagated.
1.3 More PPL coolness : conditioning
"You just threw two dice. Can you guess the result ?"
"Here's a tip : the sum is less than 6"
End of explanation
# Species is a random variable with states "common" and "rare", with probabilities determined by the population. Since
# there are only two states, species states are, equivalently, "rare" and "not rare". Species can be a Boolean!
rare = Lea.boolProb(1,1000)
# Similarly, pattern is either "present" or "not present". It too is a Boolean, but, its probability distribution
# is conditioned on "rare" or "not rare"
patternIfrare = Lea.boolProb(98, 100)
patternIfNotrare = Lea.boolProb(5, 100)
# Now, lets build the conditional probability table for P(pattern | species)
pattern = Lea.buildCPT((rare , patternIfrare), ( ~rare , patternIfNotrare))
# Sanity check : do we get what we put in ?
pattern.given(rare)
# Finally, our moment of truth : Bayesian inference - what is P(rare | pattern )?
rare.given(pattern)
# And, now some show off : what is the probability of being rare and having a pattern ?
rare & pattern
# All possible outcomes
Lea.cprod(rare,pattern)
Explanation: Summary
Conditioning, which is the first step towards inference, is done automatically. A wide variety of conditions can be used. P(A | B) translates to a.given(b).
1.4 Reasoning under uncertainty "PPL style"
Replace with medical example
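As a hand check of the inference performed in the accompanying code, with $P(\text{rare})=0.001$, $P(\text{pattern}\mid\text{rare})=0.98$ and $P(\text{pattern}\mid\neg\text{rare})=0.05$, Bayes' rule gives
$$P(\text{rare}\mid\text{pattern}) = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} \approx 0.019,$$
so even when the pattern is observed, the species is rare with only about 2% probability, which is the value rare.given(pattern) computes.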
End of explanation |
4,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Step1: Problem Statement
Step3: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Step4: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
Step5: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
Step7: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from the standard cross-entropy cost $J$ to a regularized cost $J_{regularized} = J + \frac{\lambda}{2m}\sum\limits_{l}\sum\limits_{k}\sum\limits_{j}\left(W_{k,j}^{[l]}\right)^{2}$.
Step9: Expected Output
Step10: Expected Output
Step11: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
Step13: Observations
Step15: Expected Output
Step16: Expected Output
Step17: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary. | Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Let's first import the packages you are going to use.
End of explanation
train_X, train_Y, test_X, test_Y = load_2D_dataset()
Explanation: Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
End of explanation
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
End of explanation
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
End of explanation
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
End of explanation
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
Explanation: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
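For reference, the same sum can be written for an arbitrary number of layers — a minimal sketch (not part of the graded solution) that assumes the weights are stored as "W1", "W2", … in the parameters dictionary, exactly as in this assignment:
```python
# Generic L-layer form of the L2 term (sketch)
L = len(parameters) // 2   # parameters holds W1..WL and b1..bL
L2_regularization_cost = (lambd / (2 * m)) * sum(
    np.sum(np.square(parameters["W" + str(l)])) for l in range(1, L + 1))
```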
End of explanation
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
End of explanation
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation
End of explanation
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
End of explanation
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1<keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2<keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
Explanation: Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
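One way to see where the name "weight decay" comes from: write a single gradient-descent update with the regularized gradient derived above, where $dW^{[l]}_{\text{ce}}$ denotes the gradient of the cross-entropy part alone:
$$W^{[l]} := W^{[l]} - \alpha \left( dW^{[l]}_{\text{ce}} + \frac{\lambda}{m} W^{[l]} \right) = \left(1 - \frac{\alpha \lambda}{m}\right) W^{[l]} - \alpha \, dW^{[l]}_{\text{ce}}$$
So every update first shrinks ("decays") the weights by the factor $\left(1 - \frac{\alpha \lambda}{m}\right) < 1$ before applying the usual gradient step.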
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep_prob$ or keep it with probability $keep_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps (a short numerical check of the resulting scaling is shown after the list):
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{1} d^{1} ... d^{1}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (if the entry is 0.5 or more) you would do: X = (X < 0.5). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
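A quick standalone check of Steps 1–4 (independent of the graded function) showing that the inverted-dropout rescaling keeps the average activation roughly unchanged:
```python
import numpy as np

np.random.seed(0)
A = np.random.rand(20, 1000)               # stand-in activations
keep_prob = 0.8
D = np.random.rand(*A.shape) < keep_prob   # Steps 1-2: random 0/1 mask
A_drop = (A * D) / keep_prob               # Steps 3-4: shut neurons down, then rescale
print(A.mean(), A_drop.mean())             # the two means are close
```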
End of explanation
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
End of explanation
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation.
End of explanation
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
End of explanation |
4,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The multi-layer perceptron with scikit-learn
Official documentation
Step1: Classification
C.f. http
Step2: Once the neural network is trained, we can test it on new examples
Step3: clf.coefs_ contains the weights of the neural network (a list of arrays)
Step4: Vector of probability estimates $P(y|x)$ per sample $x$
Step5: Regression
C.f. http
Step6: Once the neural network is trained, we can test it on new examples
Step7: clf.coefs_ contains the weights of the neural network (a list of arrays)
Step8: Regularization
C.f. http
Step9: Normalizing the input data
C.f. http | Python Code:
import sklearn
# version >= 0.18 is required
version = [int(num) for num in sklearn.__version__.split('.')]
assert (version[0] >= 1) or (version[1] >= 18)
Explanation: The multi-layer perceptron with scikit-learn
Official documentation: http://scikit-learn.org/stable/modules/neural_networks_supervised.html
Related notebooks:
- http://www.jdhp.org/docs/notebooks/ai_multilayer_perceptron_fr.html
Checking the version of the scikit-learn library
Warning: the multi-layer perceptron has only been implemented in scikit-learn since version 0.18 (September 2016).
The source code of this implementation is available on GitHub.
The long discussion thread that preceded the integration of this implementation is available on the following page: issue #3204.
End of explanation
from sklearn.neural_network import MLPClassifier
X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(solver='lbfgs',
alpha=1e-5,
hidden_layer_sizes=(5, 2),
random_state=1)
clf.fit(X, y)
Explanation: Classification
C.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification
First example
End of explanation
clf.predict([[2., 2.], [-1., -2.]])
Explanation: Once the neural network is trained, we can test it on new examples:
End of explanation
clf.coefs_
[coef.shape for coef in clf.coefs_]
Explanation: clf.coefs_ contains the weights of the neural network (a list of arrays):
End of explanation
clf.predict_proba([[2., 2.], [-1., -2.]])
Explanation: Vector of probability estimates $P(y|x)$ per sample $x$:
End of explanation
from sklearn.neural_network import MLPRegressor
X = [[0., 0.], [1., 1.]]
y = [0, 1]
reg = MLPRegressor(solver='lbfgs',
alpha=1e-5,
hidden_layer_sizes=(5, 2),
random_state=1)
reg.fit(X, y)
Explanation: Regression
C.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regression
First example
End of explanation
reg.predict([[2., 2.], [-1., -2.]])
Explanation: Once the neural network is trained, we can test it on new examples:
End of explanation
reg.coefs_
[coef.shape for coef in reg.coefs_]
Explanation: reg.coefs_ contains the weights of the neural network (a list of arrays):
End of explanation
# TODO...
Explanation: Regularization
C.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regularization
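The scikit-learn page linked above controls L2 regularization through the estimator's alpha parameter. A minimal sketch (not part of the original notebook) reusing the toy X, y data from the classification example above:
```python
from sklearn.neural_network import MLPClassifier

# alpha is the L2 penalty term: larger values regularize more strongly
for alpha in [1e-5, 1e-2, 1.0]:
    clf_reg = MLPClassifier(solver='lbfgs', alpha=alpha,
                            hidden_layer_sizes=(5, 2), random_state=1)
    clf_reg.fit(X, y)
    print(alpha, clf_reg.predict([[2., 2.], [-1., -2.]]))
```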
End of explanation
X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(hidden_layer_sizes=(15,),
random_state=1,
max_iter=1, # <- !
warm_start=True) # <- !
for i in range(10):
clf.fit(X, y)
print(clf.coefs_)
Explanation: Normalizing the input data
C.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#tips-on-practical-use
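As that guide recommends, a common pattern is to standardize the inputs, fitting the scaler on the training data only. A minimal sketch (X_train / X_test are hypothetical splits, not defined in this notebook):
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on the training data only
X_test_scaled = scaler.transform(X_test)        # apply the same scaling to the test data
```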
Iterating manually
C.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#more-control-with-warm-start
Manually iterating the training loop can be handy for monitoring its progress or for steering it.
Here is an example where we follow the evolution of the network's weights over 10 iterations:
End of explanation |
4,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 3
Step1: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google also works) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Nota
Step2: Nota
Step3: 2. Implied volatility
Step4: 3. Gráficos del Pay Off | Python Code:
# import the packages that will be used
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
from datetime import datetime
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# some options for Python
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
Explanation: Class 3: Working with options
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Using pandas to download financial data
First of all, in order to download prices and option information from Yahoo, we need to load some Python packages. In this case, the main package will be pandas. SciPy and NumPy will also be used for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.
End of explanation
# Download data from Yahoo! Finance
#Tickers
tickers = ['AA','AAPL','MSFT', '^GSPC']
# Source
data_source = 'yahoo'
# Dates: from 01/01/2014 to 12/31/2016.
start_date = '2014-01-01'
end_date = '2016-12-31'
# Use the pandas data reader. The sort_index command sorts the data by date
assets = (web.DataReader(tickers, data_source, start_date, end_date)).sort_index('major_axis')
assets
Explanation: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google also works) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note: Python distributions usually do not include the pandas_datareader package by default, so it must be installed separately. The following command installs the package in Anaconda:
*conda install -c conda-forge pandas-datareader *
End of explanation
aapl = web.Options('AAPL', 'yahoo')
appl_opt = aapl.get_all_data().reset_index()
appl_opt
appl_opt['Expiry']
appl_opt['Type']
appl_opt.loc[1080]
call01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='call')]
call01
Explanation: Note: To download data from the Mexican stock exchange (BMV), the ticker must carry the MX extension.
For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
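For instance, the DataReader call used earlier works unchanged for BMV tickers — a sketch assuming the data_source, start_date and end_date defined above:
```python
# the ".MX" suffix selects the Mexican exchange listing
bmv = web.DataReader(['MEXCHEM.MX', 'GFNORTEO.MX'], data_source, start_date, end_date)
```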
End of explanation
ax = call01.set_index('Strike')[['IV']].plot(figsize=(8,6))
ax.axvline(call01.Underlying_Price.iloc[0], color='g');
put01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='put')]
put01
ax = put01.set_index('Strike')[['IV']].plot(figsize=(8,6))
ax.axvline(put01.Underlying_Price.iloc[0], color='g');
Explanation: 2. Implied volatility
End of explanation
def call_payoff(ST, K):
return max(0, ST-K)
call_payoff(25, 30)
def call_payoffs(STmin, STmax, K, step=1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(call_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
call_payoffs(10,25,15)
def plot_call_payoffs(STmin, STmax, K, step=1):
payoffs = call_payoffs(STmin, STmax, K, step)
plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)
plt.ylabel("Payoff")
plt.xlabel("Precio de maduración")
plt.title('Payoff call, Precio strike={0}'.format(K))
plt.xlim(STmin, STmax)
plt.plot(payoffs.index, payoffs.Payoff.values);
plot_call_payoffs(10, 25, 15)
def put_payoff(ST, K):
return max(0, K-ST)
put_payoff(25, 30)
def put_payoffs(STmin, STmax, K, step=1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(put_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
put_payoffs(10,25,15)
def plot_put_payoffs(STmin, STmax, K, step=1):
payoffs = put_payoffs(STmin, STmax, K, step)
plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)
plt.ylabel("Payoff")
plt.xlabel("Precio de maduración")
plt.title('Payoff put, Precio strike={0}'.format(K))
plt.xlim(STmin, STmax)
plt.plot(payoffs.index, payoffs.Payoff.values);
plot_put_payoffs(10, 25, 15)
def call_pnl_buyer(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(call_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': payoffs-ct}, index=maturities)
df.index.name = 'Precio de maduración'
return df
call_pnl_buyer(12, 15, 10, 35)
def call_pnl_seller(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(call_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': ct-payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
call_pnl_seller(12, 15, 10, 35)
def call_pnl_combined(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(call_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnLcomprador': payoffs-ct, 'PnLvendedor': ct-payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
call_pnl_combined(12, 15, 10, 35)
def put_pnl_buyer(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(put_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': payoffs-ct}, index=maturities)
df.index.name = 'Precio de maduración'
return df
put_pnl_buyer(2, 15, 10, 30)
def put_pnl_seller(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(put_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': ct-payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
put_pnl_seller(2, 15, 10, 30)
def put_pnl_combined(ct, K, STmin, STmax, step = 1):
maturities = np.arange(STmin, STmax+step, step)
payoffs = np.vectorize(put_payoff)(maturities, K)
df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnLcomprador': payoffs-ct, 'PnLvendedor': ct-payoffs}, index=maturities)
df.index.name = 'Precio de maduración'
return df
put_pnl_combined(2, 15, 10, 30)
def plot_pnl(pnl_df, okind, who):
plt.ylim(pnl_df.Payoff.min() - 10, pnl_df.Payoff.max() + 10)
plt.ylabel("Ganancia/pérdida")
plt.xlabel("Precio de maduración")
plt.title('Ganancia y pérdida de una opción {0} para el {1}, Prima={2}, Strike={3}'.format(okind, who, pnl_df.Prima.iloc[0],
pnl_df.Strike.iloc[0]))
plt.ylim(pnl_df.PnL.min()-3, pnl_df.PnL.max() + 3)
plt.xlim(pnl_df.index[0], pnl_df.index[len(pnl_df.index)-1])
plt.plot(pnl_df.index, pnl_df.PnL)
plt.axhline(0, color='g');
plot_pnl(call_pnl_buyer(12, 15, 10, 35), "call", "comprador")
plot_pnl(call_pnl_seller(12, 15, 10, 35), "call", "vendedor")
plot_pnl(put_pnl_buyer(2, 15, 10, 30), "put", "comprador")
Explanation: 3. Payoff charts
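The remaining combination (the put seller) can be plotted with the same helper defined above, for example:
```python
plot_pnl(put_pnl_seller(2, 15, 10, 30), "put", "vendedor")
```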
End of explanation |
4,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mechpy Tutorials
a mechanical engineering toolbox
source code - https
Step1: Reading raw test data example 1
This example shows how to read multiple csv files and plot them together
Step2: Reading test data - example 2
This example shows how to read a different format of data and plot
Step3: another example of plotting data
Step5: Finding the "first" peak and delta-10 threshhold limit on force-displacement data of an aluminum coupon
http
Step6: Modulus | Python Code:
# setup
import numpy as np
import sympy as sp
import pandas as pd
import scipy
from pprint import pprint
sp.init_printing(use_latex='mathjax')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8) # (width, height)
plt.rcParams['font.size'] = 14
plt.rcParams['legend.fontsize'] = 16
from matplotlib import patches
get_ipython().magic('matplotlib') # seperate window
get_ipython().magic('matplotlib inline') # inline plotting
Explanation: Mechpy Tutorials
a mechanical engineering toolbox
source code - https://github.com/nagordon/mechpy
documentation - https://nagordon.github.io/mechpy/web/
Neal Gordon
2017-02-20
material testing analysis
This quick tutorial shows some simple scripts for analyzing material test data
Python Initialization with module imports
End of explanation
import glob as gb
from matplotlib.pyplot import *
%matplotlib inline
csvdir='./examples/'
e=[]
y=[]
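# NOTE: 'specimen' (the list of specimen/series names used in the loop and bar chart below)
# is assumed to be defined earlier in the original notebook; it is not created in this snippet.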
for s in specimen:
files = gb.glob(csvdir + '*.csv') # select all csv files
fig, ax = subplots()
title(s)
Pult = []
for f in files:
d1 = pd.read_csv(f, skiprows=1)
d1 = d1[1:] # remove first row of string
d1.columns = ['t', 'load', 'ext'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
Pult.append(np.max(d1.load))
plot(d1.ext, d1.load)
ylabel('Pult, lbs')
xlabel('extension, in')
e.append(np.std(Pult))
y.append(np.average(Pult) )
show()
# bar chart
barwidth = 0.35 # the width of the bars
fig, ax = subplots()
x = np.arange(len(specimen))
ax.bar(x, y, width=barwidth, yerr=e)
#ax.set_xticks(x)
xticks(x+barwidth/2, specimen, rotation='vertical')
title('Pult with sample average and stdev of n=3')
ylabel('Pult, lbs')
margins(0.05)
show()
Explanation: Reading raw test data example 1
This example shows how to read multiple csv files and plot them together
End of explanation
f = 'Aluminum_loops.txt'
d1 = pd.read_csv(f, skiprows=4,delimiter='\t')
d1 = d1[1:] # remove first row of string
d1.columns = ['time', 'load', 'cross','ext','strain','stress'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.ext, d1.load)
ylabel('stress')
xlabel('strain')
d1.head()
Explanation: Reading test data - example 2
This example shows how to read a different format of data and plot
End of explanation
f = 'al_MTS_test.csv'
d1 = pd.read_csv(f, skiprows=3,delimiter=',')
d1 = d1[1:] # remove first row of string
d1 = d1[['Time','Axial Force', 'Axial Fine Displacement', 'Axial Length']]
d1.columns = ['time', 'load', 'strain','cross'] # rename columns
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.strain, d1.load)
ylabel('stress')
xlabel('strain')
Explanation: another example of plotting data
End of explanation
%matplotlib inline
from scipy import signal
from pylab import plot, xlabel, ylabel, title, rcParams, figure
import numpy as np
pltwidth = 16
pltheight = 8
rcParams['figure.figsize'] = (pltwidth, pltheight)
csv = np.genfromtxt('./stress_strain1.csv', delimiter=",")
disp = csv[:,0]
force = csv[:,1]
print('number of data points = %i' % len(disp))
def moving_average(x, window):
Moving average of 'x' with window size 'window'.
y = np.empty(len(x)-window+1)
for i in range(len(y)):
y[i] = np.sum(x[i:i+window])/window
return y
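# Note: an equivalent vectorized form of the filter above would be
#   y = np.convolve(x, np.ones(window)/window, mode='valid')
# which also returns len(x) - window + 1 points.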
plt1 = plot(disp, force);
xlabel('displacement');
ylabel('force');
figure()
mywindow = 1000 # the larger the filter window, the more agressive the filtering
force2 = moving_average(force, mywindow)
x2 = range(len(force2))
plot(x2, force2);
title('Force smoothed with moving average filter');
# Find f' using diff to find the first intersection of the 0
# mvavgforce = mvavgforce[:len(mvavgforce)/2]
force2p = np.diff(force2)
x2p = range(len(force2p))
plot(x2p, force2p);
title('Slope of the smoothed curve')
i = np.argmax(force2p<0)
### or
# i = where(force2p<0)[0][0]
#### or
# for i, f in enumerate(force2p):
# if f < 0:
# break
plot(x2p, force2p, i,force2p[i],'o', markersize=15);
title('find the point at which the slope goes negative, indicating a switch in the slope direction');
plot(x2, force2, i,force2[i],'o',markersize=15);
title('using that index, plot on the force-displacement curve');
#Now, we need to find the next point from here that is 10 less.
delta = 1
i2 = np.argmax(force2[i]-delta > force2[i:])
# If that point does not exist on the immediate downward sloping path,
#then just choose the max point. In this case, 10 would exist very
#far away from the point and not be desireable
if i2 > i:
i2=0
plot(x2, force2, i,force2[i],'o', i2+i, force2[i2+i] ,'*', markersize=15);
disp
Explanation: Finding the "first" peak and delta-10 threshhold limit on force-displacement data of an aluminum coupon
http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/DataFiltering.ipynb
End of explanation
# remove nan
disp = disp[~np.isnan(force)]
force = force[~np.isnan(force)]
A = 0.1 # area
stress = force/A / 1e3
strain = disp/25.4 * 1e-3
plt.plot(strain, stress)
stress_range = np.array([5, 15])
PL = 0.0005
E_tan = stress/strain
assert(len(stress)==len(strain))
i = (stress > stress_range[0]) & (stress < stress_range[1])
stress_mod = stress[i]
strain_mod = strain[i]
fit = np.polyfit(strain_mod,stress_mod,1)
fit_fn = np.poly1d(fit)
fit_fn
PLi = np.argmax( (stress - (fit_fn(strain-PL)) < 0) )
PLi
# fit_fn is now a function which takes in x and returns an estimate for y
#plt.text(4,4,fit_fn)
plt.plot(strain ,stress, 'y')
plot(strain, fit_fn(strain-PL) , '--k', strain[PLi], stress[PLi],'o')
plt.xlim(0, np.max(strain))
plt.ylim(0, np.max(stress))
print('ultimate stress %f' % np.max(stress))
print('ultimate strain %f' % np.max(strain))
print('strain proportion limit %f' % strain[PLi])
print('stress proportion limit %f' % stress[PLi])
E_tan = E_tan[~np.isinf(E_tan)]
strainE = strain[1:]
plot(strainE, E_tan,'b', strainE[PLi], E_tan[PLi],'o')
plt.ylim([0,25000])
plt.title('Tangent Modulus')
Explanation: Modulus
End of explanation |
4,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
Compute the Fermi distribution at energy, mu and kT.
F=1/(np.exp((energy-mu)/kT)+1)
return F
raise NotImplementedError()
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
$$\begin{align}
F(\epsilon)=\frac1{e^{(\epsilon-\mu)/kT}+1}
\end{align}$$
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
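A couple of quick sanity checks on the implementation above: at $\epsilon = \mu$ the exponent vanishes, so $F = 1/2$ for any $kT$, and the function evaluates element-wise on arrays:
```python
print(fermidist(1.0, 1.0, 10.0))                       # -> 0.5 exactly at energy == mu
print(fermidist(np.linspace(0.0, 2.0, 5), 1.0, 0.25))  # vectorized evaluation, no loops
```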
End of explanation
def plot_fermidist(mu, kT):
energy=np.arange(10.0)
y=fermidist(energy,mu,kT)
plt.plot(energy, y)
plt.xlabel('Energy,$\epsilon(J)$')
plt.ylabel('Probability')
plt.title('Fermi-Dirac Distribution')
plt.ylim(0,1)
plt.xlim(0,10)
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist,mu=(0,5.0),kT=(.1,10.0))
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
4,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Propagate uncertainties with the errors add-on for CO2SYS-Matlab
James Orr<br>
<img align="left" width="50%" src="http
Step1: Specify the directory where you have put the Matlab routines CO2SYS.m, errors.m, and derivnum.m.
Step2: 2. Propagate uncertainties with new errors add-on for CO2SYS-Matlab
Step3: 2.1 Specify input variables and choices
Step4: 2.2 Propagate uncertainties neglecting uncertainties from equilibrium constants & total boron
Step5: 2.3 Propagate uncertainties sequentially accounting for uncertainties from constants & total boron
Default uncertainties in equilibrium constants, but assume no uncertainty in total boron $B_\text{T}$
Step6: Default uncertainties in equilibrium constants & default uncertainty in total $B_\text{T}$ (1%)
Step7: Same calculation, but with a simpler way to specify the defaults for epK and eBt
Step8: 2.4 Summarize effects of accouning for uncertainties in equil. constants & total boron
Fractional increase in propagated uncertainty due to accounting for uncertainties from equilibrium constants (default values)
Step9: Conclusion
Step10: Conclusion
Step11: Reorder CO2SYS output in new array having same order as output array from errors (above, section 2.3) for later division (below)
Step12: Compute percent error
Step13: Compute absolute error of pH from relative error in H
An absolute change in pH is equivalent to a relative change in H+. That is, it can be shown that for small changes in H+ (dH)
Step14: 2.6 Example of using errors.m routine with the pH-At pair
Step15: 2.7 Example of using errors.m routine with the pH-pCO2 pair | Python Code:
%load_ext oct2py.ipython
Explanation: Propagate uncertainties with the errors add-on for CO2SYS-Matlab
James Orr<br>
<img align="left" width="50%" src="http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png"><br><br>
LSCE/IPSL, CEA-CNRS-UVSQ, Gif-sur-Yvette, France
27 February 2018 <br><br>
updated: 29 June 2020
Abstract: This notebook shows you how to use the 'errors' add-on for CO2SYS-Matlab to propagate uncertainties. It uses CO2SYS-Matlab and the add-on routine errors.m (which itself calls another add-on routine derivnum.m) in octave, GNU's clone of Matlab. You can either inspect the HTML version of this file or execute its commands interactively in your browser. For the latter, you'll need to install jupyter notebook, octave, and oct2py, which includes the python-octave interface called octavemagic. Fortunately, that installation is very easy (see below).
Table of Contents:
1. Basics (install & load octave)
2. Propagate uncertainties: use `errors` add-on (with ALK-DIC input pair)
* total uncertainty
* uncertainty from the constants (standard input uncertainties)
* uncertainty from the boron-to-salinity ratio (standard input uncertainty)
* percent relative uncertainty
* convert uncertainty in computed [H+] to uncertainty in computed pH
1. Basics
Run interactively
If you are visualizing this after clicking on the link to this file on github, you are seeing the HTML version of a jupyter notebook. Alternatively, you may run cells interactively and modify them if you have jupyter notebook installed on your machine. To install that software, just download the anaconda open software installer for your computing platform (Windows, OS X, or Linux) from https://www.anaconda.com/ and then follow the easy install instructions at
https://docs.anaconda.com/anaconda/install/
Then just download this jupyter notebook file as well as the 3 routines in the src directory (CO2SYS.m, errors.m, and derivnum.m). Afterwards, you'll only need to install octave and oct2py using the 1-line command in the following section.
Install octavemagic
To install the octavemagic funtionality, we must install oct2py, with the following command at the Unix prompt:
conda install -c conda-forge octave oct2py
That command also installs octave. Then launch the notebook as usual with the following command:
jupyter notebook
A new window or tab should then appear in your browser, showing the files in the directory from where you launched the above command. Then just click on one of the .ipynb files, such as this one.
Once the notebook file appears in your browser, move to any of its cells with your mouse. Run a cell by clicking on it and hitting Ctrl-Return. Alternatively, type Shift-Return to run a cell and then move to the next one. More information on all the interactive commands is available in the Jupyter Notebook Quick Start Guide: http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html
At the top of the notebook, you'll see a number of Tabs (File, Edit, View, Insert, ...). Those tabs provide commands that will allow you to do whatever you like. Under the Help tab you'll find keyboard shortcuts for commands. Alternatively, a cheat sheet for short cuts to commands within jupyter notebook is available at https://zenodo.org/record/44973/files/ipynb-cheat-sheet.pdf . Or use the command palette after typing Ctrl-Shift-P.
Documentation for octavemagic
Details on using octavemagic are here: https://ipython.org/ipython-doc/2/config/extensions/octavemagic.html
Load octave magic function
Because octavemagic is now in conda's oct2py module, it is loaded with a slight modification to what is given on the above web page, i.e., now with the command below
End of explanation
%%octave
addpath ("~/Software/MATLAB/CO2SYS-MATLAB/src")
Explanation: Specify the directory where you have put the Matlab routines CO2SYS.m, errors.m, and derivnum.m.
End of explanation
%%octave
help errors
Explanation: 2. Propagate uncertainties with new errors add-on for CO2SYS-Matlab
End of explanation
%%octave
# Standard input for CO2SYS:
# --------------------------
# Input Variables:
PAR1 = 2300; % ALK
PAR2 = 2000; % DIC
PAR1TYPE = 1; % 1=ALK, 2=DIC, 3=pH, 4=pCO2, 5=fCO2
PAR2TYPE = 2; % Same 5 choices as PAR1TYPE
SAL = 35; % Salinity
TEMPIN = 18; % Temperature (input)
TEMPOUT = 25; % Temperature (output)
PRESIN = 0; % Pressure (input)
PRESOUT = PRESIN; % Pressure (output)
SI = 60; % Total dissolved inorganic silicon (Sit)
PO4 = 2; % Total dissoloved inorganic Phosphorus (Pt)
# Input Parameters:
pHSCALEIN = 2; % pH scale (1=total, 2=seawater, 3=NBS, 4=Free)
K1K2CONSTANTS = 15; % set for K1 & K2: (a) 10=Lueker et al. (2000); (b) 14=Millero (2010)
KSO4CONSTANTS = 1; % KSO4 of Dickson (1990a) & Total dissolved boron (Bt) from Uppstrom (1974)
# Input for CO2SYS:
# --------------------------
# Input variables for error propagation:
r = 0.0; % Correlation between **uncertainties** in PAR1 and PAR2 (-1 < r < 1)
ePAR1 = 2; % uncertainty in PAR1 (same units as PAR1)
ePAR2 = 2; % uncertainty in PAR2 (same units as PAR2)
eSAL = 0; % uncertainty in Salinity
eTEMP = 0; % uncertainty in Temperature
eSI = 4; % uncertainty in Sit
ePO4 = 0.1; % uncertainty in Pt
#Default uncertainties (pK units): epK0 , epK1, epK2, epKb, epKw, epKspA, epKspC
epK = [0.004, 0.015, 0.03, 0.01, 0.01, 0.02, 0.02];
#Default uncertainty for Total boron (0.01, i.e., a 1% relative uncertainty)
eBt = 0.01;
Explanation: 2.1 Specify input variables and choices
End of explanation
%%octave
% With no errors from Ks
epK = 0.0;
eBt = 0.0;
[e, ehead, enice] = errors (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4,...
ePAR1, ePAR2, eSAL, eTEMP, eSI, ePO4, epK, eBt, r, ...,
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
# Print results
# e
# ehead
# enice
# Print (nicely formatted):
printf("%s %s %s %s %s %s %s %s %s \n", ehead{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", e(1:9));
Explanation: 2.2 Propagate uncertainties neglecting uncertainties from equilibrium constants & total boron
End of explanation
%%octave
% With std errors from constants except for Bt
epK = [0.002, 0.0075, 0.015, 0.01, 0.01, 0.02, 0.02];
eBt = 0.0;
[ek, ekhead, eknice] = errors (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4, ...
ePAR1, ePAR2, eSAL, eTEMP, eSI, ePO4, epK, eBt, r, ...
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ekhead{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", ek(1:9));
Explanation: 2.3 Propagate uncertainties sequentially accounting for uncertainties from constants & total boron
Default uncertainties in equilibrium constants, but assume no uncertainty in total boron $B_\text{T}$
End of explanation
%%octave
% With std uncertainties from constants & Bt
epK = [0.002, 0.0075, 0.015, 0.01, 0.01, 0.02, 0.02];
eBt = 0.02;
[ekb, ekbhead] = errors (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4, ...
ePAR1, ePAR2, eSAL, eTEMP, eSI, ePO4, epK, eBt, r, ...
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ekbhead{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", ekb(1:9));
Explanation: Default uncertainties in equilibrium constants & default uncertainty in total $B_\text{T}$ (2%)
End of explanation
%%octave
% With std uncertainties from constants & Bt
epK = '';
eBt = '';
[ekb, ekbhead] = errors (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4, ...
ePAR1, ePAR2, eSAL, eTEMP, eSI, ePO4, epK, eBt, r, ...
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ekbhead{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", ekb(1:9));
Explanation: Same calculation, but with a simpler way to specify the defaults for epK and eBt
End of explanation
%%octave
printf("%s %s %s %s %s %s %s %s %s \n", ekbhead{1:9});
%printf("%f %f %f %f %f %f %f %f %f \n", e(1:9));
%printf("%f %f %f %f %f %f %f %f %f \n", ek(1:9));
printf("%f %f %f %f %f %f %f %f %f \n", ek(1:9) ./ e(1:9));
Explanation: 2.4 Summarize effects of accouning for uncertainties in equil. constants & total boron
Fractional increase in propagated uncertainty due to accounting for uncertainties from equilibrium constants (default values)
End of explanation
%%octave
printf("%s %s %s %s %s %s %s %s %s \n", ekbhead{1:9});
%printf("%f %f %f %f %f %f %f %f %f \n", ek(1:9));
%printf("%f %f %f %f %f %f %f %f %f \n", ekb(1:9));
printf("%f %f %f %f %f %f %f %f %f \n", ekb(1:9) ./ ek(1:9));
Explanation: Conclusion: Accounting for uncertainties from the equilibrium constants increases propagated uncertainties by 1.3 to 5.4 times for the At-Ct pair with default uncertainties.
Fractional increase in propagated uncertainty due to accounting for uncertainty in total boron (default value)
End of explanation
%%octave
% With std uncertainties from Ks
[d, dhead, dnice] = CO2SYS (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4, ...
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
Explanation: Conclusion: Accounting for uncertainty in total boron increases propagated absolute uncertainties by about 3% for most computed variables (except 9% for $[\mathrm{CO}_3^{2-}]$) when using the default uncertainty (eBt = 0.02), but by less than 1% when using eBt = 0.01, i.e., with the At-Ct pair.
2.5 Compute percent relative uncertainties
Compute CO2SYS variables (the reference)
End of explanation
%%octave
ALK = d(:,1); %01 - TAlk (umol/kgSW)
DIC = d(:,2); %02 - TCO2 (umol/kgSW)
pH = d(:,3); %03 - pHin ()
pCO2 = d(:,4); %04 - pCO2 input (uatm)
fCO2 = d(:,5); %05 - fCO2 input (uatm)
HCO3 = d(:,6); %06 - HCO3 input (umol/kgSW)
CO3 = d(:,7); %07 - CO3 input (umol/kgSW)
CO2 = d(:,8); %08 - CO2 input (umol/kgSW)
Hfree = d(:,13); %13 - Hfree input (umol/kgSW)
OmegaCa = d(:,15); %15 - OmegaCa input ()
OmegaAr = d(:,16); %16 - OmegaAr input ()
xCO2 = d(:,17); %17 - xCO2 input (ppm)
H = 10**(-pH) * 1e9; # Convert pH to H+ and then from mol/kg to nmol/kg
dar = [H, pCO2, fCO2, HCO3, CO3, CO2, OmegaCa, OmegaAr, xCO2];
%dar
Explanation: Reorder CO2SYS output in new array having same order as output array from errors (above, section 2.3) for later division (below)
End of explanation
%%octave
perr = 100* ekb(1:9) ./ dar;
# Show uncertainties in percent of base value computed with CO2SYS
printf("%s %s %s %s %s %s %s %s %s \n", ehead{1:9});
printf("%4.2f %6.2f %7.2f %8.2f %7.2f %7.2f %7.2f %7.2f %7.2f \n", perr(1:9));
Explanation: Compute percent error
End of explanation
%%octave
# We drop the minus sign below because uncertainties are positive by definition (1 sigma)
# Note that "(0.01 * perr)" is the fractional error in hydrogen ion concentration (i.e., dH/H)
dpH = (0.01 * perr(1))/log(10);
dpH
Explanation: Compute absolute error of pH from relative error in H
An absolute change in pH is equivalent to a relative change in H+. That is, it can be shown that for small changes in H+ (dH):
\begin{equation}
d\text{pH} = -\frac{1}{\text{ln}(10)} \frac{d\text{H}}{\text{H}}
\end{equation}
To get to this result,
1) start with the basic definition: $\text{pH} = - \text{log}_{10} \, [\text{H}^+]$,
2) convert the logarithm base 10 to a natural log with $\text{log}_{10}(x) = \text{ln}(x) / \text{ln}(10)$,
3) take the derivative of each side, and
4) plug in the definition of the derivative of a natural log, i.e., $d \text{ln}(x) = dx/x$.
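A quick numerical check of this linearisation in plain Python (the values below are illustrative, not taken from the notebook):
```python
import numpy as np

H = 8e-9                                   # [H+] in mol/kg, an arbitrary value
dH = 0.01 * H                              # a 1 % relative increase in [H+]
exact = np.log10(H) - np.log10(H + dH)     # exact change in pH
approx = -dH / (H * np.log(10))            # linearised formula above
print(exact, approx)                       # both are about -0.0043
```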
Do the actual calculation
End of explanation
%%octave
# Standard input for CO2SYS:
# --------------------------
# Input Variables:
PAR1 = 2300; % ALK
PAR2 = 8.1; % pH
PAR1TYPE = 1; % 1=ALK, 2=DIC, 3=pH, 4=pCO2, 5=fCO2
PAR2TYPE = 3; % Same 5 choices as PAR1TYPE
SAL = 35; % Salinity
TEMPIN = 18; % Temperature (input)
TEMPOUT = 18; % Temperature (output)
PRESIN = 0; % Pressure (input)
PRESOUT = PRESIN; % Pressure (output)
SI = 60; % Total dissolved inorganic silicon (Sit)
PO4 = 2; % Total dissolved inorganic phosphorus (Pt)
# Input Parameters:
pHSCALEIN = 1; %% pH scale (1=total, 2=seawater, 3=NBS, 4=Free)
K1K2CONSTANTS = 10; %% set for K1 & K2: (a) 10=Lueker et al. (2000); (b) 14=Millero (2010)
KSO4CONSTANTS = 1; %% KSO4 of Dickson (1990a) & Total dissolved boron (Bt) from Uppstrom (1974)
# Input for CO2SYS:
# --------------------------
# Input variables for error propagation:
r = 0.0; % Correlation between **uncertainties** in PAR1 and PAR2 (-1 < r < 1)
ePAR1 = 2; % uncertainty in PAR1 (same units as PAR1)
ePAR2 = 0.01; % uncertainty in PAR2 (same units as PAR2)
eSAL = 0; % uncertainty in Salinity
eTEMP = 0; % uncertainty in Temperature
eSI = 4; % uncertainty in Sit
ePO4 = 0.1; % uncertainty in Pt
#Default uncertainties: epK0 , epK1, epK2, epKb, epKw, epKspA, epKspC
epK = [0.002, 0.0075, 0.015, 0.01, 0.01, 0.02, 0.02];
#Default uncertainty for Total Boron: 0.02 (i.e., a 2% relative error)
eBt = 0.02;
%%octave
% With std uncertainties from constants & Bt
epK = '';
eBt = '';
[e, ehead, eunits] = errors (PAR1, PAR2, PAR1TYPE, PAR2TYPE, SAL, TEMPIN, TEMPOUT, PRESIN, PRESOUT, SI, PO4, ...
ePAR1, ePAR2, eSAL, eTEMP, eSI, ePO4, epK, eBt, r, ...
pHSCALEIN, K1K2CONSTANTS, KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ehead{1:9});
printf("%s %s %s %s %s %s %s %s %s \n", eunits{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", e(1:9));
Explanation: 2.6 Example of using errors.m routine with the pH-At pair
End of explanation
%%octave
PAR1 = 7.91; % pH_T
PAR2 = 500; % pCO2
PAR1TYPE = 3;
PAR2TYPE = 4;
SAL = 35;
TEMPIN = 18;
TEMPOUT = 18;
PRESIN = 0;
PRESOUT = PRESIN;
SI = 0;
PO4 = 0;
pHSCALEIN = 1; % total scale
K1K2CONSTANTS = 10; % Lueker 2000
KSO4CONSTANTS = 1; % KSO4 of Dickson & TB of Uppstrom 1974
ePAR1 = 0.02; % pH
ePAR2 = 5; % pCO2
eSAL = 0;
eTEMP = 0;
eSI = 0;
ePO4 = 0;
% Set input uncertainties for constants and Bt/S to zero
epK=0;
eBt=0;
r=0;
[e, ehead, eunits] = errors (PAR1,PAR2,PAR1TYPE,PAR2TYPE,SAL,TEMPIN,TEMPOUT,PRESIN,PRESOUT,SI,PO4,...
ePAR1,ePAR2,eSAL,eTEMP,eSI,ePO4,epK,eBt,r,pHSCALEIN,K1K2CONSTANTS,KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ehead{1:9});
printf("%s %s %s %s %s %s %s %s %s \n", eunits{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", e(1:9));
%%octave
# Use default uncertainties for constants instead of setting them to zero (case in previous cell)
epK = '';
eBt = '';
[e, ehead, eunits] = errors (PAR1,PAR2,PAR1TYPE,PAR2TYPE,SAL,TEMPIN,TEMPOUT,PRESIN,PRESOUT,SI,PO4,...
ePAR1,ePAR2,eSAL,eTEMP,eSI,ePO4,epK,eBt,r,pHSCALEIN,K1K2CONSTANTS,KSO4CONSTANTS);
printf("%s %s %s %s %s %s %s %s %s \n", ehead{1:9});
printf("%s %s %s %s %s %s %s %s %s \n", eunits{1:9});
printf("%f %f %f %f %f %f %f %f %f \n", e(1:9));
%%octave
ehead, eunits
Explanation: 2.7 Example of using errors.m routine with the pH-pCO2 pair
End of explanation |
4,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: pandas DataFrame を読み込む
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 心臓疾患データセットを含む CSV ファイルをダウンロードします。
Step3: pandas を使って CSV を読み取ります。
Step4: データは以下のように表示されます。
Step5: target 列に含まれるラベルを予測するモデルを作成します。
Step6: 配列としての DataFrame
データのデータ型が統一されている場合、または、dtype の場合、NumPy 配列を使用できる場合であればどこでも pandas DataFrame を使用できます。これは、pandas.DataFrame クラスが __array__ プロトコルをサポートしているためであり、TensorFlow の tf.convert_to_tensor 関数がプロトコルをサポートするオブジェクトを受け入れるます。
データセットから数値特徴量を取得します (ここでは、カテゴリカル特徴量をスキップします)。
Step7: DataFrame は、DataFrame.values プロパティまたは numpy.array(df) を使用して NumPy 配列に変換できます。テンソルに変換するには、tf.convert_to_tensor を使用します。
Step8: 一般に、オブジェクトを tf.convert_to_tensor でテンソルに変換すれば、tf.Tensor を渡せる場合は、同様に渡すことができます。
Model.fit メソッド
単一のテンソルとして解釈される DataFrame は、Model.fit メソッドの引数として直接使用できます。
以下は、データセットの数値特徴に関するモデルのトレーニングの例です。
最初のステップは、入力範囲を正規化することです。そのために tf.keras.layers.Normalization レイヤーを使用します。
実行する前にレイヤーの平均と標準偏差を設定するには、必ず Normalization.adapt メソッドを呼び出してください。
Step9: DataFrame の最初の 3 行でレイヤーを呼び出して、このレイヤーからの出力のサンプルを視覚化します。
Step10: 単純なモデルの最初のレイヤーとして正規化レイヤーを使用します。
Step11: DataFrame を x 引数として Model.fit に渡すと、Keras は DataFrame をNumPy 配列と同じように扱います。
Step12: tf.data を適用する
tf.data 変換を均一な dtype の DataFrame に適用する場合、Dataset.from_tensor_slices メソッドは、DataFrame の行を反復処理するデータセットを作成します。各行は、最初は値のベクトルです。モデルをトレーニングするには、(inputs, labels) のペアが必要なので、(features, labels) と Dataset.from_tensor_slices を渡し、必要なスライスのペアを取得します。
Step13: ディレクトリとしての DataFrame
型が異なるデータを処理する場合、DataFrame を単一の配列であるかのように扱うことができなくなります。TensorFlow テンソルでは、すべての要素が同じ dtype である必要があります。
したがって、この場合、各列が均一な dtype を持つ列のディクショナリとして扱う必要があります。DataFrame は配列のディクショナリによく似ているため、通常、必要なのは DataFrame を Python dict にキャストするだけです。多くの重要な TensorFlow API は、配列の (ネストされた) ディクショナリを入力としてサポートしています。
tf.data 入力パイプラインはこれを非常にうまく処理します。すべての tf.data 演算は、ディクショナリとタプルを自動的に処理するので、DataFrame からディクショナリのサンプルのデータセットを作成するには、Dataset.from_tensor_slices でスライスする前に、それをディクショナリにキャストするだけです。
Step14: 以下はデータセットの最初の 3 つのサンプルです。
Step15: Keras のディクショナリ
通常、Keras モデルとレイヤーは単一の入力テンソルを期待しますが、これらのクラスはディクショナリ、タプル、テンソルのネストされた構造を受け入れて返すことができます。これらの構造は「ネスト」と呼ばれます (詳細については、tf.nest モジュールを参照してください)。
ディクショナリを入力として受け入れる Keras モデルを作成するには、2 つの同等の方法があります。
1. モデルサブクラススタイル
tf.keras.Model (または tf.keras.Layer) のサブクラスを記述します。入力を直接処理し、出力を作成します。
Step16: このモデルは、トレーニング用の列のディクショナリまたはディクショナリ要素のデータセットのいずれかを受け入れることができます。
Step17: 最初の 3 つのサンプルの予測は次のとおりです。
Step18: 2. Keras 関数型スタイル
Step19: モデルサブクラスと同じ方法で関数モデルをトレーニングできます。
Step20: 完全なサンプル
異なる型の <code>DataFrame</code> を Keras に渡す場合、各列に対して固有の前処理が必要になる場合があります。この前処理は DataFrame で直接行うことができますが、モデルが正しく機能するためには、入力を常に同じ方法で前処理する必要があります。したがって、最善のアプローチは、前処理をモデルに組み込むことです。<a>Keras 前処理レイヤー</a>は多くの一般的なタスクをカバーしています。
前処理ヘッドを構築する
このデータセットでは、生データの「整数」特徴量の一部は実際にはカテゴリインデックスです。これらのインデックスは実際には順序付けられた数値ではありません (詳細については、<a href="https
Step21: 次に、各入力に適切な前処理を適用し、結果を連結する前処理モデルを構築します。
このセクションでは、Keras Functional API を使用して前処理を実装します。まず、データフレームの列ごとに 1 つの tf.keras.Input を作成します。
Step22: 入力ごとに、Keras レイヤーと TensorFlow 演算を使用していくつかの変換を適用します。各特徴量は、スカラーのバッチとして開始されます (shape=(batch,))。それぞれの出力は、tf.float32 ベクトルのバッチ (shape=(batch, n)) である必要があります。最後のステップでは、これらすべてのベクトルを連結します。
バイナリ入力
バイナリ入力は前処理を必要としないため、ベクトル軸を追加し、float32 にキャストして、前処理された入力のリストに追加します。
Step23: 数値入力
前のセクションと同様に、これらの数値入力は、使用する前に tf.keras.layers.Normalization レイヤーを介して実行する必要があります。違いは、ここでは dict として入力されることです。以下のコードは、DataFrame から数値の特徴量を収集し、それらをスタックし、Normalization.adapt メソッドに渡します。
Step24: 以下のコードは、数値特徴量をスタックし、それらを正規化レイヤーで実行します。
Step25: カテゴリカル特徴量
カテゴリカル特徴量を使用するには、最初にそれらをバイナリベクトルまたは埋め込みのいずれかにエンコードする必要があります。これらの特徴量には少数のカテゴリしか含まれていないため、tf.keras.layers.StringLookup および tf.keras.layers.IntegerLookup レイヤーの両方でサポートされている output_mode='one_hot' オプションを使用して、入力をワンホットベクトルに直接変換します。
次に、これらのレイヤーがどのように機能するかの例を示します。
Step26: 各入力の語彙を決定するには、その語彙をワンホットベクトルに変換するレイヤーを作成します。
Step27: 前処理ヘッドを組み立てる
この時点で、preprocessed はすべての前処理結果の Python リストであり、各結果は (batch_size, depth) の形状をしています。
Step28: 前処理されたすべての特徴量を depth 軸に沿って連結し、各ディクショナリのサンプルを単一のベクトルに変換します。ベクトルには、カテゴリカル特徴量、数値特徴量、およびカテゴリワンホット特徴量が含まれています。
Step29: 次に、その計算からモデルを作成して、再利用できるようにします。
Step30: プリプロセッサをテストするには、<a href="https
Step31: モデルを作成して訓練する
次に、モデルの本体を作成します。前の例と同じ構成を使用します。分類には、いくつかの Dense 正規化線形レイヤーと Dense(1) 出力レイヤーを使用します。
Step32: 次に、Keras 関数型 API を使用して 2 つの部分を組み合わせます。
Step33: このモデルは、入力のディクショナリを想定しています。データを渡す最も簡単な方法は、DataFrame を dict に変換し、その dict を x 引数として Model.fit に渡すことです。
Step34: tf.data を使用しても同様に機能します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import pandas as pd
import tensorflow as tf
SHUFFLE_BUFFER = 500
BATCH_SIZE = 2
Explanation: pandas DataFrame を読み込む
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/pandas_dataframe"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
このチュートリアルでは、<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html" class="external">pandas DataFrames</a> を TensorFlow に読み込む方法の例を示します。
このチュートリアルでは、UCI Machine Learning Repository が提供する小さな<a href="https://archive.ics.uci.edu/ml/datasets/heart+Disease" class="external">心臓疾患データセット</a>を使用します。CSV 形式で数百の行を含むデータセットです。各行は患者に関する情報で、列には属性が記述されています。この情報を使って、患者に心臓疾患があるかどうかを予測します。これは二項分類のタスクです。
pandas を使ってデータを読み取る
End of explanation
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
Explanation: 心臓疾患データセットを含む CSV ファイルをダウンロードします。
End of explanation
df = pd.read_csv(csv_file)
Explanation: pandas を使って CSV を読み取ります。
End of explanation
df.head()
df.dtypes
Explanation: データは以下のように表示されます。
End of explanation
target = df.pop('target')
Explanation: target 列に含まれるラベルを予測するモデルを作成します。
End of explanation
numeric_feature_names = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']
numeric_features = df[numeric_feature_names]
numeric_features.head()
Explanation: 配列としての DataFrame
データのデータ型が統一されている場合、または、dtype の場合、NumPy 配列を使用できる場合であればどこでも pandas DataFrame を使用できます。これは、pandas.DataFrame クラスが __array__ プロトコルをサポートしているためであり、TensorFlow の tf.convert_to_tensor 関数がプロトコルをサポートするオブジェクトを受け入れるためです。
データセットから数値特徴量を取得します (ここでは、カテゴリカル特徴量をスキップします)。
End of explanation
tf.convert_to_tensor(numeric_features)
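As a small added illustration (not in the original notebook), the same values can also be pulled out as a plain NumPy array, which is what the __array__ protocol relies on:
import numpy as np
numeric_features.values      # NumPy array via the .values property
np.array(numeric_features)   # equivalent, via numpy.array(df)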
Explanation: DataFrame は、DataFrame.values プロパティまたは numpy.array(df) を使用して NumPy 配列に変換できます。テンソルに変換するには、tf.convert_to_tensor を使用します。
End of explanation
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(numeric_features)
Explanation: 一般に、オブジェクトを tf.convert_to_tensor でテンソルに変換すれば、tf.Tensor を渡せる場合は、同様に渡すことができます。
Model.fit メソッド
単一のテンソルとして解釈される DataFrame は、Model.fit メソッドの引数として直接使用できます。
以下は、データセットの数値特徴に関するモデルのトレーニングの例です。
最初のステップは、入力範囲を正規化することです。そのために tf.keras.layers.Normalization レイヤーを使用します。
実行する前にレイヤーの平均と標準偏差を設定するには、必ず Normalization.adapt メソッドを呼び出してください。
End of explanation
normalizer(numeric_features.iloc[:3])
Explanation: DataFrame の最初の 3 行でレイヤーを呼び出して、このレイヤーからの出力のサンプルを視覚化します。
End of explanation
def get_basic_model():
model = tf.keras.Sequential([
normalizer,
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
Explanation: 単純なモデルの最初のレイヤーとして正規化レイヤーを使用します。
End of explanation
model = get_basic_model()
model.fit(numeric_features, target, epochs=15, batch_size=BATCH_SIZE)
Explanation: DataFrame を x 引数として Model.fit に渡すと、Keras は DataFrame をNumPy 配列と同じように扱います。
End of explanation
numeric_dataset = tf.data.Dataset.from_tensor_slices((numeric_features, target))
for row in numeric_dataset.take(3):
print(row)
numeric_batches = numeric_dataset.shuffle(1000).batch(BATCH_SIZE)
model = get_basic_model()
model.fit(numeric_batches, epochs=15)
Explanation: tf.data を適用する
tf.data 変換を均一な dtype の DataFrame に適用する場合、Dataset.from_tensor_slices メソッドは、DataFrame の行を反復処理するデータセットを作成します。各行は、最初は値のベクトルです。モデルをトレーニングするには、(inputs, labels) のペアが必要なので、(features, labels) と Dataset.from_tensor_slices を渡し、必要なスライスのペアを取得します。
End of explanation
numeric_dict_ds = tf.data.Dataset.from_tensor_slices((dict(numeric_features), target))
Explanation: ディクショナリとしての DataFrame
型が異なるデータを処理する場合、DataFrame を単一の配列であるかのように扱うことができなくなります。TensorFlow テンソルでは、すべての要素が同じ dtype である必要があります。
したがって、この場合、各列が均一な dtype を持つ列のディクショナリとして扱う必要があります。DataFrame は配列のディクショナリによく似ているため、通常、必要なのは DataFrame を Python dict にキャストするだけです。多くの重要な TensorFlow API は、配列の (ネストされた) ディクショナリを入力としてサポートしています。
tf.data 入力パイプラインはこれを非常にうまく処理します。すべての tf.data 演算は、ディクショナリとタプルを自動的に処理するので、DataFrame からディクショナリのサンプルのデータセットを作成するには、Dataset.from_tensor_slices でスライスする前に、それをディクショナリにキャストするだけです。
End of explanation
for row in numeric_dict_ds.take(3):
print(row)
Explanation: 以下はデータセットの最初の 3 つのサンプルです。
End of explanation
def stack_dict(inputs, fun=tf.stack):
values = []
for key in sorted(inputs.keys()):
values.append(tf.cast(inputs[key], tf.float32))
return fun(values, axis=-1)
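A quick added illustration of what stack_dict does (hypothetical values): it sorts the keys, casts each (batch,)-shaped column to float32, and stacks them into a single (batch, n_columns) tensor.
example = stack_dict({'b': tf.constant([3., 4.]), 'a': tf.constant([1., 2.])})
print(example)  # shape (2, 2); column 0 comes from 'a', column 1 from 'b' (sorted key order)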
#@title
class MyModel(tf.keras.Model):
def __init__(self):
# Create all the internal layers in init.
super().__init__(self)
self.normalizer = tf.keras.layers.Normalization(axis=-1)
self.seq = tf.keras.Sequential([
self.normalizer,
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
def adapt(self, inputs):
# Stack the inputs and `adapt` the normalization layer.
inputs = stack_dict(inputs)
self.normalizer.adapt(inputs)
def call(self, inputs):
# Stack the inputs
inputs = stack_dict(inputs)
# Run them through all the layers.
result = self.seq(inputs)
return result
model = MyModel()
model.adapt(dict(numeric_features))
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'],
run_eagerly=True)
Explanation: Keras のディクショナリ
通常、Keras モデルとレイヤーは単一の入力テンソルを期待しますが、これらのクラスはディクショナリ、タプル、テンソルのネストされた構造を受け入れて返すことができます。これらの構造は「ネスト」と呼ばれます (詳細については、tf.nest モジュールを参照してください)。
ディクショナリを入力として受け入れる Keras モデルを作成するには、2 つの同等の方法があります。
1. モデルサブクラススタイル
tf.keras.Model (または tf.keras.Layer) のサブクラスを記述します。入力を直接処理し、出力を作成します。
End of explanation
model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)
numeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)
model.fit(numeric_dict_batches, epochs=5)
Explanation: このモデルは、トレーニング用の列のディクショナリまたはディクショナリ要素のデータセットのいずれかを受け入れることができます。
End of explanation
model.predict(dict(numeric_features.iloc[:3]))
Explanation: 最初の 3 つのサンプルの予測は次のとおりです。
End of explanation
inputs = {}
for name, column in numeric_features.items():
inputs[name] = tf.keras.Input(
shape=(1,), name=name, dtype=tf.float32)
inputs
x = stack_dict(inputs, fun=tf.concat)
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(stack_dict(dict(numeric_features)))
x = normalizer(x)
x = tf.keras.layers.Dense(10, activation='relu')(x)
x = tf.keras.layers.Dense(10, activation='relu')(x)
x = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, x)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'],
run_eagerly=True)
tf.keras.utils.plot_model(model, rankdir="LR", show_shapes=True)
Explanation: 2. Keras 関数型スタイル
End of explanation
model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)
numeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)
model.fit(numeric_dict_batches, epochs=5)
Explanation: モデルサブクラスと同じ方法で関数モデルをトレーニングできます。
End of explanation
binary_feature_names = ['sex', 'fbs', 'exang']
categorical_feature_names = ['cp', 'restecg', 'slope', 'thal', 'ca']
Explanation: 完全なサンプル
異なる型の <code>DataFrame</code> を Keras に渡す場合、各列に対して固有の前処理が必要になる場合があります。この前処理は DataFrame で直接行うことができますが、モデルが正しく機能するためには、入力を常に同じ方法で前処理する必要があります。したがって、最善のアプローチは、前処理をモデルに組み込むことです。<a>Keras 前処理レイヤー</a>は多くの一般的なタスクをカバーしています。
前処理ヘッドを構築する
このデータセットでは、生データの「整数」特徴量の一部は実際にはカテゴリインデックスです。これらのインデックスは実際には順序付けられた数値ではありません (詳細については、<a href="https://archive.ics.uci.edu/ml/datasets/heart+Disease" class="external">データセットの説明</a>を参照してください)。これらは順序付けされていないため、モデルに直接フィードするのは不適切です。モデルはそれらを順序付けされたものとして解釈するからです。これらの入力を使用するには、ワンホットベクトルまたは埋め込みベクトルとしてエンコードする必要があります。文字列カテゴリカル特徴量でも同じです。
注意: 同一の前処理を必要とする多くの特徴量がある場合は、前処理を適用する前にそれらを連結すると効率的です。
一方、バイナリ特徴量は、通常、エンコードまたは正規化する必要はありません。
各グループに分類される特徴量のリストを作成することから始めます。
End of explanation
inputs = {}
for name, column in df.items():
if type(column[0]) == str:
dtype = tf.string
elif (name in categorical_feature_names or
name in binary_feature_names):
dtype = tf.int64
else:
dtype = tf.float32
inputs[name] = tf.keras.Input(shape=(), name=name, dtype=dtype)
inputs
Explanation: 次に、各入力に適切な前処理を適用し、結果を連結する前処理モデルを構築します。
このセクションでは、Keras Functional API を使用して前処理を実装します。まず、データフレームの列ごとに 1 つの tf.keras.Input を作成します。
End of explanation
preprocessed = []
for name in binary_feature_names:
inp = inputs[name]
inp = inp[:, tf.newaxis]
float_value = tf.cast(inp, tf.float32)
preprocessed.append(float_value)
preprocessed
Explanation: 入力ごとに、Keras レイヤーと TensorFlow 演算を使用していくつかの変換を適用します。各特徴量は、スカラーのバッチとして開始されます (shape=(batch,))。それぞれの出力は、tf.float32 ベクトルのバッチ (shape=(batch, n)) である必要があります。最後のステップでは、これらすべてのベクトルを連結します。
バイナリ入力
バイナリ入力は前処理を必要としないため、ベクトル軸を追加し、float32 にキャストして、前処理された入力のリストに追加します。
End of explanation
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(stack_dict(dict(numeric_features)))
Explanation: 数値入力
前のセクションと同様に、これらの数値入力は、使用する前に tf.keras.layers.Normalization レイヤーを介して実行する必要があります。違いは、ここでは dict として入力されることです。以下のコードは、DataFrame から数値の特徴量を収集し、それらをスタックし、Normalization.adapt メソッドに渡します。
End of explanation
numeric_inputs = {}
for name in numeric_feature_names:
numeric_inputs[name]=inputs[name]
numeric_inputs = stack_dict(numeric_inputs)
numeric_normalized = normalizer(numeric_inputs)
preprocessed.append(numeric_normalized)
preprocessed
Explanation: 以下のコードは、数値特徴量をスタックし、それらを正規化レイヤーで実行します。
End of explanation
vocab = ['a','b','c']
lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')
lookup(['c','a','a','b','zzz'])
vocab = [1,4,7,99]
lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')
lookup([-1,4,1])
Explanation: カテゴリカル特徴量
カテゴリカル特徴量を使用するには、最初にそれらをバイナリベクトルまたは埋め込みのいずれかにエンコードする必要があります。これらの特徴量には少数のカテゴリしか含まれていないため、tf.keras.layers.StringLookup および tf.keras.layers.IntegerLookup レイヤーの両方でサポートされている output_mode='one_hot' オプションを使用して、入力をワンホットベクトルに直接変換します。
次に、これらのレイヤーがどのように機能するかの例を示します。
End of explanation
for name in categorical_feature_names:
vocab = sorted(set(df[name]))
print(f'name: {name}')
print(f'vocab: {vocab}\n')
if type(vocab[0]) is str:
lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')
else:
lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')
x = inputs[name][:, tf.newaxis]
x = lookup(x)
preprocessed.append(x)
Explanation: 各入力の語彙を決定するには、その語彙をワンホットベクトルに変換するレイヤーを作成します。
End of explanation
preprocessed
Explanation: 前処理ヘッドを組み立てる
この時点で、preprocessed はすべての前処理結果の Python リストであり、各結果は (batch_size, depth) の形状をしています。
End of explanation
preprocesssed_result = tf.concat(preprocessed, axis=-1)
preprocesssed_result
Explanation: 前処理されたすべての特徴量を depth 軸に沿って連結し、各ディクショナリのサンプルを単一のベクトルに変換します。ベクトルには、カテゴリカル特徴量、数値特徴量、およびカテゴリワンホット特徴量が含まれています。
End of explanation
preprocessor = tf.keras.Model(inputs, preprocesssed_result)
tf.keras.utils.plot_model(preprocessor, rankdir="LR", show_shapes=True)
Explanation: 次に、その計算からモデルを作成して、再利用できるようにします。
End of explanation
preprocessor(dict(df.iloc[:1]))
Explanation: プリプロセッサをテストするには、<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" class="external">DataFrame.iloc</a> アクセサを使用して、DataFrame から最初のサンプルをスライスします。次に、それをディクショナリに変換し、ディクショナリをプリプロセッサに渡します。結果は、バイナリ特徴量、正規化された数値特徴量、およびワンホットカテゴリカル特徴量をこの順序で含む単一のベクトルになります。
End of explanation
body = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
Explanation: モデルを作成して訓練する
次に、モデルの本体を作成します。前の例と同じ構成を使用します。分類には、いくつかの Dense 正規化線形レイヤーと Dense(1) 出力レイヤーを使用します。
End of explanation
inputs
x = preprocessor(inputs)
x
result = body(x)
result
model = tf.keras.Model(inputs, result)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: 次に、Keras 関数型 API を使用して 2 つの部分を組み合わせます。
End of explanation
history = model.fit(dict(df), target, epochs=5, batch_size=BATCH_SIZE)
Explanation: このモデルは、入力のディクショナリを想定しています。データを渡す最も簡単な方法は、DataFrame を dict に変換し、その dict を x 引数として Model.fit に渡すことです。
End of explanation
ds = tf.data.Dataset.from_tensor_slices((
dict(df),
target
))
ds = ds.batch(BATCH_SIZE)
import pprint
for x, y in ds.take(1):
pprint.pprint(x)
print()
print(y)
history = model.fit(ds, epochs=5)
Explanation: tf.data を使用しても同様に機能します。
End of explanation |
4,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
人人汽车推荐系统调研综述
这里是一个推荐引擎,使用经典数据集movielens,可以将movies数据替换为人人的车型数据,rating数据替换为从日志系统中收集的所有用户对车的点击次数,浏览时间(权重)。这样可以实现C端对车型的推荐
架构:
①日志系统:搜集用户行为提供离线数据
②推荐引擎:A
Step1: In _item-item collaborative filtering_, the similarity between two items is measured by looking at all the users who have rated both items.
<img src="img/item-item.png"/>
For _user-item collaborative filtering_, the similarity between two users is measured by looking at all the items that have been rated by both users.
<img src="img/user-item.png"/>
The core algorithm uses Pearson's r to measure distance.
The Pearson correlation coefficient between two variables is defined as the covariance of the two variables divided by the product of their standard deviations.
The formula above defines the population correlation coefficient, usually denoted by the lowercase Greek letter ρ (rho). Estimating the covariance and standard deviations from a sample gives the sample correlation coefficient (the sample Pearson coefficient), usually denoted by the lowercase letter r:
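(The two formulas referred to here did not survive the export; restated, they are the population coefficient and its sample estimate:)
\begin{equation}
\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y}, \qquad
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
\end{equation}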
Step2: A second algorithm engine could use cosine similarity, which is also widely used in recommender systems: the ratings are treated as vectors in an n-dimensional space and similarity is computed from the angle between those vectors. sklearn's pairwise_distances function can be used to compute cosine similarity. Note that the output ranges from 0 to 1 because all the ratings are positive. | Python Code:
import numpy as np
import pandas as pd
import os
# Load the CSV data with pandas
movies = pd.read_csv(os.path.expanduser("~/ml-latest-small/movies.csv"))
ratings = pd.read_csv(os.path.expanduser("~/ml-latest-small/ratings.csv"))
# Drop the column we don't need
ratings.drop(['timestamp'],axis=1,inplace=True)
movies.head()
ratings.head()
# Replace movieId with the movie title
def replace_name(x):
return movies[movies["movieId"]==x].title.values[0]
ratings.movieId = ratings.movieId.map(replace_name)
ratings.head()
# Build a pivot table
M = ratings.pivot_table(index=['userId'],columns=['movieId'],values='rating')
# Current dimensions
M.shape
# M is a very sparse pivot table
M
Explanation: A survey of recommender systems for Renren Auto (人人汽车)
This is a recommendation engine built on the classic MovieLens dataset. The movies data can be replaced with Renren's car-model data, and the ratings data with the click counts and browsing time (as weights) collected for every user from the logging system; this makes it possible to recommend car models to consumer-side (C-end) users.
Architecture:
① Logging system: collects user behaviour and provides offline data.
② Recommendation engine: A: fetch the user feature vector (browsing history, favourites, purchases, dwell time) from the database or cache. B: turn the user feature vector into an initial list of recommended items via the feature-item matrix and various recommendation algorithms. C: filter and rank the initial list (popularity, freshness, already purchased).
③ UI layer: shows the title, thumbnail, recommendation reason (users are far more willing to click when a reason is given) and rating, and adds/removes recommendation engines or adjusts their weights based on user clicks and ratings.
Below is a first version of the recommendation algorithm engine. It only uses the scientific computing package numpy and the numpy-based data processing package pandas, and is based on collaborative filtering; later we may use frameworks such as TensorFlow, Surprise and python-recsys to build more engines with SVD, cosine similarity and deep neural networks.
End of explanation
# Algorithm implementation
def pearson(s1, s2):
s1_c = s1 - s1.mean()
s2_c = s2 - s2.mean()
# print(f"s1_c={s1_c}")
# print(f"s2_c={s2_c}")
denominator = np.sqrt(np.sum(s1_c ** 2) * np.sum(s2_c ** 2))
if denominator == 0:
return 0
return np.sum(s1_c * s2_c) / denominator
Explanation: In _item-item collaborative filtering_, the similarity between two items is measured by looking at all the users who have rated both items.
<img src="img/item-item.png"/>
For _user-item collaborative filtering_, the similarity between two users is measured by looking at all the items that have been rated by both users.
<img src="img/user-item.png"/>
The core algorithm uses Pearson's r to measure distance.
The Pearson correlation coefficient between two variables is defined as the covariance of the two variables divided by the product of their standard deviations.
The formula above defines the population correlation coefficient, usually denoted by the lowercase Greek letter ρ (rho). Estimating the covariance and standard deviations from a sample gives the sample correlation coefficient (the sample Pearson coefficient), usually denoted by the lowercase letter r:
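(Restating the formulas the text refers to, since they were lost in the export:)
\begin{equation}
\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y}, \qquad
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
\end{equation}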
End of explanation
# Erin Brockovich vs. Mission: Impossible II
pearson(M['Erin Brockovich (2000)'],M['Mission: Impossible II (2000)'])
# Erin Brockovich vs. The Lord of the Rings
# pearson(M['Erin Brockovich (2000)'],M['Fingers (1978)'])
# Erin Brockovich vs. Harry Potter and the Chamber of Secrets
# pearson(M['Erin Brockovich (2000)'],M['Harry Potter and the Chamber of Secrets (2002)'])
# Harry Potter and the Chamber of Secrets vs. Harry Potter and the Prisoner of Azkaban
# pearson(M['Harry Potter and the Chamber of Secrets (2002)'],M['Harry Potter and the Prisoner of Azkaban (2004)'])
def get_recs(movie_name, M, num):
reviews = []
for title in M.columns:
if title == movie_name:
continue
cor = pearson(M[movie_name], M[title])
if np.isnan(cor):
continue
else:
reviews.append((title, cor))
reviews.sort(key=lambda tup: tup[1], reverse=True)
return reviews[:num]
# %%time
recs = get_recs('Clerks (1994)', M, 10)
recs[:10]
# %%time
anti_recs = get_recs('Clerks (1994)', M, 8551)
anti_recs[-10:]
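As a sketch of that second engine (an addition to the original notebook; it assumes the pivot table M defined above and fills unrated entries with 0):
from sklearn.metrics import pairwise_distances
M_filled = M.fillna(0)
# movie-to-movie cosine similarity; pairwise_distances returns distances, so subtract from 1
item_sim = 1 - pairwise_distances(M_filled.T, metric='cosine')
item_sim_df = pd.DataFrame(item_sim, index=M.columns, columns=M.columns)
item_sim_df['Clerks (1994)'].drop('Clerks (1994)').sort_values(ascending=False)[:10]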
Explanation: A second algorithm engine could use cosine similarity, which is also widely used in recommender systems: the ratings are treated as vectors in an n-dimensional space and similarity is computed from the angle between those vectors. sklearn's pairwise_distances function can be used to compute cosine similarity. Note that the output ranges from 0 to 1 because all the ratings are positive.
End of explanation |
4,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Surfaces in pyOpTools
The basic object to create optical components in pyOpTools are the surfaces. They are used to define the border that separates 2 materials (for example air-glass) in an optical component.
Below are some of the Surface Objects supported by pyOpTools.
Step1: Plane Surface
The Plane surface is the most simple surface class in the library. It is defined as an ideal infininite $XY$ plane, located at $Z=0$. To define the Plane limits, its constructor receives as an argument a sub-class of Shape. This sub-classes (Circular, Rectangular, Triangular, etc ) define the limits of the Surface (plane in this case).
Some Plane examples
Below are some examples of Plane surfaces limited by different shapes.
Step2: Spherical Surface
The Spherical surface is another very useful Class to define optical components. It is used to define an spherical cap that has its vertex located at the origin ($(0,0,0)$). The normal to the spherical cap at $X=0$ and $Y=0$ is the vector $(0,0,1)$. As it was the case with the Plane surface, it is used with a Shape subclass to define its edges.
Spherical surface limited by a circular shape example
Step3: Cylindrical Surface
pyOpTools has 2 different types of cylindrical surfaces. The first one is the Cylinder , as its name implies defines a closed cylinder. It is mainly used to define the border of a lens. For example a plano-convex lens can be defined as one circular-limited plane, one circular limited spherical surface, and one cylindrical surface.
Below is an example of a cylindrical surface. Please note that this surface does not receive a Shape subclass.
Step4: The second class is the Cylindrical.
Step5: Aspherical Surface | Python Code:
from pyoptools.all import *
from numpy import pi
Explanation: Surfaces in pyOpTools
The basic object to create optical components in pyOpTools are the surfaces. They are used to define the border that separates 2 materials (for example air-glass) in an optical component.
Below are some of the Surface Objects supported by pyOpTools.
End of explanation
P1=Plane(shape=Circular(radius=(25)))
Plot3D(P1,center=(0,0,0),size=(60,60),rot=[(0,0,0)],scale=6)
P2=Plane(shape=Rectangular(size=(50,50)))
Plot3D(P2,center=(0,0,0),size=(60,60),rot=[(0,0,0)],scale=6)
P3=Plane(shape=Triangular(coord=((0,25),(25,-25),(-25,-25))))
Plot3D(P3,center=(0,0,0),size=(60,60),scale=6)
Explanation: Plane Surface
The Plane surface is the most simple surface class in the library. It is defined as an ideal infininite $XY$ plane, located at $Z=0$. To define the Plane limits, its constructor receives as an argument a sub-class of Shape. This sub-classes (Circular, Rectangular, Triangular, etc ) define the limits of the Surface (plane in this case).
Some Plane examples
Below are some examples of Plane surfaces limited by different shapes.
End of explanation
S=Spherical(curvature=1/200., shape=Circular(radius=145.),reflectivity=0)
Plot3D(S,center=(0,0,0),size=(400,400),scale=1)
Explanation: Spherical Surface
The Spherical surface is another very useful Class to define optical components. It is used to define an spherical cap that has its vertex located at the origin ($(0,0,0)$). The normal to the spherical cap at $X=0$ and $Y=0$ is the vector $(0,0,1)$. As it was the case with the Plane surface, it is used with a Shape subclass to define its edges.
Spherical surface limited by a circular shape example
End of explanation
S3=Cylinder(radius=36,length=100,reflectivity=1)
Plot3D(S3,center=(0,0,0),size=(100,100),rot=[(0,pi/32,0)],scale=4)
Explanation: Cylindrical Surface
pyOpTools has 2 different types of cylindrical surfaces. The first one is the Cylinder , as its name implies defines a closed cylinder. It is mainly used to define the border of a lens. For example a plano-convex lens can be defined as one circular-limited plane, one circular limited spherical surface, and one cylindrical surface.
Below is an example of a cylindrical surface. Please note that this surface does not receive a Shape subclass.
End of explanation
S1=Cylindrical(shape=Rectangular(size=(50,100)),curvature=1/20.)
Plot3D(S1,center=(0,0,0),size=(150,150),rot=[(pi/4,0,0)],scale=2)
S2=Cylindrical(shape=Circular(radius=(50)),curvature=1/100.)
Plot3D(S2,center=(0,0,0),size=(150,150),rot=[(-pi/4,0,0)],scale=2)
Explanation: The second class is the Cylindrical.
End of explanation
%%latex
$$Z=\frac{(Ax*x^2+Ay*y^2)}{(1+\sqrt{(1-(1+Kx)*Ax^2*x^2-(1+Ky)*Ay^2*y^2))}}+ poly2d()$$
sa=Aspherical(shape=Rectangular(size=(5,5)),Ax=.2,Ay=.2,Kx=.1, Ky=.15, poly=poly2d((0,0,0,.5,0,.5)))
Plot3D(sa,center=(0,0,5),size=(10,10),rot=[(-3*pi/10,pi/4,0)],scale=40)
sa=Aspherical(shape=Circular(radius=2.5),Ax=.2,Ay=.2,Kx=.1, Ky=.15, poly=poly2d((0,0,0,.5,0,.5)))
Plot3D(sa,center=(0,0,5),size=(10,10),rot=[(-3*pi/10,pi/4,0)],scale=40)
Explanation: Aspherical Surface
End of explanation |
4,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FCLA/FNLA Fast.ai Numerical/Computational Linear Algebra
Lecture 3
Step1: So if A is approx equal to Q•Q.T•A .. but not equal.. then Q is not the identity, but is very close to it.
Oh, right. Q
Step2: Randomized SVD method
Step3: Computational Complexity for a MxN matrix in SVD is $M^2N+N^3$, so Randomized (Truncated?) SVD is a massive improvement.
2018/3/7
Write a loop to calculate the error of your decomposition as your vary the # of topics. Plot the results. | Python Code:
import torch
import numpy as np
Q = np.eye(3)
print(Q)
print(Q.T)
print(Q @ Q.T)
# construct I matrix
Q = torch.eye(3)
# torch matrix multip
# torch.mm(Q, Q.transpose)
Q @ torch.t(Q)
Explanation: FCLA/FNLA Fast.ai Numerical/Computational Linear Algebra
Lecture 3: New Perspectives on NMF, Randomized SVD
Notes / In-Class Questions
WNixalo - 2018/2/8
Question on section: Truncated SVD
Given A: m x n and Q: m x r; is Q the identity matrix?
A≈QQTA
End of explanation
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(suppress=True)
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
# newsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense() # (documents, vocab)
vocab = np.array(vectorizer.get_feature_names())
num_top_words=8
def show_topics(a):
top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]
topic_words = ([top_words(t) for t in a])
return [' '.join(t) for t in topic_words]
# computes an orthonormal matrix whose range approximates the range of A
# power_iteration_normalizer can be safe_sparse_dot (fast but unstable), LU (imbetween), or QR (slow but most accurate)
def randomized_range_finder(A, size, n_iter=5):
# randomly init our Mat to our size; size: num_cols
Q = np.random.normal(size=(A.shape[1], size))
# LU decomp (lower triang * upper triang mat)
# improves accuracy & normalizes
for i in range(n_iter):
Q, _ = linalg.lu(A @ Q, permute_l=True)
Q, _ = linalg.lu(A.T @ Q, permute_l=True)
# QR decomp on A & Q
Q, _ = linalg.qr(A @ Q, mode='economic')
return Q
Explanation: So if A is approx equal to Q•Q.T•A .. but not equal.. then Q is not the identity, but is very close to it.
Oh, right. Q: m x r, not m x m...
If both the columns and rows of Q had been orthonormal, then it would have been the Identity, but only the columns (r) are orthonormal.
Q is a tall, skinny matrix.
AW gives range(A). AW has far more rows than columns ==> in practice these columns are approximately orthonormal (v.unlikely to get lin-dep cols when choosing random values).
QR decomposition is foundational to Numerical Linear Algebra.
Q consists of orthonormal columns, R is upper-triangular.
Calculating Truncated-SVD:
1. Compute approximation to range(A). We want Q with r orthonormal columns such that $$A\approx QQ^TA$$
2. Construct $B = Q^T A$, which is small ($r\times n$)
3. Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$): $B = S\, Σ V^T$
4. Since: $$A \approx QQ^TA = Q(S \, ΣV^T)$$ if we set $U = QS$, then we have a low rank approximation $A \approx UΣV^T$.
How to choose $r$?
If we wanted to get 5 cols from a matrix of 100 cols, (5 topics). As a rule of thumb, let's go for 15 instead. You don't want to explicitly pull exactly the amount you want due to the randomized component being present, so you add some buffer.
Since our projection is approximate, we make it a little bigger than we need.
Implementing Randomized SVD:
First we want a randomized range finder.
End of explanation
def randomized_svd(M, n_components, n_oversamples=10, n_iter=4):
# number of random columns we're going to create is the number of
# columns we want + number of oversamples (extra buffer)
n_random = n_components + n_oversamples
Q = randomized_range_finder(M, n_random, n_iter)
# project M to the (k + p) dimensional space using basis vectors
B = Q.T @ M
# compute SVD on the thin matrix: (k + p) wide
Uhat, s, V = linalg.svd(B, full_matrices=False)
del B
U = Q @ Uhat
# return the number of components we want from U, s, V
return U[:, :n_components], s[:n_components], V[:n_components, :]
%time u, s, v = randomized_svd(vectors, 5)
u.shape, s.shape, v.shape
show_topics(v)
Explanation: Randomized SVD method:
End of explanation
# 1. how do I calculate decomposition error?:
# I guess I'll use MSE?
# # NumPy: # https://stackoverflow.com/questions/16774849/mean-squared-error-in-numpy
# def MSEnp(A,B):
# if type(A) == np.ndarray and type(B) == np.ndarray:
# return ((A - B) ** 2).mean()
# else:
# return np.square((A - B)).mean()
# Scikit-Learn:
from sklearn import metrics
MSE = metrics.mean_squared_error # usg: mse(A,B)
# 2. Now how to recompose my decomposition?:
%time B = vectors # original matrix
%time U, S, V = randomized_svd(B, 10) # num_topics = 10
# S is vector of Σ's singular values. Convert back to matrix:
%time Σ = S * np.eye(S.shape[0])
# from SVD formula: A ≈ U@Σ@V.T
%time A = U@Σ@V ## apparently randomized_svd returns V.T, not V ?
# 3. Finally calculated error I guess:
%time mse_error = MSE(A,B)
print(mse_error)
# Im putting way too much effort into this lol
def fib(n):
if n <= 1:
return n
else:
f1 = 1
f2 = 0
for i in range(n):
t = f1 + f2
tmp = f2
f2 += f1
f1 = tmp
return t
for i,e in enumerate(num_topics):
print(f'Topics: {num_topics[i]:>3} ',
f'Time: {num_topics[i]:>3}')
## Setup
import time
B = vectors
num_topics = [fib(i) for i in range(2,14)]
TnE = [] # time & error
## Loop:
for n_topics in num_topics:
t0 = time.time()
U, S, Vt = randomized_svd(B, n_topics)
Σ = S * np.eye(S.shape[0])
A = U@Σ@Vt
TnE.append([time.time() - t0, MSE(A,B)])
for i, tne in enumerate(TnE):
print(f'Topics: {num_topics[i]:>3} '
f'Time: {np.round(tne[0],3):>3} '
f'Error: {np.round(tne[1],12):>3}')
# https://matplotlib.org/users/pyplot_tutorial.html
plt.plot(num_topics, [tne[1] for tne in TnE])
plt.xlabel('No. Topics')
plt.ylabel('MSE Error')
plt.show()
## R.Thomas' class solution:
step = 20
n = 20
error = np.zeros(n)
for i in range(n):
U, s, V = randomized_svd(vectors, i * step)
reconstructed = U @ np.diag(s) @ V
error[i] = np.linalg.norm(vectors - reconstructed)
plt.plot(range(0,n*step,step), error)
Explanation: Computational Complexity for a MxN matrix in SVD is $M^2N+N^3$, so Randomized (Truncated?) SVD is a massive improvement.
2018/3/7
Write a loop to calculate the error of your decomposition as your vary the # of topics. Plot the results.
End of explanation |
4,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Communication between components
Purpose
Step1: The simplest thing we can do with the Redis server is to set and get key values. The keys are strings and the values are strings, so you can do whatever you want here. Here's a quick example of using it to have two different Nengo models talk to each other. If you start both of these models running, the value in the first one is sent to the second one. | Python Code:
import redis
r = redis.StrictRedis(host='localhost')
r.set('key', 'value')
print(r.get('key'))
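One caveat worth adding (standard redis-py behaviour): under Python 3 the values come back as bytes (b'value'); pass decode_responses=True if you want plain strings instead:
r_str = redis.StrictRedis(host='localhost', decode_responses=True)
print(r_str.get('key'))  # 'value' rather than b'value'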
Explanation: Communication between components
Purpose: If we are connecting to hardware or otherwise doing something involving multiple computers or even separate processes on the same computer, we need some easy way to pass data between them.
There are a bunch of options here, and usually in the past I've done something like generating a UDP send/receive sort of setup. However, a few people have pointed out Redis as an interesting option, and I'm kind of liking it.
What is Redis?
From the website http://redis.io:
Redis is an open source (BSD licensed), in-memory data structure store, used as database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
So that's fairly overkill for what we need, but it turns out that if we turn off persistence, it's a pretty clean solution to what we want. The idea is that you run a server locally, and all your processes just talk with it. It's got good bindings for Python and is nice and fast (it's meant for lots and lots of small messages)
Installing Redis
On Ubuntu, you can just do sudo apt-get install redis-server. This will install it such that the server will automatically start running in the background. You can configure the server by editing /etc/redis/redis.conf and then doing sudo service redis-server restart.
Alternatively, you can build it from source by following these instructions http://redis.io/download#installation:
* wget http://download.redis.io/releases/redis-stable.tar.gz
* tar xzf redis-stable.tar.gz
* cd redis-stable
* make
And now run it with
* src/redis-server
(I prefer the second approach, just because I like only having the server running when I want it and I forget to manually shut down the server with redis-cli shutdown).
For windows users, grab the latest installer from https://github.com/MSOpenTech/redis/releases
Finally, no matter what you do, you'll probably want the Python bindings for redis, which you can get with
pip install redis
Configuring Redis
We don't quite want the default settings for redis, so we need to make some small changes. We do this through a redis.conf file. This will either already exist (if you installed it with apt-get), in which case append these commands to the end of the file, or just make a new text file with just these commands in it.
The main thing to do is turn off persistence. By default, redis dumps all of its information to a file every now and then so that it's possible to do backup recovery. We don't want this for our case, so we add this line to the end of the redis.conf file:
save ""
That's all we definitely have to do, but we may also want to allow remote connections. If you're in a situation where multiple computers are accessing the data (i.e. it's not all just local processes on your computer), then add this line:
bind 0.0.0.0
If you're making your own config file, you can start the Redis server by running
redis-server redis.conf
Passing data with Redis
Once the server is running, we can talk to it from python like this:
End of explanation
import nengo
import numpy as np
r = redis.StrictRedis(host='localhost')
model1 = nengo.Network()
with model1:
stim = nengo.Node(np.sin)
a = nengo.Ensemble(100, 1)
output = nengo.Node(lambda t, x: r.set('decoded_value', x[0]), size_in=1)
nengo.Connection(stim, a)
nengo.Connection(a, output)
import nengo_gui.ipython
nengo_gui.ipython.IPythonViz(model1, 'model1.cfg')
import nengo
import numpy as np
r2 = redis.StrictRedis(host='localhost')
model2 = nengo.Network()
with model2:
reader = nengo.Node(lambda t: float(r2.get('decoded_value')))
a = nengo.Ensemble(100, 1)
nengo.Connection(reader, a)
import nengo_gui.ipython
nengo_gui.ipython.IPythonViz(model2, 'model2.cfg')
Explanation: The simplest thing we can do with the Redis server is to set and get key values. The keys are strings and the values are strings, so you can do whatever you want here. Here's a quick example of using it to have two different Nengo models talk to each other. If you start both of these models running, the value in the first one is sent to the second one.
End of explanation |
4,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Un ipython notebook garde ses résultats depuis la dernière fois, mais pas son état. Il convient donc re-exécuter depuis le début.
Step1: Example of sampling from a probability distribution.
Note that we use the ppf (percent point function) = the inverse of the probability density function. This is where it will likely arise most.
Step2: Running mean example
First we generate a random sample. Then we compute the sum and visualize it.
Questions | Python Code:
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import sklearn
import pandas as pd
%matplotlib inline
x = np.linspace(-100, 100, 201)
plt.plot(x, x * x)
Explanation: An IPython notebook keeps its results from the last run, but not its state. It is therefore best to re-execute it from the beginning.
End of explanation
x = np.linspace(ss.norm.ppf(.01), ss.norm.ppf(.99), 100)
plt.plot(x, ss.norm.pdf(x))
plt.show()
Explanation: Example of sampling from a probability distribution.
Note that we use the ppf (percent point function) = the inverse of the probability density function. This is where it will likely arise most.
End of explanation
num_points = 100
index = np.linspace(1, num_points, num_points)
sample = [ss.norm.rvs() for x in range(num_points)]
plt.plot(index, np.cumsum(sample))
plt.plot(index, [x / (i + 1) for i, x in enumerate(sample)])
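An added note: the second curve divides each individual sample by i + 1 rather than dividing the cumulative sum, so it is not itself the running mean; the running mean would be:
plt.plot(index, np.cumsum(sample) / index)  # cumulative sum divided by the count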
Explanation: Running mean example
First we generate a random sample. Then we compute the sum and visualize it.
Questions:
* What happens if you change num_points?
* What is the difference between the two curves?
End of explanation |
4,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Tensors
Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
Step2: Operation on Tensors
Step3: Numpy Bridge with Tensors
Step4: Running on the Device
Step5: AUTOGRAD
Step6: Gradient
Step7: Vector Jacobian Product
Step8: Neural Network
A typical training procedure for a neural network is as follows | Python Code:
import torch
Explanation: <a href="https://colab.research.google.com/github/rishuatgithub/MLPy/blob/master/PyTorchStuff.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
All about Pytorch
End of explanation
x = torch.empty(5,3) ## empty
x
x = torch.randn(5,3) ## random initialized
x
x = torch.zeros(5,3, dtype=torch.long)
x
type(x)
x = torch.ones(5,2)
x
myarr = [[10,20.2],[30,40]] ## sample data
x = torch.tensor(myarr)
x
## create a tensor from an existing tensor
x = torch.tensor([[1,2],[3,4]], dtype=torch.int16)
print(f"X tensor: {x}")
y = torch.tensor(x, dtype=torch.float16)
print(f"Y tensor: {y}")
## size of tensor
x.size()
Explanation: Tensors
Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
End of explanation
x = torch.randn(5,2)
x
y = torch.ones(5,2)
y
x + y ## sum of two tensors
torch.add(x,y) ## alternative: sum of two tensors
## In Place addition : Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x.
y.add_(x)
### Standard numpy operations on tensors
x[:1]
type(x[:1])
### Resize the tensor
x = torch.randn(4,4)
x
y = x.view(16)
y
z = x.view([-1,8])
z
## transpose an array
torch.transpose(x, 0,1)
## to get the value of a tensor
x = torch.randn(1)
print(x)
print(x.item())
Explanation: Operation on Tensors
End of explanation
a = torch.ones(5)
a
type(a)
### convert to numpy
a.numpy()
## converting numpy array to torch tensors
import numpy as np
a = np.ones(5)
t = torch.tensor(a, dtype=torch.int)
print(a, type(a))
print(t, type(t))
Explanation: Numpy Bridge with Tensors
End of explanation
### if CUDA is available or not
torch.cuda.is_available()
if torch.cuda.is_available():
device = torch.device("cuda") ## define device
x = torch.ones(5) ## normal stuff
print(x)
y = torch.ones_like(x, device=device) ### running it on gpu
print(y)
x = x.to(device) ## change the execution to device
z = x + y
print(z)
print(z.to("cpu", dtype=torch.int32)) ## change the data type of z using .to and run it on cpu
Explanation: Running on the Device
End of explanation
x = torch.ones(2,2, requires_grad=True)
x
y = x + 2
y
y.grad_fn ### y was created as a result of an operation, hence it has a grad_fn
## more operation on y
z = y*y*3
out = z.mean()
print(z, out)
### .requires_grad_( ... ) changes an existing Tensor’s requires_grad flag in-place. The input flag defaults to False if not given.
a = torch.randn(2,2)
a = ((a*2)/(a-1))
print(a.requires_grad)
a.requires_grad_(True) ## changing the grad inplace
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
Explanation: AUTOGRAD
End of explanation
out ## out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1.)).
out.backward()
print(x.grad) ### Print gradients d(out)/dx
## another example
t1 = torch.ones(1, requires_grad= True)
t2 = torch.ones(1, requires_grad=True)
print(t1, t2)
s = t1+t2
print(s)
s.grad_fn
s.backward()
t1.grad
Explanation: Gradient
End of explanation
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
print(x.requires_grad)
y = x.detach()
print(y.requires_grad)
print(x.eq(y).all())
Explanation: Vector Jacobian Product
End of explanation
import torch.nn as nn
import torch.nn.functional as F
X = torch.tensor(([2, 9], [5, 1], [3, 6]), dtype=torch.float) ## 3x2 tensor
y = torch.tensor(([92], [100], [89]), dtype=torch.float) ## 3x1 tensor
xPredicted = torch.tensor(([4, 8]), dtype=torch.float) # 1 X 2 tensor
print(X)
print(y)
X_max, X_max_ind = torch.max(X, 0) ## return max including indices and max values per col. 0 for col, 1 for row
print(X_max, X_max_ind)
xPredicted_max, _ = torch.max(xPredicted, 0)
print(xPredicted_max)
y_max = torch.max(y)
print(y_max)
## scaling
X = torch.div(X, X_max)
xPredicted = torch.div(xPredicted, xPredicted_max)
y = y/y_max
print(f"X is : {X}")
print(f"xPredicted is : {xPredicted}")
print(f"y is : {y}")
class SimpleNN(nn.Module):
def __init__(self):
super(SimpleNN, self).__init__()
## parameters
self.input_size = 2
self.hidden_layer = 3
self.output_layer = 1
## initializing the weights
self.W1 = torch.randn(self.input_size, self.hidden_layer) # 2x3 tensor
self.W2 = torch.randn(self.hidden_layer, self.output_layer) # 3x1 tensor
def forward(self, X):
'''
Forward propagation
'''
self.z = torch.matmul(X, self.W1)
self.z2 = torch.sigmoid(self.z)
self.z3 = torch.matmul(self.z2, self.W2)
o = torch.sigmoid(self.z3) ## final activation function
return o
def sigmoid(self, s):
return 1 / (1 + torch.exp(-s))
def sigmoidPrime(self, s):
# derivative of sigmoid
return s * (1 - s)
def backward(self, X, y, o):
'''
Backward propagation
'''
self.o_error = y - o ## calculate the difference b/w predicted and actual
self.o_delta = self.o_error * self.sigmoidPrime(o) ## derivative of sig to error
self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2))
self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)
self.W1 += torch.matmul(torch.t(X), self.z2_delta)
self.W2 += torch.matmul(torch.t(self.z2), self.o_delta)
def train(self, X, y):
# forward + backward pass for training
o = self.forward(X)
self.backward(X, y, o)
def saveWeights(self, model):
# we will use the PyTorch internal storage functions
torch.save(model, "NN")
def predict(self):
print ("Predicted data based on trained weights: ")
print ("Input (scaled):" + str(xPredicted))
print ("Output:" + str(self.forward(xPredicted)))
NN = SimpleNN()
print(NN)
for i in range(10): # trains the NN 10 times
print ("#" + str(i) + " Loss: " + str(torch.mean((y - NN(X))**2).detach().item())) # mean sum squared loss
NN.train(X, y)
NN.saveWeights(NN)
NN.predict()
Explanation: Neural Network
A typical training procedure for a neural network is as follows:
Define the neural network that has some learnable parameters (or weights)
Iterate over a dataset of inputs
Process input through the network
Compute the loss (how far is the output from being correct)
Propagate gradients back into the network’s parameters
Update the weights of the network, typically using a simple update rule:
weight = weight - learning_rate * gradient
Simple NN
End of explanation |
4,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulate RAD-seq data
The simulations software simrrls is available at github.com/dereneaton/simrrls. First we create a directory called ipsimdata/ and then simulate data and put it in this directory. Simrrls has a few dependencies that are required.
Global variables
Step1: Set up / clean up directories
Step3: Simulate the RAD data
Step4: Manufacture a psuedo-genome with hits from sim data
These functions take simulated rad-data and a "large" input genome (really it could just be a randomly generated fasta), and randomly inserts a handful of simulated rad tags into the genome. This guarantees that reference mapping will actually do something. For PE simulated data R2 reads are reversed before they're inserted, because smalt is using the -l pe flag, which looks for reads in this orientation --> <--. Also, for PE inner mate distance is fixed at 50. If you wanna get ambitious you could draw this value from a distribution, but seems like more effort than it's worth. This wants to be run from ipsimdata/, but you can run it from anywhere if you update the paths.
Step6: Function to insert reads into simulated genome
Step7: Make pseudo-ref data files
Step8: Run tests
rad_example
Step9: gbs example
Step10: pairddrad example
Step11: Clean up test dirs
Step12: Create zipped simdata archive | Python Code:
## name for our sim data directory
DIR = "./ipsimdata"
## A mouse MT genome used to stick our data into.
INPUT_CHR = "/home/deren/Downloads/MusMT.fa"
## number of RAD loci to simulate
NLOCI = 1000
## number of inserts to reference genome and insert size
N_INSERTS = 100
INSERT_SIZE = 50
Explanation: Simulate RAD-seq data
The simulations software simrrls is available at github.com/dereneaton/simrrls. First we create a directory called ipsimdata/ and then simulate data and put it in this directory. Simrrls has a few dependencies that are required.
Global variables
End of explanation
import os
import shutil
## rm sim dir if it exists, else create it.
while 1:
if os.path.exists(DIR):
shutil.rmtree(DIR)
else:
os.mkdir(DIR)
## make sure dir is finished removing
if os.path.exists(DIR):
break
## rm testdirs if they exist
TESTDIRS = ["./testref1", "./testref2", "./testref3", "./testref4"]
for testdir in TESTDIRS:
if os.path.exists(testdir):
shutil.rmtree(testdir)
Explanation: Set up / clean up directories
End of explanation
import simrrls
print 'simrrls', simrrls.__version__
import subprocess
## this is the bash command-line call to simrrls
cmd = \
simrrls -o {odir}/{oname} -f {form} -dm 10 -ds 2
-I 0.01 -L {nloci} -i1 {imin} -i2 {imax}
## simulate rad_example (includes indels)
call = cmd.format(odir=DIR, oname="rad_example", form="rad",
imin=50, imax=100, nloci=NLOCI)
print call
subprocess.check_output(call.split())
## simulate gbs_example (includes indels)
call = cmd.format(odir=DIR, oname="gbs_example", form="gbs",
imin=50, imax=100, nloci=NLOCI)
print call
subprocess.check_output(call.split())
## simulate pairddrad_example (includes indels)
call = cmd.format(odir=DIR, oname="pairddrad_example", form="pairddrad",
imin=50, imax=100, nloci=NLOCI)
print call
subprocess.check_output(call.split())
## simulate gbs_example (includes indels)
call = cmd.format(odir=DIR, oname="pairgbs_example", form="pairgbs",
imin=50, imax=100, nloci=NLOCI)
print call
subprocess.check_output(call.split())
## simulate pairddrad_example (includes indels and merged reads)
call = cmd.format(odir=DIR, oname="pairddrad_wmerge_example", form="pairddrad",
imin=-50, imax=50, nloci=NLOCI)
print call
subprocess.check_output(call.split())
## simulate gbs_example (includes indels and merged reads)
call = cmd.format(odir=DIR, oname="pairgbs_wmerge_example", form="pairgbs",
imin=-50, imax=50, nloci=NLOCI)
print call
subprocess.check_output(call.split())
Explanation: Simulate the RAD data
End of explanation
import itertools
import gzip
import random
from Bio import SeqIO
Explanation: Manufacture a pseudo-genome with hits from sim data
These functions take simulated rad-data and a "large" input genome (really it could just be a randomly generated fasta), and randomly inserts a handful of simulated rad tags into the genome. This guarantees that reference mapping will actually do something. For PE simulated data R2 reads are reversed before they're inserted, because smalt is using the -l pe flag, which looks for reads in this orientation --> <--. Also, for PE inner mate distance is fixed at 50. If you wanna get ambitious you could draw this value from a distribution, but seems like more effort than it's worth. This wants to be run from ipsimdata/, but you can run it from anywhere if you update the paths.
End of explanation
## Utility function
def revcomp(sequence):
"returns reverse complement of a string"
sequence = sequence[::-1].strip()\
.replace("A", "t")\
.replace("T", "a")\
.replace("C", "g")\
.replace("G", "c").upper()
return sequence
def RAD_to_genome(R1s, R2s, n_inserts, insert_sz, input_chr, out_chr):
Writes simulated rad data into a genome fasta file.
Assumes RAD data file has names formatted like the
output from simrrls.
## read in the full genome file
record = SeqIO.read(input_chr, "fasta")
lenchr = len(record.seq)
## read in the RAD data files
dat1 = gzip.open(R1s, 'r')
qiter1 = itertools.izip(*[iter(dat1)]*4)
if R2s:
dat2 = gzip.open(R2s, 'r')
qiter2 = itertools.izip(*[iter(dat2)]*4)
else:
qiter2 = itertools.izip(*[iter(str, 1)]*4)
## sample unique reads from rads
uniqs = []
locid = 0
while len(uniqs) < n_inserts:
## grab a read and get locus id
qrt1 = qiter1.next()
qrt2 = qiter2.next()
iloc = []
ilocid = int(qrt1[0].split("_")[1][5:])
## go until end of locus copies
while ilocid == locid:
iloc.append([qrt1[1].strip(), qrt2[1].strip()])
qrt1 = qiter1.next()
qrt2 = qiter2.next()
ilocid = int(qrt1[0].split("_")[1][5:])
## sample one read
uniqs.append(random.sample(iloc, 1)[0])
locid += 1
## insert RADs into genome
sloc = 100
for ins in range(n_inserts):
## get read, we leave the barcode on cuz it won't hurt
r1 = uniqs[ins][0]
r2 = uniqs[ins][1]
if not r2:
record.seq = record.seq[:sloc]+r1+\
record.seq[sloc:]
else:
record.seq = record.seq[:sloc]+r1+\
record.seq[sloc:sloc+insert_sz]+\
revcomp(r2)+\
record.seq[sloc+insert_sz:]
sloc += 300
## write to file
rlen = len(qrt1[1].strip())
if r2:
rlen *= 2
print("input genome is {} bp".format(lenchr))
print('imputed {} loci {} bp in len'.format(n_inserts, rlen))
print("new pseudo-genome is {} bp".format(len(record.seq)))
output_handle = open(out_chr, "w")
SeqIO.write(record, output_handle, "fasta")
output_handle.close()
Explanation: Function to insert reads into simulated genome
End of explanation
## SE RAD data
DATA_R1 = DIR+"/rad_example_R1_.fastq.gz"
OUTPUT_CHR = DIR+"/rad_example_genome.fa"
RAD_to_genome(DATA_R1, 0, N_INSERTS, INSERT_SIZE, INPUT_CHR, OUTPUT_CHR)
## SE GBS data
DATA_R1 = DIR+"/gbs_example_R1_.fastq.gz"
OUTPUT_CHR = DIR+"/gbs_example_genome.fa"
RAD_to_genome(DATA_R1, 0, N_INSERTS, INSERT_SIZE, INPUT_CHR, OUTPUT_CHR)
## PAIR ddRAD data
DATA_R1 = DIR+"/pairddrad_wmerge_example_R1_.fastq.gz"
DATA_R2 = DIR+"/pairddrad_wmerge_example_R2_.fastq.gz"
OUTPUT_CHR = DIR+"/pairddrad_wmerge_example_genome.fa"
RAD_to_genome(DATA_R1, DATA_R2, N_INSERTS, INSERT_SIZE, INPUT_CHR, OUTPUT_CHR)
## PAIR GBS data
DATA_R1 = DIR+"/pairgbs_wmerge_example_R1_.fastq.gz"
DATA_R2 = DIR+"/pairgbs_wmerge_example_R2_.fastq.gz"
OUTPUT_CHR = DIR+"/pairgbs_wmerge_example_genome.fa"
RAD_to_genome(DATA_R1, DATA_R2, N_INSERTS, INSERT_SIZE, INPUT_CHR, OUTPUT_CHR)
Explanation: Make pseudo-ref data files
End of explanation
import ipyrad as ip
## create an assembly for denovo
data1 = ip.Assembly("denovo")
data1.set_params(1, "testref1")
data1.set_params(2, DIR+'/rad_example_R1_.fastq.gz')
data1.set_params(3, DIR+'/rad_example_barcodes.txt')
## branch into an assembly for reference
data2 = data1.branch("reference")
data2.set_params(5, 'reference')
data2.set_params(6, DIR+'/rad_example_genome.fa')
## assemble both
data1.run(force=True)
data2.run(force=True)
## check results
assert data1.stats_dfs.s7_loci.sum_coverage.max() == NLOCI
assert data2.stats_dfs.s7_loci.sum_coverage.max() == N_INSERTS
Explanation: Run tests
rad_example
End of explanation
import ipyrad as ip
## create an assembly for denovo
data1 = ip.Assembly("denovo")
data1.set_params(1, "testref2")
data1.set_params(2, DIR+'/gbs_example_R1_.fastq.gz')
data1.set_params(3, DIR+'/gbs_example_barcodes.txt')
## branch into an assembly for reference
data2 = data1.branch("reference")
data2.set_params(5, 'reference')
data2.set_params(6, DIR+'/gbs_example_genome.fa')
## assemble both
data1.run(force=True)
data2.run(force=True)
## check results
assert data1.stats_dfs.s7_loci.sum_coverage.max() == NLOCI
assert data2.stats_dfs.s7_loci.sum_coverage.max() == N_INSERTS
Explanation: gbs example
End of explanation
import ipyrad as ip
## create an assembly for denovo
data1 = ip.Assembly("denovo")
data1.set_params(1, "testref3")
data1.set_params(2, DIR+'/pairddrad_wmerge_example_R1_.fastq.gz')
data1.set_params(3, DIR+'/pairddrad_wmerge_example_barcodes.txt')
## branch into an assembly for reference
data2 = data1.branch("reference")
data2.set_params(5, 'reference')
data2.set_params(6, DIR+'/pairddrad_wmerge_example_genome.fa')
## assemble both
data1.run(force=True)
data2.run(force=True)
## check results
assert data1.stats_dfs.s7_loci.sum_coverage.max() == NLOCI
assert data2.stats_dfs.s7_loci.sum_coverage.max() == N_INSERTS
import ipyrad as ip
## create an assembly for denovo
data1 = ip.Assembly("denovo")
data1.set_params(1, "testref4")
data1.set_params(2, DIR+'/pairgbs_wmerge_example_R1_.fastq.gz')
data1.set_params(3, DIR+'/pairgbs_wmerge_example_barcodes.txt')
## branch into an assembly for reference
data2 = data1.branch("reference")
data2.set_params(5, 'reference')
data2.set_params(6, DIR+'/pairgbs_wmerge_example_genome.fa')
## assemble both
data1.run(force=True)
data2.run(force=True)
## check results
assert data1.stats_dfs.s7_loci.sum_coverage.max() == 1000
assert data2.stats_dfs.s7_loci.sum_coverage.max() == N_INSERTS
Explanation: pairddrad example
End of explanation
import glob
import os
import shutil
## rm dir if it exists, else create it.
for tdir in glob.glob("testref[1-9]"):
shutil.rmtree(tdir)
## rm reference index
for iref in glob.glob(DIR+"/*_genome.fa.*"):
os.remove(iref)
Explanation: Clean up test dirs
End of explanation
%%bash
## compressed dir/ w/ all data files
tar -zcvf ipsimdata.tar.gz ipsimdata/*
Explanation: Create zipped simdata archive
End of explanation |
4,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
抽象数据类型和 Python 类
抽象数据类型
Abstract data Type, ADT
数据类型
Python 基本数据类型:逻辑类型bool,数值类型int和float,字符串str和组合数据类型
str, tuple, frozenset 是不变数据类型,list, set, dict 是可变数据类型
Python类
在Python中,利用class定义(类定义)实现抽象数据类型。
Python 中基于 class 的编程技术,称为面向对象技术
类定义机制用于定义程序里需要的类型,定义好的一个类就像一个系统内部类型,可以产生该类型的对象(实例),实例对象具有这个类描述的行为。
类里定义的变量和函数称为这个类的属性,属性包括:数据属性和方法
执行一个类定义将创建一个类对象(类本身就是一个对象),这种对象主要支持两种操作:属性访问和实例化(创建这个类的实例对象)
数据属性
数据属性分为类数据属性和实例数据属性
类数据属性属于类本身,可以通过类名进行访问/修改
类数据属性也可以被类的所有实例访问/修改
在类定义之后,可以通过类名动态添加类数据属性,新增的类属性也被类和所有实例共有
实例数据属性只能通过实例访问
在实例生成后,还可以动态添加实例数据属性,但是这些实例数据属性只属于该实例
注意:
在类定义的方法中调用类数据属性时,要用引用的方式,如:Student.skills
虽然通过实例可以访问类属性,但是,不建议这么做,最好还是通过类名来访问类属性,从而避免属性隐藏带来的不必要麻烦。
Step2: 特殊的类属性
|类属性 | 含义 |
|
Step3: 方法
实例方法
静态方法
类方法
实例方法
如果希望类里的一个函数能作为该类实例的方法,这个函数至少需要有一个表示其调用对象的形参,放在函数定义参数表的第一个位置,通常取名self(可以用任何名字,这只是Python社区的习惯)。
初始化方法是一种特殊的实例方法。
作用:新建一个实例对象时,自动调用初始化方法,给实例对象绑定一些属性。
Step4: 定义一个Animal类,初始化方法把形参name赋值给实例对象数据属性name
通过animal.get_name()调用实例方法,实例对象animal作为get_name的第一个实参约束到函数的第一个形参self,所以在实例方法定义中对self的操作都是对调用这个实例方法的实例的操作。
总结:实例方法能够对实例对象操作,在定义时至少有一个形参,其中第一个形参通常是self,当实例对象调用方法时,这个对象就被约束到self上(初始化时也类似)。
@staticmethod 静态方法
静态方法就是在类里面定义的普通函数,但根据信息局部化的原则,局部使用的功能不应该定义为全局函数,所以把它定义在类内部。
静态方法的参数表 不应该有 self 参数,在其他方面也没有任何限制.
它不会对类的实例进行操作,但类和类的实例都可以调用静态方法,可以从其定义所在类的名字出发通过圆点形式调用,也可以从该类的对象(实例)通过圆点形式调用。
Step5: @classmethod 类方法
类方法和实例方法一样,都有参数限制,在定义类方法时必须有一个表示其调用类的参数,习惯用 cls 作为参数名,通常用类方法实现与本类所有对象有关的操作。类和实例对象都能调用类方法。
Step6: 私有变量
官方教程:
"Private" instance variables that cannot be accessed except from inside an object don’t exist in Python, However, there is a convention that is followed by most Python code
Step7: 通常在类中定义方法去访问和修改这些私有变量。
数据封装,确保了外部代码不能随意修改对象内部的状态,通过访问限制的保护,代码更加健壮。
通过定义的方法修改参数,可以对参数做检查,同样使代码健壮。
Step8: 约定把以一个下划线开头的名字做为实例对象内部的东西,永远不从对象的外部访问他们
两个下划线开头(不是以两个下划线结尾),在类之外采用属性访问方式直接写这个名字将无法找到他
注意:以两个下划线开头和结尾的是特殊变量(如:_doc_),特殊变量是可以直接访问的。
继承
基于类和对象的程序设计成为面向对象的程序设计(OOP)
定义类
创建实例对象
调用对象的方法完成计算工作,包括对象间的信息交换
基类 派生类
替换原理:一个类的实例对象的上下文可以使用其派生类的实例对象
Python内置函数issubclass检查两个类是否有继承关系
Step9: 注意:
说明文档对于类,函数/方法,以及模块来说是唯一的,也就是说__doc__属性是不能从父类中继承来的。
Step10: 方法查找
一个实例对象调用方法,Python解释器需要确定调用的函数(在哪个类里定义的函数),这个过程沿着继承关系进行。
有一点需要注意:
动态约束
静态约束
在程序设计领域,通过动态约束确定调用关系的函数称为虚函数
Step11: 标准函数 super()
不要一说到 super 就想到父类!super 指的是 MRO 中的下一个类!
MRO
Step12: 首先建立一个实例对象,会自动调用初始化方法,为什么 B 的 init 会被调用:因为 D 没有定义 init,所以会在 MRO 中找下一个类,去查看它有没有定义 init,也就是去调用 B 的 init。
在 MRO 中,基类永远出现在派生类后面,如果有多个基类,基类的相对顺序保持不变
上面的例子的继承链(MRO顺序):[D, B, C, Root, Object]
_slots_
当我们通过一个类创建了实例之后,仍然可以给实例动态添加属性,但是这些属性只属于这个实例。
有些时候,我们可以需要限制类实例对象的属性,这时就要用到类中的__slots__属性了。__slots__属性对于一个tuple,只有这个tuple中出现的属性可以被类实例使用。
使用__slots__要注意,__slots__定义的属性仅对当前类的实例起作用,对继承的子类实例是不起作用的
如果子类本身也有_slots__属性,子类的属性就是自身的__slots__加上父类的__slots_
实例不超过万级别的类,__slots__是不太值得使用的。
Step13: _new_
编程实践
通常把类的定义写在模块最外层,这样定义的类在整个模块(.py文件)都能使用,而且允许其他模块通过 import 语句导入和使用
Python 异常
Python 的异常都是类(class),运行时产生异常就是生成相应类的实例对象,异常处理机制完全基于面向对象的概念和性质。
所有异常类的基类 BaseException,其最主要的子类是 Exception,内置异常类都是从这个类直接或间接派生。
捕捉异常语句:try | Python Code:
class Student(object):
skills = []
def __init__(self, name):
self.name = name
stu = Student('ly')
print Student.skills # access the class data attribute
Student.skills.append('Python')
print Student.skills
print stu.skills # the class data attribute can also be reached through an instance
print dir(Student)
Student.age = 25 # dynamically add a class data attribute via the class name
print dir(Student)
print stu.age
Explanation: Abstract Data Types and Python Classes
Abstract data types
Abstract data Type, ADT
Data types
Python's basic data types: the boolean type bool, the numeric types int and float, the string type str, and the compound data types
str, tuple and frozenset are immutable data types; list, set and dict are mutable data types
Python classes
In Python, abstract data types are implemented with class definitions (the class statement).
Class-based programming in Python is known as object-oriented programming.
The class-definition mechanism is used to define the types a program needs. A defined class behaves like a built-in type: it can produce objects (instances) of that type, and those instances exhibit the behavior the class describes.
The variables and functions defined inside a class are called the class's attributes; attributes include data attributes and methods.
Executing a class definition creates a class object (the class itself is an object). Such an object mainly supports two operations: attribute access and instantiation (creating instances of the class).
Data attributes
Data attributes are divided into class data attributes and instance data attributes
Class data attributes belong to the class itself and can be accessed/modified through the class name
Class data attributes can also be accessed/modified by all instances of the class
After the class is defined, class data attributes can be added dynamically through the class name; the newly added class attributes are likewise shared by the class and all of its instances
Instance data attributes can only be accessed through an instance
After an instance is created, instance data attributes can still be added dynamically, but they belong only to that instance
Note:
When a method in the class definition refers to a class data attribute, use the qualified form, e.g. Student.skills
Although class attributes can be accessed through an instance, this is not recommended; it is better to access class attributes through the class name, which avoids the unnecessary trouble caused by attribute shadowing.
End of explanation
class Student(object):
    Student class
skills = []
def __init__(self, name):
self.name = name
stu = Student('ly')
print Student.__name__ # the class's name: 'Student'
print Student.__doc__ # the class's documentation string
print Student.__bases__
print Student.__class__ # the class's __class__
print stu.__class__ # the instance's __class__
print isinstance(stu, Student)
print dir(Student)
print dir(stu)
print Student.__name__ # __name__ can be reached through the class Student
print stu.__name__ # but it cannot be reached through the instance (raises AttributeError)
Explanation: Special class attributes
| Class attribute | Meaning |
|:------ |:-------------|
| __name__ | the class's name (a string) |
| __doc__ | the class's documentation string |
| __bases__ | a tuple of all the class's base classes |
| __dict__ | a dict of the class's attributes |
| __module__ | the module the class belongs to |
| __class__ | the type of the class object |
End of explanation
class Animal(object):
    # the initializer, a special instance method
def __init__(self, name):
self.name = name
    # an ordinary instance method
def get_name(self):
return self.name
animal = Animal(name='ly')
print animal.name
print animal.get_name()
Explanation: Methods
Instance methods
Static methods
Class methods
Instance methods
If you want a function in a class to act as a method of the class's instances, the function needs at least one formal parameter representing the object it is called on, placed first in the parameter list and conventionally named self (any name works; self is simply the Python community convention).
The initializer is a special instance method.
Its purpose: when a new instance is created, the initializer is called automatically to bind attributes to that instance.
End of explanation
class Animal(object):
@staticmethod
def hello():
print 'hello'
animal = Animal()
Animal.hello() # called via the class
animal.hello() # called via an instance
Explanation: Define an Animal class whose initializer assigns the parameter name to the instance data attribute name
Calling the instance method via animal.get_name() binds the instance animal, as the first actual argument of get_name, to the method's first formal parameter self, so operations on self inside an instance method act on whichever instance called the method.
Summary: instance methods operate on instance objects; they are defined with at least one parameter, the first usually being self, and when an instance calls the method that instance is bound to self (initialization works the same way).
@staticmethod static methods
A static method is just an ordinary function defined inside a class; by the principle of information localization, functionality used only locally should not be defined as a global function, so it is placed inside the class.
A static method's parameter list should not contain self; beyond that there are no restrictions.
It does not operate on instances of the class, but both the class and its instances can call a static method, either via the class name with dot notation or via an instance with dot notation.
End of explanation
class Countable(object):
counter = 0
def __init__(self):
        Countable.counter += 1 # inside a method, refer to the class data attribute by its qualified name
@classmethod
def get_count(cls):
return Countable.counter
a = Countable()
b = Countable()
print Countable.get_count() # called via the class
print a.get_count() # called via an instance
Explanation: @classmethod class methods
Like instance methods, class methods have a parameter requirement: a class method must be defined with a parameter representing the calling class, conventionally named cls. Class methods are typically used for operations that concern all objects of the class. Both the class and its instances can call class methods.
End of explanation
class Animal(object):
def __init__(self, name, age):
self.__name = name
self.age = age
animal = Animal('ly', 25)
print animal.age
print animal._Animal__name
print animal.name
Explanation: Private variables
The official tutorial:
"Private" instance variables that cannot be accessed except from inside an object don’t exist in Python, However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.
Internally, Python uses a name-mangling technique that rewrites __membername as _classname__membername, so using the original private name from outside the class reports that it cannot be found. The private variable can still be reached through the mangled name, but this is strongly discouraged, because different versions of the Python interpreter may mangle it into different names.
End of explanation
class Animal(object):
def __init__(self, name, age):
self.__name = name
self.__age = age
def get_age(self):
return self.__age
def modified_age(self, age):
        if age > 0 and age < 120: # sanity-check the argument
self.__age = age
animal = Animal('ly', 25)
print animal.get_age()
animal.modified_age(26)
print animal.get_age()
Explanation: Methods are usually defined in the class to access and modify these private variables.
Data encapsulation ensures that external code cannot arbitrarily modify an object's internal state; with the protection of access restrictions, the code is more robust.
Modifying attributes through defined methods also allows the arguments to be checked, which likewise makes the code more robust.
End of explanation
class Mystr(str): # inherits from str
pass
s = Mystr(123)
print issubclass(Mystr, str)
print isinstance(s, str), isinstance(s, Mystr)
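## (Extra illustration, not in the original): because Mystr derives from str,
## its instances can be used anywhere a str is expected -- the substitution principle.
print s.upper(), len(s)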
Explanation: By convention, names starting with a single underscore are treated as internal to the instance object and are never accessed from outside the object
Names starting with two underscores (and not ending with two underscores) cannot be reached by writing the name directly in attribute-access form outside the class
Note: names that both start and end with two underscores are special variables (e.g. __doc__), and special variables can be accessed directly.
Inheritance
Programming based on classes and objects is called object-oriented programming (OOP)
Define classes
Create instance objects
Call the objects' methods to carry out the computation, including exchanging information between objects
Base classes and derived classes
Substitution principle: wherever an instance of a class is expected, an instance of one of its derived classes can be used
The Python built-in function issubclass checks whether two classes have an inheritance relationship
End of explanation
class Parent(object):
'''
parent class
'''
numList = []
def numAdd(self, a, b):
return a+b
class Child(Parent):
pass
parent = Parent()
child = Child()
print Parent.__doc__
print Child.__doc__ # subclasses do not inherit __doc__
print Child.__bases__
print Parent.__bases__
print Parent.__class__ # the type
print Child.__class__
print parent.__class__
print child.__class__
Explanation: Note:
The documentation string is specific to each class, function/method and module; in other words, the __doc__ attribute is not inherited from the parent class.
End of explanation
class Parent(object):
def f(self):
self.g()
def g(self):
print 'Parent.f.g'
class Child(Parent):
def g(self):
print 'Child.f.g'
child = Child()
child.f()
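## (Extra check, not in the original): the same call on a Parent instance
## resolves to Parent.g, which is exactly what dynamic binding means.
p = Parent()
p.f()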
Explanation: Method lookup
When an instance calls a method, the Python interpreter has to determine which function to call (in which class it is defined); this lookup proceeds along the inheritance relationship.
One point to note:
dynamic binding
static binding
In programming, a function whose call target is determined by dynamic binding is called a virtual function
End of explanation
class Root(object):
def __init__(self):
print("this is Root")
class B(Root):
def __init__(self):
print("enter B")
super(B, self).__init__()
print("leave B")
class C(Root):
def __init__(self):
print("enter C")
super(C, self).__init__()
print("leave C")
class D(B, C):
pass
d = D()
print(d.__class__.__mro__)
print D.__mro__
print D.mro()
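## (Extra check, not in the original): in D's MRO the class that follows B is C,
## which is why B's super().__init__() ran C's initializer above.
print D.__mro__[D.__mro__.index(B) + 1]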
Explanation: The standard function super()
Don't think "parent class" whenever you see super! super refers to the next class in the MRO!
MRO: Method Resolution Order, the order in which method lookup walks the inheritance graph
python
def super(cls, inst):
mro = inst.__class__.mro() # Always the most derived class
return mro[mro.index(cls) + 1]
What super() does:
1. obtain the MRO of inst's class through inst
2. locate cls's index in that MRO and return the next class after it
End of explanation
class Parent(object):
    __slots__ = ('name',) # __slots__ limits instances to the attributes listed here
def __init__(self, name):
self.name = name
a = Parent('ly')
print a.name
Parent.age = 25 # __slots__ does not restrict attributes added dynamically via the class name
print a.age
Explanation: First an instance is created, which automatically calls the initializer. Why is B's init called? Because D does not define init, the lookup moves to the next class in the MRO and checks whether it defines init, which means B's init is the one that runs.
In the MRO, a base class always appears after its derived classes; if there are multiple base classes, their relative order is preserved
The inheritance chain (MRO order) of the example above: [D, B, C, Root, Object]
__slots__
After we create an instance from a class, we can still add attributes to the instance dynamically, but those attributes belong only to that instance.
Sometimes we may need to restrict the attributes of a class's instances; this is what the class's __slots__ attribute is for. __slots__ is set to a tuple, and only the attributes listed in that tuple can be used by instances of the class.
Note that the attributes defined by __slots__ only apply to instances of the current class; they have no effect on instances of subclasses
If the subclass also defines __slots__, the attributes allowed for the subclass are its own __slots__ plus the parent's __slots__
For classes with no more than tens of thousands of instances, __slots__ is rarely worth using.
End of explanation
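## (Sketch, not in the original notebook): __slots__ does not constrain subclasses
## unless they define __slots__ themselves.
class Unslotted(Parent):
    pass
u = Unslotted('ly')
u.anything = 42   ## works: a subclass without its own __slots__ gets a normal __dict__
print u.anything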
try:
a = 5 / 0
except ZeroDivisionError:
print 'error'
finally:
print 'end'
Explanation: __new__
Programming practice
Class definitions are usually written at the top level of a module, so the class can be used throughout the module (.py file) and can also be imported and used by other modules via import statements
Python exceptions
Python exceptions are all classes; raising an exception at run time creates an instance of the corresponding class, and the exception-handling mechanism is based entirely on object-oriented concepts and properties.
The base class of all exception classes is BaseException; its most important subclass is Exception, from which all built-in exception classes derive directly or indirectly.
The statement for catching exceptions: try: ... except ... finally: ...
End of explanation |
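## (Sketch, not from the original notebook): __new__ is the method that actually
## creates the instance, before __init__ initializes it. A classic use is a singleton.
class Singleton(object):
    _instance = None
    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance
a = Singleton()
b = Singleton()
print a is b   ## True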
4,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the LIGO data visualization tutorial!
Installation
Please make sure you have GWpy installed before you begin!
Only execute the below cell if you have not already installed GWpy
Step1: <span style="color
Step2: Here we see data for
Step3: And O2
Step4: <span style="color
Step5: Querying for data around detected events
We can also query for event releases (data on the order of minutes around the event time) specifically by using find_datasets and specifying the event type.
Step6: Querying event GPS times
If we want to know the GPS time of a particular event, we can grab that with event_gps. Let's try that for the first binary neutron star detection, GW170817.
Step7: Detector tags
We can also filter our results by which gravitational wave detectors were active during the time of the event.
LIGO-Livingston = 'L1'<br>
LIGO-Hanford = 'H1'<br>
Virgo = 'V1'<br>
KAGRA = 'K1'<br>
GEO600 = 'G1'<br>
For example, let's see which events LIGO-Livingston was active for
Step8: <span style="color
Step9: Importing open LIGO data
We can use GWpy to download a LIGO time series from the GWOSC with fetch_open_data. Let's try to grab some data around a detected event, say GW150914 (the very first direct gravitational wave detection).
<span style="color
Step10: Now that we've identified a time range of interest, we can grab some data. Let's choose the LIGO-Hanford detector (H1).
Step11: Requiring verbose=True gives us details on the data download (and helps with debugging if needed).
The downloaded file is not stored permanently! If you run this cell again it will be downloaded again. However, you can use cache=True to store the file on your computer if you like.
Plot a LIGO time series
We can use GWpy to plot the data we downloaded from the GWOSC by calling the plot() method of our data object. <br>
<span style="color
Step12: This is the real LIGO data (one of two detectors) used to make the first direct detection of GWs. Do you spot the signal yet?
Let's add a title and y-axis labels to make it more clear what we're looking at.
Step13: Nice.
Make a spectral density plot
What does the data containing the signal look like in the frequency domain?
We can then call the .asd() method to calculate the amplitude spectral density and tranform our TimeSeries into a FrequencySeries.
Syntax
Step14: Now we can make a plot of our FrequencySeries with the same .plot() method we used for a TimeSeries.
Step15: <span style="color
Step16: Filter design in the frequency domain
Now that we have a sense of what frequency content is dominating our noise, let's see if we can dig out the basic shape of our signal with a few simple filters.
First, since we know our signal duration is short (on the order of miliseconds), let's zoom way in on the time series and see if we can spot it.
Step17: Not yet... Let's get rid of some of that dominating low frequency noise and see if we can see our signal any better.
Apply a highpass filter
Use the data.highpass method to apply a highpass filter to supress signals below 20 Hz in the above data.
Step18: Still no. What's left? What does our highpassed data look like in the frequency domain?
Make an ASD of the data that has been passed through the highpass filter. Use an FFT length of 5 seconds and an overlap of 2 seconds.
Step19: We've really supressed that low frequency noise! But there's still too much noise at other frequencies.
Apply a bandpass filter
Because we've already identified our signal as a binary black hole system (two equal-ish mass black holes of roughly 30 solar masses each), we know our signal has frequency content (within our detector's sensitive range) between 50 and 250 Hz.
Let's apply a bandpass filter to look for excess power in that critical frequency range.
Use the data.bandpass method to apply a bandpass filter with corner frequencies of 50 and 250 Hz.
Step20: Starting to look promising, but check out that strong unrelated sinusoid. What the heck is that?
To the frequency domain! (Read
Step21: Wow, check out that strong line; this is the 60 Hz AC power line!
Let's get rid of it with a notch filter.
Apply a notch filter
Use the data.notch method to apply a notch filter that will supress the signal at 60 Hz from nearby AC power lines.
Step22: There it is! Nice work! We can put that on a T-shirt.
<span style="color
Step23: Apply and characterize a whitening filter
It's much easier to spot excess power in the data if we can weight it proportionally by the consistent frequency contributions from the noise its embedded in. (Whitening is a critical step for matched filtering and gravitational wave search algorithms that look for coherent excess power across a gravitational wave detector network.)
We can use GWpy to whiten our data for the time of GW150914 with an FFT length of 5 seconds and an overlap of 2 seconds.
Step24: Hmm, that looks like it still has some high frequency noise left.
What does the whitened data look like in the frequency domain?
Step25: Not perfectly "white" yet. How would you improve it?
<span style="color
Step26: Visualize LIGO data with spectrograms
A great way to visualize how the frequency content of our data is changing over time is with whitened or "normalized" spectrograms.
Let's try it for GW150914.
We can apply the spectrogram2() method in GWpy to our whitened data, and set some other formatting variables to make the plot look nice
Step27: There is something there! It looks like it's sweeping up in frequency over time, but it's still hard to make out the details.
The Q transform
To zoom in even further, we can employ a muli-resolution technique called the Q transform.
Step28: Beautiful. Nice work!
<span style="color
Step29: Challenge 2
Do the same thing for GW170817, but this time using data from LIGO-Hanford. Can you estimate the time delay between the two detectors after running this procedure?
</span> | Python Code:
#! python3 -m pip install gwpy
Explanation: Welcome to the LIGO data visualization tutorial!
Installation
Please make sure you have GWpy installed before you begin!
Only execute the below cell if you have not already installed GWpy
End of explanation
from gwosc.datasets import find_datasets
find_datasets()
Explanation: <span style="color:gray">Jess notes: the following was produced using a python3.7 kernel. All code dependencies should be installed via the installation instructions above except for python; careful with your python paths here.</span>
Learning goals
With this tutorial, you will learn how to:
Use basic GWOSC tools to query for LIGO-Virgo observing run times and GW event times
Download public LIGO (and Virgo) data from the GWOSC with GWpy
Plot a LIGO h(t) time series with GWpy
Make a spectral density plot of a time series
Design a filter in the frequency domain
Apply and characterize a whitening filter
Visualize LIGO data with spectrograms
This tutorial borrows from the excellent work of Duncan Macleod, Jonah Kanner, Alex Nitz, and others involved in the 2018 GWOSC webcourse - you can find many more great examples there.
Let's get started!
Using GWOSC tools
Here we'll use tools from the Gravitational Wave Open Science Center (GWOSC), namely the gwosc python module, which we should have already installed via our GWpy install.
First, let's see what's in GWOSC open datasets:
End of explanation
from gwosc.datasets import run_segment
print(run_segment('O1'))
Explanation: Here we see data for:
GWTC-1 confident events: GW150914, GW151012, GW151226, GW170104, GW170608, GW170729, GW170809, GW170814, GW170817, GW170818, and GW170823
GWTC-1 marginal events: 151008, 151012A, 151116, 161202, 161217, 170208, 170219, 170405, 170412, 170423, 170616, 170630, 170705, 170720
Observing runs: Advanced LIGO and Advanced Virgo observing runs O1 and O2 at different sampling rates, as well as past science runs S5 and S6 (pre 2010).
And background data for an event included in the GWTC-1 catalog, but not in the O2 data release: BKGW170608_16KHZ_R1
Observing runs
Knowing this, let's try to query for the start and end of LIGO-Virgo observing runs. Currently, data from the first two observing runs are available via the GWOSC: O1 and O2.
gwosc.datasets.run_segment will return the start and end GPS times for an observing run given a dataset tag. Let's try O1:
End of explanation
print(run_segment('O2'))
Explanation: And O2:
End of explanation
# complete
Explanation: <span style="color:green">
Exercise
There's an error here! Can you find a way to fix the cell above and print the start and end times for the second LIGO-Virgo observing run (O2)?
</span>
End of explanation
from gwosc.datasets import find_datasets
events = find_datasets(type='event')
print(events)
Explanation: Querying for data around detected events
We can also query for event releases (data on the order of minutes around the event time) specifically by using find_datasets and specifying the event type.
End of explanation
from gwosc.datasets import event_gps
GW170817gps = event_gps('GW170817')
print(GW170817gps)
Explanation: Querying event GPS times
If we want to know the GPS time of a particular event, we can grab that with event_gps. Let's try that for the first binary neutron star detection, GW170817.
End of explanation
find_datasets(type='event', detector= # complete
Explanation: Detector tags
We can also filter our results by which gravitational wave detectors were active during the time of the event.
LIGO-Livingston = 'L1'<br>
LIGO-Hanford = 'H1'<br>
Virgo = 'V1'<br>
KAGRA = 'K1'<br>
GEO600 = 'G1'<br>
For example, let's see which events LIGO-Livingston was active for:
End of explanation
# complete
Explanation: <span style="color:green">
Exercises for using the GWOSC tools
How many events were detected during O2?
Which O2 event releases include data for the Virgo detector?
</span>
End of explanation
# Import
from gwpy.timeseries import TimeSeries
# Grab the event gps time with GWOSC tools
GW150914gps = event_gps('GW150914')
print(GW150914gps)
# Set a start and end time around the event time (in units of seconds) to download LIGO data
window = 15
start = GW150914gps - window
end = GW150914gps + window
# Check that this looks sane
print('start time GPS = '+str(start))
print('end time GPS = '+str(end))
Explanation: Importing open LIGO data
We can use GWpy to download a LIGO time series from the GWOSC with fetch_open_data. Let's try to grab some data around a detected event, say GW150914 (the very first direct gravitational wave detection).
<span style="color:gray"> Note: The first time you import gwpy.timeseries, matplotlib may try to import some extra fonts and that can take a couple minutes. </span>
End of explanation
# Grab H1 data around the time of interest from the GWOSC
data = TimeSeries.fetch_open_data('H1', start, end, verbose=True)
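# (Optional, not in the original notebook): pass cache=True so the file is kept
# on disk and not re-downloaded on every run, e.g.
# data = TimeSeries.fetch_open_data('H1', start, end, verbose=True, cache=True)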
print(data)
Explanation: Now that we've identified a time range of interest, we can grab some data. Let's choose the LIGO-Hanford detector (H1).
End of explanation
plot = data.plot()
Explanation: Requiring verbose=True gives us details on the data download (and helps with debugging if needed).
The downloaded file is not stored permanently! If you run this cell again it will be downloaded again. However, you can use cache=True to store the file on your computer if you like.
Plot a LIGO time series
We can use GWpy to plot the data we downloaded from the GWOSC by calling the plot() method of our data object. <br>
<span style="color:gray">Jess note: This may take 2-3 minutes to render the first time.</span>
End of explanation
plot = data.plot(
title='LIGO-Hanford Observatory data for GW150914',
ylabel='Strain amplitude'
)
Explanation: This is the real LIGO data (one of two detectors) used to make the first direct detection of GWs. Do you spot the signal yet?
Let's add a title and y-axis labels to make it more clear what we're looking at.
End of explanation
gw140914asd = data.asd(5, 2)
print(gw140914asd)
Explanation: Nice.
Make a spectral density plot
What does the data containing the signal look like in the frequency domain?
We can call the .asd() method to calculate the amplitude spectral density and transform our TimeSeries into a FrequencySeries.
Syntax:
asd(FFT_length_in_seconds, FFT_overlap_in_seconds, default_time_window='hann', default_FFT_method='welch')
End of explanation
plot = gw140914asd.plot()
ax = plot.gca()
ax.set_xlim(5, 2000)
Explanation: Now we can make a plot of our FrequencySeries with the same .plot() method we used for a TimeSeries.
End of explanation
# complete
Explanation: <span style="color:green">
Exercise
How does this plot change with different FFT lengths and overlap? Try an FFT length of 15 seconds with 7 seconds of overlap and an FFT length of 2 with 1 second of overlap. What do you notice? Feel free to try other combinations as well.
</span>
End of explanation
## Make a new plot, this time zoomed in around the time of GW150914
zoomplot = data.plot(
title='LIGO-Hanford Observatory data for GW150914',
ylabel='Strain amplitude'
)
ax = zoomplot.gca()
ax.set_xlim( # select a window that starts 200 ms before the event and ends 200 ms afer
ax.set_ylabel='Strain amplitude'
Explanation: Filter design in the frequency domain
Now that we have a sense of what frequency content is dominating our noise, let's see if we can dig out the basic shape of our signal with a few simple filters.
First, since we know our signal duration is short (on the order of milliseconds), let's zoom way in on the time series and see if we can spot it.
End of explanation
## apply a highpass filter with a corner frequency of 20 Hz
gw140914hp = data.highpass( # frequency for the highpass filter
## make a plot of the resulting time series
tsplot = gw140914hp.plot(
title='LIGO-Hanford Observatory data for GW150914',
ylabel='Strain amplitude'
)
ax = tsplot.gca()
ax.set_xlim(GW150914gps-0.2, GW150914gps+0.2)
Explanation: Not yet... Let's get rid of some of that dominating low frequency noise and see if we can see our signal any better.
Apply a highpass filter
Use the data.highpass method to apply a highpass filter to suppress signals below 20 Hz in the above data.
End of explanation
gw140914hpasd = gw140914hp.asd( # complete
## make a plot of that ASD!
asdplot = gw140914hpasd.plot()
ax = asdplot.gca()
ax.set_xlim(5, 2000)
Explanation: Still no. What's left? What does our highpassed data look like in the frequency domain?
Make an ASD of the data that has been passed through the highpass filter. Use an FFT length of 5 seconds and an overlap of 2 seconds.
End of explanation
## apply a bandpass filter to the highpassed data with a corner frequencies of 50 and 250 Hz
gw140914bp = gw140914hp.bandpass( # complete
## make a plot of the resulting time series
tsplot = gw140914bp.plot(
title='LIGO-Hanford Observatory data for GW150914',
ylabel='Strain amplitude'
)
ax = tsplot.gca()
ax.set_xlim(GW150914gps-0.2, GW150914gps+0.2)
ax.set_ylim(-0.2e-20, 0.2e-20)
Explanation: We've really suppressed that low frequency noise! But there's still too much noise at other frequencies.
Apply a bandpass filter
Because we've already identified our signal as a binary black hole system (two equal-ish mass black holes of roughly 30 solar masses each), we know our signal has frequency content (within our detector's sensitive range) between 50 and 250 Hz.
Let's apply a bandpass filter to look for excess power in that critical frequency range.
Use the data.bandpass method to apply a bandpass filter with corner frequencies of 50 and 250 Hz.
End of explanation
gw140914bpasd = gw140914bp.asd( # complete
## make a plot of that ASD
asdplot = gw140914bpasd.plot()
ax = asdplot.gca()
ax.set_xlim(5, 2000)
Explanation: Starting to look promising, but check out that strong unrelated sinusoid. What the heck is that?
To the frequency domain! (Read: let's make a spectrum and check it out.)
Make an ASD of the data that has been passed through the highpass filter. Use an FFT length of 5 seconds and an overlap of 2 seconds.
End of explanation
gw140914n = gw140914bp.notch( # complete
## make a plot of the resulting time series
tsplot = gw140914n.plot(
title='LIGO-Hanford Observatory data for GW150914',
ylabel='Strain amplitude'
)
ax = tsplot.gca()
ax.set_xlim(GW150914gps-0.2, GW150914gps+0.2)
ax.set_ylim(-0.1e-20, 0.1e-20)
Explanation: Wow, check out that strong line; this is the 60 Hz AC power line!
Let's get rid of it with a notch filter.
Apply a notch filter
Use the data.notch method to apply a notch filter that will supress the signal at 60 Hz from nearby AC power lines.
End of explanation
# complete
Explanation: There it is! Nice work! We can put that on a T-shirt.
<span style="color:green">
Exercise
Repeat this for LIGO-Livingston detector data around the time of GW150914. Design your own set of simple filters for LIGO-Livingston, and plot the filtered LIGO-Livingston and LIGO-Hanford time series data overlaid. Can you estimate the difference in signal arrival times between detectors?
</span>
<span style="color:gray">Note: the LIGO-Virgo analyses always use whitening; not the procedure above.</span>
End of explanation
whitened_gw150914 = data.whiten(5,2)
plot = whitened_gw150914.plot(
title='Whitened LIGO Hanford Observatory data for GW150914',
ylabel='Strain amplitude',
xlim=(GW150914gps-0.2, GW150914gps+0.2)
)
Explanation: Apply and characterize a whitening filter
It's much easier to spot excess power in the data if we can weight it proportionally by the consistent frequency contributions from the noise it's embedded in. (Whitening is a critical step for matched filtering and gravitational wave search algorithms that look for coherent excess power across a gravitational wave detector network.)
We can use GWpy to whiten our data for the time of GW150914 with an FFT length of 5 seconds and an overlap of 2 seconds.
End of explanation
whitened_gw150914_asd = whitened_gw150914.asd()
plot = whitened_gw150914_asd.plot()
ax = plot.gca()
ax.set_xlim(5, 2000)
Explanation: Hmm, that looks like it still has some high frequency noise left.
What does the whitened data look like in the frequency domain?
End of explanation
# complete
Explanation: Not perfectly "white" yet. How would you improve it?
<span style="color:green">
Exercise
Compare the ASD of your whitened LIGO-Hanford data to the ASD of your data with the simple filter set (highpass, bandpass, notch) for LIGO-Hanford applied. Which frequencies still stand out in which?
</span>
End of explanation
specgram = whitened_gw150914.spectrogram2(fftlength=1/16., overlap=15/256.) ** (1/2.)
plot = specgram.plot(norm='log', cmap='viridis', yscale='log')
ax = plot.gca()
ax.set_title('LIGO-Hanford strain data around GW150914')
ax.set_xlim(GW150914gps-0.5, GW150914gps+0.5)
ax.set_ylim(15,1000)
ax.colorbar(label=r'Strain ASD [1/$\sqrt{\mathrm{Hz}}$]')
Explanation: Visualize LIGO data with spectrograms
A great way to visualize how the frequency content of our data is changing over time is with whitened or "normalized" spectrograms.
Let's try it for GW150914.
We can apply the spectrogram2() method in GWpy to our whitened data, and set some other formatting variables to make the plot look nice:
End of explanation
qspecgram = data.q_transform(outseg=(GW150914gps-0.2, GW150914gps+0.2))
plot = qspecgram.plot()
ax = plot.gca()
ax.set_xscale('seconds')
ax.set_yscale('log')
ax.set_ylim(20, 500)
ax.set_ylabel('Frequency [Hz]')
ax.grid(True, axis='y', which='both')
ax.colorbar(cmap='viridis', label='Normalized energy')
Explanation: There is something there! It looks like it's sweeping up in frequency over time, but it's still hard to make out the details.
The Q transform
To zoom in even further, we can employ a multi-resolution technique called the Q transform.
End of explanation
# complete
Explanation: Beautiful. Nice work!
<span style="color:green">
Challenge
Generate a frequency vs. time plot of LIGO data around the binary neutron star event (GW170817) where you can clearly see the signal track. (Hints: consider using LIGO-Livingston data, a Q transform, and note that binary neutron star signals last for 10s of seconds!)
</span>
End of explanation
# complete
Explanation: Challenge 2
Do the same thing for GW170817, but this time using data from LIGO-Hanford. Can you estimate the time delay between the two detectors after running this procedure?
</span>
End of explanation |
4,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Like a Machine - Chapter 5
Logistic Regression
ACKNOWLEDGEMENT
A lot of the code in this notebook is from John D. Wittenauer's notebooks that cover the exercises in Andrew Ng's course on Machine Learning on Coursera. This is mostly Wittenauer's and Ng's work and acknowledged as such. I've also used some code from Sebastian Raschka's book Python Machine Learning.
The Problem
You're a student applying to an elite university. To be admitted, the university requires you to take 2 entrance exams. The university also publishes a dataset of the scores of students who took both exams and for each student whether he or she was accpeted. Knowing your scores on the two entrance exams, what is your probability of being accepted to this elite university?
Exercise 5-1
Create a list of 5 problems that fit the same structure as the problem above. For example, you could be a physician, a recruiter, head of sales, ...
Load the Data
Step1: Visualize the Data
Step2: Right away we see that this doesn't even look like a regular regression problem -- there are two classes -- Admitted and Not Admitted -- and we'd like to separate them. We need a technique for learning what class a student falls into based on his or her exam scores.
We'll start by trying to draw a line through the plot above that optimally separates the candidates who have been admitted from those who have not been admitted. We'll think like a machine to find the parameter values that define this line. The line is called a decision boundary -- depending on which side of the line a point falls, its fate is decided.
Step3: Because the output of our classification is not a real number (unlike in regression where it can take on a value within a certain range of numbers), we need a function to take exam scores (or similar inputs) and produce two categories of output (say 0 for Not Admitted, and 1 for Admitted). The way to do this is to use the sigmoid function.
What the sigmoid function does is take an expression like the one we're familiar with below
$$h_{\theta}(x) = \theta_{0} x_{0} + \theta_{1} x_{1} + \theta_{2} x_{2}$$
and convert it to the following expression
Step4: The beauty of the sigmoid transoformation is it gives us a way to take a variable with continuous values and transform it into a variable with just two values -- 0 or 1. The sigmoid can never be less than zero; nor can it be greater than 1. When $h_{\theta}(x)$ is around 6 or greater, the sigmoid is, for all practical purposes, equal to 1. Similarly, when $h_{\theta}(x)$ is around -6 or smaller, the sigmoid is for all practical purposes 0.
Many other functions can do this kind of 1 or 0 transformation -- a simple step function will do. The advantage of using a sigmoid is the values can be read off as probabilities.
Steps 1 and 2
Step5: Steps 3 and 4
Step6: Step 5
Step7: Exercise 5-2
Can you explain what's happening in the plot above?
Step8: The cost function is designed to penalize misclassification. For each set of parameter values, there will be a cost value over the entire data set. The optimal parameter values are the ones that minimize this cost function. Remember, to think like a machine is to take a problem like this, turn it into a giant optimization problem, and then devise and implement a technique for finding the optimal paramter values (afterall, you can't try every possible combination of $\theta$ values, because that will take longer than the time for the heat death of the universe!).
Of course our technique for finding the right paramater values is going to be gradient descent. But before we get to that, let's implement the cost function.
Step9: Steps 6 and 7
Step10: We see that the gradient descent is sensitive to both alpha and the number of iterations. Better to implement this using an optimization package that is written by experts. The concept is the same the numerical techniques used are super advanced. So let's take advantage of that.
Finding the Optimal Parameter Values Using Scikit-Learn
Step11: Step 8 | Python Code:
# Use the functions from another notebook in this notebook
%run SharedFunctions.ipynb
# Import our usual libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
path = os.getcwd() + '/Data/ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()
Explanation: Think Like a Machine - Chapter 5
Logistic Regression
ACKNOWLEDGEMENT
A lot of the code in this notebook is from John D. Wittenauer's notebooks that cover the exercises in Andrew Ng's course on Machine Learning on Coursera. This is mostly Wittenauer's and Ng's work and acknowledged as such. I've also used some code from Sebastian Raschka's book Python Machine Learning.
The Problem
You're a student applying to an elite university. To be admitted, the university requires you to take 2 entrance exams. The university also publishes a dataset of the scores of students who took both exams and for each student whether he or she was accepted. Knowing your scores on the two entrance exams, what is your probability of being accepted to this elite university?
Exercise 5-1
Create a list of 5 problems that fit the same structure as the problem above. For example, you could be a physician, a recruiter, head of sales, ...
Load the Data
End of explanation
positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=30, c='b', marker='+', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=30, c='r', marker='s', label='Not Admitted')
ax.legend(loc='lower right')
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.title('Admission to University Based on Exam Scores')
Explanation: Visualize the Data
End of explanation
# A few examples of decision boundaries
#plot_decision_regions(data.iloc[:, 1:3].values, data.iloc[:,3].values, 'ppn')
## TO DO
Explanation: Right away we see that this doesn't even look like a regular regression problem -- there are two classes -- Admitted and Not Admitted -- and we'd like to separate them. We need a technique for learning what class a student falls into based on his or her exam scores.
We'll start by trying to draw a line through the plot above that optimally separates the candidates who have been admitted from those who have not been admitted. We'll think like a machine to find the parameter values that define this line. The line is called a decision boundary -- depending on which side of the line a point falls, its fate is decided.
End of explanation
# Define the sigmoid function or transformation
# NOTE: ALSO PUT INTO THE SharedFunctions notebook
def sigmoid(z):
return 1 / (1 + np.exp(-z))
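# Quick numeric check (added example, not in the original): sigmoid(6) is ~0.998 and
# sigmoid(-6) is ~0.002, so beyond |6| the output is effectively 1 or 0.
print sigmoid(6), sigmoid(-6)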
# Plot the sigmoid function
# Generate the values to be plotted
x_vals = np.linspace(-10,10,100)
y_vals = [sigmoid(x) for x in x_vals]
# Plot the values
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(x_vals, y_vals, 'blue')
# Draw some constant lines to aid visualization
plt.axvline(x=0, color='black')
plt.axhline(y=0.5, color='black')
plt.yticks(np.arange(0,1.1,0.1))
plt.xticks(np.arange(-10,11,1))
plt.xlabel(r'$h_{\theta}(x)$', fontsize=15)
plt.ylabel(r'$g(h_{\theta}(x))$', fontsize=15)
plt.title('The Sigmoid Transformation')
ax.plot
Explanation: Because the output of our classification is not a real number (unlike in regression where it can take on a value within a certain range of numbers), we need a function to take exam scores (or similar inputs) and produce two categories of output (say 0 for Not Admitted, and 1 for Admitted). The way to do this is to use the sigmoid function.
What the sigmoid function does is take an expression like the one we're familiar with below
$$h_{\theta}(x) = \theta_{0} x_{0} + \theta_{1} x_{1} + \theta_{2} x_{2}$$
and convert it to the following expression:
$$g(h_{\theta}(x)) = \frac{1}{1 + e^{-h_{\theta}(x)}}$$
where $e$ is the natural log. So what does the sigmoid function or sigmoid transformation look like?
End of explanation
# add a ones column to the inputs - this makes the matrix multiplication work out easier
data.insert(0, 'Ones', 1)
# set X (training data) and y (target variable)
cols = data.shape[1]
X = data.iloc[:,0:cols-1]
y = data.iloc[:,cols-1:cols]
# convert to matrices
X = np.matrix(X.values)
y = np.matrix(y.values)
X.shape
y.shape
X[0:5,:]
Explanation: The beauty of the sigmoid transformation is that it gives us a way to take a variable with continuous values and transform it into a variable with just two values -- 0 or 1. The sigmoid can never be less than zero; nor can it be greater than 1. When $h_{\theta}(x)$ is around 6 or greater, the sigmoid is, for all practical purposes, equal to 1. Similarly, when $h_{\theta}(x)$ is around -6 or smaller, the sigmoid is for all practical purposes 0.
Many other functions can do this kind of 1 or 0 transformation -- a simple step function will do. The advantage of using a sigmoid is the values can be read off as probabilities.
Steps 1 and 2: Define the Inputs and the Output
How do we find the line that separates the data? Let's begin by defining the inputs.
End of explanation
# theta is a column vector
theta = np.matrix(np.zeros(3)).reshape(3,1)
theta
Explanation: Steps 3 and 4: Define the Model and the Parameters
The model we'll continue to use is based on the familar expression for $h_{\theta}(x)$ that we know, but now modified via the sigmoid function. So we have
$$g_{\theta}(x) = \frac{1}{1 + e^{-(\theta_{0}x_{0} + \theta_{1}x_{1} + \theta_{2}x_{2})}}$$
where the expression in brackets in the power of $e$ is just our old $h_{\theta}(x)$.
The parameters of this model are the 3 values of $\theta$, namely, $\theta_{0}$, $\theta_{1}$, and $\theta_{2}$.
End of explanation
# Visualize the cost function when y = 1 and y = 0
x_vals = np.linspace(0,1,100)
y_1_vals = -np.log(x_vals)
y_0_vals = -np.log(1 - x_vals)
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(x_vals, y_1_vals, color='blue', linestyle='solid', label='y = 1')
ax.plot(x_vals, y_0_vals, color='yellow', linestyle='solid', label='y = 0')
plt.legend(loc='upper center')
plt.xlabel(r'$g_{\theta}(x)$', fontsize=15)
plt.ylabel(r'$J(\theta)$', fontsize=15)
ax.plot
Explanation: Step 5: Define the Cost of Getting it Wrong
The cost of classfying things wrong is defined as
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} [-y^{(i)} log(g_{\theta}(x^{(i)})) - (1-y^{(i)}) log(1-g_{\theta}(x^{(i)}))]$$
As always, the cost of getting it wrong is defined over the entire dataset. There are $m$ rows of data and each $x^{(i)}$ consists of the two scores in a row of the dataset.
Let's visualize this cost function. [[VISUALIZE COST FUNCTION]]
End of explanation
y_1_vals
y_0_vals
Explanation: Exercise 5-2
Can you explain what's happening in the plot above?
End of explanation
def logisticCost(X, y, theta):
# Inputs must be matrices
# X = m x n (including bias feature of 1 for x0)
# y = m x 1
# theta = n x 1
# Number of data points in the dataset
m = len(X)
cost_first_term = np.multiply(-y,np.log(sigmoid(X * theta)))
cost_second_term = np.multiply(1-y, np.log(1 - sigmoid(X * theta)))
non_normal_term = np.sum(np.subtract(cost_first_term, cost_second_term))
cost = non_normal_term / m
return cost
# try it out for theta = zeros
logisticCost(X,y,theta)
# try it out for theta = 0.03
logisticCost(X, y, np.matrix([[0.03],[0.03],[0.03]]))
Explanation: The cost function is designed to penalize misclassification. For each set of parameter values, there will be a cost value over the entire data set. The optimal parameter values are the ones that minimize this cost function. Remember, to think like a machine is to take a problem like this, turn it into a giant optimization problem, and then devise and implement a technique for finding the optimal parameter values (after all, you can't try every possible combination of $\theta$ values, because that will take longer than the time for the heat death of the universe!).
Of course our technique for finding the right parameter values is going to be gradient descent. But before we get to that, let's implement the cost function.
End of explanation
# BEGIN scratch work
# Doing some scratch work to make sure the matrix multiplication works out
# matrix multiplication is great for computational efficiency
test1 = np.matrix([[1],[2],[3],[4]])
test1
test2 = np.matrix([[1,2,3,4],[1,2,3,4], [1,2,3,4], [1,2,3,4]])
test2
test2 * test1
(test1.T * test2).T
# Right, that's the result we need
# END scratch work
# Gradient Descent for Logistic Regression/Classification
def logisticGradDescent(X, y, theta, alpha, iters):
# X is a m x n matrix (including a first column of 1s)
# y is a m x 1 matrix
# theta is an n x 1 matrix
# alpha is the learning rate
# iters is the number of iterations of gradient descent
# Keep track of the evolution of theta and cost values
theta_agg = theta # initial value of the aggregated array
cost_vals = logisticCost(X,y,theta) # initial value of cost for the initial theta value
m = len(X)
# Initialize theta for the iter loop below
theta_val = theta
# Notice there's only 1 loop -- the one over iters -- in this implementation
for i in range(iters):
error = np.subtract(sigmoid(X * theta), y)
sum_error = (error.T * X).T # using the test1, test2 example above
# no need to explicitly sum the error because the matrix multiplication does it automatically
# Multiply by alpha and divide by m to normalize the sum_error
norm_sum_error = np.divide(np.multiply(sum_error, alpha),m)
# norm_sum_error is an n x 1 matrix containing the correction values for each theta parameter
# Update all the thetas simultaneously
theta_val = np.subtract(theta_val, norm_sum_error)
# keep track of the latest theta val
theta_agg = np.c_[theta_agg, theta_val]
# Calculate the cost for these parameter values
cost_vals = np.c_[cost_vals, logisticCost(X, y, theta_val)]
return theta_agg, cost_vals
# Test it out
theta_out, cost_out = logisticGradDescent(X, y, theta, 0.00001, 100)
#theta_out
#cost_out
cost_out.T.shape
np.linspace(1,101,101).shape
plt.plot(np.linspace(1,101,101), cost_out.T)
plt.title(r'Cost vs. Iterations ($\alpha$ fixed)')
plt.xlabel('Iterations')
plt.ylabel('Cost of Being Wrong')
Explanation: Steps 6 and 7: Pick an Iterative Method to Minimize the Cost of Getting it Wrong and Implement It
Once again, the method that will "learn" the optimal values for $\theta$ is gradient descent. For logistic regression we have to change our existing gradientDescent function to account for the sigmoid transformation. Otherwise, the expression for gradient descent looks the same as it did before, namely:
$$\frac{\partial J(\theta)}{\partial \theta_{j}} = \frac{1}{m}\sum_{i=1}^{m}(g_{\theta}(x^{(i)}) - y^{(i)}) x_{j}^{(i)}$$
Let's implement this new gradient descent function for logistic regression now.
End of explanation
from sklearn.linear_model import LogisticRegression
# Solvers that seem to work well are 'liblinear' and 'newton-cg'
lr = LogisticRegression(C=100.0, random_state=0, solver='liblinear', verbose=2)
X_input = data.iloc[:, 1:3].values
y_input = data.iloc[:, 3].values
y_input.shape
lr.fit(X_input, y_input)
Explanation: We see that the gradient descent is sensitive to both alpha and the number of iterations. Better to implement this using an optimization package that is written by experts. The concept is the same; the numerical techniques used are super advanced. So let's take advantage of that.
Finding the Optimal Parameter Values Using Scikit-Learn
End of explanation
# Probability of [rejection, admission] for a single set of exam scores
lr.predict_proba(np.array([45, 85]).reshape(1,-1))
# Rejected or Admitted?
lr.predict(np.array([45, 45]).reshape(1,-1))
# Rejected or Admitted for the entire data set
y_pred = lr.predict(X_input)
print y_pred
# How do the predictions compare with the actual labels on the data set?
y_input != y_pred
# How many inputs are misclassified?
print('Misclassified examples: %d' % (y_input != y_pred).sum())
# Accuracy of the classifier
from sklearn.metrics import accuracy_score
accuracy_score(y_input, y_pred)
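# (Optional extra, not in the original): a confusion matrix shows where the
# misclassifications fall, which accuracy alone hides.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_input, y_pred))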
# From the shared functions
plot_decision_regions(X_input, y_input, lr)
Explanation: Step 8: Results
End of explanation |
4,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Introduction to Funf
Your completion of the notebook exercises will be graded based on your ability to do the following
Step1: 1. Friends and Family
Step2: Review the unique types of records present in the dataset.
Step3: Next, create the weekly bins for the records of the types that you are interested in, and display the weekly summary. Missed calls have been excluded for the purpose of this exercise.
Step4: The calls per week can now be visualized using a barplot.
Step5: 1.1.2 Analyze SMS data
The exercise in the previous section will now be repeated with the SMS records for the same user.
Step6: Review the unique types of records present in the dataset.
Step7: Next, create the weekly bins for the records of the types that you are interested in again (as was done in the previous section), and display the weekly summary.
Step8: The SMSs per week can now be visualized using a barplot.
Step9: Note
Step10: In the output above, the GPS coordinates are visible, and the dataset is very similar in structure to the "Friends and Family" dataset. In order to be able to work with this data as a chronological location trace, it needs to be indexed with sorted timestamps (in human-readable ISO format). The next step is to review the summary statistics of the re-indexed DataFrame.
Step11: Next, the step to add a column with the week will be repeated, and the DataFrame will be grouped by this column.
Step12: Review the number of observations per week by grouping the data by the column added in the previous cell.
Step13: Next, the data will be plotted on a map. Some additional steps to prepare the data are required before this can be done.
The coordinates from the location DataFrame need to be extracted into a simpler format; one without indexes, column names, and unnecessary columns. This example will work on the weekly groupings and use Pandas' DataFrame df.as_matrix() method, which returns a raw NumPy matrix.
Step14: Note
Step15: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Copy the code in the previous cell, and change Week 15 to Week 20 to produce the mobility trace for Week 20. You need to replace "map_week15" with "map_week20", and retrieve Element 20 from the variable "weekly_travels".
Optional
Step16: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete | Python Code:
import pandas as pd
import numpy as np
import folium
import matplotlib.pylab as plt
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10, 8)
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Introduction to Funf
Your completion of the notebook exercises will be graded based on your ability to do the following:
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data?
Notebook objectives
By the end of this notebook, you will be expected to:
Understand basic analysis of call and SMS log data; and
Perform mobility tracing using call record data.
List of exercises
Exercise 1: Visualizing mobility trace patterns.
Exercise 2: Analysis and interpretation of behavior from mobility trace patterns (short range).
Exercise 3: Analysis and interpretation of behavior from mobility trace patterns (long range).
Notebook introduction
Data from the following two publicly-available data sets will be used:
1. Friends and Family by MIT Media Lab; and
2. Student-Life by Dartmouth College.
The "Friends and Family" data set was generated using Funf. The "Student-Life" data set contains similar data and, while not created using it, Funf is quoted as one of its main sources of inspiration. You can read more about the data sets in the provided links.
Note:
Funf was introduced to you in the video content of this module. You are welcome to review the code on GitHub, and download and create your own application. Should you wish to do so, it is a good idea to start with this Wi-Fi Scanner Tutorial.
In the exercises that follow, you will familiarize yourself with some of the key features of the two data sets. The first exercise will focus on social features: call logs and SMS. In the second exercise, you will visualize the mobility trace for a user over a week. In the third exercise, you will extend the time period to a longer term.
There are numerous other features that can be explored in the data sets. Many of them are labeled as “technical”, as a certain degree of data wrangling is required before they can be used to analyze networks of social interactions.
The features demonstrated and contained in the datasets do not form a comprehensive list of all the possible sensor datasets. Additional options include accelerometer data used by fitness trackers, “screen on" status, and many others. When analyzing populations you will most likely start with the basics. You will then expand on these basics by merging additional features from other data sets (where available) that are potentially useful in addressing the particular problem that you are interested in.
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
Load libraries and set options
As usual, your Python toolbox consists of Pandas and NumPy. Folium is also added, as maps will be added to this notebook at a later stage.
End of explanation
# Load the dataset.
calls = pd.read_csv('../data/CallLog.csv')
# Parse datetimes using the Pandas function.
calls['local_time'] = pd.to_datetime(calls['local_time'])
# Index the dataset by the datetime column.
calls.set_index('local_time', inplace=True)
# Sort the dataset in place.
calls.sort_index(inplace=True)
# Set the user to be evaluated.
example_user = 'fa10-01-08'
# Create a subset of the data where participantID.A
# is the selected user using the copy method.
call_example = calls[calls['participantID.A'] == example_user].copy()
# Add a column where the week is derived from the datetime column.
call_example['week'] = call_example.index.map(lambda observation_timestamp:
observation_timestamp.week)
# Display the head of the new dataset.
call_example.head(3)
Explanation: 1. Friends and Family: Call logs and SMS
You will run sample scripts to examine the datasets and visualize the timelines of various sensor readings.
1.1 Analyze the call log of a single user
The "Friends and Family" dataset contains all users' calls in one file. One participant, "fa10-01-08", has been chosen as an example.
1.1.1 Analyze call log data
Data preparation
To start your analysis, you will:
- Load the dataset;
- Parse the datetime, and sort the dataset by this column;
- Filter the dataset to review a single user; and
- Examine the top rows in your new dataset.
End of explanation
# Display the unique records for column called "type".
call_example.type.unique()
Explanation: Review the unique types of records present in the dataset.
End of explanation
# Create a new variable, call_groups, containing call_example grouped by week.
call_groups = call_example.groupby(['week'], sort=False)
# Create a pandas dataframe.
call_weeks = pd.DataFrame(columns=['week', 'outgoing', 'incoming', 'total'])
# Set the index for the new dataframe.
call_weeks.set_index('week', inplace=True)
# Next we create a summary table based on the observed types.
for week, group in call_groups:
inc = 0
out = 0
try:
inc += pd.value_counts(group.type)['incoming+']
except KeyError:
pass
try:
inc += pd.value_counts(group.type)['incoming']
except KeyError:
pass
try:
out += pd.value_counts(group.type)['outgoing+']
except KeyError:
pass
try:
out += pd.value_counts(group.type)['outgoing']
except KeyError:
pass
call_weeks.loc[week] = [out, inc, out+inc]
# Display the head of our new dataset.
call_weeks.head(3)
Explanation: Next, create the weekly bins for the records of the types that you are interested in, and display the weekly summary. Missed calls have been excluded for the purpose of this exercise.
End of explanation
# Set plotting options
fig, ax = plt.subplots()
plt.tight_layout()
# Add outgoing calls to our plot
plt.bar(call_weeks.reset_index()['week'], call_weeks['outgoing'],
color='b', label='outgoing')
# Add incoming calls to our plot
plt.bar(call_weeks.reset_index()['week'], call_weeks['incoming'],
color='r', bottom=call_weeks['outgoing'], label='incoming')
# Plot formatting options
ax.set_xlabel("Week Number", fontsize=14)
ax.set_ylabel("Number of Calls", fontsize=14)
ax.set_title("User's calls per week", fontsize=16)
plt.legend()
Explanation: The calls per week can now be visualized using a barplot.
End of explanation
# Load the dataset.
sms = pd.read_csv('../data/SMSLog.csv')
# Parse datetimes using the Pandas function.
sms['local_time'] = pd.to_datetime(sms['local_time'] )
# Index the dataset by the datetime column.
sms.set_index('local_time', inplace=True)
# Sort the dataset in place.
sms.sort_index(inplace=True)
# We have set the user to be evaluated in the call logs section and will
# reference the variable here. Create a subset of the data where
# participantID.A is the selected user using the copy method.
sms_example = sms[sms['participantID.A'] == example_user].copy()
# Add a column where the week is derived from the datetime column.
sms_example['week'] = sms_example.index.map(lambda observation_timestamp:
observation_timestamp.week)
# Display the head of the new dataset.
sms_example.head(3)
Explanation: 1.1.2 Analyze SMS data
The exercise in the previous section will now be repeated with the SMS records for the same user.
End of explanation
sms_example.type.unique()
Explanation: Review the unique types of records present in the dataset.
End of explanation
# Create a new variable, sms_groups, containing call_example grouped by week.
sms_groups = sms_example.groupby(['week'], sort=False)
# Create a pandas dataframe.
sms_weeks = pd.DataFrame(columns=['week', 'outgoing', 'incoming', 'total'])
# Set the index for the new dataframe.
sms_weeks.set_index('week', inplace=True)
# Next we create a summary table based on the observed types.
for week, group in sms_groups:
try:
inc = pd.value_counts(group.type)['incoming']
except KeyError:
inc = 0
try:
out = pd.value_counts(group.type)['outgoing']
except KeyError:
out = 0
sms_weeks.loc[week] = [out, inc, out+inc]
# Display the head of our new dataset.
sms_weeks.head(3)
Explanation: Next, create the weekly bins for the records of the types that you are interested in again (as was done in the previous section), and display the weekly summary.
End of explanation
# Set plotting options
fig, ax = plt.subplots()
plt.tight_layout()
# Add outgoing sms to our plot
plt.bar(sms_weeks.reset_index()['week'], sms_weeks['outgoing'],
color='b', label='outgoing')
# Add incoming sms to our plot
plt.bar(sms_weeks.reset_index()['week'], sms_weeks['incoming'],
color='r', bottom=sms_weeks['outgoing'], label='incoming')
# Plot formatting options
ax.set_xlabel("Week Number", fontsize=14)
ax.set_ylabel("Number of SMS", fontsize=14)
ax.set_title("User's SMS per week", fontsize=16)
plt.legend()
Explanation: The SMSs per week can now be visualized using a barplot.
End of explanation
# Import the dataset and display the head.
loc = pd.read_csv('../data/dartmouth/location/gps_u31.csv')
loc.head(3)
Explanation: Note:
You can select other users, and re-execute the cells above for both call and SMS logs to test your intuition about the differences in behaviour of students, should you wish to do so. This activity will not be graded.
2. Dartmouth: Location history example
You will run sample scripts to examine the dataset and visualize the timeline of the location data. The Dartmouth dataset has been selected for this example because the locations in the "Friends and Family" dataset are encrypted and not suitable for use in this visual exercise.
2.1 Analyze the location of a single user
The "Student-Life" data set contains separate files for each of the users. User 31 has been selected for this example.
Data preparation will need to be completed before your analysis can start. You need to:
- Load the dataset;
- Parse the datetime; and,
- Sort it by this column.
The dataset will then need to be filtered to review a single user.
End of explanation
# Parse the dates.
loc['time'] = pd.to_datetime(loc['time'], unit='s')
# Set and reindex.
loc.set_index('time', inplace=True)
loc.sort_index(inplace=True)
# Display the head.
loc.head(3)
# Retrieve the start and end dates for the dataset and print the output.
start = pd.to_datetime(loc.index, unit='s').min()
end = pd.to_datetime(loc.index, unit='s').max()
print ("Data covers {} between {} and {}".format(end - start, start, end))
# Calculate the median interval between observations and print the output.
median_interval = pd.Series(pd.to_datetime(loc.index,
unit='s')).diff().median().seconds / 60
print ("It has {} datapoints sampled with median interval of {} minutes."
.format(len(loc), median_interval))
Explanation: In the output above, the GPS coordinates are visible, and the dataset is very similar in structure to the "Friends and Family" dataset. In order to be able to work with this data as a chronological location trace, it needs to be indexed with sorted timestamps (in human-readable ISO format). The next step is to review the summary statistics of the re-indexed DataFrame.
End of explanation
# Add a column containing the week.
loc['week'] = loc.index.map(lambda observation_timestamp:
observation_timestamp.week)
loc.head(3)
Explanation: Next, the step to add a column with the week will be repeated, and the DataFrame will be grouped by this column.
End of explanation
# Group by week and review the output.
week_gr = loc.groupby('week', axis=0)
pd.DataFrame(week_gr.size(), columns=['# of observations'])
Explanation: Review the number of observations per week by grouping the data by the column added in the previous cell.
End of explanation
weekly_travels = {}
for week, points in week_gr:
weekly_travels[week] = points[['latitude', 'longitude']].as_matrix()
Explanation: Next, the data will be plotted on a map. Some additional steps to prepare the data are required before this can be done.
The coordinates from the location DataFrame need to be extracted into a simpler format; one without indexes, column names, and unnecessary columns. This example will work on the weekly groupings and use Pandas' DataFrame df.as_matrix() method, which returns a raw NumPy matrix.
End of explanation
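A quick practical note (not part of the original notebook): df.as_matrix() was deprecated and later removed in newer pandas releases, so on a more recent environment the same extraction can be written with .to_numpy():
weekly_travels = {}
for week, points in week_gr:
    # Equivalent extraction on newer pandas, where as_matrix() is no longer available.
    weekly_travels[week] = points[['latitude', 'longitude']].to_numpy()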
# Set the center of the map and the zoom level.
map_week15 = folium.Map(location=[43.706607,-72.287041], zoom_start=11)
# Plot the locations of observations.
folium.PolyLine(weekly_travels[15], color='blue', weight=3,
opacity=0.5).add_to(map_week15)
map_week15
Explanation: Note:
The Python visualization library, Folium, was introduced in an earlier notebook. However, it is good to know that the center location and starting zoom level are options that you will need to manually set. In many cases, your analysis will be centered around a known coordinate, in which case, you can manually update the location. In other cases, you will need to calculate the position based on your available data.
Now you can plot the data. The following example looks at a specific week.
End of explanation
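If a suitable centre is not known in advance, one option (a small sketch, not part of the original notebook) is to centre the map on the mean of the observed coordinates:
# Centre the map on the average observed position instead of a hard-coded coordinate.
center = [loc['latitude'].mean(), loc['longitude'].mean()]
map_centered = folium.Map(location=center, zoom_start=11)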
# Your code here
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Copy the code in the previous cell, and change Week 15 to Week 20 to produce the mobility trace for Week 20. You need to replace "map_week15" with "map_week20", and retrieve Element 20 from the variable "weekly_travels".
Optional:
If you want to, you can attempt to recenter the map, and specify a different zoom level to produce a map that is better suited to the visualization. (The answer is demonstrated in the mobility trace for all of the user's data further down in this notebook.)
End of explanation
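One possible version of the exercise cell described above (a sketch; it reuses the Week 15 centre and zoom level, and assumes Week 20 is present in weekly_travels):
map_week20 = folium.Map(location=[43.706607, -72.287041], zoom_start=11)
folium.PolyLine(weekly_travels[20], color='blue', weight=3,
                opacity=0.5).add_to(map_week20)
map_week20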
# Retrieve all locations.
all_points = loc[['latitude', 'longitude']].as_matrix()
map_alltime = folium.Map(location=[42.9297,-71.4352], zoom_start=8)
folium.PolyLine(all_points, color='blue', weight=2,
                opacity=0.5).add_to(map_alltime)
map_alltime
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
<br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
When comparing the visual representation of the two weeks, you will see that the patterns are very similar. Provide a high-level summary of the person's mobility in the cell below.
Here are some questions to guide your answer:
- How many places do you think the person visited during Week 15?
- Compared to other weeks, is this person's behavior predictable?
- Is this representative of general behavior, or is the number of places visited lower or higher than you expected?
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 2 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
<br>
<div class="alert alert-info">
<b>Exercise 3 Start.</b>
</div>
Instructions
What can you conclude from comparing the person's mobility from single weeks to the full data set? Provide a high-level summary in the cell below.
Hint: Look at the mobility trace for all of the data for this user.
End of explanation |
4,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook contains examples related to survival analysis, based on Chapter 13 of Think Stats, 2nd Edition, by Allen Downey, available from thinkstats2.com
Step1: The following code examines the survival until marriage of women in the United States across different age brackets and cohorts.
Step2: For complete cases, we know the respondent's age at first marriage. For ongoing cases, we have the respondent's age when interviewed.
Step3: There are only a few cases with unknown marriage dates.
Step4: Here we are investigating the percent of people who are married compared to the total data set to make sure that there is not something weird going on with the data. From this it seems like there is definitely something going on with the 1988 data.
Step5: EstimateHazardFunction is an implementation of Kaplan-Meier estimation.
With an estimated hazard function, we can compute a survival function.
Step7: Here we use the survival function to look at how the percent of people married varies as a function of decade and cohort.
Step9: This function will do a similar thing to the above function, but uses the derivative of the hazard function to investigate how the hazard of marriage changes as a function of age and cohort.
Step10: Plotting the hazard to see trends across cohorts
Step11: Here we plot how the hazard of marriage changes across cohort using a finer break down of age groups. Doing this allows us to see a little more clearly the change in how people of each age group are at hazard in subsequent generations.
Step12: Here we do a similar analysis looking at survival. | Python Code:
from __future__ import print_function, division
import marriage
import survival  # Think Stats survival-analysis helpers used below (EstimateSurvival, PlotResampledByDecade)
import thinkstats2
import thinkplot
import pandas
import numpy
from lifelines import KaplanMeierFitter
from collections import defaultdict
import itertools
import math
import matplotlib.pyplot as pyplot
from matplotlib import pylab
%matplotlib inline
Explanation: This notebook contains examples related to survival analysis, based on Chapter 13 of Think Stats, 2nd Edition, by Allen Downey, available from thinkstats2.com
End of explanation
resp8 = marriage.ReadFemResp2013()
resp7 = marriage.ReadFemResp2010()
resp6 = marriage.ReadFemResp2002()
resp5 = marriage.ReadFemResp1995()
resp4 = marriage.ReadFemResp1988()
resp3 = marriage.ReadFemResp1982()
# Collect the survey waves loaded above.
resps = [resp3, resp4, resp5, resp6, resp7, resp8]
Explanation: The following code examines the survival until marriage of women in the United States across different age brackets and cohorts.
End of explanation
#For each data set, find the number of people who married and the number who have not yet
t_complete = []
t_ongoing = []
for resp in resps:
complete = resp[resp.evrmarry == 1].agemarry
ongoing = resp[resp.evrmarry == 0].age
t_complete.append(complete)
t_ongoing.append(ongoing)
Explanation: For complete cases, we know the respondent's age at first marriage. For ongoing cases, we have the respondent's age when interviewed.
End of explanation
t_nan = []
for complete in t_complete:
    # Count respondents whose age at first marriage is missing.
    t_nan.append(numpy.isnan(complete).sum())
t_nan
Explanation: There are only a few cases with unknown marriage dates.
End of explanation
# Pair each loaded survey wave with the year label used below.
resps = [resp7, resp6, resp5, resp3, resp4]
data_set_names = [2010, 2002, 1995, 1982, 1988]
for i in range(len(resps)):
married = resps[i].agemarry
valued = [m for m in married if str(m) != 'nan']
#print proportion of people who have a value for this
print(data_set_names[i], len(valued)/len(resps[i]))
Explanation: Here we are investigating the percent of people who are married compared to the total data set to make sure that there is not something weird going on with the data. From this it seems like there is definitely something going on with the 1988 data.
End of explanation
survival.PlotResampledByDecade(resps, weighted=True)
thinkplot.Config(xlabel='age (years)', ylabel='probability unmarried', legend=True, pos=2)
survival.PlotResampledByDecade(resps, weighted=False)
thinkplot.Config(xlabel='age (years)', ylabel='probability unmarried', legend=True, pos=2)
Explanation: EstimateHazardFunction is an implementation of Kaplan-Meier estimation.
With an estimated hazard function, we can compute a survival function.
End of explanation
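For readers without the Think Stats helpers at hand, this is a minimal conceptual sketch of Kaplan-Meier estimation (an illustration of the idea only, not the implementation used by the survival module):
def kaplan_meier_sketch(complete, ongoing):
    # complete: ages at first marriage; ongoing: current ages of still-unmarried respondents.
    complete = numpy.asarray(complete)
    complete = complete[~numpy.isnan(complete)]
    ongoing = numpy.asarray(ongoing)
    ts = numpy.sort(numpy.unique(complete))
    at_risk = numpy.array([(complete >= t).sum() + (ongoing >= t).sum() for t in ts])
    events = numpy.array([(complete == t).sum() for t in ts])
    hazard = events.astype(float) / at_risk   # lambda(t): P(marry at t | unmarried until t)
    surv = numpy.cumprod(1 - hazard)          # S(t): probability still unmarried at t
    return ts, hazard, surv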
def PlotResampledByAge(resps, n=6, **options):
"""Takes in a list of the groups of respondents and the number of desired age brackets; displays a plot
comparing the probability a woman is married against her cohort for n age groups.
resps -- list of dataframes
n -- number of age brackets
"""
# for i in range(11):
# samples = [thinkstats2.ResampleRowsWeighted(resp)
# for resp in resps]
sample = pandas.concat(resps, ignore_index=True)
groups = sample.groupby('fives')
#number of years per group if there are n groups
group_size = 30/n
#labels age brackets depending on # divs
labels = ['{} to {}'.format(int(15 + group_size * i), int(15+(i+1)*group_size)) for i in range(n)]
# 0 representing 15-24, 1 being 25-34, and 2 being 35-44
#initilize dictionary of size n, with empty lists
prob_dict = {i: [] for i in range(n)}
#TODO: Look into not hardcoding this
decades = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95]
for _, group in groups:
#calcualates the survival function for each decade
_, sf = survival.EstimateSurvival(group)
if len(sf.ts) > 1:
#iterates through all n age groups to find the probability of marriage for that group
for group_num in range(0,n):
temp_prob_list = sf.Probs([t for t in sf.ts
if (15 + group_size*group_num) <= t <= (15 + (group_num+1)*group_size)])
if len(temp_prob_list) != 0:
prob_dict[group_num].append(1-sum(temp_prob_list)/len(temp_prob_list))
else:
pass
#set up subplots
iteration = 0
num_plots = numpy.ceil(n/6.0)
for key in prob_dict:
iteration += 1
xs = decades[0:len(prob_dict[key])]
pyplot.subplot(num_plots, 1, numpy.ceil(iteration/6))
thinkplot.plot(xs, prob_dict[key], label=labels[key], **options)
#add labels/legend
thinkplot.Config(xlabel='cohort (decade birth)', ylabel='probability married', legend=True, pos=2)
pylab.legend(loc=1, bbox_to_anchor=(1.35, 0.75))
Explanation: Here we use the survival function to look at how the percent of people married varies as a function of decade and cohort.
End of explanation
def PlotResampledHazardByAge(resps, n=6, **options):
"""Takes in a list of the groups of respondents and the number of desired age brackets; displays a plot
comparing the probability a woman is married against her cohort for n age groups.
resps -- list of dataframes
n -- number of age brackets
"""
# for i in range(20):
# samples = [thinkstats2.ResampleRowsWeighted(resp)
# for resp in resps]
# print(len(resps[1]))
sample = pandas.concat(resps, ignore_index=True)
groups = sample.groupby('decade')
#number of years per group if there are n groups
group_size = 30/n
#labels age brackets depending on # divs
labels = ['{} to {}'.format(int(15 + group_size * i), int(15+(i+1)*group_size)) for i in range(n)]
# 0 representing 15-24, 1 being 25-34, and 2 being 35-44
#initilize dictionary of size n, with empty lists
prob_dict = {i: [] for i in range(n)}
#TODO: Look into not hardcoding this
decades = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
for _, group in groups:
#calcualates the survival function for each decade
_, sf = survival.EstimateSurvival(group)
if len(sf.ts) > 1:
#iterates through all n age groups to find the probability of marriage for that group
for group_num in range(0,n):
temp_prob_list = numpy.diff(sf.Probs([t for t in sf.ts
if (15 + group_size*group_num) <= t <= (15 + (group_num+1)*group_size)]))
if len(temp_prob_list) != 0:
prob_dict[group_num].append(sum(temp_prob_list)/len(temp_prob_list))
else:
pass
#Set up subplots
iteration = 0
num_plots = numpy.ceil(n/6.0)
for key in prob_dict:
iteration += 1
xs = decades[0:len(prob_dict[key])]
pyplot.subplot(num_plots, 1, numpy.ceil(iteration/6))
thinkplot.plot(xs, prob_dict[key], label=labels[key], **options)
#plot labels/legend
thinkplot.Config(xlabel='cohort (decade birth)', ylabel='Hazard of Marriage', legend=True, pos=2)
pylab.legend(loc=1, bbox_to_anchor=(1.35, 0.75))
Explanation: This function will do a similar thing to the above function, but uses the derivative of the hazard function to investigate how the hazard of marriage changes as a function of age and cohort.
End of explanation
pyplot.hold(True)
PlotResampledHazardByAge(resps, 6)
Explanation: Plotting the hazard to see trends across cohorts
End of explanation
pyplot.hold(True)
PlotResampledHazardByAge(resps, 10)
Explanation: Here we plot how the hazard of marriage changes across cohort using a finer break down of age groups. Doing this allows us to see a little more clearly the change in how people of each age group are at hazard in subsequent generations.
End of explanation
pyplot.hold(True)
PlotResampledByAge(resps, 6)
pyplot.hold(True)
PlotResampledByAge(resps, 10)
Explanation: Here we do a similar analysis looking at survival.
End of explanation |
4,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here
Step1: Overview
Here are the steps you'll take to build the network
Step2: Load the Data
Start by importing the data from the pickle file.
Step3: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint
Step4: Normalize the features
Hint
Step5: One-Hot Encode the labels
Hint
Step6: Keras Sequential Model
```python
from keras.models import Sequential
# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this
Step7: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])
Train the model
History is a record of training loss and metrics
history = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)
Calculate test score
test_score = model.evaluate(x_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
Step8: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1
Step9: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
Step10: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
Step11: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
Step12: Best Validation Accuracy | Python Code:
from urllib.request import urlretrieve
from os.path import isfile
from tqdm import tqdm
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('train.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Train Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/train.p',
'train.p',
pbar.hook)
if not isfile('test.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Test Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/test.p',
'test.p',
pbar.hook)
print('Training and Test data downloaded.')
Explanation: Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here:
End of explanation
import pickle
import numpy as np
import math
# Fix error with TF and Keras
import tensorflow as tf
tf.python.control_flow_ops = tf
print('Modules loaded.')
Explanation: Overview
Here are the steps you'll take to build the network:
Load the training data.
Preprocess the data.
Build a feedforward neural network to classify traffic signs.
Build a convolutional neural network to classify traffic signs.
Evaluate the final neural network on testing data.
Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.
End of explanation
with open('train.p', 'rb') as f:
data = pickle.load(f)
# TODO: Load the feature data to the variable X_train
X_train = data['features']
# TODO: Load the label data to the variable y_train
y_train = data['labels']
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert np.array_equal(X_train, data['features']), 'X_train not set to data[\'features\'].'
assert np.array_equal(y_train, data['labels']), 'y_train not set to data[\'labels\'].'
print('Tests passed.')
Explanation: Load the Data
Start by importing the data from the pickle file.
End of explanation
# TODO: Shuffle the data
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train, random_state=0)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert X_train.shape == data['features'].shape, 'X_train has changed shape. The shape shouldn\'t change when shuffling.'
assert y_train.shape == data['labels'].shape, 'y_train has changed shape. The shape shouldn\'t change when shuffling.'
assert not np.array_equal(X_train, data['features']), 'X_train not shuffled.'
assert not np.array_equal(y_train, data['labels']), 'y_train not shuffled.'
print('Tests passed.')
Explanation: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint: You can use the scikit-learn shuffle function to shuffle the data.
End of explanation
# TODO: Normalize the data features to the variable X_normalized
def normalize(image_data):
a = -0.5
b = 0.5
x_min = 0
x_max = 255
return a + ((image_data - x_min) * (b - a)) / (x_max - x_min)
X_normalized = normalize(X_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))
print('Tests passed.')
Explanation: Normalize the features
Hint: You solved this in TensorFlow lab Problem 1.
End of explanation
# TODO: One Hot encode the labels to the variable y_one_hot
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
import collections
assert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\'s {}, it should be (39209, 43)'.format(y_one_hot.shape)
assert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.'
print('Tests passed.')
Explanation: One-Hot Encode the labels
Hint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
model = Sequential()
# TODO: Build a Multi-layer feedforward neural network with Keras here.
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(128))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(43))
# 5th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.activations import relu, softmax
def check_layers(layers, true_layers):
assert len(true_layers) != 0, 'No layers found'
for layer_i in range(len(layers)):
assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__)
assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers))
check_layers([Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)'
assert model.layers[2].activation == relu, 'Third layer not a relu activation layer'
assert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)'
assert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer'
print('Tests passed.')
Explanation: Keras Sequential Model
```python
from keras.models import Sequential
# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
Create the Sequential model
model = Sequential()
1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
2nd Layer - Add a fully connected layer
model.add(Dense(100))
3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
4th Layer - Add a fully connected layer
model.add(Dense(60))
5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
```
Keras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer.
The first layer from above, model.add(Flatten(input_shape=(32, 32, 3))), sets the input dimension to (32, 32, 3) and output dimension to (3072=32*32*3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.
Build a Multi-Layer Feedforward Network
Build a multi-layer feedforward neural network to classify the traffic sign images.
Set the first layer to a Flatten layer with the input_shape set to (32, 32, 3)
Set the second layer to Dense layer width to 128 output.
Use a ReLU activation function after the second layer.
Set the output layer width to 43, since there are 43 classes in the dataset.
Use a softmax activation function after the output layer.
To get started, review the Keras documentation about models and layers.
The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
End of explanation
# TODO: Compile and train the model here.
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
# History is a record of training loss and metrics
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=10, validation_split=0.2)
# Evaluate the model (note: this reuses the training data; the held-out test set is evaluated at the end of the notebook)
test_score = model.evaluate(X_normalized, y_one_hot)
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.optimizers import Adam
assert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function'
assert isinstance(model.optimizer, Adam), 'Not using adam optimizer'
assert len(history.history['acc']) == 10, 'You\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc']))
assert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1]
assert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])
Train the model
History is a record of training loss and metrics
history = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)
Calculate test score
test_score = model.evaluate(x_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
End of explanation
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
# number of convolutional filters to use
nb_filters = 32
# convolution kernel size
kernel_size = (3, 3)
# input shape
input_shape = (32, 32, 3)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(128))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(43))
# 5th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
check_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'
assert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'
assert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
End of explanation
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
# number of convolutional filters to use
nb_filters = 32
# convolution kernel size
kernel_size = (3, 3)
# input shape
input_shape = (32, 32, 3)
# size of pooling area for max pooling
pool_size = (2, 2)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(MaxPooling2D(pool_size=pool_size))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(128))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(43))
# 5th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[1].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
End of explanation
# TODO: Re-construct the network and add dropout after the pooling layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D
# number of convolutional filters to use
nb_filters = 32
# convolution kernel size
kernel_size = (3, 3)
# input shape
input_shape = (32, 32, 3)
# size of pooling area for max pooling
pool_size = (2, 2)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(128))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(43))
# 5th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[2].p == 0.5, 'Third layer should be a Dropout of 50%'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
End of explanation
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D
# number of convolutional filters to use
nb_filters = 32
# convolution kernel size
kernel_size = (3, 3)
# input shape
input_shape = (32, 32, 3)
# size of pooling area for max pooling
pool_size = (2, 2)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=10, validation_split=0.2)
Explanation: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
End of explanation
# TODO: Load test data
with open('test.p', 'rb') as f:
data_test = pickle.load(f)
X_test = data_test['features']
y_test = data_test['labels']
# TODO: Preprocess data & one-hot encode the labels
X_test_normalized = normalize(X_test)
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_test_one_hot = label_binarizer.fit_transform(y_test)
# TODO: Evaluate model on test data
score = model.evaluate(X_test_normalized, y_test_one_hot, verbose=0)
for i in range(len(model.metrics_names)):
print("{0} = {1:.3f}".format(model.metrics_names[i], score[i]))
Explanation: Best Validation Accuracy: (98.32%)
Testing
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.
End of explanation |
4,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align='center' ><img src='https
Step1: 5. Impulse response functions
Impulse response functions (IRFs) are a standard tool for analyzing the short run dynamics of dynamic macroeconomic models, such as the Solow growth model, in response to an exogenous shock. The solow.impulse_response.ImpulseResponse class has several attributes and methods for generating and analyzing impulse response functions.
Step2: The solow.Model class provides access to all of the functionality of the solow.impulse_response.ImpulseResponse class through its irf attribute.
Step3: Example
Step4: Take a look at the IRF for the savings rate shock. Note that while capital and output are unaffected at the t=0, both consumption and investment jump (in opposite directions!) in response to the change in the savings rate.
Step5: Example
Step6: Example
Step8: Example | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy as sym
import solowpy
# define model parameters
ces_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33, 'sigma': 1.01}
# create an instance of the solow.Model class
ces_model = solowpy.CESModel(params=ces_params)
Explanation: <div align='center' ><img src='https://raw.githubusercontent.com/davidrpugh/numerical-methods/master/images/sgpe-logo.jpg' width="1200" height="100"></div>
<div align='right'><img src='https://raw.githubusercontent.com/davidrpugh/numerical-methods/master/images/SIRElogolweb.jpg' width="1200" height="100"></div>
End of explanation
# use tab completion to see the available attributes and methods...
solowpy.impulse_response.ImpulseResponse.
Explanation: 5. Impulse response functions
Impulse response functions (IRFs) are a standard tool for analyzing the short run dynamics of dynamic macroeconomic models, such as the Solow growth model, in response to an exogenous shock. The solow.impulse_response.ImpulseResponse class has several attributes and methods for generating and analyzing impulse response functions.
End of explanation
# use tab completion to see the available attributes and methods...
ces_model.irf.
Explanation: The solow.Model class provides access to all of the functionality of the solow.impulse_response.ImpulseResponse class through its irf attribute.
End of explanation
# 100% increase in the current savings rate...
ces_model.irf.impulse = {'s': 2.0 * ces_model.params['s']}
# in efficiency units...
ces_model.irf.kind = 'efficiency_units'
Explanation: Example: Impact of a change in the savings rate
One can analyze the impact of a doubling of the savings rate on model variables as follows.
End of explanation
# ordering of variables is t, k, y, c, i!
print(ces_model.irf.impulse_response[:25,])
Explanation: Take a look at the IRF for the savings rate shock. Note that while capital and output are unaffected at the t=0, both consumption and investment jump (in opposite directions!) in response to the change in the savings rate.
End of explanation
# check the docstring to see the call signature
ces_model.irf.plot_impulse_response?
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output')
plt.show()
Explanation: Example: Plotting an impulse response function
One can use a convenience method to to plot the impulse response functions for a particular variable.
End of explanation
# more complicate shocks are possible
ces_model.irf.impulse = {'s': 0.9 * ces_model.params['s'], 'g': 1.05 * ces_model.params['g']}
# in efficiency units...
ces_model.irf.kind = 'per_capita'
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output', log=True)
plt.show()
Explanation: Example: More complicated impulse responses are possible
Note that by defining impulses as dictionaries, one can analyze extremely general shocks. For example, suppose that an exogenous 5% increase in the growth rate of technology was accompanied by a simultaneous 10% fall in the savings rate.
End of explanation
from IPython.html.widgets import fixed, interact, FloatSliderWidget
def interactive_impulse_response(model, shock, param, variable, kind, log_scale):
"""Interactive impulse response plotting tool."""
# specify the impulse response
model.irf.impulse = {param: shock * model.params[param]}
model.irf.kind = kind
# create the plot
fig, ax = plt.subplots(1, 1, figsize=(8,6))
model.irf.plot_impulse_response(ax, variable=variable, log=log_scale)
irf_widget = interact(interactive_impulse_response,
model=fixed(ces_model),
shock = FloatSliderWidget(min=0.1, max=5.0, step=0.1, value=0.5),
param = ces_model.params.keys(),
variable=['capital', 'output', 'consumption', 'investment'],
kind=['efficiency_units', 'per_capita', 'levels'],
log_scale=False,
)
Explanation: Example: Interactive impulse reponse functions
Using IPython widgets makes it extremely easy to analyze the various impulse response functions.
End of explanation |
4,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OkNLP
This notebook demonstrates the algorithm we used in our project. It shows an example of how we clustered using Nonnegative Matrix Factorization. We manually inspect the output of NMF to determine the best number of clusters for each group. Then, we create word clouds for specific groups and demographic splits.
Imports and Settings
Step1: Data Cleaning
First we read in the data frame and re-categorize some of the demographic information.
We'll have two separate dataframes, one for essay0 and one for essay4.
Step2: Subsample
Step3: Clustering
Convert the users' essays into a tfidf matrix and use NMF to cluster the data points into 25 groups.
Vocabulary includes unigrams, bigrams, and trigrams without redundancies.
Step4: Models
Featurize
Step5: Log Odds Ratio features
Step6: NMF features
Step8: Cross-Validated Estimates
Logistic Regression, naive Bayes, SVM, Random Forest | Python Code:
import warnings
import numpy as np
import pandas as pd
from scipy.sparse import hstack
from sklearn.cross_validation import cross_val_predict
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from utils.categorize_demographics import recategorize
from utils.clean_up import clean_up, col_to_data_matrix
from utils.distinctive_tokens import log_odds_ratio
from utils.happyfuntokenizing import Tokenizer
from utils.nonnegative_matrix_factorization import nmf_labels
warnings.filterwarnings('ignore')
essay_dict = {'essay0' : 'My self summary',
'essay1' : 'What I\'m doing with my life',
'essay2' : 'I\'m really good at',
'essay3' : 'The first thing people notice about me',
'essay4' : 'Favorite books, movies, tv, food',
'essay5' : 'The six things I could never do without',
'essay6' : 'I spend a lot of time thinking about',
'essay7' : 'On a typical Friday night I am',
'essay8' : 'The most private thing I am willing to admit',
'essay9' : 'You should message me if'}
Explanation: OkNLP
This notebook demonstrates the algorithm we used in our project. It shows an example of how we clustered using Nonnegative Matrix Factorization. We manually inspect the output of NMF to determine the best number of clusters for each group. Then, we create word clouds for specific groups and demographic splits.
Imports and Settings
End of explanation
df = pd.read_csv('data/profiles.20120630.csv')
essay_list = ['essay4']
df_4 = clean_up(df, essay_list)
df_4 = recategorize(df_4)
Explanation: Data Cleaning
First we read in the data frame and re-categorize some of the demographic information.
We'll have two separate dataframes, one for essay0 and one for essay4.
End of explanation
df_4_y = df_4[df_4.drugs == 'yes'] #take only users with yes/no drug status
df_4_n = df_4[df_4.drugs == 'no']
df_4_y = df_4_y.sample(6500, random_state=42) #subsample data for both y and no
df_4_n = df_4_n.sample(6500, random_state=42)
drugs = df_4_y.append(df_4_n) #combine dfs
drugs['y'] = drugs['drugs'].apply(lambda x: 1 if x == 'yes' else 0) #add column for 1/0 if drug use
Explanation: Subsample
End of explanation
K = 25
count_matrix, tfidf_matrix, vocab = col_to_data_matrix(drugs, 'essay4', min_df=0.001)
drugs['group'] = nmf_labels(tfidf_matrix, K) #group assignment per user (group with maximum weight)
Explanation: Clustering
Convert the users' essays into a tfidf matrix and use NMF to cluster the data points into 25 groups.
Vocabulary includes unigrams, bigrams, and trigrams without redundancies.
End of explanation
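The nmf_labels helper is imported from utils above; a rough sketch of what such a helper typically does with scikit-learn (an assumption about its internals, not this project's actual code):
from sklearn.decomposition import NMF

def nmf_labels_sketch(X, k, random_state=42):
    # Factorize the tfidf matrix into document-topic (W) and topic-term (H) weights,
    # then assign each document to the topic carrying its largest weight.
    W = NMF(n_components=k, random_state=random_state).fit_transform(X)
    return W.argmax(axis=1)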
y = drugs.y.values #1/0 vector
X = tfidf_matrix.copy()
Explanation: Models
Featurize
End of explanation
count_0 = count_matrix[np.array(drugs.drugs=='yes'), :].sum(axis=0)
count_1 = count_matrix[np.array(drugs.drugs=='no'), :].sum(axis=0)
counts = np.array(np.vstack((count_0, count_1)))
log_odds = log_odds_ratio(counts, vocab, use_variance=True)
n = 2000
top = log_odds.sort('log_odds_ratio', ascending=False)['features'].tolist()[:n]
bottom = log_odds.sort('log_odds_ratio', ascending=False)['features'].tolist()[-n:]
log_odds_features = top + bottom
log_odds_mask = np.array([t in log_odds_features for t in vocab])
X = X[:,log_odds_mask]
Explanation: Log Odds Ratio features
End of explanation
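The log_odds_ratio helper also comes from utils; as a rough illustration of the underlying idea (a smoothed log-odds ratio scaled by an approximate variance, not necessarily the exact formula the helper uses):
def smoothed_log_odds_sketch(counts, vocab, alpha=0.01):
    # counts: 2 x V array of token counts for the two groups; alpha: smoothing pseudo-count.
    c0, c1 = counts[0].astype(float), counts[1].astype(float)
    n0, n1 = c0.sum(), c1.sum()
    delta = (np.log((c0 + alpha) / (n0 - c0 + alpha)) -
             np.log((c1 + alpha) / (n1 - c1 + alpha)))
    variance = 1.0 / (c0 + alpha) + 1.0 / (c1 + alpha)
    return pd.DataFrame({'features': vocab, 'log_odds_ratio': delta / np.sqrt(variance)})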
# nmf = pd.get_dummies(drugs.group, prefix='nmf').values
# X = hstack([X, nmf], format='csr')
Explanation: NMF features
End of explanation
clf0 = LogisticRegression()
clf1 = MultinomialNB()
clf2 = LinearSVC()
clf3 = RandomForestClassifier()
for clf, name in zip([clf0, clf1, clf2, clf3],
['Logistic Regression', 'naive Bayes', 'SVM', 'Random Forest']):
yhat = cross_val_predict(clf, X, y, cv=10)
print("Accuracy: %0.4f [%s]" % (accuracy_score(y, yhat), name))
print("""Without feature selection:
Accuracy: 0.6715 [Logistic Regression]
Accuracy: 0.6738 [naive Bayes]
Accuracy: 0.6387 [SVM]
Accuracy: 0.6305 [Random Forest]""")
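# The precision/recall/F1 helpers imported at the top are unused above; as a small
# follow-up (a sketch), report them for one cross-validated classifier.
yhat_nb = cross_val_predict(MultinomialNB(), X, y, cv=10)
print("Precision: %0.4f" % precision_score(y, yhat_nb))
print("Recall:    %0.4f" % recall_score(y, yhat_nb))
print("F1:        %0.4f" % f1_score(y, yhat_nb))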
Explanation: Cross-Validated Estimates
Logistic Regression, naive Bayes, SVM, Random Forest
End of explanation |
4,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Basic test of the wflow BMI interface
Step1: Startup two models
Step2: <h3>Now we can investigate some model parameters
Step3: <h3>Start and end times
Step4: <h3>Now start the models
Step5: <h4>Define function to view the results | Python Code:
import wflow.wflow_bmi as bmi
import logging
reload(bmi)
%pylab inline
import datetime
from IPython.html.widgets import interact
Explanation: <h1>Basic test of the wflow BMI interface
End of explanation
# This is the LAnd Atmosphere (LA) model
LA_model = bmi.wflowbmi_csdms()
LA_model.initialize('../examples/wflow_rhine_sbm/wflow_sbm_bmi.ini',loglevel=logging.ERROR)
# This is the routing (RT) model
RT_model = bmi.wflowbmi_csdms()
RT_model.initialize('../examples/wflow_rhine_sbm/wflow_routing_bmi.ini',loglevel=logging.ERROR)
Explanation: Startup two models:
The wflow_sbm model calculates the runoff from each cell (the LA land-atmosphere model)
the wflow_routing model that uses a kinematic wave for routing the flow (the RT routing model)
End of explanation
print(LA_model.get_value("timestepsecs"))
print LA_model.get_start_time()
aa = LA_model.get_attribute_names()
LA_model.get_attribute_value("run:reinit")
LA_model.set_attribute_value("run:reinit",'1')
LA_model.get_attribute_value("run:reinit")
imshow(LA_model.get_value("Altitude"))
# Save the old dem, chnage the dem in the model and set it back
origdem = LA_model.get_value("Altitude")
newdem = origdem * 1.6
LA_model.set_value('Altitude',newdem)
diff = origdem - LA_model.get_value("Altitude")
imshow(diff)
imshow(LA_model.get_value("FirstZoneDepth"))
imshow(LA_model.get_value("River"))
Explanation: <h3>Now we can investigate some model parameters
End of explanation
t_end = RT_model.get_end_time()
t_start = RT_model.get_start_time()
t = RT_model.get_current_time()
(t_end - t_start)/(86400)
Explanation: <h3>Start and end times
End of explanation
t_end = RT_model.get_end_time()
t = RT_model.get_start_time()
res = []
resq = []
# Loop in time and put output of SBM in separate routing module - 1-way link
while t < t_end:
LA_model.update()
# Now set the output from the LA model (specific Q) as input to the RT model
thevar = LA_model.get_value("InwaterMM")
RT_model.set_value("IW",thevar) # The IW is set in the wflow_routing.ini var as a forcing
RT_model.update()
resq.append(RT_model.get_value("SurfaceRunoff"))
res.append(thevar)
t = RT_model.get_current_time()
print datetime.datetime.fromtimestamp(t)
LA_model.finalize()
RT_model.finalize()
Explanation: <h3>Now start the models
End of explanation
def browse_res(digits):
n = len(digits)
def view_image(i):
plt.imshow(log(digits[i]+1))
plt.title('Step: %d' % i)
plt.colorbar()
plt.show()
interact(view_image, i=(0,n-1))
browse_res(res)
browse_res(resq)
Explanation: <h4>Define function to view the results
End of explanation |
4,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Divide continuous data into equally-spaced epochs
This tutorial shows how to segment continuous data into a set of epochs spaced
equidistantly in time. The epochs will not be created based on experimental
events; instead, the continuous data will be "chunked" into consecutive epochs
(which may be temporally overlapping, adjacent, or separated).
We will also briefly demonstrate how to use these epochs in connectivity
analysis.
First, we import necessary modules and read in a sample raw data set.
This data set contains brain activity that is event-related, i.e.,
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
<div class="alert alert-info"><h4>Note</h4><p>Starting in version 0.25, all functions in the ``mne.connectivity``
sub-module will be housed in a separate package called
Step1: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
Step2: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
Step3: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
Step4: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs
Step5: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is the default return if you use
Step6: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import compute_proj_ecg
from mne_connectivity import envelope_correlation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Divide continuous data into equally-spaced epochs
This tutorial shows how to segment continuous data into a set of epochs spaced
equidistantly in time. The epochs will not be created based on experimental
events; instead, the continuous data will be "chunked" into consecutive epochs
(which may be temporally overlapping, adjacent, or separated).
We will also briefly demonstrate how to use these epochs in connectivity
analysis.
First, we import necessary modules and read in a sample raw data set.
This data set contains brain activity that is event-related, i.e.,
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
<div class="alert alert-info"><h4>Note</h4><p>Starting in version 0.25, all functions in the ``mne.connectivity``
sub-module will be housed in a separate package called
:mod:`mne-connectivity <mne_connectivity>`. Download it by running:
.. code-block:: console
$ pip install mne-connectivity</p></div>
End of explanation
raw.crop(tmax=150).resample(100).pick('meg')
ecg_proj, _ = compute_proj_ecg(raw, ch_name='MEG 0511') # No ECG chan
raw.add_proj(ecg_proj)
raw.apply_proj()
Explanation: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
End of explanation
epochs = mne.make_fixed_length_epochs(raw, duration=30, preload=False)
Explanation: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
End of explanation
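The options described above can also be spelled out explicitly. The keyword names below follow recent MNE releases and should be checked against the docstring of your installed version:
epochs_explicit = mne.make_fixed_length_epochs(
    raw, duration=30., preload=False,
    reject_by_annotation=True,  # drop epochs overlapping raw segments annotated as bad
    proj=True,                  # keep SSP projectors
    verbose=False)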
event_related_plot = epochs.plot_image(picks=['MEG 1142'])
Explanation: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
End of explanation
epochs.load_data().filter(l_freq=8, h_freq=12)
alpha_data = epochs.get_data()
Explanation: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs: Connectivity Analysis
Fixed lengths epochs are suitable for many types of analysis, including
frequency or time-frequency analyses, connectivity analyses, or
classification analyses. Here we briefly illustrate their utility in a sensor
space connectivity analysis.
The data from our epochs object has shape (n_epochs, n_sensors, n_times)
and is therefore an appropriate basis for using MNE-Python's envelope
correlation function to compute power-based connectivity in sensor space. The
long duration of our fixed length epochs, 30 seconds, helps us reduce edge
artifacts and achieve better frequency resolution when filtering must
be applied after epoching.
Let's examine the alpha band. We allow default values for filter parameters
(for more information on filtering, please see tut-filter-resample).
End of explanation
corr_matrix = envelope_correlation(alpha_data).get_data()
print(corr_matrix.shape)
Explanation: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is the default return if you use
:meth:mne-connectivity:mne_connectivity.EpochConnectivity.get_data:
End of explanation
first_30 = corr_matrix[0]
last_30 = corr_matrix[-1]
corr_matrices = [first_30, last_30]
color_lims = np.percentile(np.array(corr_matrices), [5, 95])
titles = ['First 30 Seconds', 'Last 30 Seconds']
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.suptitle('Correlation Matrices from First 30 Seconds and Last 30 Seconds')
for ci, corr_matrix in enumerate(corr_matrices):
ax = axes[ci]
mpbl = ax.imshow(corr_matrix, clim=color_lims)
ax.set_xlabel(titles[ci])
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.85, 0.2, 0.025, 0.6])
cbar = fig.colorbar(ax.images[0], cax=cax)
cbar.set_label('Correlation Coefficient')
Explanation: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording:
End of explanation |
4,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions
Step1: 2 - Overview of the Problem set
Problem Statement
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
What you need to remember
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:,index]) + ", it's a '" + classes[np.squeeze(train_set_y[:,index])].decode("utf-8") + "' picture.")
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px * num_px * 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b * c * d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
End of explanation
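For reference, the classic standardization described above would look roughly like this; it is not used in the rest of this assignment, which sticks with the simpler division by 255:
mu = train_set_x_flatten.mean()
sigma = train_set_x_flatten.std()
train_set_x_standardized = (train_set_x_flatten - mu) / sigma  # alternative preprocessing
test_set_x_standardized = (test_set_x_flatten - mu) / sigma    # reuse the training statistics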
# GRADED FUNCTION: sigmoid
def sigmoid(z):
    """
    Compute the sigmoid of z
    Arguments:
    x -- A scalar or numpy array of any size.
    Return:
    s -- sigmoid(z)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(9.2) = " + str(sigmoid(9.2)))
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b)$ to make predictions.
End of explanation
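As a quick numeric illustration of the per-example loss defined above: a confident correct prediction is cheap, while a confident wrong one is expensive.
for a_hat, y_true in [(0.9, 1), (0.9, 0)]:
    loss = - y_true * np.log(a_hat) - (1 - y_true) * np.log(1 - a_hat)
    print("y = %d, a = %.1f, loss = %.3f" % (y_true, a_hat, loss))  # ~0.105 vs ~2.303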
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)
    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros(shape=(dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**sigmoid(0)**</td>
<td> 0.5</td>
</tr>
<tr>
<td>**sigmoid(9.2)**</td>
<td> 0.999898970806 </td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b
    Tips:
    - Write your code step by step for the propagation
    """
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = (- 1 / m) * np.sum(Y * np.log(A) + (1 - Y) * (np.log(1 - A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1 / m) * np.dot(X, (A - Y).T)
db = (1 / m) * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
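An optional sanity check (not part of the graded exercise): the analytic db returned by propagate() should closely match a centered finite-difference estimate of the cost with respect to b.
eps = 1e-7
w_chk, b_chk = np.array([[1.], [2.]]), 2.
X_chk, Y_chk = np.array([[1., 2.], [3., 4.]]), np.array([[1, 0]])
grads_chk, _ = propagate(w_chk, b_chk, X_chk, Y_chk)
_, cost_plus = propagate(w_chk, b_chk + eps, X_chk, Y_chk)
_, cost_minus = propagate(w_chk, b_chk - eps, X_chk, Y_chk)
print(grads_chk["db"], (cost_plus - cost_minus) / (2 * eps))  # the two values should nearly coincide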
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
    Tips:
    You basically need to write down two steps and iterate through them:
    1) Calculate the cost and the gradient for the current parameters. Use propagate().
    2) Update the parameters using gradient descent rule for w and b.
    """
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw # need to broadcast
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" % (i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1, m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities a[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print("predictions = " + str(predict(w, b, X)))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.430462071679 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), stores the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
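The vectorized alternative hinted at above replaces the loop with a single comparison:
Y_prediction_vectorized = (sigmoid(np.dot(w.T, X) + b) > 0.5).astype(float)
print(Y_prediction_vectorized)  # matches the loop-based predictions above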
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    """
    Builds the logistic regression model by calling the function you've implemented previously
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations
    Returns:
    d -- dictionary containing information about the model.
    """
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Explanation: Run the following cell to train your model.
End of explanation
# Example of a picture that was wrongly classified.
index = 5
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0, index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0, index]].decode("utf-8") + "\" picture.")
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Explanation: Let's also plot the cost function and the gradients.
End of explanation
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens.
End of explanation
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: Interpretation:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
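Note that scipy.ndimage.imread and scipy.misc.imresize have been removed from recent SciPy releases. A rough PIL-based equivalent (PIL is already imported above) is sketched below; it also rescales the pixels by 1/255, matching how the training data were preprocessed:
# img = Image.open(fname).resize((num_px, num_px))
# my_image = np.array(img).reshape((1, num_px * num_px * 3)).T / 255.
# my_predicted_image = predict(d["w"], d["b"], my_image)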
4,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fixing BEM and head surfaces
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender, editing them, and
re-importing them. We will also give a simple example of how to use
pymeshfix <tut-fix-meshes-pymeshfix> to fix topological problems.
Much of this tutorial is based on
https
Step1: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj files and create a new
folder called conv inside the FreeSurfer subject folder to keep them in.
Step2: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS)
Step3: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the
Step4: Editing the head surfaces
Sometimes the head surfaces are faulty and require manual editing. We use
Step5: High-resolution head
We use | Python Code:
# Authors: Marijn van Vliet <[email protected]>
# Ezequiel Mikulan <[email protected]>
# Manorama Kadwani <[email protected]>
#
# License: BSD-3-Clause
import os
import shutil
import mne
data_path = mne.datasets.sample.data_path()
subjects_dir = data_path / 'subjects'
bem_dir = subjects_dir / 'sample' / 'bem' / 'flash'
surf_dir = subjects_dir / 'sample' / 'surf'
Explanation: Fixing BEM and head surfaces
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender, editing them, and
re-importing them. We will also give a simple example of how to use
pymeshfix <tut-fix-meshes-pymeshfix> to fix topological problems.
Much of this tutorial is based on
https://github.com/ezemikulan/blender_freesurfer by Ezequiel Mikulan.
:depth: 3
End of explanation
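For the pymeshfix route mentioned above, a convenience function can repair a vertex/face array pair directly. This sketch assumes the optional pymeshfix package is installed and that its clean_from_arrays helper is available in your version; it would operate on arrays such as the coords/faces loaded below:
# import pymeshfix
# fixed_coords, fixed_faces = pymeshfix.clean_from_arrays(coords, faces)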
# Put the converted surfaces in a separate 'conv' folder
conv_dir = subjects_dir / 'sample' / 'conv'
os.makedirs(conv_dir, exist_ok=True)
# Load the inner skull surface and create a problem
# The metadata is empty in this example. In real study, we want to write the
# original metadata to the fixed surface file. Set read_metadata=True to do so.
coords, faces = mne.read_surface(bem_dir / 'inner_skull.surf')
coords[0] *= 1.1 # Move the first vertex outside the skull
# Write the inner skull surface as an .obj file that can be imported by
# Blender.
mne.write_surface(conv_dir / 'inner_skull.obj', coords, faces, overwrite=True)
# Also convert the outer skull surface.
coords, faces = mne.read_surface(bem_dir / 'outer_skull.surf')
mne.write_surface(conv_dir / 'outer_skull.obj', coords, faces, overwrite=True)
Explanation: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj files and create a new
folder called conv inside the FreeSurfer subject folder to keep them in.
End of explanation
coords, faces = mne.read_surface(conv_dir / 'inner_skull.obj')
coords[0] /= 1.1 # Move the first vertex back inside the skull
mne.write_surface(conv_dir / 'inner_skull_fixed.obj', coords, faces,
overwrite=True)
Explanation: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS):
<img src="file://../../_static/blender_import_obj/blender_import_obj1.jpg" width="800" alt="Importing .obj files in Blender">
For convenience, you can save these settings by pressing the + button
next to Operator Presets.
Repeat the procedure for all surfaces you want to import (e.g. inner_skull
and outer_skull).
You can now edit the surfaces any way you like. See the
Beginner Blender Tutorial Series
to learn how to use Blender. Specifically, part 2 will teach you how to
use the basic editing tools you need to fix the surface.
<img src="file://../../_static/blender_import_obj/blender_import_obj2.jpg" width="800" alt="Editing surfaces in Blender">
Using the fixed surfaces in MNE-Python
In Blender, you can export a surface as an .obj file by selecting it and go
to File > Export > Wavefront (.obj). You need to again select the Y
Forward option and check the Keep Vertex Order box.
<img src="file://../../_static/blender_import_obj/blender_import_obj3.jpg" width="200" alt="Exporting .obj files in Blender">
Each surface needs to be exported as a separate file. We recommend saving
them in the conv folder and ending the file name with _fixed.obj,
although this is not strictly necessary.
In order to be able to run this tutorial script top to bottom, we here
simulate the edits you did manually in Blender using Python code:
End of explanation
# Read the fixed surface
coords, faces = mne.read_surface(conv_dir / 'inner_skull_fixed.obj')
# Backup the original surface
shutil.copy(bem_dir / 'inner_skull.surf', bem_dir / 'inner_skull_orig.surf')
# Overwrite the original surface with the fixed version
# In real study you should provide the correct metadata using ``volume_info=``
# This could be accomplished for example with:
#
# _, _, vol_info = mne.read_surface(bem_dir / 'inner_skull.surf',
# read_metadata=True)
# mne.write_surface(bem_dir / 'inner_skull.surf', coords, faces,
# volume_info=vol_info, overwrite=True)
Explanation: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the :func:mne.make_bem_model function to find
them, they need to be saved using their original names in the surf
folder, e.g. bem/inner_skull.surf. Be sure to first backup the original
surfaces in case you make a mistake!
End of explanation
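Once the fixed surfaces are saved back under their original names, the downstream BEM construction is the usual one, for example (not executed here):
# model = mne.make_bem_model('sample', subjects_dir=subjects_dir)
# bem = mne.make_bem_solution(model)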
# Load the fixed surface
coords, faces = mne.read_surface(bem_dir / 'outer_skin.surf')
# Make sure we are in the correct directory
head_dir = bem_dir.parent
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces,
# overwrite=True)
Explanation: Editing the head surfaces
Sometimes the head surfaces are faulty and require manual editing. We use
:func:mne.write_head_bem to convert the fixed surfaces to .fif files.
Low-resolution head
For EEG forward modeling, it is possible that outer_skin.surf would be
manually edited. In that case, remember to save the fixed version of
-head.fif from the edited surface file for coregistration.
End of explanation
# If ``-head-dense.fif`` does not exist, you need to run
# ``mne make_scalp_surfaces`` first.
# [0] because a list of surfaces is returned
surf = mne.read_bem_surfaces(head_dir / 'sample-head.fif')[0]
# For consistency only
coords = surf['rr']
faces = surf['tris']
# Write the head as an .obj file for editing
mne.write_surface(conv_dir / 'sample-head.obj',
coords, faces, overwrite=True)
# Usually here you would go and edit your meshes.
#
# Here we just use the same surface as if it were fixed
# Read in the .obj file
coords, faces = mne.read_surface(conv_dir / 'sample-head.obj')
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces,
# overwrite=True)
Explanation: High-resolution head
We use :func:mne.read_bem_surfaces to read the head surface files. After
editing, we again output the head file with :func:mne.write_head_bem.
Here we use -head.fif for speed.
End of explanation |
4,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D plots
Demonstration of the 2D plot capabilities
The plot2d plot method make plots of 2-dimensional scalar data
using matplotlibs pcolormesh or the contourf functions.
Note that this method is extended by the mapplot plot method of the psy-maps plugin for visualization on the projected globe.
Step1: First we create some sample data in the form of a 2D parabola
Step2: For a simple 2D plot of a scalar field, we can use the
plot2d plot method
Step3: The plot formatoption controls, how the plot is made. The default is a
pcolormesh
plot, but we can also make a
filled contour
plot. The levels of the contour plot are determined through the levels formatoption.
Step4: The plot2d method has several formatoptions controlling the color coding of your plot
Step5: The most important ones are
cbar | Python Code:
import psyplot.project as psy
import xarray as xr
%matplotlib inline
%config InlineBackend.close_figures = False
import numpy as np
Explanation: 2D plots
Demonstration of the 2D plot capabilities
The plot2d plot method make plots of 2-dimensional scalar data
using matplotlibs pcolormesh or the contourf functions.
Note that this method is extended by the mapplot plot method of the psy-maps plugin for visualization on the projected globe.
End of explanation
x = np.linspace(-1, 1.)
y = np.linspace(-1, 1.)
x2d, y2d = np.meshgrid(x, y)
z = - x2d**2 - y2d**2
ds = xr.Dataset(
{'z': xr.Variable(('x', 'y'), z)},
{'x': xr.Variable(('x', ), x), 'y': xr.Variable(('y', ), y)})
Explanation: First we create some sample data in the form of a 2D parabola
End of explanation
p = psy.plot.plot2d(ds, cmap='Reds', name='z')
Explanation: For a simple 2D plot of a scalar field, we can use the
plot2d plot method:
End of explanation
p.update(plot='contourf', levels=5)
p.show()
Explanation: The plot formatoption controls, how the plot is made. The default is a
pcolormesh
plot, but we can also make a
filled contour
plot. The levels of the contour plot are determined through the levels formatoption.
End of explanation
p.keys('colors')
Explanation: The plot2d method has several formatoptions controlling the color coding of your plot:
End of explanation
psy.close('all')
Explanation: The most important ones are
cbar: To specify the location of the colorbar
bounds: To specify the boundaries for the color coding, i.e.
which data range is mapped to which color
cmap: To specify the colormap
End of explanation |
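For example, these formatoptions could be changed together in a single update call on an open project (i.e. before psy.close). The exact value conventions are documented in psy-simple, so treat this as a sketch:
# p.update(cbar='r', cmap='viridis', bounds=np.linspace(-2, 0, 11))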
4,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactions and ANOVA
Note
Step1: Take a look at the data
Step2: Fit a linear model
Step3: Have a look at the created design matrix
Step4: Or since we initially passed in a DataFrame, we have a DataFrame available in
Step5: We keep a reference to the original untouched data in
Step6: Influence statistics
Step7: or get a dataframe
Step8: Now plot the residuals within the groups separately
Step9: Now we will test some interactions using anova or f_test
Step10: Do an ANOVA check
Step11: The design matrix as a DataFrame
Step12: The design matrix as an ndarray
Step13: Looks like one observation is an outlier.
Step14: Replot the residuals
Step15: Plot the fitted values
Step16: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
Step17: Minority Employment Data
Step18: One-way ANOVA
Step19: Two-way ANOVA
Step20: Explore the dataset
Step21: Balanced panel
Step22: You have things available in the calling namespace available in the formula evaluation namespace
Step23: Sum of squares
Illustrates the use of different types of sums of squares (I,II,II)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Do not use Type III with non-orthogonal contrast - ie., Treatment | Python Code:
%matplotlib inline
from urllib.request import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv('salary.table')
except: # recent pandas can read URL without urlopen
url = 'http://stats191.stanford.edu/data/salary.table'
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv('salary.table')
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
Explanation: Interactions and ANOVA
Note: This script is based heavily on Jonathan Taylor's class notes https://web.stanford.edu/class/stats191/notebooks/Interactions.html
Download and format data:
End of explanation
plt.figure(figsize=(6,6))
symbols = ['D', '^']
colors = ['r', 'g', 'blue']
factor_groups = salary_table.groupby(['E','M'])
for values, group in factor_groups:
i,j = values
plt.scatter(group['X'], group['S'], marker=symbols[j], color=colors[i-1],
s=144)
plt.xlabel('Experience');
plt.ylabel('Salary');
Explanation: Take a look at the data:
End of explanation
formula = 'S ~ C(E) + C(M) + X'
lm = ols(formula, salary_table).fit()
print(lm.summary())
Explanation: Fit a linear model:
End of explanation
lm.model.exog[:5]
Explanation: Have a look at the created design matrix:
End of explanation
lm.model.data.orig_exog[:5]
Explanation: Or since we initially passed in a DataFrame, we have a DataFrame available in
End of explanation
lm.model.data.frame[:5]
Explanation: We keep a reference to the original untouched data in
End of explanation
infl = lm.get_influence()
print(infl.summary_table())
Explanation: Influence statistics
End of explanation
df_infl = infl.summary_frame()
df_infl[:5]
Explanation: or get a dataframe
End of explanation
resid = lm.resid
plt.figure(figsize=(6,6));
for values, group in factor_groups:
i,j = values
group_num = i*2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(x, resid[group.index], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('Group');
plt.ylabel('Residuals');
Explanation: Now plot the residuals within the groups separately:
End of explanation
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
Explanation: Now we will test some interactions using anova or f_test
End of explanation
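The same interaction hypothesis can also be phrased directly as a joint F-test on the interaction coefficients. The term labels below follow patsy's naming and are an assumption; check them against interX_lm.model.exog_names:
# print(interX_lm.f_test("C(E)[T.2]:X = 0, C(E)[T.3]:X = 0"))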
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
Explanation: Do an ANOVA check
End of explanation
interM_lm.model.data.orig_exog[:5]
Explanation: The design matrix as a DataFrame
End of explanation
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X');
plt.ylabel('standardized resids');
Explanation: The design matrix as an ndarray
End of explanation
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols('S ~ C(E) + X + C(M)', data=salary_table, subset=idx).fit()
print(lm32.summary())
print('\n')
interX_lm32 = ols('S ~ C(E) * X + C(M)', data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print('\n')
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print('\n')
interM_lm32 = ols('S ~ X + C(E) * C(M)', data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print('\n')
Explanation: Looks like one observation is an outlier.
End of explanation
resid = interM_lm32.get_influence().summary_frame()['standard_resid']
plt.figure(figsize=(6,6))
resid = resid.reindex(X.index)
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X.loc[idx], resid.loc[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X[~[32]]');
plt.ylabel('standardized resids');
Explanation: Replot the residuals
End of explanation
lm_final = ols('S ~ X + C(E)*C(M)', data = salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ['-','--']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], S[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
# drop NA because there is no idx 32 in the final model
fv = lm_final.fittedvalues.reindex(idx).dropna()
x = mf.X.reindex(idx).dropna()
plt.plot(x, fv, ls=lstyle[j], color=colors[i-1])
plt.xlabel('Experience');
plt.ylabel('Salary');
Explanation: Plot the fitted values
End of explanation
U = S - X * interX_lm32.params['X']
plt.figure(figsize=(6,6))
interaction_plot(E, M, U, colors=['red','blue'], markers=['^','D'],
markersize=10, ax=plt.gca())
Explanation: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
End of explanation
try:
jobtest_table = pd.read_table('jobtest.table')
except: # do not have data already
url = 'http://stats191.stanford.edu/data/jobtest.table'
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(['MINORITY'])
fig, ax = plt.subplots(figsize=(6,6))
colors = ['purple', 'green']
markers = ['o', 'v']
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST');
ax.set_ylabel('JPERF');
min_lm = ols('JPERF ~ TEST', data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST')
ax.set_ylabel('JPERF')
fig = abline_plot(model_results = min_lm, ax=ax)
min_lm2 = ols('JPERF ~ TEST + TEST:MINORITY',
data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'] + min_lm2.params['TEST:MINORITY'],
ax=ax, color='green');
min_lm3 = ols('JPERF ~ TEST + MINORITY', data = jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm3.params['Intercept'],
slope = min_lm3.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm3.params['Intercept'] + min_lm3.params['MINORITY'],
slope = min_lm3.params['TEST'], ax=ax, color='green');
min_lm4 = ols('JPERF ~ TEST * MINORITY', data = jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm4.params['Intercept'],
slope = min_lm4.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm4.params['Intercept'] + min_lm4.params['MINORITY'],
slope = min_lm4.params['TEST'] + min_lm4.params['TEST:MINORITY'],
ax=ax, color='green');
# is there any effect of MINORITY on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of MINORITY on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of MINORITY on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
Explanation: Minority Employment Data
End of explanation
try:
rehab_table = pd.read_csv('rehab.table')
except:
url = 'http://stats191.stanford.edu/data/rehab.csv'
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv('rehab.table')
fig, ax = plt.subplots(figsize=(8,6))
fig = rehab_table.boxplot('Time', 'Fitness', ax=ax, grid=False)
rehab_lm = ols('Time ~ C(Fitness)', data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
Explanation: One-way ANOVA
End of explanation
try:
kidney_table = pd.read_table('./kidney.table')
except:
url = 'http://stats191.stanford.edu/data/kidney.table'
kidney_table = pd.read_csv(url, delim_whitespace=True)
Explanation: Two-way ANOVA
End of explanation
kidney_table.head(10)
Explanation: Explore the dataset
End of explanation
kt = kidney_table
plt.figure(figsize=(8,6))
fig = interaction_plot(kt['Weight'], kt['Duration'], np.log(kt['Days']+1),
colors=['red', 'blue'], markers=['D','^'], ms=10, ax=plt.gca())
Explanation: Balanced panel
End of explanation
kidney_lm = ols('np.log(Days+1) ~ C(Duration) * C(Weight)', data=kt).fit()
table10 = anova_lm(kidney_lm)
print(anova_lm(ols('np.log(Days+1) ~ C(Duration) + C(Weight)',
data=kt).fit(), kidney_lm))
print(anova_lm(ols('np.log(Days+1) ~ C(Duration)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
print(anova_lm(ols('np.log(Days+1) ~ C(Weight)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
Explanation: You have things available in the calling namespace available in the formula evaluation namespace
End of explanation
sum_lm = ols('np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)',
data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols('np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)',
data=kt).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
Explanation: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Do not use Type III with non-orthogonal contrast - i.e., Treatment
End of explanation |
4,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Fermi-Hubbard Model
This notebook shows how to use the tensor_basis constructor to build the Hamiltonian of interacting spinful fermions in 1d, described by the Fermi-Hubbard model (FHM)
Step1: To build the basis for spinful fermions, we take two copies of the basis for spinless fermions and tensor them using the tensor_basis constructor. While the tensor_basis can be used to tensor any two bases objects, it does not allow for passing symmetries, other than particle number conservation (we are currently working on developing a separate class which will allow using all symmetries for spinful fermions).
To this end, we define the number of spin-up and spin-down fermions, and proceed as follows
Step2: Alternatively, one can use the spinful_fermion_basis_1d class as well. This class, unlike the tensor_basis class can handle various 1d symmetries in the usual way and should be preferred for dealing with the FHM.
Step3: Defining the site-coupling lists is the same as before (mind the signs in the fermion hopping operator, though!).
The tensor_basis accepts extended operator strings. The idea is that within the subspace of each basis, we use the operator strings belonging to the corresponding underlying basis (for spinless_fermion_basis_1d, the allowed operators are "+", "-", "n", and "I"). We then use a ...|... to separate the operators that act on spin-up (left) and spin-down (right).
For instance, the hopping operators $c_{j,\uparrow}c^\dagger_{j+1,\uparrow}$ and $c_{j,\downarrow}c^\dagger_{j+1,\downarrow}$ are represented as '-+|I' and 'I|-+', repsectively, where 'I' stands for the identity (and can be dropped, see below). On the other hand, the spin-flip hopping process $c_{j,\uparrow}c^\dagger_{j+1,\downarrow}$ would mix the spin-up and spin-down sectors and would take the form '-|+'. | Python Code:
from quspin.operators import hamiltonian # Hamiltonians and operators
from quspin.basis import spinless_fermion_basis_1d, tensor_basis # Hilbert space fermion and tensor bases
import numpy as np # generic math functions
##### define model parameters #####
L=4 # system size
J=1.0 # hopping
U=np.sqrt(2.0) # interaction
mu=0.0 # chemical potential
Explanation: The Fermi-Hubbard Model
This notebook shows how to use the tensor_basis constructor to build the Hamiltonian of interacting spinful fermions in 1d, described by the Fermi-Hubbard model (FHM):
$$H = -J\sum_{i=0,\sigma}^{L-1} \left(c^\dagger_{i\sigma}c_{i+1,\sigma} - c_{i\sigma}c^\dagger_{i+1,\sigma}\right) - \mu\sum_{i=0,\sigma}^{L-1} n_{i\sigma} +U\sum_{i=0}^{L-1} n_{i\uparrow }n_{i\downarrow } $$
where $J$ is the hopping matrix element, $\mu$: the chemical potential, and $U$ -- the onsite $s$-wave interaction.
We begin by loading the libraries and defining the model parameters:
End of explanation
# define boson basis with 3 states per site L bosons in the lattice
N_up = L//2 + L % 2 # number of fermions with spin up
N_down = L//2 # number of fermions with spin down
basis_up=spinless_fermion_basis_1d(L,Nf=N_up)
basis_down=spinless_fermion_basis_1d(L,Nf=N_down)
basis = tensor_basis(basis_up,basis_down) # spinful fermions
print(basis)
Explanation: To build the basis for spinful fermions, we take two copies of the basis for spinless fermions and tensor them using the tensor_basis constructor. While the tensor_basis can be used to tensor any two bases objects, it does not allow for passing symmetries, other than particle number conservation (we are currently working on developing a separate class which will allow using all symmetries for spinful fermions).
To this end, we define the number of spin-up and spin-down fermions, and proceed as follows:
End of explanation
from quspin.basis import spinful_fermion_basis_1d
basis = spinful_fermion_basis_1d(L,Nf=(N_up,N_down))
print(basis)
Explanation: Alternatively, one can use the spinful_fermion_basis_1d class as well. This class, unlike the tensor_basis class can handle various 1d symmetries in the usual way and should be preferred for dealing with the FHM.
End of explanation
# define site-coupling lists
hop_right=[[-J,i,(i+1)%L] for i in range(L)] #PBC
hop_left= [[+J,i,(i+1)%L] for i in range(L)] #PBC
pot=[[-mu,i] for i in range(L)] # -\mu \sum_j n_{j \sigma}
interact=[[U,i,i] for i in range(L)] # U/2 \sum_j n_{j,up} n_{j,down}
# define static and dynamic lists
static=[
['+-|',hop_left], # up hops left
['-+|',hop_right], # up hops right
['|+-',hop_left], # down hops left
['|-+',hop_right], # down hops right
['n|',pot], # up on-site potention
['|n',pot], # down on-site potention
['n|n',interact] # up-down interaction
]
dynamic=[]
# build Hamiltonian
no_checks = dict(check_pcon=False,check_symm=False,check_herm=False)
H=hamiltonian(static,dynamic,basis=basis,dtype=np.float64,**no_checks)
Explanation: Defining the site-coupling lists is the same as before (mind the signs in the fermion hopping operator, though!).
The tensor_basis accepts extended operator strings. The idea is that within the subspace of each basis, we use the operator strings belonging to the corresponding underlying basis (for spinless_fermion_basis_1d, the allowed operators are "+", "-", "n", and "I"). We then use a ...|... to separate the operators that act on spin-up (left) and spin-down (right).
For instance, the hopping operators $c_{j,\uparrow}c^\dagger_{j+1,\uparrow}$ and $c_{j,\downarrow}c^\dagger_{j+1,\downarrow}$ are represented as '-+|I' and 'I|-+', respectively, where 'I' stands for the identity (and can be dropped, see below). On the other hand, the spin-flip hopping process $c_{j,\uparrow}c^\dagger_{j+1,\downarrow}$ would mix the spin-up and spin-down sectors and would take the form '-|+'.
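As an aside (an illustration only, not part of the Hamiltonian built above), a term that mixes the two spin sectors, such as an on-site spin-flip hopping $c^\dagger_{i,\uparrow}c_{i,\downarrow} + \mathrm{h.c.}$, would use exactly these mixed strings. The coupling J_perp and the list spin_flip below are made-up names for this sketch:
```python
# hypothetical illustration only -- not used in the FHM Hamiltonian above
J_perp = 0.5  # assumed spin-flip amplitude
spin_flip = [[J_perp, i, i] for i in range(L)]
static_flip = [
    ['+|-', spin_flip],  # '+' acts on the up (left) sector, '-' on the down (right) sector
    ['-|+', spin_flip],  # the conjugate process (check sign conventions in the QuSpin docs)
]
# such a term does not conserve N_up and N_down separately, so it needs a basis
# without fixed per-spin particle numbers (unlike Nf=(N_up,N_down) used above)
```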
End of explanation |
4,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blowing up things!
So far we learned about functions, conditions and loops. Lets use our knowledge so far to do something fun - blow up things!
In this adventure we will build structures with TNT. Once we are done building with TNT and we are ready we'll explode the TNT.
Lets get started ...
Step1: Task 1
Step2: Task 2
Step3: We need to run the function you defined in Task 2 using the following statement.
python
thread.start_new_thread(placeTNTBlock,())
We need to do so since we are going to toggle the buildingWithTNT variable as the program is running.
Note | Python Code:
# Run this once before starting your tasks
import mcpi.minecraft as minecraft
import mcpi.block as block
import time
import thread
mc = minecraft.Minecraft.create()
Explanation: Blowing up things!
So far we learned about functions, conditions and loops. Lets use our knowledge so far to do something fun - blow up things!
In this adventure we will build structures with TNT. Once we are done building with TNT and we are ready we'll explode the TNT.
Lets get started ...
End of explanation
# Task 1 code
# add a variable with an initial value
# toggle the variable
# print the value of the variable
Explanation: Task 1: Toggling TNT building
We need to know when Steve is building a TNT structure and when he is done building and just wants to move around in the world. There are two phases when Steve is moving around
Steve is moving to build with TNT
Steve is moving around normally and is not building with TNT
We use a variable named buildingWithTNT and set it to the value True when Steve is in the building phase and set the variable to False when he is not building with TNT. Note that the variable takes only two values True and False. When the current value of the variable is True, the next value it should take is False and when the current value is False, the next value the variable should take is True. This technique of setting the next value of the variable to the opposite of its current value is called toggling. In Python an easy way to toggle a boolean variable is shown below
python
myvariable = False # initial value
myvariable = not myvariable # toggles value
Complete the program below to add the variable buildingWithTNT and toggle the variable when run and verify your program runs correctly by printing the variable value with a string like 'buildingWithTNT = False'.
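One possible way to fill in the Task 1 cell so that re-running it keeps toggling the value (a sketch; your version may differ):
```python
# one possible completion: keep the value from earlier runs if it already exists
if 'buildingWithTNT' not in globals():
    buildingWithTNT = False              # initial value, set only on the first run
buildingWithTNT = not buildingWithTNT    # toggle the variable
print('buildingWithTNT = ' + str(buildingWithTNT))
```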
End of explanation
# Task 2 code
Explanation: Task 2: Building with TNT
To build with TNT blocks, we first have to toggle the buildingWithTNT variable by executing the Task1 code block. This will toggle buildingWithTNT to True. Then Steve has to run (or jump or fly) wherever he wants to place the TNT blocks. Once we are done building, we have to toggle the buildingWithTNT variable again by executing the Task1 code block again. This will now toggle the buildingWithTNT variable to False and Steve will now move normally.
Let's write the function placeTNTBlock that will set the block that Steve is currently on to a TNT block. A TNT block has the id block.TNT.id. Remember that in order to define a function, one needs to use def. It's a good idea to review functions in Adventure 3 before you write the function placeTNTBlock.
Make sure that in your function, you set a TNT block only if the buildingWithTNT variable is set to True. Use an if conditional to do so. Your program should look like the one below
```python
def myfunction():
while True:
time.sleep(0.1)
pos = mc.player.getTilePos()
# if buildingWithTNT is True set the block at the current position to block.TNT.id
```
Great Job
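For reference, here is one possible way to complete the function (a sketch, not the only valid answer; mc.setBlock places a block of the given type at the given coordinates):
```python
def placeTNTBlock():
    while True:
        time.sleep(0.1)
        pos = mc.player.getTilePos()
        if buildingWithTNT:
            # data value 1 is used here so the TNT can be detonated by hitting it
            mc.setBlock(pos.x, pos.y, pos.z, block.TNT.id, 1)
```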
End of explanation
# Run Task 2 function on using thread.start_new_thread
Explanation: We need to run the function you defined in Task 2 using the following statement.
python
thread.start_new_thread(placeTNTBlock,())
We need to do so since we are going to toggle the buildingWithTNT variable as the program is running.
Note: Ask your instructor for help if you get stuck running your program or run into errors!
End of explanation |
4,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 5
Step1: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step2: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step3: Deploy the embedding lookup model to AI Platform Prediction
Create the embedding lookup model resource in AI Platform
Step4: Next, deploy the model
Step5: Once the model is deployed, you can verify it in the AI Platform console.
Test the deployed embedding lookup AI Platform Prediction model
Set the AI Platform Prediction API information
Step6: Run the caip_embedding_lookup method to retrieve item embeddings. This method accepts item IDs, calls the embedding lookup model in AI Platform Prediction, and returns the appropriate embedding vectors.
Step7: Test the caip_embedding_lookup method with three item IDs
Step8: ScaNN matching service
The ScaNN matching service performs the following steps
Step9: Use Cloud Build to build the Docker container image
The container runs the gunicorn HTTP web server and executes the Flask app variable defined in the main.py module.
The container image to deploy to AI Platform Prediction is defined in a Dockerfile, as shown in the following code snippet
Step10: Run the following command to verify the container image has been built
Step11: Create a service account for AI Platform Prediction
Create a service account to run the custom container. This is required in cases where you want to grant specific permissions to the service account.
Step12: Grant the Cloud ML Engine (AI Platform) service account the iam.serviceAccountAdmin privilege, and grant the caip-serving service account the privileges required by the ScaNN matching service, which are storage.objectViewer and ml.developer.
Step13: Deploy the custom container to AI Platform Prediction
Create the ANN index model resource in AI Platform
Step14: Deploy the custom container to AI Platform prediction. Note that you use the env-vars parameter to pass environmental variables to the Flask application in the container.
Step15: Test the Deployed ScaNN Index Service
After deploying the custom container, test it by running the caip_scann_match method. This method accepts the parameter query_items, whose value is converted into a space-separated string of item IDs and treated as a single query. That is, a single embedding vector is retrieved from the embedding lookup model, and similar item IDs are retrieved from the ScaNN index given this embedding vector.
Step16: Call the caip_scann_match method with five item IDs and request five match items for each
Step17: (Optional) Deploy the matrix factorization model to AI Platform Prediction
Optionally, you can deploy the matrix factorization model in order to perform exact item matching. The model takes Item1_Id as an input and outputs the top 50 recommended item2_Ids.
Exact matching returns better results, but takes significantly longer than approximate nearest neighbor matching. You might want to use exact item matching in cases where you are working with a very small data set and where latency isn't a primary concern.
Export the model from BigQuery ML to Cloud Storage as a SavedModel
Step18: Deploy the exact matching model to AI Platform Prediction | Python Code:
import numpy as np
import tensorflow as tf
Explanation: Part 5: Deploy the solution to AI Platform Prediction
This notebook is the fifth of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Deploy the embedding lookup model to AI Platform Prediction.
Deploy the ScaNN matching service to AI Platform Prediction by using a custom container. The ScaNN matching service is an application that wraps the ANN index model and provides additional functionality, like mapping item IDs to item embeddings.
Optionally, export and deploy the matrix factorization model to AI Platform for exact matching.
Before starting this notebook, you must run the 04_build_embeddings_scann notebook to build an approximate nearest neighbor (ANN) index for the item embeddings.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
Import libraries
End of explanation
PROJECT_ID = "yourProject" # Change to your project.
PROJECT_NUMBER = "yourProjectNumber" # Change to your project number
BUCKET = "yourBucketName" # Change to the bucket you created.
REGION = "yourPredictionRegion" # Change to your AI Platform Prediction region.
ARTIFACTS_REPOSITORY_NAME = "ml-serving"
EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR = f"gs://{BUCKET}/bqml/embedding_lookup_model"
EMBEDDNIG_LOOKUP_MODEL_NAME = "item_embedding_lookup"
EMBEDDNIG_LOOKUP_MODEL_VERSION = "v1"
INDEX_DIR = f"gs://{BUCKET}/bqml/scann_index"
SCANN_MODEL_NAME = "index_server"
SCANN_MODEL_VERSION = "v1"
KIND = "song"
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
PROJECT_NUMBER: The number of the Google Cloud project you are using to implement this solution. You can find this in the Project info card on the project dashboard page.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
REGION: The region to use for the AI Platform Prediction job.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
!gcloud ai-platform models create {EMBEDDNIG_LOOKUP_MODEL_NAME} --region={REGION}
Explanation: Deploy the embedding lookup model to AI Platform Prediction
Create the embedding lookup model resource in AI Platform:
End of explanation
!gcloud ai-platform versions create {EMBEDDNIG_LOOKUP_MODEL_VERSION} \
--region={REGION} \
--model={EMBEDDNIG_LOOKUP_MODEL_NAME} \
--origin={EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Prediction.")
Explanation: Next, deploy the model:
End of explanation
import googleapiclient.discovery
from google.api_core.client_options import ClientOptions
api_endpoint = f"https://{REGION}-ml.googleapis.com"
client_options = ClientOptions(api_endpoint=api_endpoint)
service = googleapiclient.discovery.build(
serviceName="ml", version="v1", client_options=client_options
)
Explanation: Once the model is deployed, you can verify it in the AI Platform console.
Test the deployed embedding lookup AI Platform Prediction model
Set the AI Platform Prediction API information:
End of explanation
def caip_embedding_lookup(input_items):
request_body = {"instances": input_items}
service_name = f"projects/{PROJECT_ID}/models/{EMBEDDNIG_LOOKUP_MODEL_NAME}/versions/{EMBEDDNIG_LOOKUP_MODEL_VERSION}"
print(f"Calling : {service_name}")
response = (
service.projects().predict(name=service_name, body=request_body).execute()
)
if "error" in response:
raise RuntimeError(response["error"])
return response["predictions"]
Explanation: Run the caip_embedding_lookup method to retrieve item embeddings. This method accepts item IDs, calls the embedding lookup model in AI Platform Prediction, and returns the appropriate embedding vectors.
End of explanation
input_items = ["2114406", "2114402 2120788", "abc123"]
embeddings = caip_embedding_lookup(input_items)
print(f"Embeddings retrieved: {len(embeddings)}")
for idx, embedding in enumerate(embeddings):
print(f"{input_items[idx]}: {embedding[:5]}")
Explanation: Test the caip_embedding_lookup method with three item IDs:
End of explanation
!gcloud beta artifacts repositories create {ARTIFACTS_REPOSITORY_NAME} \
--location={REGION} \
--repository-format=docker
!gcloud beta auth configure-docker {REGION}-docker.pkg.dev --quiet
Explanation: ScaNN matching service
The ScaNN matching service performs the following steps:
Receives one or more item IDs from the client.
Calls the embedding lookup model to fetch the embedding vectors of those item IDs.
Uses these embedding vectors to query the ANN index to find approximate nearest neighbor embedding vectors.
Maps the approximate nearest neighbors embedding vectors to their corresponding item IDs.
Sends the item IDs back to the client.
When the client receives the item IDs of the matches, the song title and artist information is fetched from Datastore in real-time to be displayed and served to the client application.
Note: In practice, recommendation systems combine matches (from one or more indices) with user-provided filtering clauses (like where price <= value and colour =red), as well as other item metadata (like item categories, popularity, and recency) to ensure recommendation freshness and diversity. In addition, ranking is commonly applied after generating the matches to decide the order in which they are served to the user.
ScaNN matching service implementation
The ScaNN matching service is implemented as a Flask application that runs on a gunicorn web server. This application is implemented in the main.py module.
The ScaNN matching service application works as follows:
Uses environmental variables to set configuration information, such as the Google Cloud location of the ScaNN index to load.
Loads the ScaNN index as the ScaNNMatcher object is initiated.
As required by AI Platform Prediction, exposes two HTTP endpoints:
health: a GET method to which AI Platform Prediction sends health checks.
predict: a POST method to which AI Platform Prediction forwards prediction requests.
The predict method expects JSON requests in the form {"instances":[{"query": "item123", "show": 10}]}, where query represents the item ID to retrieve matches for, and show represents the number of matches to retrieve.
The predict method works as follows:
1. Validates the received request object.
1. Extracts the `query` and `show` values from the request object.
1. Calls `embedding_lookup.lookup` with the given query item ID to get its embedding vector from the embedding lookup model.
1. Calls `scann_matcher.match` with the query item embedding vector to retrieve its approximate nearest neighbor item IDs from the ANN Index.
The list of matching item IDs is put into JSON format and returned as the response of the predict method.
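The main.py module itself is not reproduced in this notebook. As a rough, hypothetical sketch of a Flask app with the behavior described above (the lookup and matching helper modules, their constructors, and the hard-coded route strings are assumptions for illustration, not the solution's actual code):
```python
# Hypothetical sketch only -- not the solution's actual main.py.
import os
from flask import Flask, request, jsonify

from lookup import EmbeddingLookup   # assumed helper wrapping the embedding lookup model
from matching import ScaNNMatcher    # assumed helper wrapping the ScaNN index

app = Flask(__name__)
embedding_lookup = EmbeddingLookup()                   # assumed to read PROJECT_ID, REGION, ... from env vars
scann_matcher = ScaNNMatcher(os.environ["INDEX_DIR"])  # assumed to load the ANN index from Cloud Storage

@app.route("/v1/models/index_server/versions/v1", methods=["GET"])
def health():
    return "OK", 200

@app.route("/v1/models/index_server/versions/v1:predict", methods=["POST"])
def predict():
    body = request.get_json(silent=True)
    if not body or "instances" not in body:
        return jsonify({"error": "Invalid request."}), 400
    instance = body["instances"][0]
    query = instance["query"]
    show = instance.get("show", 10)
    vector = embedding_lookup.lookup([query])[0]   # item ID(s) -> embedding vector
    item_ids = scann_matcher.match(vector, show)   # embedding vector -> nearest item IDs
    return jsonify({"predictions": list(item_ids)})
```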
Deploy the ScaNN matching service to AI Platform Prediction
Package the ScaNN matching service application in a custom container and deploy it to AI Platform Prediction.
Create an Artifact Registry for the Docker container image
End of explanation
IMAGE_URL = f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}/{SCANN_MODEL_NAME}:{SCANN_MODEL_VERSION}"
PORT = 5001
SUBSTITUTIONS = ""
SUBSTITUTIONS += f"_IMAGE_URL={IMAGE_URL},"
SUBSTITUTIONS += f"_PORT={PORT}"
!gcloud builds submit --config=index_server/cloudbuild.yaml \
--substitutions={SUBSTITUTIONS} \
--timeout=1h
Explanation: Use Cloud Build to build the Docker container image
The container runs the gunicorn HTTP web server and executes the Flask app variable defined in the main.py module.
The container image to deploy to AI Platform Prediction is defined in a Dockerfile, as shown in the following code snippet:
```
FROM python:3.8-slim
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . ./
ARG PORT
ENV PORT=$PORT
CMD exec gunicorn --bind :$PORT main:app --workers=1 --threads 8 --timeout 1800
```
Build the container image by using Cloud Build and specifying the cloudbuild.yaml file:
End of explanation
repository_id = f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}"
!gcloud beta artifacts docker images list {repository_id}
Explanation: Run the following command to verify the container image has been built:
End of explanation
SERVICE_ACCOUNT_NAME = "caip-serving"
SERVICE_ACCOUNT_EMAIL = f"{SERVICE_ACCOUNT_NAME}@{PROJECT_ID}.iam.gserviceaccount.com"
!gcloud iam service-accounts create {SERVICE_ACCOUNT_NAME} \
--description="Service account for AI Platform Prediction to access cloud resources."
Explanation: Create a service account for AI Platform Prediction
Create a service account to run the custom container. This is required in cases where you want to grant specific permissions to the service account.
End of explanation
!gcloud projects describe {PROJECT_ID} --format="value(projectNumber)"
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/iam.serviceAccountAdmin \
--member=serviceAccount:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/storage.objectViewer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/ml.developer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
Explanation: Grant the Cloud ML Engine (AI Platform) service account the iam.serviceAccountAdmin privilege, and grant the caip-serving service account the privileges required by the ScaNN matching service, which are storage.objectViewer and ml.developer.
End of explanation
!gcloud ai-platform models create {SCANN_MODEL_NAME} --region={REGION}
Explanation: Deploy the custom container to AI Platform Prediction
Create the ANN index model resource in AI Platform:
End of explanation
HEALTH_ROUTE = f"/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}"
PREDICT_ROUTE = f"/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}:predict"
ENV_VARIABLES = f"PROJECT_ID={PROJECT_ID},"
ENV_VARIABLES += f"REGION={REGION},"
ENV_VARIABLES += f"INDEX_DIR={INDEX_DIR},"
ENV_VARIABLES += f"EMBEDDNIG_LOOKUP_MODEL_NAME={EMBEDDNIG_LOOKUP_MODEL_NAME},"
ENV_VARIABLES += f"EMBEDDNIG_LOOKUP_MODEL_VERSION={EMBEDDNIG_LOOKUP_MODEL_VERSION}"
!gcloud beta ai-platform versions create {SCANN_MODEL_VERSION} \
--region={REGION} \
--model={SCANN_MODEL_NAME} \
--image={IMAGE_URL} \
--ports={PORT} \
--predict-route={PREDICT_ROUTE} \
--health-route={HEALTH_ROUTE} \
--machine-type=n1-standard-4 \
--env-vars={ENV_VARIABLES} \
--service-account={SERVICE_ACCOUNT_EMAIL}
print("The model version is deployed to AI Platform Prediction.")
Explanation: Deploy the custom container to AI Platform prediction. Note that you use the env-vars parameter to pass environmental variables to the Flask application in the container.
End of explanation
import requests
from google.cloud import datastore
client = datastore.Client(PROJECT_ID)
def caip_scann_match(query_items, show=10):
request_body = {"instances": [{"query": " ".join(query_items), "show": show}]}
service_name = f"projects/{PROJECT_ID}/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}"
print(f"Calling: {service_name}")
response = (
service.projects().predict(name=service_name, body=request_body).execute()
)
if "error" in response:
raise RuntimeError(response["error"])
match_tokens = response["predictions"]
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
Explanation: Test the Deployed ScaNN Index Service
After deploying the custom container, test it by running the caip_scann_match method. This method accepts the parameter query_items, whose value is converted into a space-separated string of item IDs and treated as a single query. That is, a single embedding vector is retrieved from the embedding lookup model, and similar item IDs are retrieved from the ScaNN index given this embedding vector.
End of explanation
songs = {
"2120788": "Limp Bizkit: My Way",
"1086322": "Jacques Brel: Ne Me Quitte Pas",
"833391": "Ricky Martin: Livin' la Vida Loca",
"1579481": "Dr. Dre: The Next Episode",
"2954929": "Black Sabbath: Iron Man",
}
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_scann_match([item_Id], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
Explanation: Call the caip_scann_match method with five item IDs and request five match items for each:
End of explanation
BQ_DATASET_NAME = "recommendations"
BQML_MODEL_NAME = "item_matching_model"
BQML_MODEL_VERSION = "v1"
BQML_MODEL_OUTPUT_DIR = f"gs://{BUCKET}/bqml/item_matching_model"
!bq --quiet extract -m {BQ_DATASET_NAME}.{BQML_MODEL_NAME} {BQML_MODEL_OUTPUT_DIR}
!saved_model_cli show --dir {BQML_MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default
Explanation: (Optional) Deploy the matrix factorization model to AI Platform Prediction
Optionally, you can deploy the matrix factorization model in order to perform exact item matching. The model takes Item1_Id as an input and outputs the top 50 recommended item2_Ids.
Exact matching returns better results, but takes significantly longer than approximate nearest neighbor matching. You might want to use exact item matching in cases where you are working with a very small data set and where latency isn't a primary concern.
Export the model from BigQuery ML to Cloud Storage as a SavedModel
End of explanation
!gcloud ai-platform models create {BQML_MODEL_NAME} --region={REGION}
!gcloud ai-platform versions create {BQML_MODEL_VERSION} \
--region={REGION} \
--model={BQML_MODEL_NAME} \
--origin={BQML_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Predicton.")
def caip_bqml_matching(input_items, show):
request_body = {"instances": input_items}
service_name = (
f"projects/{PROJECT_ID}/models/{BQML_MODEL_NAME}/versions/{BQML_MODEL_VERSION}"
)
print(f"Calling : {service_name}")
response = (
service.projects().predict(name=service_name, body=request_body).execute()
)
if "error" in response:
raise RuntimeError(response["error"])
match_tokens = response["predictions"][0]["predicted_item2_Id"][:show]
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_bqml_matching([int(item_Id)], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
Explanation: Deploy the exact matching model to AI Platform Prediction
End of explanation |
4,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
State space models - concentrating the scale out of the likelihood function
Step1: Introduction
(much of this is based on Harvey (1989); see especially section 3.4)
State space models can generically be written as follows (here we focus on time-invariant state space models, but similar results apply also to time-varying models)
Step2: There are two parameters in this model that must be chosen
Step3: We can look at the results from the numerical optimizer in the results attribute mle_retvals
Step4: Concentrating out the scale
Now, there are two ways to reparameterize this model as above
Step5: Again, we can use the built-in fit method to find the maximum likelihood estimate of $h$.
Step6: The estimate of $h$ is provided in the middle table of parameters (ratio.irregular), while the estimate of the scale is provided in the upper table. Below, we will show that these estimates are consistent with those from the previous approach.
And we can again look at the results from the numerical optimizer in the results attribute mle_retvals. It turns out that two fewer iterations were required in this case, since there was one fewer parameter to select. Moreover, since the numerical maximization problem was easier, the optimizer was able to find a value that made the gradient for this parameter slightly closer to zero than it was above.
Step7: Comparing estimates
Recall that $h = \sigma_\varepsilon^2 / \sigma_\eta^2$ and the scale is $\sigma_*^2 = \sigma_\eta^2$. Using these definitions, we can see that both models produce nearly identical results
Step8: Example
Step9: These two approaches produce about the same loglikelihood and parameters, although the model with the concentrated scale was able to improve the fit very slightly
Step10: This time, about 1/3 fewer iterations of the optimizer are required under the concentrated approach | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
dta.index = pd.PeriodIndex(start='1959Q1', end='2009Q3', freq='Q')
Explanation: State space models - concentrating the scale out of the likelihood function
End of explanation
class LocalLevel(sm.tsa.statespace.MLEModel):
_start_params = [1., 1.]
_param_names = ['var.level', 'var.irregular']
def __init__(self, endog):
super(LocalLevel, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevel, self).update(params, **kwargs)
self['state_cov', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
Explanation: Introduction
(much of this is based on Harvey (1989); see especially section 3.4)
State space models can generically be written as follows (here we focus on time-invariant state space models, but similar results apply also to time-varying models):
$$
\begin{align}
y_t & = Z \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H) \
\alpha_{t+1} & = T \alpha_t + R \eta_t \quad \eta_t \sim N(0, Q)
\end{align}
$$
Often, some or all of the values in the matrices $Z, H, T, R, Q$ are unknown and must be estimated; in Statsmodels, estimation is often done by finding the parameters that maximize the likelihood function. In particular, if we collect the parameters in a vector $\psi$, then each of these matrices can be thought of as functions of those parameters, for example $Z = Z(\psi)$, etc.
Usually, the likelihood function is maximized numerically, for example by applying quasi-Newton "hill-climbing" algorithms, and this becomes more and more difficult the more parameters there are. It turns out that in many cases we can reparameterize the model as $[\psi_*', \sigma_*^2]'$, where $\sigma_*^2$ is the "scale" of the model (usually, it replaces one of the error variance terms) and it is possible to find the maximum likelihood estimate of $\sigma_*^2$ analytically, by differentiating the likelihood function. This implies that numerical methods are only required to estimate the parameters $\psi_*$, which has dimension one less than that of $\psi$.
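For background (a standard result along the lines of the Harvey (1989) treatment, stated here rather than derived): if each covariance matrix is written with the scale factored out, e.g. $H = \sigma_*^2 H_*$ and $Q = \sigma_*^2 Q_*$, then running the Kalman filter with the scale set to one produces one-step-ahead prediction errors $v_t$ with variances $F_t$, and for a univariate series with $n$ observations the maximizing value of the scale has the closed form
$$
\hat{\sigma}_*^2 = \frac{1}{n} \sum_{t=1}^n \frac{v_t^2}{F_t}
$$
Substituting this back into the log-likelihood gives the concentrated (profile) likelihood, which only needs to be maximized numerically over $\psi_*$.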
Example: local level model
(see, for example, section 4.2 of Harvey (1989))
As a specific example, consider the local level model, which can be written as:
$$
\begin{align}
y_t & = \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \
\alpha_{t+1} & = \alpha_t + \eta_t \quad \eta_t \sim N(0, \sigma_\eta^2)
\end{align}
$$
In this model, $Z, T,$ and $R$ are all fixed to be equal to $1$, and there are two unknown parameters, so that $\psi = [\sigma_\varepsilon^2, \sigma_\eta^2]$.
Typical approach
First, we show how to define this model without concentrating out the scale, using Statsmodels' state space library:
End of explanation
mod = LocalLevel(dta.infl)
res = mod.fit()
print(res.summary())
Explanation: There are two parameters in this model that must be chosen: var.level $(\sigma_\eta^2)$ and var.irregular $(\sigma_\varepsilon^2)$. We can use the built-in fit method to choose them by numerically maximizing the likelihood function.
In our example, we are applying the local level model to consumer price index inflation.
End of explanation
print(res.mle_retvals)
Explanation: We can look at the results from the numerical optimizer in the results attribute mle_retvals:
End of explanation
class LocalLevelConcentrated(sm.tsa.statespace.MLEModel):
_start_params = [1.]
_param_names = ['ratio.irregular']
def __init__(self, endog):
super(LocalLevelConcentrated, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
self['state_cov', 0, 0] = 1
self.ssm.filter_concentrated = True
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevelConcentrated, self).update(params, **kwargs)
self['obs_cov', 0, 0] = params[0]
Explanation: Concentrating out the scale
Now, there are two ways to reparameterize this model as above:
The first way is to set $\sigma_*^2 \equiv \sigma_\varepsilon^2$ so that $\psi_* = \psi / \sigma_\varepsilon^2 = [1, q_\eta]$ where $q_\eta = \sigma_\eta^2 / \sigma_\varepsilon^2$.
The second way is to set $\sigma_*^2 \equiv \sigma_\eta^2$ so that $\psi_* = \psi / \sigma_\eta^2 = [h, 1]$ where $h = \sigma_\varepsilon^2 / \sigma_\eta^2$.
In the first case, we only need to numerically maximize the likelihood with respect to $q_\eta$, and in the second case we only need to numerically maximize the likelihood with respect to $h$.
Either approach would work well in most cases, and in the example below we will use the second method.
To reformulate the model to take advantage of the concentrated likelihood function, we need to write the model in terms of the parameter vector $\psi_* = [g, 1]$. Because this parameter vector defines $\sigma_\eta^2 \equiv 1$, we now include a new line self['state_cov', 0, 0] = 1 and the only unknown parameter is $h$. Because our parameter $h$ is no longer a variance, we renamed it here to be ratio.irregular.
The key piece that is required to formulate the model so that the scale can be computed from the Kalman filter recursions (rather than selected numerically) is setting the flag self.ssm.filter_concentrated = True.
End of explanation
mod_conc = LocalLevelConcentrated(dta.infl)
res_conc = mod_conc.fit()
print(res_conc.summary())
Explanation: Again, we can use the built-in fit method to find the maximum likelihood estimate of $h$.
End of explanation
print(res_conc.mle_retvals)
Explanation: The estimate of $h$ is provided in the middle table of parameters (ratio.irregular), while the estimate of the scale is provided in the upper table. Below, we will show that these estimates are consistent with those from the previous approach.
And we can again look at the results from the numerical optimizer in the results attribute mle_retvals. It turns out that two fewer iterations were required in this case, since there was one fewer parameter to select. Moreover, since the numerical maximization problem was easier, the optimizer was able to find a value that made the gradient for this parameter slightly closer to zero than it was above.
End of explanation
print('Original model')
print('var.level = %.5f' % res.params[0])
print('var.irregular = %.5f' % res.params[1])
print('\nConcentrated model')
print('scale = %.5f' % res_conc.scale)
print('h * scale = %.5f' % (res_conc.params[0] * res_conc.scale))
Explanation: Comparing estimates
Recall that $h = \sigma_\varepsilon^2 / \sigma_\eta^2$ and the scale is $\sigma_*^2 = \sigma_\eta^2$. Using these definitions, we can see that both models produce nearly identical results:
End of explanation
# Typical approach
mod_ar = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct')
res_ar = mod_ar.fit()
# Estimating the model with the scale concentrated out
mod_ar_conc = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct', concentrate_scale=True)
res_ar_conc = mod_ar_conc.fit()
Explanation: Example: SARIMAX
By default in SARIMAX models, the variance term is chosen by numerically maximizing the likelihood function, but an option has been added to allow concentrating the scale out.
End of explanation
print('Loglikelihood')
print('- Original model: %.4f' % res_ar.llf)
print('- Concentrated model: %.4f' % res_ar_conc.llf)
print('\nParameters')
print('- Original model: %.4f, %.4f, %.4f, %.4f' % tuple(res_ar.params))
print('- Concentrated model: %.4f, %.4f, %.4f, %.4f' % (tuple(res_ar_conc.params) + (res_ar_conc.scale,)))
Explanation: These two approaches produce about the same loglikelihood and parameters, although the model with the concentrated scale was able to improve the fit very slightly:
End of explanation
print('Optimizer iterations')
print('- Original model: %d' % res_ar.mle_retvals['iterations'])
print('- Concentrated model: %d' % res_ar_conc.mle_retvals['iterations'])
Explanation: This time, about 1/3 fewer iterations of the optimizer are required under the concentrated approach:
End of explanation |
4,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
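As a rough illustration only (this is not the project's actual helper.py), mapping 8-bit pixel values into that -0.5 to 0.5 range could look like:
```python
import numpy as np

def scale_images(batch):
    # map uint8 pixel values in [0, 255] into [-0.5, 0.5]
    return batch.astype(np.float32) / 255.0 - 0.5
```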
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following the tuple (tensor of real input images, tensor of z data)
End of explanation
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
# TODO: Implement Function
alpha=0.2
# TODO: Implement Function
with tf.variable_scope('discriminator', reuse=reuse):
# input 28x28x3
x1 = tf.layers.conv2d(images, 64, 5, strides=1,padding='same')
relu1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, 256, 5, strides=1, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha*bn3, bn3)
        flat = tf.reshape(relu3, (-1, 14*14*256))  # relu3 is 14x14x256 after strides of 1, 2, 1 on a 28x28 input
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
# TODO: Implement Function
alpha = 0.2
# TODO: Implement Function
with tf.variable_scope('generator', reuse=False if is_train==True else True):
x1 = tf.layers.dense(z, 7*7*512)
x1 = tf.reshape(x1, (-1, 7, 7, 512))
x1 = tf.layers.batch_normalization(x1, training=is_train)
x1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides = 1, padding='same')
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = tf.maximum(alpha*x2, x2)
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides = 2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = tf.maximum(alpha*x3, x3)
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides = 2, padding='same')
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# TODO: Implement Function
g_model = generator(input_z, out_channel_dim, is_train=True)
d_model_real, d_logits_real = discriminator(input_real, reuse=False)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# TODO: Implement Function
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]
d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]
with tf.control_dependencies(d_update_ops):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(g_update_ops):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
# TODO: Build Model
z_size = z_dim
steps = 0
input_real, input_z, testt_ = model_inputs(*data_shape[1:4], z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
steps += 1
batch_images = 2*batch_images
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z})
if steps % 10 == 0:
# Every 10 steps, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
if steps % 100 == 0:
show_generator_output(sess, 6, input_z, data_shape[3], data_image_mode)
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 32
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
4,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Github
https
Step3: List Comprehensions
Step4: Dictionaries
Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well. | Python Code:
# Create a [list]
days = ['Monday', # multiple lines
'Tuesday', # acceptable
'Wednesday',
'Thursday',
'Friday',
'Saturday',
'Sunday',
] # trailing comma is fine!
days
# Simple for-loop
for day in days:
print(day)
# Double for-loop
for day in days:
for letter in day:
print(letter)
print(days)
print(*days)
# Double for-loop
for day in days:
for letter in day:
print(letter)
print()
for day in days:
for letter in day:
print(letter.lower())
Explanation: Github
https://github.com/jbwhit/OSCON-2015/commit/6750b962606db27f69162b802b5de4f84ac916d5
A few Python Basics
End of explanation
length_of_days = [len(day) for day in days]
length_of_days
letters = [letter for day in days
for letter in day]
print(letters)
letters = [letter for day in days for letter in day]
print(letters)
[num for num in xrange(10) if num % 2]
[num for num in xrange(10) if num % 2 else "doesn't work"]
[num if num % 2 else "works" for num in xrange(10)]
[num for num in xrange(10)]
sorted_letters = sorted([x.lower() for x in letters])
print(sorted_letters)
unique_sorted_letters = sorted(set(sorted_letters))
print("There are", len(unique_sorted_letters), "unique letters in the days of the week.")
print("They are:", ''.join(unique_sorted_letters))
print("They are:", '; '.join(unique_sorted_letters))
def first_three(input_string):
Takes an input string and returns the first 3 characters.
return input_string[:3]
import numpy as np
# tab
np.linspace()
[first_three(day) for day in days]
def last_N(input_string, number=2):
Takes an input string and returns the last N characters.
return input_string[-number:]
[last_N(day, 4) for day in days if len(day) > 6]
from math import pi
print([str(round(pi, i)) for i in xrange(2, 9)])
list_of_lists = [[i, round(pi, i)] for i in xrange(2, 9)]
print(list_of_lists)
for sublist in list_of_lists:
print(sublist)
# Let this be a warning to you!
# If you see python code like the following in your work:
for x in range(len(list_of_lists)):
print("Decimals:", list_of_lists[x][0], "expression:", list_of_lists[x][1])
print(list_of_lists)
# Change it to look more like this:
for decimal, rounded_pi in list_of_lists:
print("Decimals:", decimal, "expression:", rounded_pi)
# enumerate if you really need the index
for index, day in enumerate(days):
print(index, day)
Explanation: List Comprehensions
End of explanation
from IPython.display import IFrame, HTML
HTML('<iframe src=https://en.wikipedia.org/wiki/Hash_table width=100% height=550></iframe>')
fellows = ["Jonathan", "Alice", "Bob"]
universities = ["UCSD", "UCSD", "Vanderbilt"]
for x, y in zip(fellows, universities):
print(x, y)
# Don't do this
{x: y for x, y in zip(fellows, universities)}
# Doesn't work like you might expect
{zip(fellows, universities)}
dict(zip(fellows, universities))
fellows
fellow_dict = {fellow.lower(): university
for fellow, university in zip(fellows, universities)}
fellow_dict
fellow_dict['bob']
rounded_pi = {i:round(pi, i) for i in xrange(2, 9)}
rounded_pi[5]
sum([i ** 2 for i in range(10)])
sum(i ** 2 for i in range(10))
huh = (i ** 2 for i in range(10))
huh.next()
Explanation: Dictionaries
Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well.
End of explanation |
4,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Novelty Functions
To detect note onsets, we want to locate sudden changes in the audio signal that mark the beginning of transient regions. Often, an increase in the signal's amplitude envelope will denote an onset candidate. However, that is not always the case, for notes can change from one pitch to another without changing amplitude, e.g. a violin playing slurred notes.
Novelty functions are functions which denote local changes in signal properties such as energy or spectral content. We will look at two novelty functions
Step1: Plot the signal
Step2: Listen
Step3: RMS Energy
librosa.feature.rmse returns the root-mean-square (RMS) energy for each frame of audio. We will compute the RMS energy as well as its first-order difference.
Step4: To obtain an energy novelty function, we perform half-wave rectification (FMP, p. 307) on rmse_diff, i.e. any negative values are set to zero. Equivalently, we can apply the function $\max(0, x)$
Step5: Plot all three functions together
Step6: Log Energy
The human perception of sound intensity is logarithmic in nature. To account for this property, we can apply a logarithm function to the energy before taking the first-order difference.
Because $\log(x)$ diverges as $x$ approaches zero, a common alternative is to use $\log(1 + \lambda x)$. This function equals zero when $x$ is zero, but it behaves like $\log(\lambda x)$ when $\lambda x$ is large. This operation is sometimes called logarithmic compression (FMP, p. 310).
Step7: Spectral-based Novelty Functions
There are two problems with the energy novelty function
Step8: Listen
Step9: The energy novelty function remains roughly constant
Step10: Instead, we will compute a spectral novelty function (FMP, p. 309)
Step11: Questions
Novelty functions are dependent on frame_length and hop_length. Adjust these two parameters. How do they affect the novelty function?
Try with other audio files. How do the novelty functions compare? | Python Code:
x, sr = librosa.load('audio/simple_loop.wav')
print(x.shape, sr)
Explanation: ← Back to Index
Novelty Functions
To detect note onsets, we want to locate sudden changes in the audio signal that mark the beginning of transient regions. Often, an increase in the signal's amplitude envelope will denote an onset candidate. However, that is not always the case, for notes can change from one pitch to another without changing amplitude, e.g. a violin playing slurred notes.
Novelty functions are functions which denote local changes in signal properties such as energy or spectral content. We will look at two novelty functions:
Energy-based novelty functions (FMP, p. 306)
Spectral-based novelty functions (FMP, p. 309)
Energy-based Novelty Functions
Playing a note often coincides with a sudden increase in signal energy. To detect this sudden increase, we will compute an energy novelty function (FMP, p. 307):
Compute the short-time energy in the signal.
Compute the first-order difference in the energy.
Half-wave rectify the first-order difference.
First, load an audio file into the NumPy array x and sampling rate sr.
End of explanation
plt.figure(figsize=(14, 5))
librosa.display.waveplot(x, sr)
Explanation: Plot the signal:
End of explanation
ipd.Audio(x, rate=sr)
Explanation: Listen:
End of explanation
hop_length = 512
frame_length = 1024
rmse = librosa.feature.rmse(x, frame_length=frame_length, hop_length=hop_length).flatten()
rmse_diff = numpy.zeros_like(rmse)
rmse_diff[1:] = numpy.diff(rmse)
print(rmse.shape)
print(rmse_diff.shape)
Explanation: RMS Energy
librosa.feature.rmse returns the root-mean-square (RMS) energy for each frame of audio. We will compute the RMS energy as well as its first-order difference.
End of explanation
energy_novelty = numpy.max([numpy.zeros_like(rmse_diff), rmse_diff], axis=0)
Explanation: To obtain an energy novelty function, we perform half-wave rectification (FMP, p. 307) on rmse_diff, i.e. any negative values are set to zero. Equivalently, we can apply the function $\max(0, x)$:
End of explanation
frames = numpy.arange(len(rmse))
t = librosa.frames_to_time(frames, sr=sr)
plt.figure(figsize=(15, 6))
plt.plot(t, rmse, 'b--', t, rmse_diff, 'g--^', t, energy_novelty, 'r-')
plt.xlim(0, t.max())
plt.xlabel('Time (sec)')
plt.legend(('RMSE', 'delta RMSE', 'energy novelty'))
Explanation: Plot all three functions together:
End of explanation
log_rmse = numpy.log1p(10*rmse)
log_rmse_diff = numpy.zeros_like(log_rmse)
log_rmse_diff[1:] = numpy.diff(log_rmse)
log_energy_novelty = numpy.max([numpy.zeros_like(log_rmse_diff), log_rmse_diff], axis=0)
plt.figure(figsize=(15, 6))
plt.plot(t, log_rmse, 'b--', t, log_rmse_diff, 'g--^', t, log_energy_novelty, 'r-')
plt.xlim(0, t.max())
plt.xlabel('Time (sec)')
plt.legend(('log RMSE', 'delta log RMSE', 'log energy novelty'))
Explanation: Log Energy
The human perception of sound intensity is logarithmic in nature. To account for this property, we can apply a logarithm function to the energy before taking the first-order difference.
Because $\log(x)$ diverges as $x$ approaches zero, a common alternative is to use $\log(1 + \lambda x)$. This function equals zero when $x$ is zero, but it behaves like $\log(\lambda x)$ when $\lambda x$ is large. This operation is sometimes called logarithmic compression (FMP, p. 310).
End of explanation
sr = 22050
def generate_tone(midi):
T = 0.5
t = numpy.linspace(0, T, int(T*sr), endpoint=False)
f = librosa.midi_to_hz(midi)
return numpy.sin(2*numpy.pi*f*t)
x = numpy.concatenate([generate_tone(midi) for midi in [48, 52, 55, 60, 64, 67, 72, 76, 79, 84]])
Explanation: Spectral-based Novelty Functions
There are two problems with the energy novelty function:
It is sensitive to energy fluctuations belonging to the same note.
It is not sensitive to spectral fluctuations between notes where amplitude remains the same.
For example, consider the following audio signal composed of pure tones of equal magnitude:
End of explanation
ipd.Audio(x, rate=sr)
Explanation: Listen:
End of explanation
hop_length = 512
frame_length = 1024
rmse = librosa.feature.rmse(x, frame_length=frame_length, hop_length=hop_length).flatten()
rmse_diff = numpy.zeros_like(rmse)
rmse_diff[1:] = numpy.diff(rmse)
energy_novelty = numpy.max([numpy.zeros_like(rmse_diff), rmse_diff], axis=0)
frames = numpy.arange(len(rmse))
t = librosa.frames_to_time(frames, sr=sr)
plt.figure(figsize=(15, 4))
plt.plot(t, rmse, 'b--', t, rmse_diff, 'g--^', t, energy_novelty, 'r-')
plt.xlim(0, t.max())
plt.xlabel('Time (sec)')
plt.legend(('RMSE', 'delta RMSE', 'energy novelty'))
Explanation: The energy novelty function remains roughly constant:
End of explanation
spectral_novelty = librosa.onset.onset_strength(x, sr=sr)
frames = numpy.arange(len(spectral_novelty))
t = librosa.frames_to_time(frames, sr=sr)
plt.figure(figsize=(15, 4))
plt.plot(t, spectral_novelty, 'r-')
plt.xlim(0, t.max())
plt.xlabel('Time (sec)')
plt.legend(('Spectral Novelty',))
Explanation: Instead, we will compute a spectral novelty function (FMP, p. 309):
Compute the log-amplitude spectrogram.
Within each frequency bin, $k$, compute the energy novelty function as shown earlier, i.e. (a) first-order difference, and (b) half-wave rectification.
Sum across all frequency bins, $k$.
Luckily, librosa has librosa.onset.onset_strength which computes a novelty function using spectral flux.
End of explanation
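For intuition, the same spectral novelty can be sketched by hand from the three steps above (assuming x, numpy, and librosa from the earlier cells; librosa.onset.onset_strength adds mel scaling and smoothing, so the curves will differ in detail):
S = numpy.abs(librosa.stft(x, hop_length=512))
log_S = numpy.log1p(S)                      # log-amplitude spectrogram
flux = numpy.diff(log_S, axis=1)            # first-order difference within each frequency bin
flux = numpy.maximum(0, flux)               # half-wave rectification
manual_novelty = flux.sum(axis=0)           # sum across frequency bins
print(manual_novelty.shape)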
ls audio
Explanation: Questions
Novelty functions are dependent on frame_length and hop_length. Adjust these two parameters. How do they affect the novelty function?
Try with other audio files. How do the novelty functions compare?
End of explanation |
4,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'></a>
Random Forests
May 2017
<br>
This is a study for a blog post to appear on Data Simple. It will focus on the theory and scikit-learn implementation of the Random Forest machine learning (ML) algorithm.
<br><br>
Contents
Introduction
Decision Trees
Random Forests
Extremely Randomized Trees
<br><br>
<a id='intro'></a>
1. Introduction
Random Forests are a popular example of an Ensemble Learning method. Ensemble Learning consists of combining multiple ML models in order to achieve higher predictive performance than could be obtained using either of the individual models alone.
Let's start by generating some toy data using scikit-learn's make_blobs function. Let's make it a two-dimensional data set, in order to allow easy visualization. There will be, say, 300 observations evenly split between three classes ("Yellow", "Blue", and "Red").
Step1: <a id='trees'></a>
2. Decision Trees
2.1 Making predictions
Let's grow a Decision Tree on our data using scikit-learn to take a closer look at how this works in practice.
Step2: We have grown a tree on our 2-D data set. Classifying new examples is simple
Step3: We can visualize the Decision Tree model using the dot tool in the graphviz package. First generate a graph definition as a .dot file, using the export_graphviz method. Then convert the .dot file to a .png image file.
Step4: Generate .png image using graphviz package
Step5: 2.2 "Growing" a Decision Tree model
scikit-learn uses the Classification And Regression Tree (CART) algorithm, which minimizes the following cost function at each split
Step6: If left unconstrained the algorithm tries hard to fit the data, learning every single data point's class and thus producing an overcomplicated model.
Step7: <a id='forests'></a>
3. Random Forests
3.1 Building an ensemble of Decision Trees
An ensemble of Decision Trees is called a Random Forest. Random Forests are an example of an ensemble using the same ML algorithm to build multiple models each trained on a different random subset of the training data. When the training data sampling for each predictor is performed with replacement, the method is called Bootstrap aggregating, commonly abbreviated to Bagging.
Let's grow a Random Forest using scikit-learn's BaggingClassifier
Step8: Parameter oob_score=True above tells scikit-learn to perform Out-of-bag Evaluation after training and we can now use the attribute oob_score_ to check the score.
Step9: 3.2 The RandomForestClassifier class
As you may know, scikit-learn actually has a RandomForestClassifier class pre-built for you, so you can leave the bagging classifier class to use when you're bagging other models for your custom-built ensembles. Let's build a Random Forest with the RandomForestClassifier class and an additional regularization hyperparameter by setting max_leaf_nodes=10.
Step10: <a id='extra_trees'></a>
4. Extremely Randomized Trees
scikit-learn provides yet another ensemble algorithm introducing additional randomness | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# Generate data
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, n_features=2, centers=3,
cluster_std=4, random_state=42)
# Plot data
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", marker='.')
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", marker='.')
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "rd", marker='.')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.title("Toy data set\n", fontsize=16)
#plt.savefig('toy_data.png', dpi=300, bbox_inches='tight')
plt.figure(figsize=(4, 3))
Explanation: <a id='top'></a>
Random Forests
May 2017
<br>
This is a study for a blog post to appear on Data Simple. It will focus on the theory and scikit-learn implementation of the Random Forest machine learning (ML) algorithm.
<br><br>
Contents
Introduction
Decision Trees
Random Forests
Extremely Randomized Trees
<br><br>
<a id='intro'></a>
1. Introduction
Random Forests are a popular example of an Ensemble Learning method. Ensemble Learning consists of combining multiple ML models in order to achieve higher predictive performance than could be obtained using either of the individual models alone.
Let's start by generating some toy data using scikit-learn's make_blobs function. Let's make it a two-dimensional data set, in order to allow easy visualization. There will be, say, 300 observations evenly split between three classes ("Yellow", "Blue", and "Red").
End of explanation
from sklearn.tree import DecisionTreeClassifier
# Instantiate the classifier class
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
# Grow a Decision Tree
tree_clf.fit(X, y)
Explanation: <a id='trees'></a>
2. Decision Trees
2.1 Making predictions
Let's grow a Decision Tree on our data using scikit-learn to take a closer look at how this works in practice.
End of explanation
import numpy as np
def predict_class(data_point, classes=['Yellow', 'Blue', 'Red']):
# Convert to appropriate numpy array
data_point = np.array(data_point).reshape(1, -1)
# Classify
result = tree_clf.predict(data_point)
# Print output
print('Predicted class for point {}: {}'.format(data_point,
classes[np.asscalar(result)]))
predict_class([-5, 10])
predict_class([10, 5])
predict_class([-10, -10])
Explanation: We have grown a tree on our 2-D data set. Classifying new examples is simple:
End of explanation
from sklearn.tree import export_graphviz
export_graphviz(tree_clf, out_file='tree.dot',
feature_names=['x1', 'x2'],
class_names=['Yellow', 'Blue', 'Red'],
rounded=True, filled=True)
Explanation: We can visualize the Decision Tree model using the dot tool in the graphviz package. First generate a graph definition as a .dot file, using the export_graphviz method. Then convert the .dot file to a .png image file.
End of explanation
from matplotlib.colors import ListedColormap
def compute_decision_boundaries(clf, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 300)
x2s = np.linspace(axes[2], axes[3], 300)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
return x1, x2, y_pred
def plot_feature_space(clf, X, y, axes, file_name=None):
x1, x2, y_pred = compute_decision_boundaries(clf, X, y, axes)
custom_cmap = ListedColormap(['y','b','r'])
plt.contourf(x1, x2, y_pred, cmap=custom_cmap, alpha=0.1, linewidth=1)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", marker='.')
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", marker='.')
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "rd", marker='.')
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
if file_name is not None:
plt.savefig(file_name, dpi=300, bbox_inches='tight')
plt.figure(figsize=(8, 4))
plot_feature_space(tree_clf, X, y, axes=[-16, 15, -20, 20], file_name='max_depth2.png')
Explanation: Generate .png image using graphviz package:
$ dot -Tpng tree.dot -o tree.png
Here's the tree visualization.
We can now see the way the model predicts new instance classes: it starts off at the root node, with all 300 training examples split evenly between the three classes, and asks whether feature $x_2$ value is lower than or equal to $-1.3707$. It splits the data in two groups according to the answer and then goes on to ask another question in each of the two following internal nodes. These, in turn, split the data between two groups each.
Let's define a function to visualize the decision boundaries and a function to plot them together with the training data using matplotlib.
End of explanation
# Instantiate the classifier class for a deeper tree
tree_clf = DecisionTreeClassifier(max_depth=3, random_state=42)
# Grow a Decision Tree
tree_clf.fit(X, y)
plot_feature_space(tree_clf, X, y, axes=[-16, 15, -20, 20], file_name='max_depth3.png')
Explanation: 2.2 "Growing" a Decision Tree model
scikit-learn uses the Classification And Regression Tree (CART) algorithm, which minimizes the following cost function at each split:
$$ J(k, t_k) = \dfrac{m_{left}}{m}I_{left} + \dfrac{m_{right}}{m}I_{right} $$
$$
\text{with}
\begin{cases}
I_{left/right} \text{ impurity of the left/right subset,}\\
m_{left/right} \text{ number of instances in the left/right subset.}
\end{cases}
$$
The impurity $I$ measure is typically the Gini impurity. Formally:
$$ I_G(t) = 1 - \sum_{k=1}^K p_{k,t}^2 $$
$$
\text{with}
\begin{cases}
k \in \{1, \ldots, K\} \text{ index of the class,}\\
p_{k,t} \text{ ratio of class $k$ instances among the training instances in node $t$}
\end{cases}
$$
You can go back to the image of the tree structure and check the "gini" attribute at each node. Let's take, for example, the purple leaf node $t=4$. Using the definition above, we can calculate the Gini impurity at this node and get the indicated value of $0.0425$:
$$ I_G(4) = 1 - \sum_{k=1}^K p_{k,4}^2 \\
= 1 - \left( \left(\dfrac{0}{92}\right)^2 + \left(\dfrac{2}{92}\right)^2 + \left(\dfrac{90}{92}\right)^2 \right) \\
= 1 - ( 0 + 0.00047 + 0.95699 ) \\
= 0.0425 $$
I used the argument max_depth=2 above. This was mainly to generate a small tree that was easy to visualize. The result was arguably an underfitting model that could be improved. Here is what it looks like using max_depth=3.
End of explanation
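As a quick arithmetic check of the node-4 example above (a sketch; the class counts 0, 2 and 90 are read off the tree figure):
counts = [0, 2, 90]
total = sum(counts)
gini = 1 - sum((c / float(total)) ** 2 for c in counts)
print(round(gini, 4))  # 0.0425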
# Instantiate the classifier class with no limits
tree_clf = DecisionTreeClassifier(random_state=42)
# Grow a Decision Tree
tree_clf.fit(X, y)
plot_feature_space(tree_clf, X, y, axes=[-16, 15, -20, 20], file_name='unlimited.png')
Explanation: If left unconstrained the algorithm tries hard to fit the data, learning every single data point's class and thus producing an overcomplicated model.
End of explanation
from sklearn.ensemble import BaggingClassifier
bag_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
max_samples=50, bootstrap=True, n_jobs=-1,
oob_score=True, random_state=42)
bag_clf.fit(X, y)
plot_feature_space(bag_clf, X, y, axes=[-16, 15, -20, 20], file_name='bagging.png')
Explanation: <a id='forests'></a>
3. Random Forests
3.1 Building an ensemble of Decision Trees
An ensemble of Decision Trees is called a Random Forest. Random Forests are an example of an ensemble using the same ML algorithm to build multiple models each trained on a different random subset of the training data. When the training data sampling for each predictor is performed with replacement, the method is called Bootstrap aggregating, commonly abbreviated to Bagging.
Let's grow a Random Forest using scikit-learn's BaggingClassifier: 50 Decision Trees, each trained on a random sample of 50 training set instances.
End of explanation
print('Out-of-bag evaluation score: {}%'.format(round(bag_clf.oob_score_, 4)*100))
Explanation: Parameter oob_score=True above tells scikit-learn to perform Out-of-bag Evaluation after training and we can now use the attribute oob_score_ to check the score.
End of explanation
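For comparison, a quick sketch: the accuracy on the training data itself is typically more optimistic than the out-of-bag estimate, which is closer to generalization performance.
print('Training-set accuracy: {}%'.format(round(bag_clf.score(X, y), 4)*100))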
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=50, max_leaf_nodes=10, n_jobs=-1,
bootstrap=True,
oob_score=True, random_state=42)
rf_clf.fit(X, y)
plot_feature_space(rf_clf, X, y, axes=[-16, 15, -20, 20], file_name='random_forest.png')
print('Out-of-bag evaluation score: {}%'.format(round(rf_clf.oob_score_, 4)*100))
Explanation: 3.2 The RandomForestClassifier class
As you may know, scikit-learn actually has a RandomForestClassifier class pre-built for you, so you can leave the bagging classifier class to use when you're bagging other models for your custom-built ensembles. Let's build a Random Forest with the RandomForestClassifier class and an additional regularization hyperparameter by setting max_leaf_nodes=10.
End of explanation
from sklearn.ensemble import ExtraTreesClassifier
extra_clf = ExtraTreesClassifier(n_estimators=50, max_leaf_nodes=10, n_jobs=-1,
bootstrap=True,
oob_score=True, random_state=42)
extra_clf.fit(X, y)
plot_feature_space(extra_clf, X, y, axes=[-16, 15, -20, 20], file_name='extra_trees.png')
print('Out-of-bag evaluation score: {}%'.format(round(extra_clf.oob_score_, 4)*100))
Explanation: <a id='extra_trees'></a>
4. Extremely Randomized Trees
scikit-learn provides yet another ensemble algorithm introducing additional randomness: the Extremely Randomized Trees ensemble or Extra-Trees. Here's what that looks like with our toy data.
End of explanation |
4,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2.2 DataFrame
Content
Step1: 2.1.1 DataFrame Structure
Initializing a Dataframe.
Step2: 2.2.2 Working with columns
Step3: Exercise
Step4: There are many commonly used column-wide methods/attributes
Step5: 2.2.3 Working with rows
Step6: 2.2.4 Conditional Selection
Step7: 2.2.5 Case Study
Step8: Q1
Step9: Q2
Step10: Q3
Step11: Q4
Step12: Q5
Step13: Q6
Step14: Q7
Step15: Q8 | Python Code:
import numpy as np
import pandas as pd
Explanation: 2.2 DataFrame
Content:
- 2.2.1 DataFrame Structure
- 2.2.2 Working with Columns
- 2.2.3 Working with Rows
- 2.2.4 Conditional Selection
- 2.2.5 Case Study: Olympic Games
A DataFrame is a two dimensional data structure with columns of potentially different data types. It can be considered like a table or a spreadsheet, similar to R's data.frame.
A DataFrame has both row index and column index.
End of explanation
# Create a DataFrame from dictionary
data = {'Region':['Central','East','North','North-East','West'],
'Area':[132.7,93.1,134.5,103.9,201.3],
'Population':[939890,693500,531860,834450,903010]}
df = pd.DataFrame(data)
# Display df
df
# Note that the row index is assigned automatically and the column index is arranged in alphabetical order.
# Rearrange columns
df = pd.DataFrame(data,columns=['Region','Population','Area'])
df
# Display columns names
df.columns
# Display all values
df.values
# Get the number of rows and columns of the dataframe i.e. its shape
df.shape
# Size of DataFrame = row x column
df.size
# Number of rows
len(df)
# Programming specific information of the dataframe
df.info()
# Statistical description of numerical columns
df.describe()
Explanation: 2.1.1 DataFrame Structure
Initializing a Dataframe.
End of explanation
# Rename a column label
df = df.rename(columns={'Population':'Pop'})
df
# Select a single column to series
A = df['Area'] # same answer as df.Area
A
# Select a single column to dataframe
B = df[['Area']]
B
# Select multiple columns to dataframe
C = df[['Area','Pop']]
C
# Change order of columns
D = df[['Region','Area','Pop']]
D
# Drop a column by label
E = df.drop('Area',axis=1)
E
# Create a new column 'Density' = 'Population'/'Area'
df['Density'] = df['Pop']/df['Area']
df
# Sort values
df.sort_values(by=['Pop'], ascending=False)
# Find index label for max/min values
df['Density'].idxmax()
Explanation: 2.2.2 Working with columns
End of explanation
# method 1:
df['Region'][df['Density'].idxmax()]
# method 2: Change the index
df1 = df.set_index('Region')
df1['Density'].idxmax()
Explanation: Exercise: Which region has the highest density? Can you get the answer without sorting?
End of explanation
# Get all numerical summaries of a column
df['Pop'].describe()
Explanation: There are many commonly used column-wide methods/attributes (a few of them are applied in the short example below):
- df['col'].size
- df['col'].count()
- df['col'].sum()
- df['col'].max()
- df['col'].mean()
- df['col'].std()
End of explanation
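For instance, a few of the column-wide methods listed above applied to the 'Pop' column of the df defined earlier (a quick sketch):
print(df['Pop'].size, df['Pop'].count())
print(df['Pop'].sum(), df['Pop'].max())
print(df['Pop'].mean(), df['Pop'].std())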
# Select multiple rows
df[2:4]
# Select the last row
df[-1:]
# Select all but last row
df[:-1]
# Select all even rows
df[::2]
# Select by .iloc
df.iloc[0:2,1:3]
# Select by .loc
df.loc[0:1,['Pop','Area']]
Explanation: 2.2.3 Working with rows
End of explanation
# Boolean masking
df['Pop']>800000
# Select rows by boolean masking
df[df['Pop']>800000]
# Boolean masking with ==
df['Region']=='Central'
# Select rows by boolean masking
df[df['Region']=='Central']
# Using .loc to find the Area of the Central region.
df.loc[df['Region']=='Central', 'Area']
# Multiple conditions (and: &) (or: |)
(df['Pop'] < 800000) & (df['Density']<8000)
# Select rows by multiple conditions
df[(df['Pop'] < 800000) & (df['Density']<8000)]
# Using .query method
df.query("Pop < 800000 & Density < 8000")
Explanation: 2.2.4 Conditional Selection
End of explanation
# Import data from Excel file
og = pd.read_excel('OlympicGames.xlsx')
# Display first 5 rows of dataframe
og.head()
Explanation: 2.2.5 Case Study: Olympic Games
We can import data from a csv file by using pd.read_csv.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
We can import data from an Excel file by using pd.read_excel.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
Source of data: https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table
End of explanation
og.shape
Explanation: Q1: How many rows and columns are there in the dataframe?
End of explanation
#og.isnull().sum()
og.info()
Explanation: Q2: Are there any missing values in the dataframe?
End of explanation
og['Total'] = og['Gold']+og['Silver']+og['Bronze']
og.head()
Explanation: Q3: Create a new column for total number of Olympic medals.
End of explanation
og[og['Country']=='Singapore']
Explanation: Q4: Select the row where Country = Singapore.
End of explanation
#og[og['Gold'] == og['Gold'].max()]['Country']
og['Country'][og['Gold'].idxmax()]
Explanation: Q5: Which country has won the highest number of Gold medals?
End of explanation
len(og.query("Games >= 25").loc[:, "Country"])
Explanation: Q6: How many countries participated in at least 25 Olympic games (100 years)? Return an integer.
End of explanation
og.sort_values(by='Total', ascending=False)[0:3]['Country'].values
Explanation: Q7: Which are the top 3 countries with highest total Olympic medals?
Challenge! Can you return a list of countries in one line of code?
End of explanation
og['Country'][og[og['Gold']==0]['Total'].idxmax()]
og.Country[og.query("Gold == 0").Total.idxmax()]
Explanation: Q8: Out of the countries which have not won any Gold medals, which country has won the highest number of medals?
Challenge! Can you return the name of the country in one line of code?
End of explanation |
4,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GUI creation and interaction in IPython
Step1: A pop up will appear saying
The kernel appears to have died and will restart automatically
In the terminal, you can also see the following message
QWidget
Step2:
Step3: Now, if you click the button, the callback will be called.
And since Jupyter Notebook is awesome the output will be displayed in the cell's output area (Look at the output area for the previous cell!)
Automatic-connection-using-a-new-Qt-Console
IPython's Frontend Kernel Model
Step4: Now the Jupyter QtConsole will be started by connecting to the ipython kernel that is already running.
You do not believe it?
Well just type the magic %whos to see the truth.
You can redisplay the button now!!
I prefer continuing to work in the notebook, so I will close the button and also the QtConsole.
Pay close attention to the message in the dialog box.
Back in the Notebook turf, you can redisplay the button and continue clicking. | Python Code:
%gui
from PyQt5 import QtWidgets
b1 = QtWidgets.QPushButton("Click Me")
Explanation: GUI creation and interaction in IPython
End of explanation
%gui qt5
from PyQt5 import QtWidgets
b1 = QtWidgets.QPushButton("Click Me")
b1.show()
Explanation: A pop up will appear saying
The kernel appears to have died and will restart automatically
In the terminal, you can also see the following message
QWidget: Must construct a QApplication before a QWidget
[I 11:26:36.051 NotebookApp] KernelRestarter: restarting kernel (1/5)
WARNING:root:kernel 08b46e93-bcf2-49ce-af69-0979e879b7e2 restarted
End of explanation
def on_click_cb():
print("Clicked")
b1.clicked.connect(on_click_cb)
Explanation:
End of explanation
%connect_info
!jupyter kernel list
%qtconsole
Explanation: Now, if you click the button, the callback will be called.
And since Jupyter Notebook is awesome the output will be displayed in the cell's output area (Look at the output area for the previous cell!)
Automatic connection using a new Qt Console
IPython's Frontend Kernel Model
End of explanation
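Equivalently (a sketch), a Qt console can be attached from a regular terminal using the connection information shown by %connect_info:
# From a terminal (not a notebook cell):
#   jupyter qtconsole --existing
# or, for a specific kernel:
#   jupyter qtconsole --existing kernel-<id>.json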
b1.show()
Explanation: Now the Jupyter QtConsole will be started by connecting to the ipython kernel that is already running.
You do not believe it?
Well just type the magic %whos to see the truth.
You can redisplay the button now!!
I prefer continuing to work in the notebook, so I will close the button and also the QtConsole.
Pay close attention to the message in the dialog box.
Back in the Notebook turf, you can redisplay the button and continue clicking.
End of explanation |
4,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of taking 'views' from simulated populations
Step1: Get the mutations that are segregating in each population
Step2: Look at the raw data in the first element of each list
Step3: Let's make that nicer, and convert each list of dictionaries to a Pandas DataFrame object
Step4: The columns are
Step5: We can also take views of gametes
Step6: The format is really ugly. Each gamete is a dict with two elements
Step7: OK, let's clean that up. We'll focus on the selected mutations for each individual, and turn everything into a pd.DataFrame.
We're only going to do this for the first simulated population.
Step8: We now have a list of lists stored in 'smuts'.
Step9: That's much better. We can use the index to figure out which individual has which mutations, and their effect sizes, etc.
Finally, we can also take views of diploids. Let's get the first two diploids in each population
Step10: Again, the format here is ugly. Each diploid view is a dictionary | Python Code:
from __future__ import print_function
import fwdpy as fp
import pandas as pd
from background_selection_setup import *
Explanation: Example of taking 'views' from simulated populations
End of explanation
mutations = [fp.view_mutations(i) for i in pops]
Explanation: Get the mutations that are segregating in each population:
End of explanation
for i in mutations:
print(i[0])
Explanation: Look at the raw data in the first element of each list:
End of explanation
mutations2 = [pd.DataFrame(i) for i in mutations]
for i in mutations2:
print(i.head())
Explanation: Let's make that nicer, and convert each list of dictionaries to a Pandas DataFrame object:
End of explanation
nmuts = [i[i.neutral == True] for i in mutations2]
for i in nmuts:
print(i.head())
Explanation: The columns are:
g = the generation when the mutation first arose
h = the dominance
n = the number of copies of the mutation in the population. You can use this to get its frequency.
neutral = a boolean
pos = the position of the mutation
s = selection coefficient/effect size
label = The label assigned to a mutation. These labels can be associated with Regions and Sregions. Here, 1 is a mutation from the neutral region, 2 a selected mutation from the 'left' region and 3 a selected mutation from the 'right' region.
We can do all the usual subsetting, etc., using regular pandas tricks. For example, let's get the neutral mutations for each population:
End of explanation
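As a small illustration of using 'n' (a sketch; N below is a hypothetical diploid population size, use whatever background_selection_setup actually simulated):
N = 1000  # hypothetical number of diploids per population
freqs = nmuts[0]['n'] / (2.0 * N)
print(freqs.head())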
gametes = [fp.view_gametes(i) for i in pops]
Explanation: We can also take views of gametes:
End of explanation
for i in gametes:
print(i[0])
Explanation: The format is really ugly. Each gamete is a dict with two elements:
'neutral' is a list of mutations not affecting fitness. The format is the same as for the mutation views above.
'selected' is a list of mutations that do affect fitness. The format is the same as for the mutation views above.
End of explanation
smuts = [i['selected'] for i in gametes[0]]
Explanation: OK, let's clean that up. We'll focus on the selected mutations for each individual, and turn everything into a pd.DataFrame.
We're only going to do this for the first simulated population.
End of explanation
smutsdf = pd.DataFrame()
ind=0
##Add the non-empty individuals to the df
for i in smuts:
if len(i)>0:
smutsdf = pd.concat([smutsdf,pd.DataFrame(i,index=[ind]*len(i))])
ind += 1
smutsdf.head()
Explanation: We now have a list of lists stored in 'smuts'.
End of explanation
dips = [fp.view_diploids(i,[0,1]) for i in pops]
Explanation: That's much better. We can use the index to figure out which individual has which mutations, and their effect sizes, etc.
Finally, we can also take views of diploids. Let's get the first two diploids in each population:
End of explanation
for key in dips[0][0]:
print(key)
Explanation: Again, the format here is ugly. Each diploid view is a dictionary:
End of explanation |
4,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Least Squares
Step1: OLS estimation
Artificial data
Step2: Our model needs an intercept so we add a column of 1s
Step3: Fit and summary
Step4: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples
Step5: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y
Step6: Fit and summary
Step7: Extract other quantities of interest
Step8: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
Step9: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
Step10: Inspect the data
Step11: Fit and summary
Step12: Draw a plot to compare the true relationship to OLS predictions
Step13: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups
Step14: You can also use formula-like syntax to test hypotheses
Step15: Small group effects
If we generate artificial data with smaller group effects, the F test can no longer reject the Null hypothesis
Step16: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
Step17: Fit and summary
Step18: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length
Step19: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
Step20: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates
Step21: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
Step22: In general we may consider DBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
Explanation: Ordinary Least Squares
End of explanation
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
Explanation: OLS estimation
Artificial data:
End of explanation
X = sm.add_constant(X)
y = np.dot(X, beta) + e
Explanation: Our model needs an intercept so we add a column of 1s:
End of explanation
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
Explanation: Fit and summary:
End of explanation
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
Explanation: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
End of explanation
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
Explanation: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y:
End of explanation
res = sm.OLS(y, X).fit()
print(res.summary())
Explanation: Fit and summary:
End of explanation
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
Explanation: Extract other quantities of interest:
End of explanation
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
Explanation: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
End of explanation
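A sketch of an alternative: newer statsmodels versions expose results.get_prediction(), whose summary_frame() returns both mean confidence intervals and prediction intervals.
pred = res.get_prediction()
print(pred.summary_frame(alpha=0.05).head())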
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = pd.get_dummies(groups).values
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
beta = [1., 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
Explanation: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
End of explanation
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
Explanation: Inspect the data:
End of explanation
res2 = sm.OLS(y, X).fit()
print(res2.summary())
Explanation: Fit and summary:
End of explanation
prstd, iv_l, iv_u = wls_prediction_std(res2)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
Explanation: Draw a plot to compare the true relationship to OLS predictions:
End of explanation
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
Explanation: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:
End of explanation
print(res2.f_test("x2 = x3 = 0"))
Explanation: You can also use formula-like syntax to test hypotheses
End of explanation
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
Explanation: Small group effects
If we generate artificial data with smaller group effects, the F test can no longer reject the Null hypothesis:
End of explanation
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
Explanation: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
End of explanation
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
Explanation: Fit and summary:
End of explanation
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T,norm_x)
Explanation: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
End of explanation
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
Explanation: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
End of explanation
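As a quick cross-check (a sketch), NumPy's built-in condition number of the normalized design matrix gives the same quantity:
print(np.linalg.cond(norm_x))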
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
Explanation: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
End of explanation
infl = ols_results.get_influence()
Explanation: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
End of explanation
2./len(X)**.5
print(infl.summary_frame().filter(regex="dfb"))
Explanation: In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations
End of explanation |
4,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using groupby(), plot the number of films that have been released each decade in the history of cinema.
Step1: Use groupby() to plot the number of "Hamlet" films made each decade.
Step2: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
Step3: In the 1950s decade taken as a whole, how many total roles were available to actors, and how many to actresses, for each "n" number 1 through 5?
Use groupby() to determine how many roles are listed for each of the Pink Panther movies.
Step4: List, in order by year, each of the films in which Frank Oz has played more than 1 role.
Step5: List each of the characters that Frank Oz has portrayed at least twice. | Python Code:
titles.groupby(titles['year']//10 *10)['year'].size().plot(kind='bar')
Explanation: Using groupby(), plot the number of films that have been released each decade in the history of cinema.
End of explanation
titles[titles['title']=='Hamlet'].groupby(titles[titles['title']=='Hamlet']['year']//10 *10)['year'].size().plot(kind='bar')
Explanation: Use groupby() to plot the number of "Hamlet" films made each decade.
End of explanation
c=cast[(cast['year']>=1950)&(cast['year']<1960)&(cast['n']==1)]
c[c['type']=='actor'].groupby('year').size()
c[c['type']=='actress'].groupby('year').size()
Explanation: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
End of explanation
cast[cast['title'].str.contains("Pink Panther")].groupby('title').size()
Explanation: In the 1950s decade taken as a whole, how many total roles were available to actors, and how many to actresses, for each "n" number 1 through 5?
Use groupby() to determine how many roles are listed for each of the Pink Panther movies.
End of explanation
type(cast[cast['name']=='Frank Oz'].groupby('title'))
cast[cast['name']=='Frank Oz'].groupby('title').filter(lambda x : len(x)!=1).sort('year')['title'].unique()
Explanation: List, in order by year, each of the films in which Frank Oz has played more than 1 role.
End of explanation
cast[cast['name']=='Frank Oz'].groupby('character').filter(lambda x : len(x)!=1)['character'].unique()
Explanation: List each of the characters that Frank Oz has portrayed at least twice.
End of explanation |
4,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Original SPN implementation
Principle and algorithm
Here we implement the original SPN algorithm from the textbook; the associated data (key, key schedule, S-box, P-box, number of rounds) are kept identical to the textbook and defined inside the program.
Program design
Constant conventions
| Variable | Meaning | Type |
| ------------ | ---- | ---- |
| x | plaintext | int |
| piS | S-box | dict of int |
| piP | P-box | dict of int |
| primitiveKey | master key | int |
| nr | number of rounds | int |
| m | number of S-boxes | int |
| l | S-box size (bits) | int |
Step1: S-Box 函数定义
Step2: P-Box 函数定义
Step3: 加密函数
Step4: 解密函数
Step5: 测试
测试时使用教材上的测试用例:
| 意义 | 变量 | 值 |
| ---- | ---------- | --------------------- |
| 明文 | PLAINTEXT | 0010 0110 1011 0111 |
| 密文 | CIPHERTEXT | 1011 1100 1101 0110 |
Step6: SPN 密码分析
密码分析预函数
16位数字分4块函数
Step7: 明密文对生成函数
Step8: 线性密码分析
原理与数据
IO
Input
明密文对 $ \tau $
明密文对数量 T
S-Box 逆代换
Output
对应最大计数器的秘钥
数据结构
明密文对使用 list ,每对形成一个元组 (P, C) , 明密文使用定长 int
S-Box 逆代换使用字典数据结构
算法
教材算法 3.2
程序设计
线性攻击函数异或集中模块
Step9: 线性攻击函数
Step10: 差分密码分析
原理与数据
IO
Input
明密文对 $ \tau $
明密文对数量 T
S-Box 逆代换
Output
对应最大计数器的秘钥
数据结构
明密文对使用 list ,每对形成一个元组 (P, C) , 明密文使用定长 int
S-Box 逆代换使用字典数据结构
算法
教材算法 3.3
程序设计
明密文对生成函数
Step11: 差分攻击函数
由于同名符号过多,采取约定进行命名:
| 教材表示 | 程序变量表示 |
| ----------- | ------ |
| $v4_{<2>}$ | v4_2 |
| $(v4_{<2>})*$ | vv4_2 |
| $(v4_{<2>})'$ | vvv4_2 |
Step13: 实施密码分析
Step14: SPN 增强实现
增强措施
增加分组长度
分组长度由16位加长到64位,可以以8字节为单位进行文件加密
增加密钥的长度
主密钥长度由32位加长到2048位,由系统随机函数生成
改进S盒
S盒由系统随机函数生成,对不能通过随机性检测的S盒予以抛弃,并重新生成。
检测表明,S盒随机性得到明显改善,可有效抵抗各种密码分析。
增加轮数
由4轮增加到32轮
程序设计
字典洗牌函数定义
Step15: S-Box P-Box 函数定义
Step16: 密钥随机生成函数
Step17: 常量规定
Step18: 加密函数
Step19: 解密函数
Step20: 文件加解密
原理与标准
文件加密采取迭代式方法分块读取文件,并分别进行加密或解密。
加解密使用前面实现的 增强SPN 加解密模块。
加密时生成密钥,并存储在文件中。解密时从文件读取密钥,并进行解密。
分组密码工作模式
采取电子密码本(Electric Codebook) 模式
需要加密的消息按照块密码的块大小被分为数个块,并对每个块进行独立加密。
字节填充
遵循 ISO/IEC 10118-1 & ISO/IEC 9797-1 标准。
采取 zero padding,填充 \x00。
All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption, although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1 and ISO/IEC 9797-1.
程序设计
常量规定
| 变量 | 解释 |
| -------------- | ------------------------ |
| path2original | 加密源文件路径 |
| path2encrypted | 加密后文件路径 |
| path2decrypted | 解密后文件路径 |
| path2key | 密钥读取或写入路径 |
| BLOCKSIZE | 密码分块大小 |
| ENDIAN | 加解密时均使用大端,不论文件实际存储所采用的方式 |
Step21: 秘钥存入文件
Step22: 加密
Step23: 从文件读取密钥
Step24: 解密
Step25: 测试(包含随机性检测)
直接运行上述模块,然后观察磁盘上的文件:
源文件
Step26: 加密后的文件
Step27: 解密后的文件
Step28: 比较差异 | Python Code:
m, l = 4, 4 # m S-Boxes, l bits in each
piS = {0: 14, 1: 4, 2: 13, 3: 1,
4: 2, 5: 15, 6: 11, 7: 8,
8: 3, 9: 10, 10: 6, 11: 12,
12: 5, 13: 9, 14: 0, 15: 7}
piP = {1: 1, 2: 5, 3: 9, 4: 13,
5: 2, 6: 6, 7: 10, 8: 14,
9: 3, 10: 7, 11: 11, 12: 15,
13: 4, 14: 8, 15: 12, 16: 16}
nr = 4
primitiveKeyStr = '0011 1010 1001 0100 1101 0110 0011 1111'\
.replace(' ','')
primitiveKey = int(primitiveKeyStr, 2)
K = {}
K[1] = (primitiveKey >> 16) & 0xFFFF
K[2] = (primitiveKey >> 12) & 0xFFFF
K[3] = (primitiveKey >> 8) & 0xFFFF
K[4] = (primitiveKey >> 4) & 0xFFFF
K[5] = primitiveKey & 0xFFFF
piSInv = dict((v,k) for k,v in piS.items())
piPInv = dict((v,k) for k,v in piP.items())
Explanation: Original SPN implementation
Principle and algorithm
Here we implement the original SPN algorithm from the textbook; the associated data (key, key schedule, S-box, P-box, number of rounds) are kept identical to the textbook and defined inside the program.
Program design
Constant conventions
| Variable | Meaning | Type |
| ------------ | ---- | ---- |
| x | plaintext | int |
| piS | S-box | dict of int |
| piP | P-box | dict of int |
| primitiveKey | master key | int |
| nr | number of rounds | int |
| m | number of S-boxes | int |
| l | S-box size (bits) | int |
End of explanation
# subsitution for each bits according to sBoxDict
def subsitutionFunc(ur, sBoxDict):
currentUrBinStrMTimesLBits = \
bin(ur).replace('0b', '').zfill(m * l)
for i in range(1, m + 1):
# S box subsitution by each
# get the bits ready for subsitution
# i to m from left to right
currentSboxInput = \
int(currentUrBinStrMTimesLBits\
[l * (i - 1): l * i], 2) # dec int now
currentSboxOutput = \
sBoxDict[currentSboxInput] # dec int now
# give it to vr in format available
# vr should be int
if i == 1:
vrStrInProgress = \
bin(currentSboxOutput).replace('0b', '').zfill(l)
# 4-bit bin str notation
else:
vrStrInProgress += \
bin(currentSboxOutput).replace('0b', '').zfill(l)
# 8,12,finally 16-bit bin str
vr = int(vrStrInProgress, 2)
return vr # return a int
Explanation: S-Box function definition
End of explanation
def permutationFunc(vr, pBoxDict):
currentVrBinStrMTimesLBits = \
bin(vr).replace('0b','').zfill(m * l)
wr = [None] * (m * l)
for i in range(1, m * l + 1): # 1 - 16
wr[pBoxDict[i] - 1] = \
currentVrBinStrMTimesLBits[i-1:i]
wr = int(''.join(wr), 2) # wr is a int now
return wr # return a int
Explanation: P-Box function definition
End of explanation
def spnFromTextEncryption(x, piS=piS, piP=piP, K=K):
nr = len(K) - 1
w, u, v = {}, {}, {}
# keys and values in u, v, w are int
w[0] = x
for r in range(1, nr): # 1 to nr -1
## xor round key
u[r] = w[r - 1] ^ K[r]
## subsitution
v[r] = subsitutionFunc(u[r], piS)
## permutation
w[r] = permutationFunc(v[r], piP)
u[nr] = w[nr - 1] ^ K[nr]
v[nr] = subsitutionFunc(u[nr], piS)
y = v[nr] ^ K[nr + 1]
return y
Explanation: Encryption function
End of explanation
def spnFromTextDecryption(y,
piSInv=piSInv,
piPInv=piPInv,
K=K):
nr = len(K) - 1
w, u, v = {}, {}, {}
# keys and values in u, v, w are int
v[nr] = y ^ K[nr + 1]
u[nr] = subsitutionFunc(v[nr], piSInv)
w[nr - 1] = u[nr] ^ K[nr]
for r in range(nr - 1, 0, -1):
## permutation Inverse
v[r] = permutationFunc(w[r], piPInv)
## subsitution Inverse
u[r] = subsitutionFunc(v[r], piSInv)
## xor round key
w[r - 1] = u[r] ^ K[r]
x = w[0]
return x
Explanation: Decryption function
End of explanation
PLAINTEXT = int('0010 0110 1011 0111'.replace(' ' ,''), 2)
CIPHERTEXT = int('1011 1100 1101 0110'.replace(' ', ''), 2)
if spnFromTextEncryption(PLAINTEXT, piS, piP, K) \
== CIPHERTEXT:
print('SPN From Text ENCRYPTION Correctly Implemented!')
else:
print('WARNING!','SPN From Text Implementation INCORRECT')
if spnFromTextDecryption(CIPHERTEXT, piSInv, piPInv, K) \
== PLAINTEXT:
print('SPN From Text DECRYPTION Correctly Implemented!')
else:
print('WARNING!','SPN From Text Implementation INCORRECT')
Explanation: Test
The test uses the test vector from the textbook:
| Meaning | Variable | Value |
| ---- | ---------- | --------------------- |
| plaintext | PLAINTEXT | 0010 0110 1011 0111 |
| ciphertext | CIPHERTEXT | 1011 1100 1101 0110 |
End of explanation
def splitBits(toSplit, index):
if index == 4:
shift = 0
elif index == 3:
shift = 4
elif index == 2:
shift = 8
elif index == 1:
shift = 12
else :
raise ValueError("index supposed to be 1, 2, 3 or 4")
return ((toSplit >> shift) & 0b1111)
Explanation: SPN Cryptanalysis
Cryptanalysis helper functions
A function that splits a 16-bit number into 4 blocks
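A brief usage sketch of splitBits (the constant below is only an illustrative value): block 1 is the most significant nibble and block 4 the least significant one.
demoWord = 0b0010011010110111          # hypothetical 16-bit value
assert splitBits(demoWord, 1) == 0b0010
assert splitBits(demoWord, 2) == 0b0110
assert splitBits(demoWord, 3) == 0b1011
assert splitBits(demoWord, 4) == 0b0111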
End of explanation
import random
def pcPairGenRand():
plain = random.randint(2**0-1, 2**16-1)
cipher = spnFromTextEncryption(x=plain)
return (plain, cipher)
Explanation: Plaintext-ciphertext pair generator
End of explanation
def zxor(x, u4_2, u4_4):
x5 = (x >> (16 - 5)) & 0b1
x7 = (x >> (16 - 7)) & 0b1
x8 = (x >> (16 - 8)) & 0b1
u4__6 = (u4_2 >> 2) & 0b1
u4__8 = (u4_2 >> 0) & 0b1
u4__14 = (u4_4 >> 2) & 0b1
u4__16 = (u4_4 >> 0) & 0b1
z = x5 ^ x7 ^ x8 ^ u4__6 ^ u4__8 ^ u4__14 ^ u4__16
return z
Explanation: Linear cryptanalysis
Principle and data
IO
Input
plaintext-ciphertext pairs $ \tau $
number of pairs T
the inverse S-box substitution
Output
the key candidate corresponding to the largest counter
Data structures
The plaintext-ciphertext pairs are stored in a list, each pair as a tuple (P, C); plaintexts and ciphertexts are fixed-length ints
The inverse S-box substitution is stored in a dict
Algorithm
Textbook Algorithm 3.2
Program design
XOR helper module used by the linear attack function
End of explanation
import copy
def linearAttack(pairs, piSInv=piSInv):
pairsCount = len(pairs)
Count = [[0 for l2 in range(16)] for l1 in range(16)]
for (x, y) in pairs:
for (l1, l2) in \
[(l1, l2) for l1 in range(16) for l2 in range(16)]:
v4_2 = l1 ^ splitBits(y, 2)
v4_4 = l2 ^ splitBits(y, 4)
u4_2 = piSInv[v4_2]
u4_4 = piSInv[v4_4]
z = zxor(x, u4_2, u4_4)
if z == 0: Count[l1][l2] += 1
max = -1
for (l1, l2) in \
[(l1, l2) for l1 in range(16) for l2 in range(16)]:
Count[l1][l2] = \
abs(Count[l1][l2] - int( pairsCount / 2 ))
if Count[l1][l2] > max:
max = Count[l1][l2]
maxkey = copy.copy((l1, l2))
return maxkey
Explanation: Linear attack function
End of explanation
def xorPlainPairsGen(count=100):
pairs = []
while True:
plain1 = random.randint(2**0-1, 2**16-1)
plain2 = 0b0000101100000000 ^ plain1
cipher1 = spnFromTextEncryption(x=plain1)
cipher2 = spnFromTextEncryption(x=plain2)
pairs.append((plain1, plain2, cipher1, cipher2))
if len(pairs) == count:
return pairs
Explanation: Differential cryptanalysis
Principle and data
IO
Input
plaintext-ciphertext pairs $ \tau $
number of pairs T
the inverse S-box substitution
Output
the key candidate corresponding to the largest counter
Data structures
The plaintext-ciphertext pairs are stored in a list, each pair as a tuple (P, C); plaintexts and ciphertexts are fixed-length ints
The inverse S-box substitution is stored in a dict
Algorithm
Textbook Algorithm 3.3
Program design
Plaintext/ciphertext pair generator with a fixed input XOR
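A quick check of the generator above (a sketch; the pair count of 5 is arbitrary): every tuple (x, x*, y, y*) must carry the fixed input difference x XOR x* = 0000 1011 0000 0000 required by the attack.
demo_pairs = xorPlainPairsGen(count=5)
assert all((p1 ^ p2) == 0b0000101100000000
           for (p1, p2, c1, c2) in demo_pairs)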
End of explanation
def differentialAttack(pairs, piSInv=piSInv):
    # each element of pairs is a tuple (x, x*, y, y*) from xorPlainPairsGen
    pairsCount = len(pairs)
    Count = [[0 for l2 in range(16)] for l1 in range(16)]
    for (x, xx, y, yy) in pairs:
        # keep only "right pairs": the two ciphertexts must agree
        # in blocks 1 and 3
        if splitBits(y, 1) != splitBits(yy, 1) \
        or splitBits(y, 3) != splitBits(yy, 3):
            continue
        for (l1, l2) in \
        [(l1, l2) for l1 in range(16) for l2 in range(16)]:
            v4_2 = l1 ^ splitBits(y, 2)
            v4_4 = l2 ^ splitBits(y, 4)
            u4_2 = piSInv[v4_2]
            u4_4 = piSInv[v4_4]
            vv4_2 = l1 ^ splitBits(yy, 2)
            vv4_4 = l2 ^ splitBits(yy, 4)
            uu4_2 = piSInv[vv4_2]
            uu4_4 = piSInv[vv4_4]
            uuu4_2 = u4_2 ^ uu4_2
            uuu4_4 = u4_4 ^ uu4_4
            # expected difference 0110 in blocks 2 and 4 of u4
            if uuu4_2 == 0b0110 \
            and uuu4_4 == 0b0110:
                Count[l1][l2] += 1
    max = -1
    for (l1, l2) in \
    [(l1, l2) for l1 in range(16) for l2 in range(16)]:
        if Count[l1][l2] > max:
            max = Count[l1][l2]
            maxkey = copy.copy((l1, l2))
    return maxkey
Explanation: Differential attack function
Because many symbols share almost the same name, the following naming convention is used:
| Textbook notation | Program variable |
| ----------- | ------ |
| $v4_{<2>}$ | v4_2 |
| $(v4_{<2>})^*$ | vv4_2 |
| $(v4_{<2>})'$ | vvv4_2 |
End of explanation
%%time
# generate 8000 pairs
pairs = []
for i in range(8000):
pairs.append(pcPairGenRand())
# lin attack
partialKey = linearAttack(pairs)
%%time
def bruteForce(partialKey=partialKey, pairs=pairs):
    print("start brute force")
    paircount = len(pairs)
    for bits20 in range(2 ** 20):
        for bits4 in range(2 ** 4):
            # assemble a 32-bit candidate key around the recovered nibbles
            primitiveKeyLocal = \
                (bits20 << 12) | (partialKey[0] << 8) \
                | (bits4 << 4) | (partialKey[1] << 0)
            Kl = {}
            Kl[1] = (primitiveKeyLocal >> 16) & 0xFFFF
            Kl[2] = (primitiveKeyLocal >> 12) & 0xFFFF
            Kl[3] = (primitiveKeyLocal >> 8) & 0xFFFF
            Kl[4] = (primitiveKeyLocal >> 4) & 0xFFFF
            Kl[5] = primitiveKeyLocal & 0xFFFF
            for index in range(paircount):
                (plain, cipher) = pairs[index]
                if spnFromTextEncryption(x=plain, K=Kl) != cipher:
                    break
                elif index == paircount - 1:
                    # every pair matched: candidate key found
                    print(primitiveKeyLocal)
                    print('done')
                    return primitiveKeyLocal
Explanation: Running the cryptanalysis
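A hedged verification sketch: with the 32-bit textbook key defined at the start of this section, the two nibbles recovered by the linear attack should equal blocks 2 and 4 of the last round key K[5].
print('recovered nibbles:', partialKey)
print('actual K5 blocks 2 and 4:', ((K[5] >> 8) & 0xF, K[5] & 0xF))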
End of explanation
import random
def shuffleDictRand(d):
k, v = list(d.keys()), list(d.values())
random.shuffle(v) # shuffle values
return dict(zip(k, v))
Explanation: Reinforced SPN Implementation
Reinforcement measures
Larger block size
The block size is extended from 16 bits to 64 bits, so files can be encrypted in 8-byte units.
Longer key
The master key is extended from 32 bits to 2048 bits and is generated by the system random number generator.
Improved S-box
The S-box is generated by the system random number generator; any S-box that fails a randomness test is discarded and regenerated.
Testing shows that the randomness of the S-box improves noticeably, which helps resist the kinds of cryptanalysis shown above.
More rounds
The number of rounds is increased from 4 to 32.
Program design
Dictionary shuffle function
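A small usage sketch of shuffleDictRand: shuffling only the values keeps the key set intact, so the result is still a permutation over the original entries.
demo = {0: 0, 1: 1, 2: 2, 3: 3}
shuffled = shuffleDictRand(demo)
assert sorted(shuffled.keys()) == sorted(demo.keys())
assert sorted(shuffled.values()) == sorted(demo.values())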
End of explanation
def genPiSRand(size=16):
k, v = \
list(range(size)), list(range(size))
# 0-size-1
random.shuffle(v)
return dict(zip(k, v))
def genPiPRand(size=16):
k, v = \
list(range(1, size + 1)), list(range(1, size + 1))
# 0-size-1
random.shuffle(v)
return dict(zip(k, v))
Explanation: S-box and P-box generator functions
End of explanation
import random
def genKeyRand(keysize=2048):
return random.randint(0, 2 ** keysize - 1)
Explanation: Random key generation function
End of explanation
m, l = 16, 4 # m S-Boxes, l bits in each
nr = 32
piS = genPiSRand(l ** 2)
piP = genPiPRand(m * l)
primitiveKeyLen = (nr + 1) * (m * l)
primitiveKey = genKeyRand(primitiveKeyLen)
scheduledKeys = []
for r in range(1, (nr + 1) + 1):
    # each round key takes the full m*l bits of the master-key bit string
    cKey = int(bin(primitiveKey).replace('0b', '')\
        .zfill(primitiveKeyLen)\
        [(r - 1) * m * l: r * m * l], 2)
scheduledKeys.append(cKey)
# K is a dict from 1 to nr + 1
K = {}
for r in range(len(scheduledKeys)):
K[r + 1] = scheduledKeys[r]
piSInv = dict((v,k) for k,v in piS.items())
piPInv = dict((v,k) for k,v in piP.items())
Explanation: Constant definitions
End of explanation
def spnReinforceEncryption(x, piS, piP, K):
nr = len(K) - 1
w, u, v = {}, {}, {}
# keys and values in u, v, w are int
w[0] = x
for r in range(1, nr): # 1 to nr -1
## xor round key
u[r] = w[r - 1] ^ K[r]
## subsitution
v[r] = subsitutionFunc(u[r], piS)
## permutation
w[r] = permutationFunc(v[r], piP)
u[nr] = w[nr - 1] ^ K[nr]
v[nr] = subsitutionFunc(u[nr], piS)
y = v[nr] ^ K[nr + 1]
return y
Explanation: Encryption function
End of explanation
def spnReinforceDecryption(y, piSInv, piPInv, K):
nr = len(K) - 1
w, u, v = {}, {}, {}
# keys and values in u, v, w are int
v[nr] = y ^ K[nr + 1]
u[nr] = subsitutionFunc(v[nr], piSInv)
w[nr - 1] = u[nr] ^ K[nr]
for r in range(nr - 1, 0, -1):
## permutation Inverse
v[r] = permutationFunc(w[r], piPInv)
## subsitution Inverse
u[r] = subsitutionFunc(v[r], piSInv)
## xor round key
w[r - 1] = u[r] ^ K[r]
x = w[0]
return x
Explanation: Decryption function
End of explanation
import os.path
# open() does not expand '~', so expand the home directory explicitly
path2original = os.path.expanduser(
    '~/Course_Project_of_Cryptography_HUST_2017/code/classicPrime.py')
path2encrypted = os.path.expanduser(
    '~/Course_Project_of_Cryptography_HUST_2017/code/encrypted.py')
path2decrypted = os.path.expanduser(
    '~/Course_Project_of_Cryptography_HUST_2017/code/decrypted.py')
path2key = os.path.expanduser(
    '~/Course_Project_of_Cryptography_HUST_2017/code/key')
BLOCKSIZE = 8 # bytes
ENDIAN = 'big' # big endian default
Explanation: File encryption and decryption
Principle and standards
File encryption reads the file iteratively, block by block, and encrypts or decrypts each block in turn.
Encryption and decryption use the reinforced SPN modules implemented above.
On encryption a key is generated and stored in a file; on decryption the key is read back from that file before decrypting.
Block cipher mode of operation
The Electronic Codebook (ECB) mode is used:
the message to be encrypted is split into blocks of the cipher's block size, and each block is encrypted independently.
Byte padding
Following the ISO/IEC 10118-1 & ISO/IEC 9797-1 standards,
zero padding is used, i.e. padding with \x00.
All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption, although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1 and ISO/IEC 9797-1.
Program design
Constant definitions
| Variable | Meaning |
| -------------- | ------------------------ |
| path2original | path of the source file to encrypt |
| path2encrypted | path of the encrypted file |
| path2decrypted | path of the decrypted file |
| path2key | path for writing/reading the key |
| BLOCKSIZE | cipher block size in bytes |
| ENDIAN | big endian is used for both encryption and decryption, regardless of how the file is actually stored |
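A minimal sketch of the padding bookkeeping used below (values are illustrative): a short final block is zero-padded up to BLOCKSIZE bytes, and the number of padded bytes is written as an extra trailing block so the zeros can be stripped after decryption.
demoBlock = b'abc'                                 # hypothetical short final block
padTime = BLOCKSIZE - len(demoBlock)
padded = demoBlock + b'\x00' * padTime
assert len(padded) == BLOCKSIZE
assert padded[:BLOCKSIZE - padTime] == demoBlock   # unpadding recovers the data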
End of explanation
import pickle
with open(path2key, 'wb') as fkey:
pickle.dump(primitiveKey, fkey)
fkey.close()
Explanation: Writing the key to a file
End of explanation
path2read = path2original
path2write = path2encrypted
from functools import partial
with open(path2read, 'rb') as fin:
with open(path2write, 'wb') as fout:
byteBlocks = iter(partial(fin.read, BLOCKSIZE), b'')
for index, value in enumerate(byteBlocks):
# convert to 64-bit int
# pad if not BLOCKSIZE
padTime = 0
while len(value) != BLOCKSIZE:
####### ZERO PADDING ######
value += b'\x00'
padTime += 1
byteBlockInt = int.from_bytes(
value, byteorder = ENDIAN)
# encryption
cipherText = spnReinforceEncryption(
byteBlockInt, piS, piP, K)
# convert to bytes
byteBlock2Write = cipherText.to_bytes(BLOCKSIZE
, byteorder = ENDIAN)
fout.write(byteBlock2Write)
# write a byte block to indicate padding info
fout.write((padTime).to_bytes(BLOCKSIZE, ENDIAN))
fout.close()
fin.close()
Explanation: Encryption
End of explanation
import pickle
with open(path2key, 'rb') as fkey:
primitiveKey = pickle.load(fkey)
fkey.close()
Explanation: Reading the key from a file
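One caveat, shown here only as a sketch: if decryption is run in a fresh session, the scheduled round keys K must be rebuilt from the reloaded master key using the same schedule as above (repeated here for convenience).
scheduledKeys = []
for r in range(1, (nr + 1) + 1):
    cKey = int(bin(primitiveKey).replace('0b', '')
               .zfill(primitiveKeyLen)
               [(r - 1) * m * l: r * m * l], 2)
    scheduledKeys.append(cKey)
K = dict(enumerate(scheduledKeys, start=1))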
End of explanation
path2read = path2encrypted
path2write = path2decrypted
from functools import partial
# read last block for padding info
with open(path2read, 'rb') as fp:
padTime = 0
for lastBlock \
in reversed(list(iter(partial(fp.read, 8), b''))):
padTime = \
int.from_bytes(lastBlock, byteorder = ENDIAN)
break
fp.close()
import os
fileSize = os.stat(path2read).st_size
blockCount = int(fileSize / BLOCKSIZE)
with open(path2read, 'rb') as fin:
with open(path2write, 'wb') as fout:
byteBlocks = iter(partial(fin.read, BLOCKSIZE), b'')
for index, value in enumerate(byteBlocks):
# if reaches last block of padding info
if index + 1 == blockCount:
break
# convert to 64-bit int
byteBlockInt = int.from_bytes(
value, byteorder = ENDIAN)
# decryption
cipherText = spnReinforceDecryption(
byteBlockInt, piSInv, piPInv, K)
# convert to bytes
byteBlock2Write = cipherText.to_bytes(BLOCKSIZE
, byteorder = ENDIAN)
# unpad if reaches last - 1 block
if index + 2 == blockCount:
byteBlock2Write = \
byteBlock2Write[0: BLOCKSIZE - padTime]
fout.write(byteBlock2Write)
fout.close()
fin.close()
Explanation: Decryption
End of explanation
!hexdump classicPrime.py
Explanation: Test (including a randomness check)
Run the modules above directly, then inspect the files on disk:
The source file
End of explanation
!hexdump encrypted.py
Explanation: The encrypted file
End of explanation
!hexdump decrypted.py
Explanation: The decrypted file
End of explanation
!diff classicPrime.py decrypted.py
Explanation: Comparing for differences
End of explanation |
4,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In the last few years speed dating popularity has grown quickly. Despite its popularity lots of people don't seem as satisfied as they'd like. Most users don't end up finding what they were looking for. That's why a crew of data scientists is going to study data on previous speed dating events in order to make it easier for our users to find their other halves.
Disclaimer
Step1: Every variable is described in the piece of writing called "Speed Dating Data.doc" which is in the same directory as this notebook.
And in cases such as gender or race we can see using numbers is not the best choice. That's why we are going to modify those variables which are not countable.
All these changes were made after reading "The elements of data analytic style", as recommended on the course webpage
Tidying choice
As most of the steps involved in tidying the data do not require any programming skill, I have preferred to put them into a black box where they don't disturb the eye of a real programmer.
The processing is inside the function tidy_this_data of the python file called pretty_notebook.py, which is in the same repository as this project.
Step2: Evaluating our changes
As regards the tidying process, we need a check that the processing is correct. Looking into the issue, two big problems appear.
The first problem is
Step3: As the author thinks the problem in this case is that people could not be followed up, we will simply ignore most of the data.
The second problem is
Step4: For uncountable variables this last error only appears in the met variable, so it is not a big issue, but it must be taken into account later when the variables representing a percentage come into play.
The correction is to change these values to True, as the author of the study assumes people meant that they had met their partner more than once.
Step5: Countable data
Now it is time for the countable data; in order to tidy it, some plots will be made to spot possible problems with data scale or distribution.
For instance, we know that data from different waves use different value ranges, which is why we are going to rescale each element to lie between 0 and 1
Step6: And for other waves, where the range of values was different, we also have numbers between 0 and 1
Step7: In the end we have achieved a pretty normalised dataset. Now it's the turn for the science to begin.
Step8: Data analysis
After having tidied our data we will use some of the tools we have developed during the semester to evaluate whether two people are going to match.
We have chosen to show how cross-validation improves the result. We will first use a technique that does not guarantee cross-validation, and then one that does: we train several machine-learning objects, and afterwards each one only influences the points it has not seen (out-of-bag evaluation).
Step9: Answering the 2nd question
We want to visualize groups in order to create events where people have more affinity.
A way of visualizing this is by a radar chart. | Python Code:
speedDatingDF = pd.read_csv("Speed Dating Data.csv",encoding = "ISO-8859-1")
#speedDatingDF.dtypes #We can see which type has each attr.
Explanation: Introduction
In the last few years speed dating popularity has grown quickly. Despite its popularity lots of people don't seem as satisfied as they'd like. Most users don't end up finding what they were looking for. That's why a crew of data scientists is going to study data on previous speed dating events in order to make it easier for our users to find their other halves.
Disclaimer: Most of the data has been recorded from heterosexual encounters, which makes it difficult to generalize the data to our system. (New Speed Dating events are more plural, taking into account all sexes and genders)
What are we looking for?
Finding questions
The first thing we have to do is ask ourselves what conclusions we hope this study leads to. In other words, finding the questions this project is going to answer.
First of all we want to maximize the likelihood that two people fall in love.
"Are these two people going to match?" - (After selecting two people from a new wave)
Secondly we want to be able to group people in order to choose them for special waves
"Which group does someone correspond to?" - (After selecting two people from a new wave)
"Speed Dating" data tidying
The first thing to do is fix possible errors so that it is easier to approach the solution.
End of explanation
speedDatingDF = pretty_notebook.tidy_this_data(speedDatingDF)
Explanation: Every variable is described in the piece of writing called "Speed Dating Data.doc" which is in the same directory as this notebook.
And in cases such as gender or race we can see using numbers is not the best choice. That's why we are going to modify those variables which are not countable.
All these changes were made after reading "The elements of data analytic style", as recommended on the course webpage
Tidying choice
As most of the steps involved in tidying the data do not require any programming skill, I have preferred to put them into a black box where they don't disturb the eye of a real programmer.
The processing is inside the function tidy_this_data of the python file called pretty_notebook.py, which is in the same repository as this project.
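Since tidy_this_data itself lives in pretty_notebook.py, the following is only a hypothetical sketch of the kind of recoding it performs (the 0/1 coding and labels are assumptions based on the dataset documentation, not the actual implementation):
def recode_gender(df):
    # hypothetical helper: replace the 0/1 gender codes with readable labels
    return df.assign(gender=df['gender'].map({0: 'female', 1: 'male'}))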
End of explanation
#This same behaviour can be seen in most of the last attributes of feedback
print ( pretty_notebook.values_in_a_column(speedDatingDF.met) )
Explanation: Evaluating our changes
As regards the tidying process, we need a check that the processing is correct. Looking into the issue, two big problems appear.
The first problem is:
People didn't finish their evaluation and a lot of NAN values can be found in the last variables of the data frame.
End of explanation
#Taking the same column and looking at the values different from NAN an error appears.
values = pretty_notebook.values_in_a_column(speedDatingDF.met)
values = [v for v in values if not np.isnan(v)]
print (values) #True and False are not the unique values
Explanation: As the author thinks the problem in this case is that people could not be followed up, we will simply ignore most of the data.
The second problem is:
People entered wrong values and these were transcribed into the dataframe.
End of explanation
for v in values[2:]: #correction done HERE !!!
speedDatingDF.loc[speedDatingDF['met'] == v, 'met'] = True
#We evaluate if the changes are right
values = pretty_notebook.values_in_a_column(speedDatingDF.met)
values = [v for v in values if not np.isnan(v)]
print (values) #True and False ARE the UNIQUE values
Explanation: For uncountable variables this last error only appears in the met variable, so it is not a big issue, but it must be taken into account later when the variables representing a percentage come into play.
The correction is to change these values to True, as the author of the study assumes people meant that they had met their partner more than once.
End of explanation
pretty_notebook.normalize_data(speedDatingDF)
eachWave = speedDatingDF.groupby('wave')
eachWave.get_group(7).iloc[0:10,69:75]
Explanation: Countable data
Now it is time for the countable data; in order to tidy it, some plots will be made to spot possible problems with data scale or distribution.
For instance, we know that data from different waves use different value ranges, which is why we are going to rescale each element to lie between 0 and 1:
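The rescaling itself is done by pretty_notebook.normalize_data; purely as an illustration (an assumption, not the real implementation), a per-column min-max rescaling onto [0, 1] would look like this:
def scale_to_unit(series):
    # hypothetical helper: map a numeric column onto the [0, 1] interval
    return (series - series.min()) / (series.max() - series.min())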
End of explanation
eachWave.get_group(8).iloc[0:10,69:75]
Explanation: And for other waves, where the range of values was different, we also have numbers between 0 and 1:
End of explanation
# SAVING DATA IN ORDER TO SAVE TIME
speedDatingDF.to_csv('cleanDATAFRAME.csv',index=False)
#IF YOU TRUST MY CLEANING PROCESS
speedDatingDF = pd.read_csv('cleanDATAFRAME.csv')
Explanation: In the end we have achieved a reasonably normalised dataset. Now it's time for the science to begin.
End of explanation
labels = []
for boolean in speedDatingDF.match:
if boolean:
labels.append(1)
else:
labels.append(-1)
labels = np.array(labels) #If someone got match 1 else -1
the_set = dsf.LabeledSet(6) #We will fill it with the impression he causes and the things each one likes
values = np.array(speedDatingDF.iloc[0:,69:75]) #What the person asked looks for
for i in range(len(values)):
value = values[i]
label = labels[i]
the_set.addExample(value,label)
foret = dsf.ClassifierBaggingTree(5,0.3,0.7,True)
foret.train(the_set)
print("Bagging of decision trees (5 trees): accuracy totale: data=%.4f "%(foret.accuracy(the_set)))
perceps = dsf.ClassifierOOBPerceptron(5,0.3,0.0,True)
perceps.train(the_set)
print("Out of the bag with perceptrons (5 perceptrons): accuracy totale: data=%.4f "%(perceps.accuracy(the_set)))
foretOOB = dsf.ClassifierOOBTree(5,0.3,0.7,True)
foretOOB.train(the_set)
print("Out of the bag with trees (5 trees): accuracy totale: data=%.4f "%(foretOOB.accuracy(the_set)))
Explanation: Data analysis
After having tidied our data we will use some of the tools we have developed during the semester to evaluate whether two people are going to match.
We have chosen to show how cross-validation improves the result. We will first use a technique that does not guarantee cross-validation, and then one that does: we train several machine-learning objects, and afterwards each one only influences the points it has not seen (out-of-bag evaluation).
End of explanation
measure_up, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:,24:30].dropna(axis=0), 0.05, 100)
looking_for, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:,69:75].dropna(axis=0), 0.05, 100)
others_looking_for, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:,75:81].dropna(axis=0), 0.05, 100)
possible_pair, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:,81:87].dropna(axis=0), 0.05, 100)
data = [
['attractive', 'sincere', 'intelligent','funny','ambitious','hobbies'],
('The surveyed measure up',np.array(measure_up)),
('The surveyed is looking for:',np.array(looking_for)),
('The surveyed thinks others are looking for:',np.array(others_looking_for)),
('Possible matches are looking for:',np.array(possible_pair))
]
rc.print_rc(data,3)
Explanation: Answering the 2nd question
We want to visualize groups in order to create events where people have more affinity.
A way of visualizing this is by a radar chart.
End of explanation |
4,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Wolf-Sheep-Grass Model with Soil Creep
This notebook demonstrates coupling of an ABM implemented in Mesa and a grid-based numerical model written in Landlab. The example is the canonical "wolf-sheep-grass" example of an agent-based model. Here we add an additional twist
Step19: Next, we'll define a Mesa model object, representing the wolf-sheep-grass model, along with an agent object, representing grass patches. Note that this Mesa code in the cell below, which implements the wolf-sheep-grass example, was written by the Mesa development team; the original can be found here.
Step20: Create an instance of the WolfSheep model, with the grass option set to True
Step21: Define a function to set up an array representing the growth status of grass on the model grid (in other words, extract the information from the model's GrassPatch agents), as well as a function to plot the current grass status. This is really a translation of data structures
Step22: Run the model and display the results
Step23: One-way coupling
Step24: Import from Landlab a RasterModelGrid (which will be Landlab's version of the model grid), the imshow_grid function (for plotting Landlab grid fields), and the LinearDiffuser component (which will implement down-slope soil creep).
Step25: Interestingly, erosion tends to occur at locations where grass cover upslope captures incoming soil.
So far, however, this is just one-way feedback
Step26: Next we define a new function limit_grass_by_soil that will render any GrassPatches "non-fully-grown" if the soil is thinner than a specified minimum value. In other words, we represent soil limitation with a simple threshold in which the grass in any cell with soil thinner than the threshold can never be fully grown. Again, a more realistic way to do this might be to reduce the regrowth rate, but our simple threshold treatment will serve for the purpose of showing how we can use data from a Landlab field to influence data associated with spatially distributed agents in a Mesa model
Step27: Run the integrated model in a time loop. Our algorithm performs the following sequence of calculations in each iteration
Step28: The next few plots examine the results to illustrate how the interaction of soil creep and grass consumption by mobile agents (sheep) has influenced the landscape | Python Code:
try:
from mesa import Model
except ModuleNotFoundError:
print(
Mesa needs to be installed in order to run this notebook.
Normally Mesa should be pre-installed alongside the Landlab notebook collection.
But it appears that Mesa is not already installed on the system on which you are
running this notebook. You can install Mesa from a command prompt using either:
`conda install -c conda-forge mesa`
or
`pip install mesa`
)
raise
Explanation: Wolf-Sheep-Grass Model with Soil Creep
This notebook demonstrates coupling of an ABM implemented in Mesa and a grid-based numerical model written in Landlab. The example is the canonical "wolf-sheep-grass" example of an agent-based model. Here we add an additional twist: when sheep eat grass, the soil beneath becomes more easily mobile. This then influences soil transport: the transport efficiency is higher where the grass is "damaged". An additional feedback lies in the thickness of the soil: grass will not grow if the soil is too thin.
The rules in this example are deliberately simple. The main goal of this tutorial is to illustrate the mechanics of building an integrated model that combines agent-based elements (via Mesa) with continuum-based elements (via Landlab) on a shared grid.
(Greg Tucker, June 2020; most recent update November 2021)
Running the Mesa Wolf-Sheep-Grass model by itself
To start, here's an example of how to run a Mesa model in a notebook. First, we'll run a check to make sure Mesa is installed and available; if it is not, follow the instructions in the message to install it, then re-start the kernel (Kernel => Restart) and continue.
End of explanation
from collections import defaultdict
from mesa import Agent
from mesa import Model
from mesa.space import MultiGrid
from mesa.datacollection import DataCollector
from mesa.time import RandomActivation
class RandomActivationByBreed(RandomActivation):
A scheduler which activates each type of agent once per step, in random
order, with the order reshuffled every step.
This is equivalent to the NetLogo 'ask breed...' and is generally the
default behavior for an ABM.
Assumes that all agents have a step() method.
def __init__(self, model):
super().__init__(model)
self.agents_by_breed = defaultdict(dict)
def add(self, agent):
Add an Agent object to the schedule
Args:
agent: An Agent to be added to the schedule.
self._agents[agent.unique_id] = agent
agent_class = type(agent)
self.agents_by_breed[agent_class][agent.unique_id] = agent
def remove(self, agent):
Remove all instances of a given agent from the schedule.
del self._agents[agent.unique_id]
agent_class = type(agent)
del self.agents_by_breed[agent_class][agent.unique_id]
def step(self, by_breed=True):
Executes the step of each agent breed, one at a time, in random order.
Args:
by_breed: If True, run all agents of a single breed before running
the next one.
if by_breed:
for agent_class in self.agents_by_breed:
self.step_breed(agent_class)
self.steps += 1
self.time += 1
else:
super().step()
def step_breed(self, breed):
Shuffle order and run all agents of a given breed.
Args:
breed: Class object of the breed to run.
agent_keys = list(self.agents_by_breed[breed].keys())
self.model.random.shuffle(agent_keys)
for agent_key in agent_keys:
self.agents_by_breed[breed][agent_key].step()
def get_breed_count(self, breed_class):
Returns the current number of agents of certain breed in the queue.
return len(self.agents_by_breed[breed_class].values())
class RandomWalker(Agent):
Class implementing random walker methods in a generalized manner.
Not indended to be used on its own, but to inherit its methods to multiple
other agents.
grid = None
x = None
y = None
moore = True
def __init__(self, unique_id, pos, model, moore=True):
grid: The MultiGrid object in which the agent lives.
x: The agent's current x coordinate
y: The agent's current y coordinate
moore: If True, may move in all 8 directions.
Otherwise, only up, down, left, right.
super().__init__(unique_id, model)
self.pos = pos
self.moore = moore
def random_move(self):
Step one cell in any allowable direction.
# Pick the next cell from the adjacent cells.
next_moves = self.model.grid.get_neighborhood(self.pos, self.moore, True)
next_move = self.random.choice(next_moves)
# Now move:
self.model.grid.move_agent(self, next_move)
class Sheep(RandomWalker):
A sheep that walks around, reproduces (asexually) and gets eaten.
The init is the same as the RandomWalker.
energy = None
def __init__(self, unique_id, pos, model, moore, energy=None):
super().__init__(unique_id, pos, model, moore=moore)
self.energy = energy
def step(self):
A model step. Move, then eat grass and reproduce.
self.random_move()
living = True
if self.model.grass:
# Reduce energy
self.energy -= 1
# If there is grass available, eat it
this_cell = self.model.grid.get_cell_list_contents([self.pos])
grass_patch = [obj for obj in this_cell if isinstance(obj, GrassPatch)][0]
if grass_patch.fully_grown:
self.energy += self.model.sheep_gain_from_food
grass_patch.fully_grown = False
# Death
if self.energy < 0:
self.model.grid._remove_agent(self.pos, self)
self.model.schedule.remove(self)
living = False
if living and self.random.random() < self.model.sheep_reproduce:
# Create a new sheep:
if self.model.grass:
self.energy /= 2
lamb = Sheep(
self.model.next_id(), self.pos, self.model, self.moore, self.energy
)
self.model.grid.place_agent(lamb, self.pos)
self.model.schedule.add(lamb)
class Wolf(RandomWalker):
A wolf that walks around, reproduces (asexually) and eats sheep.
energy = None
def __init__(self, unique_id, pos, model, moore, energy=None):
super().__init__(unique_id, pos, model, moore=moore)
self.energy = energy
def step(self):
self.random_move()
self.energy -= 1
# If there are sheep present, eat one
x, y = self.pos
this_cell = self.model.grid.get_cell_list_contents([self.pos])
sheep = [obj for obj in this_cell if isinstance(obj, Sheep)]
if len(sheep) > 0:
sheep_to_eat = self.random.choice(sheep)
self.energy += self.model.wolf_gain_from_food
# Kill the sheep
self.model.grid._remove_agent(self.pos, sheep_to_eat)
self.model.schedule.remove(sheep_to_eat)
# Death or reproduction
if self.energy < 0:
self.model.grid._remove_agent(self.pos, self)
self.model.schedule.remove(self)
else:
if self.random.random() < self.model.wolf_reproduce:
# Create a new wolf cub
self.energy /= 2
cub = Wolf(
self.model.next_id(), self.pos, self.model, self.moore, self.energy
)
self.model.grid.place_agent(cub, cub.pos)
self.model.schedule.add(cub)
class GrassPatch(Agent):
A patch of grass that grows at a fixed rate and it is eaten by sheep
def __init__(self, unique_id, pos, model, fully_grown, countdown):
Creates a new patch of grass
Args:
grown: (boolean) Whether the patch of grass is fully grown or not
countdown: Time for the patch of grass to be fully grown again
super().__init__(unique_id, model)
self.fully_grown = fully_grown
self.countdown = countdown
self.pos = pos
def step(self):
if not self.fully_grown:
if self.countdown <= 0:
# Set as fully grown
self.fully_grown = True
self.countdown = self.model.grass_regrowth_time
else:
self.countdown -= 1
Wolf-Sheep Predation Model
================================
Replication of the model found in NetLogo:
Wilensky, U. (1997). NetLogo Wolf Sheep Predation model.
http://ccl.northwestern.edu/netlogo/models/WolfSheepPredation.
Center for Connected Learning and Computer-Based Modeling,
Northwestern University, Evanston, IL.
class WolfSheep(Model):
Wolf-Sheep Predation Model
height = 20
width = 20
initial_sheep = 100
initial_wolves = 50
sheep_reproduce = 0.04
wolf_reproduce = 0.05
wolf_gain_from_food = 20
grass = False
grass_regrowth_time = 30
sheep_gain_from_food = 4
verbose = False # Print-monitoring
description = (
"A model for simulating wolf and sheep (predator-prey) ecosystem modelling."
)
def __init__(
self,
height=20,
width=20,
initial_sheep=100,
initial_wolves=50,
sheep_reproduce=0.04,
wolf_reproduce=0.05,
wolf_gain_from_food=20,
grass=False,
grass_regrowth_time=30,
sheep_gain_from_food=4,
):
Create a new Wolf-Sheep model with the given parameters.
Args:
initial_sheep: Number of sheep to start with
initial_wolves: Number of wolves to start with
sheep_reproduce: Probability of each sheep reproducing each step
wolf_reproduce: Probability of each wolf reproducing each step
wolf_gain_from_food: Energy a wolf gains from eating a sheep
grass: Whether to have the sheep eat grass for energy
grass_regrowth_time: How long it takes for a grass patch to regrow
once it is eaten
sheep_gain_from_food: Energy sheep gain from grass, if enabled.
super().__init__()
# Set parameters
self.height = height
self.width = width
self.initial_sheep = initial_sheep
self.initial_wolves = initial_wolves
self.sheep_reproduce = sheep_reproduce
self.wolf_reproduce = wolf_reproduce
self.wolf_gain_from_food = wolf_gain_from_food
self.grass = grass
self.grass_regrowth_time = grass_regrowth_time
self.sheep_gain_from_food = sheep_gain_from_food
self.schedule = RandomActivationByBreed(self)
self.grid = MultiGrid(self.height, self.width, torus=True)
self.datacollector = DataCollector(
{
"Wolves": lambda m: m.schedule.get_breed_count(Wolf),
"Sheep": lambda m: m.schedule.get_breed_count(Sheep),
}
)
# Create sheep:
for i in range(self.initial_sheep):
x = self.random.randrange(self.width)
y = self.random.randrange(self.height)
energy = self.random.randrange(2 * self.sheep_gain_from_food)
sheep = Sheep(self.next_id(), (x, y), self, True, energy)
self.grid.place_agent(sheep, (x, y))
self.schedule.add(sheep)
# Create wolves
for i in range(self.initial_wolves):
x = self.random.randrange(self.width)
y = self.random.randrange(self.height)
energy = self.random.randrange(2 * self.wolf_gain_from_food)
wolf = Wolf(self.next_id(), (x, y), self, True, energy)
self.grid.place_agent(wolf, (x, y))
self.schedule.add(wolf)
# Create grass patches
if self.grass:
for agent, x, y in self.grid.coord_iter():
fully_grown = self.random.choice([True, False])
if fully_grown:
countdown = self.grass_regrowth_time
else:
countdown = self.random.randrange(self.grass_regrowth_time)
patch = GrassPatch(self.next_id(), (x, y), self, fully_grown, countdown)
self.grid.place_agent(patch, (x, y))
self.schedule.add(patch)
self.running = True
self.datacollector.collect(self)
def step(self):
self.schedule.step()
# collect data
self.datacollector.collect(self)
if self.verbose:
print(
[
self.schedule.time,
self.schedule.get_breed_count(Wolf),
self.schedule.get_breed_count(Sheep),
]
)
def run_model(self, step_count=200):
if self.verbose:
print("Initial number wolves: ", self.schedule.get_breed_count(Wolf))
print("Initial number sheep: ", self.schedule.get_breed_count(Sheep))
for i in range(step_count):
self.step()
if self.verbose:
print("")
print("Final number wolves: ", self.schedule.get_breed_count(Wolf))
print("Final number sheep: ", self.schedule.get_breed_count(Sheep))
Explanation: Next, we'll define a Mesa model object, representing the wolf-sheep-grass model, along with an agent object, representing grass patches. Note that this Mesa code in the cell below, which implements the wolf-sheep-grass example, was written by the Mesa development team; the original can be found here.
End of explanation
ws = WolfSheep(grass=True)
Explanation: Create an instance of the WolfSheep model, with the grass option set to True:
End of explanation
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import copy
ground_cover_cmap = copy.copy(mpl.cm.get_cmap("YlGn"))
def generate_grass_map(model):
grass_map = np.zeros((model.grid.width, model.grid.height))
for cell in model.grid.coord_iter():
cell_content, x, y = cell
for agent in cell_content:
if type(agent) is GrassPatch:
if agent.fully_grown:
grass_map[x][y] = 2
else:
grass_map[x][y] = 1
return grass_map
def plot_grass_map(grass_map):
plt.imshow(grass_map, interpolation="nearest", cmap=ground_cover_cmap)
plt.colorbar()
Explanation: Define a function to set up an array representing the growth status of grass on the model grid (in other words, extract the information from the model's GrassPatch agents), as well as a function to plot the current grass status. This is really a translation of data structures: the Mesa model stores data inside agents, which themselves reside at particular grid cells. Here we want to extract the information pertaining to the status of each cell's GrassPatch---is it fully grown or "damaged"---and store that information in a simple 2D numpy array.
End of explanation
ws.run_model(step_count=25)
gm = generate_grass_map(ws)
plot_grass_map(gm)
Explanation: Run the model and display the results:
End of explanation
ws = WolfSheep(grass=True)
ws.run_model(step_count=3)
gm = generate_grass_map(ws)
plot_grass_map(gm)
Explanation: One-way coupling: using the grass cover in a soil-creep model
Here we initialize and run the W-S-G model for a short duration. We then extract its map of fully grown versus damaged grass, and use that to set the soil creep coefficient in a model of downslope soil creep. The point here is just to show that it's pretty easy to use a grid from a Mesa model as input to a Landlab-built model.
End of explanation
from landlab import RasterModelGrid, imshow_grid
from landlab.components import LinearDiffuser
import copy
import matplotlib as mpl
# Create a grid the same size as the W-S-G model's grid
rmg = RasterModelGrid((ws.grid.width, ws.grid.height))
# Create elevation field and have it slope down to the south at 10% gradient
elev = rmg.add_zeros("topographic__elevation", at="node")
elev[:] = 0.1 * rmg.y_of_node
# Have one open boundary on the south side
rmg.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Remember the starting elevation so we can calculate cumulative erosion/deposition
initial_elev = np.zeros(rmg.number_of_nodes)
initial_elev[:] = elev
# Create a field for the creep coefficient, and set parameters for two
# rates: slow (full grass cover) and fast (partial or "eaten" grass cover)
creep_coef = rmg.add_zeros("creep_coefficient", at="node")
fast_creep = 0.1
slow_creep = 0.001
# Assign the higher creep coefficient to cells where the grass has
# been eaten and not yet recovered; the slower value is assigned to
# "fully grown" grass patches.
creep_coef[gm.flatten() == 1] = fast_creep
creep_coef[gm.flatten() == 2] = slow_creep
# Instantiate a LinearDiffuser (soil creep) Landlab component
diffuser = LinearDiffuser(rmg, linear_diffusivity=creep_coef)
# Set the time step duration
dt = 0.2 * rmg.dx * rmg.dx / fast_creep
print(f"Time step duration is {dt} years.")
# Run the soil creep model
for i in range(50):
diffuser.run_one_step(dt)
# Calculate and plot the erosion/deposition patterns
ero_dep = elev - initial_elev
maxchange = np.amax(np.abs(ero_dep))
imshow_grid(
rmg,
ero_dep,
vmin=-maxchange,
vmax=maxchange,
cmap=copy.copy(mpl.cm.get_cmap("coolwarm_r")),
colorbar_label="Cumulative deposition (+) or erosion (-), m",
)
# Plot the grass cover again
imshow_grid(
rmg, gm, cmap=ground_cover_cmap, colorbar_label="Ground cover (1 = bare, 2 = grass)"
)
imshow_grid(
rmg,
elev,
cmap=copy.copy(mpl.cm.get_cmap("pink")),
colorbar_label="Elevation above base of slope (m)",
)
Explanation: Import from Landlab a RasterModelGrid (which will be Landlab's version of the model grid), the imshow_grid function (for plotting Landlab grid fields), and the LinearDiffuser component (which will implement down-slope soil creep).
End of explanation
ws = WolfSheep(grass=True)
initial_soil_depth = 0.2
min_depth_for_grass = 0.2
hstar = 0.2
fast_creep = 0.1
slow_creep = 0.001
# Create a grid the same size as the W-S-G model's grid
rmg = RasterModelGrid((ws.grid.width, ws.grid.height))
# Create elevation field and have it slope down to the south at 10% gradient
elev = rmg.add_zeros("topographic__elevation", at="node")
elev[:] = 0.1 * rmg.y_of_node
# Have one open boundary on the south side
rmg.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Remember the starting elevation so we can calculate cumulative erosion/deposition
initial_elev = np.zeros(rmg.number_of_nodes)
initial_elev[:] = elev
# Also remember the elevation of the prior time step, so we can difference
prior_elev = np.zeros(rmg.number_of_nodes)
# Create a field for the creep coefficient, and set parameters for two
# rates: slow (full grass cover) and fast (partial or "eaten" grass cover)
creep_coef = rmg.add_zeros("creep_coefficient", at="node")
# Create a soil-thickness field
soil = rmg.add_zeros("soil__depth", at="node")
soil[:] = initial_soil_depth
# Instantiate a LinearDiffuser (soil creep) Landlab component
diffuser = LinearDiffuser(rmg, linear_diffusivity=creep_coef)
# Set the time step duration
dt = 0.2 * rmg.dx * rmg.dx / fast_creep
print("Time step duration is {dt} years.")
Explanation: Interestingly, erosion tends to occur at locations where grass cover upslope captures incoming soil.
So far, however, this is just one-way feedback: the previously damaged grass patches, as calculated in the wolf-sheep-grass ABM, become susceptible to erosion, but this does not (yet) feed back into future grass growth or erosional loss. Let's turn to that next.
Two-way feedback
Here, we explore two-way feedback by running the two models iteratively. We track soil thickness, and "damage" any grass where the soil is thinner than a given amount. We also limit soil flux according to its thickness, so that absent soil cannot move.
These rules are deliberately simple. One could make the model more realistic by, for example, setting the grass regrowth time (a property of the GrassPatch agents in the ABM) to a value that depends on the thickness of the soil (a Landlab field).
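The soil-thickness limitation used in the loop below multiplies the creep coefficient by (1 - exp(-H/H*)), where H is the local soil depth and H* = hstar; a quick sketch of how that factor behaves:
import numpy as np
H = np.array([0.0, 0.1, 0.2, 1.0])      # soil depths in m
print(1.0 - np.exp(-H / 0.2))           # approximately [0., 0.39, 0.63, 0.99]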
End of explanation
def limit_grass_by_soil(wsg_model, soil, min_soil_depth):
soilmatrix = soil.reshape((wsg_model.width, wsg_model.height))
for cell in wsg_model.grid.coord_iter():
cell_content, x, y = cell
if soilmatrix[x][y] < min_soil_depth:
for agent in cell_content:
if type(agent) is GrassPatch:
agent.fully_grown = False
Explanation: Next we define a new function limit_grass_by_soil that will render any GrassPatches "non-fully-grown" if the soil is thinner than a specified minimum value. In other words, we represent soil limitation with a simple threshold in which the grass in any cell with soil thinner than the threshold can never be fully grown. Again, a more realistic way to do this might be to reduce the regrowth rate, but our simple threshold treatment will serve for the purpose of showing how we can use data from a Landlab field to influence data associated with spatially distributed agents in a Mesa model:
End of explanation
# Main loop
for _ in range(50):
# Assign the higher creep coefficient to cells where the grass has
# been eaten and not yet recovered; the slower value is assigned to
# "fully grown" grass patches.
gm = generate_grass_map(ws)
creep_coef[gm.flatten() == 1] = fast_creep
creep_coef[gm.flatten() == 2] = slow_creep
# Adjust the creep coefficient to account for soil depth
creep_coef *= 1.0 - np.exp(-soil / hstar)
# Run the soil-creep model
prior_elev[:] = elev
diffuser.run_one_step(dt)
# Update the soil cover
soil += elev - prior_elev
# Update the grass cover
limit_grass_by_soil(ws, soil, min_depth_for_grass)
# Run the W-S-G model
ws.step()
Explanation: Run the integrated model in a time loop. Our algorithm performs the following sequence of calculations in each iteration:
Get a copy of the current grass status as a 2D array
Update the soil-creep coefficient Landlab field according to the grass status and the soil thickness
Run soil creep for one time step and update the soil thickness (we could have used a DepthDependentLinearDiffuser for this, but here a simpler approach will suffice)
Set grass in any cells with insufficient soil to be non-fully-grown
Run the wolf-sheep-grass model for one time step
The data exchange happens in two function calls. generate_grass_map translates grass status data from the Mesa model's data structure to a Landlab field, and limit_grass_by_soil translates Landlab's soil thickness field into a restriction on grass status in the Mesa model's GrassPatch agents.
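As an optional diagnostic (a sketch, not part of the original workflow): the DataCollector configured in the WolfSheep model records the wolf and sheep counts at every step, so population curves can be pulled out after the loop finishes.
counts = ws.datacollector.get_model_vars_dataframe()   # columns: 'Wolves', 'Sheep'
counts.plot()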
End of explanation
# Calculate and plot the erosion/deposition patterns
ero_dep = elev - initial_elev
maxchange = np.amax(np.abs(ero_dep))
imshow_grid(
rmg,
ero_dep,
vmin=-maxchange,
vmax=maxchange,
cmap="coolwarm_r",
colorbar_label="Depth of soil accumulation (+) or loss (-), m",
)
# Soil thickness
imshow_grid(rmg, soil, colorbar_label="Soil thickness, m")
# Ground cover
imshow_grid(
rmg, gm, cmap=ground_cover_cmap, colorbar_label="Ground cover (1 = bare, 2 = grass)"
)
Explanation: The next few plots examine the results to illustrate how the interaction of soil creep and grass consumption by mobile agents (sheep) has influenced the landscape:
End of explanation |
4,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handy small functions related to astronomical research
Step3: Defining function
Emission related
1. Dust
Step8: 2. Opacity
Step10: Motions
1. free-fall timescale
Step13: 2. Jeans Length and Jeans mass
Step14: 3. Toomre Q parameter
Observatory related
Target source availability
Frequently used unit/coordinate transformation
Brightness temperature
Plotting
Step15: Plot Planck function in cgs and mks unit
Step16: Plot electron (free-free) optical depth as a function of emission measure
Step17: Plotting the measurements for FU Ori as an practical example. Reference see Liu, H. B. et al. (2017) [arXiv
Step18: Plot free-fall time as a function of cloud particle number density
Step19: Plot Jeans length as a function of cloud particle number density
Step20: Plot Jeans mass as a function of cloud particle number density | Python Code:
import math
import numpy as np
from numpy import size
Explanation: Handy small functions related to astronomical research
End of explanation
def Planckfunc_cgs(freq, temperature):
Calculate Planck function.
Inputs:
freq: frequency, in Hz
temperature: temperature in Kelvin
Return:
Intensity: in cgs unit ( erg s^-1 sr^-1 cm^-2 Hz-1 )
# defining physical constants
c_cgs = 29979245800.0 # light speed
h_cgs = 6.62606885e-27 # planck constant
kB_cgs = 1.38064852e-16 # Boltzmann constant
inputsize = size(freq)
if (inputsize ==1):
A = ( 2.0 * h_cgs * (freq**3.0) ) / ( c_cgs ** 2.0 )
B = math.exp( (h_cgs * freq) / (kB_cgs * temperature) )
return A * ( 1.0 / (B - 1.0) )
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
A = ( 2.0 * h_cgs * (freq[id]**3.0) ) / ( c_cgs ** 2.0 )
B = math.exp( (h_cgs * freq[id]) / (kB_cgs * temperature) )
out_array[id] = A * ( 1.0 / (B - 1.0) )
return out_array
def Planckfunc_mks(freq, temperature):
Calculate Planck function.
Inputs:
freq: frequency, in Hz
temperature: temperature in Kelvin
Return:
Intensity: in mks unit ( J s^-1 sr^-1 m^-2 Hz-1 )
# defining physical constants
c_mks = 299792458.0 # light speed
h_mks = 6.62607004e-34 # planck constant
kB_mks = 1.38064852e-23 # Boltzmann constant
inputsize = size(freq)
if (inputsize ==1):
A = ( 2.0 * h_mks * (freq**3.0) ) / ( c_mks ** 2.0 )
B = math.exp( (h_mks * freq) / (kB_mks * temperature) )
return A * ( 1.0 / (B - 1.0) )
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
            A = ( 2.0 * h_mks * (freq[id]**3.0) ) / ( c_mks ** 2.0 )
            B = math.exp( (h_mks * freq[id]) / (kB_mks * temperature) )
            out_array[id] = A * ( 1.0 / (B - 1.0) )
        return out_array
Explanation: Defining function
Emission related
1. Dust
End of explanation
# free-free emission
def emission_measure(ne, ell):
Estimate emission measure, in unit of pc cm^-6
Inputs:
ne: election number volume-density, in unit of cm^-3
ell: line-of-sight thickness of emission region, in unit of pc
Return:
emission measure (EM), in unit of pc cm^-6
emission_measure = math.pow( ne, 2.0 ) * ell
return emission_measure
def tauff_Mezger67(freq, Te, EM):
Calculate electron optical depth for free-free emission.
following the prescription of Mezger & Henderson (1967) and
Keto et al. (2003).
Inputs:
freq: frequency / frequencies, in Hz
Te : electron temperature in Kelvin
EM : emission measure in pc cm^-6
Return:
optical depth (dimension free)
inputsize = size(freq)
if (inputsize ==1):
tauff = 8.235e-2 * math.pow( Te, -1.35 ) * math.pow( freq/1e9, -2.1 ) * EM
return tauff
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
out_array[id] = 8.235e-2 * \
math.pow( Te, -1.35 )* \
math.pow( freq[id]/1e9, -2.1 ) * \
EM
return out_array
# Simplified dust
def dustkappa_cgs(freq, rep_freq, opacity_at_repfreq, opacity_index):
Calculate dust opacity at the specified frequency
Inputs:
freq: frequency / frequencies, in Hz
rep_freq: a frequency which the dust opacity is specified, in Hz
opacity_at_repfreq: opacity at the specified representative frequency, in cm^2 g^-1
opacity index: dust opacity spectral index (dimension free)
Return:
dust opacity, in units of cm^2 g^-1
inputsize = size(freq)
if (inputsize ==1):
opacity = opacity_at_repfreq * math.pow( (freq / rep_freq ) , opacity_index)
return opacity
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
out_array[id] = opacity_at_repfreq * math.pow( (freq[id] / rep_freq ) , opacity_index)
return out_array
# Modified black body flux
def blackbody_Fnu_cgs(freq, temperature, tau, Omega):
Evaluate flux of black body emission, in cgs unit.
Inputs:
Frequency : frequency / frequencies, in Hz
temperature : temperature, in Kelvin
tau : optical depth /depths, dimensionless
Omega : solid angle, in Sr
Return:
flux in cgs unit
inputsize = size(freq)
if (inputsize ==1):
flux = Planckfunc_cgs(freq, temperature) * \
(1.0 - math.exp(-1.0 * tau) )* \
Omega
return flux
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
out_array[id] = Planckfunc_cgs(freq[id], temperature) * \
(1.0 - math.exp(-1.0 * tau[id]) )* \
Omega
return out_array
print ('electron optical depth: ', tauff_Mezger67(33.0*1e9, 8000.0, 1.0e9), end='\n' )
print ('dust opacity: ', dustkappa_cgs(33.0*1e9, 230.0*1e9, 1.0 ,1.75), end='cm${^2}$ g$^{-1}$' )
Explanation: 2. Opacity
End of explanation
def freefall_cgs(density):
Calculate free-fall timescale.
Input:
density: density, in g cm^-3
Return:
Free fall time ( seconds )
# defining physical constants
G_cgs = 6.674e-8
inputsize = size(density)
if (inputsize ==1):
A = 3.0 * math.pi
B = 32.0 * G_cgs * density
time = math.sqrt( A / B )
return time
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
A = 3.0 * math.pi
B = 32.0 * G_cgs * density[id]
out_array[id] = math.sqrt( A / B )
return out_array
Explanation: Motions
1. free-fall timescale
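A usage sketch of freefall_cgs (the numbers are illustrative assumptions): the free-fall time of molecular gas at about 1e4 H2 molecules per cm^3, taking a mean molecular weight of roughly 2.8 per hydrogen molecule.
mH_cgs = 1.6726219e-24                  # hydrogen atom mass in g
rho = 1.0e4 * 2.8 * mH_cgs              # 1e4 H2 cm^-3, mu_H2 ~ 2.8 (assumed)
tff_yr = freefall_cgs(rho) / 3.156e7    # seconds -> years
print('free-fall time ~ %.2e yr' % tff_yr)   # roughly 3e5 yr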
End of explanation
def Jeanslength_cgs(density, temperature, particlemass):
Calculate Jeans Length.
Inputs:
density: density, in g cm^-3
temperature: temperature in Kelvin
particlemass: in g, to be used for calculating sound speed
Return:
Jeans length in cgs unit ( cm )
# defining physical constants
kB_cgs = 1.38064852e-16 # Boltzmann constant
G_cgs = 6.674e-8
inputsize = size(density)
if (inputsize ==1):
A = 15.0 * kB_cgs * temperature
B = 4.0 * math.pi * G_cgs * density * particlemass
length = math.sqrt( A / B )
return length
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
A = 15.0 * kB_cgs * temperature
B = 4.0 * math.pi * G_cgs * density[id] * particlemass
length = math.sqrt( A / B )
out_array[id] = length
return out_array
def Jeansmass_cgs(density, temperature, particlemass):
Calculate Jeans mass.
Inputs:
density: density, in g cm^-3
temperature: temperature in Kelvin
particlemass: in g, to be used for calculating sound speed
Return:
Jeans mass in cgs unit ( g )
inputsize = size(density)
if (inputsize ==1):
mass = (4.0 / 3.0) * math.pi \
* ( Jeanslength_cgs(density, temperature, particlemass) **3 ) \
* density
return mass
else:
out_array = np.arange(0, inputsize) * 0.0
for id in list(range(0,inputsize)):
out_array[id] = (4.0 / 3.0) * math.pi \
* ( Jeanslength_cgs(density[id], temperature, particlemass) **3 ) \
* density[id]
return out_array
Explanation: 2. Jeans Length and Jeans mass
End of explanation
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 3. Toomre Q parameter (a hedged sketch is given after this list of headings)
Observatory related
Target source availability
Frequently used unit/coordinate transformation
Brightness temperature
Plotting
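The Toomre Q item above is listed only as a heading; as a hedged sketch (assumptions: a Keplerian disk so the epicyclic frequency equals the angular velocity, an isothermal sound speed, and cgs inputs — surface density in g cm^-2, angular velocity in s^-1), a minimal helper could look like this:
def toomreQ_cgs(surface_density, temperature, omega, mu=2.37):
    # Q = c_s * kappa / (pi * G * Sigma); for a Keplerian disk kappa ~ Omega
    G_cgs = 6.674e-8
    kB_cgs = 1.38064852e-16
    mH_cgs = 1.6726219e-24
    cs = math.sqrt(kB_cgs * temperature / (mu * mH_cgs))   # isothermal sound speed
    return cs * omega / (math.pi * G_cgs * surface_density)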
End of explanation
freq_array = np.arange(1, 2501) * 1e10 # frequency in Hz
output_array_mks = np.arange(1, 2501) * 0.0
# physical constants
cgsflux_to_Jy = 1e23
mksflux_to_Jy = 1e26
str_to_sqdegree = 3282.80635
str_to_sqarcsecond = 3282.80635 * (3600.0 ** 2.0)
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
plt.axis([0.5, 4.9, -5, 5.2])
# plt.axis([2.0, 3.0, -1, 1])
# evaluate Planck function in CGS unit
temperature = 15.0 # Kelvin
output_array_cgs = Planckfunc_cgs(freq_array, temperature) \
* cgsflux_to_Jy / str_to_sqarcsecond
plt.plot(np.log10( freq_array / 1e9) , np.log10(output_array_cgs), \
color = (0, 0, 1.0, 0.2),
linewidth=8, label = '15 K')
# evaluate Planck function in MKS unit
TCMB = 2.72548 # Kelvin
for id in list(range(0, 2500)):
output_array_mks[id] = Planckfunc_mks(freq_array[id], TCMB) * \
mksflux_to_Jy / str_to_sqarcsecond
plt.plot(np.log10( freq_array / 1e9), np.log10(output_array_mks), \
linestyle = 'dashed',
color = (0.2, 0.6, 0, 0.4),
linewidth =5, label = '$T_{CMB}$')
plt.plot(np.log10( freq_array / 1e9), np.log10(output_array_cgs - output_array_mks), \
linestyle = 'dashed',
color = (1.0, 0, 0, 0.4),
linewidth =5, label = '15 K - $T_{CMB}$')
# evaluate Planck function in CGS unit
temperature = 30.0 # Kelvin
output_array_cgs = Planckfunc_cgs(freq_array, temperature) \
* cgsflux_to_Jy / str_to_sqarcsecond
plt.plot(np.log10( freq_array / 1e9) , np.log10(output_array_cgs), \
color = (0, 0, 1.0, 0.4),
linewidth=8, label = '30 K')
# evaluate Planck function in CGS unit
temperature = 100.0 # Kelvin
output_array_cgs = Planckfunc_cgs(freq_array, temperature) \
* cgsflux_to_Jy / str_to_sqarcsecond
plt.plot(np.log10( freq_array / 1e9) , np.log10(output_array_cgs), \
color = (0, 0, 1.0, 0.6),
linewidth=8, label = '100 K')
# evaluate Planck function in CGS unit
temperature = 300.0 # Kelvin
output_array_cgs = Planckfunc_cgs(freq_array, temperature) \
* cgsflux_to_Jy / str_to_sqarcsecond
plt.plot(np.log10( freq_array / 1e9) , np.log10(output_array_cgs), \
color = (0, 0, 1.0, 0.8),
linewidth=8, label = '300 K')
plt.title('Planck Function')
plt.xlabel('Log$_{10}$(Frequency [GHz])')
plt.ylabel('Log$_{10}$(Intensity [Jy arcsecond$^{-2}$])')
plt.legend(loc=2)
Explanation: Plot Planck function in cgs and mks unit
End of explanation
# initializing arrays
freq = np.arange(1, 250) * 1e9 # frequency in Hz # Hz
# initial condition
ell = 1.0 / 2.0626e5 # parsec
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
# evaluate electron (free-free) optical depth
Te = 8000.0 # Kelvin
ne = 1e4 # cm^-3
EM = emission_measure( ne, ell )
output_array = tauff_Mezger67(freq, Te, EM)
label = 'T$_{e}$=8000 K, N$_{e}$=%e' % ( round(ne,0) )
plt.plot(np.log10( freq / 1e9 ), np.log10(output_array), \
color = (0.1, 0.0, 0.4, 0.2),
linewidth=8, label = label)
Te = 8000.0 # Kelvin
ne = 1e5 # cm^-3
EM = emission_measure( ne, ell )
output_array = tauff_Mezger67(freq, Te, EM)
label = 'T$_{e}$=8000 K, N$_{e}$=%e' % ( round(ne,0) )
plt.plot(np.log10( freq / 1e9 ), np.log10(output_array), \
color = (0.1, 0.0, 0.4, 0.4),
linewidth=8, label = label)
Te = 8000.0 # Kelvin
ne = 1e6 # cm^-3
EM = emission_measure( ne, ell )
output_array = tauff_Mezger67(freq, Te, EM)
label = 'T$_{e}$=8000 K, N$_{e}$=%e' % ( round(ne,0) )
plt.plot(np.log10( freq / 1e9 ), np.log10(output_array), \
color = (0.1, 0.0, 0.4, 0.6),
linewidth=8, label = label)
plt.title('Electron (free-free optical depth) for 1 AU scale HII region')
plt.xlabel('Log$_{10}$(Frequency [GHz])')
plt.ylabel('Log$_{10}$(Optical depth)')
plt.legend(loc=1)
Explanation: Plot electron (free-free) optical depth as a function of emission measure
End of explanation
# initializing arrays
freq1 = np.arange(1, 50000) * 1e9 # frequency in Hz # Hz
freq2 = np.arange(50000, 150000, 1000) * 1e9
freq = np.concatenate((freq1, freq2), axis=0)
# physical constants
c_mks = 299792458.0 # light speed in m/s
cgsflux_to_Jy = 1e23
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
plt.axis([0.5, 2e5, 3e-5, 1e5])
plt.xscale('log')
plt.yscale('log')
# FU Ori
Te_FUOri = 16000.0 # Kelvin
EM_FUOri = 6.98e9
Omega_ff_FUOri = 1.41e-16 # solid angle
FUOri_ff_flux = blackbody_Fnu_cgs(freq, Te_FUOri, tauff_Mezger67(freq, Te_FUOri, EM_FUOri), Omega_ff_FUOri)
plt.plot( ( freq / 1e9 ), (FUOri_ff_flux * cgsflux_to_Jy * 1e3), \
color = (0.1, 0.0, 1, 0.1),
linestyle = 'dashed',
linewidth=8, label = 'Free-free emission')
T_HID_FUOri = 300.0
kappa230Sigma_FUOri = 103.0
Omega_HID_FUOri = 1.38e-14
betaHID_FUOri = 1.75 # dust opacity spectral index
tauHID_FUOri = dustkappa_cgs(freq, 230.0e9, kappa230Sigma_FUOri, betaHID_FUOri)
FUOri_HID_flux = blackbody_Fnu_cgs(freq, T_HID_FUOri, \
tauHID_FUOri, Omega_HID_FUOri)
plt.plot( ( freq / 1e9 ), (FUOri_HID_flux * cgsflux_to_Jy * 1e3), \
color = (0, 0.7, 0.7, 0.1),
linestyle = 'dashdot',
linewidth=8, label = 'Dust emission from HID')
T_disk_FUOri = 60.0
kappa230Sigma_FUOri = 2.06e-2
Omega_disk_FUOri = 3.88e-12
betadisk_FUOri = 1.75 # dust opacity spectral index
taudisk_FUOri = dustkappa_cgs(freq, 230.0e9, kappa230Sigma_FUOri, betadisk_FUOri)
FUOri_disk_flux = blackbody_Fnu_cgs(freq, T_disk_FUOri, \
taudisk_FUOri, Omega_disk_FUOri)
plt.plot( ( freq / 1e9 ), (FUOri_disk_flux * cgsflux_to_Jy * 1e3), \
color = (0.9, 0.05, 0.05, 0.1),
linestyle = 'dotted',
linewidth=8, label = 'Dust emission from extended disk')
# plot summed model
plt.plot( ( freq / 1e9 ), \
( (FUOri_disk_flux + FUOri_HID_flux + FUOri_ff_flux) * cgsflux_to_Jy * 1e3), \
color = (0.1, 0.1, 0.1, 0.5),
linewidth=2, label = 'Summed emission')
# plot observed data
fuori_jvla_freq = np.array([33.48707, 34.51107, 35.48707, 36.51107, 29.42306, 30.51106, 31.48706, 32.51107])
fuori_jvla_freq = fuori_jvla_freq * 1e9
fuori_jvla_mJy = np.array([205.0, 181.0, 199.0, 215.0, 167.0, 137.0, 165.0, 173.0]) * 1e-3
plt.plot( (fuori_jvla_freq / 1e9) , (fuori_jvla_mJy),
'o',
color = (0, 0, 0.9, 0.9))
fuori_alma_freq = np.array([345.784])
fuori_alma_freq = fuori_alma_freq * 1e9
fuori_alma_mJy = np.array([50.1])
plt.plot( (fuori_alma_freq / 1e9) , (fuori_alma_mJy),
'o',
color = (0, 0, 0.9, 0.9))
plt.title('Flux model for FU Ori')
plt.xlabel('Frequency [GHz]')
plt.ylabel('Flux [mJy]')
plt.legend(loc=2)
# FU Ori S
Te_FUOriS = 16000.0 # Kelvin
EM_FUOriS = 4.85e9
Omega_ff_FUOriS = 1.94e-16 # solid angle
# initializing plotting
fig2 = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 2)
plt.axis([0.5, 2e5, 3e-5, 1e5])
plt.xscale('log')
plt.yscale('log')
FUOriS_ff_flux = blackbody_Fnu_cgs(freq, Te_FUOriS, tauff_Mezger67(freq, Te_FUOriS, EM_FUOriS), Omega_ff_FUOriS)
plt.plot( ( freq / 1e9 ), (FUOriS_ff_flux * cgsflux_to_Jy * 1e3), \
color = (0.1, 0.0, 1, 0.1),
linestyle = 'dashed',
linewidth=8, label = 'Free-free emission')
T_HID_FUOriS = 360.0
kappa230Sigma_FUOriS = 32.0
Omega_HID_FUOriS = 5.19e-15
betaHID_FUOriS = 1.75 # dust opacity spectral index
tauHID_FUOriS = dustkappa_cgs(freq, 230.0e9, kappa230Sigma_FUOriS, betaHID_FUOriS)
FUOriS_HID_flux = blackbody_Fnu_cgs(freq, T_HID_FUOriS, \
tauHID_FUOriS, Omega_HID_FUOriS)
plt.plot( ( freq / 1e9 ), (FUOriS_HID_flux * cgsflux_to_Jy * 1e3), \
color = (0, 0.7, 0.7, 0.1),
linestyle = 'dashdot',
linewidth=8, label = 'Dust emission from HID')
T_disk_FUOriS = 60.0
kappa230Sigma_FUOriS = 3.87e-2
Omega_disk_FUOriS = 1.04e-12
betadisk_FUOriS = 1.75 # dust opacity spectral index
taudisk_FUOriS = dustkappa_cgs(freq, 230.0e9, kappa230Sigma_FUOriS, betadisk_FUOriS)
FUOriS_disk_flux = blackbody_Fnu_cgs(freq, T_disk_FUOriS, \
taudisk_FUOriS, Omega_disk_FUOriS)
plt.plot( ( freq / 1e9 ), (FUOriS_disk_flux * cgsflux_to_Jy * 1e3), \
color = (0.9, 0.05, 0.05, 0.1),
linestyle = 'dotted',
linewidth=8, label = 'Dust emission from extended disk')
# plot summed model
plt.plot( ( freq / 1e9 ), \
( (FUOriS_disk_flux + FUOriS_HID_flux + FUOriS_ff_flux) * cgsflux_to_Jy * 1e3), \
color = (0.1, 0.1, 0.1, 0.5),
linewidth=2, label = 'Summed emission')
# plot observed data
fuoriS_jvla_freq = np.array([33.48707, 34.51107, 35.48707, 36.51107, 29.42306, 30.51106, 31.48706, 32.51107])
fuoriS_jvla_freq = fuoriS_jvla_freq * 1e9
fuoriS_jvla_mJy = np.array([51.7, 104.0, 110.0, 94.0, 78.0, 81.0, 65.0, 88.0]) * 1e-3
plt.plot( (fuoriS_jvla_freq / 1e9) , (fuoriS_jvla_mJy),
'o',
color = (0, 0, 0.9, 0.9))
fuoriS_alma_freq = np.array([345.784])
fuoriS_alma_freq = fuoriS_alma_freq * 1e9
fuoriS_alma_mJy = np.array([21.2])
plt.plot( (fuoriS_alma_freq / 1e9) , (fuoriS_alma_mJy),
'o',
color = (0, 0, 0.9, 0.9))
plt.title('Flux model for FU Ori S')
plt.xlabel('Frequency [GHz]')
plt.ylabel('Flux [mJy]')
plt.legend(loc=2)
# Plot summed SED model for FU Ori and FU Ori S
fig3 = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 2)
plt.axis([0.5, 2e5, 3e-5, 1e5])
plt.xscale('log')
plt.yscale('log')
# plot measurements
sma_freq = np.array([223.7759, 260.3860, 271.2455, 271.7524, 274.3923]) * 1e9
sma_mJy = np.array([17.47, 39.4, 42.5, 42.9, 39.3])
plt.plot( (sma_freq / 1e9) , sma_mJy,
'o',
color = (0, 0, 0.9, 0.9))
# reading Herschel pacs data
pacsfile = 'fuori_pacs_v65_trim.txt'
wavelength_micron = np.loadtxt(pacsfile,
comments='#',
skiprows=0,
usecols=0)
pacs_Jy = np.loadtxt(pacsfile,
comments='#',
skiprows=0,
usecols=1)
pacsfreq = c_mks / ( wavelength_micron * 1e-6 )
plt.plot( (pacsfreq / 1e9) , pacs_Jy * 1e3,
'o',
color = (0, 0, 0.9, 0.9))
# reading Herschel spire data
spirefile = 'fuori_spire_corrected_trim.txt'
wavelength_micron = np.loadtxt(spirefile,
comments='#',
skiprows=0,
usecols=0)
spire_Jy = np.loadtxt(spirefile,
comments='#',
skiprows=0,
usecols=1)
spirefreq = c_mks / ( wavelength_micron * 1e-6 )
plt.plot( (spirefreq / 1e9) , spire_Jy * 1e3,
'o',
color = (0, 0, 0.9, 0.9))
# plot model
plt.plot( ( freq / 1e9 ), \
(FUOri_disk_flux + FUOri_HID_flux + FUOri_ff_flux) \
* cgsflux_to_Jy * 1e3, \
linestyle = 'dashed',
color = (0.1, 0.1, 0.1, 0.2), \
linewidth=4, label = 'FU Ori')
plt.plot( ( freq / 1e9 ), \
(FUOriS_disk_flux + FUOriS_HID_flux + FUOriS_ff_flux) \
* cgsflux_to_Jy * 1e3, \
linestyle = 'dotted',
color = (0.1, 0.1, 0.1, 0.2), \
linewidth=4, label = 'FU Ori S')
plt.plot( ( freq / 1e9 ), \
(FUOri_disk_flux + FUOri_HID_flux + FUOri_ff_flux + \
FUOriS_disk_flux + FUOriS_HID_flux + FUOriS_ff_flux) \
* cgsflux_to_Jy * 1e3, \
color = (0.1, 0.1, 0.1, 0.6), \
linewidth=2, label = 'Summed SED of FU Ori and FU Ori S')
plt.title('Flux model for FU Ori and FU Ori S summed')
plt.xlabel('Frequency [GHz]')
plt.ylabel('Flux [mJy]')
plt.legend(loc=2)
Explanation: Plotting the measurements for FU Ori as a practical example. Reference: Liu, H. B. et al. (2017) [arXiv:1701.06531]
End of explanation
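The SED model above also relies on blackbody_Fnu_cgs and dustkappa_cgs, defined earlier in the notebook. A rough, assumed sketch of their behaviour is given below: a blackbody attenuated by (1 - e^-tau) over a source solid angle, and a dust optical depth that scales as a power law from a reference frequency. The _sketch names, and the reading of kappa230Sigma as the optical depth at 230 GHz, are assumptions rather than the notebook's actual implementation.
import numpy as np
h_cgs, k_cgs, c_cgs = 6.626e-27, 1.381e-16, 2.998e10  # Planck constant, Boltzmann constant, speed of light (cgs)
def blackbody_Fnu_cgs_sketch(freq, T, tau, Omega):
    # F_nu = B_nu(T) * (1 - exp(-tau)) * Omega, in erg s^-1 cm^-2 Hz^-1
    Bnu = 2.0 * h_cgs * freq**3 / c_cgs**2 / (np.exp(h_cgs * freq / (k_cgs * T)) - 1.0)
    return Bnu * (1.0 - np.exp(-tau)) * Omega
def dustkappa_cgs_sketch(freq, freq_ref, tau_ref, beta):
    # power-law dust optical depth, anchored at the reference frequency
    return tau_ref * (freq / freq_ref)**beta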
# physical constants
mean_mol_weight = 2.76 # mean molecular weight
mole = 6.02214129e23
year_to_s = 365.0 * 24.0 * 60.0 * 60.0
# initializing arrays
number_density = np.arange(1, 10001, 1) * 1e3
# output_array_cgs = np.arange(1, 10001, 1) * 0.0
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
# plt.axis([4.0, 6.5, 0.01, 0.2])
# evaluate free-fall time
density = number_density * mean_mol_weight / mole
output_array_cgs = freefall_cgs(density) / ( year_to_s * 1e5)
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0.5, 0.0, 0.0, 0.2),
linewidth=8, label = 'Mean molecular weight: 2.76')
plt.title('Free-fall time')
plt.xlabel('Log$_{10}$(Molecular gas number density [cm$^{-3}$])')
plt.ylabel('Time [10$^{5}$ year]')
plt.legend(loc=1)
Explanation: Plot free-fall time as a function of cloud particle number density
End of explanation
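freefall_cgs is defined earlier in the notebook; for reference, a minimal sketch assuming the standard free-fall time of a uniform sphere (density in g cm^-3, result in seconds). The _sketch name marks it as an assumption.
import numpy as np
def freefall_cgs_sketch(density):
    # t_ff = sqrt(3 pi / (32 G rho)), with G in cgs units
    G_cgs = 6.674e-8
    return np.sqrt(3.0 * np.pi / (32.0 * G_cgs * density))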
# physical constants
mean_mol_weight = 2.76 # mean molecular weight
mole = 6.02214129e23
parsec_to_cm = 3.08567758e18
# initializing arrays
number_density = np.arange(1, 10001, 1) * 1e3
output_array_cgs = np.arange(1, 10001, 1) * 0.0
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
plt.axis([4.0, 6.5, 0.01, 0.2])
# initial conditions
particlemass = mean_mol_weight / mole
density = number_density * mean_mol_weight / mole
temperature = 10.0
output_array_cgs = Jeanslength_cgs(density, temperature, particlemass) / parsec_to_cm
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0, 1.0, 0.2),
linewidth=8, label = '10 K')
temperature = 20.0
output_array_cgs = Jeanslength_cgs(density, temperature, particlemass) / parsec_to_cm
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0.5, 0.5, 0.2),
linewidth=8, label = '20 K')
temperature = 30.0
output_array_cgs = Jeanslength_cgs(density, temperature, particlemass) / parsec_to_cm
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0.5, 0.0, 0.2),
linewidth=8, label = '30 K')
temperature = 40.0
output_array_cgs = Jeanslength_cgs(density, temperature, particlemass) / parsec_to_cm
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0.5, 0.5, 0.0, 0.2),
linewidth=8, label = '40 K')
temperature = 50.0
output_array_cgs = Jeanslength_cgs(density, temperature, particlemass) / parsec_to_cm
plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0.5, 0.0, 0.0, 0.2),
linewidth=8, label = '50 K')
plt.title('Jeans length')
plt.xlabel('Log$_{10}$(Molecular gas number density [cm$^{-3}$])')
plt.ylabel('Jeans length [pc]')
plt.legend(loc=1)
Explanation: Plot Jeans length as a function of cloud particle number density
End of explanation
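Jeanslength_cgs is likewise defined earlier; one common convention, used here only as an assumed sketch, is lambda_J = c_s * sqrt(pi / (G rho)) with c_s^2 = k T / m and m the mean particle mass in grams.
import numpy as np
def Jeanslength_cgs_sketch(density, temperature, particlemass):
    # Jeans length in cm
    G_cgs, k_B = 6.674e-8, 1.381e-16
    cs2 = k_B * temperature / particlemass
    return np.sqrt(np.pi * cs2 / (G_cgs * density))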
# physical constants
mean_mol_weight = 2.76 # mean molecular weight
mole = 6.02214129e23
parsec_to_cm = 3.08567758e18
solar_mass_cgs = 1.9891e33
# initializing arrays
number_density = np.arange(1, 10001, 1) * 1e3
output_array_cgs = np.arange(1, 10001, 1) * 0.0
# initializing plotting
fig = plt.figure(figsize=(9, 14))
plt.subplot(2, 1, 1)
plt.axis([4.0, 6.5, 0.01, 15])
# initial conditions
particlemass = mean_mol_weight / mole
density = number_density * mean_mol_weight / mole
temperature = 10.0
output_array_cgs = Jeansmass_cgs(density, temperature, particlemass) / solar_mass_cgs
line10K = plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0, 1.0, 0.2), \
linewidth=8, \
label = '10 K')
temperature = 20.0
output_array_cgs = Jeansmass_cgs(density, temperature, particlemass) / solar_mass_cgs
line20K = plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0.5, 0.5, 0.2), \
linewidth=8, \
label = '20 K')
temperature = 30.0
output_array_cgs = Jeansmass_cgs(density, temperature, particlemass) / solar_mass_cgs
line30K = plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0, 0.5, 0.0, 0.2), \
linewidth=8, \
label = '30 K')
temperature = 40.0
output_array_cgs = Jeansmass_cgs(density, temperature, particlemass) / solar_mass_cgs
line40K = plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0.5, 0.5, 0.0, 0.2), \
linewidth = 8, \
label = '40 K')
temperature = 50.0
output_array_cgs = Jeansmass_cgs(density, temperature, particlemass) / solar_mass_cgs
line50K = plt.plot(np.log10( number_density ), output_array_cgs, \
color = (0.5, 0.0, 0.0, 0.2), \
linewidth = 8, \
label = '50 K')
plt.title('Jeans mass')
plt.xlabel('Log$_{10}$(Molecular gas number density [cm$^{-3}$])')
plt.ylabel('Jeans mass [$M_{\odot}$]')
plt.legend(loc=1)
Explanation: Plot Jeans mass as a function of cloud particle number density
End of explanation |
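For completeness, one common convention for Jeansmass_cgs takes the mass inside a sphere of diameter lambda_J; again this is only an assumed sketch, and the definition used earlier in the notebook may adopt a different prefactor.
import numpy as np
def Jeansmass_cgs_sketch(density, temperature, particlemass):
    # M_J = (4 pi / 3) * rho * (lambda_J / 2)^3, in grams
    G_cgs, k_B = 6.674e-8, 1.381e-16
    lam = np.sqrt(np.pi * k_B * temperature / (particlemass * G_cgs * density))
    return (4.0 * np.pi / 3.0) * density * (lam / 2.0)**3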
4,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Efficiently searching for optimal tuning parameters
From the video series
Step1: More efficient parameter tuning using GridSearchCV
Allows you to define a grid of parameters that will be searched using K-fold cross-validation
Step2: You can set n_jobs = -1 to run computations in parallel (if supported by your computer and OS)
Step3: Searching multiple parameters simultaneously
Example
Step4: Using the best parameters to make predictions
Step5: Reducing computational expense using RandomizedSearchCV
Searching many different parameters at once may be computationally infeasible
RandomizedSearchCV searches a subset of the parameters, and you control the computational "budget"
Step6: Important
Step7: Resources
scikit-learn documentation | Python Code:
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import cross_val_score
import matplotlib.pyplot as plt
%matplotlib inline
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
# search for an optimal value of K for KNN
k_range = list(range(1, 31))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print(k_scores)
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
Explanation: Efficiently searching for optimal tuning parameters
From the video series: Introduction to machine learning with scikit-learn
Agenda
How can K-fold cross-validation be used to search for an optimal tuning parameter?
How can this process be made more efficient?
How do you search for multiple tuning parameters at once?
What do you do with those tuning parameters before making real predictions?
How can the computational expense of this process be reduced?
Review of K-fold cross-validation
Steps for cross-validation:
Dataset is split into K "folds" of equal size
Each fold acts as the testing set 1 time, and acts as the training set K-1 times
Average testing performance is used as the estimate of out-of-sample performance
Benefits of cross-validation:
More reliable estimate of out-of-sample performance than train/test split
Can be used for selecting tuning parameters, choosing between models, and selecting features
Drawbacks of cross-validation:
Can be computationally expensive
Review of parameter tuning using cross_val_score
Goal: Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset
End of explanation
from sklearn.grid_search import GridSearchCV
# define the parameter values that should be searched
k_range = list(range(1, 31))
print(k_range)
# create a parameter grid: map the parameter names to the values that should be searched
param_grid = dict(n_neighbors=k_range)
print(param_grid)
# instantiate the grid
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
Explanation: More efficient parameter tuning using GridSearchCV
Allows you to define a grid of parameters that will be searched using K-fold cross-validation
End of explanation
# fit the grid with data
grid.fit(X, y)
# view the complete results (list of named tuples)
grid.grid_scores_
# examine the first tuple
print(grid.grid_scores_[0].parameters)
print(grid.grid_scores_[0].cv_validation_scores)
print(grid.grid_scores_[0].mean_validation_score)
# create a list of the mean scores only
grid_mean_scores = [result.mean_validation_score for result in grid.grid_scores_]
print(grid_mean_scores)
# plot the results
plt.plot(k_range, grid_mean_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
# examine the best model
print(grid.best_score_)
print(grid.best_params_)
print(grid.best_estimator_)
Explanation: You can set n_jobs = -1 to run computations in parallel (if supported by your computer and OS)
End of explanation
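As a concrete illustration of the note above, the same grid search can be distributed over all available CPU cores simply by repeating the earlier call with n_jobs=-1:
# identical search, but the cross-validation fits run in parallel
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy', n_jobs=-1)
grid.fit(X, y)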
# define the parameter values that should be searched
k_range = list(range(1, 31))
weight_options = ['uniform', 'distance']
# create a parameter grid: map the parameter names to the values that should be searched
param_grid = dict(n_neighbors=k_range, weights=weight_options)
print(param_grid)
# instantiate and fit the grid
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)
# view the complete results
grid.grid_scores_
# examine the best model
print(grid.best_score_)
print(grid.best_params_)
Explanation: Searching multiple parameters simultaneously
Example: tuning max_depth and min_samples_leaf for a DecisionTreeClassifier
Could tune parameters independently: change max_depth while leaving min_samples_leaf at its default value, and vice versa
But, best performance might be achieved when neither parameter is at its default value
End of explanation
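The decision tree example mentioned above is not coded in this notebook; a hypothetical grid for it (parameter values chosen arbitrarily for illustration) might look like this:
from sklearn.tree import DecisionTreeClassifier
# search both parameters jointly rather than tuning them one at a time
tree_param_grid = dict(max_depth=[1, 2, 3, 4, 5, None], min_samples_leaf=[1, 2, 5, 10])
tree_grid = GridSearchCV(DecisionTreeClassifier(random_state=1), tree_param_grid, cv=10, scoring='accuracy')
tree_grid.fit(X, y)
print(tree_grid.best_params_)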
# train your model using all data and the best known parameters
knn = KNeighborsClassifier(n_neighbors=13, weights='uniform')
knn.fit(X, y)
# make a prediction on out-of-sample data
knn.predict([[3, 5, 4, 2]])
# shortcut: GridSearchCV automatically refits the best model using all of the data
grid.predict([[3, 5, 4, 2]])
Explanation: Using the best parameters to make predictions
End of explanation
from sklearn.grid_search import RandomizedSearchCV
# specify "parameter distributions" rather than a "parameter grid"
param_dist = dict(n_neighbors=k_range, weights=weight_options)
Explanation: Reducing computational expense using RandomizedSearchCV
Searching many different parameters at once may be computationally infeasible
RandomizedSearchCV searches a subset of the parameters, and you control the computational "budget"
End of explanation
# n_iter controls the number of searches
rand = RandomizedSearchCV(knn, param_dist, cv=10, scoring='accuracy', n_iter=10, random_state=5)
rand.fit(X, y)
rand.grid_scores_
# examine the best model
print(rand.best_score_)
print(rand.best_params_)
# run RandomizedSearchCV 20 times (with n_iter=10) and record the best score
best_scores = []
for _ in range(20):
rand = RandomizedSearchCV(knn, param_dist, cv=10, scoring='accuracy', n_iter=10)
rand.fit(X, y)
best_scores.append(round(rand.best_score_, 3))
print(best_scores)
Explanation: Important: Specify a continuous distribution (rather than a list of values) for any continuous parameters
End of explanation
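KNN has no continuous hyperparameters, which is why the example above keeps discrete lists. For a model that does have one, a hypothetical illustration with a scipy.stats distribution could look like this (SGDClassifier and the alpha range are just examples, not part of the original notebook):
from scipy.stats import uniform
from sklearn.linear_model import SGDClassifier
# 'alpha' is continuous, so we draw it from a distribution instead of listing values
sgd_param_dist = dict(alpha=uniform(loc=0.0001, scale=0.1))
sgd_rand = RandomizedSearchCV(SGDClassifier(), sgd_param_dist, cv=10, scoring='accuracy', n_iter=10, random_state=5)
sgd_rand.fit(X, y)
print(sgd_rand.best_params_)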
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Resources
scikit-learn documentation: Grid search, GridSearchCV, RandomizedSearchCV
Timed example: Comparing randomized search and grid search
scikit-learn workshop by Andreas Mueller: Video segment on randomized search (3 minutes), related notebook
Paper by Yoshua Bengio: Random Search for Hyper-Parameter Optimization
Comments or Questions?
Email: kevin@dataschool.io
Website: http://dataschool.io
Twitter: @justmarkham
End of explanation |
4,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Callables in Research
The main purpose of Research is to run pipelines with different configs in parallel, but you can also add callables and build very flexible experiment plans even without pipelines.
Step1: Simple example
To add your callable into Research use add_callable method
Step2: You also can use args and kwargs for your callables, just add them into add_callable.
Step3: Named Expressions
Obviously, such usage of args and kwargs is not very useful because the same can be achieved with functools.partial, but you can also use named expressions to pass objects that depend on the research itself into your functions. For example, you can use the ready results of the current research through the RR named expression, which corresponds to Results(path=res_name).
Step4: Save only the best model
One can use callables to save only the best (in some sense) model, for example, the model with the highest accuracy on the test.
Firstly, define pipelines as usual
Step5: Now define a callable which will receive the train pipeline with the model, the results for the current experiment, the path to the folder with experiment results, and the current iteration of the research.
Step6: To define values of parameters we will use named expressions. RR args and kwargs will be used in Results initialization.
Step7: Let's check that we have only the best models for each config.
Step8: List of the saved models
Step9: Iterations for each config with the best test accuracy | Python Code:
import sys
import os
import shutil
import warnings
warnings.filterwarnings('ignore')
from tensorflow import logging
logging.set_verbosity(logging.ERROR)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import matplotlib
%matplotlib inline
import numpy as np
sys.path.append('../../..')
from batchflow import Pipeline, B, C, V, D, L
from batchflow.opensets import MNIST
from batchflow.models.tf import VGG7, VGG16
from batchflow.research import Research, Option, Results, RP, RR, RD, REP, RID, RI
def clear_previous_results(res_name):
if os.path.exists(res_name):
shutil.rmtree(res_name)
Explanation: Callables in Research
The main purpose of Research is to run pipelines with different configs in parallel, but you can also add callables and build very flexible experiment plans even without pipelines.
End of explanation
res_name = 'sample_callable_research'
clear_previous_results(res_name)
def randn_std():
return np.random.randn()
research = Research().add_callable(randn_std, returns='random', name='randn_std')
research.run(5, name=res_name)
research.load_results().df
Explanation: Simple example
To add your callable into Research use add_callable method:
End of explanation
clear_previous_results(res_name)
def randn(mean=0, std=1):
return np.random.randn() * std + mean
research = Research().add_callable(randn, mean=2, std=5, returns='random', name='randn')
research.run(5, name=res_name)
research.load_results().df
Explanation: You also can use args and kwargs for your callables, just add them into add_callable.
End of explanation
res_name = 'max_research'
clear_previous_results(res_name)
def stat(results):
return results.random.min(), results.random.max()
research = (Research()
.add_callable(randn, mean=2, std=5, returns='random', name='randn', dump=1)
.add_callable(stat, results=RR().df, returns=['min_value', 'max_value'], name='stat')
)
research.run(5, name=res_name)
research.load_results().df
Explanation: Named Expressions
Obviously, such usage of args and kwargs is not very useful because the same can be achieved with functools.partial, but you can also use named expressions to pass objects that depend on the research itself into your functions. For example, you can use the ready results of the current research through the RR named expression, which corresponds to Results(path=res_name).
End of explanation
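For comparison, the partial-based alternative mentioned above could look roughly like this, assuming add_callable accepts any Python callable; named expressions such as RR remain necessary whenever an argument has to be resolved while the research is running.
from functools import partial
# fix mean and std up front instead of passing them through add_callable
research = Research().add_callable(partial(randn, mean=2, std=5), returns='random', name='randn')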
BATCH_SIZE = 64
mnist = MNIST()
domain = Option('layout', ['cna', 'can']) * Option('bias', [True, False])
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': 10,
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
'common/conv/use_bias': C('bias'),
}
train_ppl = (Pipeline()
.init_variable('loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('loss', mode='w'))
)
train_root = mnist.train.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=None)
test_ppl = (Pipeline()
.init_variable('predictions')
.init_variable('metrics')
.import_model('conv', C('import_from'))
.to_array()
.predict_model('conv',
images=B('images'), labels=B('labels'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
)
test_root = mnist.test.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=1) #Note n_epochs=1
Explanation: Save only the best model
One can use callables to save only the best (in some sense) model, for example, the model with the highest accuracy on the test.
Firstly, define pipelines as usual
End of explanation
import glob
import shutil
def save_model(ppl, results, path, iteration):
best_row = results.iloc[results.accuracy.idxmax()]
if best_row.iteration == iteration:
for item in glob.glob(glob.escape(path) + '/model_*'):
shutil.rmtree(item)
model_path = os.path.join(path, 'model_{}'.format(iteration))
ppl.get_model_by_name("conv").save(model_path)
return path
res_name = 'save_model_research'
clear_previous_results(res_name)
Explanation: Now define a callable which will receive the train pipeline with the model, the results for the current experiment, the path to the folder with experiment results, and the current iteration of the research.
End of explanation
EXECUTE_EACH = 10
research = (Research()
.init_domain(domain)
.add_pipeline(train_root, train_ppl, variables='loss', name='train_ppl')
.add_pipeline(test_root, test_ppl, variables='metrics', run=True, name='test_ppl',
import_from=RP('train_ppl'),
execute=[EXECUTE_EACH, 'last'], dump=[EXECUTE_EACH, 'last'])
.get_metrics(pipeline='test_ppl', metrics_var='metrics', metrics_name='accuracy',
returns='accuracy',
execute=[EXECUTE_EACH, 'last'], dump=[EXECUTE_EACH, 'last'])
.add_callable(save_model, returns='model_path', execute=[EXECUTE_EACH, 'last'],
ppl=RP('train_ppl'),
results=RR(sample_index=RID(), names='test_ppl_metrics').df,
path=L(os.path.join)(RD(), REP()),
iteration=RI())
)
research.run(300, branches=4, name=res_name, bar=True)
Explanation: To define values of parameters we will use named expressions. RR args and kwargs will be used in Results initialization.
End of explanation
results = research.load_results(concat_config=True).df
Explanation: Let's check that we have only the best models for each config.
End of explanation
glob.glob(os.path.join(res_name, 'results', '*', '*', 'model*'))
Explanation: List of the saved models:
End of explanation
results.groupby('config').apply(lambda x: x.loc[x.accuracy.idxmax()])[['config', 'accuracy', 'iteration']]
Explanation: Iterations for each config with the best test accuracy:
End of explanation |
4,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Parametric Regression
Notebook version
Step1: 1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
NOTE
Step2: 2.2. Summary
Summarizing, the steps to design a Bayesian parametric regresion algorithm are the following
Step3: Fit a Bayesian linear regression model assuming ${\bf z}={\bf x}$ and
Step4: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots, 128$. Draw all these posteriors along with the prior distribution in the same plot.
Step5: Exercise 2
Step6: 3.5 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}{ML} = \arg \max{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as
Step7: Posterior distribution of the target
Since $f^ = f({\bf x}^) = {\bf w}^\top{\bf z}$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows
Step8: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
4 Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues
Step9: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for M=6 | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
import time
Explanation: Bayesian Parametric Regression
Notebook version: 1.3 (Sep 26, 2016)
Author: Jerónimo Arenas García ([email protected])
Jesús Cid-Sueiro ([email protected])
Changes: v.1.0 - First version
v.1.1 - ML Model selection included
v.1.2 - Some typos corrected
v.1.3 - Rewriting text, reorganizing content, some exercises.
Pending changes: * Include regression on the stock data
End of explanation
n_points = 20
n_grid = 200
frec = 3
std_n = 0.2
degree = 3
nplots = 20
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_w = 0.03 ### Try increasing this value
var_w = sigma_w * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
xmin = np.min(X_tr)
xmax = np.max(X_tr)
X_grid = np.linspace(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin),n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
for k in range(nplots):
#Draw weigths fromt the prior distribution
w_iter = np.random.multivariate_normal(mean_w,var_w)
S_grid_iter = np.polyval(w_iter,X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
ax.set_xlim(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin))
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.set_xlabel('$x$')
ax.set_ylabel('$s$')
plt.show()
Explanation: 1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
NOTE: In the following, we will use capital letters, ${\bf X}$, $S$, ..., to denote random variables, and lower-case letters ${\bf x}$, s, ..., to denote the values they can take. When there is no ambiguity, we will remove subindices of the density functions, $p_{{\bf X}, S}({\bf x}, s)= p({\bf x}, s)$ to simplify the mathematical notation.
1.2. Model-based parametric regression
Model-based regression methods assume that all data in the training and test datasets have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown.
In particular, in this notebook we will assume the target variables in all pairs $({\bf x}^{(k)}, s^{(k)})$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, where ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$.
Once $p(s|{\bf x},{\bf w})$ is known or can be estimated, Estimation Theory can be applied to estimate $s$ for any input ${\bf x}$. For instance, any of these classical estimates can be used:
Maximum A Posterior (MAP): $\qquad\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$
Minimum Mean Square Error (MSE): $\qquad\hat{s}_{\text{MSE}} = \mathbb{E}{S |{\bf x}, {\bf w}}$
<img src="figs/ParametricReg.png", width=300>
1.3.1. Maximum Likelihood (ML) parameter estimation
One way to estimate ${\bf w}$ is to apply the maximum likelihood principle: take the value ${\bf w}_\text{ML}$ maximizing the joint distribution of the target variables given the inputs and given ${\bf w}$, i.e.
$$
{\bf w}_\text{ML} = \arg\max_{\bf w} p({\bf s}|{\bf X}, {\bf w})
$$
where ${\bf s} = \left(s^{(1)}, \dots, s^{(K)}\right)^\top$ is the vector of target variables and ${\bf X} = \left({\bf x}^{(1)}, \dots, {\bf x}^{(K)}\right)^\top$ is the input matrix.
NOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitely.
1.3.2. The Gaussian case
A particularly interesting case arises when the data model is Gaussian:
$$p(s|{\bf x}, {\bf w}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. Such expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model.
In that case, it can be shown that the likelihood function $p({\bf s}| {\bf w})$ ($\equiv p({\bf s}| {\bf X}, {\bf w})$) is given by
$$
p({\bf s}| {\bf w})
= \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K
\exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$
which is maximum for the Least Squares solution
$$
{\bf w}_{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
1.4. Limitations of the ML estimators.
Since the ML estimation is equivalent to the LS solution under a Gaussian data model, it has the same drawbacks as LS regression. In particular, ML estimation is prone to overfitting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross-validation procedure is required to keep the complexity of the predictor function under control, depending on the size of the training set.
2. Bayesian Regression
One of the reasons why the ML estimate is prone to overfitting is that the prediction function uses ${\bf w}_\text{ML}$ without taking into account how uncertain the true value of ${\bf w}$ is.
Bayesian methods exploit this information by considering ${\bf w}$ as a random variable with some prior distribution $p({\bf w})$. The posterior distribution $p({\bf w}|{\bf s})$ will be our measure of the uncertainty about the true value of the model parameters.
In fact, this posterior distribution is a key component of the predictor function. Indeed, the minimum MSE estimate can be computed as
$$
\hat{s}_\text{MSE}
= \mathbb{E}\{s|{\bf s}, {\bf x}\}
= \int \mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\} p({\bf w}|{\bf s}) d{\bf w}
$$
Since the samples are i.i.d. $\mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\} = \mathbb{E}\{s|{\bf w}, {\bf x}\}$ and, thus
$$
\hat{s}_\text{MSE}
= \int \mathbb{E}\{s|{\bf w}, {\bf x}\} p({\bf w}|{\bf s}) d{\bf w}
$$
Noting that $\mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\}$ is the minimum MSE prediction for a given value of ${\bf w}$, we observe that the Bayesian predictor is a weighted sum of these predictions, weighted by their posterior probability (density) of being the correct one.
2.1. Posterior weight distribution
We will express our <i>a priori</i> belief of models using a prior distribution $p({\bf w})$. Then we can infer the <i>a posteriori</i> distribution using Bayes' rule:
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Where:
- $p({\bf s}|{\bf w})$: is the likelihood function
- $p({\bf w})$: is the <i>prior</i> distribution of the weights (assumptions are needed here)
- $p({\bf s})$: is the <i>marginal</i> distribution of the observed data, which could be obtained integrating the expression in the numerator
The previous expression can be interpreted in a rather intuitive way:
Since ${\bf w}$ are the parameters of the model, $p({\bf w})$ express our belief about which models should be preferred over others before we see any data. For instance, since parameter vectors with small norms produce smoother curves, we could assign (<i>a priori</i>) a larger pdf value to models with smaller norms
The likelihood function $p({\bf s}|{\bf w})$ tells us how well the observations can be explained by a particular model
Finally, the posterior distribution $p({\bf w}|{\bf s})$ expresses the estimated goodness of each model (i.e., each parameter vector ${\bf w}$) taking into consideration both the prior and the likelihood of $\bf w$. Thus, a model with large $p({\bf w})$ would have a low posterior value if it offers a poor explanation of the data (i.e., if $p({\bf s}|{\bf w})$ is small), whereas models that fit well with the observations would get emphasized
The posterior distribution of weights opens the door to working with several models at once. Rather thank keeping the estimated best model according to a certain criterion, we can now use all models parameterized by ${\bf w}$, assigning them different degrees of confidence according to $p({\bf w}|{\bf s})$.
2.1.1. A Gaussian Prior
Since each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we also state a prior distribution over the space of regression functions.
For instance, we will consider a particular example in which we assume a Gaussian prior for the weights given by:
$${\bf w} \sim {\cal N}\left({\bf 0},{\pmb \Sigma}_{p} \right)$$
Example
Assume that the true target variable is related to the input observations through the equation
$$
s = {\bf w}^\top{\bf z} + \varepsilon
$$
where ${\bf z} = T({\bf x})$ is a polynomial transformation of the input, $\varepsilon$ is a Gaussian noise variable and ${\bf w}$ some unknown parameter vector.
Assume a Gaussian prior weight distribution, ${\bf w} \sim {\cal N}\left({\bf 0},{\pmb \Sigma}_{p} \right)$. For each parameter vector ${\bf w}$, there is a polynomial $f({\bf x}) = {\bf w}^\top {\bf z}$ associated with it. Thus, by drawing samples from $p({\bf w})$ we can generate and plot their associated polynomial functions. This is carried out in the following example.
You can check the effect of modifying the variance of the prior distribution.
End of explanation
# True data parameters
w_true = 3
std_n = 0.4
# Generate the whole dataset
n_max = 64
X_tr = 3 * np.random.random((n_max,1)) - 0.5
S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1)
Explanation: 2.2. Summary
Summarizing, the steps to design a Bayesian parametric regression algorithm are the following:
Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$.
Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$.
Applying the bayes rule, compute the posterior distribution $p({\bf w}|{\bf s})$.
Compute the MSE estimate of $s$ given ${\bf x}$.
3. Bayesian regression for a Gaussian model.
We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.
3.1. Step 1: The Gaussian model.
Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2.
$$
s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 {\bf I} \right)
$$
and that the prior is also Gaussian
$$
{\bf w} \sim {\cal N}\left({\bf 0},{\pmb \Sigma}_{p} \right)
$$
3.2. Step 2: Complete data likelihood
Using the i.i.d. assumption,
$$
{\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)
$$
3.3. Step 3: Posterior weight distribution
The posterior distribution of the weights can be computed using the Bayes rule
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,
$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\pmb\Sigma}_{\bf w}\right)$$
After some algebra, it can be shown that the mean and the covariance matrix of the distribution are:
$${\pmb\Sigma}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z} + {\pmb \Sigma}_p^{-1}\right]^{-1}$$
$${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}$$
Exercise 1:
Consider the dataset with one-dimensional inputs given by
End of explanation
# Model parameters
sigma_eps = 0.1
mean_w = np.zeros((1,))
sigma_p = 1e6 * np.eye(1)
Explanation: Fit a Bayesian linear regression model assuming ${\bf z}={\bf x}$ and
End of explanation
# No. of points to analyze
n_points = [1, 2, 4, 8, 16, 32, 64]
# Prepare plots
w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis
plt.figure()
# Plot the prior distribution
# p = <FILL IN>
plt.plot(w_grid, p.flatten(),'g-')
for k in n_points:
# Select the first k samples
Zk = X_tr[0:k, :]
Sk = S_tr[0:k]
# Compute the parameters of the posterior distribution
# Sigma_w = <FILL IN>
# w_MSE = <FILL IN>
w_MSE = np.array(w_MSE).flatten()
# Draw weights from the posterior distribution
# p = <FILL IN>
p = p.flatten()
plt.plot(w_grid, p,'g-')
plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=1, antialiased=True)
plt.xlim(w_grid[0], w_grid[-1])
plt.ylim(0, np.max(p))
plt.xlabel('$w$')
plt.ylabel('$p(w|s)$')
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(0.5)
# Remove the temporary plots and fix the last one
display.clear_output(wait=True)
plt.show()
# Print the weight estimate based on the whole dataset
print(w_MSE)
Explanation: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots, 64$. Draw all these posteriors along with the prior distribution in the same plot.
End of explanation
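One possible completion of the <FILL IN> lines above, using scipy.stats.norm and the posterior formulas from Section 3.3 (Zk and Sk are the loop variables of the cell above; this is only a sketch of a solution, not the official one):
from scipy.stats import norm
# prior over the (scalar) weight w
p = norm.pdf(w_grid, loc=0, scale=np.sqrt(sigma_p[0, 0]))
# posterior given the first k samples
Sigma_w = np.linalg.inv(Zk.T.dot(Zk) / sigma_eps**2 + np.linalg.inv(sigma_p))
w_MSE = Sigma_w.dot(Zk.T).dot(Sk) / sigma_eps**2
p = norm.pdf(w_grid, loc=np.array(w_MSE).flatten()[0], scale=np.sqrt(Sigma_w[0, 0]))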
# <SOL>
# </SOL>
Explanation: Exercise 2:
Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation sigma_n which is exactly equal to the value assumed by the model, stored in variable sigma_eps. Check what happens if we take sigma_eps=4*sigma_n or sigma_eps=sigma_n/4.
Does the algorithm fail in those cases?
What differences can you observe with respect to the ideal case sigma_eps=sigma_n?
3.4. Step 4: MSE estimate
Noting that
$$
\mathbb{E}\{s|{\bf w}, {\bf x}\} = {\bf w}^\top {\bf z}
$$
we can write
$$
\hat{s}_\text{MSE}
= \int {\bf w}^\top {\bf z} p({\bf w}|{\bf s}) d{\bf w}
= \left(\int {\bf w} p({\bf w}|{\bf s}) d{\bf w}\right)^\top {\bf z}
= {\bf w}_\text{MSE}^\top {\bf z}
$$
where
$$
{\bf w}_\text{MSE}
= \int {\bf w} p({\bf w}|{\bf s}) d{\bf w}
= {\sigma_\varepsilon^{-2}} {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}
$$
Therefore, in the Gaussian case, the weighted integration of prediction functions is equivalent to applying a single model with weights ${\bf w}_\text{MSE}$.
Exercise 3:
Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3].
End of explanation
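A minimal sketch of one possible solution to Exercise 3, reusing the posterior mean computed from the whole dataset (just one way to do it):
# posterior mean weight using all the training data (z = x, scalar case)
Sigma_w = np.linalg.inv(X_tr.T.dot(X_tr) / sigma_eps**2 + np.linalg.inv(sigma_p))
w_MSE = (Sigma_w.dot(X_tr.T).dot(S_tr) / sigma_eps**2).flatten()[0]
x_grid = np.linspace(-1, 3, 200)
plt.plot(X_tr, S_tr, 'b.', label='training data')
plt.plot(x_grid, w_MSE * x_grid, 'g-', label='minimum MSE prediction')
plt.legend(loc='best')
plt.show()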
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .3 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
for k in range(nplots):
#Draw weights from the posterior distribution
w_iter = np.random.multivariate_normal(posterior_mean,Sigma_w)
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(w_iter[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-.5,2.5)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
plt.show()
Explanation: 3.5 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) $$
For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$
* The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model).
* If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction.
Using <b>Bayesian inference</b>, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.
\begin{align}p({s^*}|{\bf x}^*,{\bf s})
& = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}\end{align}
where:
* $p({s^*}|{\bf w},{\bf x}^*) = \displaystyle\frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$
* $p({\bf w}~|~{\bf s})$: Is the posterior distribution of the weights, that can be computed using Bayes' Theorem.
The following fragment of code draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
Explanation: Posterior distribution of the target
Since $f^* = f({\bf x}^*) = {\bf w}^\top{\bf z}^*$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows:
$$\mathbb{E}\{{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\} = {{\bf z}^*}^\top \mathbb{E}\{{\bf w}|{\bf s}\} = {\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}$$
$$\text{Cov}\left[{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\right] = {{\bf z}^*}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {{\bf z}^*} = {{\bf z}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf z}^*}$$
Therefore, $f^*~|~{\bf s}, {\bf x}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf z}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf z}^*} \right)$
Finally, for $s^* = f^* + \varepsilon^*$, the posterior distribution is $s^*~|~{\bf s}, {\bf z}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf z}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf z}^*} + \sigma_\varepsilon^2\right)$
End of explanation
from math import pi
n_points = 15
frec = 3
std_n = 0.2
max_degree = 12
#Prior distribution parameters
sigma_eps = 0.2
mean_w = np.zeros((degree+1,))
sigma_p = 0.5
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Evaluate the posterior evidence
logE = []
for deg in range(max_degree):
Z_iter = Z[:,:deg+1]
logE_iter = -((deg+1)*np.log(2*pi)/2) \
-np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \
-S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2
logE.append(logE_iter[0,0])
plt.plot(np.array(range(max_degree))+1,logE)
plt.xlabel('Polynomial degree')
plt.ylabel('log evidence')
Explanation: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
4 Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues:
For a given degree, how do we choose the weights?
Should we focus on just one model, or can we use several models at once?
However, we still needed some assumptions: a parametric model (i.e., polynomial function and <i>a priori</i> degree selection) and several parameters needed to be adjusted.
Though we can resort to cross-validation, Bayesian inference opens the door to other strategies.
We could argue that rather than keeping single selections of these parameters, we could use simultaneously several sets of parameters (and/or several parametric forms), and average them in a probabilistic way ... (like we did with the models)
We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion
4.1 Model evidence
The evidence of a model is defined as
$$L = p({\bf s}~|~{\cal M})$$
where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomia, the variance of the additive noise, and the <i>a priori</i> covariance matrix of the weights
Applying the Theorem of Total probability, we can compute the evidence of the model as
$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$
For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as
$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$
It is important to notice that these probability density functions are exactly the ones we computed on the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore:
$p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$
$p({\bf w}~|~{\cal M})$ is the <i>a priori</i> distribution of the weights
4.2 Model selection via evidence maximization
As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived)
Alternatively, maximizing the evidence is normally good enough
$${\cal M}_{ML} = \arg\max_{\cal M} p(s~|~{\cal M})$$
Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model
4.3 Example: Selection of the degree of the polynomia
For the previous example we had (we consider a spherical Gaussian for the weights):
${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)$
${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},\sigma_p^2 {\bf I} \right)$
In this case, $p({\bf s}~|~{\cal M})$ follows also a Gaussian distribution, and it can be shown that
$L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$
If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence
$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$
where $M$ denotes the length of vector ${\bf z}$ (the degree of the polynomial plus 1).
The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomial.
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 5 #M-1
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
Explanation: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for M=6
End of explanation |
4,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note that you have to execute the command jupyter notebook in the parent directory of
this directory for otherwise jupyter won't be able to access the file style.css.
Step1: This example has been extracted from the official documentation of Ply.
A Tokenizer for Numbers and the Arithmetical Operators
The module ply.lex contains the code that is necessary to create a scanner.
Step2: We start with a definition of the <em style="color
Step3: There are two ways to define these tokens
Step4: If we need to transform a token, we can define the token via a function. In that case, the first line of the function
has to be a string that is a regular expression. This regular expression then defines the token. After that,
we can add code to transform the token. The string that makes up the token is stored in t.value. Below, this string
is transformed into an integer via the predefined function int.
Step5: The rule below is used to keep track of line numbers. We use the function length since there might be
more than one newline. The member variable lexer.lineno keeps track ofthe current line number. This
is needed for error messages.
Step6: The keyword t_ignore specifies those characters that should be discarded.
In this case, spaces and tabs are ignored. Note that we cannot use a raw string here.
Step7: All characters not recognized by any of the defined tokens are handled by the function t_error.
The function t.lexer.skip(1) skips the character that has not been recognized. Scanning resumes
after this character has been discarded.
Step8: Below the function lex.lex() creates the lexer specified above. Since this code is expected to be part
of some python file but really isn't since it is placed in a Jupyter notebook we have to set the variable
__file__ manually to fool the system into believing that the code given above is located in a file
called hugo.py. Of course, the name hugo is totally irrelevant and could be replaced by any other name.
Step10: Lets test the generated scanner, that is stored in lexer, with the following string
Step11: Let us feed the scanner with the string data. This is done by calling the method input of the generated scanner.
Step12: Now we put the lexer to work by using it as an iterable. This way, we can simply iterate over all the tokens that our scanner recognizes. | Python Code:
from IPython.core.display import HTML
with open ("../style.css", "r") as file:
css = file.read()
HTML(css)
Explanation: Note that you have to execute the command jupyter notebook in the parent directory of
this directory for otherwise jupyter won't be able to access the file style.css.
End of explanation
import ply.lex as lex
Explanation: This example has been extracted from the official documentation of Ply.
A Tokenizer for Numbers and the Arithmetical Operators
The module ply.lex contains the code that is necessary to create a scanner.
End of explanation
tokens = [
'NUMBER',
'PLUS',
'MINUS',
'TIMES',
'DIVIDE',
'LPAREN',
'RPAREN'
]
Explanation: We start with a definition of the <em style="color:blue">token names</em>. Note that all token names have to start with
a capital letter. We have to define these token names as a list with the name tokens:
End of explanation
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
Explanation: There are two ways to define these tokens:
- immediate token definitions define the token by assigning a regular expression to a variable of the form t_name,
where name is the name of the token that is defined.
- functional token definitions define the token via a function. The regular expression that defines the token
is the string appearing in the first line of the function body.
We see examples below. We start with the immediate token definitions. Note that we have to use raw strings here to prevent
the expansion of backslash sequences. Furthermore, operator symbols have to be escaped with a backslash character.
End of explanation
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
Explanation: If we need to transform a token, we can define the token via a function. In that case, the first line of the function
has to be a string that is a regular expression. This regular expression then defines the token. After that,
we can add code to transform the token. The string that makes up the token is stored in t.value. Below, this string
is transformed into an integer via the predefined function int.
End of explanation
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
Explanation: The rule below is used to keep track of line numbers. We use the function len since there might be
more than one newline. The member variable lexer.lineno keeps track of the current line number. This
is needed for error messages.
End of explanation
t_ignore = ' \t'
Explanation: The keyword t_ignore specifies those characters that should be discarded.
In this case, spaces and tabs are ignored. Note that we cannot use a raw string here.
End of explanation
def t_error(t):
print(f"Illegal character {t.value[0]} at line {t.lexer.lineno}.")
t.lexer.skip(1)
Explanation: All characters not recognized by any of the defined tokens are handled by the function t_error.
The function t.lexer.skip(1) skips the character that has not been recognized. Scanning resumes
after this character has been discarded.
End of explanation
__file__ = 'hugo'
lexer = lex.lex()
Explanation: Below, the function lex.lex() creates the lexer specified above. Since this code is expected to be part
of some Python file, but actually lives in a Jupyter notebook, we have to set the variable
__file__ manually to fool the system into believing that the code given above is located in a file
called hugo.py. Of course, the name hugo is totally irrelevant and could be replaced by any other name.
End of explanation
data = '''3 + 4 * 10 + 007 + (-20) * 2
3 + 4 * 10 + abc + (-20) * 2'''
Explanation: Let's test the generated scanner, which is stored in lexer, with the following string
End of explanation
lexer.input(data)
Explanation: Let us feed the scanner with the string data. This is done by calling the method input of the generated scanner.
End of explanation
for tok in lexer:
print(tok)
Explanation: Now we put the lexer to work by using it as an iterable. This way, we can simply iterate over all the tokens that our scanner recognizes.
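As a side note (my own addition, not part of the original tutorial), each item yielded by the lexer is a LexToken object, so besides printing its repr we can also read its fields directly. A minimal sketch:
lexer.input("1 + 2")
tok = lexer.token()                                   # fetch a single token instead of iterating
print(tok.type, tok.value, tok.lineno, tok.lexpos)    # token name, (transformed) value, line, offset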
End of explanation |
4,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Manipulation with Numpy and Pandas
Handling large data is easy in Python, in the simplest case with plain arrays; however, they are pretty slow. Numpy and Pandas are two great libraries for dealing with datasets. Numpy is used for homogeneous n-dimensional data (matrices). Pandas is used for heterogeneous tables (CSV, MS Excel tables). Pandas is internally based on Numpy, too. See http://scipy-lectures.github.io/ for a more detailed lesson.
Step1: Accessing elements
Step2: Operations along an axis
Step3: A quick-ish introduction to Pandas
based on http
Step4: Selection
Note: While many of the NumPy access methods work on DataFrames, use the pandas-specific data access methods, .at, .iat, .loc, .iloc and .ix.
See the Indexing section and below.
Step5: Missing Data
Step6: Statistics
Besides .describe() there are plenty of other statistical measures and aggregation methods in Pandas/Numpy
Step7: TASKS
Now there are a series of simple tasks | Python Code:
import numpy as np
# Generating a random array
X = np.random.random((3, 5)) # a 3 x 5 array
print(X)
Explanation: Data Manipulation with Numpy and Pandas
Handling large data is easy in Python, in the simplest case with plain arrays; however, they are pretty slow. Numpy and Pandas are two great libraries for dealing with datasets. Numpy is used for homogeneous n-dimensional data (matrices). Pandas is used for heterogeneous tables (CSV, MS Excel tables). Pandas is internally based on Numpy, too. See http://scipy-lectures.github.io/ for a more detailed lesson.
End of explanation
# get a single element
X[0, 0]
# get a row
X[1]
# get a column
X[:, 1]
# Transposing an array
X.T
print(X.shape)
print(X.reshape(5, 3)) #change the layout of the matrix
# indexing by an array of integers (fancy indexing)
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
Explanation: Accessing elements
End of explanation
X
X.shape
np.sum(X, axis=1) # 1...columns
np.max(X, axis=0) # 0...rows
Explanation: Operations along an axis
End of explanation
import numpy as np
import pandas as pd
#use a standard dataset of heterogenous data
cars = pd.read_csv('data/mtcars.csv')
cars.head()
#list all columns
cars.columns
#we want to use the car as the "primary key" of a row
cars.index = cars.pop('car')
cars.head()
#describe our dataset
cars.describe()
cars.sort_index(inplace=True)
cars.head()
cars.sort_values('mpg').head(15)
cars.sort_values('hp', ascending=False).head()
Explanation: A quick-ish introduction to Pandas
based on http://pandas.pydata.org/pandas-docs/stable/10min.html
End of explanation
#single column
cars['mpg']
#depending on the name also cars.mpg works
#or a slice of rows
cars[2:5]
#by label = primary key
cars.loc['Fiat 128':'Lotus Europa']
#selection by position
cars.iloc[3]
cars.iloc[3:5, 0:2]
cars[cars.cyl > 6] # more than 6 cylinders
Explanation: Selection
Note: While many of the NumPy access methods work on DataFrames, use the pandas-specific data access methods, .at, .iat, .loc, .iloc and .ix.
See the Indexing section and below.
End of explanation
cars_na = pd.read_csv('data/mtcars_with_nas.csv')
cars_na.isnull().head(4)
#fill with a default value
cars_na.fillna(0).head(4)
#or drop the rows
print(cars_na.shape)
#drop rows with na values
print(cars_na.dropna().shape)
#drop columns with na values
print(cars_na.dropna(axis=1).shape)
#see also http://pandas.pydata.org/pandas-docs/stable/missing_data.html
Explanation: Missing Data
End of explanation
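As an additional hedged sketch (my own extension of the lesson, assuming an older pandas where mean() silently skips non-numeric columns): fillna also accepts per-column values, so numeric gaps can be filled with each column's mean instead of a constant.
# fill numeric NaNs with the corresponding column means; non-numeric columns are left untouched
cars_na.fillna(cars_na.mean()).head(4)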
#stats
cars.mean()
cars.mean(axis=1)
#grouping
cars.groupby('cyl').mean()
#grouping different aggregation methods
cars.groupby('cyl').agg({ 'mpg': 'mean', 'qsec': 'min'})
Explanation: Statistics
Besides .describe() there are plenty of other statistical measures and aggregation methods in Pandas/Numpy
End of explanation
#loading gapminder data (taken from https://github.com/jennybc/gapminder)
# file located at 'data/gapminder-unfiltered.tsv' it uses tabular character as separator
# use the first column as index
gap = pd.read_csv('data/gapminder-unfiltered.tsv',index_col=0, sep='\t')
#what are the columns of this dataset?
gap.head()
#what is the maximal year contained?
gap['year'].max()
#just select all data of the year 2007
gap2007 = gap[gap.year == 2007]
#locate Austria and print it
gap2007.loc['Austria']
#list the top 10 countries by life expectancy (lifeExp)
gap2007.sort_values('lifeExp',ascending=False).head(10)
#what is the total population (pop) per continent
gap2007.groupby('continent').agg({ 'pop': 'sum'})
Explanation: TASKS
Now there are a series of simple tasks
End of explanation |
4,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 1
Step1: Concept for exercise
Step2: Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
Concept for Exercise | Python Code:
# Modin engine can be specified either by config
import modin.config as cfg
cfg.Engine.put("dask")
# or by setting the environment variable
# import os
# os.environ["MODIN_ENGINE"] = "dask"
Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 1: How to use Modin
GOAL: Learn how to import Modin to accelerate and scale pandas workflows.
Modin is a drop-in replacement for pandas that distributes the computation
across all of the cores in your machine or in a cluster.
In practical terms, this means that you can continue using the same pandas scripts
as before and expect the behavior and results to be the same. The only thing that needs
to change is the import statement. Normally, you would change:
python
import pandas as pd
to:
python
import modin.pandas as pd
Changing this line of code will allow you to use all of the cores in your machine to do computation on your data. One of the major performance bottlenecks of pandas is that it only uses a single core for any given computation. Modin exposes an API that is identical to pandas, allowing you to continue interacting with your data as you would with pandas. There are no additional commands required to use Modin locally. Partitioning, scheduling, data transfer, and other related concerns are all handled by Modin under the hood.
<p style="text-align:left;">
<h1>pandas on a multicore laptop
<span style="float:right;">
Modin on a multicore laptop
</span>
<div>
<img align="left" src="../../../img/pandas_multicore.png"><img src="../../../img/modin_multicore.png">
</div>
### Concept for exercise: setting Modin engine
Modin uses Ray as an execution engine by default so no additional action is required to start to use it. Alternatively, if you need to use another engine, it should be specified either by setting the Modin config or by setting Modin environment variable before the first operation with Modin as it is shown below. Also, note that the full list of Modin configs and corresponding environment variables can be found in the [Modin Configuration Settings](https://modin.readthedocs.io/en/stable/flow/modin/config.html#modin-configs-list) section of the Modin documentation.
End of explanation
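A quick sanity check — a sketch on my part, assuming the standard Modin config API rather than anything from the original exercise — is to read the setting back from the same config object:
import modin.config as cfg
print(cfg.Engine.get())   # should report the engine configured above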
# Note: Do not change this code!
import numpy as np
import pandas
import sys
import modin
pandas.__version__
modin.__version__
# Implement your answer here. You are also free to play with the size
# and shape of the DataFrame, but beware of exceeding your memory!
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
# ***** Do not change the code below! It verifies that
# ***** the exercise has been done correctly. *****
try:
assert df is not None
assert frame_data is not None
assert isinstance(frame_data, np.ndarray)
except:
raise AssertionError("Don't change too much of the original code!")
assert "modin.pandas" in sys.modules, "Not quite correct. Remember the single line of code change (See above)"
import modin.pandas
assert pd == modin.pandas, "Remember the single line of code change (See above)"
assert hasattr(df, "_query_compiler"), "Make sure that `df` is a modin.pandas DataFrame."
print("Success! You only need to change one line of code!")
Explanation: Concept for exercise: Dataframe constructor
Often when playing around in pandas, it is useful to create a DataFrame with the constructor. That is where we will start.
```python
import numpy as np
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
```
When creating a dataframe from a non-distributed object, it will take extra time to partition the data. When this is happening, you will see this message:
UserWarning: Distributing <class 'numpy.ndarray'> object. This may take some time.
End of explanation
# Print the first 10 lines.
df.head(10)
# Print the DataFrame.
df
# Free cell for custom interaction (Play around here!)
df.add_prefix("col")
df.count()
Explanation: Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
Concept for Exercise: Data Interaction and Printing
When interacting with data, it is very important to look at different parts of the data (e.g. df.head()). Here we will show that you can print the modin.pandas DataFrame in the same ways you would pandas.
End of explanation |
4,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Reinforcement Learning in Tensorflow Part 1
Step1: The Bandit
Here we define our bandit. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the arm that will give that positive reward.
Step2: The Agent
The code below establishes our simple neural agent. It consists of a set of values for each of the bandit arms. Each value is an estimate of the value of the return from choosing the bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
Step3: Training the Agent
We will train our agent by taking actions in our environment, and receiving rewards. Using the rewards and actions, we can know how to properly update our network in order to more often choose actions that will yield the highest rewards over time. | Python Code:
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
Explanation: Simple Reinforcement Learning in Tensorflow Part 1:
The Multi-armed bandit
This tutorial contains a simple example of how to build a policy-gradient based agent that can solve the multi-armed bandit problem. For more information, see this Medium post.
For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, DeepRL-Agents.
End of explanation
#List out our bandit arms.
#Currently arm 4 (index #3) is set to most often provide a positive reward.
bandit_arms = [0.2,0,-0.2,-2]
num_arms = len(bandit_arms)
def pullBandit(bandit):
#Get a random number.
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
Explanation: The Bandit
Here we define our bandit. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the arm that will give that positive reward.
End of explanation
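As a worked aside (my own back-of-the-envelope check, not part of the original text): since result is drawn from a standard normal, an arm with value $b$ returns +1 with probability $P(Z > b) = 1 - \Phi(b)$. For the list above that gives roughly 0.42, 0.5, 0.58 and 0.98 for the four arms, which is why arm 4 (value -2) is the one the agent should learn to prefer.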
tf.reset_default_graph()
#These two lines established the feed-forward part of the network.
weights = tf.Variable(tf.ones([num_arms]))
output = tf.nn.softmax(weights)
#The next six lines establish the training proceedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
responsible_output = tf.slice(output,action_holder,[1])
loss = -(tf.log(responsible_output)*reward_holder)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
update = optimizer.minimize(loss)
Explanation: The Agent
The code below establishes our simple neural agent. It consists of a set of values for each of the bandit arms. Each value is an estimate of the value of the return from choosing the bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
End of explanation
total_episodes = 1000 #Set total number of episodes to train agent on.
total_reward = np.zeros(num_arms) #Set scoreboard for bandit arms to 0.
init = tf.global_variables_initializer()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
#Choose action according to Boltzmann distribution.
actions = sess.run(output)
a = np.random.choice(actions,p=actions)
action = np.argmax(actions == a)
reward = pullBandit(bandit_arms[action]) #Get our reward from picking one of the bandit arms.
#Update the network.
_,resp,ww = sess.run([update,responsible_output,weights], feed_dict={reward_holder:[reward],action_holder:[action]})
#Update our running tally of scores.
total_reward[action] += reward
if i % 50 == 0:
print("Running reward for the " + str(num_arms) + " arms of the bandit: " + str(total_reward))
i+=1
print("\nThe agent thinks arm " + str(np.argmax(ww)+1) + " is the most promising....")
if np.argmax(ww) == np.argmax(-np.array(bandit_arms)):
print("...and it was right!")
else:
print("...and it was wrong!")
Explanation: Training the Agent
We will train our agent by taking actions in our environment, and receiving rewards. Using the rewards and actions, we can know how to properly update our network in order to more often choose actions that will yield the highest rewards over time.
End of explanation |
4,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Pulling
Step1: To find a particular class, open the page using Chrome,
select the particular subpart of the page and click Inspect
Name of Movie
Step2: Ratings from Rotten Tomatoes | Python Code:
import urllib.request
from bs4 import BeautifulSoup
r = urllib.request.urlopen('https://www.rottentomatoes.com/franchise/batman_movies').read()
#Using Beautiful Soup Library to parse the data
soup = BeautifulSoup(r, "lxml")
type(soup)
len(str(soup.prettify()))
soup
soup.prettify()
#We convert the data to a string format using str.
#Note in R we use str for structure, but in Python we use str to convert to character (like the as.character or paste command would do in R)
a=str(soup.prettify())
a[1000:20000]
# We try and find location of a particular tag we are interested in.
#Note we are using triple quotes to escape special characters
a.find('''class="snippet"''')
'''to find a particular class, open the page using chrome,
select the particular subpart of page and click inspect'''
Explanation: Data Pulling
End of explanation
a.find('''class="title"''')
a[33075:33200]
titles = soup.find_all("div", class_="title")
titles
titles[1]
titlesnew=soup.find_all("div",class_="media franchiseItem")
titlesnew
titlesnew[0]
len(titlesnew)
titlesnew[0].a
type(titlesnew)
dir(titlesnew)
titlesnew[0].strong
titlesnew[0].strong.a
titlesnew[0].strong.a.text
titlesnew[0].span
titlesnew[0].span.text
first_rtcore = titlesnew[0].find('span', class_ = 'meter-value')
first_rtcore.text
len(titlesnew)
years = soup.find_all("span", class_="subtle")
years
Explanation: To find a particular class, open the page using Chrome,
select the particular subpart of the page and click Inspect
Name of Movie
End of explanation
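As a hedged alternative sketch (my addition; it assumes the page uses the same class names found above), the same elements can be located with a CSS selector instead of find_all:
# CSS-selector equivalent of soup.find_all("div", class_="media franchiseItem")
titles_css = soup.select('div.media.franchiseItem')
len(titles_css)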
a[79900:80000]
a.find('''class="scoreRow"''')
a[97600:97900]
b= soup.find('span', {'class' : 'meter-value'})
print(b)
years = soup.find_all("span", class_="meter-value")
years
name=[]
rating=[]
for i in range(1,11):
name=titlesnew[i].strong.a.text
rating=titlesnew[i].find('span', class_ = 'meter-value').text
print(name,rating)
Explanation: Ratings from Rotten Tomatoes
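A small follow-up sketch (my own addition, assuming pandas is available and that every franchise item carries both a title and a meter-value span) that collects the scraped pairs into a table instead of printing them:
import pandas as pd
rows = [{'name': item.strong.a.text,
         'rating': item.find('span', class_='meter-value').text}
        for item in titlesnew]
movies = pd.DataFrame(rows)
movies.head()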
End of explanation |
4,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Illustrating common terms usage using Wikinews in english
getting data
We get the cirrussearch dump of wikinews (a dump meant for elastic-search indexation).
Step1: Preparing data
we arrange the corpus as required by gensim
Step2: Testing bigram with and without common terms
The Phrases model gives us the possibility of handling common terms, that is, words that appear many times in a text and are there only to link objects between them.
While you could remove them, you may lose information, for "the president is in america" is not the same as "the president of america".
The common_terms parameter of Phrases can help you deal with them in a smarter way, keeping them around but preventing them from crushing the frequency statistics.
Step3: bigram with common terms inside
What are (some of) the bigrams found thanks to common terms | Python Code:
LANG="english"
%%bash
fdate=20170327
fname=enwikinews-$fdate-cirrussearch-content.json.gz
if [ ! -e $fname ]
then
wget "https://dumps.wikimedia.org/other/cirrussearch/$fdate/$fname"
fi
# iterator
import gzip
import json
FDATE = 20170327
FNAME = "enwikinews-%s-cirrussearch-content.json.gz" % FDATE
def iter_texts(fpath=FNAME):
with gzip.open(fpath, "rt") as f:
for l in f:
data = json.loads(l)
if "title" in data:
yield data["title"]
yield data["text"]
# also prepare nltk
import nltk
nltk.download("punkt")
nltk.download("stopwords")
Explanation: Illustrating common terms usage using Wikinews in english
getting data
We get the cirrussearch dump of wikinews (a dump meant for elastic-search indexation).
End of explanation
# make a custom tokenizer
import re
from nltk.tokenize import sent_tokenize
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w[\w-]*|\d[\d,]*')
# prepare a text
def prepare(txt):
# lower case
txt = txt.lower()
return [tokenizer.tokenize(sent)
for sent in sent_tokenize(txt, language=LANG)]
# we put all data in ram, it's not so much
corpus = []
for txt in iter_texts():
corpus.extend(prepare(txt))
# how many sentences and words ?
words_count = sum(len(s) for s in corpus)
print("Corpus has %d words in %d sentences" % (words_count, len(corpus)))
Explanation: Preparing data
we arrange the corpus as required by gensim
End of explanation
from gensim.models.phrases import Phrases
# which are the stop words we will use
from nltk.corpus import stopwords
" ".join(stopwords.words(LANG))
# a version of the corpus without stop words
stop_words = frozenset(stopwords.words(LANG))
def stopwords_filter(txt):
return [w for w in txt if w not in stop_words]
st_corpus = [stopwords_filter(txt) for txt in corpus]
# bigram std
%time bigram = Phrases(st_corpus)
# bigram with common terms
%time bigram_ct = Phrases(corpus, common_terms=stopwords.words(LANG))
Explanation: Testing bigram with and without common terms
The Phrases model gives us the possibility of handling common terms, that is, words that appear many times in a text and are there only to link objects between them.
While you could remove them, you may lose information, for "the president is in america" is not the same as "the president of america".
The common_terms parameter of Phrases can help you deal with them in a smarter way, keeping them around but preventing them from crushing the frequency statistics.
End of explanation
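A minimal toy sketch of the idea (my own illustration, with min_count and threshold lowered so the tiny corpus can produce phrases at all):
toy = [["the", "president", "of", "america", "spoke"],
       ["the", "president", "of", "america", "arrived"]]
toy_ct = Phrases(toy, min_count=1, threshold=1, common_terms=["the", "of"])
# with the common terms kept around, a phrase like "president of america" can surface as one unit
print(list(toy_ct.export_phrases(toy)))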
# grams that have more than 2 terms, are those with common terms
ct_ngrams = set((g[1], g[0].decode("utf-8"))
for g in bigram_ct.export_phrases(corpus)
if len(g[0].split()) > 2)
ct_ngrams = sorted(list(ct_ngrams))
print(len(ct_ngrams), "grams with common terms found")
# highest scores
ct_ngrams[-20:]
# did we find any bigrams with the same words but different stopwords
import collections
by_terms = collections.defaultdict(set)
for ngram, score in bigram_ct.export_phrases(corpus):
grams = ngram.split()
by_terms[(grams[0], grams[-1])].add(ngram)
for k, v in by_terms.items():
if len(v) > 1:
print(b"-".join(k).decode("utf-8")," : ", [w.decode("utf-8") for w in v])
Explanation: bigram with common terms inside
What are (some of) the bigrams found thanks to common terms
End of explanation |
4,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Permutation T-test on sensor data
One tests if the signal significantly deviates from 0
during a fixed time window of interest. Here computation
is performed on MNE sample dataset between 40 and 60 ms.
Step1: Set parameters
Step2: View location of significantly active sensors | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne import io
from mne.stats import permutation_t_test
from mne.datasets import sample
print(__doc__)
Explanation: Permutation T-test on sensor data
One tests if the signal significantly deviates from 0
during a fixed time window of interest. Here computation
is performed on MNE sample dataset between 40 and 60 ms.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# pick MEG Gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))
data = epochs.get_data()
times = epochs.times
temporal_mask = np.logical_and(0.04 <= times, times <= 0.06)
data = np.mean(data[:, :, temporal_mask], axis=2)
n_permutations = 50000
T0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1)
significant_sensors = picks[p_values <= 0.05]
significant_sensors_names = [raw.ch_names[k] for k in significant_sensors]
print("Number of significant sensors : %d" % len(significant_sensors))
print("Sensors names : %s" % significant_sensors_names)
Explanation: Set parameters
End of explanation
evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis],
epochs.info, tmin=0.)
# Extract mask and indices of active sensors in the layout
stats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names)
mask = p_values[:, np.newaxis] <= 0.05
evoked.plot_topomap(ch_type='grad', times=[0], scalings=1,
time_format=None, cmap='Reds', vmin=0., vmax=np.max,
units='-log10(p)', cbar_fmt='-%0.1f', mask=mask,
size=3, show_names=lambda x: x[4:] + ' ' * 20,
time_unit='s')
Explanation: View location of significantly active sensors
End of explanation |
4,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Eager execution basics
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step2: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, Tensor objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example
Step3: Each Tensor has a shape and a datatype
Step4: The most obvious differences between NumPy arrays and TensorFlow Tensors are
Step5: GPU acceleration
Many TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example
Step6: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the Tensor. This name encodes a bunch of details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of TensorFlow programs, but we'll skip that for now. The string will end with GPU
Step8: Datasets
This section demonstrates the use of the tf.data.Dataset API to build pipelines to feed data to your model. It covers
Step9: Apply transformations
Use the transformations functions like map, batch, shuffle etc. to apply transformations to the records of the dataset. See the API documentation for tf.data.Dataset for details.
Step10: Iterate
When eager execution is enabled Dataset objects support iteration.
If you're familiar with the use of Datasets in TensorFlow graphs, note that there is no need for calls to Dataset.make_one_shot_iterator() or get_next() calls. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
Explanation: Eager execution basics
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This is an introductory tutorial for using TensorFlow. It will cover:
Importing required packages
Creating and using Tensors
Using GPU acceleration
Datasets
Import TensorFlow
To get started, import the tensorflow module and enable eager execution.
Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
End of explanation
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
Explanation: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, Tensor objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
End of explanation
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
Explanation: Each Tensor has a shape and a datatype
End of explanation
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
Explanation: The most obvious differences between NumPy arrays and TensorFlow Tensors are:
Tensors can be backed by accelerator memory (like GPU, TPU).
Tensors are immutable.
NumPy Compatibility
Conversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:
* TensorFlow operations automatically convert NumPy ndarrays to Tensors.
* NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors can be explicitly converted to NumPy ndarrays by invoking the .numpy() method on them.
These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
End of explanation
x = tf.random_uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
Explanation: GPU acceleration
Many TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
End of explanation
def time_matmul(x):
%timeit tf.matmul(x, x)
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
Explanation: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the Tensor. This name encodes a bunch of details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of TensorFlow programs, but we'll skip that for now. The string will end with GPU:<N> if the tensor is placed on the N-th GPU on the host.
Explicit Device Placement
The term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager. For example:
End of explanation
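A tiny sketch of the full device string itself (the exact output is machine-dependent, so treat the comment as an assumption):
x = tf.random_uniform([3, 3])
print(x.device)   # e.g. '/job:localhost/replica:0/task:0/device:GPU:0' when a GPU is used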
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
  f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
Explanation: Datasets
This section demonstrates the use of the tf.data.Dataset API to build pipelines to feed data to your model. It covers:
Creating a Dataset.
Iteration over a Dataset with eager execution enabled.
We recommend using the Datasets API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
If you're familiar with TensorFlow graphs, the API for constructing the Dataset object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.
You can use Python iteration over the tf.data.Dataset object and do not need to explicitly create an tf.data.Iterator object.
As a result, the discussion on iterators in the TensorFlow Guide is not relevant when eager execution is enabled.
Create a source Dataset
Create a source dataset using one of the factory functions like Dataset.from_tensors, Dataset.from_tensor_slices or using objects that read from files like TextLineDataset or TFRecordDataset. See the TensorFlow Guide for more information.
End of explanation
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
Explanation: Apply transformations
Use the transformations functions like map, batch, shuffle etc. to apply transformations to the records of the dataset. See the API documentation for tf.data.Dataset for details.
End of explanation
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
Explanation: Iterate
When eager execution is enabled Dataset objects support iteration.
If you're familiar with the use of Datasets in TensorFlow graphs, note that there is no need for calls to Dataset.make_one_shot_iterator() or get_next() calls.
End of explanation |
4,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1 - Basic SQL DML and DDL
Part 1 - Data manipulation (DML)
Get the survey.db SQLite3 database file from the Software Carpentry lesson and connect to it.
Step1: Basic queries
Step2: Filtering
Step3: Use UNION to create a consolidated list of salinity measurements in which Roerich’s, and only Roerich’s, have been corrected as described in the previous challenge.
Step4: NULL values
Step5: Aggregation
Step6: Aggregation
Step7: Combining tables with JOINs
Step8: Part 2 - Data definition (DDL)
We're going to change the database, so copy the database file first so we have a local backup copy.
Step9: Part 3 - Using Python for queries
ipython-sql is a wonderful jupyter plugin. It's not only how we're talking with SQLite, it also can connect with other databases - we'll use the MySQL support next time.
Among other things, though, keep in mind that this works within a Python notebook, and the plugin allows you to pull data back and then work with straight Python.
Step10: Pandas integration
Step11: Matplotlib integration | Python Code:
!wget http://files.software-carpentry.org/survey.db
%load_ext sql
%sql sqlite:///survey.db
Explanation: Exercise 1 - Basic SQL DML and DDL
Part 1 - Data manipulation (DML)
Get the survey.db SQLite3 database file from the Software Carpentry lesson and connect to it.
End of explanation
%%sql
SELECT personal, family
FROM Person
%%sql
SELECT family, personal
FROM Person
%%sql
SELECT *
FROM Person
ORDER BY family, personal
%%sql
SELECT quant
FROM survey
%%sql
SELECT DISTINCT quant
FROM Survey
%%sql
SELECT taken, person, quant
FROM Survey
ORDER BY taken ASC, person DESC;
%%sql
SELECT DISTINCT quant, person
FROM Survey
ORDER BY quant ASC;
Explanation: Basic queries: SELECT, DISTINCT, FROM, ORDER BY
End of explanation
%%sql
SELECT *
FROM Visited
WHERE site='DR-1'
%%sql
SELECT ident
FROM Visited
WHERE site='DR-1';
from IPython.display import Image
Image(url="http://swcarpentry.github.io/sql-novice-survey/fig/sql-filter.svg")
%%sql
SELECT *
FROM Survey
WHERE person='lake'
OR person='roe'
%%sql
SELECT *
FROM Survey
WHERE person IN ('lake', 'roe')
%%sql
SELECT *
FROM Survey
WHERE quant='sal'
AND (person='lake' OR person='roe')
%%sql
SELECT *
FROM Visited
WHERE site LIKE 'DR%';
%%sql
SELECT DISTINCT person, quant
FROM Survey
WHERE person='lake'
OR person='roe';
%%sql
SELECT reading, 1.05 * reading AS reading_multiplied
FROM Survey
WHERE quant='rad'
%%sql
SELECT taken, reading AS reading_fahrenheit, round(5*(reading-32)/9, 2) AS reading_celsius
FROM Survey
WHERE quant='temp'
%%sql
SELECT personal || ' ' || family AS full_name, personal, family
FROM Person
ORDER BY family, personal
%%sql
SELECT *
FROM Person
WHERE ident='dyer'
UNION
SELECT *
FROM Person
WHERE ident='roe'
Explanation: Filtering: WHERE
End of explanation
%%sql
SELECT *
FROM Survey
WHERE quant = 'sal'
Explanation: Use UNION to create a consolidated list of salinity measurements in which Roerich’s, and only Roerich’s, have been corrected as described in the previous challenge.
End of explanation
%%sql
SELECT *
FROM Visited
%%sql
SELECT *
FROM Visited
WHERE dated < '1930-01-01'
OR dated >= '1930-01-01'
%%sql
SELECT * FROM Visited WHERE dated IS NULL;
%%sql
SELECT * FROM Visited WHERE dated IS NOT NULL;
%%sql
SELECT *
FROM Survey
WHERE quant = 'sal'
AND person != 'lake';
%%sql
SELECT *
FROM Survey
WHERE quant = 'sal'
AND (person != 'lake' OR person IS NULL);
%%sql
SELECT *
FROM Visited
WHERE dated IN ('1927-02-08', NULL)
Explanation: NULL values
End of explanation
%%sql
SELECT MIN(dated)
FROM Visited
Image(url="http://swcarpentry.github.io/sql-novice-survey/fig/sql-aggregation.svg")
%%sql
SELECT AVG(reading)
FROM Survey
WHERE quant='sal'
%%sql
SELECT COUNT(reading)
FROM Survey
WHERE quant='sal'
%%sql
SELECT SUM(reading)
FROM Survey
WHERE quant='sal'
%%sql
SELECT MIN(reading), MAX(reading)
FROM Survey
WHERE quant = 'sal'
AND reading <= 1.0
%%sql
SELECT person, COUNT(*)
FROM Survey
WHERE quant = 'sal'
AND reading <= 1.0
Explanation: Aggregation: min(), max(), count(), avg()
End of explanation
%%sql
SELECT person, COUNT(reading), ROUND(AVG(reading), 2)
FROM Survey
WHERE quant = 'rad'
GROUP BY person
%%sql
SELECT person, quant, COUNT(reading), ROUND(AVG(reading), 2)
FROM Survey
WHERE person IS NOT NULL
GROUP BY person, quant
HAVING quant = 'rad'
ORDER BY person, quant
Explanation: Aggregation: GROUP BY, HAVING
End of explanation
%%sql
SELECT *
FROM Site
JOIN Visited
%%sql
SELECT *
FROM Site
JOIN Visited
ON Site.name = Visited.site
%%sql
SELECT *
FROM Site, Visited
WHERE Site.name = Visited.site
%%sql
SELECT Site.lat, Site.long, Visited.dated
FROM Site
JOIN Visited
ON Site.name = Visited.site
%%sql
SELECT Site.lat, Site.long, Visited.dated, Survey.quant, Survey.reading
FROM Site
JOIN Visited
ON Site.name = Visited.site
JOIN Survey
ON Visited.ident = Survey.taken
WHERE Visited.dated IS NOT NULL
Explanation: Combining tables with JOINs
End of explanation
!cp survey.db modified.db
%sql sqlite:///modified.db
%%sql
DROP TABLE Person;
CREATE TABLE Person(ident TEXT, personal TEXT, family TEXT);
DROP TABLE Site;
CREATE TABLE Site(name TEXT, lat REAL, long REAL);
DROP TABLE Visited;
CREATE TABLE Visited(ident INTEGER, site TEXT, dated TEXT);
DROP TABLE Survey;
CREATE TABLE Survey(taken INTEGER, person TEXT, quant REAL, reading REAL);
%%sql
DROP TABLE Survey;
CREATE TABLE Survey(
taken INTEGER NOT NULL, -- where reading taken
person TEXT, -- may not know who took it
quant REAL NOT NULL, -- the quantity measured
reading REAL NOT NULL, -- the actual reading
PRIMARY KEY (taken, quant),
FOREIGN KEY (taken) REFERENCES Visited(ident),
FOREIGN KEY (person) REFERENCES Person(ident)
);
%%sql
SELECT * FROM Site;
%%sql
INSERT INTO Site values('DR-1', -49.85, -128.57);
INSERT INTO Site values('DR-3', -47.15, -126.72);
INSERT INTO Site values('MSK-4', -48.87, -123.40);
SELECT * FROM Site;
%%sql
CREATE TABLE JustLatLong(lat text, long text);
INSERT INTO JustLatLong SELECT lat, long FROM Site;
SELECT * FROM JustLatLong;
%%sql
SELECT *
FROM Site
WHERE name = 'MSK-4'
%%sql
UPDATE Site
SET lat = -48.87, long = -125.40
WHERE name = 'MSK-4';
%%sql
SELECT *
FROM Site
WHERE name = 'MSK-4'
%%sql
SELECT *
FROM Site
%%sql
DELETE FROM Site
WHERE name = 'DR-3';
%%sql
SELECT *
FROM Site
Explanation: Part 2 - Data definition (DDL)
We're going to change the database, so copy the database file first so we have a local backup copy.
End of explanation
result = _
print(result)
result.keys
result[3]
Explanation: Part 3 - Using Python for queries
ipython-sql is a wonderful jupyter plugin. It's not only how we're talking with SQLite, it also can connect with other databases - we'll use the MySQL support next time.
Among other things, though, keep in mind that this works within a Python notebook, and the plugin allows you to pull data back and then work with straight Python.
End of explanation
df = result.DataFrame()
df
Explanation: Pandas integration
End of explanation
%sql sqlite:///survey.db
%matplotlib inline
%%sql
SELECT *
FROM Survey
WHERE quant = 'rad'
result = _
result.bar()
Explanation: Matplotlib integration
End of explanation |
4,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Here, you'll use window functions to answer questions about the Chicago Taxi Trips dataset.
Before you get started, run the code cell below to set everything up.
Step1: The following code cell fetches the taxi_trips table from the chicago_taxi_trips dataset. We also preview the first five rows of the table. You'll use the table to answer the questions below.
Step4: Exercises
1) How can you predict the demand for taxis?
Say you work for a taxi company, and you're interested in predicting the demand for taxis. Towards this goal, you'd like to create a plot that shows a rolling average of the daily number of taxi trips. Amend the (partial) query below to return a DataFrame with two columns
Step7: 2) Can you separate and order trips by community area?
The query below returns a DataFrame with three columns from the table
Step10: 3) How much time elapses between trips?
The (partial) query in the code cell below shows, for each trip in the selected time frame, the corresponding taxi_id, trip_start_timestamp, and trip_end_timestamp.
Your task in this exercise is to edit the query to include an additional prev_break column that shows the length of the break (in minutes) that the driver had before each trip started (this corresponds to the time between trip_start_timestamp of the current trip and trip_end_timestamp of the previous trip). Partition the calculation by taxi_id, and order the results within each partition by trip_start_timestamp.
Some sample results are shown below, where all rows correspond to the same driver (or taxi_id). Take the time now to make sure that the values in the prev_break column make sense to you!
Note that the first trip of the day for each driver should have a value of NaN (not a number) in the prev_break column. | Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex2 import *
print("Setup Complete")
Explanation: Introduction
Here, you'll use window functions to answer questions about the Chicago Taxi Trips dataset.
Before you get started, run the code cell below to set everything up.
End of explanation
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "chicago_taxi_trips" dataset
dataset_ref = client.dataset("chicago_taxi_trips", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "taxi_trips" table
table_ref = dataset_ref.table("taxi_trips")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: The following code cell fetches the taxi_trips table from the chicago_taxi_trips dataset. We also preview the first five rows of the table. You'll use the table to answer the questions below.
End of explanation
# Fill in the blank below
avg_num_trips_query = """
WITH trips_by_day AS
(
SELECT DATE(trip_start_timestamp) AS trip_date,
COUNT(*) as num_trips
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE trip_start_timestamp >= '2016-01-01' AND trip_start_timestamp < '2018-01-01'
GROUP BY trip_date
ORDER BY trip_date
)
SELECT trip_date,
____
OVER (
____
____
) AS avg_num_trips
FROM trips_by_day
"""
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
avg_num_trips_query = """
WITH trips_by_day AS
(
SELECT DATE(trip_start_timestamp) AS trip_date,
COUNT(*) as num_trips
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE trip_start_timestamp >= '2016-01-01' AND trip_start_timestamp < '2018-01-01'
GROUP BY trip_date
ORDER BY trip_date
)
SELECT trip_date,
AVG(num_trips)
OVER (
ORDER BY trip_date
ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING
) AS avg_num_trips
FROM trips_by_day
"""
q_1.check()
Explanation: Exercises
1) How can you predict the demand for taxis?
Say you work for a taxi company, and you're interested in predicting the demand for taxis. Towards this goal, you'd like to create a plot that shows a rolling average of the daily number of taxi trips. Amend the (partial) query below to return a DataFrame with two columns:
- trip_date - contains one entry for each date from January 1, 2016, to December 31, 2017.
- avg_num_trips - shows the average number of daily trips, calculated over a window including the value for the current date, along with the values for the preceding 15 days and the following 15 days, as long as the days fit within the two-year time frame. For instance, when calculating the value in this column for January 5, 2016, the window will include the number of trips for the preceding 4 days, the current date, and the following 15 days.
This query is partially completed for you, and you need only write the part that calculates the avg_num_trips column. Note that this query uses a common table expression (CTE); if you need to review how to use CTEs, you're encouraged to check out this tutorial in the Intro to SQL micro-course.
End of explanation
# Amend the query below
trip_number_query = """
SELECT pickup_community_area,
trip_start_timestamp,
trip_end_timestamp
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
trip_number_query = """
SELECT pickup_community_area,
trip_start_timestamp,
trip_end_timestamp,
RANK()
OVER (
PARTITION BY pickup_community_area
ORDER BY trip_start_timestamp
) AS trip_number
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
q_2.check()
Explanation: 2) Can you separate and order trips by community area?
The query below returns a DataFrame with three columns from the table: pickup_community_area, trip_start_timestamp, and trip_end_timestamp.
Amend the query to return an additional column called trip_number which shows the order in which the trips were taken from their respective community areas. So, the first trip of the day originating from community area 1 should receive a value of 1; the second trip of the day from the same area should receive a value of 2. Likewise, the first trip of the day from community area 2 should receive a value of 1, and so on.
Note that there are many numbering functions that can be used to solve this problem (depending on how you want to deal with trips that started at the same time from the same community area); to answer this question, please use the RANK() function.
End of explanation
# Fill in the blanks below
break_time_query = """
SELECT taxi_id,
trip_start_timestamp,
trip_end_timestamp,
TIMESTAMP_DIFF(
trip_start_timestamp,
____
OVER (
PARTITION BY ____
ORDER BY ____),
MINUTE) as prev_break
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
break_time_query = """
SELECT taxi_id,
trip_start_timestamp,
trip_end_timestamp,
TIMESTAMP_DIFF(
trip_start_timestamp,
LAG(trip_end_timestamp, 1) OVER (PARTITION BY taxi_id ORDER BY trip_start_timestamp),
MINUTE) as prev_break
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
q_3.check()
Explanation: 3) How much time elapses between trips?
The (partial) query in the code cell below shows, for each trip in the selected time frame, the corresponding taxi_id, trip_start_timestamp, and trip_end_timestamp.
Your task in this exercise is to edit the query to include an additional prev_break column that shows the length of the break (in minutes) that the driver had before each trip started (this corresponds to the time between trip_start_timestamp of the current trip and trip_end_timestamp of the previous trip). Partition the calculation by taxi_id, and order the results within each partition by trip_start_timestamp.
Some sample results are shown below, where all rows correspond to the same driver (or taxi_id). Take the time now to make sure that the values in the prev_break column make sense to you!
Note that the first trip of the day for each driver should have a value of NaN (not a number) in the prev_break column.
End of explanation |
4,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Momentum
A stock that's going up tends to keep going up...until it doesn't. Momentum is the theory that stocks that have recently gone up will keep going up disproportionately to their underlying value because folks are overenthusiastic about them.
On the first trading day of each week
1. If the SPY is higher than it was 'lookback' months ago, buy.
2. If the SPY is lower than it was 'lookback' months ago, sell your long position.
The 'lookback' time period can be random, meaning a random lookback period is used for each new position.
Optimize
Step1: Some global data
Step2: Define Optimizations
Step3: Run Strategy
Step4: Summarize results
Step5: Bar graphs
Step6: Run Benchmark
Step7: Equity curve | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import pinkfish as pf
import strategy
# format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: Momentum
A stock that's going up tends to keep going up...until it doesn't. Momentum is the theory that stocks that have recently gone up will keep going up disproportionately to their underlying value because folks are overenthusiastic about them.
On the first trading day of each week
1. If the SPY is higher than it was 'lookback' months ago, buy.
2. If the SPY is lower than it was 'lookback' months ago, sell your long position.
The 'lookback' time period can be random, meaning a random lookback period is used for each new position.
Optimize: lookback period in number of months.
End of explanation
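A minimal, self-contained sketch of the rule being optimized (my own illustration on synthetic data — not the code inside the imported strategy module): on a monthly close series, the long/flat signal is just a comparison against the close 'lookback' months earlier.
import numpy as np
idx = pd.date_range('2015-01-31', periods=24, freq='M')
monthly_close = pd.Series(np.linspace(100, 150, 24), index=idx)   # synthetic monthly closes
lookback = 12
long_signal = monthly_close > monthly_close.shift(lookback)       # True -> hold SPY, False -> go flat
long_signal.tail()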
#symbol = '^GSPC'
symbol = 'SPY'
#symbol = 'DIA'
#symbol = 'QQQ'
#symbol = 'IWM'
#symbol = 'TLT'
#symbol = 'GLD'
#symbol = 'AAPL'
#symbol = 'BBRY'
#symbol = 'GDX'
capital = 10000
start = datetime.datetime(1900, 1, 1)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now()
Explanation: Some global data
End of explanation
# Pick one
optimize_lookback = True
optimize_margin = False
# Define lookback ranges
if optimize_lookback:
Xs = range(3, 18+1)
Xs = [str(X) for X in Xs]
# Define margin ranges
elif optimize_margin:
Xs = range(10, 41, 2)
Xs = [str(X) for X in Xs]
options = {
'use_adj' : False,
'use_cache' : True,
'lookback': None,
'margin': 1
}
Explanation: Define Optimizations
End of explanation
strategies = pd.Series(dtype=object)
for X in Xs:
print(X, end=" ")
if optimize_lookback:
options['lookback'] = int(X)
elif optimize_margin:
options['margin'] = int(X)/10
strategies[X] = strategy.Strategy(symbol, capital, start, end, options)
strategies[X].run()
Explanation: Run Strategy
End of explanation
metrics = ('annual_return_rate',
'max_closed_out_drawdown',
'annualized_return_over_max_drawdown',
'drawdown_recovery_period',
'best_month',
'worst_month',
'sharpe_ratio',
'sortino_ratio',
'monthly_std',
'pct_time_in_market',
'total_num_trades',
'pct_profitable_trades',
'avg_points')
df = pf.optimizer_summary(strategies, metrics)
df
Explanation: Summarize results
End of explanation
pf.optimizer_plot_bar_graph(df, 'annual_return_rate')
pf.optimizer_plot_bar_graph(df, 'sharpe_ratio')
pf.optimizer_plot_bar_graph(df, 'max_closed_out_drawdown')
Explanation: Bar graphs
End of explanation
s = strategies[Xs[0]]
benchmark = pf.Benchmark(symbol, capital, s.start, s.end, use_adj=True)
benchmark.run()
Explanation: Run Benchmark
End of explanation
if optimize_lookback: Y = '12'
elif optimize_margin: Y = '20'
pf.plot_equity_curve(strategies[Y].dbal, benchmark=benchmark.dbal)
Explanation: Equity curve
End of explanation |
4,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stochastic simulation algorithm (SSA)
Jens Hahn - 06/06/2016
Last time we talked about ODE modelling, which is a deterministic and continuous way of modelling. This time, we'll talk about a completely different approach, a stochastic and discrete simulation. Firstly, this means we get a slightly different result every time we run the simulation. Secondly, this means we talk about single molecules and not a vague float value that needs to be interpreted as a molecule count.
Gillespie's algorithm
Idea behind
The idea of Gillespie's algorithm is to take into account every reaction event, based on molecule collisions and reaction radii around the molecules. It then asks two questions
Step1: Deterministic, continuous solution
Step2: Stochastic, discrete solution | Python Code:
import math
import numpy as np
import matplotlib.pyplot as pyp
%matplotlib inline
# S -> P*S - B*S*Z - d*S
S = 500
# Z -> B*S*Z + G*R - A*S*Z
Z = 0
# R -> d*S - G*R
R = 0
P = 0.0001 # birth rate
d = 0.01 # 'natural' death percent (per day)
B = 0.0095 # transmission percent (per day)
G = 0.001 # resurrect percent (per day)
A = 0.005 # destroy percent (per day)
t = 0
tend = 20
timecourse = [0]
S_tc = [S]
Z_tc = [Z]
R_tc = [R]
while t < tend:
# calculate h_i
h_birth = (S*(S-1)/2)
h_death = S
h_transmission = S*Z
h_resurrect = R
h_destroy = S*Z
R_sum = sum([h_birth*P, h_death*d, h_transmission*B, h_resurrect*G, h_destroy*A])
#print(R_sum)
a_birth = h_birth*P/R_sum
#print('a_birth: ',a_birth)
a_death = h_death*d/R_sum
#print('a_death: ', a_death)
a_transmission = h_transmission*B/R_sum
#print('a_transmission: ', a_transmission)
a_resurrect = h_resurrect*G/R_sum
#print('a_resurrect: ', a_resurrect)
a_destroy = h_destroy*A/R_sum
#print('a_destroy: ', a_destroy)
a = [a_birth, a_death, a_transmission, a_resurrect, a_destroy]
a_sum = sum(a)
r1 = np.random.uniform()
t += - (1./R_sum)*math.log(r1)
timecourse.append(t)
r2 = np.random.uniform()
if r2 > 0 and r2 < sum(a[:1]): # birth
S += 1
#print('birth')
elif r2 > sum(a[:1]) and r2 < sum(a[:2]): # death
S -= 1
R += 1
#print('death')
elif r2 > sum(a[:2]) and r2 < sum(a[:3]): # transmission
S -= 1
Z += 1
#print('transmission')
elif r2 > sum(a[:3]) and r2 < sum(a[:4]): # resurrect
R -= 1
Z += 1
#print('resurrect')
else:
Z -= 1
R += 1
#print('destroy')
S_tc.append(S)
Z_tc.append(Z)
R_tc.append(R)
pyp.plot(timecourse, S_tc)
pyp.plot(timecourse, Z_tc)
pyp.plot(timecourse, R_tc)
print('Susceptible people: ', S)
print('Zombies: ', Z)
print('Dead people: ', R)
Explanation: Stochastic simulation algorithm (SSA)
Jens Hahn - 06/06/2016
Last time we talked about ODE modelling, which is a deterministic and continuous way of modelling. This time, we'll talk about a completely different approach: a stochastic and discrete simulation. Firstly, this means we get a slightly different result every time we run the simulation. Secondly, this means we talk about single molecules and not a vague float value that needs to be interpreted as a molecule count.
Gillespie's algorithm
Idea behind
The idea of Gillespie's algorithm is to take into account every reaction event, based on molecule collisions and reaction radii around the molecules. It then asks two questions:
Considering all probabilities, when does the next reaction event take place?
Which reaction actually takes place?
Let's get started
We start with a specific parameter, called $c$, the average probability that a reactant molecule will react per unit time. The next parameter we need is called $h$, the number of distinct molecular reactant combinations for a given reaction at time t. Finally, we need the distinct time interval we're focusing on, which we call $\delta t$.
To answer questions 1 and 2 we need two random numbers drawn from a uniform distribution. The first one, $r_1$, is used to draw the waiting time from the exponential distribution:
$$ \text{time} = - \frac{1}{a}\,\ln(r_1) $$
The second one, $r_2$, is used to pick the reaction. We normalise the probabilities so that they sum to 1. Then we use the random number to select the reaction that will take place next.
End of explanation
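Before specialising to the zombie example below, the core of Gillespie's direct method can be written as a single reusable step: draw the waiting time from the exponential distribution and pick the reaction whose cumulative normalised propensity first exceeds $r_2$. This helper is an illustrative sketch, not part of the original notebook.
import math
import numpy as np

def gillespie_step(propensities, rng=np.random):
    # propensities: list of a_i = h_i * c_i for every reaction
    a0 = sum(propensities)
    r1, r2 = rng.uniform(), rng.uniform()
    tau = -math.log(r1) / a0                   # exponential waiting time
    cumulative = np.cumsum(propensities) / a0  # normalised cumulative propensities
    j = int(np.searchsorted(cumulative, r2))   # index of the reaction that fires
    return tau, j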
import scipy.integrate
import numpy as np
import math
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
k1 = 0.1
k2 = 0.02
k3 = 0.4
k4 = 0.02
def dSdt( S, t):
X, Y = S
dxdt = k1*X - k2*X*Y
dydt = k4*X*Y - k3*Y
return [dxdt, dydt]
time = np.arange(0,300,1)
S0 = np.array([10., 10.] )
result = scipy.integrate.odeint( dSdt, S0, time )
plt.plot( time, result )
Explanation: Deterministic, continuous solution
End of explanation
na = 6.022e23
# species
X = 100
Y = 100
# parameters
k1 = 0.1 # birth X
k2 = 0.02 # eaten X
k3 = 0.4 # death Y
k4 = 0.02 # reproduce Y
# time
t = 0
t_end = 50
# timecourses
timecourse = [0]
X_tc = [X]
Y_tc = [Y]
# loop
while t < t_end:
h_k1 = X*(X-1)/2
h_k2 = X*Y
h_k3 = Y
h_k4 = X*Y
R_sum = sum([h_k1*k1, h_k2*k2, h_k3*k3, h_k4*k4])
a_k1 = h_k1*k1/R_sum
a_k2 = h_k2*k2/R_sum
a_k3 = h_k3*k3/R_sum
a_k4 = h_k4*k4/R_sum
a = [a_k1, a_k2, a_k3, a_k4]
a_sum = sum(a)
r1 = np.random.uniform()
t += - (1./R_sum)*math.log(r1)
timecourse.append(t)
r2 = np.random.uniform()
if r2 < sum(a[:1]): # k1
X += 1
#print('k1')
elif r2 > sum(a[:1]) and r2 < sum(a[:2]): # k2
X -= 1
#print('k2')
elif r2 > sum(a[:2]) and r2 < sum(a[:3]): # k3
Y -= 1
#print('k3')
else: # k4
Y += 1
#print('k4')
X_tc.append(X)
Y_tc.append(Y)
plt.plot( timecourse, X_tc )
plt.plot( timecourse, Y_tc )
print(Y_tc[-10:])
print(X_tc[-10:])
import math
r1 = [(i+1)/10 for i in range(10)]
R_sum = 30
t = [(-(1./R_sum)*math.log(r)) for r in r1]
import matplotlib.pyplot as pyp
%matplotlib inline
pyp.plot(r1, t)
pyp.show()
Explanation: Stochastic, discrete solution
End of explanation |
4,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download
Step1: Missão
Step2: Teste da Solução | Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download: http://github.com/dsacademybr
End of explanation
import math
class PrimeGenerator(object):
    def generate_primes(self, max_num):
        # Implemente aqui sua solução (implement your solution here)
        pass

    def _cross_off(self, array, prime):
        # Implemente aqui sua solução (implement your solution here)
        pass

    def _next_prime(self, array, prime):
        # Implemente aqui sua solução (implement your solution here)
        pass
Explanation: Missão: Gerar uma lista de números primos.
Nível de Dificuldade: Médio
Premissas
É correto que 1 não seja considerado um número primo?
* Sim
Podemos assumir que as entradas são válidas?
* Não
Podemos supor que isso se encaixa na memória?
* Sim
Teste Cases
None -> Exception
Not an int -> Exception
20 -> [False, False, True, True, False, True, False, True, False, False, False, True, False, True, False, False, False, True, False, True]
Algoritmo
Para um número ser primo, ele deve ser 2 ou maior e não pode ser divisível por outro número diferente de si mesmo (e 1).
Todos os números não-primos são divisíveis por um número primo.
Use uma matriz (array) para manter o controle de cada número inteiro até o máximo
Comece em 2, termine em sqrt (max)
* Podemos usar o sqrt (max) em vez do max porque:
* Para cada valor que divide o número de entrada uniformemente, há um complemento b onde a * b = n
* Se a> sqrt (n) então b <sqrt (n) porque sqrt (n ^ 2) = n
* "Cross off" todos os números divisíveis por 2, 3, 5, 7, ... configurando array [index] para False
Animação do Wikipedia:
Solução
End of explanation
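As an illustration of the sieve described in the "Algoritmo" section, here is one possible solution sketch. It is kept under a different class name so it does not replace the PrimeGenerator skeleton above, which is left for the reader to fill in; it is not the official answer key.
import math

class PrimeGeneratorSketch(object):

    def generate_primes(self, max_num):
        if max_num is None or not isinstance(max_num, int):
            raise TypeError('max_num must be an int')
        # boolean array: array[i] stays True while i is still considered prime (assumes max_num >= 2)
        array = [True] * max_num
        array[0] = array[1] = False
        prime = 2
        while prime <= math.sqrt(max_num):
            self._cross_off(array, prime)
            prime = self._next_prime(array, prime)
        return array

    def _cross_off(self, array, prime):
        # start at prime*prime: smaller multiples were already crossed off by smaller primes
        for index in range(prime * prime, len(array), prime):
            array[index] = False

    def _next_prime(self, array, prime):
        next_candidate = prime + 1
        while next_candidate < len(array) and not array[next_candidate]:
            next_candidate += 1
        return next_candidate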
%%writefile missao2.py
from nose.tools import assert_equal, assert_raises
class TestMath(object):
def test_generate_primes(self):
prime_generator = PrimeGenerator()
assert_raises(TypeError, prime_generator.generate_primes, None)
assert_raises(TypeError, prime_generator.generate_primes, 98.6)
assert_equal(prime_generator.generate_primes(20), [False, False, True,
True, False, True,
False, True, False,
False, False, True,
False, True, False,
False, False, True,
False, True])
print('Sua solução foi executada com sucesso! Parabéns!')
def main():
test = TestMath()
test.test_generate_primes()
if __name__ == '__main__':
main()
%run -i missao2.py
Explanation: Teste da Solução
End of explanation |
4,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Blobs
Step1: Digits examples
Classification using (linear) PCA and (nonlinear) Isometric Maps
Step2: Unsupervised learning
Notice that the training labels are unused. The digits have been separated, but they have no meaning.
Step3: Supervised learning
Here, we will use the digit labels to see how well we can reproduce the labels. | Python Code:
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42, centers=3)
X[:,1] += 0.25*X[:,0]**2
# print(X.shape)
# print(y)
# plt.scatter(X[:, 0], X[:, 1], 20, y, edgecolor='none')
plt.plot(X[:, 0], X[:, 1], 'ok')
from sklearn.cluster import KMeans, AffinityPropagation, SpectralClustering
# cluster = AffinityPropagation()
# cluster = KMeans(n_clusters=3)
cluster = SpectralClustering(n_clusters=3)
# kmeans.fit(X)
# kmeans.labels_
# labels = cluster.predict(X)
labels = cluster.fit_predict(X)
print('Labels: \n', labels)
print('Data: \n', y)
# print(cluster.cluster_centers_)
plt.scatter(X[:, 0], X[:, 1], 20, labels, edgecolor='none')
# for n in range(3):
# plt.plot(cluster.cluster_centers_[n, 0], cluster.cluster_centers_[n, 1], 'ok', markersize=20)
Explanation: Classifying Blobs
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
print(len(digits.images))
fig = plt.figure(figsize=(6, 6))
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.matshow(digits.images[i], cmap=plt.cm.binary)
ax.text(0, 7, str(digits.target[i]))
digits.data.shape
from sklearn.decomposition import RandomizedPCA, PCA
pca = PCA(n_components=2)
proj = pca.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], 30, digits.target, edgecolor='none')
plt.colorbar()
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=5, n_components=2)
proj = iso.fit_transform(digits.data)
plt.scatter(proj[:, 1], proj[:, 0], 30, digits.target, edgecolor='none')
Explanation: Digits examples
Classification using (linear) PCA and (nonlinear) Isometric Maps
End of explanation
kmeans = KMeans(n_clusters=10, random_state=42)
labels = kmeans.fit(digits.data)
# kmeans.cluster_centers_.shape
fig, axs = plt.subplots(2, 5, figsize=(8, 3))
axs = axs.flatten()
for n in range(10):
axs[n].imshow(kmeans.cluster_centers_[n].reshape(8, 8), cmap=plt.cm.gray_r)
Explanation: Unsupervised learning
Notice that the training labels are unused. The digits have been separated, but they have no meaning.
End of explanation
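One way to make the point about the labels having no meaning more concrete (this check is an addition, not part of the original notebook) is to map each k-means cluster to the most frequent true digit inside it and see how often that majority label matches.
import numpy as np

cluster_ids = kmeans.labels_
mapped = np.zeros_like(cluster_ids)
for cluster in range(10):
    mask = cluster_ids == cluster
    if mask.any():
        mapped[mask] = np.bincount(digits.target[mask]).argmax()  # majority true digit in this cluster
print('fraction matching the cluster majority digit:', np.mean(mapped == digits.target))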
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
clf = GaussianNB()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
expected = y_test
from sklearn import metrics
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
Explanation: Supervised learning
Here, we will use the digit labels to see how well we can reproduce the labels.
End of explanation |
4,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Replication Archive for "Measuring causes of death in populations
Step1: Table 1
Confusion matrices for physician-certified verbal autopsy and random-allocation verbal autopsy. Panel A shows the confusion matrix for physician certified verbal autopsy (with a length-three cause list for clarity). The entry each cell counts the number of deaths truly due to the row cause that were predicted to be due to the column cause. For example, the value 83 in the “other” row, “stroke” column indicates that 83 deaths truly due to causes other than stroke or diabetes were (incorrectly) attributed to stroke by physicians. This table demonstrates that (for this dataset) physicians are right more often than they are wrong when they predict stroke as the cause of death, but wrong more than they are right when they predict diabetes. Panel B shows the confusion matrix for Random Allocation with the same dataset, where random chance predicts stroke and diabetes incorrectly for a vast majority of the cases. True and PCVA data from Lozano et al., 2011, where physicians were presented with VAI data where the underlying cause was known to meet stringent clinical diagnostic criteria, and their results compared to the truth.18
Step2: Panel A
Step3: Panel B
Step4: Table 2
Confusion matrix for Random-From-Train verbal autopsy. The confusion matrix for Random-From-Train (with a length-three cause-list for clarity). As in Table 1, the entry each cell counts the number of deaths truly due to the row cause that were predicted to be due to the column cause. This table demonstrates that while Random-From-Train is inaccurate at the individual level, at the population level, the prediction distribution can closely match the truth.
Step5: This table demonstrates that while Random-From-Train is inaccurate at the individual level, at the population level, the prediction distribution can closely match the truth.
Step7: To do this, we performed a Monte Carlo calculation of the CSMF accuracy of Random Allocation, by simulating a dataset with known CSMF distribution, assigning “predicted” causes of death uniformly at random, and measuring the CSMF accuracy of the predictions.
The distribution of the simulated dataset is an important and subtle detail of this calculation. We sampled the true CSMF distribution from an uninformative Dirichlet distribution (which gives equal probability to all possible CSMF distributions). We generated XXX replicates of the Monte Carlo simulation, and calculated the mean and standard deviation of the CSMF accuracy.
Step9: We also used this simulation framework to perform a Monte Carlo calculation of the concordance for random allocation, which provides a cross-check for the analytical derivation of CCC derived in [ref]. We repeated the simulations for cause lists ranging from 3 to 50 causes.
Step11: This simulation setting also provided us an opportunity to demonstrate the importance of randomly resampling the cause-fraction of the test set from an uninformative Dirichlet distribution (a technical point that perhaps has not been sufficiently appreciated since its introduction in [ref]). To do so, we compared the CCCSMF accuracy of Random Allocation with that of Random-From-Train, where training data was either uniformly distributed among causes or distributed according same distribution as the test data.
Step12: Table 3
CCCSMF accuracy of Random Allocation and Random-From-Train with and without resampling the test CSMF distribution. This table demonstrates the importance of resampling the CSMF distribution in the test set; if the test and train sets have the same CSMF distribution, then simple approaches like Random-From-Train, as well as state-of-the-art approaches like King-Lu,23 can appear to have better performance than is justified, due to “overfitting”.
Step13: Figure 1
Figure 1. CSMF Accuracy of random allocation as a function of CoD list length. The mean CSMF accuracy of random allocation was calculated with 10,000 Monte Carlo replicates for cause-list length ranging from 3 to 50. The CSMF accuracy decreases monotonically as a function of J and appears to stay above 1-1/e≈0.632, which we selected for our chance-correction parameter.
Step14: Figure 2
Figure 2. Comparison of concordance from Monte Carlo calculation and analytic calculation. The analogous chance-correction value for concordance was calculated analytically in Murray et al.13, and we confirmed its accuracy in our simulation environment. The absolute relative difference was always less than 1%.
Step15: Figure 3
Figure 3. Comparison of individual-level and population-level prediction quality for three commonly used methods | Python Code:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns
%matplotlib inline
sns.set_style('whitegrid')
sns.set_context('poster')
Explanation: Replication Archive for "Measuring causes of death in populations: a new metric that corrects cause-specific mortality fractions for chance"
End of explanation
df = pd.read_csv('../3-data/pcva_results.csv')
df
df['gs3'] = 'Other'
df.loc[df.gs_text34=='Stroke', 'gs3'] = 'Stroke'
df.loc[df.gs_text34=='Diabetes', 'gs3'] = 'Diabetes'
df.gs3.value_counts()
df['pc3'] = 'Other'
df.loc[df.gs_assigned_MRR_1=='I64', 'pc3'] = 'Stroke'
df.loc[df.gs_assigned_MRR_1=='E10', 'pc3'] = 'Diabetes'
df.pc3.value_counts()
Explanation: Table 1
Confusion matrices for physician-certified verbal autopsy and random-allocation verbal autopsy. Panel A shows the confusion matrix for physician certified verbal autopsy (with a length-three cause list for clarity). The entry each cell counts the number of deaths truly due to the row cause that were predicted to be due to the column cause. For example, the value 83 in the “other” row, “stroke” column indicates that 83 deaths truly due to causes other than stroke or diabetes were (incorrectly) attributed to stroke by physicians. This table demonstrates that (for this dataset) physicians are right more often than they are wrong when they predict stroke as the cause of death, but wrong more than they are right when they predict diabetes. Panel B shows the confusion matrix for Random Allocation with the same dataset, where random chance predicts stroke and diabetes incorrectly for a vast majority of the cases. True and PCVA data from Lozano et al., 2011, where physicians were presented with VAI data where the underlying cause was known to meet stringent clinical diagnostic criteria, and their results compared to the truth.18
End of explanation
cnts = df.groupby(['gs3', 'pc3']).id.count()
cnts = cnts.unstack()
cnts = pd.DataFrame(cnts, columns=['Stroke', 'Diabetes', 'Other'], index = ['Stroke', 'Diabetes', 'Other'])
cnts
# spot-check a few things
assert np.all(cnts.sum(axis=0) - df.pc3.value_counts() == 0)
assert np.all(cnts.sum(axis=1) - df.gs3.value_counts() == 0)
assert cnts.loc['Stroke', 'Stroke'] == ((df.pc3=='Stroke') & (df.gs3=='Stroke')).sum()
assert cnts.loc['Stroke', 'Other'] == ((df.pc3=='Other') & (df.gs3=='Stroke')).sum()
Explanation: Panel A
End of explanation
# set seed for reproducibility
np.random.seed(12345)
df['ra3'] = np.random.choice(['Stroke', 'Diabetes', 'Other'], size=len(df.index))
cnts = df.groupby(['gs3', 'ra3']).id.count()
cnts = cnts.unstack()
cnts = pd.DataFrame(cnts, columns=['Stroke', 'Diabetes', 'Other'], index = ['Stroke', 'Diabetes', 'Other'])
cnts
Explanation: Panel B
End of explanation
df['rft'] = np.random.choice(df.gs3, size=len(df.index))
cnts = df.groupby(['gs3', 'rft']).id.count()
cnts = cnts.unstack()
cnts = pd.DataFrame(cnts, columns=['Stroke', 'Diabetes', 'Other'], index = ['Stroke', 'Diabetes', 'Other'])
cnts
Explanation: Table 2
Confusion matrix for Random-From-Train verbal autopsy. The confusion matrix for Random-From-Train (with a length-three cause-list for clarity). As in Table 1, the entry each cell counts the number of deaths truly due to the row cause that were predicted to be due to the column cause. This table demonstrates that while Random-From-Train is inaccurate at the individual level, at the population level, the prediction distribution can closely match the truth.
End of explanation
pd.DataFrame({'row sums':cnts.sum(0), 'col sums':cnts.sum(1)})
Explanation: This table demonstrates that while Random-From-Train is inaccurate at the individual level, at the population level, the prediction distribution can closely match the truth.
End of explanation
def csmf_acc_for_random(J=34, r=500, n=10000, seed=12345):
    '''
    Use Monte Carlo to approximate the CSMF accuracy
    of randomly assigning deaths to causes

    J : int, number of causes
    r : int, number of replicates in Monte Carlo approx
    n : int, size of test db
    '''
# set random seed for reproducibility
np.random.seed(seed)
#######################################################
#######################################################
# generate n CSMFs from uninformative dirichlet prior
csmf_true = np.random.dirichlet(np.ones(J), size=r)
########################################################
########################################################
# generate n CSMFs from random allocation of causes
# to n deaths drawn from this distribution
csmf_rand = np.random.multinomial(n, np.ones(J) / float(J), size=r) / float(n)
assert np.allclose(csmf_rand.sum(axis=1), 1) # rows sum to one (modulo machine precision)
########################################################
########################################################
# calculate CSMF accuracy for all replicates
csmf_acc = 1 - np.sum(np.absolute(csmf_true - csmf_rand), axis=1) \
/ (2 * (1 - np.min(csmf_true, axis=1)))
#plt.title('Mean CSMF Accuracy = %0.2f\nfor I=%d, n=%d, r=%d replicates' % (np.mean(csmf_acc), I, n, r))
#sns.distplot(csmf_acc, rug=True, rug_kws={'alpha':.25})
return csmf_acc
acc = csmf_acc_for_random(J=34)
acc.mean(), acc.std()
Explanation: To do this, we performed a Monte Carlo calculation of the CSMF accuracy of Random Allocation, by simulating a dataset with known CSMF distribution, assigning “predicted” causes of death uniformly at random, and measuring the CSMF accuracy of the predictions.
The distribution of the simulated dataset is an important and subtle detail of this calculation. We sampled the true CSMF distribution from an uninformative Dirichlet distribution (which gives equal probability to all possible CSMF distributions). We generated XXX replicates of the Monte Carlo simulation, and calculated the mean and standard deviation of the CSMF accuracy.
End of explanation
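For reference when reading the vectorized NumPy expression in csmf_acc_for_random above, the CSMF accuracy it computes is the usual definition
$$ \text{CSMF accuracy} = 1 - \frac{\sum_{j=1}^{J}\bigl|\text{CSMF}_j^{\text{true}} - \text{CSMF}_j^{\text{pred}}\bigr|}{2\,\bigl(1 - \min_j \text{CSMF}_j^{\text{true}}\bigr)}, $$
which is exactly the line csmf_acc = 1 - np.sum(|true - pred|) / (2 * (1 - np.min(true))) in the code.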
def concordance_for_random(J=34, r=500, n=10000, seed=12345):
    '''
    Use Monte Carlo to approximate the concordance
    of randomly assigning deaths to causes

    J : int, number of causes
    r : int, number of replicates in Monte Carlo approx
    n : int, size of test db
    '''
# set random seed for reproducibility
np.random.seed(seed)
#######################################################
#######################################################
# generate r replicates of n underlying causes of death
# (true and predicted)
cause_true = np.floor(np.random.uniform(0, J, size=(r,n)))
cause_pred = np.floor(np.random.uniform(0, J, size=(r,n)))
########################################################
########################################################
# calculate concordance for r replicates
c = np.empty((J,r))
for j in range(J):
n_j = (cause_true == j).sum(axis=1)
n_j = np.array(n_j, dtype=float) # ensure that we get floating point division
c[j] = ((cause_true == j)&(cause_pred == j)).sum(axis=1) / n_j
concordance = np.mean(c, axis=0)
assert concordance.shape == (r,)
return concordance
c = concordance_for_random(J=3)
c.mean(), c.std()
Explanation: We also used this simulation framework to perform a Monte Carlo calculation of the concordance for random allocation, which provides a cross-check for the analytical derivation of CCC derived in [ref]. We repeated the simulations for cause lists ranging from 3 to 50 causes.
End of explanation
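As a reading aid (not part of the original text): for Random Allocation each death is assigned the correct cause with probability $1/J$, so the expected concordance is simply $1/J$. This is the analytic value the Monte Carlo estimates are checked against, and it is what the Figure 2 code further below plots as 1/df.J.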
def csmf_acc_for_rft(J=34, r=500, n=10000, seed=12345, uniform_train=False):
    '''
    Use Monte Carlo to approximate the CSMF accuracy
    of randomly assigning deaths according to train set distribution

    J : int, number of causes
    r : int, number of replicates in Monte Carlo approx
    n : int, size of test db
    uniform_train : bool, should train set have different distribution than test set
    '''
# set random seed for reproducibility
np.random.seed(seed)
#######################################################
#######################################################
# generate n CSMFs from uninformative dirichlet prior
csmf_true = np.random.dirichlet(np.ones(J), size=r)
assert np.allclose(csmf_true.sum(axis=1), 1) # after completing, rows sum to one (modulo machine precision)
#######################################################
#######################################################
# generate train set of size n
X_train = np.empty((r,n))
for i in range(r):
X_train[i,:] = np.random.choice(range(len(csmf_true[i])), p=csmf_true[i], size=n)
########################################################
########################################################
# re-calculate csmf for train set
# (since it is a little different than desired)
csmf_true = np.empty((r,J))
for i in range(r):
for j in range(J):
csmf_true[i,j] = (X_train[i] == j).sum() / float(n)
assert np.allclose(csmf_true.sum(axis=1), 1) # rows sum to one (modulo machine precision)
#######################################################
#######################################################
# resample train set to have equal class sizes,
# _if requested_
if uniform_train:
X_train = np.empty((r,n))
for i in range(r):
X_train[i,:] = np.random.choice(range(J), p=np.ones(J)/float(J), size=n)
########################################################
########################################################
# generate test set using random-from-train
X_test = np.empty((r,n))
for i in range(r):
X_test[i] = np.random.choice(X_train[i], size=n, replace=True)
########################################################
########################################################
# calculate csmf for test set
csmf_rft = np.empty((r,J))
for i in range(r):
for j in range(J):
csmf_rft[i,j] = (X_test[i] == j).sum() / float(n)
assert np.allclose(csmf_rft.sum(axis=1), 1) # rows sum to one (modulo machine precision)
########################################################
########################################################
# calculate CSMF accuracy for all replicates
csmf_acc = 1 - np.sum(np.absolute(csmf_true - csmf_rft), axis=1) \
/ (2 * (1 - np.min(csmf_true, axis=1)))
#plt.title('Mean CSMF Accuracy = %0.2f\nfor I=%d, n=%d, r=%d replicates' % (np.mean(csmf_acc), I, n, r))
#sns.distplot(csmf_acc, rug=True, rug_kws={'alpha':.25})
return csmf_acc
acc = csmf_acc_for_rft(J=34)
print(acc.mean(), acc.std())
acc = csmf_acc_for_rft(J=34, uniform_train=True)
print(acc.mean(), acc.std())
import sys
%%time
df = pd.DataFrame(columns=['J'])
for J in range(3,51):
sys.stdout.write(str(J)+' ')
results_J = {'J':J}
acc = csmf_acc_for_random(J, r=10000)
results_J['acc_rand'] = acc.mean()
results_J['acc_rand_lb'] = np.percentile(acc, 2.5)
results_J['acc_rand_ub'] = np.percentile(acc, 97.5)
c = concordance_for_random(J, r=500)
results_J['conc_rand'] = c.mean()
results_J['conc_rand_lb'] = np.percentile(c, 2.5)
results_J['conc_rand_ub'] = np.percentile(c, 97.5)
acc = csmf_acc_for_rft(J, r=500)
results_J['acc_rft'] = acc.mean()
results_J['acc_rft_lb'] = np.percentile(acc, 2.5)
results_J['acc_rft_ub'] = np.percentile(acc, 97.5)
acc = csmf_acc_for_rft(J, r=500, uniform_train=True)
results_J['acc_rft_unif'] = acc.mean()
results_J['acc_rft_unif_lb'] = np.percentile(acc, 2.5)
results_J['acc_rft_unif_ub'] = np.percentile(acc, 97.5)
df = df.append(results_J, ignore_index=True)
print()
df_sim = df
df = df_sim
df.head()
Explanation: This simulation setting also provided us an opportunity to demonstrate the importance of randomly resampling the cause-fraction of the test set from an uninformative Dirichlet distribution (a technical point that perhaps has not been sufficiently appreciated since its introduction in [ref]). To do so, we compared the CCCSMF accuracy of Random Allocation with that of Random-From-Train, where training data was either uniformly distributed among causes or distributed according same distribution as the test data.
End of explanation
df.index = df.J
np.round(((df.filter(['acc_rft', 'acc_rand', 'acc_rft_unif']) - 0.632) / (1 - 0.632)).loc[[5, 15, 25, 35, 50]], 3)
Explanation: Table 3
CCCSMF accuracy of Random Allocation and Random-From-Train with and without resampling the test CSMF distribution. This table demonstrates the importance of resampling the CSMF distribution in the test set; if the test and train sets have the same CSMF distribution, then simple approaches like Random-From-Train, as well as state-of-the-art approaches like King-Lu,23 can appear to have better performance than is justified, due to “overfitting”.
End of explanation
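The chance-corrected values reported in Table 3 (and used again for Figure 3) follow directly from the 1 - 1/e ≈ 0.632 chance level discussed with Figure 1 below:
$$ \text{CCCSMF accuracy} = \frac{\text{CSMF accuracy} - 0.632}{1 - 0.632}, $$
which matches the (x - .632) / (1 - .632) transformation applied in the code above and below.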
df.acc_rand.plot(color='k',marker='o')
plt.ylabel('CSMF Accuracy\nof Random Allocation', rotation=0, ha='right')
plt.xlabel('Number of Causes ($J$)')
plt.axis(xmin=0, xmax=53)
plt.subplots_adjust(left=.3)
Explanation: Figure 1
Figure 1. CSMF Accuracy of random allocation as a function of CoD list length. The mean CSMF accuracy of random allocation was calculated with 10,000 Monte Carlo replicates for cause-list length ranging from 3 to 50. The CSMF accuracy decreases monotonically as a function of J and appears to stay above 1-1/e≈0.632, which we selected for our chance-correction parameter.
End of explanation
plt.plot(1/df.J, df.conc_rand, 'ko')
plt.ylabel('Monte Carlo Estimate\nof Concordance\nof Random Allocation', rotation=0, ha='right')
plt.xlabel('Analytically Calculated Concordance\nof Random Allocation')
df.index = df.J
for j in [3,5,10,25,50]:
plt.text(1/float(j), df.conc_rand[j], ' $J=%d$'%j, ha='left', va='center', fontsize=24)
plt.subplots_adjust(left=.3)
Explanation: Figure 2
Figure 2. Comparison of concordance from Monte Carlo calculation and analytic calculation. The analogous chance-correction value for concordance was calculated analytically in Murray et al.13, and we confirmed its accuracy in our simulation environment. The absolute relative difference was always less than 1%.
End of explanation
df = pd.read_excel('../3-data/va_methods_comparison.xlsx')
# chance-correct accuracy
df.accuracy = (df.accuracy - 0.632) / (1 - 0.632)
df.head()
fig, ax_list = plt.subplots(2, 3, sharex=True, sharey=True, figsize=(15,10))
for i, hce in enumerate(['With HCE', 'Without HCE']):
for j, module in enumerate(['Neonate', 'Child', 'Adult']):
t = df[(df.hce==hce)&(df.module==module)&df.model.isin(['InterVA', 'PCVA', 'Tariff'])]
t = t.dropna(subset=['accuracy', 'CCC'])
ax = ax_list[i,j]
ax.axvline(0.64, color='grey', linestyle='--', linewidth=2)
ax.plot(t.accuracy, t.CCC, 'o', ms=15, color='grey', mec='k', mew=2, alpha=1)
for k in t.index:
ha = 'left'
dx = .06
dy = 0
if t.model[k] == 'Tariff' and j==0:
ha = 'right'
dx = -.06
elif t.model[k] == 'InterVA':
dy = -.0
ax.text(t.accuracy[k]+dx, t.CCC[k]+dy, t.model[k], ha=ha, va='center', fontsize=18)
if i == 0:
ax.set_title(module)
ax = ax_list[i,2]
ax.set_ylabel(hce, labelpad=-320)
ax = ax_list[0,0]
ax.set_ylabel('Individual-Level Quality (CCC)', ha='center', position=(0,0))
ax = ax_list[1,1]
ax.set_xlabel('Population-Level Quality (CCCSMF Accuracy)')
fig.subplots_adjust(wspace=0.1, hspace=0.1)
ax.set_xticks([-.3, 0, .3])
ax.set_yticks([.2, .4, .6, .8])
ax.axis(xmin=-.4, xmax=.6, ymin=.15, ymax=.55)
Explanation: Figure 3
Figure 3. Comparison of individual-level and population-level prediction quality for three commonly used methods: InterVA, Tariff, physician-certified verbal autopsy (PCVA). Questions that rely on the deceased having health care experience (HCE) are necessary for population-level PCVA quality to surpass random guessing. Data from Murray et al.12
End of explanation |
4,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!).
Tip
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip | Python Code:
import requests
# api request for bethesda, maryland
response = requests.get('https://api.forecast.io/forecast/a197f06e1906b1a937ad31d4378b8939/38.9847, -77.0947')
data = response.json()
current = data['currently']
current
Explanation: 1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!).
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
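Following the tip above, it is worth confirming the API located the right place before answering the questions. The Dark Sky / Forecast.io response normally carries these top-level fields; this check is an addition to the original notebook.
# sanity check: for Bethesda, Maryland we expect something like 'America/New_York'
print(data['timezone'])
print(data['latitude'], data['longitude'])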
diff = current['apparentTemperature'] - current['temperature']
print('The current windspeed is', current['windSpeed'],"miles per hour.")
print('It feels', diff, 'degrees warmer than it actually is.')
Explanation: 2) What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
daily = data['daily']['data']
today = daily[0]
today_moonphase = today['moonPhase']
if today_moonphase < .25:
moonphase = 'new moon'
elif today_moonphase < 0.5:
moonphase = 'first quarter moon'
elif today_moonphase < 0.75:
moonphase = 'full moon'
else:
moonphase = 'last quarter moon'
print('Reported moon phase from Dark Sky forecast is', today_moonphase)
print('So, currently, the', moonphase, 'is visible.')
Explanation: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
highlow_tempdiff = today['temperatureMax'] - today['temperatureMin']
print('The difference between the high and low temperatures for today is', highlow_tempdiff, 'degrees.')
Explanation: 4) What's the difference between the high and low temperatures for today?
End of explanation
for item in daily:
daily_hightemp = item['temperatureMax']
if daily_hightemp < 70:
tempdescription = 'COLD'
elif daily_hightemp < 85:
tempdescription = 'WARM'
else:
tempdescription = 'HOT'
print('The high temp is', daily_hightemp, 'so it is', tempdescription)
Explanation: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
response = requests.get('https://api.forecast.io/forecast/a197f06e1906b1a937ad31d4378b8939/25.7617,-80.1918')
data = response.json()
hourly = data['hourly']['data']
for item in hourly:
    if item['cloudCover'] > .5:  # cloud cover above 0.5 counts as "cloudy"
print('It is', item['temperature'], 'degrees and cloudy')
else:
print('It is', item['temperature'], 'degrees')
Explanation: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
response1980 = requests.get('https://api.forecast.io/forecast/a197f06e1906b1a937ad31d4378b8939/40.7829,-73.9654,346550400')
data1980 = response1980.json()
christmasday1980 = data1980['daily']['data'][0]
print('It was', christmasday1980['temperatureMax'], 'degrees on Christmas Day, 1980.')
response1990 = requests.get('https://api.forecast.io/forecast/a197f06e1906b1a937ad31d4378b8939/40.7829,-73.9654,662083200')
data1990 = response1990.json()
christmasday1990 = data1990['daily']['data'][0]
print('It was', christmasday1990['temperatureMax'], 'degrees on Christmas Day, 1990.')
response2000 = requests.get('https://api.forecast.io/forecast/a197f06e1906b1a937ad31d4378b8939/40.7829,-73.9654,976838400')
data2000 = response2000.json()
christmasday2000 = data2000['daily']['data'][0]
print('It was', christmasday2000['temperatureMax'], 'degrees on Christmas Day, 2000.')
Explanation: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date!
Tip: You'll want to use Forecast.io's "time machine" API at https://developer.forecast.io/docs/v2
End of explanation |
4,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow
Step1: Download the Data
Step2: Dataset Metadata
Step3: Building a TensorFlow Custom Estimator
Creating feature columns
Creating model_fn
Create estimator using the model_fn
Define data input_fn
Define Train and evaluate experiment
Run experiment with parameters
1. Create feature columns
Step4: 2. Create model_fn
Use feature columns to create input_layer
Use tf.keras.layers to define the model architecture and output
Use binary_classification_head to create the EstimatorSpec
Step5: 3. Create estimator
Step6: 4. Data Input Function
Step7: 5. Experiment Definition
Step8: 6. Run Experiment with Parameters | Python Code:
import math
import os
import pandas as pd
import numpy as np
from datetime import datetime
import tensorflow as tf
from tensorflow import data
print "TensorFlow : {}".format(tf.__version__)
SEED = 19831060
Explanation: TensorFlow: Optimizing Learning Rate
End of explanation
DATA_DIR='data'
# !mkdir $DATA_DIR
# !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR
# !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
TRAIN_DATA_SIZE = 32561
EVAL_DATA_SIZE = 16278
Explanation: Download the Data
End of explanation
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
CATEGORICAL_FEATURE_NAMES = ['gender', 'race', 'education', 'marital_status', 'relationship',
'workclass', 'occupation', 'native_country']
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
NUM_CLASSES = len(TARGET_LABELS)
def get_categorical_features_vocabolary():
data = pd.read_csv(TRAIN_DATA_FILE, names=HEADER)
return {
column: list(data[column].unique())
for column in data.columns if column in CATEGORICAL_FEATURE_NAMES
}
feature_vocabolary = get_categorical_features_vocabolary()
print(feature_vocabolary)
Explanation: Dataset Metadata
End of explanation
def create_feature_columns():
feature_columns = []
for column in NUMERIC_FEATURE_NAMES:
feature_column = tf.feature_column.numeric_column(column)
feature_columns.append(feature_column)
for column in CATEGORICAL_FEATURE_NAMES:
vocabolary = feature_vocabolary[column]
embed_size = round(math.sqrt(len(vocabolary)) * 1.5)
feature_column = tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_vocabulary_list(column, vocabolary),
embed_size)
feature_columns.append(feature_column)
return feature_columns
Explanation: Building a TensorFlow Custom Estimator
Creating feature columns
Creating model_fn
Create estimator using the model_fn
Define data input_fn
Define Train and evaluate experiment
Run experiment with parameters
1. Create feature columns
End of explanation
from tensorflow.python.ops import math_ops
def find_learning_rate(params):
training_step = tf.cast(tf.train.get_global_step(), tf.float32)
factor = tf.cast(tf.multiply(1.e-5, training_step*training_step), tf.float32)
learning_rate = tf.add(params.learning_rate, factor)
return learning_rate
def update_learning_rate(params):
training_step = tf.cast(tf.train.get_global_step(), tf.int32)
base_cycle = tf.floordiv(training_step, params.cycle_length)
current_cycle = tf.cast(tf.round(tf.sqrt(tf.cast(base_cycle, tf.float32))) + 1, tf.int32)
current_cycle_length = tf.cast(tf.multiply(current_cycle, params.cycle_length), tf.int32)
cycle_step = tf.mod(training_step, current_cycle_length)
learning_rate = tf.cond(
tf.equal(cycle_step, 0),
lambda: params.learning_rate,
lambda: tf.train.cosine_decay(
learning_rate=params.learning_rate,
global_step=cycle_step,
decay_steps=current_cycle_length,
alpha=0.0,
)
)
tf.summary.scalar('base_cycle', base_cycle)
tf.summary.scalar('current_cycle', current_cycle)
tf.summary.scalar('current_cycle_length', current_cycle_length)
tf.summary.scalar('cycle_step', cycle_step)
tf.summary.scalar('learning_rate', learning_rate)
return learning_rate
def model_fn(features, labels, mode, params):
is_training = True if mode == tf.estimator.ModeKeys.TRAIN else False
# model body
def _inference(features, mode, params):
feature_columns = create_feature_columns()
input_layer = tf.feature_column.input_layer(features=features, feature_columns=feature_columns)
dense_inputs = input_layer
for i in range(len(params.hidden_units)):
dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)
dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense, training=is_training)
dense_inputs = dense_dropout
fully_connected = dense_inputs
logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)
return logits
# model head
head = tf.contrib.estimator.binary_classification_head(
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME
)
learning_rate = find_learning_rate(params) if params.lr_search else update_learning_rate(params)
return head.create_estimator_spec(
features=features,
mode=mode,
logits=_inference(features, mode, params),
labels=labels,
optimizer=tf.train.AdamOptimizer(learning_rate)
)
Explanation: 2. Create model_fn
Use feature columns to create input_layer
Use tf.keras.layers to define the model architecture and output
Use binary_classification_head to create the EstimatorSpec
End of explanation
def create_estimator(params, run_config):
feature_columns = create_feature_columns()
estimator = tf.estimator.Estimator(
model_fn,
params=params,
config=run_config
)
return estimator
Explanation: 3. Create estimator
End of explanation
def make_input_fn(file_pattern, batch_size, num_epochs,
mode=tf.estimator.ModeKeys.EVAL):
def _input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
column_names=HEADER,
column_defaults=HEADER_DEFAULTS,
label_name=TARGET_NAME,
field_delim=',',
use_quote_delim=True,
header=False,
num_epochs=num_epochs,
shuffle=(mode==tf.estimator.ModeKeys.TRAIN)
)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
return _input_fn
Explanation: 4. Data Input Function
End of explanation
def train_and_evaluate_experiment(params, run_config):
# TrainSpec ####################################
train_input_fn = make_input_fn(
TRAIN_DATA_FILE,
batch_size=params.batch_size,
num_epochs=None,
mode=tf.estimator.ModeKeys.TRAIN
)
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps=params.traning_steps
)
###############################################
# EvalSpec ####################################
eval_input_fn = make_input_fn(
EVAL_DATA_FILE,
num_epochs=1,
batch_size=params.batch_size,
)
eval_spec = tf.estimator.EvalSpec(
name=datetime.utcnow().strftime("%H%M%S"),
input_fn = eval_input_fn,
steps=None,
start_delay_secs=0,
throttle_secs=params.eval_throttle_secs
)
###############################################
tf.logging.set_verbosity(tf.logging.INFO)
if tf.gfile.Exists(run_config.model_dir):
print("Removing previous artefacts...")
tf.gfile.DeleteRecursively(run_config.model_dir)
    print('')
estimator = create_estimator(params, run_config)
    print('')
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
# tf.estimator.train_and_evaluate(
# estimator=estimator,
# train_spec=train_spec,
# eval_spec=eval_spec
# )
estimator.train(train_input_fn, steps=params.traning_steps)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
Explanation: 5. Experiment Definition
End of explanation
MODELS_LOCATION = 'models/census'
MODEL_NAME = 'dnn_classifier-01'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
BATCH_SIZE = 64
NUM_EPOCHS = 10
steps_per_epoch = int(math.ceil((TRAIN_DATA_SIZE / BATCH_SIZE)))
training_steps = int(steps_per_epoch * NUM_EPOCHS)
print("Training data size: {}".format(TRAIN_DATA_SIZE))
print("Btach data size: {}".format(BATCH_SIZE))
print("Steps per epoch: {}".format(steps_per_epoch))
print("Traing epochs: {}".format(NUM_EPOCHS))
print("Training steps: {}".format(training_steps))
params = tf.contrib.training.HParams(
batch_size=BATCH_SIZE,
traning_steps=training_steps,
hidden_units=[64, 32],
learning_rate=1.e-3,
cycle_length=500,
dropout_prob=0.1,
eval_throttle_secs=0,
lr_search=False
)
run_config = tf.estimator.RunConfig(
tf_random_seed=SEED,
save_checkpoints_steps=steps_per_epoch,
log_step_count_steps=100,
save_summary_steps=1,
keep_checkpoint_max=3,
model_dir=model_dir,
)
train_and_evaluate_experiment(params, run_config)
Explanation: 6. Run Experiment with Parameters
End of explanation |
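To run the learning-rate range test instead of the cyclical cosine schedule, the same experiment can be re-launched with lr_search=True. The sketch below is a usage illustration, not part of the original run; the starting learning rate and the model directory name are assumptions.
lr_search_params = tf.contrib.training.HParams(
    batch_size=BATCH_SIZE,
    traning_steps=training_steps,  # key name matches the (misspelled) attribute read by the experiment code
    hidden_units=[64, 32],
    learning_rate=1.e-5,           # sweep start; find_learning_rate() adds 1e-5 * global_step**2 on top of this
    cycle_length=500,
    dropout_prob=0.1,
    eval_throttle_secs=0,
    lr_search=True
)
lr_search_run_config = tf.estimator.RunConfig(
    tf_random_seed=SEED,
    save_summary_steps=1,
    model_dir=os.path.join(MODELS_LOCATION, 'dnn_classifier-lr-search')
)
# train_and_evaluate_experiment(lr_search_params, lr_search_run_config)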
4,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evolutionary algorithm to calibrate model
1 get data from S&P500
Step1: Define parameter space bounds
We define the parameter bounds as follows.
| Parameter | Values (start, stop, step) |
| -------------| ------------|
| share_chartists | 0 - 1, 0.1 |
| share_mean_reversion | 0 - 1, 0.1 |
| order_expiration_time | 1000 - 10000, 1000 |
| agent_order_price_variability | 1 - 10, 1 |
| agent_order_variability | 0.1 - 5 |
| agent_ma_short | 5 - 100, 5 |
| agent_ma_long | 50 - 400, 50 |
| agents_hold_thresholds | 0.0005 |
| Agent_volume_risk_aversion | 0.1 - 1, 0.1 |
| Agent_propensity_to_switch | 0.1 - 2.2, 0.1 |
| profit_announcement_working_days | 5 - 50, 5 |
| price_to_earnings_spread | 5 - 50, 5 |
| price_to_earnings_heterogeneity | 5 - 50, 5 |
Step2: Sample the parameter space using a latin hypercube
Step3: Run evolutionary algorithm | Python Code:
start_date = '2010-01-01'
end_date = '2016-12-31'
spy = data.DataReader("SPY",
start=start_date,
end=end_date,
data_source='google')['Close']
spy_returns = spy.pct_change()[1:]
spy_volume = data.DataReader("SPY",
start=start_date,
end=end_date,
data_source='google')['Volume']
spy_autocorrelation = autocorrelation_returns(spy_returns, 25)
spy_kurtosis = kurtosis(spy_returns)
spy_autocorrelation_abs = autocorrelation_abs_returns(spy_returns, 25)
spy_hurst = hurst(spy, lag1=2 , lag2=20)
spy_cor_volu_vola = correlation_volume_volatility(spy_volume, spy_returns, window=10)
stylized_facts_spy = [spy_autocorrelation, spy_kurtosis, spy_autocorrelation_abs, spy_hurst, spy_cor_volu_vola]
pd.DataFrame(stylized_facts_spy, columns=['S&P500'],
index=['autocorrelation', 'kurtosis', 'autocorrelation_abs', 'hurst', 'correlation_volume_volatility'])
Explanation: Evolutionary algorithm to calibrate model
1 get data from S&P500
End of explanation
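The fitness used later is computed inside simulate_population, which is imported from the project code and not shown in this notebook; conceptually it scores how close a parameter set's simulated stylized facts are to the empirical ones collected above. A simple cost of that kind could look like the following sketch (the function name and the plain sum of absolute differences are assumptions, not the project's actual definition):
import numpy as np

def stylized_facts_cost(simulated_facts, empirical_facts=stylized_facts_spy):
    # lower is better: distance between simulated and empirical stylized facts
    simulated = np.asarray(simulated_facts, dtype=float)
    empirical = np.asarray(empirical_facts, dtype=float)
    return float(np.sum(np.abs(simulated - empirical)))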
parameter_space = {'share_chartists':[0.0, 1.0], 'share_mean_reversion':[0.0, 1.0], 'order_expiration_time':[1000, 10000],
'agent_order_price_variability':[1, 10], 'agent_order_variability':[0.1, 5.0],
'agent_ma_short':[5, 100], 'agent_ma_long':[50, 400], 'agents_hold_thresholds':[0.00005,0.01],
'agent_volume_risk_aversion':[0.1, 1.0], 'agent_propensity_to_switch':[0.1, 2.2],
'profit_announcement_working_days':[5, 50], 'price_to_earnings_base':[10,20],
'price_to_earnings_heterogeneity':[1.1,2.5], 'price_to_earnings_gap':[4,20],
'longMA_heterogeneity':[1.1,1.8], 'shortMA_heterogeneity':[1.1,1.8], 'shortMA_memory_divider':[1, 10]}
problem = {
'num_vars': 17,
'names': ['share_chartists', 'share_mean_reversion', 'order_expiration_time', 'agent_order_price_variability',
'agent_order_variability', 'agent_ma_short', 'agent_ma_long', 'agents_hold_thresholds',
'agent_volume_risk_aversion', 'agent_propensity_to_switch', 'profit_announcement_working_days',
'price_to_earnings_base', 'price_to_earnings_heterogeneity', 'price_to_earnings_gap',
'longMA_heterogeneity', 'shortMA_heterogeneity', 'shortMA_memory_divider'],
'bounds': [[0.0, 1.0], [0.0, 1.0], [1000, 10000], [1, 10],
[0.1, 5.0], [5, 100], [50, 400], [0.00005,0.01],
[0.1, 1], [0.1, 2.2], [5, 50],
[10,20], [1.1,2.5], [4,20],
[1.1,1.8], [1.1,1.8], [1, 10]]
}
Explanation: Define parameter space bounds
We define the parameter bounds as follows.
| Parameter | Values (start, stop, step) |
| -------------| ------------|
| share_chartists | 0 - 1, 0.1 |
| share_mean_reversion | 0 - 1, 0.1 |
| order_expiration_time | 1000 - 10000, 1000 |
| agent_order_price_variability | 1 - 10, 1 |
| agent_order_variability | 0.1 - 5 |
| agent_ma_short | 5 - 100, 5 |
| agent_ma_long | 50 - 400, 50 |
| agents_hold_thresholds | 0.0005 |
| Agent_volume_risk_aversion | 0.1 - 1, 0.1 |
| Agent_propensity_to_switch | 0.1 - 2.2, 0.1 |
| profit_announcement_working_days | 5 - 50, 5 |
| price_to_earnings_spread | 5 - 50, 5 |
| price_to_earnings_heterogeneity | 5 - 50, 5 |
End of explanation
population_size = 10
latin_hyper_cube = latin.sample(problem=problem, N=population_size)
latin_hyper_cube = latin_hyper_cube.tolist()
# transform some of the parameters to integer
for idx, parameters in enumerate(latin_hyper_cube):
latin_hyper_cube[idx][2] = int(latin_hyper_cube[idx][2])
latin_hyper_cube[idx][3] = int(latin_hyper_cube[idx][3])
latin_hyper_cube[idx][4] = int(latin_hyper_cube[idx][4])
latin_hyper_cube[idx][5] = int(latin_hyper_cube[idx][5])
latin_hyper_cube[idx][6] = int(latin_hyper_cube[idx][6])
latin_hyper_cube[idx][10] = int(latin_hyper_cube[idx][10])
latin_hyper_cube[idx][11] = int(latin_hyper_cube[idx][11])
latin_hyper_cube[idx][13] = int(latin_hyper_cube[idx][13])
latin_hyper_cube[idx][16] = int(latin_hyper_cube[idx][16])
Explanation: Sample the parameter space using a latin hypercube
End of explanation
# create initial population
population = []
for parameters in latin_hyper_cube:
population.append(Individual(parameters, [], np.inf))
all_populations = [population]
av_pop_fitness = []
# fixed parameters
iterations = 10
SIMTIME = 200
NRUNS = 2
backward_simulated_time = 400
initial_total_money = 26000
init_profit = 1000
init_discount_rate = 0.17
number_of_agents = 500
for i in tqdm(range(iterations)):
simulated_population, fitness = simulate_population(all_populations[i], number_of_runs=NRUNS, simulation_time=SIMTIME,
number_of_agents=number_of_agents, init_tot_money=initial_total_money,
init_profit=init_profit,
init_discount_rate=init_discount_rate,
stylized_facts_real_life=stylized_facts_spy)
av_pop_fitness.append(fitness)
all_populations.append(evolve_population(simulated_population, fittest_to_retain=0.3, random_to_retain=0.2,
parents_to_mutate=0.3, parameters_to_mutate=0.1, problem=problem))
fig, ax1 = plt.subplots(1, 1, figsize=(10,5))
ax1.plot(range(len(av_pop_fitness)), av_pop_fitness)
ax1.set_ylabel('Fitness', fontsize='12')
ax1.set_xlabel('Generation', fontsize='12')
all_populations[4][1].stylized_facts
stylized_facts_spy
Explanation: Run evolutionary algorithm
End of explanation |
4,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing an LR-Table-Generator
A Grammar for Grammars
As the goal is to generate an LR-table-generator, we first need to implement a parser for context free grammars.
The file arith.g contains an example grammar that describes arithmetic expressions.
Step1: We use <span style="font-variant
Step2: The annotated grammar is stored in the file Grammar.g4.
The parser will return a list of grammar rules, where each rule of the form
$$ a \rightarrow \beta $$
is stored as the tuple (a,) + 𝛽.
Step3: We start by generating both scanner and parser.
Step4: The Class GrammarRule
The class GrammarRule is used to store a single grammar rule. As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__.
Step5: The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.
Step6: Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
Step7: Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occuring in Rules.
Step8: Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occuring in Rules.
Step9: Extended Marked Rules
The class ExtendedMarkedRule stores a single marked rule of the form
$$ v \rightarrow \alpha \bullet \beta $$
Step10: Given an extended marked rule self, the function is_complete checks, whether the extended marked rule self has the form
$$ c \rightarrow \alpha\; \bullet $$
Step11: Given an extended marked rule self of the form
$$ c \rightarrow \alpha \bullet X\, \delta $$
Step12: Given a grammar symbol name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
Step13: Given an extended marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None.
Step14: The function move_dot(self) transforms an extended marked rule of the form
$$ c \rightarrow \alpha \bullet X\, \beta $$
Step15: The function to_rule(self) turns the extended marked rule self into a GrammarRule, i.e. the extended marked rule
$$ c \rightarrow \alpha \bullet \beta $$
Step16: The function to_rule(self) turns the extended marked rule self into a MarkedRule, i.e. the extended marked rule
$$ c \rightarrow \alpha \bullet \beta $$
Step17: The class MarkedRule is similar to the class ExtendedMarkedRule but does not have the follow set.
Step18: Given a set of extended marked rules M, the function combine_rule combines those extended marked rules that have the same core
Step19: LR-Table-Generation
The class Grammar represents a context free grammar. It stores a list of the GrammarRules of the given grammar.
Each grammar rule of the form
$$ a \rightarrow \beta $$
The start symbol is assumed to be the variable on the left hand side of the first rule. The grammar is augmented with the rule
$$ \widehat{s} \rightarrow s. $$
Here $s$ is the start variable of the given grammar and $\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation
- mStates is the set of all states of the LR-parser. These states are sets of extended marked rules.
- mStateNamesis a dictionary assigning names of the form s0, s1, $\cdots$, sn to the states stored in
mStates. The functions action and goto will be defined for state names, not for states, because
otherwise the table representing these functions would become both huge and unreadable.
- mConflicts is a Boolean variable that will be set to true if the table generation discovers
shift/reduce conflicts or reduce/reduce conflicts.
Step20: Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.
Step21: Given a Grammar, the function compute_tables computes
- the sets First(v) and Follow(v) for every variable v,
- the set of all states of the LR-Parser,
- the action table, and
- the goto table.
Given a grammar g,
- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and
- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a.
Step22: The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later
to represent reduce actions in the action table.
Step23: The function compute_first(self) computes the sets $\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$
Step24: Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\texttt{FirstList}(\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\varepsilon = \texttt{''}$.
Step25: The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string.
Step26: Given an augmented grammar $G = \langle V,T,R\cup{\widehat{s} \rightarrow s\,\$}, \widehat{s}\rangle$
and a variable $a$, the set of tokens that might follow $a$ is defined as
Step27: If $\mathcal{M}$ is a set of extended marked rules, then the closure of $\mathcal{M}$ is the smallest set $\mathcal{K}$ such that
we have the following
Step28: Given a set of extended marked rules $\mathcal{M}$ and a grammar symbol $X$, the function $\texttt{goto}(\mathcal{M}, X)$
is defined as follows
Step29: The function all_states computes the set of all states of an LR-parser. The function starts with the state
$$ \texttt{closure}\bigl({ \widehat{s} \rightarrow \bullet s
Step30: The following function computes the action table and is defined as follows
Step31: The function compute_goto_table computes the goto table.
Step32: The command below cleans the directory. If you are running windows, you have to replace rm with del. | Python Code:
!type Examples\c-grammar.g
!cat Examples/arith.g
Explanation: Implementing an LR-Table-Generator
A Grammar for Grammars
As the goal is to generate an LR-table-generator we first need to implement a parser for context free grammars.
The file arith.g contains an example grammar that describes arithmetic expressions.
End of explanation
!cat Pure.g4
Explanation: We use Antlr to develop a parser for context free grammars. The pure grammar used to parse context free grammars is stored in the file Pure.g4. It is similar to the grammar that we have already used to implement Earley's algorithm, but it additionally allows the use of the operator |, so that all grammar rules that define a variable can be combined in one rule.
End of explanation
!cat -n Grammar.g4
Explanation: The annotated grammar is stored in the file Grammar.g4.
The parser will return a list of grammar rules, where each rule of the form
$$ a \rightarrow \beta $$
is stored as the tuple (a,) + 𝛽.
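For instance -- purely as an illustration, not necessarily a rule that occurs in the files shown above -- a grammar rule written as expr: expr '+' product; would come back as the tuple ('expr', 'expr', "'+'", 'product').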
End of explanation
!antlr4 -Dlanguage=Python3 Grammar.g4
from GrammarLexer import GrammarLexer
from GrammarParser import GrammarParser
import antlr4
Explanation: We start by generating both scanner and parser.
End of explanation
class GrammarRule:
def __init__(self, variable, body):
self.mVariable = variable
self.mBody = body
def __eq__(self, other):
return isinstance(other, GrammarRule) and \
self.mVariable == other.mVariable and \
self.mBody == other.mBody
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
return f'{self.mVariable} → {" ".join(self.mBody)}'
Explanation: The Class GrammarRule
The class GrammarRule is used to store a single grammar rule. As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__.
End of explanation
def parse_grammar(filename):
input_stream = antlr4.FileStream(filename, encoding="utf-8")
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]
grammar = parse_grammar('Examples/c-grammar.g')
grammar
Explanation: The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.
End of explanation
def is_var(name):
return name[0] != "'" and name.islower()
Explanation: Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
End of explanation
def collect_variables(Rules):
Variables = set()
for rule in Rules:
Variables.add(rule.mVariable)
for item in rule.mBody:
if is_var(item):
Variables.add(item)
return Variables
Explanation: Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occurring in Rules.
End of explanation
def collect_tokens(Rules):
Tokens = set()
for rule in Rules:
for item in rule.mBody:
if not is_var(item):
Tokens.add(item)
return Tokens
Explanation: Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occurring in Rules.
End of explanation
class ExtendedMarkedRule():
def __init__(self, variable, alpha, beta, follow):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
self.mFollow = follow
def __eq__(self, other):
return isinstance(other, ExtendedMarkedRule) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta and \
self.mFollow == other.mFollow
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.mVariable) + \
hash(self.mAlpha) + \
hash(self.mBeta) + \
hash(self.mFollow)
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
if len(self.mFollow) > 1:
followStr = '{' + ','.join(self.mFollow) + '}'
else:
followStr = ','.join(self.mFollow)
return f'{self.mVariable} → {alphaStr} • {betaStr}: {followStr}'
Explanation: Extended Marked Rules
The class ExtendedMarkedRule stores a single marked rule of the form
$$ v \rightarrow \alpha \bullet \beta : L $$
where the variable $v$ is stored in the member variable mVariable, while $\alpha$ and $\beta$ are stored in the variables mAlpha and mBeta respectively. The set of follow tokens $L$ is stored in the variable mFollow. These variables are assumed to contain tuples of grammar symbols. A grammar symbol is either
- a variable,
- a token, or
- a literal, i.e. a string enclosed in single quotes.
Later, we need to maintain sets of marked rules to represent states. Therefore, we have to define the methods __eq__, __ne__, and __hash__.
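As a purely illustrative example (the rule and its follow set below are made up, they are not taken from one of the grammar files used later), an extended marked rule can be created and printed like this:
emr = ExtendedMarkedRule('expr', ('expr', "'+'"), ('product',), frozenset({'$'}))
print(emr)   # prints: expr → expr '+' • product: $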
End of explanation
def is_complete(self):
return len(self.mBeta) == 0
ExtendedMarkedRule.is_complete = is_complete
del is_complete
Explanation: Given an extended marked rule self, the function is_complete checks, whether the extended marked rule self has the form
$$ c \rightarrow \alpha\; \bullet: L,$$
i.e. it checks, whether the $\bullet$ is at the end of the grammar rule.
End of explanation
def symbol_after_dot(self):
if len(self.mBeta) > 0:
return self.mBeta[0]
return None
ExtendedMarkedRule.symbol_after_dot = symbol_after_dot
del symbol_after_dot
Explanation: Given an extended marked rule self of the form
$$ c \rightarrow \alpha \bullet X\, \delta: L, $$
the function symbol_after_dot returns the symbol $X$. If there is no symbol after the $\bullet$, the method returns None.
End of explanation
def is_var(name):
return name[0] != "'" and name.islower()
Explanation: Given a grammar symbol name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
End of explanation
def next_var(self):
if len(self.mBeta) > 0:
var = self.mBeta[0]
if is_var(var):
return var
return None
ExtendedMarkedRule.next_var = next_var
del next_var
Explanation: Given an extended marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None.
End of explanation
def move_dot(self):
return ExtendedMarkedRule(self.mVariable,
self.mAlpha + (self.mBeta[0],),
self.mBeta[1:],
self.mFollow)
ExtendedMarkedRule.move_dot = move_dot
del move_dot
Explanation: The function move_dot(self) transforms an extended marked rule of the form
$$ c \rightarrow \alpha \bullet X\, \beta: L $$
into an extended marked rule of the form
$$ c \rightarrow \alpha\, X \bullet \beta: L, $$
i.e. the $\bullet$ is moved over the next symbol. Invocation of this method assumes that there is a symbol
following the $\bullet$.
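A small sketch of what move_dot does, again with a made-up rule (illustration only):
emr = ExtendedMarkedRule('expr', (), ('expr', "'+'", 'product'), frozenset({'$'}))
print(emr)             # expr →  • expr '+' product: $
print(emr.move_dot())  # expr → expr • '+' product: $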
End of explanation
def to_rule(self):
return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)
ExtendedMarkedRule.to_rule = to_rule
del to_rule
Explanation: The function to_rule(self) turns the extended marked rule self into a GrammarRule, i.e. the extended marked rule
$$ c \rightarrow \alpha \bullet \beta: L $$
is turned into the grammar rule
$$ c \rightarrow \alpha\, \beta. $$
End of explanation
def to_marked_rule(self):
return MarkedRule(self.mVariable, self.mAlpha, self.mBeta)
ExtendedMarkedRule.to_marked_rule = to_marked_rule
del to_marked_rule
Explanation: The function to_marked_rule(self) turns the extended marked rule self into a MarkedRule, i.e. the extended marked rule
$$ c \rightarrow \alpha \bullet \beta: L $$
is turned into the marked rule
$$ c \rightarrow \alpha\bullet \beta. $$
End of explanation
class MarkedRule():
def __init__(self, variable, alpha, beta):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
def __eq__(self, other):
return isinstance(other, MarkedRule) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.mVariable) + \
hash(self.mAlpha) + \
hash(self.mBeta)
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
return f'{self.mVariable} → {alphaStr} • {betaStr}'
Explanation: The class MarkedRule is similar to the class ExtendedMarkedRule but does not have the follow set.
End of explanation
def combine_rules(M):
Result = set()
Core = set()
for emr1 in M:
Follow = set()
core1 = emr1.to_marked_rule()
if core1 in Core:
continue
Core.add(core1)
for emr2 in M:
core2 = emr2.to_marked_rule()
if core1 == core2:
Follow |= emr2.mFollow
new_emr = ExtendedMarkedRule(core1.mVariable, core1.mAlpha, core1.mBeta, frozenset(Follow))
Result.add(new_emr)
return frozenset(Result)
Explanation: Given a set of extended marked rules M, the function combine_rules combines those extended marked rules that have the same core: If
$$ a \rightarrow \beta \bullet \gamma : L $$
is an extended marked rule, then its core is defined as the marked rule
$$ a \rightarrow \beta \bullet \gamma. $$
If $a \rightarrow \beta \bullet \gamma : L_1$ and $a \rightarrow \beta \bullet \gamma : L_2$ are two extended marked rules, then they can be combined into the rule
$$ a \rightarrow \beta \bullet \gamma : L_1\cup L_2 $$
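A minimal illustration with made-up rules: two extended marked rules that share the same core but carry different follow sets are merged into a single rule whose follow set is the union of the two follow sets.
r1 = ExtendedMarkedRule('expr', (), ('expr', "'+'", 'product'), frozenset({'$'}))
r2 = ExtendedMarkedRule('expr', (), ('expr', "'+'", 'product'), frozenset({"')'"}))
print(combine_rules({r1, r2}))   # one rule whose follow set is {$, ')'}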
End of explanation
class Grammar():
def __init__(self, Rules):
self.mRules = Rules
self.mStart = Rules[0].mVariable
self.mVariables = collect_variables(Rules)
self.mTokens = collect_tokens(Rules)
self.mStates = set()
self.mStateNames = {}
self.mConflicts = False
self.mVariables.add('ŝ')
self.mTokens.add('$')
self.mRules.append(GrammarRule('ŝ', (self.mStart, ))) # augmenting
self.compute_tables()
Explanation: LR-Table-Generation
The class Grammar represents a context free grammar. It stores a list of the GrammarRules of the given grammar.
Each grammar rule of the form
$$ a \rightarrow \beta $$
is stored as a GrammarRule object in the member variable mRules. The start symbol is assumed to be the variable on the left hand side of the first rule. The grammar is augmented with the rule
$$ \widehat{s} \rightarrow s. $$
Here $s$ is the start variable of the given grammar and $\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation
- mStates is the set of all states of the LR-parser. These states are sets of extended marked rules.
- mStateNames is a dictionary assigning names of the form s0, s1, $\cdots$, sn to the states stored in
mStates. The functions action and goto will be defined for state names, not for states, because
otherwise the table representing these functions would become both huge and unreadable.
- mConflicts is a Boolean variable that will be set to true if the table generation discovers
shift/reduce conflicts or reduce/reduce conflicts.
End of explanation
def initialize_dictionary(Variables):
return { a: set() for a in Variables }
Explanation: Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.
End of explanation
def compute_tables(self):
self.mFirst = initialize_dictionary(self.mVariables)
self.mFollow = initialize_dictionary(self.mVariables)
self.compute_first()
self.compute_follow()
self.compute_rule_names()
self.all_states()
self.compute_action_table()
self.compute_goto_table()
Grammar.compute_tables = compute_tables
del compute_tables
Explanation: Given a Grammar, the function compute_tables computes
- the sets First(v) and Follow(v) for every variable v,
- the set of all states of the LR-Parser,
- the action table, and
- the goto table.
Given a grammar g,
- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and
- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a.
End of explanation
def compute_rule_names(self):
self.mRuleNames = {}
counter = 0
for rule in self.mRules:
self.mRuleNames[rule] = 'r' + str(counter)
counter += 1
Grammar.compute_rule_names = compute_rule_names
del compute_rule_names
Explanation: The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later
to represent reduce actions in the action table.
End of explanation
def compute_first(self):
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
first_body = self.first_list(body)
if not (first_body <= self.mFirst[a]):
change = True
self.mFirst[a] |= first_body
print('First sets:')
for v in self.mVariables:
print(f'First({v}) = {self.mFirst[v]}')
Grammar.compute_first = compute_first
del compute_first
Explanation: The function compute_first(self) computes the sets $\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$:
$$\texttt{First}(\texttt{c}) :=
  \Bigl\{ t \in T \Bigm| \exists \gamma \in (V \cup T)^*: \texttt{c} \Rightarrow^* t\,\gamma \Bigr\}.
$$
The definition of the function $\texttt{First}()$ is extended to strings from $(V \cup T)^*$ as follows:
- $\texttt{FirstList}(\varepsilon) = \{\}$.
- $\texttt{FirstList}(t\,\beta) = \{ t \}$ if $t \in T$.
- $\texttt{FirstList}(\texttt{a}\,\beta) = \left\{
  \begin{array}[c]{ll}
     \texttt{First}(\texttt{a}) \cup \texttt{FirstList}(\beta) & \mbox{if $\texttt{a} \Rightarrow^* \varepsilon$;} \\
     \texttt{First}(\texttt{a})                                & \mbox{otherwise.}
  \end{array}
  \right.
  $
If $\texttt{a}$ is a variable of $G$ and the rules defining $\texttt{a}$ are given as
$$\texttt{a} \rightarrow \alpha_1 \mid \cdots \mid \alpha_n, $$
then we have
$$\texttt{First}(\texttt{a}) = \bigcup\limits_{i=1}^n \texttt{FirstList}(\alpha_i). $$
The dictionary mFirst that stores this function is computed via a fixed point iteration.
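As an illustration (using an arithmetic grammar of the usual shape, not necessarily the exact contents of arith.g): for the rules expr → expr '+' product | product, product → product '*' factor | factor, and factor → '(' expr ')' | NUMBER, the fixed point iteration yields First(factor) = { '(', NUMBER }, and since none of these variables is nullable, First(product) = First(factor) and First(expr) = First(product).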
End of explanation
def first_list(self, alpha):
if len(alpha) == 0:
return { '' }
elif is_var(alpha[0]):
v, *r = alpha
return eps_union(self.mFirst[v], self.first_list(r))
else:
t = alpha[0]
return { t }
Grammar.first_list = first_list
del first_list
Explanation: Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\texttt{FirstList}(\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\varepsilon = \texttt{''}$.
End of explanation
def eps_union(S, T):
if '' in S:
if '' in T:
return S | T
return (S - { '' }) | T
return S
Explanation: The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string.
End of explanation
def compute_follow(self):
self.mFollow[self.mStart] = { '$' }
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
for i in range(len(body)):
if is_var(body[i]):
yi = body[i]
Tail = self.first_list(body[i+1:])
firstTail = eps_union(Tail, self.mFollow[a])
if not (firstTail <= self.mFollow[yi]):
change = True
self.mFollow[yi] |= firstTail
print('Follow sets (note that "$" denotes the end of file):');
for v in self.mVariables:
print(f'Follow({v}) = {self.mFollow[v]}')
Grammar.compute_follow = compute_follow
del compute_follow
Explanation: Given an augmented grammar $G = \langle V,T,R\cup\{\widehat{s} \rightarrow s\,\$\}, \widehat{s}\rangle$
and a variable $a$, the set of tokens that might follow $a$ is defined as:
$$\texttt{Follow}(a) :=
  \bigl\{ t \in \widehat{T} \,\bigm|\, \exists \beta,\gamma \in (V \cup \widehat{T})^*:
          \widehat{s} \Rightarrow^* \beta \,a\, t\, \gamma
  \bigr\}.
$$
The function compute_follow computes the sets $\texttt{Follow}(a)$ for all variables $a$ via a fixed-point iteration.
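Continuing the illustrative arithmetic grammar from above: '$' lands in Follow(expr) because expr is the start variable, the rule factor → '(' expr ')' contributes ')', and the rule expr → expr '+' product contributes '+', so Follow(expr) = { '$', '+', ')' }; similarly Follow(product) = Follow(expr) ∪ { '*' } and Follow(factor) = Follow(product).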
End of explanation
def cmp_closure(self, Marked_Rules):
All_Rules = Marked_Rules
New_Rules = Marked_Rules
while True:
More_Rules = set()
for rule in New_Rules:
c = rule.next_var()
if c == None:
continue
delta = rule.mBeta[1:]
L = rule.mFollow
for rule in self.mRules:
head, alpha = rule.mVariable, rule.mBody
if c == head:
newL = frozenset({ x for t in L for x in self.first_list(delta + (t,)) })
More_Rules |= { ExtendedMarkedRule(head, (), alpha, newL) }
if More_Rules <= All_Rules:
return frozenset(All_Rules)
New_Rules = More_Rules - All_Rules
All_Rules |= New_Rules
Grammar.cmp_closure = cmp_closure
del cmp_closure
Explanation: If $\mathcal{M}$ is a set of extended marked rules, then the closure of $\mathcal{M}$ is the smallest set $\mathcal{K}$ such that
we have the following:
- $\mathcal{M} \subseteq \mathcal{K}$,
- If $a \rightarrow \beta \bullet c\, \delta: L$ is an extended marked rule from
$\mathcal{K}$, $c$ is a variable, and $t\in L$ and if, furthermore,
$c \rightarrow \gamma$ is a grammar rule,
then the marked rule $c \rightarrow \bullet \gamma: \texttt{First}(\delta\,t)$
is an element of $\mathcal{K}$:
$$(a \rightarrow \beta \bullet c\, \delta) \in \mathcal{K}
\;\wedge\;
(c \rightarrow \gamma) \in R
\;\Rightarrow\; (c \rightarrow \bullet \gamma: \texttt{First}(\delta\,t)) \in \mathcal{K}
$$
We define $\texttt{closure}(\mathcal{M}) := \mathcal{K}$. The function cmp_closure computes this closure for a given set of extended marked rules via a fixed-point iteration.
End of explanation
def goto(self, Marked_Rules, x):
Result = set()
for mr in Marked_Rules:
if mr.symbol_after_dot() == x:
Result.add(mr.move_dot())
return combine_rules(self.cmp_closure(Result))
Grammar.goto = goto
del goto
Explanation: Given a set of extended marked rules $\mathcal{M}$ and a grammar symbol $X$, the function $\texttt{goto}(\mathcal{M}, X)$
is defined as follows:
$$\texttt{goto}(\mathcal{M}, X) := \texttt{closure}\Bigl( \bigl\{
     a \rightarrow \beta\, X \bullet \delta:L \bigm| (a \rightarrow \beta \bullet X\, \delta:L) \in \mathcal{M}
  \bigr\} \Bigr).
$$
End of explanation
def all_states(self):
start_state = self.cmp_closure({ ExtendedMarkedRule('ŝ', (), (self.mStart,), frozenset({'$'})) })
start_state = combine_rules(start_state)
self.mStates = { start_state }
New_States = self.mStates
while True:
More_States = set()
for Rule_Set in New_States:
for mr in Rule_Set:
if not mr.is_complete():
x = mr.symbol_after_dot()
next_state = self.goto(Rule_Set, x)
if next_state not in self.mStates and next_state not in More_States:
More_States.add(next_state)
print('.', end='')
if len(More_States) == 0:
break
New_States = More_States;
self.mStates |= New_States
print('\n', len(self.mStates), sep='')
print("All LR-states:")
counter = 1
self.mStateNames[start_state] = 's0'
print(f's0 = {set(start_state)}')
for state in self.mStates - { start_state }:
self.mStateNames[state] = f's{counter}'
print(f's{counter} = {set(state)}')
counter += 1
Grammar.all_states = all_states
del all_states
Explanation: The function all_states computes the set of all states of an LR-parser. The function starts with the state
$$ \texttt{closure}\bigl(\{ \widehat{s} \rightarrow \bullet s : \{\$\} \}\bigr) $$
and then tries to compute new states by using the function goto. This computation proceeds via a
fixed-point iteration. Once all states have been computed, the function assigns names to these states.
This association is stored in the dictionary mStateNames.
End of explanation
def compute_action_table(self):
self.mActionTable = {}
print('\nAction Table:')
for state in self.mStates:
stateName = self.mStateNames[state]
actionTable = {}
# compute shift actions
for token in self.mTokens:
newState = self.goto(state, token)
if newState != set():
newName = self.mStateNames[newState]
actionTable[token] = ('shift', newName)
self.mActionTable[stateName, token] = ('shift', newName)
print(f'action("{stateName}", {token}) = ("shift", {newName})')
# compute reduce actions
for mr in state:
if mr.is_complete():
for token in mr.mFollow:
action1 = actionTable.get(token)
action2 = ('reduce', mr.to_rule())
if action1 == None:
actionTable[token] = action2
r = self.mRuleNames[mr.to_rule()]
self.mActionTable[stateName, token] = ('reduce', r)
print(f'action("{stateName}", {token}) = {action2}')
elif action1 != action2:
self.mConflicts = True
print('')
print(f'conflict in state {stateName}:')
print(f'{stateName} = {state}')
print(f'action("{stateName}", {token}) = {action1}')
print(f'action("{stateName}", {token}) = {action2}')
print('')
for mr in state:
if mr == ExtendedMarkedRule('ŝ', (self.mStart,), (), frozenset({'$'})):
actionTable['$'] = 'accept'
self.mActionTable[stateName, '$'] = 'accept'
print(f'action("{stateName}", $) = accept')
Grammar.compute_action_table = compute_action_table
del compute_action_table
Explanation: The following function computes the action table. The function $\texttt{action}$ is defined as follows:
- If $\mathcal{M}$ contains an extended marked rule of the form $a \rightarrow \beta \bullet t\, \delta:L$
then we have
$$\texttt{action}(\mathcal{M},t) := \langle \texttt{shift}, \texttt{goto}(\mathcal{M},t) \rangle.$$
- If $\mathcal{M}$ contains an extended marked rule of the form $a \rightarrow \beta\, \bullet:L$ and we have
$t \in L$, then we define
$$\texttt{action}(\mathcal{M},t) := \langle \texttt{reduce}, a \rightarrow \beta \rangle$$
- If $\mathcal{M}$ contains the extended marked rule $\widehat{s} \rightarrow s \bullet:\{\$\}$, then we define
$$\texttt{action}(\mathcal{M},\$) := \texttt{accept}. $$
- Otherwise, we have
$$\texttt{action}(\mathcal{M},t) := \texttt{error}. $$
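To make the purpose of the two tables concrete, here is a minimal sketch (not part of the original notebook) of a shift-reduce recognizer that drives a parse with the generated mActionTable and mGotoTable. It assumes the grammar produced no conflicts and that tokens is a list of token names and literals written exactly as in the grammar (literals keep their quotes), terminated by '$'.
def parse(g, tokens):
    # invert the rule-name mapping so that a reduce action can recover its rule
    name_to_rule = { name: rule for rule, name in g.mRuleNames.items() }
    stack = ['s0']                 # stack of state names; 's0' is the start state
    index = 0
    while True:
        state, token = stack[-1], tokens[index]
        action = g.mActionTable.get((state, token), 'error')
        if action == 'accept':
            return True
        if action == 'error':
            return False
        if action[0] == 'shift':
            stack.append(action[1])
            index += 1
        else:                      # ('reduce', rule_name)
            rule = name_to_rule[action[1]]
            for _ in rule.mBody:   # pop one state per symbol of the rule body
                stack.pop()
            stack.append(g.mGotoTable[stack[-1], rule.mVariable])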
End of explanation
def compute_goto_table(self):
self.mGotoTable = {}
print('\nGoto Table:')
for state in self.mStates:
for var in self.mVariables:
newState = self.goto(state, var)
if newState != set():
stateName = self.mStateNames[state]
newName = self.mStateNames[newState]
self.mGotoTable[stateName, var] = newName
print(f'goto({stateName}, {var}) = {newName}')
Grammar.compute_goto_table = compute_goto_table
del compute_goto_table
%%time
g = Grammar(grammar)
def strip_quotes(t):
if t[0] == "'" and t[-1] == "'":
return t[1:-1]
return t
def dump_parse_table(self, file):
with open(file, 'w', encoding="utf-8") as handle:
handle.write('# Grammar rules:\n')
for rule in self.mRules:
rule_name = self.mRuleNames[rule]
handle.write(f'{rule_name} =("{rule.mVariable}", {rule.mBody})\n')
handle.write('\n# Action table:\n')
handle.write('actionTable = {}\n')
for s, t in self.mActionTable:
action = self.mActionTable[s, t]
t = strip_quotes(t)
if action[0] == 'reduce':
rule_name = action[1]
handle.write(f"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\n")
elif action == 'accept':
handle.write(f"actionTable['{s}', '{t}'] = 'accept'\n")
else:
handle.write(f"actionTable['{s}', '{t}'] = {action}\n")
handle.write('\n# Goto table:\n')
handle.write('gotoTable = {}\n')
for s, v in self.mGotoTable:
state = self.mGotoTable[s, v]
handle.write(f"gotoTable['{s}', '{v}'] = '{state}'\n")
Grammar.dump_parse_table = dump_parse_table
del dump_parse_table
g.dump_parse_table('parse-table.py')
!type parse-table.py
!cat parse-table.py
Explanation: The function compute_goto_table computes the goto table.
End of explanation
!del GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rmdir /S /Q __pycache__
!dir /B
!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rm -r __pycache__
!ls
Explanation: The command below cleans the directory. If you are running windows, you have to replace rm with del.
End of explanation |
4,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Efficient Frontier
Step1: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
Step2: These return series can be used to create a wide range of portfolios, which all
have different returns and risks (standard deviation). We can produce a wide range
of random weight vectors and plot those portfolios. As we want all our capital to be invested, this vector will have to some to one.
Step3: Next, lets evaluate how many of these random portfolios would perform. Towards this goal we are calculating the mean returns as well as the volatility (here we are using standard deviation). You can also see that there is
a filter that only allows to plot portfolios with a standard deviation of < 2 for better illustration.
Step4: In the code you will notice the calculation of the return with
Step5: Upon plotting those you will observe that they form a characteristic parabolic
shape called the ‘Markowitz bullet‘ with the boundaries being called the ‘efficient
frontier‘, where we have the lowest variance for a given expected.
Step6: Markowitz optimization and the Efficient Frontier
Once we have a good representation of our portfolios as the blue dots show we can calculate the efficient frontier Markowitz-style. This is done by minimising
$$ w^T C w$$
for $w$ on the expected portfolio return $R^T w$ whilst keeping the sum of all the
weights equal to 1
Step7: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the one optimal portfolio returned
Step8: Backtesting on real market data
This is all very interesting but not very applied. We next demonstrate how you can create a simple algorithm in zipline -- the open-source backtester that powers Quantopian -- to test this optimization on actual historical stock data.
First, lets load in some historical data using Quantopian's get_pricing().
Step9: Next, we'll create a zipline algorithm by defining two functions -- initialize() which is called once before the simulation starts, and handle_data() which is called for every trading bar. We then instantiate the algorithm object.
If you are confused about the syntax of zipline, check out the tutorial. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd
np.random.seed(123)
# Turn off progress printing
solvers.options['show_progress'] = False
Explanation: The Efficient Frontier: Markowitz Portfolio optimization in Python
Authors: Dr. Thomas Starke, David Edwards, Dr. Thomas Wiecki
Notebook released under the Creative Commons Attribution 4.0 License.
Introduction
In this blog post you will learn about the basic idea behind Markowitz portfolio optimization as well as how to do it in Python. We will then show how you can create a simple backtest that rebalances its portfolio in a Markowitz-optimal way. We hope you enjoy it and get a little more enlightened in the process.
We will start by using random data and only later use actual stock data. This will hopefully help you to get a sense of how to use modelling and simulation to improve your understanding of the theoretical concepts. Don‘t forget that the skill of an algo-trader is to put mathematical models into code and this example is great practice.
Let's start with importing a few modules, which we need later and produce a series of normally distributed returns. cvxopt is a convex solver which we will use for the optimization of the portfolio.
Simulations
End of explanation
## NUMBER OF ASSETS
n_assets = 4
## NUMBER OF OBSERVATIONS
n_obs = 1000
return_vec = np.random.randn(n_assets, n_obs)
plt.plot(return_vec.T, alpha=.4);
plt.xlabel('time')
plt.ylabel('returns')
Explanation: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
End of explanation
def rand_weights(n):
''' Produces n random weights that sum to 1 '''
k = np.random.rand(n)
return k / sum(k)
print rand_weights(n_assets)
print rand_weights(n_assets)
Explanation: These return series can be used to create a wide range of portfolios, which all
have different returns and risks (standard deviation). We can produce a wide range
of random weight vectors and plot those portfolios. As we want all our capital to be invested, this vector will have to sum to one.
End of explanation
def random_portfolio(returns):
'''
Returns the mean and standard deviation of returns for a random portfolio
'''
p = np.asmatrix(np.mean(returns, axis=1))
w = np.asmatrix(rand_weights(returns.shape[0]))
C = np.asmatrix(np.cov(returns))
mu = w * p.T
sigma = np.sqrt(w * C * w.T)
# This recursion reduces outliers to keep plots pretty
if sigma > 2:
return random_portfolio(returns)
return mu, sigma
Explanation: Next, lets evaluate how many of these random portfolios would perform. Towards this goal we are calculating the mean returns as well as the volatility (here we are using standard deviation). You can also see that there is
a filter that only allows to plot portfolios with a standard deviation of < 2 for better illustration.
End of explanation
n_portfolios = 500
means, stds = np.column_stack([
random_portfolio(return_vec)
for _ in xrange(n_portfolios)
])
Explanation: In the code you will notice the calculation of the return with:
$$ R = p^T w $$
where $R$ is the expected return, $p^T$ is the transpose of the vector for the mean
returns for each time series and w is the weight vector of the portfolio. $p$ is a Nx1
column vector, so $p^T$ turns into a 1xN row vector which can be multiplied with the
Nx1 weight (column) vector w to give a scalar result. This is equivalent to the dot
product used in the code. Keep in mind that Python has a reversed definition of
rows and columns and the accurate NumPy version of the previous equation would
be R = w * p.T
Next, we calculate the standard deviation with
$$\sigma = \sqrt{w^T C w}$$
where $C$ is the covariance matrix of the returns which is a NxN matrix. Please
note that if we simply calculated the simple standard deviation with the appropriate weighting using std(array(ret_vec).T*w) we would get a slightly different
’bullet’. This is because the simple standard deviation calculation would not take
covariances into account. In the covariance matrix, the values of the diagonal
represent the simple variances of each asset while the off-diagonals are the variances between the assets. By using ordinary std() we effectively only regard the
diagonal and miss the rest. A small but significant difference.
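As a quick sanity check of these two formulas (illustrative only, not part of the original post), the return and risk of a single random portfolio can also be computed directly:
w = np.asmatrix(rand_weights(n_assets))
p = np.asmatrix(np.mean(return_vec, axis=1))
C = np.asmatrix(np.cov(return_vec))
print w * p.T, np.sqrt(w * C * w.T)   # expected return R and standard deviation sigma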
Lets generate the mean returns and volatility for 500 random portfolios:
End of explanation
plt.plot(stds, means, 'o', markersize=5)
plt.xlabel('std')
plt.ylabel('mean')
plt.title('Mean and standard deviation of returns of randomly generated portfolios');
Explanation: Upon plotting those you will observe that they form a characteristic parabolic
shape called the ‘Markowitz bullet‘ with the boundaries being called the ‘efficient
frontier‘, where we have the lowest variance for a given expected return.
End of explanation
def optimal_portfolio(returns):
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
return np.asarray(wt), returns, risks
weights, returns, risks = optimal_portfolio(return_vec)
plt.plot(stds, means, 'o')
plt.ylabel('mean')
plt.xlabel('std')
plt.plot(risks, returns, 'y-o');
Explanation: Markowitz optimization and the Efficient Frontier
Once we have a good representation of our portfolios as the blue dots show we can calculate the efficient frontier Markowitz-style. This is done by minimising
$$ w^T C w$$
for $w$ on the expected portfolio return $R^T w$ whilst keeping the sum of all the
weights equal to 1:
$$ \sum_{i}{w_i} = 1 $$
Here we parametrically run through $R^T w = \mu$ and find the minimum variance
for different $\mu$‘s. This can be done with scipy.optimize.minimize but we have
to define quite a complex problem with bounds, constraints and a Lagrange multiplier. Conveniently, the cvxopt package, a convex solver, does all of that for us. We used one of their examples with some modifications as shown below. You will notice that there are some conditioning expressions in the code. They are simply needed to set up the problem. For more information please have a look at the cvxopt example.
The mus vector produces a series of expected return values $\mu$ in a non-linear and more appropriate way. We will see later that we don‘t need to calculate a lot of these as they perfectly fit a parabola, which can safely be extrapolated for higher values.
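For comparison only -- a rough, hypothetical sketch of the same long-only minimum-variance problem written with scipy.optimize.minimize instead of cvxopt (it is not used anywhere else in this notebook):
import scipy.optimize as sco

def min_variance_weights(returns, target_return):
    n = returns.shape[0]
    p = np.mean(returns, axis=1)
    C = np.cov(returns)
    constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},                # weights sum to one
                   {'type': 'eq', 'fun': lambda w: np.dot(p, w) - target_return})   # hit the target return
    result = sco.minimize(lambda w: np.dot(w, np.dot(C, w)),                        # portfolio variance
                          n * [1.0 / n], method='SLSQP',
                          bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x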
End of explanation
print weights
Explanation: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the one optimal portfolio returned:
End of explanation
data = get_pricing(['IBM', 'GLD', 'XOM', 'AAPL',
'MSFT', 'TLT', 'SHY'],
start_date='2005-06-07', end_date='2014-01-27')
data.loc['price', :, :].plot(figsize=(8,5))
plt.ylabel('price in $');
Explanation: Backtesting on real market data
This is all very interesting but not very applied. We next demonstrate how you can create a simple algorithm in zipline -- the open-source backtester that powers Quantopian -- to test this optimization on actual historical stock data.
First, lets load in some historical data using Quantopian's get_pricing().
End of explanation
import zipline
from zipline.api import (add_history,
history,
set_slippage,
slippage,
set_commission,
commission,
order_target_percent)
from zipline import TradingAlgorithm
def initialize(context):
'''
Called once at the very beginning of a backtest (and live trading).
Use this method to set up any bookkeeping variables.
The context object is passed to all the other methods in your algorithm.
Parameters
context: An initialized and empty Python dictionary that has been
augmented so that properties can be accessed using dot
notation as well as the traditional bracket notation.
Returns None
'''
# Register history container to keep a window of the last 100 prices.
add_history(100, '1d', 'price')
# Turn off the slippage model
set_slippage(slippage.FixedSlippage(spread=0.0))
# Set the commission model (Interactive Brokers Commission)
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(context, data):
'''
Called when a market event occurs for any of the algorithm's
securities.
Parameters
data: A dictionary keyed by security id containing the current
state of the securities in the algo's universe.
context: The same context object from the initialize function.
Stores the up to date portfolio as well as any state
variables defined.
Returns None
'''
# Allow history to accumulate 100 days of prices before trading
# and rebalance every day thereafter.
context.tick += 1
if context.tick < 100:
return
# Get rolling window of past prices and compute returns
prices = history(100, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights, _, _ = optimal_portfolio(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
# Instantiate algorithm
algo = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
results = algo.run(data.swapaxes(2, 0, 1))
results.portfolio_value.plot()
Explanation: Next, we'll create a zipline algorithm by defining two functions -- initialize() which is called once before the simulation starts, and handle_data() which is called for every trading bar. We then instantiate the algorithm object.
If you are confused about the syntax of zipline, check out the tutorial.
End of explanation |
4,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attempting human-like speech
Step1: Define our Markov chain functions. First to create the dicts. First attempt only takes triplets of words a b c and adds {'a b'
Step2: Load the books and build the dictionaries, and run some simple tests for proof of principle.
import pensieve as pens
import textacy
from collections import defaultdict
from random import random
Explanation: Attempting human-like speech: Markov chains
In order to make the activity sentences in our memory more human-like, we can attempt to build a simple chatbot from the text as well. A simple, and maybe naive, approach is to build a Markov chain.
First load some modules
End of explanation
def make_markov_chain(docs):
my_dict = defaultdict(list)
inverse_dict = defaultdict(list)
for doc in docs:
print("Reading ",doc)
d = pens.Doc(doc)
for p in d.paragraphs:
for sent in p.doc.sents:
#print(sent.text)
bow = textacy.extract.words(sent)
for i_word, word in enumerate(bow):
if i_word < 3:
continue
key = sent[i_word-2].text+' '+sent[i_word-1].text
value = sent[i_word].text
my_dict[key].append(value)
inverse_dict[value].append(key)
return my_dict, inverse_dict
def sample_from_chain(mv_dict, key):
options = len(mv_dict[key])
x = 999
while x > options-1:
x = int(10*(random()/options)-1)
#rint(x)
#print(x,key, options)
return(mv_dict[key][x])
def make_chain(mkv_chain, key):
counter = 0
chain = key
while key in mkv_chain:
#if counter > 5:
# return chain
chain+=' '+sample_from_chain(mkv_chain,key)
key = chain.split()[-2]+' '+chain.split()[-1]
counter +=1
return chain
all_books = ['../../clusterpot/book1.txt',
'../../clusterpot/book2.txt',
'../../clusterpot/book3.txt',
'../../clusterpot/book4.txt',
'../../clusterpot/book5.txt',
'../../clusterpot/book6.txt',
'../../clusterpot/book7.txt']
Explanation: Define our Markov chain functions. First to create the dicts. First attempt only takes triplets of words a b c and adds {'a b':c} to the dictionary, takes a step forward. The inverse dictionary is also saved for some tests, using it to seed the chain.
The dictionaries are sampled with equal probability, could look into using frequency for relative weights.
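One possible frequency-weighted variant (an assumption, not used above): because my_dict keeps every occurrence of a successor word, a plain random.choice over that list already weights successors by how often they follow the key in the books.
from random import choice

def sample_from_chain_weighted(mv_dict, key):
    # duplicates in the successor list act as frequency weights
    return choice(mv_dict[key])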
End of explanation
mkv_chain, inv_chain = make_markov_chain(all_books)
#print(mkv_chain)
for i in range(20):
print('\n',make_chain(mkv_chain,'He said'))
Explanation: Load the books and build the dictionaries, and run some simple tests for proof of principle.
End of explanation |
4,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
lesson1-rxt50-CA.ipynb -- Code Along of
Step1: Just looking at the fastai library source code while the above works
Step2: Hah! How about that. That settles that mystery. Up above you'll see two progress bars
Step3: Hell yeah. Precomputed Activations kick ass. Just a few seconds..
... And now that fun ends
Step4: That took over 20 minutes, by comparison.
Step5: NOTE
Step6: Analyzing Results
Step7:
Step8: See
Step9:
Step10: << Reload everything and run .TTA() again >>
Step11: Saving Predictions
Step12: Looks good. Need to have type(str) columns for the DataFrame otherwise Pandas won't save it as a .feather.
Step13: Submission
Submission format is id,label, with label being the predicted likelihood of being a dog. data.classes shows that 'dogs' is the 2nd category, so save the 2nd column of predictions to the submission file. Also the LogLoss eval metric judges strongly against total wrong answers (1 or 0 when the answer is NOT 1 or 0), so the predictions will be clipped to [0.05
Step14: Good, so there's no loss of data when saving as a DataFrame.
This model got 0.08151 on the Kaggle Dogs vs Cats Redux competition. Tied for 266/1314.
Testing FDs Issue
https
Step15: Previous testing below
Step16: | Python Code:
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz = 299
ARCH = resnext50
bs = 8 # if a TitanX is maxing out at 28, I'll give this 870M 8..
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, num_workers=4)
learn = ConvLearner.pretrained(ARCH, data, precompute=True, ps=0.5)
# conv_learner.py: class ConvnetBuilder(): ps (float or array of float): dropout parameters
# NOTE: http://forums.fast.ai/t/error-when-trying-to-use-resnext50/7555
# save weights to fastai/fastai/ -- until this is automatic
Explanation: lesson1-rxt50-CA.ipynb -- Code Along of:
https://github.com/fastai/fastai/blob/master/courses/dl1/lesson1-rxt50.ipynb
Reimplementing the dogsvcats classifier in ResNetXt50
Dogs v Cats super-charged:
End of explanation
len(data.aug_dl.dataset), bs * 250
len(data.fix_dl.dataset), bs * 2875
Explanation: Just looking at the fastai library source code while the above works:
class ConvLearner(Learner):
def __init__(self, data, models, precompute=False, **kwargs):
self.precompute = False
super().__init__(data, models, **kwargs)
self.crit = F.binary_cross_entropy if data.is_multi else F.nll_loss
if data.is_reg: self.crit = F.l1_loss
elif self.metrics is None:
self.metrics = [accuracy_multi] if self.data.is_multi else [accuracy]
if precompute: self.save_fc1()
self.freeze()
self.precompute = precompute
further below:
```
def save_fc1(self):
self.get_activations()
act, val_act, test_act = self.activations
m=self.models.top_model
if len(self.activations[0])==0:
predict_to_bcolz(m, self.data.fix_dl, act)
if len(self.activations[1])==0:
predict_to_bcolz(m, self.data.val_dl, val_act)
if len(self.activations[2])==0:
if self.data.test_dl: predict_to_bcolz(m, self.data.test_dl, test_act)
self.fc_data = ImageClassifierData.from_arrays(self.data.path,
(act, self.data.trn_y), (val_act, self.data.val_y), self.data.bs, classes=self.data.classes,
test = test_act if self.data.test_dl else None, num_workers=8)
``
Wait so does this mean when `precompute=True` is specified, .. oh of course.. fuck, that's awesome: fastai automatically computes all the activations for the train, validation, AND test data set (if provided) when `precompute` is set to True. I fucking love this library..
End of explanation
learn.fit(lrs=1e-2, n_cycle=1) # specifying lrs & n_cycle just to learn the API better
Explanation: Hah! How about that. That settles that mystery. Up above you'll see two progress bars:
2875/2875
250/250
These are the precomputations of the train & validation activations for the ResNetXt50 ConvNet.
The batch size is set to 8 (that answers my other question of how does fastai know how not to overload the RAM on my GPU: it uses the batch size parameter I specified when initializing the data object, so it's on me).
The first precomputation runs through 2875 minibatches of size 8, for a total of 2875x8=23,000 images. The second run is on the validation set, for 250x8=2000 images. No precomputation is done for the test set as it was not passed in to the data object when it was initialized.
That's the whole 25,000 image data set for cats & dogs. Sweet.
End of explanation
learn.activations[0].shape # ResNeXt50 uses FC layers? I thought it was fully Conv
# yup: check learn.summary(); after a flatten, 2 FC (aka Linear) layers. Flatten
# operation produces a 4096-long vector/tensor-thing.
# Oh that's interesting: the way learn's activations are structured: for
# train, valid, and test:
print("Train Activs: {}\nValid Activs: {}\n Test Activs: {}".format(
learn.activations[0].shape, learn.activations[1].shape, learn.activations[2].shape))
learn.precompute=False
learn.fit(lrs=1e-2, n_cycle=2, cycle_len=1)
Explanation: Hell yeah. Precomputed Activations kick ass. Just a few seconds..
... And now that fun ends:
End of explanation
learn.save('RNx_224')
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=6, num_workers=4)
learn = ConvLearner.pretrained(ARCH, data, precompute=False, ps=0.5)
learn.load('RNx_224')
learn.unfreeze()
lr = np.array([1e-4, 1e-3, 1e-2])
Explanation: That took over 20 minutes, by comparison.
End of explanation
learn.fit(lrs=lr, n_cycle=3, cycle_len=1)
learn.save('RNx_224_all_50')
learn.load('RNx_224_all_50')
log_preds, y = learn.TTA()
accuracy(log_preds, y)
Explanation: NOTE: Had GPU MEM crashes even at bs=2.. However: restarting Kernel, and re-initializing a learner and loading weights: I can do a batch size of at least 6 (2500 MB).
End of explanation
preds = np.argmax(log_preds, axis=1)
probs = np.exp(log_preds[:,1])
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], 4, replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
def plot_val_with_title(idxs, title):
imgs = np.stack([data.val_ds[x][0] for x in idxs])
title_probs = [probs[x] for x in idxs]
print(title)
return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)
def plots(ims, figsize=(12,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i])
def load_img_id(ds, idx): return np.array(PIL.Image.open(PATH+ds.fnames[idx]))
def plot_val_with_title(idxs, title):
imgs = [load_img_id(data.val_ds,x) for x in idxs]
title_probs = [probs[x] for x in idxs]
print(title)
return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))
def most_by_mask(mask, mult):
idxs = np.where(mask)[0]
return idxs[np.argsort(mult * probs[idxs])[:4]]
def most_by_correct(y, is_correct):
mult = -1 if (y==1)==is_correct else 1
return most_by_mask((preds == data.val_y)==is_correct & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
Explanation: Analyzing Results:
End of explanation
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz = 299
ARCH = resnext50
bs = 8 # if a TitanX is maxing out at 28, I'll give this 870M 8..
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=6, num_workers=4, test_name='test1')
learn = ConvLearner.pretrained(ARCH, data, precompute=False, ps=0.5)
learn.load('RNx_224_all_50')
Explanation:
End of explanation
!ulimit -n
log_preds = learn.TTA(is_test=True)
Explanation: See: https://github.com/fastai/fastai/issues/23 and https://github.com/pytorch/pytorch/issues/973
Something about max open file descriptors (fds) -- maybe bc new Archs in fastai make new files instead of folders? Anyway, to avoid the error 2 lines down: specify !ulimit -n 2048 or so. Hopefully a more permanent solution is available soon. Also the fix only seems to work in the terminal, before starting the Jupyter session.
End of explanation
!ulimit -n
# Python fix to ulimit issue above: https://github.com/fastai/fastai/issues/23#issuecomment-345091054
import resource
rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'getrlimit before:{resource.getrlimit(resource.RLIMIT_NOFILE)}')
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, rlimit[1]))
print(f'getrlimit after:{resource.getrlimit(resource.RLIMIT_NOFILE)}')
# checking:
!ulimit -n
Explanation:
End of explanation
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz = 299
ARCH = resnext50
bs = 8 # if a TitanX is maxing out at 28, I'll give this 870M 8..
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=6, num_workers=4, test_name='test1')
learn = ConvLearner.pretrained(ARCH, data, precompute=False, ps=0.5)
learn.load('RNx_224_all_50')
log_preds = learn.TTA(is_test=True)
Explanation: << Reload everything and run .TTA() again >>
End of explanation
# forgot that TTA() returns 2 things
log_preds = log_preds[0]
df = pd.DataFrame(log_preds)
df.head()
data.classes
df.columns = data.classes
df['dogs'][:10]
Explanation: Saving Predictions
End of explanation
# making sure I don't waste 15 minutes by losing predictions to an ssh broken pipe again
pd.DataFrame.to_feather(df, PATH+'results/'+'RNx_225_all_50_logpreds.feather')
Explanation: Looks good. Need to have type(str) columns for the DataFrame otherwise Pandas won't save it as a .feather.
End of explanation
test_preds = np.exp(log_preds)
data.test_dl.dataset.fnames[:10]
learn.data.test_dl.dataset.fnames[:10]
preds = np.clip(test_preds[:,1], 0.05, 0.95)
ids = [i[6:-4] for i in learn.data.test_dl.dataset.fnames]
submission = pd.DataFrame({'id':ids, 'label':preds})
SUBM = 'subm/'
submission.to_csv(PATH+SUBM+'submission_RNx_224_all_50.csv.gz', compression='gzip', index=False)
FileLink(PATH+SUBM+'submission_RNx_224_all_50.csv.gz')
temp = pd.read_feather(PATH+'results/'+'RNx_225_all_50_logpreds.feather')
temp['cats'][0]
log_preds[:,0][0]
Explanation: Submission
Submission format is id,label, with label being the predicted likelihood of being a dog. data.classes shows that 'dogs' is the 2nd category, so save the 2nd column of predictions to the submission file. Also the LogLoss eval metric judges strongly against total wrong answers (1 or 0 when the answer is NOT 1 or 0), so the predictions will be clipped to [0.05:0.95] as that gives better results.
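A quick illustration of why that clipping helps (not from the original notebook): log loss blows up for a confidently wrong answer, but is capped once the prediction is clipped to 0.95.
import numpy as np  # already available via fastai.imports in this notebook
for p in (0.9999, 0.95):
    print(f'prediction {p} when the true label is 0 -> log loss {-np.log(1 - p):.2f}')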
End of explanation
!ulimit -n
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz = 299
ARCH = resnext50
bs = 8
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=6, num_workers=4, test_name='test1')
learn = ConvLearner.pretrained(ARCH, data, ps=0.5, precompute=False)
learn.load('RNx_224_all_50')
log_preds = learn.TTA(is_test=True)[0]
log_preds.shape
Explanation: Good, so there's no loss of data when saving as a DataFrame.
This model got 0.08151 on the Kaggle Dogs vs Cats Redux competition. Tied for 266/1314.
Testing FDs Issue
https://github.com/fastai/fastai/issues/23#issuecomment-345412558
THIS TESTING SECTION:
fastai library updated to use loops instead of list comprehension to fix max-open file-descriptors issue -- conda-installed PyTorch uninstalled and replaced w/ source-installs.
End of explanation
log_preds = learn.TTA(is_test=True)[0]
Explanation: Previous testing below:
End of explanation
learn = ConvLearner.pretrained(ARCH, data, ps=0.5, precompute=False)
learn.load('RNx_224_all_50')
log_preds = learn.TTA(is_test=True)
Explanation:
End of explanation |
4,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Parameters With REBOUNDx
We start by creating a simulation, attaching REBOUNDx, and adding the effects of general relativity
Step1: The documentation page https
Step2: We would now sim.integrate as usual. If we want, we can access these values later (e.g., some effects could update these values as the simulation progresses). Here they don't
Step3: Details
For simples types (ints and floats), assigning variables to parameters makes a copy of the value. For example
Step4: If we now update speed, this will not be reflected in our 'c' parameter
Step5: More complicated objects are assigned as pointers. For example, adding REBOUNDx structures like forces works out of the box. As a simple example (with no meaning whatsoever)
Step6: Now if we update gr, the changes will be reflected in the 'force' parameter
Step7: If the parameter doesn't exist REBOUNDx will raise an exception, which we can catch and handle however we want
Step8: Adding Your Own Parameters
In order to go back and forth between Python and C, REBOUNDx keeps a list of registered parameter names with their corresponding types. This list is compiled from all the parameters used by the various forces and operators in REBOUNDx listed here
Step9: You can register the name permanently on the C side, but can also do it from Python. You must pass a name along with one of the C types
Step10: For example, say we want a double
Step11: Custom Parameters
You can also add your own more complicated custom types (for example from another library) straightfowardly, with a couple caveats. First, the object must be wrapped as a ctypes object in order to communicate with the REBOUNDx C library, e.g.
Step12: We also have to register it as a generic POINTER
Step13: Now when we get the parameter, REBOUNDx does not know how to cast it. You get a ctypes.c_void_p object back, which you have to manually cast to the Structure class we've created. See the ctypes library documentation for details | Python Code:
import rebound
import reboundx
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.)
ps = sim.particles
rebx = reboundx.Extras(sim)
gr = rebx.load_force('gr')
rebx.add_force(gr)
Explanation: Adding Parameters With REBOUNDx
We start by creating a simulation, attaching REBOUNDx, and adding the effects of general relativity:
End of explanation
ps[1].params['primary'] = 1
gr.params['c'] = 3.e8
Explanation: The documentation page https://reboundx.readthedocs.io/en/latest/effects.html lists the various required and optional parameters that need to be set for each effect in REBOUNDx. Adding these parameters to particles, forces and operators is easy. We do it through the params attribute:
End of explanation
sim.integrate(10.)
gr.params['c']
Explanation: We would now sim.integrate as usual. If we want, we can access these values later (e.g., some effects could update these values as the simulation progresses). Here they don't:
End of explanation
speed = 5
gr.params['c'] = speed
Explanation: Details
For simples types (ints and floats), assigning variables to parameters makes a copy of the value. For example:
End of explanation
speed = 10
gr.params['c']
Explanation: If we now update speed, this will not be reflected in our 'c' parameter:
End of explanation
ps[1].params['force'] = gr
Explanation: More complicated objects are assigned as pointers. For example, adding REBOUNDx structures like forces works out of the box. As a simple example (with no meaning whatsoever):
End of explanation
gr.params['c'] = 10
newgr = ps[1].params['force']
newgr.params['c']
Explanation: Now if we update gr, the changes will be reflected in the 'force' parameter:
End of explanation
try:
waterfrac = ps[1].params['waterfrac']
except:
print('No water on this planet')
Explanation: If the parameter doesn't exist, REBOUNDx will raise an exception, which we can catch and handle however we want
End of explanation
try:
gr.params['q'] = 7
except AttributeError as e:
print(e)
Explanation: Adding Your Own Parameters
In order to go back and forth between Python and C, REBOUNDx keeps a list of registered parameter names with their corresponding types. This list is compiled from all the parameters used by the various forces and operators in REBOUNDx listed here: https://reboundx.readthedocs.io/en/latest/effects.html.
If you try to add one that's not on the list, it will complain:
End of explanation
from reboundx.extras import REBX_C_PARAM_TYPES
REBX_C_PARAM_TYPES
Explanation: You can register the name permanently on the C side, but can also do it from Python. You must pass a name along with one of the C types:
End of explanation
rebx.register_param("q", "REBX_TYPE_DOUBLE")
gr.params['q'] = 7
gr.params['q']
Explanation: For example, say we want a double:
End of explanation
from ctypes import *
class SPH_sim(Structure):
_fields_ = [("dt", c_double),
("Nparticles", c_int)]
my_sph_sim = SPH_sim()
my_sph_sim.dt = 0.1
my_sph_sim.Nparticles = 10000
Explanation: Custom Parameters
You can also add your own more complicated custom types (for example from another library) straightforwardly, with a couple of caveats. First, the object must be wrapped as a ctypes object in order to communicate with the REBOUNDx C library, e.g.
End of explanation
rebx.register_param("sph", "REBX_TYPE_POINTER")
gr.params['sph'] = my_sph_sim
Explanation: We also have to register it as a generic POINTER:
End of explanation
mysph = gr.params['sph']
mysph = cast(mysph, POINTER(SPH_sim)).contents
mysph.dt
Explanation: Now when we get the parameter, REBOUNDx does not know how to cast it. You get a ctypes.c_void_p object back, which you have to manually cast to the Structure class we've created. See the ctypes library documentation for details:
End of explanation |
4,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis
In this assignment you will get acquainted with an approach that has been rediscovered in many different fields, has many different interpretations, and also has several interesting generalizations
Step1: Theory
Any dataset can be represented as a matrix $X$.
Principal component analysis sequentially finds the following linear combinations of the features (components) of $X$
Step2: By diagonalizing the true covariance matrix $C$, we can find a transformation of the original dataset whose components describe the variance best of all, subject to being orthogonal to each other
Step3: Now let us compare these directions with the directions chosen by principal component analysis
Step4: As we can see, even with a small amount of data they differ only slightly. Let us increase the sample size
Step5: In this case the principal components approximate much more accurately the true directions of the data along which the largest variance is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalized? With a probabilistic model!
The problem behind any dimensionality-reduction method
Step6: A variational view of the model
We know that each principal component corresponds to the variance of the data it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the variances of the data along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components that have the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced data, you can run an iterative process
Step7: Interpreting the principal components
The principal components we obtain are linear combinations of the original features, so the question of their interpretation naturally arises.
There are several approaches to this; we will consider two
Step8: Interpreting the principal components using the data
Let us now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component
Step9: Analysis of the main drawbacks of principal component analysis
The problems considered above are, of course, toy problems, because their data were generated in accordance with the assumptions of principal component analysis. In practice these assumptions are, naturally, far from always satisfied. Let us look at typical failure modes of PCA that should be kept in mind before applying it.
Directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions
Step10: What is the problem here, why does PCA work poorly? The answer is simple | Python Code:
import numpy as np
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
matplotlib.style.use('ggplot')
%matplotlib inline
Explanation: Principal Component Analysis
In this assignment you will get acquainted with an approach that has been rediscovered in many different fields, has many different interpretations, and also has several interesting generalizations: principal component analysis (PCA).
Programming assignment
The assignment is split into two parts:
- working with synthetic (model) data,
- working with real data.
At the end of each part you are required to obtain an answer and upload it to the corresponding form as a set of text files.
End of explanation
from sklearn.decomposition import PCA
mu = np.zeros(2)
C = np.array([[3,1],[1,2]])
data = np.random.multivariate_normal(mu, C, size=50)
plt.scatter(data[:,0], data[:,1])
plt.show()
Explanation: Theory
Any dataset can be represented as a matrix $X$.
Principal component analysis sequentially finds the following linear combinations of the features (components) of $X$:
- each component is orthogonal to all the others and normalized: $\langle w_i, w_j \rangle = 0, \quad ||w_i||=1$,
- each component describes the largest possible variance of the data (subject to the previous constraint).
The assumptions under which this approach works well:
- linearity of the components: we assume that the data can be analyzed by linear methods,
- large variances matter: we assume that the most important directions in the data are the ones along which it has the largest variance,
- all components are orthogonal: this assumption allows principal component analysis to be carried out with linear-algebra techniques (for example, the singular value decomposition of the matrix $X$ or the spectral decomposition of the matrix $X^TX$).
What does this look like mathematically?
Denote the (unnormalized) sample covariance matrix of the data by $\hat{C} \propto Q = X^TX$ ($Q$ differs from $\hat{C}$ only by normalization by the number of objects).
The spectral decomposition of the matrix $Q$ looks as follows:
$$Q = X^TX = W \Lambda W^T$$
One can show rigorously that the columns of the matrix $W$ are the principal components of the matrix $X$, i.e. combinations of the features satisfying the two conditions stated at the beginning. Moreover, the variance of the data along the direction given by each component equals the corresponding diagonal element of the matrix $\Lambda$.
How do we use this decomposition to reduce dimensionality? We can rank the components using the variances of the data along them.
Let us do so: $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$.
Then, if we pick the components corresponding to the first $d$ variances in this list, we obtain a set of $d$ new features that describe the variance of the original dataset best among all other possible linear combinations of the original features of the matrix $X$.
- If $d=D$, we lose no information at all.
- If $d<D$, we lose an amount of information which, when the assumptions above hold, is proportional to the sum of the variances of the discarded components.
Thus principal component analysis allows us to rank the obtained components by "importance" and to run a selection procedure on them.
Example
Consider a dataset sampled from a multivariate normal distribution with covariance matrix $C = \begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix}$.
End of explanation
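As an illustrative aside (not part of the original notebook), the claim that the columns of $W$ from the spectral decomposition are exactly the principal components can be checked numerically. The sketch below assumes the `data` sample generated above is still in scope; the names `Xc`, `W_hat` and `pca_check` are introduced here purely for illustration.
Xc = data - data.mean(axis=0)                       # PCA works with centered data
eigvals, W_hat = np.linalg.eigh(np.dot(Xc.T, Xc))   # spectral decomposition of Q = Xc^T Xc
order = np.argsort(eigvals)[::-1]                   # sort eigenvectors by decreasing variance
W_hat = W_hat[:, order]
pca_check = PCA(n_components=2).fit(data)
np.abs(np.dot(pca_check.components_, W_hat))        # expected to be close to the identity, up to sign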
v, W_true = np.linalg.eig(C)
plt.scatter(data[:,0], data[:,1])
# plot the true components, along which the variance of the data is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
g_patch = mpatches.Patch(color='g', label='True components')
plt.legend(handles=[g_patch])
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1])),
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
Explanation: By diagonalizing the true covariance matrix $C$, we can find a transformation of the original dataset whose components describe the variance best of all, subject to being orthogonal to each other:
End of explanation
def plot_principal_components(data, model, scatter=True, legend=True):
W_pca = model.components_
if scatter:
plt.scatter(data[:,0], data[:,1])
plt.plot(data[:,0], -(W_pca[0,0]/W_pca[0,1])*data[:,0], color="c")
plt.plot(data[:,0], -(W_pca[1,0]/W_pca[1,1])*data[:,0], color="c")
if legend:
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[c_patch], loc='lower right')
    # make the plots look nice:
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1]))-0.5,
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))+0.5]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
model = PCA(n_components=2)
model.fit(data)
plt.scatter(data[:,0], data[:,1])
# plot the true components, along which the variance of the data is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
# plot the components obtained with PCA:
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Now let us compare these directions with the directions chosen by principal component analysis:
End of explanation
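A quantitative version of this visual comparison (an illustrative addition, assuming `W_true` and the fitted `model` from the cells above are still in scope): the absolute cosines between the true eigenvectors and the fitted components should each be close to 1.
agreement = np.abs(np.dot(model.components_, W_true))  # |cosine| between fitted components (rows) and true eigenvectors (columns)
agreement                                              # each row should contain one value close to 1, up to ordering and sign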
data_large = np.random.multivariate_normal(mu, C, size=5000)
model = PCA(n_components=2)
model.fit(data_large)
plt.scatter(data_large[:,0], data_large[:,1], alpha=0.1)
# plot the true components, along which the variance of the data is maximal
plt.plot(data_large[:,0], (W_true[0,0]/W_true[0,1])*data_large[:,0], color="g")
plt.plot(data_large[:,0], (W_true[1,0]/W_true[1,1])*data_large[:,0], color="g")
# plot the components obtained with PCA:
plot_principal_components(data_large, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: As we can see, even with a small amount of data they differ only slightly. Let us increase the sample size:
End of explanation
from sklearn.decomposition import PCA
from sklearn.cross_validation import cross_val_score as cv_score
def plot_scores(d_scores):
n_components = np.arange(1,len(d_scores)+1)
plt.plot(n_components, d_scores, 'b', label='PCA scores')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('cv scores')
plt.legend(loc='lower right')
plt.show()
def write_answer_1(optimal_d):
with open("pca_answer1.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task1.csv')
d_scores = []
# place your code here
D = len(data.columns) + 1
for compon in range(1, D):
model = PCA(n_components=compon)
scores = cv_score(model, data)
d_scores.append(scores.mean())
plot_scores(np.array(d_scores))
Explanation: In this case the principal components approximate much more accurately the true directions of the data along which the largest variance is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalized? With a probabilistic model!
The problem behind any dimensionality-reduction method is to recover, from a set of noisy features $X$, the true values $Y$ that actually determine the dataset (i.e. to reduce a dataset with a large number of features to data having a so-called "effective dimensionality").
In the case of principal component analysis we want to find the directions along which the variance is maximal, subject to the assumptions about the structure of the data and the components described above.
The material below in this section is not required for completing the next task, since it assumes some knowledge of statistics.
For those who plan to skip it: at the end of the section we obtain a quality metric that measures how well the data are described by the fitted model for a given number of components. Feature selection then reduces to choosing the number of components for which this metric (the log-likelihood) is maximal.
Under these assumptions, the principal component analysis problem looks as follows:
$$ x = Wy + \mu + \epsilon$$
where:
- $x$ -- the observed data
- $W$ -- the matrix of principal components (each column is one component)
- $y$ -- the projection of the data onto the principal components
- $\mu$ -- the mean of the observed data
- $\epsilon \sim \mathcal{N}(0, \sigma^2I)$ -- Gaussian noise
From the noise distribution we can write down the distribution of $x$:
$$p(x \mid y) = \mathcal{N}(Wy + \mu, \sigma^2I) $$
Introduce a prior distribution on $y$:
$$p(y) = \mathcal{N}(0, I)$$
Using Bayes' rule, we derive from this the marginal distribution $p(x)$:
$$p(x) = \mathcal{N}(\mu, \sigma^2I + WW^T)$$
Then the likelihood of the dataset under this model looks as follows:
$$\mathcal{L} = \sum_{i=1}^N \log p(x_i) = -N/2 \Big( d\log(2\pi) + \log |C| + \text{tr}(C^{-1}S) \Big)$$
where:
- $C = \sigma^2I + WW^T$ -- the covariance matrix of the marginal model
- $S = \frac{1}{N} \sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T$ -- the sample covariance
The value of $\mathcal{L}$ is the log-probability of observing the data $X$ given that they satisfy the assumptions of the PCA model. The larger it is, the better the model describes the observed data.
Task 1. Automatic dimensionality reduction using the log-likelihood $\mathcal{L}$
Consider a dataset of dimensionality $D$ whose true dimensionality is much smaller than the observed one (call it $d$). You are asked to:
For every value of $\hat{d}$ in the range [1,D], fit a PCA model with $\hat{d}$ principal components.
Estimate the average log-likelihood of the data for each model on the general population using 3-fold cross-validation (the final log-likelihood estimate is averaged over the folds).
Find the model for which it is maximal and write the number of components of that model, i.e. the value $\hat{d}_{opt}$, to the answer file.
To estimate the log-likelihood of a model with a given number of principal components via cross-validation, use the following functions:
model = PCA(n_components=n)
scores = cv_score(model, data)
Note that scores is a vector whose length equals the number of folds. To obtain an estimate of the model likelihood, its values must be averaged.
To visualize the estimates you can use the function
plot_scores(d_scores)
which takes as input the vector of obtained log-likelihood estimates for each $\hat{d}$.
For those interested: the data for tasks 1 and 2 were generated according to the assumed PCA model. That is, data $Y$ of effective dimensionality $d$, drawn from independent uniform distributions, were linearly transformed by a random matrix $W$ into a space of dimensionality $D$, after which independent Gaussian noise with variance $\sigma$ was added to all features.
End of explanation
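As an illustrative aside (not part of the graded task), the average log-likelihood that cv_score reports can be reproduced by hand from the formula above. The sketch assumes the `data` frame loaded in the previous cell is still in scope; the choice of 3 components and the variable names are arbitrary, and sklearn's get_covariance() is used as the marginal covariance $C = \sigma^2I + WW^T$ of the fitted model.
X_arr = np.asarray(data)
ppca = PCA(n_components=3).fit(X_arr)              # 3 is an arbitrary illustrative choice
C_hat = ppca.get_covariance()                      # marginal covariance of the fitted probabilistic model
S_hat = np.cov(X_arr.T, bias=True)                 # sample covariance S (1/N normalization)
d = X_arr.shape[1]
sign, logdet = np.linalg.slogdet(C_hat)
avg_ll = -0.5 * (d*np.log(2*np.pi) + logdet + np.trace(np.linalg.solve(C_hat, S_hat)))
avg_ll, ppca.score(X_arr)                          # the two numbers should be close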
from sklearn.decomposition import PCA
from sklearn.cross_validation import cross_val_score as cv_score
def plot_variances(d_variances):
n_components = np.arange(1,d_variances.size+1)
plt.plot(n_components, d_variances, 'b', label='Component variances')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('variance')
plt.legend(loc='upper right')
plt.show()
def write_answer_2(optimal_d):
with open("pca_answer2.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task2.csv')
# place your code here
model = PCA(n_components=D)
model.fit(data)
X = model.transform(data)
disp = np.var(X, axis=0)
disp = np.sort(disp)[::-1]
print disp.shape, disp
# data.shape
# disp.sort()
# disp.sort()
dispdiff = np.append(disp[1:], [0])
print dispdiff
# model.explained_variance_ratio_
diff = (disp - dispdiff)[:-1]
# np.array([1,2,3,4])[:-1]
np.argmax(diff)
plot_variances(diff)
# diff
ans = np.argmax(diff) + 1
print ans
Explanation: A variational view of the model
We know that each principal component corresponds to the variance of the data it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the variances of the data along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components that have the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced data, you can run an iterative process: remove the components with the smallest variance one by one until the quality of the final model becomes noticeably worse.
A more general way of selecting features is to look at the differences of the variances in the sorted sequence $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$: $\lambda_{(1)}-\lambda_{(2)}, \dots, \lambda_{(D-1)} - \lambda_{(D)}$, and drop the components at which this difference is largest. It is exactly this method that you are asked to use on the test dataset.
Task 2. Manual dimensionality reduction by analyzing the variance of the data along the principal components
Consider another dataset of dimensionality $D$ whose true dimensionality is much smaller than the observed one (call it $d$ as well). You are asked to:
Fit a PCA model with $D$ principal components on these data.
Project the data onto the principal components.
Estimate their variance along the principal components.
Sort the variances in decreasing order and compute their pairwise differences: $\lambda_{(i-1)} - \lambda_{(i)}$.
Find the largest difference and use it to obtain an estimate of the effective dimensionality of the data $\hat{d}$.
Plot the variances and make sure the obtained estimate $\hat{d}_{opt}$ really makes sense; after that, write the obtained value $\hat{d}_{opt}$ to the answer file.
To fit the PCA model use the function:
model.fit(data)
To transform the data use the method:
model.transform(data)
You are required to implement the variance estimation on the transformed data yourself. To make the plots you can use the function
plot_variances(d_variances)
which takes as input the vector of variances along the components, sorted in decreasing order.
End of explanation
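An illustrative, more compact version of the same computation (assuming the `data` frame from the previous cell is still in scope; `full_model` and `d_hat` are throwaway names introduced here):
full_model = PCA().fit(data)                                            # keep all components
variances = np.sort(np.var(full_model.transform(data), axis=0))[::-1]   # lambda_(1) >= lambda_(2) >= ...
gaps = -np.diff(variances)                                              # lambda_(i) - lambda_(i+1)
d_hat = np.argmax(gaps) + 1                                             # estimated effective dimensionality
d_hat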
from sklearn import datasets
def plot_iris(transformed_data, target, target_names):
plt.figure()
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(transformed_data[target == i, 0],
transformed_data[target == i, 1], c=c, label=target_name)
plt.legend()
plt.show()
def write_answer_3(list_pc1, list_pc2):
with open("pca_answer3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc1]))
fout.write(" ")
fout.write(" ".join([str(num) for num in list_pc2]))
# load the iris dataset
iris = datasets.load_iris()
data = iris.data
target = iris.target
target_names = iris.target_names
# place your code here
# load the iris dataset
iris = datasets.load_iris()
data = iris.data
target = iris.target
target_names = iris.target_names
# place your code here
model = PCA(n_components=4)
X = model.fit_transform(data)
Xtwo = X[:,:2]
from scipy.stats import pearsonr
print pearsonr(data[:,3],Xtwo[:,0])
corr1 = [1, 3, 4]
corr2 = [2]
print corr1, corr2
write_answer_3(corr1, corr2)
plot_iris(Xtwo, target, target_names)
Explanation: Interpreting the principal components
The principal components we obtain are linear combinations of the original features, so the question of their interpretation naturally arises.
There are several approaches to this; we will consider two:
- compute the relationships between the principal components and the original features
- compute the contribution of each individual observation to the principal components
The first approach is suitable when the objects in the dataset carry no semantic information for us beyond what is already captured in the feature set.
The second approach is suitable when the data have a more complex structure. For example, faces carry more semantic meaning for a human than the vector of pixel values that PCA analyzes.
Let us look at approach 1 in more detail: it consists in computing correlation coefficients between the original features and the set of principal components.
Since principal component analysis is a linear method, it is natural to use the Pearson correlation for this analysis; its sample version is given by the following formula:
$$r_{jk} = \frac{\sum_{i=1}^N (x_{ij} - \bar{x}_j)(y_{ik} - \bar{y}_k)}{\sqrt{\sum_{i=1}^N (x_{ij} - \bar{x}_j)^2}\,\sqrt{\sum_{i=1}^N (y_{ik} - \bar{y}_k)^2}} $$
where:
- $\bar{x}_j$ -- the mean value of the j-th feature,
- $\bar{y}_k$ -- the mean value of the projection onto the k-th principal component.
The Pearson correlation is a measure of linear dependence. It equals 0 when the quantities are independent and $\pm 1$ when they are linearly dependent. Based on how strongly a new component correlates with the original features, one can build its semantic interpretation, since we know the meaning of the original features.
Task 3. Analyzing the principal components via correlations with the original features.
Fit principal component analysis on the iris dataset and obtain the transformed data.
Compute the correlations of the original features with their projections onto the first two principal components.
For each feature, find the component (out of the two constructed) with which it correlates most strongly.
Based on step 3, group the features by component. Make two lists: a list of the indices of the features that correlate more strongly with the first component, and the same kind of list for the second. Start the numbering from one. Pass both lists to the function write_answer_3.
The dataset consists of 4 features measured for 150 irises. Each of them belongs to one of three species. A visualization of the projection of this dataset onto the two components that describe the largest variance of the data can be obtained with the function
plot_iris(transformed_data, target, target_names)
which takes as input the data transformed by PCA as well as the class information. The color of the points corresponds to one of the three iris species.
To get the names of the original features, use the following list:
iris.feature_names
When computing the correlations, do not forget to center the features and the projections onto the principal components (subtract their means).
End of explanation
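For illustration only (this is not the graded solution), the full table of Pearson correlations between the original iris features and the first two components can be computed directly. The sketch assumes `data` and `Xtwo` from the cell above and uses np.corrcoef, which centers the variables internally; `corr_table` and `closest_component` are names introduced here.
corr_table = np.zeros((data.shape[1], 2))
for j in range(data.shape[1]):
    for k in range(2):
        corr_table[j, k] = np.corrcoef(data[:, j], Xtwo[:, k])[0, 1]
closest_component = np.argmax(np.abs(corr_table), axis=1) + 1   # 1-based index of the component each feature follows
corr_table, closest_component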
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import RandomizedPCA
def write_answer_4(list_pc):
with open("pca_answer4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc]))
data = fetch_olivetti_faces(shuffle=True, random_state=0).data
image_shape = (64, 64)
model = RandomizedPCA(n_components=10)
X = model.fit_transform(data)
# fmean = X[:,0].mean(axis=0)
fmean = X.mean(axis=0)
# fmean
# plt.imshow(data[3].reshape(image_shape))
disp = []
for i in range(len(X)):
divis = np.sum((X[i] - fmean)**2)
cosin = (X[i] - fmean)**2 / divis
disp.append(cosin)
disp = np.array(disp)
ans = []
for i in range(10):
ans.append(np.argmax(disp[:,i]))
print ans
write_answer_4(ans)
Explanation: Interpreting the principal components using the data
Let us now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component:
$$ cos^2_{ik} = \frac{(f_{ik} - \bar{f}_{k})^2}{\sum_{k=1}^d (f_{ik} - \bar{f}_{k})^2} $$
where
- i -- the index of the object
- k -- the index of the principal component
- $f_{ik}$ -- the absolute value of the projection of the object onto the component
It is clear that
$$ \sum_{k=1}^d cos^2_{ik} = 1 $$
This means that for each object this quantity gives weights proportional to the contribution that the object makes to the variance of each component. The larger the contribution, the more important the object is for describing that particular principal component.
Task 4. Analyzing the principal components via the contributions of individual objects to their variance
Load the Olivetti Faces dataset and fit a RandomizedPCA model on it (it is used when the number of features is large and works faster than ordinary PCA). Obtain the projections of the features onto the first 10 principal components.
For each object, compute its relative contribution to the variance of each of the 10 components using the formula from the previous section (d = 10).
For each component, find and visualize the face that makes the largest relative contribution to it. For visualization use the function
plt.imshow(image.reshape(image_shape))
Pass to the function write_answer_4 the list of the indices of the faces with the largest relative contribution to the variance of each component; the list is 0-based.
End of explanation
C1 = np.array([[10,0],[0,0.5]])
phi = np.pi/3
C2 = np.dot(C1, np.array([[np.cos(phi), np.sin(phi)],
[-np.sin(phi),np.cos(phi)]]))
data = np.vstack([np.random.multivariate_normal(mu, C1, size=50),
np.random.multivariate_normal(mu, C2, size=50)])
plt.scatter(data[:,0], data[:,1])
# plot the true components we are interested in
plt.plot(data[:,0], np.zeros(data[:,0].size), color="g")
plt.plot(data[:,0], 3**0.5*data[:,0], color="g")
# fit the PCA model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Analysis of the main drawbacks of principal component analysis
The problems considered above are, of course, toy problems, because their data were generated in accordance with the assumptions of principal component analysis. In practice these assumptions are, naturally, far from always satisfied. Let us look at typical failure modes of PCA that should be kept in mind before applying it.
Directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions:
End of explanation
C = np.array([[0.5,0],[0,10]])
mu1 = np.array([-2,0])
mu2 = np.array([2,0])
data = np.vstack([np.random.multivariate_normal(mu1, C, size=50),
np.random.multivariate_normal(mu2, C, size=50)])
plt.scatter(data[:,0], data[:,1])
# fit the PCA model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model)
plt.draw()
Explanation: What is the problem here, why does PCA work poorly? The answer is simple: the components of the data we are interested in are correlated with each other (or non-orthogonal, depending on which terminology you prefer). Finding such transformations requires more sophisticated methods, which go beyond principal component analysis.
For those interested: what can be applied directly to the output of principal component analysis to obtain such non-orthogonal transformations is called rotation methods. You can read about them in connection with another dimensionality-reduction method called Factor Analysis (FA), but nothing prevents applying them to principal components as well.
The interesting direction in the data does not coincide with the direction of maximal variance
Consider an example where the variances do not reflect the directions in the data we are interested in:
End of explanation |
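To make this last failure mode concrete (an illustrative addition, assuming `data` and the fitted `model` from the final cell, where by construction the first 50 rows come from the first cluster and the last 50 from the second): the high-variance first component barely separates the two clusters, while the low-variance second component does.
proj = model.transform(data)                       # projections onto the two principal components
labels = np.array([0]*50 + [1]*50)                 # cluster membership by construction of `data`
[abs(proj[labels == 0, k].mean() - proj[labels == 1, k].mean()) for k in range(2)]  # small for PC1, large for PC2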